The command generates a flamegraph in your _target_ folder that maps the number of ACIR opcodes to their corresponding locations in your program's source code.
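For reference, an invocation of the profiler's opcodes command may look like the following (flag names and paths are illustrative and may differ across profiler versions):

```sh
# Generate an ACIR opcode flamegraph from a compiled artifact
noir-profiler opcodes --artifact-path ./target/program.json --output ./target
```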
Opening the flamegraph in a web browser will provide a more interactive experience, allowing you to click into different regions of the graph and examine them.
Flamegraph of the demonstrative project generated with Nargo v1.0.0-beta.2:
The demonstrative project consists of 387 ACIR opcodes in total. From the flamegraph, we can see that the majority come from the write to `array[i]`.
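As a hypothetical illustration (not the project's actual source), a dynamic-index write of this kind looks like:

```rust
fn main(mut array: [Field; 10], i: u32, value: Field) {
    // `i` is a witness input, so this write uses a dynamic index and
    // compiles to ACIR memory opcodes rather than plain arithmetic.
    array[i] = value;
}
```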
With insight into our program's bottleneck, let's optimize it.
Instead of writing our array in a fully constrained context, we first write our array inside an unconstrained function. Then, we assert every value in the array returned from the unconstrained function in a constrained context.
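A minimal sketch of this pattern, with hypothetical names and logic (the real project's array contents will differ):

```rust
// Runs as Brillig: building the array here adds no ACIR opcodes by itself.
unconstrained fn build_array(input: [Field; 10]) -> [Field; 10] {
    let mut out = [0; 10];
    for i in 0..10 {
        out[i] = input[i] + 1;
    }
    out
}

fn main(input: [Field; 10]) {
    // Safety: every returned value is constrained by the assertions below.
    let result = unsafe { build_array(input) };
    // The loop bounds are compile-time constants, so these checks compile
    // to simple arithmetic constraints instead of memory operations.
    for i in 0..10 {
        assert(result[i] == input[i] + 1);
    }
}
```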
This brings the ACIR opcode count of our program down to a total of 284 opcodes:
Check "Matched" in the bottom right corner to learn the percentage of the total that your search matches.
If you search for `memory::op` before and after the optimization, you will find that the search no longer has any matches afterwards.
This comes from the optimization removing the use of a dynamic array, that is, an array accessed at a dynamic index whose value depends on witness inputs. After the optimization, the program reads from two arrays at known constant indices, replacing the original memory operations with simple arithmetic operations.
:::
For example, we can find a 13.9% match for `new_array` in the flamegraph above.
In contrast, if we profile the pre-optimization demonstrative project:
You will notice that it does not contain `new_array` and executes a total of 1,582 Brillig opcodes (versus 2,125 Brillig opcodes post-optimization).
Since new unconstrained functions were added, it is reasonable that the program now contains more Brillig opcodes. That said, the tradeoff is often easily justified: proving speed, rather than execution speed, is more commonly the major bottleneck of Noir programs.