Arrays & Structs in-depth Part IV
18-Sep-2009 at 08:00 AM
In Part III we discovered that arrays work great together with struct types, and that native arrays are clearly much easier to use than the old Array class, but what about performance? A simple test shows that basic use of native arrays (reading and writing elements) compares very favorably against the old-school Array class. In fact, native arrays are a little faster. Your mileage may vary depending on the data type as well as hardware, but the results below are a good approximation:
The first thing to note here is that both reading and writing array elements are very fast; most likely so much faster than any real work you would do with each value that getting and setting the array values themselves is pure noise, completely irrelevant in a real program. As can be seen, over one thousand elements can be added to a native array variable in less than one millisecond.
You can also see that the native array type appears to be faster, especially at writing elements, although in most cases the difference is so small that you'll never notice it. As you can see, even adding 10,000 elements differs by only a couple of milliseconds.
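As a rough illustration, a benchmark along these lines could be sketched as follows (the object and procedure names here are my own, not from the original test; timings would be taken around each loop):

```dataflex
// Old-style Array object used by the second test (hypothetical name).
Object oOldArray is an Array
End_Object

// Native array: write, then read back, 10,000 elements.
Procedure TestNativeArray
    Integer[] aValues
    Integer i iValue
    For i from 0 to 9999
        Move i to aValues[i]                   // write element
    Loop
    For i from 0 to 9999
        Move aValues[i] to iValue              // read element
    Loop
End_Procedure

// Old Array class: the same work via Get/Set Value messages.
Procedure TestArrayClass
    Integer i iValue
    For i from 0 to 9999
        Set Value of oOldArray item i to i     // write element
    Loop
    For i from 0 to 9999
        Get Value of oOldArray item i to iValue  // read element
    Loop
End_Procedure
```

The extra message dispatch per element in the Array class version is a plausible reason why the native syntax comes out slightly ahead.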
OK, so +1 for native arrays so far. What about the claim that you can pass native arrays around as parameters without performance implications?
In this test we're passing an array as a parameter and simply reading one array element from the parameter, over 1000 (actually 1001) iterations. As you can see, the size of the array makes no difference to the performance of the function call. Thus we can conclude that the copy-on-write optimization really works when passing arrays as parameters. (In the chart above the difference between runs is within 3 microseconds, that is, 3 thousandths of a millisecond, well within the expected error margin, which is also why that micro difference appears random between test runs. There is in effect absolutely no difference with respect to the size of the array, exactly as expected.)
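A sketch of what such a read-only test might look like (names are illustrative assumptions, not the article's actual code):

```dataflex
// Reading from the parameter never modifies it, so copy-on-write
// lets the runtime share the array internally -- no copy is made,
// regardless of how large the array is.
Function ReadFirstElement Integer[] aValues Returns Integer
    Function_Return aValues[0]
End_Function

Procedure RunReadTest
    Integer[] aBig
    Integer i iValue
    Move 0 to aBig[9999]     // assigning past the end grows the array to 10,000 elements
    For i from 0 to 1000     // 1001 iterations, as in the test above
        Get ReadFirstElement aBig to iValue
    Loop
End_Procedure
```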
Because it's copy-on-write, modifying the array passed as a parameter should produce a dramatic drop in performance, exactly what one would expect without the optimization.
This test is based on the previous one, but also modifies the local array parameter, again over one thousand iterations. As we can see, when modifying the local array variable the performance drops considerably with the size of the array, just as expected. (The last bar should actually be off the chart: the value was 3500 ms in my test, but I cut the chart off at 2000 ms to keep it readable; the effect is obvious anyway.) Again we can conclude that the copy-on-write optimization works as it should: if you modify the array, a copy is performed so as not to disturb the original variable, just as with standard VDF parameter passing.
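The variation that triggers the copy could look like this (again, the names are my own; the single write to the parameter is what forces the runtime to take a private copy of the whole array on every call):

```dataflex
// Writing to the parameter breaks the internal sharing: the runtime
// must copy the entire array before the assignment, once per call,
// so the cost now grows with the size of the array.
Function ModifyFirstElement Integer[] aValues Returns Integer
    Move 42 to aValues[0]    // this write triggers the copy
    Function_Return aValues[0]
End_Function

Procedure RunModifyTest
    Integer[] aBig
    Integer i iValue
    Move 0 to aBig[9999]     // 10,000 elements
    For i from 0 to 999      // one thousand calls, one copy each
        Get ModifyFirstElement aBig to iValue
    Loop
End_Procedure
```

Note that the copy is per call, not per write: once the local parameter owns its private copy, further writes inside the same call are cheap.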
Note also that unless you have hundreds of thousands of elements, or call methods and pass the array around thousands of times, the performance implication is mostly noise and likely irrelevant. Even the most extreme case above, passing an array of 10,000 elements and modifying it (incurring a copy operation) in a thousand iterations, only takes a little over 3 seconds. The great thing about the internal copy-on-write optimization is that it's completely transparent: you never have to compromise on flexibility, and for performance-critical uses it gives you all the performance you need.
In summary, native arrays are not only easier to use than the old Array class, they're also as fast or even faster than the old Array class in most situations. Passing arrays around as parameters and return values is also very fast thanks to the built-in copy-on-write optimization. And when the copy-on-write optimization cannot be utilized and a copy operation is incurred, it rarely is a performance concern anyway.
There is, however, a situation where you may trip the copy operation over and over when accessing native arrays, effectively defeating the copy-on-write optimization and suffering poor performance. We'll examine that in detail, why it's easy to fall into that trap, and how you can avoid it, in Part V.