Development Team Blog

Arrays & Structs in-depth - Conclusion

To wrap up this multi-part series about arrays and structs I'll highlight a few key areas and answer some questions that came up.

Part I starts by showing the basic use of structs and arrays, how they can be used together and as parameter & return types.

Part II discusses how sorting and searching works with native arrays.

Part III examines multi-dimensional arrays in more detail, and why it's often easier and better to use a struct instead.

Part IV discusses the performance of native arrays, especially compared to the old Array class, showing that native arrays are often not only easier to use but also faster.

Part V examines how you can get in trouble with performance when using large arrays and trying to tweak the native array type to behave more like the old school Array class.

Part VI reveals how to resolve those performance problems by using the native array type as it wants to be used, rather than bending it backwards to mimic the old Array class, which may require breaking your old way of thinking about arrays.

Searching & Sorting with Custom Comparison Functions
In any topic as wide as this one there's always a lot to cover, and sometimes important aspects are hidden in a few power-packed sentences. In Part II the differences between BinarySearchArray and SearchArray are briefly discussed together with custom comparison functions. Remember that in both cases you can always reuse the same comparison function as with sorting. It's a common misconception that the comparison function for SearchArray must return EQ/NE and cannot be the same function used with SortArray. That's wrong: you can use the same comparison function as for sorting regardless of whether you use SearchArray or BinarySearchArray, so neither has any advantage in that respect alone.

The key point is that SearchArray has the additional flexibility of accepting a different comparison function that can return NE instead of LT/GT. This lets you do things you cannot do with BinarySearchArray, such as comparing RowID values, which can only be compared for equality and inequality. With BinarySearchArray, by contrast, you're forced to use the exact same comparison function that was used when the array was sorted. SearchArray also has the advantage that you can search for an element by criteria not used when sorting, and of course that you can search without sorting at all. For example, you can keep the array sorted by last name followed by first name, yet search only by first name using SearchArray. That cannot be done with BinarySearchArray, which requires that the search criteria be exactly the same as the sort criteria.

All in all this means that SearchArray is much easier to use, more flexible, and allows you to search for items in ways you cannot with BinarySearchArray. The only advantage of BinarySearchArray is that it's faster when performing many searches over lots of elements without modifying the array. While that kind of optimization can be important, the performance gain is usually overestimated, so it's generally better to stick with SearchArray and only use BinarySearchArray when you have measured and proven that it gives better performance.
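To make this concrete, here's a sketch of the two kinds of comparison functions (the struct and member names are assumptions for illustration, chosen to match the examples below). The first returns LT/GT/EQ and can be reused unchanged for SortArray, BinarySearchArray and SearchArray; the second can only report equality, which is fine for SearchArray but useless for BinarySearchArray:
Code:
// Full ordering: works with SortArray, BinarySearchArray and SearchArray
Function CompareCustomerName tCustomerInfo Cust1 tCustomerInfo Cust2 Returns Integer
    If (Cust1.sLastName < Cust2.sLastName) Function_Return (LT)
    If (Cust1.sLastName > Cust2.sLastName) Function_Return (GT)
    If (Cust1.sFirstName < Cust2.sFirstName) Function_Return (LT)
    If (Cust1.sFirstName > Cust2.sFirstName) Function_Return (GT)
    Function_Return (EQ)
End_Function

// Equality only: RowIDs have no ordering, so this works with SearchArray only
Function CompareCustomerRowId tCustomerInfo Cust1 tCustomerInfo Cust2 Returns Integer
    If (IsSameRowID(Cust1.riRowId, Cust2.riRowId)) Function_Return (EQ)
    Function_Return (NE)
End_Function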

Another tricky detail when using SearchArray or BinarySearchArray to find an element of a struct type is that you must pass a struct variable to match against, not a literal. For example, you can't do:
Code:
Move (SearchArray("Joe",myArrayOfStructs,self,get_CompareCustomerFirstName)) to iFoundIndex
You have to do:
Code:
tCustomerInfo myTestStruct
Move "Joe" to myTestStruct.sFirstName
Move (SearchArray(myTestStruct,myArrayOfStructs,self,get_CompareCustomerFirstName)) to iFoundIndex

Copy-on-write Optimization
It can be a little tricky to wrap your head around copy-on-write. One way to think of it is that you basically have two ways to pass parameters in VDF: by-value and by-reference. The default behavior is by-value, which always makes a complete copy of the source variable for the destination method. In addition, return values are always by-value and always copied when passed back to the calling method.

By-reference essentially lets you reference the original variable directly, just under a different name. Any change to a by-reference parameter immediately affects the original variable, since it really is the original variable being accessed. One side effect of by-reference is that it can be faster if the original variable holds a lot of data that would otherwise have to be copied. But that's really just a side effect.
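As a quick sketch of by-reference (the procedure and variable names are made up for this example), the ByRef parameter writes straight through to the caller's variable, which is passed with a leading &:
Code:
Procedure AppendExclamation String ByRef sText
    Move (sText + "!") to sText    // modifies the caller's variable directly
End_Procedure

String sGreeting
Move "Hello" to sGreeting
Send AppendExclamation (&sGreeting)   // sGreeting is now "Hello!"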

When passing array and struct parameters they follow exactly the same rules as any other native VDF type. By default it's by-value, and you can optionally specify by-reference. In theory by-value can be rather inefficient, since arrays are often quite large and copying the entire array variable each time can be a performance issue. Therefore struct and array types automatically utilize an internal copy-on-write optimization, so that passing an array or struct variable merely passes a shallow reference rather than making a complete copy. The twist is that if you try to modify the passed array, a copy is made at that point. Basically, it's a "deferred copy": it's truly by-value, but the copy operation is deferred until it's absolutely necessary.

A key advantage of the copy-on-write optimization is that it also implicitly applies to return values. This means that not only can you pass a large array as a parameter without a performance penalty, you can also return a large array without one. In fact, that's why you can Get an array property cheaply: you can Get it into a local variable and examine as many elements as you like, and it's only when you modify the local array variable that you incur the copy operation.
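As a sketch (the property and struct names are assumptions for this example), the sequence below shows exactly where the copy happens:
Code:
tCustomerInfo[] myCustomers
String sName
Get pCustomerArray to myCustomers        // shallow reference only, no copy
Move myCustomers[5].sFirstName to sName  // reading elements: still no copy
Move "Jane" to myCustomers[5].sFirstName // first modification triggers the full copy
Set pCustomerArray to myCustomers        // store the modified array back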

Grouping Changes into Fewer Transactions
Frank mentions that in a real-world example with a treeview control, he's using TVN_GETDISPINFO to color tree nodes. He realized that populating the underlying array with the node color for every node in the tree all at once was too slow, because it requires lots of expensive database operations. It's also unnecessary work to determine which color a particular tree node should have if it hasn't even been expanded and can't be seen by the user anyway. So the solution is to calculate the color for only those nodes that are visible, progressively adding information as the user expands nodes.

So far this sounds great; it's now much better. However, it can still get very slow if you expand a lot of items, because it's essentially the same example as outlined in Part V. To illustrate: when expanding a tree node, the treeview essentially fires TVN_GETDISPINFO for each of the child nodes. I'm sure there's a lot more to it, but for the purpose of this discussion that's probably close enough. You can easily picture that as a loop like this in pseudo code:

Code:
For each child node in expandedNode
    Send TVN_GETDISPINFO
Loop

Procedure TVN_GETDISPINFO
    Get Array Property
    Modify array
    Set Array Property
End_Procedure
We can clearly see how this is essentially the same as outlined in Part V with the access methods, yielding that same quadratic performance.

Now, I don't know if the following is applicable in Frank's case, as I don't know all the details, and I also have to restrain myself from turning this into a specific support incident; that would have to be taken offline. Nonetheless, I figured this was such a great real-world example that it's worth mentioning at least a variation of this type of problem.

As was mentioned in Part VI, the real trick to get the best performance is:
...to move up one abstraction level to find the actual loop and push the code up.
It can sometimes be quite tricky to figure out how to do just that, and this case is a perfect example. We can see how the loop is centered around expanding the parent node, so based on that we know that if we can push the code up we may be able to improve the performance. As luck would have it, it turns out that the treeview control has an event TVN_ITEMEXPANDING that might allow us to push the code up above the loop.

So basically a potential solution would be to use TVN_ITEMEXPANDING to group the modification of the array elements corresponding to the child nodes into one transaction. You can then still use TVN_GETDISPINFO to color the actual tree nodes if you like, as there's no performance penalty for accessing the property as long as you do not modify the array. And since the modifications can now be pushed up to TVN_ITEMEXPANDING and grouped together, you may be able to get something more like linear performance. In pseudo code it would look something like this:
Code:
Procedure TVN_ITEMEXPANDING Handle expandedNode
    Get Array Property to localArray
    For each child node in expandedNode
        Move xyz to localArray[iIndex].xyz
    Loop
    Set Array Property to localArray
End_Procedure
If you expand multiple nodes at the same time you may still have multiple transactions, but they will be far fewer than before, which may just push the performance enough. One could then imagine trying to take advantage of grouping the expansion of multiple nodes as well, but that may not be necessary and I'd leave that as an exercise for the very brave readers.

Finally, any time you're looking at performance improvements it's very important to profile the whole operation to find the bottleneck. It's far too easy to make hasty assumptions about performance issues. If you can take a piece of code that takes 100ms and squeeze it down to 50ms, it may sound like a huge win. But in the bigger context of a user operation that takes something like 10 seconds, squeezing it down to 9.95 seconds isn't nearly as impressive; nobody will even notice the difference. While grouping transactions can have a very significant impact on performance, remember that unless you're working with really large arrays it's unlikely to be of any real significance relative to other work such as finding database records, network traffic and modifying records.

That concludes this series on structs and arrays.
