Hi DAW,
Testing in DF19.0 - probably the same in DF18.2, but we haven't tested there.
There may be an optimisation issue with Binary Large Object (BLOB) in DataFlex.
We have a table with a BLOB column whose values are relatively small (around 10MB), but FYI BLOBs can be up to 2GB.
We have optimised the BLOB columns within DB2: they have their own table space and containers, with file system caching turned on, and their own buffer pool too - in essence much like MSSQL's FILESTREAM option for BLOBs, or arguably better.
We have also set the DataFlex DB2 driver parameter JIT_TRESHOLD = 1, so that columns larger than 1MB are not retrieved by the standard data-fetching mechanism, but only when needed.
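For reference, this is the line we added to the driver configuration file (the file name varies per installation, and we're quoting the keyword exactly as the driver expects it; no comment lines shown since the surrounding syntax differs between setups):

```
JIT_TRESHOLD 1
```

With this in place, the expectation is that the BLOB column is skipped during normal row fetches and only pulled just-in-time when its value is actually requested.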
We have noticed this mainly in WebApps.
Scenario
1. We have a cWebList displaying columns of a table that has a BLOB column.
2. Some rows have BLOB data and some have NULL BLOB data.
3. The columns displayed are standard columns - in fact the webapp doesn't even touch the BLOB column.
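A minimal sketch of the setup (class names are from the DataFlex Web Framework; the table and column names are hypothetical, and note the BLOB column is deliberately not bound anywhere):

```
Object oDocList is a cWebList
    Set piHeight to 400
    Set pbFillHeight to False

    Object oDocName is a cWebColumn
        Entry_Item Docs.Name
        Set psCaption to "Name"
        Set piWidth to 200
    End_Object

    Object oDocCreated is a cWebColumn
        Entry_Item Docs.Created
        Set psCaption to "Created"
        Set piWidth to 100
    End_Object

    // Docs.Contents (the BLOB column) is intentionally never referenced here
End_Object
```

Even with only the two plain columns bound, row navigation slows down on rows that have BLOB data.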
The issue we found
Moving from row to row can be up to 3 times slower when moving from a row without BLOB data to a row with BLOB data.
Question:
Just wondering: is the DataDictionary doing a get_field_value to store the UCValue of the BLOB data on every OnNewCurrentRecord (even though it may not be needed yet)?
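One way we could try to confirm this ourselves is to instrument the table's DataDictionary subclass and log each record change, then compare timings for rows with and without BLOB data. A sketch (the exact event signature may differ between DataFlex versions, so treat this as illustrative):

```
// In the Docs table's DataDictionary subclass (names hypothetical):
// log each current-record change so slow moves can be correlated
// with rows that carry BLOB data.
Procedure OnNewCurrentRecord RowID riOldRowId RowID riNewRowId
    Forward Send OnNewCurrentRecord riOldRowId riNewRowId
    // write a timestamped line to a log file here, then diff the
    // timestamps between rows that have BLOB data and rows that don't
End_Procedure
```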
Thanks.