DF19 w/MySQL: it’s time to rewrite an old CSV import process that must analyze and bring in 40 columns by 100,000 rows of data in one go every day. Writing the records is not straightforward: each row requires a fair amount of analysis, sorting and searching before it can be committed, including looking up and comparing against on-file records and values.
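To make that concrete, here is roughly the shape of one row as a struct. The field names below are placeholders for illustration only; the real file has 40 columns:

// Hypothetical row layout -- these names stand in for
// whatever the actual 40 fields are.
Struct tImportRow
    String  sCustomerId
    String  sOrderNo
    Date    dOrderDate
    Number  nAmount
    String  sStatus
    // ...the remaining ~35 columns would follow the same pattern
End_Struct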

I understand there are limits to 32-bit clients, and we will move to DF2022 as fast as possible. Our initial approach is to rewrite this using struct arrays, but I’m curious whether we will be bumping up against memory limits, and I’m wondering if we should consider using temporary files instead. Performance is essential.
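For reference, here is a minimal sketch of the struct-array direction, assuming the placeholder tImportRow struct above, a comma-delimited file, and made-up column positions; LoadImportFile is a hypothetical name. My back-of-the-envelope sizing is in the comments:

// Minimal load-into-memory sketch (assumptions: tImportRow as above,
// comma-delimited input, dates in a format the runtime converts).
Procedure LoadImportFile Global String sFileName
    tImportRow[] aRows
    String[] asFields
    String sLine
    Integer iCount

    Direct_Input channel 1 sFileName
    While (not(SeqEof))
        Readln channel 1 sLine
        If (sLine <> "") Begin
            Move (StrSplitToArray(sLine, ",")) to asFields
            // Assigning at SizeOfArray appends to the dynamic array.
            Move (SizeOfArray(aRows)) to iCount
            Move asFields[0] to aRows[iCount].sCustomerId
            Move asFields[1] to aRows[iCount].sOrderNo
            Move asFields[2] to aRows[iCount].dOrderDate
            Move asFields[3] to aRows[iCount].nAmount
            Move asFields[4] to aRows[iCount].sStatus
        End
    Loop
    Close_Input channel 1

    // Rough sizing: 100,000 rows x 40 fields = 4,000,000 values.
    // At ~25-50 bytes per value plus per-string overhead, that is
    // roughly 100-400 MB resident -- under the 2 GB 32-bit ceiling
    // on paper, but address-space fragmentation can hurt sooner.
End_Procedure

The per-row sorting, searching and comparisons against on-file records would then run over aRows before anything is committed.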


I’ve looked at the docs and read lots of posts, and I’m not seeing any clear-cut information that would help make a quality decision here. I’d appreciate hearing your experiences: the choices you made, and the pros and cons you weighed, when faced with similar circumstances. Would struct arrays or temporary files serve best?