Improve the handling of big files (GB scale)
The existing unpacker can be configured to read only the first N events, but it still copies and byte-swaps the entire input file into memory. Turn it into a "streamer" that reads events in successive batches of a configurable size and maintains state (e.g. the current file offset) between reads.
What is the expected correct behavior?
Memory usage is bounded by the batch size rather than growing linearly with the size of the input file.
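The batching described above could be sketched roughly as follows. This is a minimal illustration, not the project's actual API: the class name `EventStreamer`, the fixed event size, and the 32-bit word byte-swap are all assumptions made for the example. The key point is that the open file handle carries the state between calls, so only one batch is resident in memory at a time.

```python
import struct


class EventStreamer:
    """Hypothetical sketch of a batch-wise reader: the file offset is the
    state kept between read_batch() calls, so memory use is bounded by
    the batch size, not the file size."""

    def __init__(self, path, event_size, swap=True):
        self._f = open(path, "rb")           # open once, read incrementally
        self._event_size = event_size        # assumed fixed-size events
        self._swap = swap                    # byte-swap 32-bit words if set

    def read_batch(self, n_events):
        """Read up to n_events events; an empty list signals end of file."""
        raw = self._f.read(n_events * self._event_size)
        usable = len(raw) - len(raw) % self._event_size  # drop a trailing partial event
        events = [raw[i:i + self._event_size]
                  for i in range(0, usable, self._event_size)]
        if self._swap:
            events = [self._byteswap32(e) for e in events]
        return events

    @staticmethod
    def _byteswap32(buf):
        """Swap endianness of each 32-bit word in the event payload."""
        n = len(buf) // 4
        words = struct.unpack(f"<{n}I", buf[:n * 4])
        return struct.pack(f">{n}I", *words)

    def close(self):
        self._f.close()
```

A caller would then loop over `read_batch()` until it returns an empty list, processing each batch before requesting the next, so peak memory stays at one batch regardless of the total file size.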