- it requires a relatively small code change which fits there conceptually
- having both in the same place makes it easier to find and understand
- this is the only way in which we can get any debugging information if full-result truncation happens
However, this has some disadvantages:
- the max size needs to be passed to the tool through the special DataFlowConfig job options object, which is online-specific
- the limit can only be compared to the summed size of all serialised EDM collections; this is not quite the same as the final serialised output size, which also includes header bits, stream tags and status words, and is rearranged into a contiguous memory chunk
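To illustrate the second disadvantage, here is a minimal sketch of the approximate check described above; the names (`exceeds_limit`, `collection_sizes`) are illustrative only and do not correspond to the actual ATHENA/TDAQ API:

```python
def exceeds_limit(collection_sizes, max_result_size):
    """Return True if the summed serialised payload exceeds the limit.

    Note: the real serialised output is larger than this sum, since it
    also contains header bits, stream tags and status words and is
    rearranged into a contiguous memory chunk, so this comparison is
    only an approximation of the final result size.
    """
    return sum(collection_sizes) > max_result_size

# Example: three collections totalling 9 MB against an 8 MB limit
mb = 1024 ** 2
sizes_bytes = [4 * mb, 3 * mb, 2 * mb]
print(exceeds_limit(sizes_bytes, 8 * mb))  # True
```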
In any case, the parameter passed from TDAQ actually corresponds to the size of the event buffer block on disk, which is the available space for the full event including detector data. Thus, the limit on HLT result size has to be some fraction of that. I added a property to set the fraction, with a default value of 0.8.
The block size at P1 is of the order of 100 MB, so full-result truncation is extremely unlikely to happen (especially given the per-module truncation limits), but such protection still has to be in place and is implemented here. The default value of the max size in athenaHLT (the --hltresult-size command-line option) is currently 10 MB, which gives a result size limit of 8 MB.
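The arithmetic behind the limit can be sketched as follows; `result_size_limit` is a hypothetical helper name, not the actual property or code in this change:

```python
def result_size_limit(block_size_bytes, fraction=0.8):
    """Derive the HLT result size limit as a fraction of the event
    buffer block size passed from TDAQ (default fraction: 0.8)."""
    return int(fraction * block_size_bytes)

# athenaHLT default of --hltresult-size 10 (MB) gives an 8 MB limit
mb = 1024 ** 2
print(result_size_limit(10 * mb) / mb)  # 8.0
```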
Tested by running:
athenaHLT.py --hltresult-size 1 -l DEBUG -o output -n 20 -f /cvmfs/atlas-nightlies.cern.ch/repo/data/data-art/TrigP1Test/data17_13TeV.00327265.physics_EnhancedBias.merge.RAW._lb0100._SFO-1._0001.1 TrigUpgradeTest/full_menu.py
where truncation on total size > 1 MB happens on event #10 and is handled correctly.