Rates reported for the 16-streams configuration are a factor of 10 too low?
I wonder whether the rates reported by the rate tests for the 16-streams configuration, e.g. https://lhcbpr-hlt.web.cern.ch/UpgradeRateTest/BandwidthTest_lhcb-master.2034_Moore_hlt2_bandwidth_x86_64_v3-centos7-gcc11+detdesc-opt+g_2023-05-29_06:01:23_+0200/rates_streaming.html, are a factor of 10 too low.
I ask because the rate scripts are run over 1e4 events: https://gitlab.cern.ch/lhcb-datapkg/PRConfig/-/blob/master/scripts/benchmark-scripts/Moore_hlt2_bandwidth.sh#L26 while the script that calculates the rates assumes 1e5 events by default: https://gitlab.cern.ch/lhcb-datapkg/PRConfig/-/blob/master/python/MooreTests/line-and-stream-rates.py#L314
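To make the mismatch concrete, here is a minimal sketch of the usual trigger-rate formula (the names, the 1 MHz input rate, and the pass count are illustrative assumptions, not the actual identifiers or numbers from `line-and-stream-rates.py`):

```python
# Hypothetical sketch of the rate calculation: rate = (events passing a
# line / total events processed) * input rate. Names and numbers are
# illustrative, not taken from line-and-stream-rates.py.
INPUT_RATE_KHZ = 1000.0  # assumed input rate to HLT2, in kHz

def rate_khz(n_passed, n_total):
    """Trigger rate: pass fraction scaled by the input rate."""
    return n_passed / n_total * INPUT_RATE_KHZ

n_passed = 500
# The job actually processed 1e4 events, but the script defaults to 1e5:
wrong = rate_khz(n_passed, int(1e5))  # 5.0 kHz
right = rate_khz(n_passed, int(1e4))  # 50.0 kHz
print(right / wrong)                  # 10.0 -> the factor of 10
```

Since the denominator enters linearly, assuming 1e5 events when only 1e4 were processed underestimates every rate by exactly a factor of 10.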
Of course, it's easy to just add a `-n=1e4` option to the `line-and-stream-rates.py` call, but I wonder whether it would be more useful for the script to determine how many events it actually ran over. If the input file has fewer events than assumed, the rates will be underestimated. This is what happens, e.g., when one has to use the `upgrade_minbias_hlt1_filtered` minbias file: it only has 14k events, so I have to specify that number manually when running over it, otherwise the rates are massively underestimated.
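The suggestion above could be sketched roughly as follows; the function name and the idea of a measured counter are hypothetical assumptions, not existing PRConfig code:

```python
# Hypothetical sketch: prefer the number of events the job actually
# processed (e.g. a counter written out by the job) over the -n default,
# and warn when the two disagree. Names are illustrative only.
def effective_n_events(n_from_option, n_counted):
    """Return the measured event count if available, else the -n value."""
    if n_counted is not None and n_counted != n_from_option:
        print(f"warning: -n={n_from_option} but the input held "
              f"{n_counted} events; using {n_counted}")
        return n_counted
    return n_from_option

# The upgrade_minbias_hlt1_filtered case: default 1e5, but only 14k events.
print(effective_n_events(int(1e5), 14000))
```

With such a fallback, the 14k-event minbias file would be handled automatically instead of requiring a manual override.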