FYI @lugrazet
- [ ] this discussion - hopefully remove a slightly-hidden copy
- [x] this discussion - reduce boilerplate/code duplication in html - Much cleaner html handling now from !384 (merged)
- [ ] this - just make the html files you need, not copy them?
- [x] this - over-specification - Much cleaner html handling now from !384 (merged)
- [ ] this discussion - misleading multiple-processes argument
- [ ] Trim down text in the bandwidth pages if we can - Much cleaner html handling now from !384 (merged) - Always more to be done on this front so I won't tick the box 😉
Some makeover-related comments are being addressed in !384 (merged); more in !393 (merged).
- [ ] csv files for download for `all_lines_per_production_stream` and `all_lines_per_production_stream_per_wg`? - No, the per-WG pages have titles/headers etc.; all the same info is there in principle in the all-lines csv
- [ ] "Rate/bandwidth tables for each stream, split also by WG, with 1 row per trigger line" and "Rate/bandwidth tables for each stream, with 1 row per trigger line" are essentially duplicated for the sprucing test
- [ ] Consistent function naming scheme in `combine_rate_output.py`
- [ ] Can we dynamically grab the stream-assignment regexes from the sprucing production files in Moore, rather than having to duplicate them and keep them aligned? (sketch below)
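A minimal sketch of that last item, assuming Moore exposes the stream definitions as an importable `{stream: regex}` mapping; the module path `Hlt2Conf.sprucing_settings` and the attribute `stream_regexes` below are hypothetical placeholders, not Moore's actual layout:

```python
# Sketch: import the stream-assignment regexes from Moore instead of
# duplicating them here. The module path and attribute name are made-up
# stand-ins for wherever the sprucing production really defines its streams.
import importlib
import re


def load_stream_regexes(module_path="Hlt2Conf.sprucing_settings",
                        attr="stream_regexes"):
    """Fetch and compile the {stream_name: regex_string} dict from Moore."""
    module = importlib.import_module(module_path)
    return {stream: re.compile(pattern)
            for stream, pattern in getattr(module, attr).items()}


def stream_for_line(line_name, stream_regexes):
    """Return the first stream whose regex matches the line name, else None."""
    for stream, regex in stream_regexes.items():
        if regex.match(line_name):
            return stream
    return None
```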
Wish list
- [ ] Do the per-line comparison by bandwidth (rather than, or in addition to, rate)
- [ ] The comparison should catch cases where a line was removed, not only added (first sketch below)
- [ ] Spruce comparison - lots of small/confusing changes whenever we have upstream HLT2 changes. I think this is rounding effects. It would be good to remove this somehow, though I'm not sure we can, short of computing the sprucing rates as HLT1 output rate * (spruce accepted / n_in_to_hlt2) (second sketch below)
- [ ] Extra words on the comparison page to tell people how to read it, particularly if there are multiple MRs contributing or the reference hasn't been updated since the last merge
- [ ] We don't currently see per-line changes due to persistence, as the rate doesn't change; this would be alleviated by comparing as a function of bandwidth
- [ ] Check success based on more than just error codes - #16 (closed) shows that the software can sometimes fail yet still terminate "successfully" with a 0 exit code, and it would be good to catch this (third sketch below)
- [ ] The comparison page shouldn't report NaNs - show "N/A" or "-" instead (last sketch below)
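On the removed-lines point, a minimal set-difference sketch, assuming each test yields a `{line_name: rate}` mapping (all names are illustrative):

```python
def diff_lines(reference, current):
    """Lines added to and removed from `current` w.r.t. `reference`.

    Both arguments are {line_name: rate} dicts; only the keys matter here.
    """
    added = sorted(set(current) - set(reference))
    removed = sorted(set(reference) - set(current))
    return added, removed
```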
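On the rounding point, a worked sketch of the proposed alternative computation, assuming we have the HLT1 output rate and the raw sprucing counters to hand (the argument names are placeholders):

```python
def sprucing_rate(hlt1_output_rate_khz, n_spruce_accepted, n_in_to_hlt2):
    """Sprucing rate as HLT1 output rate * (spruce accepted / n_in_to_hlt2).

    Working from raw event counts in one step, rather than from an
    already-rounded intermediate HLT2 rate, should avoid the small
    spurious per-line changes seen whenever upstream HLT2 changes.
    """
    return hlt1_output_rate_khz * n_spruce_accepted / n_in_to_hlt2
```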
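On success-checking, a minimal sketch combining the exit code with a scan of the job log for known failure signatures; the patterns and the log-path handling are illustrative only, and the real list would be tuned to the messages actually seen in the failed-but-exit-0 jobs from #16:

```python
import re

# Illustrative failure signatures only -- to be tuned to what actually
# appears in logs of jobs that fail but still exit 0 (cf. #16).
FAILURE_PATTERNS = [
    re.compile(r"\bFATAL\b"),
    re.compile(r"Traceback \(most recent call last\)"),
    re.compile(r"segmentation violation", re.IGNORECASE),
]


def job_succeeded(exit_code, log_path):
    """True only if the exit code is 0 AND no failure signature is in the log."""
    if exit_code != 0:
        return False
    with open(log_path, errors="replace") as log_file:
        return not any(pattern.search(line)
                       for line in log_file
                       for pattern in FAILURE_PATTERNS)
```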
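And on the NaN point: if the comparison tables are rendered with pandas, `DataFrame.to_html` already takes an `na_rep` argument, so no manual string surgery should be needed (the DataFrame here is made up):

```python
import numpy as np
import pandas as pd

table = pd.DataFrame({"rate_change_khz": [0.1, np.nan]},
                     index=["Line_A", "Line_B"])
html = table.to_html(na_rep="N/A")  # NaN cells render as "N/A"
```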