# LHCb merge requests
Source: https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests

## [Streamline LumiFSRtoTTree](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/4312)
*Gerhard Raven, 2023-10-30*

- more paranoid dynamic_cast
- keep count per run from the start instead of keeping all individual counts and summing at the end
- avoid std::list
- remove unnecessary includes
- make explicitly thread-safe

## [split track containers and persistency (versioning)](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/4127)
*Maarten Van Veghel, 2023-09-22*

See https://gitlab.cern.ch/lhcb/Moore/-/merge_requests/2338
Related to https://gitlab.cern.ch/lhcb/LHCb/-/issues/312
In total, goes with https://gitlab.cern.ch/lhcb/Rec/-/merge_requests/3423, https://gitlab.cern.ch/lhcb/Moore/-/merge_requests/2338, https://gitlab.cern.ch/lhcb/Alignment/-/merge_requests/394, https://gitlab.cern.ch/lhcb/DaVinci/-/merge_requests/926, https://gitlab.cern.ch/lhcb/MooreAnalysis/-/merge_requests/126, https://gitlab.cern.ch/lhcb/MooreOnline/-/merge_requests/268
Cherry picks from https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/4191
Milestone: RTA/2023.07.31

## [Fix detection and propagation of FSRs from input files](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/4138)
*Marco Clemencic, 2023-08-04*

Instead of trying to open the input file (which fails when dealing with LFNs) we use, if available, the new incident introduced with gaudi/Gaudi!1456.
I also added a new incident to notify possible listeners that an FSR has been read from the input file.
Note: Needs Gaudi v36r13
/cc @cburr @egraveri
Milestone: RTA/2023.06.1

## [Run3 File Summary Record](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/3796)
*Marco Clemencic, 2023-08-04*

This MR introduces a new File Summary Record implementation for Run3.
A few key aspects of the implementation are:
- leverage on the new Gaudi::Monitoring infrastructure
- simple (JSON) persistent format recorded in the ROOT file as a simple string
These features guarantee that:
- it is trivial to add a new entry to the FSR
- extracting the FSR from a ROOT file is trivial:
```py
import json
import ROOT
f = ROOT.TFile.Open("my_file.root")
fsr = json.loads(str(f.FileSummaryRecord))
```
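As an illustration of where this could go, here is a purely hypothetical sketch of merging two such FSR dictionaries (merging is listed below as remaining work, so the real policy may well differ): numeric counters are summed and nested mappings are merged recursively.

```python
def merge_fsr(a, b):
    """Hypothetical FSR merge: sum numeric counters, recurse into
    nested mappings, and keep any key present in only one input.
    (Illustrative only; not the actual LHCb merge implementation.)"""
    merged = dict(a)
    for key, value in b.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = merge_fsr(merged[key], value)
        elif key in merged and isinstance(merged[key], (int, float)):
            merged[key] = merged[key] + value
        else:
            merged[key] = value
    return merged
```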
At the moment there is still much work to do (reading, merging, more frameworks info to record, ...), but it's a reasonable proof of concept.
Requires gaudi/Gaudi!1379 gaudi/Gaudi!1389 lhcb/LHCb!3830
Note: that lhcb/LHCb!3830 is needed only because without it LHCb does not compile if gaudi/Gaudi!1389 is applied
References:
- https://its.cern.ch/jira/browse/LBCOMP-58
- https://indico.cern.ch/event/1086171/?note=176068
- https://indico.cern.ch/event/1110473/

## [Add UnpackDstDataBank which unpacks the entire DstData raw bank](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/4149)
*Gerhard Raven, 2023-08-03. Milestone: RTA/2023.07.04*

Add UnpackDstDataBank, an algorithm that 'just' unpacks everything in the DstData raw bank and repopulates the event store. It does _not_ provide data handles that could be used during configuration, as the data it produces is fully determined by the content of the DstData bank it encounters: while fully deterministic, it is not a priori predictable without inspecting the data. It can, however, be useful in GaudiPython scripts that want to explore the event store and, e.g., validate the content of the DstData raw bank.

## [Merging for sprucing output](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/4165)
*Nicole Skidmore, 2023-06-26*

Script to merge Sprucing output in production. Needed for the Sprucing handshake, see https://gitlab.cern.ch/lhcb-dpa/project/-/issues/258.

## [Fsr and lumi counting](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/4064)
*Nicole Skidmore, 2023-05-01*

Changes:
- 'input_process' with the type 'InputProcessTypes' has been promoted to Moore LbExec options (from DV) and takes over 'process'
- InputProcessTypes 'Turbo' -> 'TurboPass' so the stream and the process are not mixed up
- Added FSR writer if `options.input_process == InputProcessTypes.Hlt2`
For https://gitlab.cern.ch/lhcb/Moore/-/merge_requests/2153.
Work towards DPA task https://gitlab.cern.ch/lhcb-dpa/project/-/issues/26.

## [Add routing bits filter to control flow](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/3952)
*Nicole Skidmore, 2023-02-25*

Towards Sprucing commissioning on Run 3 data
Addition of RoutingBits filter
To go with https://gitlab.cern.ch/lhcb/Moore/-/merge_requests/2052

## [Add Options.apply_binds to allow applications to bind before calling user code](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/3797)
*Chris Burr, 2022-10-07*

In DaVinci a few cases have come up where code ends up looking like:
```python
@configurable
def my_thing(process="MyDefault"):
...
```
Users then use this thing by running:
```python
my_thing(process=options.process)
```
This gets a bit annoying when code is nested, with multiple layers of
```python
def my_func(process="MyDefault"):
return my_other_func(process=process)
def my_other_func(process="MyDefault"):
return my_thing(process=process)
```
A slightly better option is to make `process` a keyword only argument and bind:
```python
@configurable
def my_thing(*, process):
...
def my_func():
return my_other_func()
def my_other_func():
return my_thing()
# Usage looks like
with my_thing.bind(process=options.process):
my_func()
```
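To make the bind mechanics concrete, here is a toy, self-contained sketch of a `configurable`/`bind` pair that mirrors the usage above (purely illustrative; the real decorator lives in PyConf and its implementation differs):

```python
import contextlib
import functools


def configurable(func):
    # Toy stand-in for the real @configurable decorator: keyword
    # overrides installed via bind() are merged into every call.
    overrides = {}

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **{**overrides, **kwargs})

    @contextlib.contextmanager
    def bind(**kwargs):
        # Apply the overrides only for the duration of the with-block.
        previous = dict(overrides)
        overrides.update(kwargs)
        try:
            yield
        finally:
            overrides.clear()
            overrides.update(previous)

    wrapper.bind = bind
    return wrapper


@configurable
def my_thing(*, process="MyDefault"):
    return process


def my_func():
    return my_thing()


assert my_func() == "MyDefault"
with my_thing.bind(process="Spruce"):
    assert my_func() == "Spruce"
assert my_func() == "MyDefault"  # the override is scoped to the block
```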
This MR expands on this slightly by allowing the application-specific `Options` class to automatically apply binds, so that user code doesn't need to call bind itself. See https://gitlab.cern.ch/lhcb/DaVinci/-/merge_requests/731 for an example of this being used.

## [Add LbExec implementation](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/3470)
*Chris Burr, 2022-07-15*

This merge request adds the `lbexec` command proposed in #198 and the associated `Options` class. After `Moore` and `DaVinci` are adapted I'll make a follow-up merge request to remove the `ApplicationOptions` class and start cleaning up/adapting the tests.
This turned into quite a lot of code when compared to [the prototype](https://gitlab.cern.ch/lhcb/LHCb/-/compare/master...cburr%2Flbexec-prototype); however, the majority of it is test code and error handling. In particular it tries to give people a more helpful message when the application is already doomed to fail (expand below for examples).
### `lbexec --help`
```bash
usage: lbexec [-h] [--dry-run] [--export EXPORT] function options [extra_args ...]
positional arguments:
function Function to call with the options that will return the configuration. Given in the form 'my_module:function_name'.
  options           YAML data to populate the Application.Options object with. Multiple files can be merged using 'file1.yaml+file2.yaml'.
extra_args
optional arguments:
-h, --help show this help message and exit
--dry-run Do not run the application, just generate the configuration.
--export EXPORT Write a file containing the full options (use "-" for stdout)
```
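The `'file1.yaml+file2.yaml'` convention from the help text could be sketched as follows (hypothetical `merge_option_files` helper; lbexec's actual merge semantics may differ, e.g. in how later files override earlier keys):

```python
def merge_option_files(spec, load=None):
    """Sketch of merging '+'-joined YAML option files: each path is
    loaded as a mapping and later files override earlier keys.
    (Assumption: illustrative only, not lbexec's real implementation.)"""
    if load is None:
        import yaml  # PyYAML; only needed for the default loader

        def load(path):
            with open(path) as f:
                return yaml.safe_load(f)

    merged = {}
    for path in spec.split("+"):
        merged.update(load(path))
    return merged
```

Passing a custom `load` callable makes the merge logic easy to exercise without touching the filesystem.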
### Error handling examples
#### Wrong specification of the function to run
<p>
<details>
<summary>Click to view</summary>
![Screenshot_2022-03-22_at_16.30.40](/uploads/12c9dfcba4d1412277c92d8c7b26f1f6/Screenshot_2022-03-22_at_16.30.40.png)
</details>
</p>
#### Bad contents of the `options.yaml` file(s)
<p>
<details>
<summary>Click to view</summary>
![Screenshot_2022-03-22_at_16.43.54](/uploads/5308388080c26ff89344b29dcfe5111e/Screenshot_2022-03-22_at_16.43.54.png)
</details>
</p>
#### Tracebacks don't include `lbexec`'s implementation details
<p>
<details>
<summary>Click to view</summary>
![Screenshot_2022-03-22_at_16.34.10](/uploads/ca03ff9724d7543ff05f8c67cd528d0d/Screenshot_2022-03-22_at_16.34.10.png)
</details>
</p>
Related to DPA tasks https://gitlab.cern.ch/lhcb-dpa/analysis-productions/LbAnalysisProductions/-/issues/69 and https://gitlab.cern.ch/lhcb-dpa/analysis-productions/LbAnalysisProductions/-/issues/68.

## [Add missing properties to LbExec.Options for LHCbDIRAC](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/3608)
*Chris Burr, 2022-06-13*

These properties are set by LHCbDIRAC when using ProdConf and should also be available for Run 3.
Soon I'll add tests to make sure these properties are applied correctly but these will be in merge requests to DaVinci and Moore (I've already tested locally).
Required for https://gitlab.cern.ch/lhcb-dpa/analysis-productions/LbAnalysisProductions/-/issues/69 and https://gitlab.cern.ch/lhcb-dpa/analysis-productions/LbAnalysisProductions/-/issues/68.

## [New persistency model for packing](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/3268)
*Sevda Esen, 2022-05-02*

A significant re-write of the persistency framework.
Main changes in C++ are:
- New packers take unpacked objects and produce HltPackedDataBuffers (MC packers unchanged, old packers fully removed from configuration)
- New HltPackedBufferWriter takes a list of DataBuffers and writes out the raw banks/event (This replaces HltPackedDataWriter)
- New HltPackedBufferDecoder takes rawbanks/event and produces HltPackedDataBuffers to the requested locations
- New unpackers take HltPackedDataBuffers and produce unpacked objects (MC unpackers unchanged, old unpackers are still used for old dsts.)
- Un/PackParticlesAndVertices algorithms are removed, so objects in the TES are not automatically packed/written/decoded/unpacked.
Main changes in configuration:
- Packers/Unpackers/Decoder/Writer are all functional, so all input and output locations need to be given [they are given as DataObjectRead(Write)Handles; see https://gitlab.cern.ch/lhcb/LHCb/-/issues/180 for an explanation].
- For packing/writing, locations are picked up from reco objects list and requested-line dependencies. For each location, object type in that location is also registered to ANNSvc json file. Only known object types in PyConf/object_types are packed.
- For decoding/unpacking, we don't know the data dependencies at the moment. So we take all registered packed locations in requested stream. They are checked against registered object types to decide which unpacker to use.
- PersistRecoConf is extended to all object types, not only reconstruction objects. More than one location can be given for a type; the reconstruction_objects list should be modified to benefit from this. Moore#248
To go with Moore!1085 MooreAnalysis!69 DaVinci!634 lhcb-datapkg/PRConfig!213
Related issues: LHCb#151 Moore#354 lhcb-dpa/project#120

## [For event size analytics](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/3438)
*Nicole Skidmore, 2022-03-07*

To go with https://gitlab.cern.ch/lhcb/Moore/-/merge_requests/1356
## Motivation
Want event size analysis of Spruced events **per** line. See https://gitlab.cern.ch/lhcb-dpa/project/-/issues/90
## How
Exploit the fact that each stream gets its own instance of `CombineRawBankViewsToRawEvent`.
Add `Gaudi::Accumulators::StatCounter` to `CombineRawBankViewsToRawEvent` to give table of the form
```
EvtSize_<line name> INFO Number of counters : X
| Counter | # | sum | mean/eff^* | rms/err^* | min | max |
| "DstData bank size (bytes)" | X | Y | YY | YYY | YYYY | YYYYY |
| "Event size (bytes)" | X | Z | ZZ | ZZZ | ZZZZ | ZZZZZ |
```
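The mean/eff and rms/err columns in such a table can be derived from the counter's accumulated count, sum and sum of squares. A minimal Python sketch of this bookkeeping (illustrative only; the real `Gaudi::Accumulators::StatCounter` is a C++ class and differs in detail):

```python
import math


class StatCounter:
    """Toy stat counter: accumulate n, sum and sum of squares,
    and derive mean and rms from them on demand."""

    def __init__(self):
        self.n = 0
        self.sum = 0.0
        self.sum2 = 0.0

    def add(self, value):
        self.n += 1
        self.sum += value
        self.sum2 += value * value

    @property
    def mean(self):
        return self.sum / self.n

    @property
    def rms(self):
        # rms of the distribution: sqrt(<x^2> - <x>^2)
        return math.sqrt(self.sum2 / self.n - self.mean ** 2)
```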
Note that by running `CombineRawBankViewsToRawEvent` in VERBOSE mode (achieved through `CombineRawBankViewsToRawEvent.bind(OutputLevel=1)`) one also gets a log of the individual sizes of ALL RawBanks in `banks`.

## [reading.py changes to allow MC object reading for sprucing](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/3395)
*Nicole Skidmore, 2022-02-21*

* Trivial configuration of `OutputLevel` in helpers
* Adding `annsvc_name` parameter to `decoder` - needed for simplification of `Hlt/RecoConf/python/RecoConf/reco_objects_for_spruce.py` in https://gitlab.cern.ch/lhcb/Moore/-/merge_requests/1299
To go with https://gitlab.cern.ch/lhcb/Moore/-/merge_requests/1299.

## [Use rawbank views for sprucing](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/3303)
*Nicole Skidmore, 2022-01-27*

To go with https://gitlab.cern.ch/lhcb/DaVinci/-/merge_requests/618, https://gitlab.cern.ch/lhcb/Moore/-/merge_requests/1137, https://gitlab.cern.ch/lhcb/MooreAnalysis/-/merge_requests/65
Make Sprucing use RawBank::View to access persistreco object rawbanks from previous passes.
## Motivation
This is a move towards the extensive use of "views", which will allow the Sprucing AND HLT2 to customise which rawbanks get persisted in a NEW RawEvent, without mutilating the input RawEvent by using e.g. BankKillers.
**Sprucing will now have a new RawEvent `"Event/{stream}"` where "stream"=default if none given and will not rely on the `"DAQ/RawEvent"`**
This functionality does not yet exist for HLT2 either: customising which raw banks are output only appears to work when you use Brunel MC input with `input_raw_format = 4.3`. Running HLT1->HLT2 with `input_raw_format = 0.3` means everything is in `/DAQ/RawEvent` and you cannot customise which rawbanks (per detector, for instance) are persisted.
https://gitlab.cern.ch/lhcb-dpa/project/-/issues/124
## What's changed
### `Hlt/HltDAQ/src/component/HltPackedDataDecoder.cpp`
* Hardcoded to take View from "/Event/DAQ/RawBanks/DstData" achieved through `LHCb__UnpackRawEvent` - this is temporary!
### `Hlt/HltDAQ/src/component/HltPackedDataWriter.cpp`
* Now uses `MergingTransformer` such that a "OutputRawEvent" and "OutputView" are accessible
* Had to change the way 2 errors are reported
* Had to remove `counter( containerPath ) += objectSize;` as could not get `counter` to work - just a DEBUG?
### `DAQ/DAQUtils/src/CombineRawBankViewsToRawEvent.cpp`
* Combines vector of RawBank::View and returns new RawEvent. For sprucing this is the new `DstData`, new Sprucing `HltDecReports` and any raw banks from previous passes that are requested to be persisted. For passthrough a new `DstData` bank is not created and the one from the HLT2 step is persisted.
### `GaudiConf/python/GaudiConf/reading.py`
Complete overhaul using `process` which is
- 'Turbo' (HLT2 that has been through Sprucing passthrough),
- 'Spruce'
- 'Hlt2' - at some point 'Hlt2' will not be required as an option downstream as all streams will go through Sprucing so it would be 'Turbo'.
Driving the reading based on `process` removes complications of raw event locations for analysts. The `stream` argument must also be handed to these functions as 'Turbo' and 'Spruce' RawEvents are located at `"Event/{stream}"`.
* Added `unpack_rawevent` function which calls `LHCb__UnpackRawEvent` to unpack RawEvent to Views. Location of RawEvent is handled by `process` argument. By default only "DstData" and "HltDecReports" banks are unpacked which is sufficient for quick file reading
* `unpackers` is now driven by `process` argument which removes complications of TES ROOT for analysts
* Although the `DstData` view is hardcoded now in `HltPackedDataDecoder.cpp` (which is temporary), `HltPackedDataDecoder.cpp` still needs access to the `HltDecReports` in the RawEvent. For Sprucing/passthrough this is `"Event/{stream}"` but for HLT2 remains as `"DAQ/RawEvent"`. This is handled behind the scenes through the `process` argument in `decoder`
* Added a `hlt2_decisions` function driven by `process` argument
* Added a `spruce_decisions` function that handles the fact that the sprucing `HltDecReports` is now in `"Event/{stream}"`
* `mc_unpackers` will throw an error if `process!=Hlt2`, as it is not ready for sprucing yet.
### `Hlt/HltDAQ/src/component/HltDecReportsWriter.cpp`
* Now gives both a "OutputRawEvent" and a *new* "OutputView"
* Note the check "// delete any previously inserted dec reports" has been removed as a result
### `DAQ/DAQUtils/src/BackwardsCompatibleMergeViewIntoRawEvent.cpp`
* Needed to copy a vector of RawBank::View into an existing RawEvent, i.e. `"DAQ/RawEvent"` (see comment on `reading.py`). Temporary measure needed if Writers only output Views.
cc @graven, @sesen
One can now use the following to inspect a DST, utilising the improved functions in `reading.py`:
<details><summary>Click to expand</summary>
```python
##Usage, inside Moore/
#./run python -i DSTexplore.py -input xxx.[dst, mdf] -tck yyy.tck.json -process process -stream stream
import sys, GaudiPython as GP
from GaudiConf import IOExtension
from GaudiConf.reading import (decoder, unpackers, unpack_rawevent,
                               hlt2_decisions, spruce_decisions)
from Configurables import ApplicationMgr, LHCbApp

LHCb = GP.gbl.LHCb

import argparse

# Argument parser
parser = argparse.ArgumentParser(
    usage="./run python -i %(prog)s xxx.[dst, mdf] yyy.tck.json process stream",
    description='Inspect Moore output')
parser.add_argument('-input', type=str, help='Input MDF or DST')
parser.add_argument('-tck', type=str, help='.tck.json file from the job')
parser.add_argument(
    '-process',
    type=str,
    help=
    'process can be Spruce or Turbo (or Hlt2). Note Hlt2 option will be removed as all streams will go through Sprucing'
)
parser.add_argument(
    '-stream',
    type=str,
    nargs='?',
    default="default",
    help='Stream to test, default if no value given')
args = parser.parse_args()


# Helper function to select events with given line decision
def advance_HLT(decision):
    """Return event with given line decision."""
    while True:
        appMgr.run(1)
        if not evt['/Event']:
            sys.exit("Did not find positive {0} decision".format(decision))
        if args.process == 'Hlt2':
            loc = '/Event/Hlt/DecReports'
        else:
            loc = '/Event/' + args.stream
        reports = evt[loc]
        report = reports.decReport('{0}Decision'.format(decision))
        if report.decision() == 1:
            break
    return


##Helper function for returning routing bits
def routing_bits():
    """Return a list with the 96 routing bit values."""
    if args.process == 'Hlt2':
        loc = '/Event/DAQ/RawEvent'
    else:
        loc = '/Event/' + args.stream
    rawevent = evt[loc]
    rbbanks = rawevent.banks(LHCb.RawBank.HltRoutingBits)
    on_bits = []
    for bank in range(0, len(rbbanks)):
        d = rbbanks[bank].data()
        bits = "{:032b}{:032b}{:032b}".format(d[2], d[1], d[0])
        ordered = list(map(int, reversed(bits)))
        for i in range(0, len(ordered)):
            if ordered[i] == 1:
                on_bits.append(str(i))
    return on_bits


# Change the tags if required
LHCbApp(
    DataType="Upgrade",
    Simulation=True,
    DDDBtag="dddb-20201211",
    CondDBtag="sim-20201218-vc-md100",
)

assert args.process in ("Spruce", "Turbo", "Hlt2"), 'process is Turbo or Spruce (or Hlt2). Note Hlt2 option will be removed as all streams will go through Sprucing'

# Setup HltANNSvc
if args.process == 'Hlt2':
    from Moore.tcks import load_hlt2_configuration as load_tck
else:
    from Moore.tcks import load_sprucing_configuration as load_tck
ann = load_tck(args.tck, annsvc_name="HltANNSvc")

inputFiles = [args.input]
IOExtension().inputFiles(inputFiles, clear=True)

# Unpack the raw event to give RawBank::Views.
# Unpacks 'DstData' and 'HltDecReports' to RawBank::Views by default,
# which are input to `decoder` (which calls `HltPackedDataDecoder`)
# and `HltDecReportsDecoder`
algs = [
    unpack_rawevent(process=args.process, stream=args.stream, OutputLevel=4)
]
# Decoder uses HltANNSvc to convert integers back to the packed TES locations according to tck
algs += [decoder(process=args.process, stream=args.stream, OutputLevel=4)]
# HltDecReports decoder uses HltANNSvc to decode integers back to the line decision names according to tck
algs += [
    hlt2_decisions(process=args.process, stream=args.stream, OutputLevel=4)
]
if args.process == 'Spruce' or args.process == 'Turbo':
    algs += [
        spruce_decisions(
            process=args.process, stream=args.stream, OutputLevel=4)
    ]
# Unpack TES locations
algs += unpackers(process=args.process, OutputLevel=4)

ApplicationMgr(TopAlg=algs)
appMgr = GP.AppMgr()
evt = appMgr.evtsvc()
appMgr.run(1)
evt.dump()
```
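The bit manipulation in `routing_bits()` can be isolated into a standalone helper (same decoding, but returning integer bit positions rather than strings): the three 32-bit words are concatenated with word 2 most significant, and set bits are reported counting from bit 0.

```python
def decode_routing_bits(words):
    """Return the positions of the set bits among 96 routing bits,
    given the three 32-bit words [d0, d1, d2] of a HltRoutingBits bank.
    (Standalone version of the decoding used in routing_bits() above.)"""
    bits = "{:032b}{:032b}{:032b}".format(words[2], words[1], words[0])
    ordered = list(map(int, reversed(bits)))
    return [i for i, bit in enumerate(ordered) if bit == 1]
```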
</details>

## [Add a selection id for Sprucing](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/3168)
*Shunan Zhang, 2021-08-06*

Add a `SpruceSelectionID` for Sprucing.
Related to lhcb-dpa/project#133.
Needed by lhcb/Moore!943.
cc @nskidmor

## [Configurable ANNSvc and expose PR descriptor names](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/2965)
*Nicole Skidmore, 2021-04-27*

These are required changes for the sprucing:
* We can have a `HltANNSvc` for writing and reading. Some algorithms assumed default `ANNSvc`
* PersistRecoPacking now has a ```unpackers_by_key``` method exposing the descriptor names
See also https://gitlab.cern.ch/lhcb/Moore/-/merge_requests/763

## [Move PersistRecoConf and reading to LHCb](https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/2927)
*Nicole Skidmore, 2021-02-24*

Move PersistRecoConf and reading to LHCb. Remove packing/serialisation from TurboConf.
This supersedes https://gitlab.cern.ch/lhcb/LHCb/-/merge_requests/2921, as things got messy when https://gitlab.cern.ch/lhcb/Moore/-/merge_requests/685 was merged.
All issues were resolved before closing and implemented in this MR
Runs with https://gitlab.cern.ch/lhcb/Moore/-/merge_requests/735