Commented createODIN in Decoders' declarations

Merged Sebastien Ponce requested to merge sponce_createODINFix into future

In the Future branch, createODIN should not be added to the sequence, as it usually fails when it tries to run. The reason is that the ODIN bank is usually already present (to be precise, whenever a condition is used anywhere) and the functional version of createODIN will try to overwrite it. In the master branch, the old version only created the ODIN entry if it was not already there, but this behavior is not allowed anymore. The conclusion is that, by default, createODIN should not appear in sequences in the new framework, except in very special cases (tiny tests) where no conditions are used.

Edited by Sebastien Ponce

Merge request reports


Merged (Jan 12, 2025 12:02am UTC)


Activity

  • As this is not a RICH-specific problem (for example, createUTClusters also requires createODIN), would it not be a better hack to make sure that the createODIN action in the decoder DB is a no-operation? That would quarantine/localize the problem more...

    Also, maybe the scheduler should be taught to check (maybe only when configured to do so, as this is potentially expensive!) whether the promised output of some algorithm is already present, and at least warn about it, as that would indicate a misconfiguration of the data flow (there must then be more than one component producing the output -- in this case createODIN and the event time service).

  • Another point: it seems that the typical usage of the decoderDB is such that items are 'keyed' off the 'decoder' instance, and not on the 'product' of that decoder. The latter would be an explicit dataflow; the former is basically using 'inside information' by assuming knowledge of the output location of a given decoder -- something I also commented on here. So updating the use of the decoderDB to use the dataflow instead would ease the migration towards properly configuring the scheduler (the configuration would just declare all decoders to the scheduler, which would then sort out whether, and in which order, to run them, doing dynamically, at initialisation time, what the decoderDB now hard-wires at configuration time). A rough sketch of the difference is below.
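
    The following is a minimal, purely illustrative sketch of that difference; the dictionary layout and TES locations are invented here and are not the real DecoderDB API:

    # Hypothetical illustration only -- not the actual DecoderDB structures.
    decoder_db = {
        "createODIN":       {"outputs": ["DAQ/ODIN"]},
        "createUTClusters": {"outputs": ["Raw/UT/Clusters"]},
    }

    # Keyed off the decoder instance: the consumer must already know which
    # decoder to ask for and where that decoder writes its output.
    odin_decoder = decoder_db["createODIN"]

    # Keyed off the product: the consumer only states which location it needs,
    # and the producer is derived from the declared outputs (explicit dataflow).
    producer_of = {loc: name
                   for name, cfg in decoder_db.items()
                   for loc in cfg["outputs"]}
    odin_producer = producer_of["DAQ/ODIN"]    # -> "createODIN"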

  • I agree with @graven. It makes no sense to only make this change for the RICH, and leave the others that do exactly the same untouched.

  • added 1 commit

    • bfe58e47 - Commented crateODIN in Decoders' declarations

  • Sebastien Ponce changed title from Commented rich Decoder's requirement on createODIN to Commented crateODIN in Decoders' declarations

  • Sebastien Ponce changed the description

  • I've now updated this merge request and it's now generic (not RICH-specific). Bottom line: createODIN should no longer appear in the sequences.

  • Wouldn't something like this work (and, if it does, be better)?

    diff --git a/DAQ/DAQSys/python/DAQSys/Decoders.py b/DAQ/DAQSys/python/DAQSys/Decoders.py
    index 1bb00aa..2ecb37c 100644
    --- a/DAQ/DAQSys/python/DAQSys/Decoders.py
    +++ b/DAQ/DAQSys/python/DAQSys/Decoders.py
    @@ -10,6 +10,7 @@ DecoderDB={}
     #===========ODIN===========
     Decoder("createODIN",active=True,banks=["ODIN"],
             publicTools=["OdinTimeDecoder/ToolSvc.OdinTimeDecoder"],
    +        properties={ "Enable" : "False" },
             conf=DecoderDB)
     
     Decoder("OdinTimeDecoder/ToolSvc.OdinTimeDecoder",active=False,
    

    That keeps createODIN around (and, from first principles, it should(*)!) but just disables it...

    (*) How do we want to solve this asymptotically? What I have in mind is that we have to avoid the need for a begin-event-incident-triggered conditions service which in turn calls an 'event clock service' which implicitly creates a TES object -- and I was under the impression @clemenci added an explicit 'place holder' in the data dependencies of all algorithms representing a conditions 'barrier'.

    In that scenario, createODIN would not get such a dependency, and there would be one other algorithm without such a dependency; that algorithm would consume an ODIN bank and produce the 'place holder', i.e. a transformer from 'ODIN' -> 'place holder', where the place holder is functionally equivalent to an std::condition_variable. The transformer just gets scheduled as part of the data dependency graph, and instead of the conditions service registering to the 'begin event' incident, it is this transformer that explicitly invokes DetDataSvc::setEventTime (which is now done by the EventClockSvc after it explicitly triggers the decoding of the ODIN bank from an incident handler here) and UpdateManagerSvc::newEvent from its place_holder operator()(const ODIN& odin) const call operator... (a toy sketch of this dataflow is below)

    Edited by Gerhard Raven
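
    To make the 'place holder' idea above a bit more concrete, here is a toy Python model of that dataflow; every name in it (ODIN, ConditionsPlaceHolder, the service stand-ins) is invented for illustration and does not correspond to the real Gaudi/LHCb interfaces:

    class ODIN:
        """Stand-in for the decoded ODIN object; only carries the event time."""
        def __init__(self, gps_time):
            self.gps_time = gps_time

    class ConditionsPlaceHolder:
        """Barrier object: anything needing conditions declares this as input."""
        pass

    class OdinToConditionsBarrier:
        """Toy transformer 'ODIN' -> 'place holder'.

        The conditions update is driven by the data dependency on ODIN instead
        of a begin-event incident: the scheduler runs this after the ODIN
        producer and before any consumer of the place holder.
        """
        def __init__(self, det_data_svc, update_mgr_svc):
            self.det_data_svc = det_data_svc      # stand-in for DetDataSvc
            self.update_mgr_svc = update_mgr_svc  # stand-in for UpdateManagerSvc

        def __call__(self, odin):
            self.det_data_svc.setEventTime(odin.gps_time)  # what EventClockSvc does today
            self.update_mgr_svc.newEvent()                 # trigger the conditions update
            return ConditionsPlaceHolder()
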
  • No, your solution does not work. Decoders depending on createODIN would fail, saying that an enabled decoder depends on a disabled one. Actually, this is the error that triggered this whole discussion! For the asymptotic solution, I let Marco answer as I do not think I master all this. My impression, though, is that the ODIN bank should be created in all cases by some dedicated code, and algorithms can take it for granted.

  • Note that there is a difference between an active decoder (i.e. in my proposal it still says active=True) and, at the level of Gaudi, disabling the running of an algorithm (which is done by setting the Enable property to False, something the Decoder DB is completely ignorant of). So the decoder DB thinks the decoder is enabled and it is added to all sequences as before; it just skips its event processing at runtime...

  • Ah ok, then I did not try this and it may work... but do we really want two ways of disabling/enabling, such that you can say active=True and the algorithm still just skips its event processing?!? All this decoder business looks very fragile and I believe it should be completely dropped once we are all functional and the data dependencies are native.

  • Note that yet another 'hack' would be to make a variation on OdinTimeDecoder which, instead of invoking the full decoding of ODIN which puts it in the TES, would just take the raw event as input, grab the ODIN raw bank, and do the minimal thing to figure out the event time and use that (without putting anything on the TES). It would duplicate some code and work, but would be less disruptive.

  • See my second comment above: at some point, the decoder DB just disappears as its role gets naturally subsumed by the scheduler. All decoders now registered to the decoder DB just configure their input and output, and once that is known by the scheduler, that's it. (Also note that the decoder DB talks about 'Active' whereas Gaudi talks about 'Enabled' -- the decoder DB only functions at the python configuration level, while the Enable property is used in sysExecute to determine whether or not to call execute.) A toy sketch of that end state is below.
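
    As a purely illustrative sketch of that end state (decoder names, inputs and outputs are invented here), a demand-driven scheduler only needs the declared inputs and outputs to decide whether, and in which order, decoders run:

    # Toy model: each decoder only declares its dataflow, nothing is hard-wired.
    decoders = {
        "createODIN":       {"inputs": ["DAQ/RawEvent"],             "outputs": ["DAQ/ODIN"]},
        "createUTClusters": {"inputs": ["DAQ/RawEvent", "DAQ/ODIN"], "outputs": ["Raw/UT/Clusters"]},
        "RichDecoder":      {"inputs": ["DAQ/RawEvent"],             "outputs": ["Raw/Rich/Digits"]},
    }

    def schedule(wanted, available):
        """Walk back from the wanted locations and return producers in a runnable order."""
        producer_of = {loc: name for name, io in decoders.items() for loc in io["outputs"]}
        order, have = [], set(available)

        def need(loc):
            if loc in have:
                return
            name = producer_of[loc]              # KeyError here means an unsatisfiable dataflow
            for dep in decoders[name]["inputs"]:
                need(dep)                        # first schedule whatever this decoder needs
            if name not in order:
                order.append(name)
            have.update(decoders[name]["outputs"])

        for loc in wanted:
            need(loc)
        return order

    print(schedule({"Raw/UT/Clusters"}, {"DAQ/RawEvent"}))
    # ['createODIN', 'createUTClusters'] -- RichDecoder is never scheduled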

  • Sebastien Ponce changed title from Commented crateODIN in Decoders' declarations to Commented createODIN in Decoders' declarations

  • I agree with @gerhard here. I think we need to quite soon just abandon all the old configuration options as currently used, which explicitly schedule the algorithms, and start building up a new set of configuration options for the upgrade that do things 'properly', by which I mean remove any explicit scheduling and just define the inputs and outputs, leaving the scheduling to where it should be done, in the scheduler. This should be done in brand new packages, which can then exist for as long as they are required alongside the current Run II options. Hacks like this are not only not the way things need to be done long term, but also break running with the old options, so no one wins...

    Time spent discussing 'band aid' MRs such as this one is, in the end, really just a waste of time.

  • I also agree with you and Gerhard on the long term. Now, if you have time to start designing the new options, you are more than welcome, but for the moment I have other priorities (like a TDR to write) and thus have no time for it. But I need something running, so as ugly as the change might be (and this one is not that bad), it is a necessity right now, not just a waste of time.

  • I understand. I just fear that changes like this introduce divergences between what is needed to run the old framework and the new one. Sharing the same configuration framework just seems the wrong way to go (but I agree we should have split off at the very start, and now is not the best time). So let's do what is needed for the TDR, but I would suggest we perhaps 'undo' it before the master/future merge?

  • @jonrob I don't quite follow your last point. Why would we need to undo this if we're going to drop the decoders anyway?

  • My point was that the change in this MR affects both usages of this module, so it probably breaks things for the old usage. I agree that for the new framework the better change, rather than the one in this MR, is to just remove the use of the module entirely.
