Improving bookkeeping to help users find relevant productions
As discussed in lhcb-datapkg/AnalysisProductions!166 (merged), it is possible that a production is relevant to analyses in different working groups. We should think about whether we want to enable users to flag this in some way. The more general issue is: how can we make it easier for users to find relevant productions across the entirety of the LHCb AProds set?
I see three possibilities:
- Allowing users to specify more than one WG in their `info.yaml` and having this propagate all the way through to the database and web page. For approval they would just pick the liaison of one of the WGs; requiring approval from all WGs would make things unnecessarily tedious. The risk is that tuples may be set up in a way that is useful for one WG but not so useful for another, even if the underlying data is relevant, so labelling the production with both WGs might be misleading.
- Expanding the web page bookkeeping and filtering. We could allow users to flag existing productions as valid for additional working groups, e.g. if one user makes a Charm production and someone in B2OC finds a year later that it is useful for their own analysis, they could add `B2OC` as a tag to the particular jobs of the production that they used.
- Expanding our bookkeeping in a way that would perhaps be more useful: developing a way to tag productions by the decays they reconstruct. For example, my `B02DKPi` production would end up with tags like `BdToD0Kpi_D0ToKsHH`, `BdToD0Kpi_D0ToHH` and so on. How exactly that would be achieved needs some thought, but DaVinci already expands decay descriptors in the job log, so the simplest (though perhaps not very efficient) method would be to search the logs for all the expanded reconstructed tuple decay descriptors and from that determine the reconstructed decays for each job. Perhaps there is a nicer way of doing this in DaVinci itself instead of scraping it off the logs, and we could find a way to insert it into each job.
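For the first option, a rough sketch of what a multi-WG `info.yaml` might look like. Note this is an assumed extension of the schema, not something the current Analysis Productions validation accepts; the field name `wgs` is hypothetical:

```yaml
# Hypothetical info.yaml fragment: a 'wgs' list replacing the single
# 'wg' field is an ASSUMED extension, not the current schema.
defaults:
  wgs:
    - Charm
    - B2OC
  inform:
    - analyst@cern.ch
```

Approval could then be routed to whichever of the listed WGs' liaisons responds first, while the web page indexes the production under all of them.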
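To make the log-scraping idea concrete, here is a minimal sketch of extracting descriptors from a job log and turning them into tags. Everything here is an assumption for illustration: the `DecayDescriptor:` log line format is invented (the real DaVinci output would need to be checked), and the descriptor-to-tag naming convention is made up.

```python
import re

# ASSUMED log line format -- purely illustrative, not real DaVinci output.
SAMPLE_LOG = """
INFO DecayDescriptor: [B0 -> D0 K+ pi-]
INFO DecayDescriptor: [B0 -> D~0 K+ pi-]
INFO DecayDescriptor: [B0 -> D0 K+ pi-]
"""

DESCRIPTOR_RE = re.compile(r"DecayDescriptor:\s*(\[.*\])")


def extract_descriptors(log_text):
    """Collect the unique expanded decay descriptors from a job log."""
    return sorted({m.group(1) for m in DESCRIPTOR_RE.finditer(log_text)})


def descriptor_to_tag(descriptor):
    """Turn a descriptor into a compact tag, e.g. '[B0 -> D0 K+ pi-]'
    becomes 'B0ToD0KPi'. This naming convention is an assumption."""

    def clean(particle):
        # Strip charges/tildes and normalise capitalisation.
        return re.sub(r"[^A-Za-z0-9]", "", particle).capitalize()

    head, _, products = descriptor.strip("[]").partition("->")
    return clean(head) + "To" + "".join(clean(p) for p in products.split())


# Deduplicate tags, since e.g. particle and antiparticle descriptors
# collapse to the same tag under this naive convention.
tags = sorted({descriptor_to_tag(d) for d in extract_descriptors(SAMPLE_LOG)})
```

A real implementation would also need to associate each tag with the tuple it came from, but the per-job deduplication above is the core of the idea.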