Wouter's presentation is everything that is available, as far as I know: https://indico.cern.ch/event/1395756/contributions/5875210/attachments/2825512/4935525/About%20vertex%20fitting_and%20PV%20unbiasing.pdf. But please note that DTF still needs to be tested thoroughly, and it may not be very useful in this case without a PV constraint.
The ParticleVertexFitter in this line will try to combine the proton from the Lc+ and the pion from the Lb into a single vertex, hence selecting only Lc candidates with a flight distance compatible with zero, while the desired behaviour is probably different. ParticleAdder probably wouldn't solve the problem due to the huge combinatorics. This looks like a good place to use DecayTreeFitter (to be tested). Otherwise, the trigger line should probably be removed.
Tommaso Pajero (d140f623) at 20 Mar 19:13
update charm codeowners
Hi @lpica @dfazzini the tests are still failing and they are blocking Moore!3087 (merged). Could you have a look?
Implements a few changes to reduce the bandwidth of the charm HLT2 lines:

- d0_to_hlnux.py: persistreco is replaced with selective persistence in a cone, as for d_to_hhhgamma.py.
- cbaryon_spectroscopy.py (inclusive selection of Lc and Xic baryons), which has persistreco activated. TODO: note that the new IPCHI2 requirement on the baryons significantly reduces the rates, at the price of removing also charmed baryons from b-hadron decays. The impact on spectroscopy measurements is expected to be small, but this should be checked in the future, and some workarounds may need to be developed (e.g. a prescaled line without the IPCHI2 cut, or a non-prescaled line without the IPCHI2 cut and without persistreco activated).

In addition, the general event cut (GEC) is deactivated for all charm lines. A non-optimal tuning of the GEC in previous tests was responsible for the ~65% increase in the charm bandwidth observed by Ross at the RTA meeting of 2023.03.08. Analysts are currently checking that the increase in bandwidth corresponds to an equal increase in signal efficiency (preliminary studies are consistent with this assumption). The GEC will be reactivated only on a line-by-line basis where needed, as done in the other WGs.
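For concreteness, the prescaled-line workaround suggested above could look roughly like the sketch below. This is an illustrative, untested config fragment, not code from this MR: the line name, the builder `make_charm_baryons_no_ipchi2`, and the exact `Hlt2Line` keyword arguments are assumptions about the Moore API, to be checked against the actual code base.

```python
# Illustrative sketch only: line/builder names and keyword arguments are
# assumptions, not copied from the actual Moore code base.
from Moore.config import register_line_builder
from Moore.lines import Hlt2Line

all_lines = {}

@register_line_builder(all_lines)
def cbaryon_spectroscopy_no_ipchi2_line(
        name="Hlt2Charm_CBaryonSpectroscopyNoIPChi2", prescale=0.1):
    # Hypothetical builder without the IPCHI2 requirement, keeping the
    # charmed baryons from b-hadron decays that the new cut removes.
    baryons = make_charm_baryons_no_ipchi2()  # placeholder builder
    return Hlt2Line(
        name=name,
        algs=[baryons],
        prescale=prescale,   # prescale compensates the rate increase ...
        persistreco=False,   # ... or run unprescaled without persistreco
    )
```

Either knob (prescale or dropping persistreco) trades bandwidth against the spectroscopy coverage discussed above; which one is preferable would need a dedicated study.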
To be tested with DaVinci!1053 and LHCbIntegrationTests!65
FYI @fbetti @dmitzel @tpajero @aanelli @cacochat @fsouzade @roneil
Many thanks @gunther for following this up! Glad to see that it's ready to be merged.
Ping @lpica
By the way, shall we move to a calibration line if possible, to make these tests independent of the tuning of the physics lines? (Even though I don't expect any significant changes to the new line adopted in this MR.)
From the perspective of the Charm WG, we are happy to merge. Not sure if we have to wait for the DaVinci MR to be tested, too, though.
Many thanks Ross!
My guess is that the test doesn't run with the latest reconstruction options, require_gec.bind(skipUT=True) (see !2994). Then the GEC tuning is wrong, and the GEC requirement, which has been lifted in the last commit, acts as a prescale on all charm lines. The removal would then lead to the same increase in rates and bandwidth as observed by @rjhunter at the last RTA meeting.
Summing up, this MR tightens the selections and reduces our total bandwidth by around 40% when the GEC is not accounted for, but the removal of the GEC removes a global prescale of nearly 50%, which explains the increase in throughput and bandwidth. This also implies that charm accounts for around 8% of the HLT2 throughput. Not sure if this is expected and acceptable, or if we should try to reduce it.
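The bookkeeping above can be sanity-checked with one line of arithmetic. The numbers below are the approximate ones quoted in this thread (a 40% reduction from tighter selections, a ~50% effective prescale from the mis-tuned GEC) and are used purely for illustration:

```python
# Approximate numbers from the discussion above, for illustration only.
selection_reduction = 0.40   # tighter cuts: -40% bandwidth at fixed GEC
gec_rejection = 0.50         # mis-tuned GEC rejected ~50% of events flat

bw_before = 1.0  # normalised bandwidth with the old cuts and the GEC active
bw_after = bw_before * (1 - selection_reduction) / (1 - gec_rejection)

print(round(bw_after, 2))  # 1.2 -> a net ~20% increase despite tighter cuts
```

So the observed rise is consistent with the GEC removal dominating over the selection tightening, rather than with anything being wrong in the new cuts.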
The changes look fine to me as far as throughput is concerned. Maybe you can improve further by tightening the F.PT cut in the combination cut for D0 -> K3pi, which is currently 1900 MeV vs. 2500 MeV in the composite cut (I don't think that the vertex fit significantly changes the value of F.PT). Also, the electrons have an IPCHI2 > 0 cut that should be removed.
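The point about the two F.PT thresholds can be illustrated with a small kinematic toy: the combination pT is just the magnitude of the transverse vector sum of the daughters, and the vertex fit only perturbs it slightly, so any candidate passing the 2500 MeV composite cut almost certainly had a combination pT well above 1900 MeV. The helper below is a pure-Python illustration with made-up daughter momenta, not LHCb code:

```python
import math

def combination_pt(daughters):
    """pT of an n-body combination: magnitude of the vector sum of the
    daughters' transverse momenta (px, py), in MeV."""
    px = sum(d[0] for d in daughters)
    py = sum(d[1] for d in daughters)
    return math.hypot(px, py)

# Toy D0 -> K3pi candidate: four daughters with (px, py) in MeV.
daughters = [(900.0, 300.0), (700.0, -100.0), (600.0, 250.0), (500.0, -50.0)]
pt = combination_pt(daughters)
# px = 2700, py = 400 -> pT ~ 2729 MeV: already above the 2500 MeV composite
# threshold, so a 1900 MeV combination cut adds essentially no rejection.
```

This is why raising the combination threshold towards the composite one should cost little signal while saving some combinatorics early in the line.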
My point was more about the rates and total bandwidth, which are still very large and haven't been approved physics-wise in the previous MR. I am fine with merging this MR as it is for the time being, but more work is desirable in the future to reduce the bandwidth. 20 MB/s is a lot, and I am not sure it can be justified just by a 35% increase of the event size from the photons. Radiative decays D*0 -> D0 e+ e- should be suppressed by O(1%) compared to D*+ -> D0 pi+, and we have O(4 kHz) of D*+ -> D0 pi+ selected by Turbo. By the way, what do you mean by exclusive and inclusive rates?
Do you have an estimate for your signal rate?
We should at least make sure that the D0/Ds+ signals are very pure, and that most of the background is due to electrons. (I see that in Andre's thesis you had a requirement pT(D0 -> K3pi) > 3.5 GeV, which is much tighter than the current ones; if this needs to be applied offline anyway, we may as well do it in the trigger and save some bandwidth.) I wonder if you could also slightly tighten the pT cut and add a p cut for the electrons. E.g., would you lose a significant fraction of usable signal by going from 50 MeV to 100 MeV?
Hi @cacochat, before these changes this module took around 15 MB/s of bandwidth, which will likely increase by 60% after removing the GEC (link). This number is very large compared to that of our D*+-tagged D0 -> hh and D0 -> hhhh lines. Have you checked whether this is mostly due to the persisted photons, or whether the purity of your sample could be improved, too? E.g., what are the typical event size and rate for your lines?
Could you check the new bandwidth taken after these changes plus the removal of the GEC?
Have any studies on the predicted sensitivity of these searches been presented at LHCb?
Hi, OK, I see. There is also another problem, though, which is the S/B ratio (see the other thread).
Tommaso Pajero (6ae56bb1) at 13 Mar 13:54
remove GEC from all charm lines
@cacochat, out of curiosity, how much do you gain in signal efficiency by having your own set of D0 -> Kpi and D0 -> K3pi builders plus your BDT, rather than using the standard ones used for CPV studies? Moving to the standard builders could be another solution.