@wouter Does it affect performance?
With @mgiza we ran on 2023 data and recent MC and made many studies.
The summary was presented at the last LHCb week:
I would like to understand your issue better.
I have seen TBLV fail when the beamline was not correctly set up (for example when you run MC with DD4HEP, where you need to set it up manually).
We are happy to investigate, but we need more information on exactly when it happened.
@msaur will you also update the VertexCompare test?
That sounds good to me. Indeed, it is a known issue of DD4Hep, so this solution makes sense.
@msaur The efficiency and fake rate make sense; however, the bottom part, namely:
PrimaryVertexChecker_52fc5742 INFO 1_res_all : x: +0.000, y: +0.000, z: +0.088
PrimaryVertexChecker_52fc5742 INFO 2_res_ntracks<10 : x: +0.000, y: +0.000, z: +0.169
PrimaryVertexChecker_52fc5742 INFO 3_res_ntracks(10,30) : x: +0.000, y: +0.000, z: +0.111
PrimaryVertexChecker_52fc5742 INFO 4_res_ntracks>30 : x: +0.000, y: +0.000, z: +0.043
PrimaryVertexChecker_52fc5742 INFO 5_res_z<-50 : x: +0.000, y: +0.000, z: +0.106
PrimaryVertexChecker_52fc5742 INFO 6_res_z(-50,50) : x: +0.000, y: +0.000, z: +0.087
PrimaryVertexChecker_52fc5742 INFO 7_res_z>50 : x: +0.000, y: +0.000, z: +0.071
PrimaryVertexChecker_52fc5742 INFO
PrimaryVertexChecker_52fc5742 INFO 1_pull_width_all : x: +0.000, y: +0.000, z: +1.145
PrimaryVertexChecker_52fc5742 INFO 1_pull_mean_all : x: +0.000, y: +0.000, z: +0.020
makes no sense, as the x/y columns are filled with 0.0.
We have seen this issue before: DD4HEP does not see the beamline shift; we had to switch back to DetDesc in order to get these numbers right.
@gunther Not really randomly; there were studies in the past that actually checked it.
Still for Run2, but I think at some point we checked it for Run3 too; however, I cannot find that plot easily.
As discussed in Rec!3612 and Rec!3631 the PV tests, namely:
hlt1_hlt2_pvs_vertex_compare.py and hlt1_pvs_PatPV3DFuture.py
need to be updated to MC 2024 when possible.
Most likely this should go together with, or just after, gitlab.cern.ch/lhcb/Moore/-/merge_requests/2907
@gunther do you mean:
Different content of Counters for algo PatPV3DFuture_1e9bbb95
(PatPV3DFuture_1e9bbb95 ref) Nb PVs | 500 | 914 | 1.8280
(PatPV3DFuture_1e9bbb95 new) Nb PVs | 500 | 835 | 1.6700
in the 2023 sample, right?
It does look weird to me, indeed. I would expect some difference, but not that big. I commented on the Moore part of the MR, but I don't think anything I commented on there could be related, unless the algorithm somewhere picks up a wrong configuration with a minimum of 4 tracks instead of 3. Let me investigate a bit and I will get back to you.
And no, sorry, too much paperwork these days, but it is on my to-do list, so it won't be forgotten.
and here we are missing "BeamSpotRHighMultiplicityCut=0.3",
here it should be 3 tracks for the VELO-open case
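To make the two settings above concrete, here is a hypothetical sketch written as a plain property dict; only BeamSpotRHighMultiplicityCut=0.3 is taken from the diff, while the "MinTracks" property name and the dict form are my assumptions and should be checked against the actual PatPV3DFuture interface:

```python
# Hypothetical sketch only, not the actual Moore options file.
# BeamSpotRHighMultiplicityCut=0.3 is the cut flagged as missing above;
# "MinTracks" is an assumed property name for the minimum-track cut.
patpv3dfuture_props = {
    "BeamSpotRHighMultiplicityCut": 0.3,
    "MinTracks": 3,  # 3 tracks for the VELO-open case
}
```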
Just to add: we are at the very end of a long process of merging !3612 (merged), so @spradlin you will need these modifications.
Let me just summarize where we are on behalf of @gciezare and myself.
The nominal conditions give:
Then we have a scenario where we switched off the radial distance:
So there is remaining blue background.
Then going to VeloOpen:
So if you compare nominal vs the radial-distance cut, the cut suppresses the background while the PV performance does not change that much. Going to VeloOpen does the job of removing the background, but I would not recommend a fake rate of 5.5%, and the suppression with respect to the radial case is basically much smaller.
@dovombru @mveghel VeloOpen = false only partially solves the problem. From my point of view it is still not well understood why this happened. The "it is bad" vs "it is partially fixed" scenarios differ by 0.15% in overall efficiency for PatPV3D, basically in the central region, while going to VeloOpen makes the performance so bad that I would not recommend it to anyone. And no, the tests do not cover it, as we still don't fully understand what happened there. To me this issue is not fully understood yet, but if we go with TBLV for 2024, it has a much smaller impact.
@msaur in our studies we have been using either the expected MC 2024 or MiniBrunel_2018_MinBias_FTv4_DIGI_retinacluster.
But the performance on the two is known to be different. We wouldn't mind keeping the retina input for now, though, and then following up on the issue and switching to MC 2024 when your MR is ready.
@msaur yes, our test should use the baseline expected 2024 minbias with no special options. So how should we proceed here? We don't want to clash with your MR.
@mgiza there is something I don't understand here:
You are loading the expected 2024 MC:
https://gitlab.cern.ch/lhcb-datapkg/PRConfig/-/blob/master/python/PRConfig/TestFileDB.py#L13529
which is DIGI, and not MDF, but your options:
a) call mdf_input_and_conds_mc_2024.py, which should have a different name; I guess a more appropriate one would be default_input_and_conds_mc_2024.py
b) your arguments are:
['$MOOREROOT/tests/options/mdf_input_and_conds_mc_2024.py', '$MOOREROOT/tests/options/download_mdf_input.py', '$RECOCONFROOT/options/hlt1_pvs_PatPV3DFuture.py']
so you call
$MOOREROOT/tests/options/download_mdf_input.py
which, to my eyes, is not needed here at all.
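If I read this correctly, the argument list could then be trimmed to something like the sketch below; note that default_input_and_conds_mc_2024.py is only the renaming suggested in (a) above, not an existing file:

```python
# Sketch of the trimmed argument list: the input/conditions file is renamed
# as suggested in (a), and download_mdf_input.py is dropped since the input
# is DIGI, not MDF. The renamed path is a suggestion, not an existing file.
args = [
    '$MOOREROOT/tests/options/default_input_and_conds_mc_2024.py',
    '$RECOCONFROOT/options/hlt1_pvs_PatPV3DFuture.py',
]
```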
@msaur could you please confirm?