Fix hardcoded geometry in forward/seeding algorithms
Fixing the geometry hardcoding in the forward/seeding algorithms: #463 (closed)
DumpFTGeometry was changed to also dump the average z and the average dxdy per layer into the .bin geometry files.
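For illustration, computing the per-layer averages over all mats and appending them to the geometry blob could look like the sketch below (the MatInfo input and the blob layout are assumptions for the sketch, not the actual DumpFTGeometry code):

```cuda
#include <array>
#include <vector>

// Hypothetical per-mat input collected while iterating the detector description.
struct MatInfo {
  unsigned layer;  // 0..11 (T1..T3, x-u-v-x)
  float globalZ;   // z position of the mat
  float dxdy;      // stereo slope of the mat
};

// Append per-layer average z and dxdy to the geometry blob written to
// scifi_geometry.bin. Averaging over all mats of a layer is the
// approximation described above.
void append_layer_averages(std::vector<char>& blob, const std::vector<MatInfo>& mats)
{
  constexpr unsigned n_layers = 12;
  std::array<double, n_layers> sum_z {}, sum_dxdy {};
  std::array<unsigned, n_layers> n {};

  for (const auto& mat : mats) {
    sum_z[mat.layer] += mat.globalZ;
    sum_dxdy[mat.layer] += mat.dxdy;
    ++n[mat.layer];
  }

  auto write_float = [&blob](float v) {
    const char* p = reinterpret_cast<const char*>(&v);
    blob.insert(blob.end(), p, p + sizeof(float));
  };

  for (unsigned l = 0; l < n_layers; ++l)
    write_float(n[l] > 0 ? static_cast<float>(sum_z[l] / n[l]) : 0.f);
  for (unsigned l = 0; l < n_layers; ++l)
    write_float(n[l] > 0 ? static_cast<float>(sum_dxdy[l] / n[l]) : 0.f);
}
```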
Through Moore, the geometry was dumped to scifi_geometry.bin:
./Moore/run gaudirun.py Moore/Hlt/Moore/tests/options/default_input_and_conds_hlt1_retinacluster.py Allen/Dumpers/BinaryDumpers/options/dump_geometry.py
The Consumers at the beginning of the Allen sequence read these constants from the geometry and store them on the GPU. The constants are then accessible in each Allen kernel through constants.dev_scifi_geometry inside the operator().
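A minimal sketch of how a kernel could read the new per-layer constants from the blob passed via constants.dev_scifi_geometry (the SciFiGeometrySketch type, its field names, and the offsets are placeholders, not Allen's actual SciFi geometry wrapper):

```cuda
// Minimal placeholder for the SciFi geometry wrapper used in Allen kernels.
// In Allen the real object is built from constants.dev_scifi_geometry inside
// the kernel's operator(); the field names below are assumptions.
struct SciFiGeometrySketch {
  const float* averageZ;    // one entry per layer (12 layers), assumed name
  const float* averageDxDy; // one entry per layer (12 layers), assumed name
};

__global__ void example_kernel(const char* dev_scifi_geometry, unsigned offset_z, unsigned offset_dxdy)
{
  // Interpret the relevant part of the geometry blob.
  const SciFiGeometrySketch geom {
    reinterpret_cast<const float*>(dev_scifi_geometry + offset_z),
    reinterpret_cast<const float*>(dev_scifi_geometry + offset_dxdy)};

  const unsigned layer = threadIdx.x % 12;
  const float z_layer = geom.averageZ[layer];       // average z of this layer
  const float dxdy_layer = geom.averageDxDy[layer]; // average dxdy of this layer

  // ... use z_layer / dxdy_layer where the hardcoded constants were used before ...
  (void) z_layer;
  (void) dxdy_layer;
}
```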
Implemented a new versioning of the SciFi geometry for retro-compatibility:
- v0 hardcoded / v2 read-in geometry: for SciFi decoding v4, v5, v6
- v1 hardcoded / v3 read-in geometry: for SciFi decoding v7, v8
All newly dumped geometries will be either v2 or v3 depending on the decoding version. If Allen reads a v0 or v1 geometry, the hardcoded SciFi geometry values are used (retro-compatibility).
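Schematically, the retro-compatibility check reduces to reading the version from the blob and falling back to the hardcoded constants for v0/v1. The sketch below assumes the version is stored as the first 32-bit word, which is an illustrative choice rather than the actual layout:

```cuda
#include <cstdint>
#include <cstring>

// Geometry blob versions as described above:
//   v0 / v1 -> old blobs, fall back to the hardcoded SciFi constants
//   v2      -> read-in geometry for SciFi decoding v4/5/6
//   v3      -> read-in geometry for SciFi decoding v7/8
bool use_hardcoded_scifi_constants(const char* blob)
{
  std::uint32_t version = 0;
  std::memcpy(&version, blob, sizeof(version)); // assumed: version stored first
  return version < 2;                           // v0 and v1: retro-compatibility path
}
```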
Testing the effect of going from layer-level to SciFi quarter-level geometry constants is left to a second MR to factorise the work: #497
Merge request reports
Activity
assigned to @ascarabo
added RTA label
- Resolved by Dorothea Vom Bruch
As a first test, I tried the new geometry pick-up on the Allen default MC sample "MiniBrunel_2018_MinBias_FTv4_DIGI_retinacluster_v1.mdf", comparing long track reconstruction with the hardcoded and the geometry-input configurations (almost no difference). The next goal is to test on data using a MEP file.
Edited by Alessandro Scarabotto
- Resolved by Dorothea Vom Bruch
While averaging over all the mats of a layer is probably better than not taking them into account at all, I think it's still somewhat wrong, because a SciFi layer consists of too many parts that move independently of each other. Even if computational performance may forbid it in the end, it could be helpful to check using the correct per-hit z positions in the least-squares fit, for example (and possibly in other places, I am not so familiar with this code). And as soon as you have an x candidate track looking for stereo hits, you could go to the SciFi quarter level as in HLT2, I guess?
Edited by Andre Gunther
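For reference, a straight-line least-squares fit of x(z) that uses per-hit z positions instead of a single per-layer z could look like this sketch (illustrative only, not Allen's fit code):

```cuda
#include <cstddef>

// Fit x = x0 + tx * (z - z_ref) using the per-hit z positions.
// Returns false if the system is degenerate.
bool fit_xz(const float* x, const float* z, std::size_t n, float z_ref, float& x0, float& tx)
{
  float s0 = 0.f, sz = 0.f, szz = 0.f, sx = 0.f, sxz = 0.f;
  for (std::size_t i = 0; i < n; ++i) {
    const float dz = z[i] - z_ref; // per-hit z, not the layer average
    s0 += 1.f;
    sz += dz;
    szz += dz * dz;
    sx += x[i];
    sxz += x[i] * dz;
  }
  const float det = s0 * szz - sz * sz;
  if (det == 0.f) return false;
  x0 = (sx * szz - sxz * sz) / det;
  tx = (s0 * sxz - sz * sx) / det;
  return true;
}
```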
- Resolved by Dorothea Vom Bruch
Added the same fixes to the SciFi seeding algorithm (average z per layer and average dxdy per layer from the geometry). One issue I found is that the hardcoded dxdy value the seeding uses is 0.086, while from the geometry we get the more correct 0.087489. This causes an increase in the ghost rate which, by a quick check, is due to the search windows in the uv layers being tuned too large. One quick fix is to reduce the tolerance of the uv search windows from TUNING_TOL = 2.f to TUNING_TOL = 1.f. Of course this would need an optimisation. I tag here @lohenry @ahennequ @cagapopo @lcalefic for opinions. I will check the effect on data as I did above for the forward (I have no time today, I will do it in January when I come back).
Allen Checker, standard minbias sample: MiniBrunel_2018_MinBias_FTv4_DIGI_retinacluster_v1.mdf (one should use a more recent sample for the tuning; here it is just to show the effect):
- MASTER (hardcoded geometry)
- GEOMETRY INPUT (average z and dxdy per layer)
- QUICK TUNING (uv window tolerance reduced to TUNING_TOL = 1.f)
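For context, a sketch of how such a tolerance typically scales a uv search window; only TUNING_TOL itself comes from the seeding code, the window formula and names are illustrative:

```cuda
// Tolerance on the uv search window in the seeding.
// Was 2.f; with the read-in dxdy (0.087489 instead of 0.086) a value of 1.f
// keeps the ghost rate under control, pending a proper re-optimisation.
constexpr float TUNING_TOL = 1.f;

// Hypothetical illustration: the expected uv-hit position is offset by
// dxdy * y, and the hit is accepted within a window scaled by the tolerance.
__host__ __device__ inline bool in_uv_window(float x_hit, float x_predicted, float dxdy, float y, float base_window)
{
  const float x_expected = x_predicted + dxdy * y;
  return (x_hit > x_expected - TUNING_TOL * base_window) &&
         (x_hit < x_expected + TUNING_TOL * base_window);
}
```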
- Resolved by Dorothea Vom Bruch
Can the change in the geometry description be handled with a new version such that the old tests still pass and we only update one with a recent sample?
mentioned in issue #463 (closed)
added 152 commits
- 1cd69ccd...c0b5420b - 143 commits from branch master
- a23f1ad2 - add scifi consumer
- d3a6df16 - change FT dumper
- 3fe04448 - change dumper scifi
- cc8bf3da - fix hardcoded geometry in forward
- 3bb3f223 - Fixed formatting
- 7c40d8d5 - remove hardcoded geometry in seeding
- 1a0aa526 - add consumer for geometry versioning
- 84c50356 - add comment
- 7fa61bf2 - cleanup hardcoded constants