Commit 8dc7fac6 authored by Dorothea vom Bruch

move documentation to Sphinx

parent 31be16a6
......@@ -7,7 +7,10 @@ variables:
TARGET_BRANCH: master
ALLEN_DATA: "/scratch/allen_data"
LCG_VERSION: "LCG_101"
LB_NIGHTLY_SLOT: lhcb-head
BINARY_TAG: x86_64_v2-centos7-gcc11-opt
NO_LBLOGIN: "1" # prevent lbdocker containers to start LbLogin/LbEnv
PROFILE_DEVICE: "a5000"
RUN_THROUGHPUT_OPTIONS_CUDAPROF: "-n 500 -m 500 -r 1 -t 1"
......@@ -24,6 +27,7 @@ stages:
- run # Build and run (throughput, efficiency, etc...)
- test # Runs various tests of the software
- publish # Publishes the results of the tests and runs in channels and grafana
- docs # Build Sphinx documentation
- manual trigger # Blocks full pipeline from running in merge requests
- build full
......@@ -54,9 +58,9 @@ check-env:
check-copyright:
<<: *active_branches
stage: check
image: gitlab-registry.cern.ch/ci-tools/ci-worker:cc7
script:
- curl -o lb-check-copyright "https://gitlab.cern.ch/lhcb-core/LbDevTools/-/raw/master/LbDevTools/SourceTools.py?inline=False"
- python lb-check-copyright --license=Apache-2.0 origin/${TARGET_BRANCH}
......@@ -84,7 +88,6 @@ check-formatting:
expire_in: 1 week
needs: []
docker_image:build:
stage: build docker
only:
......@@ -101,4 +104,40 @@ docker_image:build:
allow_failure: true
needs: []
include: "scripts/ci/config/main.yaml"
pages:
stage: docs
image: gitlab-registry.cern.ch/lhcb-core/lbdocker/centos7-build:latest
tags:
- cvmfs
variables:
PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
script:
- . /cvmfs/lhcb.cern.ch/lib/LbEnv.sh
# use nightly build for python environment, needed for API reference of python selection functions
# if today's nightly build does not exist, use yesterday's
- |
if [ -d "/cvmfs/lhcbdev.cern.ch/nightlies/lhcb-head/Today" ] ; then
export DAY=Today
echo "Using today's nightly"
else
export DAY=Yesterday
echo "Using yesterday's nightly"
fi
- lb-run --nightly lhcb-head/$DAY Allen/HEAD make -C doc linkcheck || reasons+='ERROR failed link check\n'
- lb-run --nightly lhcb-head/$DAY Allen/HEAD make -C doc html || reasons+='ERROR failed html generation\n'
- mv doc/_build/html public
- if [ -n "$reasons" ]; then echo -e $reasons; exit 1; fi
allow_failure:
exit_codes: 77
artifacts:
paths:
- public
cache:
key: "$CI_JOB_NAME"
paths:
- .cache/pip
needs: []
include: "scripts/ci/config/main.yaml"
\ No newline at end of file
Produce MDF input for standalone Allen
================================
These are instructions for producing MDF files for standalone Allen processing by running Moore. The MDF files will contain raw banks with the raw data from the sub-detectors, as well as raw banks containing MC information about tracks and vertices required for the physics checks inside Allen.
Follow [these](../Rec/Allen/readme.md) instructions to call Allen from Moore. Then use this [options script](https://gitlab.cern.ch/lhcb/Moore/blob/master/Hlt/RecoConf/options/mdf_for_standalone_Allen.py) to dump both an MDF file
and the geometry information required for standalone Allen processing. The easiest approach is to use input files from the TestFileDB; then only the key has to be specified. The output will be located in a directory whose name is the TestFileDB key. This directory will contain two subdirectories: `mdf` with the MDF file containing the raw banks and `geometry_dddb-tag_sim-tag` with binary files containing the geometry information required for Allen.
Call the script in a stack setup like so:
```
./Moore/run gaudirun.py Moore/Hlt/RecoConf/options/mdf_for_standalone_Allen.py
```
If you would like to dump a large number of events into MDF files, it is convenient to produce several MDF output files to avoid overly large single files. This [python script](https://gitlab.cern.ch/lhcb/Moore/-/blob/master/Hlt/RecoConf/options/mdf_split_for_standalone_Allen.py) produces several MDF output files from the input files. Again, the TestFileDB entry is used to specify the input. The output MDF files combine a number of input files, configurable with `n_files_per_chunk`. Call this script like so:
```
./Moore/run bash --norc
python Moore/Hlt/RecoConf/scripts/mdf_split_for_standalone_Allen.py
```
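For illustration only, the chunking amounts to grouping the input file list; this is a sketch of the logic, not the actual contents of the script, and only the parameter name `n_files_per_chunk` is taken from it:
```python
# Sketch only: group input files into chunks; one MDF output file is
# produced per chunk. File names here are hypothetical.
input_files = [f"input_{i}.digi" for i in range(10)]
n_files_per_chunk = 3

chunks = [
    input_files[i:i + n_files_per_chunk]
    for i in range(0, len(input_files), n_files_per_chunk)
]
for i, chunk in enumerate(chunks):
    print(f"chunk {i}: {chunk}  ->  output_{i}.mdf")
```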
Call Allen from Gaudi, event loop directed by Moore
=============================
The software can be compiled either based on the nightlies or by compiling the full stack, as described [here](https://gitlab.cern.ch/lhcb/Allen/-/blob/master/readme.md#call-allen-with-gaudi-steer-event-loop-from-moore).
Call the executable from within the stack directory as in the following examples:
```
./Moore/run gaudirun.py Moore/Hlt/Moore/tests/options/default_input_and_conds_hlt1.py Moore/Hlt/RecoConf/options/hlt1_reco_allen_track_reconstruction.py
```
This will call the full Allen sequence, convert reconstructed tracks to Rec objects and run the MC checkers for track reconstruction efficiencies. The input sample is defined in `Hlt/Moore/tests/options/default_input_and_conds_hlt1.py`.
For a comparison of the Allen standalone track checker and the PrChecker called from Moore, it can be helpful to dump the binary files required for Allen standalone running at the same time
as calling the track reconstruction checker in Moore. For this, the dumping can be enabled in the script by setting `dumpBinaries = True`.
If you want to run the PV checker, you need to use [this](https://gitlab.cern.ch/lhcb/Rec/tree/dovombru_twojton_pvchecker) branch in Rec and the following command:
```
./Moore/run gaudirun.py Moore/Hlt/Moore/tests/options/default_input_and_conds_hlt1.py Moore/Hlt/RecoConf/options/hlt1_reco_allen_pvchecker.py
```
To check the IP resolution:
```
./Moore/run gaudirun.py Moore/Hlt/Moore/tests/options/default_input_and_conds_hlt1.py Moore/Hlt/RecoConf/options/hlt1_reco_allen_IPresolution.py
```
To check the track momentum resolution:
```
./Moore/run gaudirun.py Moore/Hlt/Moore/tests/options/default_input_and_conds_hlt1.py Moore/Hlt/RecoConf/options/hlt1_reco_allen_trackresolution.py
```
To check the muon identification efficiency and misID efficiency:
```
./Moore/run gaudirun.py Moore/Hlt/Moore/tests/options/default_input_and_conds_hlt1.py Moore/Hlt/RecoConf/options/hlt1_reco_allen_muonid_efficiency.py
```
The scripts in `Moore/Hlt/RecoConf/scripts/` can be used to produce plots of the various efficiencies and resolutions from the ROOT files produced by one of the previous calls to Moore.
Running a different sequence
------------------------------
To run a different sequence, the function call that sets up the
control flow can be wrapped using a `with` statement:
```python
with sequence.bind(sequence="hlt1_pp_no_gec"):
run_allen(options)
```
A different set of configuration parameters can be configured in the
same way:
```python
with sequence.bind(json="/path/to/custom/config.json"):
run_allen(options)
```
To obtain a JSON file that contains all configurable parameters, the
following can be used with an Allen standalone build:
```console
./Allen --sequence='hlt1_pp_ecal' --write-configuration 1
```
This will write a file `config.json` in the current working
directory, which can be used to rerun with a different set of cuts
without rebuilding.
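As an illustration of editing the generated configuration before rerunning, one could tweak a cut with a small script; the algorithm and property names below are placeholders for entries found in your `config.json`:
```python
# Sketch only: load the generated config.json, change one property and
# write a custom configuration. "algorithm_name" and "cut_name" are
# placeholders for entries that exist in the generated file.
import json

with open("config.json") as f:
    config = json.load(f)

config["algorithm_name"]["cut_name"] = "0.5"

with open("custom_config.json", "w") as f:
    json.dump(config, f, indent=2)
```
The custom file can then be used in a Gaudi job via `sequence.bind(json=...)` as shown above.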
Call HLT1 selection efficiency script in MooreAnalysis
------------------------------
The [MooreAnalysis](https://gitlab.cern.ch/lhcb/MooreAnalysis) repository contains the `HltEfficiencyChecker` tool, which computes rates and
efficiencies. To get `MooreAnalysis`, you can use the nightlies or do `make MooreAnalysis` from the top-level directory of the stack.
To get the efficiencies of all the Allen lines, from the top-level directory (`Allen_Gaudi_integration`) do:
```
./MooreAnalysis/run MooreAnalysis/HltEfficiencyChecker/scripts/hlt_eff_checker.py MooreAnalysis/HltEfficiencyChecker/options/hlt1_eff_example.yaml
```
and to get the rates:
```
./MooreAnalysis/run MooreAnalysis/HltEfficiencyChecker/scripts/hlt_eff_checker.py MooreAnalysis/HltEfficiencyChecker/options/hlt1_rate_example.yaml
```
Full documentation for the `HltEfficiencyChecker` tool, including a walk-through example for HLT1 efficiencies with Allen, is given
[here](https://lhcbdoc.web.cern.ch/lhcbdoc/moore/master/tutorials/hltefficiencychecker.html).
Testing Allen algorithms in Gaudi
---------------------------------
To test Allen algorithms in Gaudi, the data produced by an Allen
algorithm must be copied from memory managed by Allen's memory manager
to members of the `HostBuffers` object. The `HostBuffers` object is
put in the TES by the `RunAllen` algorithm that wraps the entire Allen
sequence.
An example for the Velo clusters can be seen
[here](https://gitlab.cern.ch/lhcb/Allen/-/blob/raaij_decoding_tests/stream/sequence/include/HostBuffers.cuh#L59)
and [here](https://gitlab.cern.ch/lhcb/Allen/-/blob/raaij_decoding_tests/device/velo/mask_clustering/src/MaskedVeloClustering.cu#L49),
where additional members have been added to the `HostBuffers` and data
produced by the `velo_masked_clustering` algorithm is copied there.
The `TestVeloClusters` algorithm is an example algorithm that
recreates the required Allen event model object - in this case
`Velo::ConstClusters` - from the data in `HostBuffers` and loops over
the clusters.
An example options file to run `TestVeloClusters` as a test can be
found [here](https://gitlab.cern.ch/lhcb/Moore/-/blob/master/Hlt/RecoConf/tests/qmtest/decoding.qms/hlt1_velo_decoding.qmt).
Future Developments for tests
------
Developments are ongoing to allow Allen algorithms to be run directly
as Gaudi algorithms through automatically generated wrappers. All data
produced by Allen algorithms will then be stored directly in the TES
when running with the CPU backend. The following merge request tracks
the work in progress:
[https://gitlab.cern.ch/lhcb/Allen/-/merge_requests/431](https://gitlab.cern.ch/lhcb/Allen/-/merge_requests/431)
Once that work is completed and merged, Allen algorithms will no
longer need to copy data into the `HostBuffers` object and any Gaudi
algorithms used for testing will have to be updated to obtain their
data directly from the TES instead of from `HostBuffers`.
Track Checker
==============
For producing plots of efficiencies etc., Allen needs to be compiled with ROOT (see the main [readme](../../readme.md)).
Create the directory `Allen/output`; the ROOT file `PrCheckerPlots.root` will then be saved there when running the MC validation (option `-c`).
Efficiency plots
------------------------
Histograms of reconstructible and reconstructed tracks are saved in `Allen/output/PrCheckerPlots.root`.
Plots of efficiencies versus various kinematic variables can be created by running `efficiency_plots.py` in the directory
`checker/plotting/tracking`. The resulting ROOT file `efficiency_plots.root` with graphs of efficiencies is saved in the directory `plotsfornote_root`.
Momentum resolution plots
--------------------------
A 2D histogram of momentum resolution versus momentum is also stored in `Allen/output/PrCheckerPlots.root` for Upstream and Forward tracks.
Velo tracks are straight lines, so no momentum resolution is calculated. Running the script `momentum_resolution.py` in the directory `checker/plotting/tracking`
will produce a plot of momentum resolution versus momentum in the ROOT file `momentum_resolution.root` in the directory `plotsfornote_root`.
In this script, the 2D histogram of momentum resolution versus momentum is projected onto the momentum resolution axis in slices of the momentum.
The resulting 1D histograms are fitted with a Gaussian function if they have more than 100 entries. The Gaussian fit is constrained to the region [-0.05, 0.05] in
the case of Forward tracks and to [-0.5, 0.5] for Upstream tracks, to avoid the non-Gaussian tails. (It is to be studied whether a different fit function is needed.)
The mean and sigma of the Gaussian are used as value and uncertainty in the momentum resolution versus momentum plot.
The plot is only generated if at least one momentum slice histogram has more than 100 entries.
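A minimal PyROOT sketch of this slice-and-fit procedure, assuming a hypothetical histogram name (the actual names used in `momentum_resolution.py` may differ):
```python
# Sketch only: project a 2D (p, dp/p) histogram onto the resolution axis
# in momentum slices and fit each slice with a Gaussian.
# The histogram name is hypothetical.
import ROOT

f = ROOT.TFile.Open("../../output/PrCheckerPlots.root")
h2 = f.Get("Forward/momentum_resolution_vs_p")  # hypothetical name

for b in range(1, h2.GetNbinsX() + 1):
    h1 = h2.ProjectionY(f"slice_{b}", b, b)
    if h1.GetEntries() <= 100:
        continue  # skip sparsely populated slices
    # constrain the fit to [-0.05, 0.05] to avoid the non-Gaussian tails
    # (Forward tracks)
    h1.Fit("gaus", "Q", "", -0.05, 0.05)
    g = h1.GetFunction("gaus")
    mean, sigma = g.GetParameter(1), g.GetParameter(2)
    print(f"slice {b}: resolution = {mean:.4f} +/- {sigma:.4f}")
```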
Configuring the sequence of algorithms
======================================
Allen centers around the idea of running a __sequence of algorithms__ on input events. This sequence is predefined and will always be executed in the same order.
The sequence can be configured with python. Existing configurations can be browsed under `configuration/sequences`. The sequence name is the name of each individual file, without the `.py` extension, in that folder. For instance, some sequence names are `velo`, `veloUT`, or `hlt1_pp_default`.
The sequence can be chosen _at compile time_ with the cmake option `SEQUENCE`, and passing a concrete sequence name. For instance:
```sh
# Configure the velo sequence
cmake -DSEQUENCE=velo ..
# Configure the veloUT sequence
cmake -DSEQUENCE=veloUT ..
# Configure the hlt1_pp_default sequence (the default)
cmake ..
```
The rest of this readme explains the workflow to generate a new sequence.
Inspecting algorithms
---------------------
In order to generate a new sequence, Python 3 is required, as well as either cvmfs with CentOS 7, or clang 9 or higher.
It is possible to inspect all algorithms defined in Allen interactively by using the _python view_ that the parser automatically generates. From a build directory:
```sh
foo@bar:build$ cmake ..
foo@bar:build$ cd sequences
foo@bar:build/sequences$ python3
Python 3.8.2 (default, Feb 28 2020, 00:00:00)
[GCC 10.0.1 20200216 (Red Hat 10.0.1-0.8)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from AllenConf import algorithms
>>>
```
Now, you can inspect any existing algorithm. For instance, all algorithms starting with `velo_`:
```sh
>>> algorithms.velo_
algorithms.velo_calculate_number_of_candidates_t( algorithms.velo_kalman_filter_t(
algorithms.velo_calculate_phi_and_sort_t( algorithms.velo_masked_clustering_t(
algorithms.velo_consolidate_tracks_t( algorithms.velo_pv_ip_t(
algorithms.velo_copy_track_hit_number_t( algorithms.velo_search_by_triplet_t(
algorithms.velo_estimate_input_size_t( algorithms.velo_three_hit_tracks_filter_t(
```
One can see the input and output parameters and properties of an algorithm by using its `getDefaultProperties` class method. For instance:
```sh
>>> algorithms.velo_calculate_number_of_candidates_t.getDefaultProperties()
OrderedDict([('host_number_of_events_t',
DataHandle('host_number_of_events_t','R','unsigned int')),
('dev_event_list_t', DataHandle('dev_event_list_t','R','mask_t')),
('dev_velo_raw_input_t',
DataHandle('dev_velo_raw_input_t','R','char')),
('dev_velo_raw_input_offsets_t',
DataHandle('dev_velo_raw_input_offsets_t','R','unsigned int')),
('dev_number_of_candidates_t',
DataHandle('dev_number_of_candidates_t','W','unsigned int')),
('verbosity', ''),
('block_dim_x', '')])
```
Creating a new sequence
-----------------------
In order to create a new sequence, head to `configuration/sequences` and create a new sequence file with extension `.py`.
Alternatively, you may reuse what already exists in `AllenConf` and extend it. To create a new sequence, you should:
* Instantiate algorithms. Algorithm inputs must be assigned other algorithms' outputs.
* Generate at least one CompositeNode with the algorithms you want to run.
* Generate the configuration with the `generate()` method.
As an example, let us add the SAXPY algorithm to a custom sequence. Start by including algorithms and the VELO sequence:
```python
from AllenConf.algorithms import saxpy_t
from AllenConf.velo_reconstruction import decode_velo, make_velo_tracks
from AllenConf.utils import initialize_number_of_events
from PyConf.control_flow import CompositeNode
from AllenCore.generator import generate, make_algorithm
number_of_events = initialize_number_of_events()
decoded_velo = decode_velo()
velo_tracks = make_velo_tracks(decoded_velo)
```
`initialize_number_of_events`, `decode_velo` and `make_velo_tracks` are already defined functions that instantiate the relevant algorithms that
we will need for our example.
We should now add the SAXPY algorithm. We can use the interactive session to explore what it requires:
```sh
>>> algorithms.saxpy_t.getDefaultProperties()
OrderedDict([('host_number_of_events_t',
DataHandle('host_number_of_events_t','R','unsigned int')),
('dev_number_of_events_t',
DataHandle('dev_number_of_events_t','R','unsigned int')),
('dev_offsets_all_velo_tracks_t',
DataHandle('dev_offsets_all_velo_tracks_t','R','unsigned int')),
('dev_offsets_velo_track_hit_number_t',
DataHandle('dev_offsets_velo_track_hit_number_t','R','unsigned int')),
('dev_saxpy_output_t',
DataHandle('dev_saxpy_output_t','W','float')),
('verbosity', ''),
('saxpy_scale_factor', ''),
('block_dim', '')])
```
These inputs must be provided to instantiate `saxpy_t`. Knowing which inputs to pass is up to the developer. For this example, let's pass:
```python
saxpy = make_algorithm(
saxpy_t,
name = "saxpy",
host_number_of_events_t = number_of_events["host_number_of_events"],
dev_number_of_events_t = number_of_events["dev_number_of_events"],
dev_offsets_all_velo_tracks_t = velo_tracks["dev_offsets_all_velo_tracks"],
dev_offsets_velo_track_hit_number_t = velo_tracks["dev_offsets_velo_track_hit_number"])
```
Finally, let's create a CompositeNode just with our algorithm inside, and generate the sequence:
```python
saxpy_sequence = CompositeNode("Saxpy", [saxpy])
generate(saxpy_sequence)
```
The final configuration file is therefore:
```python
from AllenConf.algorithms import saxpy_t
from AllenConf.velo_reconstruction import decode_velo, make_velo_tracks
from AllenConf.utils import initialize_number_of_events
from PyConf.control_flow import CompositeNode
from AllenCore.generator import generate, make_algorithm
number_of_events = initialize_number_of_events()
decoded_velo = decode_velo()
velo_tracks = make_velo_tracks(decoded_velo)
saxpy = make_algorithm(
saxpy_t,
name = "saxpy",
host_number_of_events_t = number_of_events["host_number_of_events"],
dev_number_of_events_t = number_of_events["dev_number_of_events"],
dev_offsets_all_velo_tracks_t = velo_tracks["dev_offsets_all_velo_tracks"],
dev_offsets_velo_track_hit_number_t = velo_tracks["dev_offsets_velo_track_hit_number"])
saxpy_sequence = CompositeNode("Saxpy", [saxpy])
generate(saxpy_sequence)
```
Now, we can save this configuration as `configuration/sequences/saxpy.py`, then build and run it:
```sh
mkdir build
cd build
cmake -DSEQUENCE=saxpy ..
make
./Allen
```
After running `make`, you should see the sequence generation as part of the build:
```sh
[3/36] Generating ../Sequence.json
Generated sequence represented as algorithms with execution masks:
host_init_event_list_t/initialize_event_lists
host_init_number_of_events_t/initialize_number_of_events
data_provider_t/velo_banks
velo_calculate_number_of_candidates_t/velo_calculate_number_of_candidates
host_prefix_sum_t/prefix_sum_offsets_velo_candidates
velo_estimate_input_size_t/velo_estimate_input_size
host_prefix_sum_t/prefix_sum_offsets_estimated_input_size
velo_masked_clustering_t/velo_masked_clustering
velo_calculate_phi_and_sort_t/velo_calculate_phi_and_sort
velo_search_by_triplet_t/velo_search_by_triplet
velo_three_hit_tracks_filter_t/velo_three_hit_tracks_filter
host_prefix_sum_t/prefix_sum_offsets_number_of_three_hit_tracks_filtered
host_prefix_sum_t/prefix_sum_offsets_velo_tracks
velo_copy_track_hit_number_t/velo_copy_track_hit_number
host_prefix_sum_t/prefix_sum_offsets_velo_track_hit_number
saxpy_t/saxpy
Generating sequence files...
Generated sequence files Sequence.h, ConfiguredInputAggregates.h and Sequence.json
```
To find out how to write a trigger line in Allen and how to add it to the sequence, follow the instructions [here](../selections.md).
**Introduction to Forward Tracking**
A detailed introduction to Forward tracking (with real pictures!) can be found here:
(2002) http://cds.cern.ch/record/684710/files/lhcb-2002-008.pdf
(2007) http://cds.cern.ch/record/1033584/files/lhcb-2007-015.pdf
(2014) http://cds.cern.ch/record/1641927/files/LHCb-PUB-2014-001.pdf
**Short Introduction to the Geometry**
The SciFi Tracker Detector, or simply Fibre Tracker (FT), consists of 3 stations.
Each station consists of 4 planes/layers. Thus there are in total 12 layers
in which a particle can leave a hit. The reasonable maximum number of hits a track
can have is thus also 12 (sometimes 2 hits per layer are picked up).
Each layer consists of several fibre mats. A fibre has a diameter below 1 mm. (FIXME)
Several fibres are glued alongside each other to form a mat.
A scintillating fibre produces light when a particle traverses it. This light is then
detected on the outside of the fibre mat.
Looking from the collision point, one (X-)layer looks like the following:
```
 y          6m
 ^  |||||||||||||  Upper side
 |  |||||||||||||  2.5m
 |  |||||||||||||
-|--||||||o||||||----> -x
    |||||||||||||
    |||||||||||||  Lower side
    |||||||||||||  2.5m
```
All fibres are arranged parallel to the y-axis. There are three different
kinds of layers, denoted by X, U, V. The U/V layers are rotated with respect to
the X-layers by +/- 5 degrees, to also get a handle on the y position of the
particle. Since the magnetic field deflects particles only in the
x-direction, this configuration offers the best resolution.
The layer structure in the FT is XUVX-XUVX-XUVX.
The detector is divided into an upper and a lower side (>/< y=0). As particles
are only deflected in the x-direction, there are only very(!) few particles that go
from the lower to the upper side, or vice versa. The reconstruction algorithm
can therefore be split into two independent steps: first track reconstruction
for tracks in the upper side, and afterwards for tracks in the lower side.
Due to the way the modules are constructed, this is NOT true for U/V layers. In these layers the
complete(!) fibre modules are rotated, producing a zig-zag pattern at y=0, also
called "the triangles". Therefore, for U/V layers, hits must also be searched for
explicitly on the "other side" if the track is close to y=0.
Sketch (rotation exaggerated!):
```
                           _.*
 y ^          _.*       _.*
   |       .*._   Upper side   _.*._
   |      *._         _.*           *._
   |--------*._    _.*                 *._----------------> x
   |           *._        _.*             *._        _.*
                  *._.*    Lower side        *._.*
```
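In sketch form, the two-step split amounts to the following; the hit representation and `find_tracks` are hypothetical placeholders for the real pattern recognition, and the special treatment of the U/V triangles near y=0 is omitted:
```python
# Sketch only: reconstruct the upper and lower detector halves independently.
# The hit format and find_tracks() are hypothetical placeholders.
def find_tracks(hits):
    # placeholder for the pattern recognition within one detector half
    return []

def reconstruct(hits):
    upper = [h for h in hits if h["y"] > 0.0]
    lower = [h for h in hits if h["y"] <= 0.0]
    # almost no track crosses y=0, so the halves can be processed independently
    return find_tracks(upper) + find_tracks(lower)
```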
**Zone ordering**
```
 y ^
   |    1  3  5  7     9  11  13  15    17  19  21  23
   |    |  |  |  |     |  |   |   |     |   |   |   |
   |    x  u  v  x     x  u   v   x     x   u   v   x   <-- type of layer
   |    |  |  |  |     |  |   |   |     |   |   |   |
   |----------------------------------------------------> z
   |    |  |  |  |     |  |   |   |     |   |   |   |
   |    |  |  |  |     |  |   |   |     |   |   |   |
   |    0  2  4  6     8  10  12  14    16  18  20  22
```
# CompassUT
To configure the number of candidates to search for, go to `CompassUTDefinitions.cuh` and change:
```cpp
constexpr unsigned max_considered_before_found = 4;
```
A higher number considers more candidates but takes more time; the best candidate found is saved.
Event Model
============
The event model includes objects to store hits, track candidates (collection of
hit indices) and tracks. The format is the same for the various sub-detectors.
### Hits
Hit variables are stored in structures of arrays (SoAs). The allocated size corresponds
exactly to the number of hits / clusters present in the sub-detector. This structure is used
for the output of the decoding / clustering algorithm and as input for the pattern recognition algorithm.
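As an illustration of the SoA idea (the field names here are hypothetical, not the actual Allen variable names):
```python
# Sketch only: a structure of arrays (SoA) keeps one contiguous array per
# variable, which allows coalesced memory access on the GPU.
# Field names are hypothetical.
import numpy as np

n_hits = 1000  # allocated size: exactly the number of hits in the sub-detector

hits = {
    "x": np.empty(n_hits, dtype=np.float32),
    "y": np.empty(n_hits, dtype=np.float32),
    "z": np.empty(n_hits, dtype=np.float32),
    "id": np.empty(n_hits, dtype=np.uint32),
}
```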
### Track Candidates
Track Candidates are stored in a TrackHits object containing the indices to the hits
in the SoA for those hits that represent a track candidate. Additional information
necessary for the pattern recognition step might be added, such as the `qop`.
### Consolidated Tracks
After the pattern recognition step, a consolidation of the hit SoAs is applied, such
that only those hits belonging to a track remain in memory. More specifically, for every track in
every event the hits contributing to this track are written into the hit SoA consecutively.
This means that if one hit is used in two different tracks, it will be present
in the consolidated hit SoA twice, in different locations.
The consolidation procedure saves memory, and
hits belonging to one track are coalesced. In general, the consolidated hit SoAs have the same format as
the hit SoAs described above, but their size is smaller.
The consolidated tracks are defined in
`common/include/ConsolidatedTypes.cuh`. Sub-detector specific implementations
inherit from these general types.
Consolidated tracks are based on two pieces of information (see the sketch below):
* An array with the number of tracks in every event
* An array with the offset to every track for every event. This offset describes
where the hits for this track are to be found in the SoA of consolidated hits.
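A minimal sketch of how these two arrays locate the hits of a given track; all names are hypothetical:
```python
# Sketch only: find the hits of track `t` in event `e` using the per-event
# track counts and the per-track hit offsets. Names are hypothetical.
def hits_of_track(e, t, tracks_per_event, track_offsets, consolidated_hit_x):
    # position of the track in the flat, event-ordered list of all tracks
    first_track_of_event = sum(tracks_per_event[:e])
    i = first_track_of_event + t
    # consecutive offsets delimit this track's hits in the consolidated SoA
    begin, end = track_offsets[i], track_offsets[i + 1]
    return consolidated_hit_x[begin:end]
```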
### States
Two different types of states exist in the current event model:
* VeloState: contains the position `x, y, z` and slopes `tx, ty`, as well as a reduced covariance matrix with only those entries that are filled in the straight-line fit / simplified Kalman filter for the Velo, where the x- and y-dimensions are assumed to be independent.
* FullState: contains the position `x, y, z` and slopes `tx, ty`, as well as all entries of the covariance matrix defined after the SciFi pattern recognition step.
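For illustration, a sketch of the two state flavours (field names and the covariance layout are hypothetical, not the actual Allen definitions):
```python
# Sketch only: the two state types described above.
# Field names and the covariance layout are hypothetical.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class VeloState:
    x: float
    y: float
    z: float
    tx: float
    ty: float
    # reduced covariance: only the entries filled by the straight-line fit /
    # simplified Kalman filter, with x and y assumed independent
    cov_x: Tuple[float, float, float]  # (c_xx, c_xtx, c_txtx)
    cov_y: Tuple[float, float, float]  # (c_yy, c_yty, c_tyty)

@dataclass
class FullState:
    x: float
    y: float
    z: float
    tx: float
    ty: float
    cov: Tuple[float, ...]  # all covariance entries after the SciFi step
```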
[This](https://indico.cern.ch/event/692177/contributions/3252276/subcontributions/269262/attachments/1771794/2879440/Allen_event_model_DvB.pdf) presentation gives an overview of the event model.
## TwoTrackMVA
The training procedure for the TwoTrackMVA can be found at https://github.com/niklasnolte/HLT_2Track.
The event types used for training can be seen [here](https://github.com/niklasnolte/HLT_2Track/blob/main/hlt2trk/utils/config.py#L384).
The model exported from there goes into `Allen/input/parameters/two_track_mva_model.json`.
VELO
=====
Velo Tracking
-------------
Masked Clustering
-----------------------
Format needed by one RawEvent for Daniel's clustering code:
```
-----------------------------------------------------------------------------
name | type | size | array_size
=============================================================================
number_of_rawbanks | uint32_t | 1
-----------------------------------------------------------------------------
raw_bank_offset | uint32_t | number_of_rawbanks
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
sensor_index | uint32_t | 1 |
------------------------------------------------------------------------------
sp_count | uint32_t | 1 | number_of_rawbanks
------------------------------------------------------------------------------
sp_word | uint32_t | sp_count |
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
```
`raw_bank_offset`: array containing the offsets to each raw bank. This array is
currently filled when running Brunel to create this output; it is needed to process
several raw banks in parallel.
sp = super pixel
`sp_word`: contains the super pixel address and an 8-bit hit pattern
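A minimal parsing sketch for this layout (assuming little-endian 32-bit words; the function name is hypothetical):
```python
# Sketch only: parse the raw event layout from the table above.
# Assumes little-endian 32-bit words.
import struct

def parse_velo_raw_event(data: bytes):
    pos = 0
    (number_of_rawbanks,) = struct.unpack_from("<I", data, pos)
    pos += 4
    raw_bank_offset = struct.unpack_from(f"<{number_of_rawbanks}I", data, pos)
    pos += 4 * number_of_rawbanks
    banks = []
    for _ in range(number_of_rawbanks):
        sensor_index, sp_count = struct.unpack_from("<II", data, pos)
        pos += 8
        sp_words = struct.unpack_from(f"<{sp_count}I", data, pos)  # sp = super pixel
        pos += 4 * sp_count
        banks.append((sensor_index, sp_words))
    return raw_bank_offset, banks
```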
###############################################################################
# (c) Copyright 2021 CERN for the benefit of the LHCb Collaboration #
# #
# This software is distributed under the terms of the GNU General Public #
# Licence version 3 (GPL Version 3), copied verbatim in the file "COPYING". #
# #