Commit 97f0d9b0 authored by Jens Kroeger
......@@ -109,8 +109,8 @@ IF(LATEX_COMPILER)
usermanual/figures/trackChi2ndof_goodexample.pdf
usermanual/figures/residualX_goodexample.pdf
usermanual/figures/correlationX_goodexample.pdf
usermanual/figures/corrymanual_eventbuilding_datadriven.pdf
usermanual/figures/corrymanual_eventbuilding_framebased.pdf
usermanual/figures/reconstruction-chain-simple.png
usermanual/figures/reconstruction-chain-complicated.png
usermanual/figures/onlinemon.png
......
......@@ -10,23 +10,23 @@ The repository contains a few tools to facilitate contributions and to ensure co
\section{Writing Additional Modules}
Given the modular structure of the framework, its functionality can be easily extended by adding a new module.
To facilitate the creation of new modules including their CMake files and initial documentation, the script \file{addModule.sh} is provided in the \dir{etc/} directory of the repository.
It will ask for a name and type of the module as described in Section~\ref{sec:module_manager} and create all code necessary to compile a first (and empty) version of the files.
The content of each of the files is described in detail in the following paragraphs.
\subsection{Files of a Module}
\label{sec:module_files}
Every module directory should at minimum contain the following documents (with \texttt{<ModuleName>} replaced by the name of the module):
\begin{itemize}
\item \textbf{\file{CMakeLists.txt}}: The build script to load the dependencies and define the source files of the library.
\item \textbf{\file{README.md}}: Full documentation of the module.
\item \textbf{\file{<ModuleName>.h}}: The header file of the module.
\item \textbf{\file{<ModuleName>.cpp}}: The implementation file of the module.
\end{itemize}
These files are discussed in more detail below.
By default, all modules added to the \dir{src/modules/} directory will be built automatically by CMake.
If a module depends on additional packages which not every user may have installed, one can consider adding the following line to the top of the module's \file{CMakeLists.txt}:
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{cmake}
CORRYVRECKAN_ENABLE_DEFAULT(OFF)
\end{minted}
......@@ -50,7 +50,7 @@ Only ROOT is automatically included and linked to the module.
\item A line containing \parameter{CORRYVRECKAN_MODULE_INSTALL(${MODULE_NAME})} to set up the required target for the module to be installed to.
\end{enumerate}
A simple \file{CMakeLists.txt} for a module named \parameter{Test} which should run only on DUT detectors of type \emph{Timepix3} is provided below as an example.
\vspace{5pt}
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{cmake}
......@@ -69,7 +69,7 @@ CORRYVRECKAN_MODULE_INSTALL(${MODULE_NAME})
\paragraph{README.md}
The \file{README.md} serves as the documentation for the module and should be written in Markdown format~\cite{markdown}.
It is automatically converted to \LaTeX~using Pandoc~\cite{pandoc} and included in the user manual in Chapter~\ref{ch:modules}.
By documenting the module functionality in Markdown, the information is also viewable with a web browser in the repository within the module sub-folder.
The \file{README.md} should follow the structure indicated in the \file{README.md} file of the \parameter{Dummy} module in \dir{src/modules/Dummy}, and should contain at least the following sections:
......@@ -94,21 +94,21 @@ The parameters should be briefly explained in an itemised list with the name of
\item An H3-size section with the title \textbf{Usage} which should contain at least one simple example of a valid configuration for the module.
\end{itemize}
\paragraph{ModuleName.h and ModuleName.cpp}
All modules should consist of both a header file and a source file.
In the header file, the module is defined together with all of its methods.
Doxygen documentation should be added to explain what each method does.
The source file should provide the implementation of every method.
Methods should only be declared in the header and defined in the source file in order to keep the interface clean.
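As an illustration, a minimal header for a global module could be sketched as follows.
This is only a sketch and not the exact template generated by \file{addModule.sh}; the histogram and counter members are assumptions introduced for this example, and the authoritative interface is defined in \file{src/core/module/Module.hpp} and described in Section~\ref{sec:module_structure}.
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{c++}
// TestModule.h -- illustrative sketch of a module header
#include <memory>
#include <vector>

#include "TH1F.h"

#include "core/module/Module.hpp"

namespace corryvreckan {
    /** @brief Example global module inheriting from the Module base class */
    class TestModule : public Module {
    public:
        // Constructor with the fixed argument set for global modules
        TestModule(Configuration& config, std::vector<std::shared_ptr<Detector>> detectors);

        // Methods overridden from the Module base class
        void initialise();                                    // book histograms here
        StatusCode run(std::shared_ptr<Clipboard> clipboard); // process one event
        void finalise();                                      // summarize the run here

    private:
        TH1F* m_tracksPerEvent{nullptr}; // example histogram, booked in initialise()
        int m_eventNumber{0};            // example counter, reported in finalise()
    };
} // namespace corryvreckan
\end{minted}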
\subsection{Module structure}
\label{sec:module_structure}
All modules must inherit from the \parameter{Module} base class, which can be found in \file{src/core/module/Module.hpp}.
The module base class provides two base constructors, a few convenient methods and several methods which the user is required to override.
Each module should provide a constructor using the fixed set of arguments defined by the framework; this particular constructor is always called by the module instantiation logic.
These arguments for the constructor differ for global and detector/DUT modules.
For global modules, the constructor for a \module{TestModule} should be:
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{c++}
TestModule(Configuration& config, std::vector<std::shared_ptr<Detector>> detectors): Module(std::move(config), detectors) {}
\end{minted}
......@@ -124,9 +124,9 @@ In addition to the constructor, each module can override the following methods:
\begin{itemize}
\item \parameter{initialise()}: Called after loading and constructing all modules and before starting the analysis loop.
This method can, for example, be used to initialise histograms.
\item \parameter{run(std::shared_ptr<Clipboard> clipboard)}: Called for every time frame or triggered event to be analyzed. The argument represents a pointer to the clipboard where the event data is stored.
A status code is returned to signal the framework whether to continue processing data or to end the run.
\item \parameter{finalise()}: Called after processing all events in the run and before destructing the module.
Typically used to summarize statistics like the number of tracks used in the analysis or analysis results like the chip efficiency; a skeleton illustrating these methods is sketched after this list.
Any exceptions should be thrown from here instead of the destructor.
\end{itemize}
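A skeleton implementation showing how these methods typically fit together is sketched below, continuing the \module{TestModule} example from above.
The status code enumerator and the logging macro are assumptions made for this sketch; the exact names are documented in the Doxygen reference manual~\cite{corry-doxygen}.
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{c++}
// TestModule.cpp -- illustrative sketch of the corresponding implementation
#include "TestModule.h"

using namespace corryvreckan;

TestModule::TestModule(Configuration& config, std::vector<std::shared_ptr<Detector>> detectors)
    : Module(std::move(config), detectors) {}

void TestModule::initialise() {
    // Called once after all modules have been constructed: book histograms here
    m_tracksPerEvent = new TH1F("tracksPerEvent", "Tracks per event;tracks;events", 25, -0.5, 24.5);
}

StatusCode TestModule::run(std::shared_ptr<Clipboard> clipboard) {
    // Called for every time frame or triggered event: read the event data from
    // the clipboard, fill the histograms and count the processed events
    m_eventNumber++;
    // Returning a success code signals the framework to continue with the next event
    return StatusCode::Success; // assumed enumerator name
}

void TestModule::finalise() {
    // Called after all events have been processed: summarize the run;
    // any exceptions should be thrown from here instead of the destructor
    LOG(INFO) << "Processed " << m_eventNumber << " events"; // assumed logging macro
}
\end{minted}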
\chapter{Alignment Procedure}
\label{ch:howtoalign}
This chapter provides a description of how to use the alignment features of \corry.
It also includes step-by-step instructions on how to align the detector planes for a new set of test beam data.
As described in Section~\ref{sec:detector_config}, an analysis with \corry requires a configuration file defining which detectors are present in the setup.
This file also contains the position and rotation of each detector plane.
The Z-positions of all planes can \textbf{and must} be measured by hand in the existing test beam setup and entered in this configuration file for the analysis.
The X- and Y-positions as well as the rotations cannot be measured precisely by hand.
However, these have a strong influence on the tracking since a misalignment of a fraction of a millimeter might already correspond to a shift by multiple pixel pitches.
Consequently, an alignment procedure is needed in which the detector planes are shifted and rotated iteratively relative to the detector with \parameter{role = reference} to increase the tracking quality.
More technically, the track residuals on all planes, i.e.~the distributions of the spatial distance between the interpolated track intercept and the associated cluster on each plane, need to be centered around zero and be 'as narrow as possible' -- the width of the distribution depends on the tracking resolution of the telescope and is influenced by many factors such as the beam energy, the material budget of the detector planes, the distance between the detector planes, etc.
It is important to correctly set the \parameter{spatial_resolution} specified in the detector configuration file described in Section~\ref{sec:detector_config} because it defines the uncertainty on the cluster positions and therefore influences the track $\chi^2$.
Example configuration files can be found in the \dir{testing/} directory of the repository.
These are based on a Timepix3~\cite{timepix3} telescope with an ATLASpix~\cite{atlaspix} DUT at the CERN SPS with a pion beam of \SI{120}{GeV}.
For the alignment of the \textbf{reference telescope} and \textbf{device-under-test (DUT)}, the following modules are available in \corry.
\begin{itemize}
\item \module{Prealignment} for both telescope and DUT prealignment (see Section~\ref{prealignment}).
\item \module{AlignmentTrackChi2} used for telescope alignment (see Section~\ref{alignmenttrackchi2}); it is relatively robust against an initial misalignment but usually needs several iterations.
\item \module{AlignmentMillepede}, an alternative telescope alignment algorithm (see Section~\ref{alignmentmillepede}) which requires fewer iterations to reach a precise alignment but needs a better prealignment.
\item \module{AlignmentDUTResidual} used for DUT alignment (see Section~\ref{alignmentdutresidual}).
\end{itemize}
The general procedure that needs to be followed for a successful alignment is outlined here and explained in detail below.
......@@ -25,24 +37,33 @@ When using the alignment modules, the new geometry is written out to a new geome
For details, see Section~\ref{sec:framework_parameters}.
\end{warning}
\paragraph{Correlation vs. Residual}
A spatial \textbf{correlation} plot is filled with the difference between the position of any cluster on a given detector plane and the position of any cluster on the reference plane. No tracking is required to fill these histograms.
A spatial \textbf{residual} plot shows the difference of the interpolated track intercept onto a given plane minus the position of its associated cluster.\\
Consequently, the goal of the alignment is to force the \textbf{residuals} to be centered around zero.
The \textbf{correlations} do not necessarily have to be centered at zero as a possible offset reflects the \emph{physical displacement} of a detector plane in X and Y with respect to the reference plane.
However, it can be useful to inspect the \textbf{correlation} plots especially in the beginning when the alignment is not yet good enough for a reasonable tracking.
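Expressed compactly, for a cluster at position $x_{\text{det}}$ on a given detector, a cluster at $x_{\text{ref}}$ on the reference plane, and an interpolated track intercept $x_{\text{track}}$ on the detector plane, these two quantities (restating the definitions above in formula form) are:
\[
\Delta x_{\text{corr}} = x_{\text{det}} - x_{\text{ref}}, \qquad \Delta x_{\text{res}} = x_{\text{track}} - x_{\text{det}}.
\]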
\section{Aligning the Telescope}
\label{sec:align_tel}
Initially, the telescope needs to be aligned.
For this, the DUT is ignored.
\subsection*{Prealignment of the Telescope}
The \module{AlignmentTrackChi2} module requires a careful prealignment; otherwise, it does not converge and the alignment will fail.
The Z-positions of all planes need to be measured by hand \textbf{in the existing test beam setup} and then adjusted in the detectors file.
These will not be changed during the alignment process.
For X and Y, the alignment file from an already aligned run with the same telescope plane arrangement is a solid basis to start from.
If no previous alignment is available, all values for X and Y should be set to 0.
For the prealignment, two strategies can be applied:
\begin{itemize}
\item The \module{Prealignment} module can be used (see Section~\ref{prealignment}).
\item If the above does not bring the expected result, a manual prealignment can be performed as described below.
\end{itemize}
To have a first look at the initial alignment guess, one can run
\begin{verbatim}
$ /path/to/corryvreckan/bin/corry \
......@@ -52,22 +73,17 @@ $ /path/to/corryvreckan/bin/corry \
-o EventLoaderTimepix3.input_directory=<inputDir>]
\end{verbatim}
The \parameter{spatial_cut_abs/rel} in \module{Tracking4D} should be set to a multiple ($\sim4$) of the pixel pitch.
One can inspect the spatial correlations in X and Y, the track $\chi^2$, and the residuals with the online monitoring or by opening the generated ROOT file after the run has finished.
These can be found in the modules \module{Correlations} (see Section~\ref{correlations}) and \module{Tracking4D} (see Section~\ref{tracking4d}).
\begin{warning}
\textbf{Tip:} To save time, one can limit the number of processed events or tracks. For instance, set \parameter{number_of_events = 10000} or \parameter{number_of_tracks = 10000} (see Section~\ref{sec:framework_parameters}).
\end{warning}
If no peak at all is apparent in the correlations or residuals, the hitmaps can be checked to see if valid data is actually available for all planes.
Now, the \texttt{[Prealignment]} module can be used.
To prealign only the telescope, the DUT can be excluded by using \parameter{type = <detector_type_of_telescope>} (e.g.~\parameter{CLICPIX2}). For details, see Section~\ref{sec:module_manager}.
However, all planes including the DUT can be prealigned at once.
Since the prealignment utilizes hit correlations rather than tracks, no cuts are needed here.
To use the module, \file{align_tel.conf} needs to be edited such that \texttt{[Prealignment]} is enabled and \texttt{[Alignment]} is disabled:
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{ini}
......@@ -92,20 +108,27 @@ $ /path/to/corryvreckan/bin/corry \
-o EventLoaderTimepix3.input_directory=<inputDir>]
\end{verbatim}
The actual prealignment is only performed after the events have been analyzed and written to the detectors file in the finalizing step.
This means that, to check whether the alignment has improved, one needs to re-run the analysis or run the next iteration of the alignment, as the previously generated ROOT file corresponds to the initial alignment.
This is the case for every iteration of the prealignment or alignment.
Generally, it suffices to run the \texttt{[Prealignment]} module once and then proceed with the next step.
\subsubsection*{Manual Prealignment of the Telescope}
If the prealignment using the module \texttt{[Prealignment]} does not bring the expected results, one can also perform the same steps manually by investigating the residuals of the DUT with respect to tracks.
For the residuals, the shift of the peak from 0 can be estimated with a precision of $\mathcal{O}(\SI{100}{\micro m})$ by zooming in using the \texttt{TBrowser}.
For instance, if the peak is shifted by \SI{+300}{\micro m}, the detectors file needs to be edited and \SI{300}{\micro m} should be added to the respective position; if it is shifted by \SI{-300}{\micro m}, \SI{300}{\micro m} should be subtracted.
After modifying the positions of individual planes in the configuration file, \corry can be re-run to check the correlation and residual plots for the updated geometry.
These steps need to be iterated a few times until the peaks of the \textbf{residuals} are centered around 0.
Rotational misalignments can be inferred from the slope of the 2D spatial correlation plots; the actual rotation angle has to be calculated using the respective pixel pitches of the devices.
\begin{warning}
It is important \textbf{not} to force the peak of the spatial \textbf{correlations} to be at exactly 0 because the position of the peak corresponds to the \textit{physical displacement} of a detector plane in X and Y with respect to the reference plane.
The spatial \textbf{correlations} should \textbf{only be used} if the spatial \textbf{residual} plots are not filled reasonably due to bad tracking.
In this case, the spatial correlations can be shifted towards zero in a first iteration.
\end{warning}
\subsection*{Alignment of the Telescope}
......@@ -123,11 +146,11 @@ align_position=true
\end{minted}
The algorithm performs an optimisation of the track $\chi^2$.
Typically, the alignment needs to be iterated a handful of times until the residuals (which again can be inspected in the ROOT file after re-running the analysis) are nicely centered around 0 and 'as narrow as possible' -- the RMS of the residuals corresponds to the spatial resolution of each plane (convolved with the resolution of the telescope) and should thus be $\lesssim$ pixel pitch$/\sqrt{12}$.
Starting with a \parameter{spatial_cut_abs/rel} in \texttt{[Tracking4D]} (see Section~\ref{tracking4d}) of multiple ($\sim4$) pixel pitches, it should be decreased incrementally down to the pixel pitch (e.g. run \SI{200}{\micro\m} twice, then run \SI{150}{\micro\m} twice, then \SI{100}{\micro\m} twice, and then \SI{50}{\micro\m} twice).
This allows the alignment to be performed with a tight selection of very high-quality tracks only.
Also the \parameter{max_track_chi2ndof} should be decreased for the same reason.
For the subsequent analysis, the cuts can be relaxed again.
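As a rough numerical guide, assuming purely binary position reconstruction and taking the \SI{55}{\micro\m} pixel pitch of the Timepix3 planes used in the example configuration, the residual RMS should approach
\[
\frac{\text{pitch}}{\sqrt{12}} = \frac{\SI{55}{\micro\m}}{\sqrt{12}} \approx \SI{16}{\micro\m},
\]
so residual distributions remaining much wider than this after several iterations indicate a remaining misalignment.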
It may happen that the procedure runs into a 'false minimum', i.e. it converges to a wrong alignment in which the residuals are clearly not centered around 0.
......@@ -156,25 +179,26 @@ align_position=true
\end{subfigure}
\begin{subfigure}[t]{0.66\textwidth}
\includegraphics[width=\textwidth]{correlationX_goodexample}
\caption{Good example of a spatial correlation plot between two telescope planes. The offset from zero corresponds to the \emph{physical displacement} of the plane with respect to the reference plane.}
\label{fig:correlationX}
\end{subfigure}
\begin{subfigure}[t]{0.66\textwidth}
\includegraphics[width=\textwidth]{residualX_goodexample}
\caption{Good example of a spatial residual distribution. It is centered around zero.}
\label{fig:residualX}
\end{subfigure}
\caption{Examples of how the distributions should look after a successful alignment of the Timepix3 telescope at the CERN SPS with \SI{120}{\GeV} pions.}
\label{fig:exampleAlignment}
\end{figure}
Instead of using \texttt{[AlignmentTrackChi2]}, one can also use the module \texttt{[AlignmentMillepede]} (see Section~\ref{alignmentmillepede}).
It allows a simultaneous fit of both the tracks and the alignment constants.
The module stops if the convergence, i.e.~the absolute sum of all corrections over the total number of parameters, is smaller than the configured value, and the alignment is complete.
It should be noted that this module requires a rather good prealignment already.
\section{Aligning the DUT}
\label{sec:align_dut}
Once the telescope is aligned, its geometry is not changed anymore. From now on, it is used to build tracks which are then matched to clusters on the DUT.
\subsection*{Prealignment of the DUT}
The prealignment of the DUT follows the same strategy as for the telescope. To look at the current alignment, the script
......@@ -189,8 +213,8 @@ $ /path/to/corryvreckan/bin/corry \
needs to be run.
If no better guess is available, the initial alignment of the DUT should be set to $x=y=0$.
Then, by repeatedly running \corry and modifying the position of the DUT in the detectors file, one should be able to bring the peaks of the spatial residuals in X and Y close to zero.
If no peak at all can be seen in the residual plots, the spatial correlation plots can be inspected. In addition, parameters related to the corresponding event loader may need to be corrected in the configuration file.
\begin{warning}
If using the \texttt{[Prealignment]} module, it is possible to prealign all planes at once as described above in Section~\ref{sec:align_tel}.
......@@ -212,7 +236,7 @@ align_position=true
\end{minted}
\subsection*{Alignment of the DUT}
The alignment strategy for the DUT is similar to that for the telescope and requires multiple iterations.
In \file{align_dut.conf}, the prealignment needs to be disabled and the alignment enabled.
Now, the algorithm optimizes the residuals of the tracks through the DUT.
......@@ -238,8 +262,8 @@ $ /path/to/corryvreckan/bin/corry \
-o EventLoaderATLASpix.input_directory=<inputDir_APX>]
\end{verbatim}
Like for the telescope alignment, the RMS of the residuals can be interpreted as the spatial resolution of the DUT (convolved with the track resolution of the telescope at the position of the DUT) and should thus be $\lesssim$~pixel pitch$/\sqrt{12}$.
Again, starting with a \parameter{spatial_cut_abs/rel} in \texttt{[DUTAssociation]} (see Section~\ref{dutassociation}) of multiple ($\sim4$) pixel pitches, it should be decreased incrementally down to the pixel pitch. Note that an asymmetric pixel geometry requires the \parameter{spatial_cut_abs/rel} to be chosen accordingly, as illustrated below.
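For example, assuming a hypothetical asymmetric pitch of $\SI{50}{\micro\m} \times \SI{150}{\micro\m}$ (illustrative values only), a starting cut of about four pixel pitches differs per axis:
\[
d_{x} \approx 4 \times \SI{50}{\micro\m} = \SI{200}{\micro\m}, \qquad d_{y} \approx 4 \times \SI{150}{\micro\m} = \SI{600}{\micro\m}.
\]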
If the alignment keeps failing, it is possible to allow only rotational or only translational alignment while freezing the other for one or a few iterations.
......
......@@ -12,22 +12,22 @@ This chapter contains details on the standard installation process and informati
Furthermore, the continuous integration of the project ensures correct building and functioning of the software framework on CentOS\,7 (with GCC and LLVM), SLC\,6 (with GCC and LLVM) and Mac OS Mojave (OS X 10.14, with AppleClang).
\section{CVMFS}
\label{sec:cvmfs_install}
The software is automatically deployed to CERN's VM file system (CVMFS)~\cite{cvmfs} for every new tag.
In addition, the \parameter{master} branch is built and deployed every night.
New versions are published to the folder \dir{/cvmfs/clicdp.cern.ch/software/corryvreckan/} where a new folder is created for every new tag, while updates via the \parameter{master} branch are always stored in the \dir{latest} folder.
The deployed version currently comprises all modules that are active by default and do not require additional dependencies.
A \file{setup.sh} is placed in the root folder of the respective release, which allows all runtime dependencies necessary for executing this version to be set up.
Versions for both SLC\,6 and CentOS\,7 are provided.
\section{Docker}
\label{sec:docker}
Docker images are provided for the framework allowing anyone to run analyses without needing to install \corry on their system.
The only required program is the Docker executable as all other dependencies are provided within the Docker images.
In order to exchange configuration files and output data between the host system and the Docker container, a folder from the host system should be mounted to the container's data path \dir{/data}, which also acts as the Docker \parameter{WORKDIR} location.
The following command creates a container from the latest Docker image in the project registry and starts an interactive shell session with the \command{corry} executable already in the \texttt{\$PATH}.
Here, the current host system path is mounted to the \dir{/data} directory of the container.
\begin{verbatim}
......@@ -64,13 +64,13 @@ The following paragraphs describe how to compile the \corry framework and its in
\subsection{Prerequisites}
\label{sec:prerequisites}
The core framework is compiled separately from the individual modules, therefore \corry has only one required dependency: ROOT 6 (versions below 6 are not supported)~\cite{root}.
Please refer to~\cite{rootinstallation} for instructions on how to install ROOT.
ROOT has several components; to run \corry, the GenVector package is required, which is included in the default build.
\subsection{Downloading the source code}
The latest version of \corry can be downloaded from the CERN Gitlab repository~\cite{corry-repo}.
For production environments, it is recommended to only download and use tagged software versions as many of the available git branches are considered development versions and might exhibit unexpected behavior.
For developers, it is recommended to always use the latest available version from the git \texttt{master} branch.
The software repository can be cloned as follows:
......@@ -82,10 +82,10 @@ $ cd corryvreckan
\subsection{Configuration via CMake}
\label{sec:cmake_config}
\corry uses the CMake build system to configure, build, and install the core framework as well as all modules.
An out-of-source build is recommended: this means CMake should not be directly executed in the source folder.
Instead, a \dir{build} folder should be created from which CMake should be run.
For a standard build without any additional flags this entails executing:
\begin{verbatim}
$ mkdir build
......@@ -97,32 +97,33 @@ CMake can be run with several extra arguments to change the type of installation
These options can be set with -D\textit{option}.
The following options are noteworthy:
\begin{itemize}
\item \parameter{CMAKE_INSTALL_PREFIX}: The directory to use as a prefix for installing the binaries, libraries, and data.
Defaults to the source directory (where the folders \dir{bin/} and \dir{lib/} are added).
\item \parameter{CMAKE_BUILD_TYPE}: The type of build to install, which defaults to \parameter{RelWithDebInfo} (compiles with optimizations and debug symbols).
Other possible options are \texttt{Debug} (for compiling with no optimizations, but with debug symbols and extended tracing using the Clang Address Sanitizer library) and \texttt{Release} (for compiling with full optimizations and no debug symbols).
\item \textbf{\texttt{BUILD\_\textit{ModuleName}}}: If the specific module \parameter{ModuleName} should be installed or not.
Defaults to \texttt{ON} for most modules, however some modules with additional dependencies such as EUDAQ or EUDAQ2~\cite{eudaq,eudaq2} are disabled by default.
This set of parameters allows the build to be configured for minimal requirements as detailed in Section~\ref{sec:prerequisites}.
\item \parameter{BUILD_ALL_MODULES}: Build all included modules, defaulting to \texttt{OFF}.
This overwrites any selection using the parameters described above.
\end{itemize}
An example of a custom debug build, including the \module{EventLoaderEUDAQ2} module and with installation to a custom directory, is shown below:
\begin{verbatim}
$ mkdir build
$ cd build
$ cmake -DCMAKE_INSTALL_PREFIX=../install/ \
-DCMAKE_BUILD_TYPE=DEBUG \
-DBUILD_EventLoaderEUDAQ2=ON ..
\end{verbatim}
It should be noted that the \module{EventLoaderEUDAQ2} module requires additional dependencies and is therefore not built by default.
\subsection{Compilation and installation}
Compiling the framework is now a single command in the build folder created earlier, where \parameter{<number_of_cores>} is replaced with the number of cores to use for compilation:
\begin{verbatim}
$ make -j<number_of_cores>
\end{verbatim}
The compiled (non-installed) version of the executable can be found at \file{src/exec/corry} in the \dir{build} folder.
Running \corry directly without installing can be useful for developers.
It is not recommended for normal users, because the correct library and model paths are only fully configured during installation.
......@@ -131,4 +132,4 @@ To install the library to the selected installation location (defaulting to the
$ make install
\end{verbatim}
The binary is now available as \file{bin/corry} in the installation directory.
......@@ -2,20 +2,23 @@
\label{ch:introduction}
\corry is a flexible, fast and lightweight test beam data reconstruction framework based on a modular concept of the reconstruction chain.
It is designed to fulfill the requirements for offline event building in complex data-taking environments combining detectors with very different readout architectures.
\corry reduces external dependencies to a minimum by implementing its own flexible but simple data format to store intermediate reconstruction steps as well as final results.
The modularity of the reconstruction chain allows users to add their own functionality (such as event loaders to support different data formats or analysis modules to investigate specific features of detectors), without having to deal with centrally provided functionality, such as coordinate transformations, input and output, parsing of user input, and configuration of the analysis.
In addition, tools for batch submission of runs to a cluster scheduler such as \command{HTCondor} are provided to ease the (re-)analysis of complete test beam campaigns within a few minutes.
This project strongly profits from the developments undertaken for the \apsq project~\cite{apsq,apsq-website}: \emph{A Generic Pixel Detector Simulation Framework}.
Both frameworks employ very similar philosophies for configuration and modularity, and users of one framework will find it easy to get started with the other companion.
Some parts of the code base are shared explicitly, such as the configuration class or the module instantiation logic.
In addition, the \module{FileReader} and \module{FileWriter} modules have profited heavily from their corresponding framework components in \apsq.
The relevant sections of the \apsq manual~\cite{apsq-manual,clicdp-apsq-manual} have been adapted for this document.
It is also possible to combine the usage of both software frameworks: data produced by \apsq can be read in and analysed with \corry.
This allows, for instance, data/Monte-Carlo comparisons to be performed by simulating a beam telescope configuration and analysing it with the same parameters as the recorded test beam data.
\section{Scope of this Manual}
This document is meant to be the primary user guide for \corry.
It contains both an extensive description of the user interface and configuration possibilities, and a detailed introduction to the code base for potential developers.
This manual is designed to:
\begin{itemize}
......@@ -26,19 +29,25 @@ This manual is designed to:
\item Describe the required steps for implementing new reconstruction modules and algorithms.
\end{itemize}
More detailed information on the code itself can be found in the Doxygen reference manual~\cite{corry-doxygen} available online.
No programming experience is required from novice users, but knowledge of (modern) \CPP will be useful in the later chapters and may contribute to the overall understanding of the mechanisms.
\subsection{Getting Started}
An installation guideline is provided in Chapter~\ref{ch:installation}.
To get started with the analysis, some working examples can be found in the \dir{testing/} directory of the repository.
In addition, tutorials are available on \url{https://cern.ch/corryvreckan}.
\section{Support and Reporting Issues}
As for most of the software used within the high-energy particle physics community, only limited support on a best-effort basis can be offered for this software.
The authors are, however, happy to receive feedback on potential improvements or problems.
Reports on issues, questions concerning the software and documentation, and suggestions for improvements are very much appreciated.
These should preferably be brought up on the issues tracker of the project which can be found in the repository~\cite{corry-issue-tracker}.
\section{Contributing Code}
\label{sub:contributing}
\corry is a community project that benefits from active participation in the development and code contributions from users.
Users and prospective developers are encouraged to discuss their needs via the issue tracker of the repository~\cite{corry-issue-tracker} to receive ideas and guidance on how to implement a specific feature.
Getting in touch with other developers early in the development cycle avoids spending time on features that already exist or are currently under development by other users.
The repository contains a few tools to facilitate contributions and to ensure code quality, as detailed in Chapter~\ref{ch:testing}.
\chapter{Modules}
\label{ch:modules}
This chapter describes the currently available \corry modules in detail.
It comprises a description of the implemented modules as well as possible configuration parameters along with their defaults.
Furthermore, an overview of output plots is provided.
For inquiries about certain modules or their documentation, the \corry issue tracker, which can be found in the repository~\cite{corry-issue-tracker}, should be used as described in Section~\ref{sub:contributing}.
The modules are listed in alphabetical order.
\chapter{Using \corry as Online Monitor}
Reconstructing test beam data with \corry does not require many dependencies and is usually very fast due to its efficient data handling and fast reconstruction routines.
It is therefore possible to directly perform a full reconstruction including tracking and analysis of the DUT data during data taking.
On Linux machines, this is even possible on the data currently recorded since multiple read pointers are allowed per file.
The \corry framework comes with an online monitoring tool in the form of a module for data quality monitoring and immediate feedback to the shifter.
The \module{OnlineMonitor} is a relatively simple graphical user interface that displays and updates histograms and graphs produced by other modules during the run.
It should therefore be placed at the very end of the analysis chain in order to have access to all histograms previously registered by other modules.
\begin{figure}[tbp]
......@@ -17,7 +17,7 @@ It should therefore be placed at the very end of the analysis chain in order to
A screenshot of the interface is displayed in Figure~\ref{fig:onlinemon}, showing histograms from the reconstruction of data recorded with the setup presented in Section~\ref{sec:reco_mixedmode}.
The histograms are distributed over several canvases according to their stage of production in the reconstruction chain.
It is possible to display histograms for all registered detectors through the \parameter{%DETECTOR%} keyword in the configuration.
Histograms from only those detectors marked as DUT can be added by placing \parameter{%DUT%} in the histogram path.
The module has a default configuration that should match many reconstruction configurations, but each of the canvases and histograms can be freely configured as described in the documentation of the \module{OnlineMonitor} in Section~\ref{onlinemonitor}.
......@@ -3,9 +3,6 @@
The following chapter will introduce a few tools included in the framework to ease development and help to maintain a high code quality. This comprises tools for the developer to be used while coding, as well as continuous integration (CI) and automated test cases of various framework and module functionalities.
The chapter is structured as follows.
Section~\ref{sec:targets} describes the available \command{make} targets for code quality and formatting checks, Section~\ref{sec:ci} briefly introduces the CI, and Section~\ref{sec:tests} provides an overview of the currently implemented framework, module, and performance test scenarios.
\section{Additional Targets}
\label{sec:targets}
......@@ -21,7 +18,7 @@ Currently, the following targets are provided:
\end{minted}
once.
\item[\command{make check-format}] also invokes the \command{clang-format} tool but does not apply the required changes to the code. Instead, it returns an exit code 0 (pass) if no changes are necessary and exit code 1 (fail) if changes are to be applied. This is used by the CI.
\item[\command{make lint}] invokes the \command{clang-tidy} tool to provide additional linting of the source code. The tool tries to detect possible errors (and thus potential bugs), dangerous constructs (such as uninitialized variables) as well as stylistic errors. In addition, it ensures proper usage of modern \CPP standards. The configuration used for the \command{clang-tidy} command can be found in the \file{.clang-tidy} file in the root directory of the repository.
\item[\command{make check-lint}] also invokes the \command{clang-tidy} tool but does not report the issues found while parsing the code. Instead, it returns an exit code 0 (pass) if no errors have been produced and exit code 1 (fail) if issues are present. This is used by the CI.
\item[\command{make cppcheck}] runs the \command{cppcheck} command for additional static code analysis. The output is stored in the file \file{cppcheck_results.xml} in XML 2.0 format. It should be noted that some of the issues reported by the tool are to be considered false positives.
\item[\command{make cppcheck-html}] compiles an HTML report from the defects list gathered by \command{make cppcheck}. This target is only available if the \command{cppcheck-htmlreport} executable is found in the \dir{PATH}.
......@@ -117,7 +114,7 @@ Tests are marked as failed if either of the CMake targets \command{make check-fo
No code that fails to satisfy the coding conventions and formatting tests will be merged into the repository.
The \textbf{documentation} stage prepares this user manual as well as the Doxygen source code documentation for publication.
This also allows one to identify e.g.\ failing compilation of the \LaTeX~documents or additional files which accidentally have not been committed to the repository.
The \textbf{packaging} stage wraps the compiled binaries up into distributable tarballs for several platforms.
This includes adding all libraries and executables to the tarball as well as preparing the \file{setup.sh} script to prepare run-time dependencies using the information provided to the build system.
......@@ -142,12 +139,12 @@ The software is automatically deployed to CERN's VM file system (CVMFS)~\cite{cv
In addition, the \parameter{master} branch is built and deployed every night.
New versions are published to the folder \dir{/cvmfs/clicdp.cern.ch/software/corryvreckan/} where a new folder is created for every new tag, while updates via the \parameter{master} branch are always stored in the \dir{latest} folder.
The deployed version currently comprises all modules as well as the detector models shipped with the framework.
An additional \file{setup.sh} is placed in the root folder of the respective release, which allows all the runtime dependencies necessary for executing this version to be set up.
Versions for both SLC\,6 and CentOS\,7 are provided.
The deployment CI job runs on a dedicated computer with a GitLab SSH runner.
Job artifacts from the packaging stage of the CI are downloaded via their ID using the script found in \file{.gitlab-ci.d/download_artifacts.py}, and are made available to the \emph{cvclicdp} user who has access to the CVMFS interface.
The job checks for concurrent deployments to CVMFS and then unpacks the tarball releases and publishes them to the CLICdp experiment CVMFS space; the corresponding deployment script can be found in \file{.gitlab-ci.d/gitlab_deployment.sh}.
This job requires a private API token to be set as a secret project variable through the GitLab interface; currently, this token belongs to the service account user \emph{corry}.
......@@ -188,7 +185,7 @@ $ docker login gitlab-registry.cern.ch