Commit 832f7815 authored by Jens Kroeger

Merge branch 'docs' into 'master'

Write missing chapter for Manual

See merge request !164
parents c17ab6af 32901c42
Pipeline #1127039 passed with stages
in 22 minutes and 43 seconds
......@@ -111,15 +111,15 @@ IF(LATEX_COMPILER)
usermanual/figures/correlationX_goodexample.pdf
usermanual/figures/datadrivenandframebased.png
usermanual/figures/datadrivendevice.png
usermanual/figures/framebaseddevice.png
usermanual/figures/reconstruction-chain-simple.png
usermanual/figures/reconstruction-chain-complicated.png
usermanual/figures/onlinemon.png
INPUTS
usermanual/chapters/introduction.tex
usermanual/chapters/installation.tex
usermanual/chapters/framework.tex
usermanual/chapters/configuration.tex
usermanual/chapters/correlation.tex
usermanual/chapters/eventbuilding.tex
usermanual/chapters/onlinemonitoring.tex
usermanual/chapters/howtoalign.tex
usermanual/chapters/modules.tex
......
\chapter{Correlating Different Devices}
\todo{describe how different devices can be correlated}
\section{Triggered Devices}
\todo{if event has no time information, event definition is the trigger}
\section{Triggerless Devices}
\todo{event needs to be defined by the metronome}
\section{Mixing Devices}
\todo{set triggered device first, start/stop information from frame is taken as event (metronome information replacement) and subsequent untriggered devices use this time window to define the event}
\begin{figure}[h]
\centering
\includegraphics[width=0.66\textwidth]{framebaseddevice.png}
\caption{text here}
\label{fig:framebased}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.66\textwidth]{datadrivendevice.png}
\caption{text here}
\label{fig:datadriven}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.66\textwidth]{datadrivenandframebased.png}
\caption{text here}
\label{fig:datadrivenandframebased}
\end{figure}
\chapter{Event Building}
\label{ch:events}
\corry implements a very flexible algorithm for offline event building which allows devices with completely different readout schemes to be combined.
This is possible via the concept of detector modules, which allow data from different detectors to be processed individually as described in Section~\ref{sec:module_manager}.
The following sections provide an introduction to event building and detail the procedure using a few examples.
\section{The Order is Key}
When building events, it is important to carefully choose the order of the event loader modules.
The first module to run has to define the event extent in time by creating a frame and adding it to the clipboard.
\begin{warning}
Once this event is set, no changes to its start and end time are possible, and no new event definition can be set by a subsequent module.
\end{warning}
The following modules can only access the defined event and compare its extent to the currently processed data.
An exception to this are triggered devices which do not provide reference timestamps as will be discussed in Section~\ref{sec:triggered_devices}.
The event is cleared at the end of the processing chain.
However, not all event loader modules are capable of defining an event.
Detectors which run in a data-driven mode usually just provide individual measurements together with a timestamp.
In order to slice this continuous data stream into consumable frames, the \command{Metronome} module can be used as described in Section~\ref{sec:metronome}.
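As a minimal sketch of this ordering (a hypothetical configuration with illustrative file names), a purely data-driven setup would place the \command{Metronome} before any event loader so that it defines the event:
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{ini}
# The Metronome runs first and defines the event:
[Metronome]
event_length = 10us

# Subsequent event loaders compare their data to this event:
[EventLoaderEUDAQ2]
name = "Timepix3_0"
file_name = /data/run001_file_spidr.raw
\end{minted}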
\section{Before, During, or After?}
After the event has been defined, subsequent modules should compare their currently processed data to the defined event.
For this, the event object can be retrieved from the clipboard storage, and the timestamps of the current detector can be tested against it via a set of member functions.
These functions return one of the following four possible positions:
\begin{description}
\item[\textbf{\parameter{BEFORE}:}] The data currently under consideration are dated \emph{before} the event. Therefore they should be discarded and more data should be processed.
\item[\textbf{\parameter{DURING}:}] The data currently under consideration are \emph{within} the defined event frame and should be taken into account. More data should be processed.
\item[\textbf{\parameter{AFTER}:}] The data are dated \emph{after} the current event has finished. The processing of data for this detector should therefore be stopped and the current data retained for consideration in the next event.
\item[\textbf{\parameter{UNKNOWN}:}] The event does not allow determining where these data belong. The data should be skipped and the next data should be processed until one of the above conditions can be reached.
\end{description}
Depending on what information is available from the detector data, different member functions are available.
\subsection{Data-Driven Detectors}
If the data provide a single timestamp, such as the data from a data-driven detector, the \command{getTimestampPosition()} function can be used:
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{c++}
// Timestamp of the data currently under scrutiny:
double my_timestamp = 1234;
// Fetching the event definition
auto event = clipboard->getEvent();
// Comparison of the timestamp to the event:
auto position = event->getTimestampPosition(my_timestamp);
\end{minted}
Since an event \emph{always} has to define its beginning and end, this function cannot return an \parameter{UNKNOWN} position.
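In a typical event loader, this comparison is embedded in a loop which keeps reading data until the current event is exhausted. The following is only a sketch: the \command{readNextHit()} helper, the \command{hits} container and the buffering of the last hit are hypothetical placeholders, while the position values correspond to the four cases listed above:
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{c++}
auto event = clipboard->getEvent();
while(auto hit = readNextHit()) {
    auto position = event->getTimestampPosition(hit->timestamp());
    if(position == Event::Position::BEFORE) {
        // Data too old for this event, discard and keep reading:
        continue;
    } else if(position == Event::Position::DURING) {
        // Data belongs to this event, store it:
        hits.push_back(hit);
    } else {
        // AFTER: retain the hit for the next event, stop reading:
        buffered_hit = hit;
        break;
    }
}
\end{minted}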
\subsection{Frame-Based Detectors}
If the data to be added are from a source which defines a time frame, such as frame-based or shutter-based detectors, there are two timestamps to consider.
The appropriate function for position comparison is called \command{getFramePosition()} and takes two timestamps for the start and end of the data frame as well as an additional flag to choose between the interpretation modes \emph{inclusive} and \emph{exclusive} (see below).
Several modules, such as the \command{EventLoaderEUDAQ2}, expose a configuration parameter to influence the matching behavior.
\paragraph{Inclusive Selection:}
The inclusive interpretation will return \parameter{DURING} as soon as there is some overlap between the frame and the event, i.e.\ as soon as the end of the frame is later than the event start and the frame start is before the event end.
If the event has been defined by a reference detector, this mode can be used for the device under test to make sure its data extend well beyond the event defined by the devices providing the reference tracks.
This allows e.g.\ the efficiency to be measured correctly without bias from DUT data lying outside the frame of the reference detector.
\paragraph{Exclusive Selection:}
In the exclusive mode, the frame will be classified as \parameter{DURING} only if start and end are both within the defined event.
This mode could be used if the event is defined by the device under test itself.
In this case, the reference data, which are added to the event afterwards, should not extend beyond this boundary but should only be considered if fully contained within the DUT event, to avoid creating artificial inefficiencies.
The \command{getFramePosition()} function takes the frame start and end as well as the matching behavior flag:
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{c++}
// Start and stop timestamps of the frame currently under scrutiny:
double my_frame_start = 1234, my_frame_stop = 1289;
// Fetching the event definition
auto event = clipboard->getEvent();
// Comparison of the frame to the event, in this case inclusive:
auto position = event->getFramePosition(my_frame_start, my_frame_stop, true);
\end{minted}
The function returns \parameter{UNKNOWN} if the end of the given time frame is before its start and therefore ill-defined.
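For the exclusive interpretation, the same call is simply made with the matching behavior flag set to \parameter{false}:
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{c++}
// Comparison of the frame to the event, in this case exclusive:
auto position = event->getFramePosition(my_frame_start, my_frame_stop, false);
\end{minted}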
\subsection{Triggered Devices Without Timestamp}
\label{sec:triggered_devices}
Many devices return data on the basis of external trigger decisions.
If the device also provides a timestamp, the data can be directly assigned to events based on the algorithms outlined above.
The situation becomes more problematic if the respective data only have the trigger ID or number assigned but should be combined with untriggered devices which define their events based on timestamps only.
In order to process such data, a device which relates timestamps and trigger IDs, such as a trigger logic unit, is required.
This device should be placed before the detector in question in order to assign trigger IDs to the event using the \command{addTrigger(...)} function.
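A minimal sketch of this step, assuming that \command{addTrigger(...)} accepts the trigger ID together with its timestamp (the values shown are illustrative):
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{c++}
// Trigger information decoded from the trigger logic unit data:
uint32_t trigger_id = 1234;
double trigger_timestamp = 567890;
// Fetching the event definition and attaching the trigger:
auto event = clipboard->getEvent();
event->addTrigger(trigger_id, trigger_timestamp);
\end{minted}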
Then, the triggered device without timestamps can query the event for the position of its trigger ID with respect to the event:
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{c++}
// Trigger ID of the current data
uint32_t my_trigger_id = 1234;
// Fetching the event definition
auto event = clipboard->getEvent();
// Comparison of the trigger ID to the event
auto position = event->getTriggerPosition(my_trigger_id);
\end{minted}
If the given trigger ID is smaller than the smallest trigger ID known to the event, the function places the data as \parameter{BEFORE}.
If the trigger ID is larger than the largest known ID, the function returns \parameter{AFTER}. If the trigger ID is known to the event, the respective data are dated as being \parameter{DURING} the current event.
In cases where either no trigger ID has been previously added to the event or where the given ID lies between the smallest and largest known ID but is not part of the event, \parameter{UNKNOWN} is returned.
\section{The Metronome}
\label{sec:metronome}
In some cases, none of the available devices requires a strict event definition such as a trigger or a frame.
This is sometimes the case when all or many of the involved devices have a data-driven readout.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.66\textwidth]{datadrivendevice.png}
\caption{Event building with fixed-length time frames from the Metronome for data-driven detectors.}
\label{fig:datadriven}
\end{figure}
In this situation, the \command{Metronome} module can be used to slice the data stream into regular time frames with a defined length as indicated in Figure~\ref{fig:datadriven}.
In addition to splitting the data stream into frames, trigger numbers can be added to each of the frames as described in the module documentation of the \command{Metronome} in Section~\ref{metronome}.
This can be used to process data exclusively recorded with triggered devices, where no timing information is available or necessary from any of the devices involved.
The module should then be set up to always add one trigger to each of its events, and all subsequent detectors will simply compare to this trigger number and add their event data.
If used, the \command{Metronome} module always has to be placed as the first module in the data analysis chain because it will attempt to define the event and add it to the clipboard.
This operation fails if an event has already been defined by a previous module.
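A minimal sketch of such a trigger-based \command{Metronome} configuration (parameter values are illustrative; the authoritative list is given in the module documentation in Section~\ref{metronome}):
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{ini}
[Metronome]
# Fixed length of each generated event:
event_length = 10us
# Add one trigger number to every event for trigger-based matching:
triggers = 1
\end{minted}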
\section{Example Configurations for Event Building}
In the following examples, it is assumed that all data have been recorded using the \emph{EUDAQ2} framework, and the \command{EventLoaderEUDAQ2} is used for all devices to read and decode their data.
However, the scheme is not limited to this case, and a very similar configuration could be used when the device data have been stored in device-specific native data files.
\subsection{Event Definition by Frame-Based DUT}
\label{sec:reco_mixedmode}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.66\textwidth]{datadrivenandframebased.png}
\caption{Event building strategy in a mixed-mode situation with devices adhering to different readout schemes. Here, the event is defined by the frame-based device and all other data are added within the event boundaries.}
\label{fig:datadrivenandframebased}
\end{figure}
This example demonstrates the setup for event building based on a frame-based device under test.
The event building strategy for this situation is sketched in Figure~\ref{fig:datadrivenandframebased}.
The configuration contains four different devices:
\begin{description}
\item[\parameter{CLICpix2}:] This prototype acts as device under test and is a detector designed for operation at a linear collider, implementing a frame-based readout scheme. In this example, it will define the event.
\item[\parameter{TLU}:] The trigger logic unit records scintillator coincidence signals with their respective timestamps, generates trigger signals and provides the reference clock for all other devices.
\item[\parameter{Timepix3}:] This detector is operated in its data-driven mode, where the external clock from the TLU is used to timestamp each individual pixel hit. The hits are directly read out and sent to the data acquisition system in a continuous data stream.
\item[\parameter{MIMOSA26}:] These detectors are devices with a continuous rolling shutter readout, which do not record any timing information in the individual frames. The data are tagged with the incoming trigger IDs by their DAQ system.
\end{description}
In order to build proper events from these devices, the following configuration is used:
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{ini}
[Corryvreckan]
# ...
[EventLoaderEUDAQ2]
name = "CLICpix2_0"
file_name = /data/run001_file_caribou.raw
[EventLoaderEUDAQ2]
name = "TLU_0"
file_name = /data/run001_file_ni.raw
[EventLoaderEUDAQ2]
name = "Timepix3_0"
file_name = /data/run001_file_spidr.raw
[EventLoaderEUDAQ2]
type = "MIMOSA26"
file_name = /data/run001_file_ni.raw
\end{minted}
The first module will define the event using the frame start and end timestamps from the \parameter{CLICpix2} device.
Then, the data from the \parameter{TLU} are added by comparing the trigger timestamps to the event.
The matching trigger numbers are added to the event for later use.
Subsequently, the \parameter{Timepix3} device adds its data to the event by comparing the individual pixel timestamps to the existing event definition.
Finally, the six \parameter{MIMOSA26} detectors are added one-by-one via the automatic \parameter{type}-instantiation of detector modules described in Section~\ref{sec:module_instantiation}.
Here, the trigger numbers from the detector data are compared to the ones stored in the event as described in Section~\ref{sec:triggered_devices}.
It should be noted that in this example, the data from the \parameter{TLU} and the six \parameter{MIMOSA26} planes are read from the same file.
The event building algorithm is completely transparent to how the individual detector data are stored, and the very same building pattern could be used when storing the data in separate files.
% \begin{figure}[tbp]
% \centering
% \includegraphics[width=0.66\textwidth]{framebaseddevice.png}
% \caption{text here}
% \label{fig:framebased}
% \end{figure}
\subsection{Event Definition by Trigger Logic Unit}
This example demonstrates the setup for event building based on the trigger logic unit.
The configuration contains the same devices as the example in Section~\ref{sec:reco_mixedmode} but replacing the \parameter{CLICpix2} by the \parameter{ATLASpix}:
\begin{description}
\item[\parameter{ATLASpix}:] This prototype acts as device under test and is a detector initially designed for the ATLAS ITk upgrade, implementing a triggerless column-drain readout scheme.
\end{description}
In order to build proper events from these devices, the following configuration is used:
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{ini}
[Corryvreckan]
# ...
[EventLoaderEUDAQ2]
name = "TLU_0"
file_name = /data/run001_file_ni.raw
adjust_event_times = [["TluRawDataEvent", -115us, +230us]]
[EventLoaderEUDAQ2]
type = "MIMOSA26"
file_name = /data/run001_file_ni.raw
[EventLoaderEUDAQ2]
name = "Timepix3_0"
file_name = /data/run001_file_spidr.raw
[EventLoaderATLASpix]
name = "ATLASpix_0"
input_directory = /data/run001/
\end{minted}
The first module will define the event using the frame start and end timestamps from the \parameter{TLU} device.
As in the previous example, the \parameter{MIMOSA26} data will be added based on the trigger number.
The frame start provided by the TLU corresponds to the trigger timestamp and the frame end is one clock cycle later.
However, due to the rolling shutter readout scheme, the hits that are read out from the \parameter{MIMOSA26} planes when receiving a trigger may have happened in a time period before or after the trigger signal.
Consequently, the event times need to be corrected using the \parameter{adjust_event_times} parameter such that the data from the \parameter{ATLASpix} and \parameter{Timepix3} can be added to the event in the entire time window in which the \parameter{MIMOSA26} hits may have occurred.
......@@ -23,8 +23,8 @@ Options are specified as key/value pairs in the same syntax as used in the confi
There are three types of keys that can be specified:
\begin{itemize}
\item Keys to set \textbf{framework parameters}. These have to be provided in exactly the same way as they would be in the main configuration file (a section does not need to be specified). An example to overwrite the standard output directory would be \texttt{corry -c <file> -o output\_directory="run123456"}.
\item Keys for \textbf{module configurations}. These are specified by adding a dot (\texttt{.}) between the module and the actual key as it would be given in the configuration file (thus \textit{module}.\textit{key}). An example to overwrite the number of hits required for accepting a track candidate would be \command{corry -c <file> -o Tracking4D.min_hits_on_track=5}.
\item Keys to specify values for a particular \textbf{module instantiation}. The identifier of the instantiation and the name of the actual key are split by a dot (\texttt{.}), in the same way as for keys for module configurations (thus \textit{identifier}.\textit{key}). The unique identifier for a module can contain one or more colons (\texttt{:}) to distinguish between various instantiations of the same module. The exact name of an identifier depends on the name of the detector. An example to change the neighbor pixel search radius of the clustering algorithm for a particular instantiation of the detector named \textit{dut} could be \command{corry -c <file> -o Clustering4D:dut.neighbour_radius_row=2}.
\end{itemize}
Note that only the single argument directly following the \texttt{-o} is interpreted as the option. If there is whitespace in the key/value pair, this should be properly enclosed in quotation marks to ensure the argument is parsed correctly.
\item \texttt{-g <option>}: Passes extra detector options which are added and overwritten in the detector configuration file.
......@@ -42,16 +42,25 @@ No interaction with the framework is possible during the reconstruction. Signals
\section{The Clipboard}
The clipboard is the framework infrastructure for temporarily storing information during the event processing.
Every module can access the clipboard and both read and write information.
Collections or individual elements on the clipboard are accessed via their data type and an optional key to identify them with a certain detector by its name.
The clipboard consists of three parts: the event, a temporary data storage and a persistent storage space.
\subsection{The Event}
The event is the central element storing meta-information about the data currently processed.
This includes the time frame (or data slice) within which all data are located as well as trigger numbers associated with these data.
New data to be added are always compared to the event on the clipboard to determine whether they should be discarded, included or buffered for later use.
A detailed description of the event building process is provided in Chapter~\ref{ch:events}.
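A brief sketch of how a module can inspect the current event definition, assuming the accessor names \command{start()} and \command{end()} for the event boundaries:
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{c++}
// Retrieve the event and query its extent in time:
auto event = clipboard->getEvent();
double event_start = event->start();
double event_end = event->end();
\end{minted}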
\subsection{Temporary Data Storage}
The temporary data storage is only available during the processing of each individual event.
It is automatically cleared at the end of the event processing and has to be populated with new data for each new event to be processed.
The temporary storage acts as the main data structure to communicate information between different modules and can hold multiple collections of \corry objects such as pixel hits, clusters or tracks.
In order to be able to flexibly store different data types on the clipboard, the access methods for the temporary data storage are implemented as templates, and vectors of any data type deriving from \parameter{corry::Object} can be stored and retrieved.
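As a sketch of this templated access, assuming methods named \command{putData()} and \command{getData()} and a hypothetical \command{decodePixels()} helper:
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{c++}
// Store pixel hits on the clipboard under the detector name:
auto pixels = decodePixels(); // hypothetical decoding helper
clipboard->putData(pixels, "Timepix3_0");
// Any later module can retrieve them by type and detector name:
auto stored = clipboard->getData<Pixel>("Timepix3_0");
\end{minted}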
\subsection{Persistent Storage}
The persistent storage is not cleared at the end of each event processing and can be used to store information used in multiple events.
......@@ -113,6 +122,11 @@ There are three different kind of modules which can be defined:
The type of module determines the constructor used, the internal unique name and the supported configuration parameters.
For more details about the instantiation logic for the different types of modules, see Section~\ref{sec:module_instantiation}.
Furthermore, detector modules can be restricted to certain types of detectors.
This allows the framework to automatically determine for which of the given detectors the respective module should be instantiated.
The \command{EventLoaderCLICpix2} module, for example, will only be instantiated for detectors with the correct type \parameter{CLICpix2}, without additional configuration effort required from the user.
This procedure is described in detail in Section~\ref{sec:module_files}.
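For illustration, a matching entry in the detector configuration file might look as follows (a hypothetical sketch; all values are placeholders):
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{ini}
[CLICpix2_0]
type = "clicpix2"
position = 0mm, 0mm, 100mm
orientation = 0deg, 0deg, 0deg
number_of_pixels = 128, 128
pixel_pitch = 25um, 25um
\end{minted}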
\subsection{Module Status Codes}
The \command{run()} function of each module returns a status code to the module manager, indicating the status of the module.
......@@ -131,7 +145,11 @@ The following status codes are currently supported:
Modules are executed in the order in which they appear in the configuration file; the sequence is repeated for every time frame or event.
If one module in the chain raises e.g.\ a \command{DeadTime} status, subsequent modules are not executed anymore.
The execution order is of special importance in the event building process, since the first detector will always define the event while subsequent event loader modules will have to adhere to the time frame defined by the first module.
A more complete description of the event building process is provided in Chapter~\ref{ch:events}.
This behavior should also be taken into account when choosing the order of modules in the configuration, since e.g.\ data from detectors catered by subsequent event loaders is not processed and hit maps are not updated if an earlier module requested to skip the rest of the module chain.
\subsection{Module instantiation}
\label{sec:module_instantiation}
......@@ -143,7 +161,7 @@ The module search order is as follows:
\begin{enumerate}
\item Modules already loaded before from an earlier section header
\item All directories in the global configuration parameter \parameter{library_directories} in the provided order, if this parameter exists.
\item The internal library paths of the executable, which should automatically point to the libraries that are built and installed together with the executable.
These library paths are stored in \dir{RPATH} on Linux, see the next point for more information.
\item The other standard locations to search for libraries depending on the operating system.
Details about the procedure Linux follows can be found in~\cite{linuxld}.
......
......@@ -9,7 +9,7 @@ This chapter contains details on the standard installation process and informati
\section{Supported Operating Systems}
\label{sec:os}
\corry is designed to run without issues on either a recent Linux distribution or Mac OS\,X.
Furthermore, the continuous integration of the project ensures correct building and functioning of the software framework on CentOS\,7 (with GCC and LLVM), SLC\,6 (with GCC and LLVM) and Mac OS Mojave (OS X 10.14, with AppleClang).
\section{CVMFS}
\label{sec:cvmfs}
......@@ -19,7 +19,7 @@ New versions are published to the folder \dir{/cvmfs/clicdp.cern.ch/software/cor
The deployed version currently comprises all modules which are active by default and do not require additional dependencies.
A \file{setup.sh} is placed in the root folder of the respective release, which allows all runtime dependencies necessary for executing this version to be set up.
Versions for both SLC\,6 and CentOS\,7 are provided.
\section{Docker}
\label{sec:docker}
......@@ -38,7 +38,7 @@ $ docker run --interactive --tty \
bash
\end{verbatim}
Alternatively, it is also possible to directly start the reconstruction process instead of an interactive shell, e.g. using the following command:
\begin{verbatim}
$ docker run --tty --rm \
--volume "$(pwd)":/data \
......@@ -50,12 +50,12 @@ where an analysis described in the configuration \file{my_analysis.conf} is dire
This closely resembles the behavior of running \corry natively on the host system.
Of course, any additional command line arguments known to the \command{corry} executable described in Section~\ref{sec:executable} can be appended.
For tagged versions, the tag name should be appended to the image name, e.g.\ \parameter{gitlab-registry.cern.ch/corryvreckan/corryvreckan:v1.0}, and a full list of available Docker containers is provided via the project container registry~\cite{corry-container-registry}.
\section{Binaries}
Binary release tarballs are deployed to EOS to serve as downloads from the web to the directory \dir{/eos/project/c/corryvreckan/www/releases}.
New tarballs are produced for every tag as well as for nightly builds of the \parameter{master} branch, which are deployed with the name \file{corryvreckan-latest-<system-tag>-opt.tar.gz}.
\section{Compilation from Source}
......
\chapter{Introduction}
\label{ch:introduction}
\corry is a flexible, fast and lightweight test beam data reconstruction framework based on a modular concept of the reconstruction chain.
It is designed to fulfill the requirements for offline event building in complex environments combining detectors with very different readout architectures.
\corry reduces external dependencies to a minimum by implementing its own flexible but simple data format to store intermediate reconstruction steps as well as final results.
The modularity of the reconstruction chain allows users to add their own functionality (such as event loaders to support different data formats or analysis modules to investigate specific features of their detectors) without having to deal with centrally provided functionality such as coordinate transformations, input and output or parsing of user input and configuration of the analysis.
In addition, tools for batch submission of runs to a cluster scheduler such as \command{HTCondor} are provided to ease the (re-)analysis of complete test beam campaigns within a few minutes.
This project strongly profits from the developments undertaken for the \emph{Allpix Squared} project~\cite{apsq,apsq-website}, a \emph{Generic Pixel Detector Simulation Framework}.
Both frameworks employ very similar philosophies for configuration and modularity, and users of one framework will find it very easy to get started with the other.
Some parts of the code base are shared explicitly such as the configuration class or the module instantiation logic.
The \textit{FileReader} and \textit{FileWriter} modules have also profited heavily from their corresponding framework components in Allpix Squared.
The relevant sections of the Allpix Squared manual~\cite{apsq-manual,clicdp-apsq-manual} have been adapted for this document.
\section{Scope of this Manual}
This document is meant to be the primary User Guide for \corry.
It contains both an extensive description of the user interface and configuration possibilities, and a detailed introduction to the code base for potential developers.
This manual is designed to:
\begin{itemize}
\item Guide new users through the installation;
\item Introduce new users to the toolkit for the purpose of running their own test beam data reconstruction and analysis;
\item Explain the structure of the core framework and the components it provides to the reconstruction modules;
\item Provide detailed information about all modules and how to use and configure them;
\item Describe the required steps for implementing new reconstruction modules and algorithms.
\end{itemize}
Within the scope of this document, only an overview of the framework can be provided and more detailed information on the code itself can be found in the Doxygen reference manual~\cite{corry-doxygen} available online.
No programming experience is required from novice users, but knowledge of (modern) C++ will be useful in the later chapters and might contribute to the overall understanding of the mechanisms.
\section{Support and Reporting Issues}
As for most of the software used within the high-energy particle physics community, only limited support on a best-effort basis can be offered for this software.
The authors are, however, happy to receive feedback on potential improvements or problems arising.
Reports on issues, questions concerning the software as well as the documentation and suggestions for improvements are very much appreciated.
These should preferably be brought up on the issues tracker of the project which can be found in the repository~\cite{corry-issue-tracker}.
\section{Contributing Code}
\corry is a community project that benefits from active participation in the development and code contributions from users.
Users and prospective developers are encouraged to discuss their needs via the issue tracker of the repository~\cite{corry-issue-tracker} to receive ideas and guidance on how to implement a specific feature.
Getting in touch with other developers early in the development cycle avoids spending time on features which already exist or are currently under development by other users.
The repository contains a few tools to facilitate contributions and to ensure code quality as detailed in Chapter~\ref{ch:testing}.
\chapter{Using \corry as Online Monitor}
Reconstructing test beam data with \corry does not require many dependencies and is usually very fast due to efficient data handling and fast reconstruction routines.
It is therefore possible to directly perform a full reconstruction including tracking and analysis of the DUT data during data taking.
On Linux machines, this is even possible on data currently being recorded, since multiple read pointers are allowed per file.
The \corry framework comes with an online monitoring tool in the form of a module for data quality monitoring and immediate feedback to the shifter.
The \command{OnlineMonitor} is a relatively simple graphical user interface which displays and updates histograms and graphs produced by other modules during the run.
It should therefore be placed at the very end of the analysis chain in order to have access to all histograms previously registered by other modules.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\textwidth]{onlinemon.png}
\caption{Screenshot of the OnlineMonitor module displaying reconstruction histograms during the \corry run.}
\label{fig:onlinemon}
\end{figure}
A screenshot of the interface is displayed in Figure~\ref{fig:onlinemon}, showing histograms from the reconstruction of data recorded with the setup presented in Section~\ref{sec:reco_mixedmode}.
The histograms are distributed over several canvases according to their stage of production in the reconstruction chain.
It is possible to add histograms for all registered detectors through the \parameter{%DETECTOR%} keyword in the configuration.
Histograms from only those detectors marked as device under test can be added by placing \parameter{%DUT%} in the histogram path.
The module has a default configuration which should match many reconstruction configurations, but each of the canvases and histograms can be freely configured as described in the documentation of the \command{OnlineMonitor} in Section~\ref{onlinemonitor}.
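As a purely illustrative sketch (the canvas name and histogram path are hypothetical; the authoritative syntax is described in Section~\ref{onlinemonitor}), a custom canvas using the replacement keyword could be configured as:
\begin{minted}[frame=single,framesep=3pt,breaklines=true,tabsize=2,linenos]{ini}
[OnlineMonitor]
# Hypothetical canvas listing one hit map per registered detector:
hitmaps = [["Correlations/%DETECTOR%/hitmap", "colz"]]
\end{minted}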
......@@ -165,7 +165,7 @@ Currently, this uses the credentials of the service account user \emph{corry}.
\subsection{Release tarball deployment to EOS}
Binary release tarballs are deployed to EOS to serve as downloads from the website to the directory \dir{/eos/project/c/corryvreckan/www/releases}.
New tarballs are produced for every tag as well as for nightly builds of the \parameter{master} branch, which are deployed with the name \file{corryvreckan-latest-<system-tag>-opt.tar.gz}.
The files are taken from the packaging jobs and published via the \parameter{ci-web-deployer} Docker image from the CERN GitLab CI tools.
This job requires the secret project variables \parameter{EOS_ACCOUNT_USERNAME} and \parameter{EOS_ACCOUNT_PASSWORD} to be set via the GitLab web interface.
......@@ -188,7 +188,7 @@ $ docker login gitlab-registry.cern.ch
# Compile the new image version:
$ docker build --file etc/docker/Dockerfile.base \
--tag gitlab-registry.cern.ch/corryvreckan/\
corryvreckan/corryvreckan-base \
.
# Upload the image to the registry:
$ docker push gitlab-registry.cern.ch/corryvreckan/\
......
......@@ -104,7 +104,7 @@
\input{chapters/configuration}
% Time correlation
\input{chapters/correlation}
\input{chapters/eventbuilding}
\input{chapters/onlinemonitoring}
......
......@@ -60,6 +60,22 @@
note = {{EP-ESE seminar}},
URL = {https://indico.cern.ch/event/267425},
}
@online{corry-issue-tracker,
title = {The \corry Project Issue Tracker},
url = {https://gitlab.cern.ch/corryvreckan/corryvreckan/issues},
year = {2019},
month = sep,
day = {27}
}
@online{corry-doxygen,
title = {The \corry Code Documentation},
author = {Morag Williams and Simon Spannagel and Jens Kroeger},
url = {https://cern.ch/corryvreckan/reference/},
year = {2017},
month = aug,
day = {22}
}
@online{corry-container-registry,
title = {The \corry Docker Container Registry},
author = {Simon Spannagel},
......@@ -89,6 +105,48 @@
month = dec,
day = {12}
}
@online{linuxld,
title = {Linux Programmer's Manual},
subtitle = {ld.so, ld-linux.so - dynamic linker/loader},
author = {Michael Kerrisk},
url = {http://man7.org/linux/man-pages/man8/ld.so.8.html}
}
@article{apsq,
title = "Allpix$^2$: A modular simulation framework for silicon detectors",
journal = "Nucl. Instr. Meth. A",
volume = "901",
pages = "164 - 172",
year = "2018",
issn = "0168-9002",
doi = "10.1016/j.nima.2018.06.020",
author = "S. Spannagel and others",
Archiveprefix = {arXiv},
Eprint = {1806.05813},
keywords = "Simulation, Silicon detectors, Geant4, TCAD, Drift–diffusion"
}
@misc{apsq-website,
title = {The {Allpix Squared} project},
url = "https://cern.ch/allpix-squared/",
note = {accessed August 2019}
}
@TechReport{clicdp-apsq-manual,
author = "Wolters, K. and Spannagel, S. and Hynds, D.",
title = "{User Manual for the Allpix$^2$ Simulation Framework}",
month = "12",
year = "2017",
Number = "CLICdp-Note-2017-006",
institution = {CERN},
url = "https://cds.cern.ch/record/2295206",
}
@TechReport{apsq-manual,
author = "Wolters, K. and Spannagel, S. and Hynds, D.",
title = "{User Manual for the Allpix$^2$ Simulation Framework}",
month = "6",
year = "2019",
institution = {CERN},
url = "https://cern.ch/allpix-squared/usermanual/allpix-manual.pdf",
}
@online{tomlgit,
title = {TOML},
subtitle = {Tom's Obvious, Minimal Language},
......
......@@ -40,14 +40,14 @@ It is recommended to only build the necessary libraries of EUDAQ2 to avoid linki
This can be achieved e.g. by using the following CMake configuration for EUDAQ2:
```bash
cmake -DEUDAQ_BUILD_EXECUTABLE=OFF -DEUDAQ_BUILD_GUI=OFF ..
```
It should be noted that individual EUDAQ2 modules should be enabled for the build, depending on the data to be processed with Corryvreckan.
To e.g. allow decoding of Caribou data, the respective EUDAQ2 module has to be built using
```bash
cmake -DUSER_CARIBOU_BUILD=ON ..
```
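
Both options can also be combined into a single configuration call:

```bash
cmake -DEUDAQ_BUILD_EXECUTABLE=OFF -DEUDAQ_BUILD_GUI=OFF -DUSER_CARIBOU_BUILD=ON ..
```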