Commit 3180a96e authored by Chris Burr, committed by Eduardo Rodrigues

Update documentation for lbexec

parent 0f85ea0e
2 merge requests: !1103 (Draft: Add AnalysisHelpers to DaVinci Stack), !724 (Update documentation for lbexec)
@@ -33,32 +33,24 @@ def main(options):
fields['MuPlus'] = 'J/psi(1S) -> ^mu+ mu-'
#FunTuple: make collection of functors for Jpsi
variables_jpsi = FunctorCollection({
'LOKI_P':
'P',
'LOKI_PT':
'PT',
'LOKI_Muonp_PT':
'CHILD(PT, 1)',
'LOKI_Muonm_PT':
'CHILD(PT, 2)',
'LOKI_MAXPT':
'TRACK_MAX_PT',
'LOKI_N_HIGHPT_TRCKS':
'NINTREE(ISBASIC & HASTRACK & (PT > 1500*MeV))',
'THOR_P':
F.P,
'THOR_PT':
F.PT
})
variables_jpsi = {
'LOKI_P': 'P',
'LOKI_PT': 'PT',
'LOKI_Muonp_PT': 'CHILD(PT, 1)',
'LOKI_Muonm_PT': 'CHILD(PT, 2)',
'LOKI_MAXPT': 'TRACK_MAX_PT',
'LOKI_N_HIGHPT_TRCKS': 'NINTREE(ISBASIC & HASTRACK & (PT > 1500*MeV))',
'THOR_P': F.P,
'THOR_PT': F.PT
}
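# variables_jpsi is now a plain dict mixing LoKi functor strings and ThOr functors
# (F.P, F.PT); it is wrapped in a FunctorCollection only when associated to a field below.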
#FunTuple: make collection of functors for Muplus
variables_muplus = FunctorCollection({'LOKI_P': 'P', 'THOR_P': F.P})
variables_muplus = {'LOKI_P': 'P', 'THOR_P': F.P}
#FunTuple: associate functor collections to field (branch) name
variables = {}
variables['Jpsi'] = variables_jpsi
variables['MuPlus'] = variables_muplus
variables['Jpsi'] = FunctorCollection(variables_jpsi)
variables['MuPlus'] = FunctorCollection(variables_muplus)
#FunTuple: define list of preambles for loki
loki_preamble = ['TRACK_MAX_PT = MAXTREE(ISBASIC & HASTRACK, PT, -1)']
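# TRACK_MAX_PT defined in this preamble can then be referenced by name inside the
# LoKi functor strings above (e.g. the 'LOKI_MAXPT' variable).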
@@ -77,7 +69,7 @@ def main(options):
fields_KS['KS'] = 'KS0 -> pi+ pi-'
#associate the functor collections to KS field name (NB: here we use functor collection used for jpsi)
variables_KS = {}
variables_KS['KS'] = variables_jpsi
variables_KS['KS'] = FunctorCollection(variables_jpsi)
#funtuple instance
tuple_kshorts = Funtuple(
name="KsTuple",
@@ -19,46 +19,38 @@ from DaVinci import make_config
def main(options):
#FunTuple: define fields (branches)
# FunTuple: define fields (branches)
fields = {
'B0': "[B0 -> D_s- pi+]CC",
'Ds': "[B0 -> ^D_s- pi+]CC",
'pip': "[B0 -> D_s- ^pi+]CC",
}
#FunTuple: define variables for the B meson
variables_B = FunctorCollection({
'LOKI_MAXPT':
'TRACK_MAX_PT',
'LOKI_Muonp_PT':
'CHILD(PT, 1)',
'LOKI_Muonm_PT':
'CHILD(PT, 2)',
'LOKI_NTRCKS_ABV_THRSHLD':
'NINTREE(ISBASIC & (PT > 15*MeV))'
})
# FunTuple: define variables for the B meson
variables_B = {
'LOKI_MAXPT': 'TRACK_MAX_PT',
'LOKI_Muonp_PT': 'CHILD(PT, 1)',
'LOKI_Muonm_PT': 'CHILD(PT, 2)',
'LOKI_NTRCKS_ABV_THRSHLD': 'NINTREE(ISBASIC & (PT > 15*MeV))'
}
#FunTuple: make functor collection from the imported functor library Kinematics
# FunTuple: make functor collection from the imported functor library Kinematics
variables_all = Kinematics()
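# Kinematics() returns a ready-made FunctorCollection of standard kinematic
# variables (the corresponding import is outside this hunk).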
#FunTuple: associate functor collections to field (branch) name
# FunTuple: associate functor collections to field (branch) name
variables = {
'ALL': variables_all, #adds variables to all fields
'B0': variables_B,
'ALL': variables_all, # adds variables to all fields
'B0': FunctorCollection(variables_B),
}
line = "SpruceB2OC_BdToDsmPi_DsmToKpKmPim_Line"
config = {
"location":
"/Event/Spruce/SpruceB2OC_BdToDsmPi_DsmToKpKmPim_Line/Particles",
"filters":
["HLT_PASS('SpruceB2OC_BdToDsmPi_DsmToKpKmPim_LineDecision')"],
"location": f"/Event/Spruce/{line}/Particles",
"filters": [f"HLT_PASS('{line}Decision')"],
"preamble": ['TRACK_MAX_PT = MAXTREE(ISBASIC & HASTRACK, PT, -1)'],
"tuple":
"DecayTree",
"fields":
fields,
"variables":
variables,
"tuple": "DecayTree",
"fields": fields,
"variables": variables,
}
algs = configured_FunTuple(options, {"B0Dspi": config})
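# configured_FunTuple builds the FunTuple algorithm(s) from the {name: config}
# mapping; presumably additional tuples could be booked by adding further entries.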
@@ -25,7 +25,7 @@ def main(options):
turbo_line = "Hlt2BsToJpsiPhi_JPsi2MuMu_PhiToKK_Line"
input_data = force_location(f"/Event/HLT2/{turbo_line}/Particles")
#Add a filter: We are not really filtering over particles, we are getting over a technical hurdle here.
# Add a filter: we are not really filtering over particles; we are getting over a technical hurdle here.
# The hurdle is that if the event hasn't fired an HLT2 line then no TES location exists,
# and therefore if any algorithm tries to look for this location, we run into a problem.
# Side-step this issue with a filter, where:
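# For example (a sketch following the minimal example in the running tutorial below;
# the name "HDRFilter_SeeNoEvil" is taken from that example):
# my_filter = add_filter(options, "HDRFilter_SeeNoEvil", f"HLT_PASS('{turbo_line}')")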
@@ -47,8 +47,13 @@ extensions = [
"sphinx.ext.graphviz",
"sphinx.ext.todo",
"graphviz_linked",
"sphinxcontrib.autodoc_pydantic",
]
# Control the display of the DaVinci.Options object
autodoc_pydantic_model_show_json = True
autodoc_pydantic_settings_show_json = False
# Assume unmarked references (in backticks) refer to Python objects
default_role = "py:obj"
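# e.g. writing `DaVinci.Options` in the documentation is resolved as the Python
# object cross-reference :py:obj:`DaVinci.Options`.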
This diff is collapsed.
@@ -4,6 +4,5 @@ DaVinci
.. toctree::
application
options
Application
===========
The ``DaVinci.options`` object holds an instance of
`PyConf.application.ApplicationOptions`.
.. autoclass:: PyConf.application.ApplicationOptions
:members:
The other members of the ``DaVinci`` module are used for high-level application
configuration. Most 'main' options files will call `DaVinci.run_davinci`.
.. automodule:: DaVinci
.. autofunction:: DaVinci.run_davinci
Options YAML
============
The YAML provided to populate the ``options`` object passed to the user-provided function, often called ``options.yaml``, is parsed using the following model:
.. autopydantic_model:: DaVinci.Options
:inherited-members: BaseModel
:model-show-field-summary: False
.. autoclass:: GaudiConf.LbExec.options.DataTypeEnum
:members:
:undoc-members:
.. autoclass:: GaudiConf.LbExec.options.FileFormats
:members:
:undoc-members:
.. autoclass:: GaudiConf.LbExec.options.EventStores
:members:
:undoc-members:
Welcome to DaVinci's documentation!
===================================
DaVinci is the LHCb offline analysis application.
It allows users to produce the tuples in which the relevant information
about the reconstructed particles of the decay of interest is stored.
Consider it as your way to access LHCb data!
@@ -26,10 +26,10 @@ and Turbo output.
:caption: User Guide
:maxdepth: 3
configuration/davinci_configuration
tutorials/running
configuration/davinci_configuration
.. toctree::
:caption: API Reference
:maxdepth: 3
@@ -3,4 +3,5 @@
# means default value... https://github.com/sphinx-doc/sphinx/pull/8546
sphinx==4.4.0
sphinx_rtd_theme==1.0.0
gitpython
\ No newline at end of file
gitpython
autodoc_pydantic==1.6.2
Running DaVinci
===============
Broadly speaking, DaVinci is a repository of Python files which can be used to
configure a Gaudi application. In this way, "running DaVinci" is the same as
running any other LHCb application (such as Brunel or Moore)::
From ``DaVinci/v62r0`` the DaVinci configuration has been modernized and revisited to improve
user accessibility and hide the technicalities the user does not need to deal with.
The two major changes with respect to the old configuration are:
lb-run DaVinci/latest gaudirun.py some_options_file.py
* the general structure 'à la' PyConf
* the requirement to use ``lbexec`` (see the `talk from the 104th LHCb week for details <https://indico.cern.ch/event/1160084/#249-replacing-gaudirunpy-with>`__)
The configuration of your job is now declared in two files:
Until `DaVinci/v53r0` the above was the only way to run DaVinci.
With that release a new way of running DaVinci is introduced::
* A Python function that takes an ``options`` argument and returns the PyConf configuration
* A YAML file which declares data-specific configuration and is used to populate the ``options`` object
lb-run DaVinci/latest davinci [davinci_option] command [command_option] [extra_options]
DaVinci can then be run using:
This command exploits a Click-based script that allows all the options to be set directly on the command line.
\ No newline at end of file
.. code-block:: bash
lb-run DaVinci/vXrY lbexec my_module:my_function options.yaml
Replace ``lb-run DaVinci/vXrY`` with a specific version or, in the case of development builds, with the ``./run`` script.
Minimal example
---------------
Make a file named ``my_module.py`` that contains a function that takes an ``options`` argument and returns the result of ``DaVinci.make_config``:
.. code-block:: python
from DaVinci import make_config
from DaVinci.algorithms import add_filter
from PyConf.Algorithms import PrintDecayTree
from PyConf.dataflow import force_location
def print_decay_tree(options):
turbo_line = "Hlt2BsToJpsiPhi_JPsi2MuMu_PhiToKK_Line"
input_data = force_location(f"/Event/HLT2/{turbo_line}/Particles")
user_algorithms = [
add_filter(options, "HDRFilter_SeeNoEvil", f"HLT_PASS('{turbo_line}')"),
PrintDecayTree(name="PrintBsToJpsiPhi", Input=input_data)
]
return make_config(options, user_algorithms)
Also make a file named ``options.yaml`` containing:
.. code-block:: yaml
input_files:
- root://eoslhcb.cern.ch//eos/lhcb/wg/dpa/wp3/tests/hlt2_passthrough_thor_lines.dst
annsvc_config: root://eoslhcb.cern.ch//eos/lhcb/wg/dpa/wp3/tests/hlt2_passthrough_thor_lines.tck.json
input_type: ROOT
evt_max: 100
ntuple_file: davinci_ntuple.root
enable_unpack: True
process: Turbo
print_freq: 1
data_type: Upgrade
simulation: true
conddb_tag: sim-20180530-vc-md100
dddb_tag: dddb-20180815
This example can then be run using:
.. code-block:: bash
lb-run DaVinci/vXrY lbexec my_module:print_decay_tree options.yaml
For a more detailed explanation of this job, as well as many more examples, see the `DaVinci tutorials repository <https://gitlab.cern.ch/lhcb/DaVinci/-/tree/master/DaVinciTutorials>`__.
Options YAML
------------
The full schema with which the ``options.yaml`` file is parsed can be found in :class:`~DaVinci.Options`.