
TriggerDB

Introduction

This package contains the code to create and manipulate the trigger database. It contains:

  • The SQL script to create the TriggerDB database schema.
  • Python scripts to upload and download trigger configurations to/from TriggerDB.
  • Menu comparison scripts and formatting scripts.

The scripts for uploading and downloading configurations provide a direct command-line interface to TriggerDB, and will be integrated with the development of the web-based TriggerTool.

Sourcing/Building the Package

The latest version of this package is stored on AFS at /afs/cern.ch/user/a/attrgcnf/TriggerTool/Run3/current/, and the scripts are made accessible from the command line by executing the following command:

source /afs/cern.ch/user/a/attrgcnf/TriggerTool/Run3/current/installed/setup.sh

To build the package manually, create the directory in which you want to build it, and then within this directory use git clone to obtain a copy of the package.

The build process requires a CMakeLists.txt file. Either add a custom one or copy the one from the most recent AFS version:

cp /afs/cern.ch/user/a/attrgcnf/TriggerTool/Run3/current/CMakeLists.txt .

Then run the following commands:

alias cm_setup='source /cvmfs/atlas.cern.ch/repo/sw/tdaq/tools/cmake_tdaq/bin/cm_setup.sh' 
cm_setup tdaq-09-04-00 x86_64-centos7-gcc11-opt
cmake_config x86_64-centos7-gcc11-opt
cd x86_64-centos7-gcc11-opt
make install
cd ..

This will result in a folder called installed/ being created, and the scripts are then accessible through:

source installed/setup.sh

Uploading Database Schema

To upload a new schema:

Set up Oracle, here with version 19.3.0.
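
The Oracle setup command itself is not part of this package; on a machine with the ATLAS software environment available, something along the following lines should work (the lsetup invocation and version string are assumptions to be adapted to your setup):

lsetup "oracle 19.3.0"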

Then upload the new schema:

sqlplus -S ATLAS_CONF_DEV1/<password>@INTR < createSchema.sql

Use with care, as this deletes the current tables of the TriggerDB schema. This command requires schema owner permissions.

For accounts on ATLR, Frontier access is enabled by running the grantFrontier.sql script, which grants permissions to the Frontier reader account.
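
Following the same sqlplus pattern as the schema creation above, this would presumably be run as something like (the account placeholder is illustrative):

sqlplus -S <account>/<password>@ATLR < grantFrontier.sql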

Creating a Set of Configuration Files for Uploading

Below are example commands for creating the JSON files; more advanced and regularly updated instructions are provided on TriggerOnlineMenuUploadRun3.

JSON files are also produced as part of many of the TrigP1 nightly ART tests.

Set up Athena and, if necessary, add the patch area to the environment:

asetup Athena,22.0.47

Run the athenaHLT job:

input=...
athenaHLT.py --dump-config-exit -M -n 5 -f ${input} TriggerJobOpts/runHLT_standalone.py

At P1 set the input file to /atlas/datafiles/dzanzi/data18_13TeV.00360026.physics_EnhancedBias.merge.RAW._lb0146._SFO-2._0001.1; on lxplus set it to /cvmfs/atlas-nightlies.cern.ch/repo/data/data-art/TrigP1Test/data17_13TeV.00327265.physics_EnhancedBias.merge.RAW._lb0100._SFO-1._0001.1.

This will create the files L1Menu_LS2_v1_22.0.47.json, L1Prescale_LS2_v1_22.0.47.json, HLTMenu_LS2_v1_22.0.47.json, HLTPrescale_LS2_v1_22.0.47.json, and HLTJobOptions.json. 👍

Example prescale sets can also be created from the menu. This was an initial feature used until prescale files were produced by TriggerMenuMT, and it should only be used for local testing.

Menu2Prescales.py generates prescale sets with all items/chains enabled, taking the relevant menu as input:

Menu2Prescales.py L1Menu_LS2_v1_22.0.47.json
Menu2Prescales.py HLTMenu_LS2_v1_22.0.47.json

This will create the files L1Prescale_LS2_v1_22.0.47.json and HLTPrescale_LS2_v1_22.0.47.json. 👍

Uploading the Configuration Files

Upload only a set of trigger menus

Use insertMenu.py, specifying the files to be uploaded and the database connection alias:

insertMenu.py --l1menu L1Menu.json --hltmenu HLTMenu.json --hltjo HLTJobOptions.json --mongroup HLTMonitoring.json --dbalias <alias>

All four of --l1menu, --hltmenu, --hltjo and --mongroup are required. The valid options for these arguments are:

  • a filepath to the JSON menu, e.g. providing --l1menu L1Menu_LS2_v1_22.0.47.json will upload the file L1Menu_LS2_v1_22.0.47.json.
  • a numerical key specifying an existing table entry, e.g. providing --l1menu 1 would link to the existing entry with L1 menu ID 1.
  • 'off', specifying that an SMK should be created without this menu, e.g. providing --hltjo off would result in an upload of only the L1 and HLT menus. --l1menu cannot be set to off, meaning either an L1 menu file or index must be provided every time (see the example below).
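
As an illustration, the following hypothetical invocation reuses the existing L1 menu entry 1, uploads a new HLT menu from file, and creates the SMK without job options or monitoring groups:

insertMenu.py --l1menu 1 --hltmenu HLTMenu.json --hltjo off --mongroup off --dbalias TRIGGERDBDEV1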

The script takes an additional optional argument, --comment. This allows the user to add a comment to the super-master table.

The script uploads any new menus to the correct tables, and links them together to create a super-master key. If an existing key is provided, or an identical menu file is found to already be contained in the database, the script will attempt to use the existing entry rather than uploading duplicates.

Upload only prescale sets (L1, HLT, or both)

Use insertPrescales.py, specifying the files to be uploaded, the SMK to link them to, and the database connection alias:

insertPrescales.py --l1ps L1Prescale.json --hltps HLTPrescale.json --smk <SMK> --dbalias <alias>

Either --l1ps or --hltps can be omitted if the user wants to upload a single prescale set, but at least one must be provided.
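
For example, to upload only an HLT prescale set:

insertPrescales.py --hltps HLTPrescale.json --smk <SMK> --dbalias <alias>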

The script takes an additional optional argument, --comment. This allows the user to add a comment with the uploaded prescale sets.

The script uploads the prescale sets to the correct tables, linking them to the specified super-master key.

Upload only monitoring groups

Use insertMonitoringGroups.py, specifying the files to be uploaded, the SMK to link them to, and the database connection alias:

insertMonitoringGroups.py --mongroup HLTMonitoring.json --smk <SMK> --dbalias <alias>

The script takes an additional optional argument, --comment. This allows the user to add a comment with the uploaded Monitoring Groups.

The script uploads the Monitoring Groups to the correct tables, linking them to the specified super-master key via the HLTMenu. Monitoring Groups are used when processing the data at Tier0, to configure which HLT chains should appear in which signatures/folders in the offline monitoring.

By default, a new monitoring group file is set to be in use if there are no other monitoring groups attached to the SMK, or to off if there is an existing one attached. This can be changed, reverted, or set to an even older one using the command below. Do not change the monitoring groups for an SMK that has already been used to take data without first discussing with Tier0 ops, as the existing monitoring groups could already have been used to produce monitoring at Tier0.

setMonitoringGroupsInUse.py --mongroup <MONGROUP> --smk <SMK> --dbalias <alias>

To find out which Monitoring Groups are attached to an SMK, use the listMonitoringGroupsBySMK.py script described below in the section on downloading the configuration files.

Upload a complete set of trigger menus and prescale sets

Use insertAll.py, specifying a directory containing the files to be uploaded and the database connection alias:

insertAll.py --directory <directory> --dbalias <alias>

The directory specified should contain the JSON files for the L1 menu, HLT menu, HLT JO, and prescale sets to be uploaded.

The script takes several additional optional arguments:

  • --comment: This allows the user to add a comment with the upload (to the super-master table and prescale tables).
  • --l1menu, --hltmenu, --hltjo, and --mongroup: These function as for insertMenu.py and insertMonitoringGroups.py, and can be set to 'off' or to a numerical key specifying an existing entry. This overrides any file found in the specified directory, e.g. if --l1menu 1 is specified, the existing L1 menu entry 1 will be used rather than any files found in the directory. If --hltmenu off is specified, any HLT menu, Monitoring Groups or prescale files found in the directory will be ignored.

The script ties together the functionality of insertMenu.py, insertMonitoringGroups.py and insertPrescales.py, uploading the trigger menu files within the directory to create a super-master key. If the directory contains Monitoring Groups and any number of prescale sets, all of these are uploaded and linked to this super-master key.
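
As an illustration, the following hypothetical invocation uploads everything found in the directory myMenuDir except the HLT job options:

insertAll.py --directory myMenuDir --hltjo off --dbalias TRIGGERDBDEV1 --comment "test upload"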

Upload a bunch group set

Use insertBunchGroupSet.py, specifying the file to be uploaded and the database connection alias:

insertBunchGroupSet.py --bgs BunchGroupSet.json --dbalias <alias>

All arguments are required.

Downloading the Configuration Files

To download the menu JSON files, use extractMenu.py, specifying the SMK for which you want to download the files, and the database connection alias:

extractMenu.py --smk <SMK> --dbalias <alias>

All arguments are required.

The script downloads the JSON files, saving them to a local directory as SMK_<SMK>/L1Menu_<SMK>.json, SMK_<SMK>/HLTMenu_<SMK>.json, SMK_<SMK>/MonitoringGroups_<SMK>.json and SMK_<SMK>/HLTJO_<SMK>.json.

To download prescale JSON files, use extractPrescales.py, specifying the prescale set IDs for which you want to download the files, and the database connection alias:

extractPrescales.py --l1psk <L1PSK> --hltpsk <HLTPSK> --dbalias <alias>

Either --l1psk or --hltpsk can be omitted if the user wants to download a single prescale set, but at least one must be provided.
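
For example, to download only the HLT prescale set:

extractPrescales.py --hltpsk <HLTPSK> --dbalias <alias>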

The script downloads the JSON files, saving them to the local directory as ./L1Prescale_<L1PSK> and ./HLTPrescale_<HLTPSK>.

To see which menus or prescales are available for a given SMK, you can use listMenusBySMK.py and listPrescalesBySMK.py, specifying the SMK to be checked, and the database connection alias:

listMenusBySMK.py --smk <SMK> --dbalias <alias>
listPrescalesBySMK.py --smk <SMK> --dbalias <alias>

All arguments are required.

These scripts will return the keys for the relevant tables that can be passed to extractMenu.py or extractPrescales.py to download the relevant JSON files.

To download monitoring groups JSON files, use extractMonitoringGroups.py, specifying which you want to download and the database connection alias:

extractMonitoringGroups.py --mongroup <MONGROUP> --dbalias <alias>
extractMonitoringGroups.py --smk <SMK> --dbalias <alias>
extractMonitoringGroups.py --hltmenu <HLTMENU> --dbalias <alias>

One of these forms must be used, extracting the monitoring groups either directly by their ID, or via the SMK or HLT menu they are associated with. Each form will download the single monitoring groups file which has been set to be enabled. To see all the monitoring groups attached to an SMK and whether or not they are enabled, please use the script below.

The script downloads the JSON file, saving it to the local directory as ./MonitoringGroup_<MONGROUP>, ./MonitoringGroupSMK_<SMK>, or ./MonitoringGroupHLTMenu_<HLTMENU>, respectively.

To see which monitoring groups are available for a given SMK and whether they are enabled, you can use listMonitoringGroupsBySMK.py, specifying the SMK to be checked, and the database connection alias:

listMonitoringGroupsBySMK.py --smk <SMK> --dbalias <alias>

All arguments are required.

This script returns the keys for the relevant tables, which can be passed to extractMonitoringGroups.py to download the corresponding JSON files.

To download bunch group set JSON files, use extractBunchGroupSet.py, specifying the bunch group set ID for which you want to download the file, and the database connection alias:

extractBunchGroupSet.py --bgsk <BGSK> --dbalias <alias>

All arguments are required.

Checking the Upload is Successful

Every successful upload to the database will create a JSON file that functions as a record of the upload. This is useful for checking that an upload has proceeded as expected, and is utilised by the nightly tests.

The JSON file is generated with the name uploadRecord<timestamp>.json, and contains the following information:

  • The SMK created/modified by the upload.
  • Details of the files/keys uploaded.
  • The uploader's username.
  • The timestamp of the upload.
  • Any comment attached to the upload.
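
To quickly inspect a record it can be pretty-printed with a standard tool, for example:

python -m json.tool uploadRecord<timestamp>.json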

AutoPrescaler features

The AutoPrescaler has some specific features in the TriggerDB Scripts.

  • Prescale name: To avoid duplication of keys, AP-generated keys no longer have "AUTO_" in the name parameter inside the JSON file.

  • Instead, the insertPrescales script appends this to the table entry.

  • Therefore, in TTweb and the TriggerPanel the name will still appear with "AUTO_".

  • If manually edited and re-uploaded by an expert, the name in the DB would no longer contain "AUTO_".

  • Username: As the AutoPrescaleEditor calls the scripts directly, it sets the L1Prescale upload username to the application rather than the shell user, as the latter doesn't exist in the environment.

  • Hidden: L1Prescales are uploaded as hidden by default for the AutoPrescaler.

  • L1Menu: The AutoPrescaler needs only the L1 menu from the SMK, so extractMenu.py can extract just this with the option --l1only.
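
For example, to extract only the L1 menu for a given SMK (combining the documented extractMenu.py arguments with --l1only):

extractMenu.py --smk <SMK> --dbalias <alias> --l1only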

L1CT and MUCTPI hardware files

The full set of instructions for the L1 menu experts is maintained on the L1 Central Trigger page.

Generation

The generation of the hardware files happens outside this package and is done by the L1CTP experts. The files are expected to be provided in the following structure:

File directory structure

topDir
|
|--- L1_CTP_FILES
|    |--- cam.dat
|    |--- lut.dat
|    |--- smx.dat
|    |--- mon_sel_SLOT7.dat
|    |--- mon_sel_SLOT8.dat
|    |--- mon_sel_SLOT9.dat
|    |--- mon_sel_CTPMON.dat
|    |--- mon_dec_SLOT7.dat
|    |--- mon_dec_SLOT8.dat
|    |--- mon_dec_SLOT9.dat
|    |--- mon_dec_CTPMON.dat
|    |--- mon_dmx_CTPMON.dat
|
|--- L1_CTP_SMX
|    |--- smxo.dat
|    |--- smx_SLOT7.vhd
|    |--- smx_SLOT8.vhd
|    |--- smx_SLOT9.vhd
|    |--- ctpin_smx_slot7.svf
|    |--- ctpin_smx_slot8.svf
|    |--- ctpin_smx_slot9.svf
|
|--- L1_MUCTPI_FILES
|    |--- muctpi.json
|
|--- L1_TMC_SIGNALS
|    |--- tmc.json
|
|--- L1_MENU
     |--- L1Menu_PhysicsP1_pp_run3_v1_22.0.54_v2.json

Upload

insertL1CT.py --topdir <dir> --dbalias <alias> --smk <smk>

List available SMKs and attached files

listL1CTInfo.py --dbalias <alias> [--smk <smk>]

Expert upload actions

The insert script checks that the target SMK is the correct one by comparing the L1 menu hash. If needed, this check can be disabled using the option --skipTargetCheck.

The content of a table will not be overwritten. One can still update a table that is already filled using the option --force ctp smx muctpi tmc.
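
For example, to force an update of only the CTP and SMX tables (assuming --force accepts a subset of the listed subsystems):

insertL1CT.py --topdir <dir> --dbalias <alias> --smk <smk> --force ctp smx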

Database Alias Options

All the upload/download scripts require a --dbalias argument. This specifies which copy of TriggerDB you want to interact with.

The following options are currently available:

Alias                Database                       Purpose
TRIGGERDB_RUN3       ATLAS_CONF_TRIGGER_RUN3        Production database for Run3
TRIGGERDBDEV1_I8     ATLAS_CONF_TRIGGER_DEV1        Main LS2 development database on INT8R
TRIGGERDBDEV1        ATLAS_CONF_DEV1                Original LS2 development database on INTR
TRIGGERDBDEV2        ATLAS_CONF_DEV2                Original LS2 development database on INTR
TRIGGERDBART         ATLAS_CONF_TRIGGER_ART         Nightly test database
TRIGGERDBATN         ATLAS_TRIGGER_ATN              Nightly test database
TRIGGERDBREPR_RUN3   ATLAS_CONF_TRIGGER_RUN3_REPR   Trigger Reprocessing database

For more information see TriggerDatabaseVersions.

Getting the Hash of a File

To manually get the hash of a JSON file (the same hash that is stored in the database to avoid duplicate files being uploaded), use hashJSON.py, specifying the JSON file to be hashed:

hashJSON.py --jsonfile <filename>.json 

The hashing is performed using the MD5 message-digest algorithm, and is set up to be independent of changes in whitespace or key ordering within the JSON file, i.e. only changes to the contents of the keys or their values result in a different hash being returned.
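
This invariance can be checked by reformatting a file and hashing both versions; the two hashes should be identical (the filenames here are illustrative):

python -m json.tool L1Menu.json > L1Menu_reformatted.json
hashJSON.py --jsonfile L1Menu.json
hashJSON.py --jsonfile L1Menu_reformatted.json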

Reprocessing key upload

To simplify the uploading of keys for reprocessing experts, the script createReprKeys.py is provided. This script will find the files that were generated in a specified ART test for a given nightly and upload them to the Reprocessing DB. Comments for the keys are all auto-generated based on the input arguments. The script performs the upload by setting up the TriggerDB scripts and using the insertAll.py script.

createReprKeys.py --jira <jira> --release <release> --testFiles <test name>

Additional settings can be added but have appropriate defaults:

  • --releaseBranch defaults to 22.0
  • --buildTag defaults to x86_64-centos7-gcc11-opt

Bunchgroup set generation and upload at P1

From a filling scheme, provided as a csv or json file

To create a bunchgroup set json file from a provided filling scheme file <name of filling scheme>.csv or <name of filling scheme>.json:

ReadBunchGroup.py --fromFile --file <name of filling scheme>.csv --write-json

This will create a file BunchGroupSet_<name of filling scheme>.json, that can then be uploaded:

insertBunchGroupSet.py --dbalias TRIGGERDB_RUN3 --bgs BunchGroupSet_<name of filling scheme>.json

This can also be done in a single step:

ReadBunchGroup.py --fromFile --file <name of filling scheme>.csv -u

Again, this also works for a filling scheme given in json format.

From the current LHC filling pattern

To create a bunchgroup set json file from the current LHC filling scheme, run

ReadBunchGroup.py --fromLHC --write-json

The name of the bunchgroup set will be LHC_<nbunches>bunches_<ntrains>trains_<nindivs>indivs; in the case of a hybrid filling scheme, _hybrid is appended to the name.

The name of the created file will be BunchGroupSet_<name of bgset>.json, which can be uploaded:

insertBunchGroupSet.py --dbalias TRIGGERDB_RUN3 --bgs BunchGroupSet_<name of bgset>.json

This can also be done in a single step:

ReadBunchGroup.py --fromLHC -u