Commit 7a8ed511 authored by Thibaud Marie Eric Buffet's avatar Thibaud Marie Eric Buffet
Merge branch 'SIGMON-120' into 'dev'

README

See merge request !19
# Contributing Manual
## Issue reporting and tracking
To report issues and track their progress we use a [JIRA dashboard](https://its.cern.ch/jira/secure/RapidBoard.jspa?rapidView=7215&projectKey=SIGMON&view=detail).
- When a new issue arises, report it by creating a new story on the JIRA board using the `Create` button.
- This new `story` (which can be broken down into `tasks`) should be assigned to yourself:
search for and select your username as both `Reporter` and `Assignee`, and move the story (and related tasks) to the `BACKLOG` column.
- The issue must contain a complete and explicit description. Feel free to use images, screenshots or figures to illustrate the issue.
- It is now waiting for someone to check it out and start working on it.
Note: issues can move back to their previous steps if needed.
### 3. Commit and push your changes to Gitlab:
- __IMPORTANT__ -> To avoid later confusion, all notebooks must be __CLEAN__ before pushing! They should **NOT** contain any SWAN execution outputs.
To clean a notebook, click on the menu `Cell` --> `All Output` --> `Clear`.
- __IMPORTANT__ -> Increment the version number in the `hwc_working/__init__.py` file.
- Open the SWAN terminal.
![SWAN_terminal](figures/swan-cli-button.png)
- Then run the command `source /eos/project/l/lhcsm/public/contributor_push.sh`
- (Note: the working directory `hwc_working` has been deleted; if you want to reload it, run `source /eos/project/l/lhcsm/public/contributor.sh` in the terminal.)
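Since `.ipynb` notebooks are plain JSON files, clearing the execution outputs can also be scripted. A minimal sketch (not an official SWAN tool; the helper name is hypothetical):

```python
import json
from pathlib import Path

def clear_notebook_outputs(path):
    """Strip execution outputs from a .ipynb file.

    Equivalent to `Cell -> All Output -> Clear` in the notebook menu.
    """
    nb = json.loads(Path(path).read_text())
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []           # drop any SWAN execution outputs
            cell["execution_count"] = None  # reset the [n] execution counter
    Path(path).write_text(json.dumps(nb, indent=1))
```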
### 4. Open a merge request on [Gitlab with your changes to be reviewed](https://gitlab.cern.ch/LHCData/lhc-sm-hwc/-/merge_requests):
- select source branch `dev`
- select target branch `master`
- The merge request then awaits review from software experts (a minimum of 2 reviewer approvals). You can notify them by sending an email to `lhc-signal-monitoring@cern.ch`.
##### On JIRA dashboard:
- When the merge request is open move the issue to the `REVIEW` column.
### 5. Once the merge request has been reviewed and approved then click on the `Merge` button of the merge request page.
- The `DEV` [pipeline](https://gitlab.cern.ch/LHCData/lhc-sm-api/-/pipelines) is now triggered. Wait until the pipeline is completed and all steps are green.
If the pipeline turns red, an issue has been introduced. You should fix it, or contact `lhc-signal-monitoring@cern.ch`.
- To load the `DEV` environment you can restart the SWAN configuration with the option Environment script: `/eos/project/l/lhcsm/public/packages_notebooks_dev.sh`
- In this `DEV` environment, experts (possibly you) can validate that all the changes are correct.
##### On JIRA dashboard:
### RELEASE TO PRO
### 6. When changes are validated, you must __TAG__ the `master` branch with the previously given **_version number_**: https://gitlab.cern.ch/LHCData/lhc-sm-hwc/-/tags/new
- The `PRO` [pipeline](https://gitlab.cern.ch/LHCData/lhc-sm-api/-/pipelines) is now triggered. Wait until the pipeline is completed and all steps are green.
If the pipeline turns red, an issue has been introduced. You should fix it in the `dev` branch, or contact `lhc-signal-monitoring@cern.ch`.
- To load the `PRO` environment you can restart the SWAN configuration with the option Environment script: `/eos/project/l/lhcsm/public/packages_notebooks.sh`
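The tag must match the version number set earlier in `hwc_working/__init__.py`. A small sketch of that consistency check (the helper names are hypothetical; it assumes the conventional `__version__ = "x.y.z"` line):

```python
import re

def version_from_init(init_text):
    """Extract __version__ from the contents of hwc_working/__init__.py."""
    m = re.search(r'__version__\s*=\s*["\']([^"\']+)["\']', init_text)
    if m is None:
        raise ValueError("no __version__ found")
    return m.group(1)

def tag_matches_version(tag, init_text):
    """True if a Gitlab tag (with or without a leading 'v') matches __version__."""
    return tag.lstrip("v") == version_from_init(init_text)
```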
##### On JIRA dashboard:
- The issue is fixed and can be moved to `DONE` column.
Please follow the procedure below to request NXCALS access.
- system: WinCCOA, CMW
- NXCALS environment: PRO
Optionally, please mention that the NXCALS database will be accessed through SWAN.
Once the access is granted, you can use NXCALS with SWAN.
### 2. EOS lhcsm Access
EOS lhcsm access is granted via e-groups.

The following e-groups allow for storage of reports:
- [cernbox-project-mp3-writers](https://e-groups.cern.ch/e-groups/Egroup.do?egroupId=10382587)
- [cernbox-project-mp3-readers](https://e-groups.cern.ch/e-groups/Egroup.do?egroupId=10382586)
To modify the package (env), scripts and notebooks:
- [cernbox-project-lhcsm-writers](https://e-groups.cern.ch/e-groups/Egroup.do?egroupId=10356104)
- [cernbox-project-lhcsm-readers](https://e-groups.cern.ch/e-groups/Egroup.do?egroupId=10356103)
**Note it could take some time for the changes to propagate.**
### 3. Logging to SWAN
The following steps should be followed in order to log in to SWAN:
1. Go to http://swan.cern.ch
<img src="https://gitlab.cern.ch/LHCData/lhc-sm-hwc/raw/master/figures/swan-inactive-cernbox-error.png" width=50%>
### 4. Setting an Environment Script
In order to execute the HWC notebooks, one requires the `lhc-sm-api` package and the HWC notebooks. To this end, we created a dedicated environment script to prepare the SWAN project space.
The script sets a path to a virtual environment with the necessary packages (for more details, cf. https://lhc-sm-api.web.cern.ch/lhc-sm-api/user_install.html#preinstalled-packages) as well as makes a copy of HWC notebooks.
The SWAN Notebooks can be loaded in 3 different environments:
- PRO
- DEV
- DEV CONTRIBUTOR
#### 4.1 PRO
This is the PRO and stable version of the notebooks, users and experts should use this script.
At every log-in to SWAN, please provide the following environment script:
`/eos/project/l/lhcsm/public/packages_notebooks.sh`
**Note that in order to ensure compatibility between package and notebook versions, the `hwc` folder is deleted each time the script is executed.**
**Note: Notebooks changes are not persisted when using this environment script.**
<img src="https://gitlab.cern.ch/LHCData/lhc-sm-hwc/raw/master/figures/swan_environment_script.png" width=25%>
Note the following settings while configuring environment:
- Memory: `16 GB`
- Spark cluster: `BE NXCALS (NXCals)`
This script creates a new `hwc` folder in SWAN.
#### 4.2 DEV
This is the DEV version of the notebooks; this script is used for validation and verification of new features.
At every log-in to SWAN, please provide the following environment script:
`/eos/project/l/lhcsm/public/packages_notebooks_dev.sh`
This script creates a new `hwc_dev` folder in SWAN.
**Note: in order to ensure compatibility between package and notebook versions, the `hwc_dev` folder is deleted each time the script is executed.**
**Note: Notebooks changes are not persisted when using this environment script.**
#### 4.3 DEV CONTRIBUTOR
This is a DEV version of the notebooks; this script is used to implement new features and new notebooks.
See [CONTRIBUTING.md](CONTRIBUTING.md).

This script uses the `hwc_working` folder in SWAN. If the folder already exists, the script just loads the DEV environment.
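The three environments described above can be summarized programmatically; a sketch using only the script paths and folder names quoted in this guide (the `persistent` flag reflects the persistence notes above):

```python
# Environment scripts and working folders, as described in this guide.
ENVIRONMENTS = {
    "PRO": {
        "script": "/eos/project/l/lhcsm/public/packages_notebooks.sh",
        "folder": "hwc",
        "persistent": False,  # folder deleted on each script execution
    },
    "DEV": {
        "script": "/eos/project/l/lhcsm/public/packages_notebooks_dev.sh",
        "folder": "hwc_dev",
        "persistent": False,  # folder deleted on each script execution
    },
    "DEV CONTRIBUTOR": {
        "script": "/eos/project/l/lhcsm/public/contributor.sh",
        "folder": "hwc_working",
        "persistent": True,  # kept between sessions so changes survive
    },
}

def environment_script(name):
    """Return the environment script path for a given environment name."""
    return ENVIRONMENTS[name]["script"]
```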
### 5. Running Notebook
#### 5.1. Open notebook
To do so, simply open the folder created in the previous step and then select a circuit. Afterwards, click the name of a notebook to open a new page. The top of the notebook is presented in the Figure below.
<img src="https://gitlab.cern.ch/LHCData/lhc-sm-hwc/raw/master/figures/swan-rb-fpa-analysis-intro.png" width=50%>
#### 5.2. Connect to the NXCALS Spark Cluster
Once a notebook is opened, please click a star button as shown in Figure below in order to open the Spark cluster configuration in a panel on the right side of an active notebook.
<img src="https://gitlab.cern.ch/LHCData/lhc-sm-hwc/raw/master/figures/swan-open-spark-cluster-configuration.png" width=50%>
The last step is a confirmation of a successful connection to the cluster.
<img src="https://gitlab.cern.ch/LHCData/lhc-sm-hwc/raw/master/figures/swan-spark-cluster-connection.png" width=75%>
#### 5.3. Analysis Notebook Execution
A notebook is composed of cells. A cell contains either markdown text with a description or Python code to execute. Cells with markdown text have a white background and can contain text, tables, figures, and hyperlinks. Cells with code have a gray background and are executed by clicking the run icon in the top bar highlighted in the Figure below. Alternatively, one can put the cursor in a code cell and press the keyboard shortcut Ctrl+Enter.
<img src="https://gitlab.cern.ch/LHCData/lhc-sm-hwc/raw/master/figures/swan-execute-cell.png" width=50%>
The state of a cell is indicated by square brackets located to the left of the cell.
1. Select from the top menu: `Kernel` -> `Interrupt` and execute the problematic cell again (either a run button (cf. Figure above) or Ctrl+Enter).
2. In case the first option does not help, select from the top menu `Kernel` -> `Restart & Clear Output`. Then all cells prior to the problematic one have to be executed again (multiple-cell selection is possible by clicking to the left of a cell to select it and then selecting others while pressing the Shift button). After this operation one needs to reconnect to the NXCALS Spark cluster.
#### 5.4. Analysis Assumptions:
1. We consider standard analysis scenarios, i.e., all signals can be queried. Depending on what signal is missing, an analysis can raise a warning and continue or an error and abort the analysis.
2. It is recommended to execute each cell one after another. However, since the signals are queried prior to an analysis, any order of execution is allowed. In case an analysis cell is aborted, the following ones may not be executed (e.g. I_MEAS not present).
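The warning-vs-abort behaviour for missing signals can be pictured with a small hypothetical helper (`get_signal` and the `signals` mapping are illustrative, not part of the `lhc-sm-api` package):

```python
import warnings

def get_signal(signals, name, required=True):
    """Fetch a queried signal from a {name: data} mapping.

    A missing required signal aborts the analysis with an error;
    a missing optional signal raises a warning and the analysis continues.
    """
    if name in signals:
        return signals[name]
    if required:
        raise RuntimeError(f"Signal {name} missing - aborting analysis")
    warnings.warn(f"Signal {name} missing - continuing without it")
    return None
```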
#### 5.5. FPA Notebooks
##### Analysis Workflow
An FPA analysis workflow consists of four steps:
- finding an FGC Post Mortem timestamp;
- executing analysis cells on the cluster;
- plotting and validating the analysis results;
- storing output files on EOS; see Figure below.
<img src="https://gitlab.cern.ch/LHCData/lhc-sm-hwc/raw/master/figures/fpa-analysis-workflow.png" width=75%>
e.g.,
- a CSV file with the MP3 results table containing a subset of the analysis results - [circuit-name]_FPA-[fgc-timestamp]-[analysis-execution-date].csv;
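The naming pattern above can be reproduced with a small formatter; a sketch, where the exact date format used by the notebooks is an assumption:

```python
from datetime import datetime

def mp3_csv_name(circuit_name, fgc_timestamp, execution_date):
    """Build [circuit-name]_FPA-[fgc-timestamp]-[analysis-execution-date].csv.

    `fgc_timestamp` is the FGC Post Mortem timestamp; the date format
    "%Y-%m-%d-%Hh%M" is assumed for illustration.
    """
    date_str = execution_date.strftime("%Y-%m-%d-%Hh%M")
    return f"{circuit_name}_FPA-{fgc_timestamp}-{date_str}.csv"
```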
#### 5.6. HWC Notebooks
##### Analysis Workflow
An HWC analysis workflow consists of four steps:
- finding the start and end time of an HWC test;
- executing analysis cells on the cluster;
- plotting and validating the analysis results;
- storing output files on EOS; see Figure below.