Commit 79b123da authored by Michal Maciejewski
Created a notebook for IPD analysis
parent fdc09cb6
%% Cell type:markdown id: tags:
<h1><center>Analysis of an FPA in an IT Circuit</center></h1>
The main quadrupole magnet circuits of the 8 Inner Triplet (IT) systems in the LHC are composed of four single aperture quadrupole magnets in series and have a particular powering configuration, consisting of three nested power converters (PC), see Figure below.
<img src="https://gitlab.cern.ch/LHCData/lhc-sm-hwc/-/raw/master/figures/it/IT.png" width=75%>
Main quadrupole magnet circuit of the Inner Triplet system for ITs at points 1 and 5 (left) and ITs at points 2 and 8 (right).
Note that the configuration for the ITs in points 1 and 5 is different from the configuration in points 2 and 8. An earth detection system is present at the negative terminal of the RTQX2 converter. Detailed information concerning the converters is given in EDMS 1054483. Note that the currents in the quadrupole magnets are given by:
\begin{equation}
I_\text{Q1} = I_\text{RQX} + I_\text{RTQX1} \\
I_\text{Q2} = I_\text{RQX} + I_\text{RTQX2} \\
I_\text{Q3} = I_\text{RQX} \\
\end{equation}
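The current relations above can be sketched as a small helper; the function name and dict layout are illustrative, not part of the notebook's API:

``` python
# Hypothetical sketch: derive the magnet currents of the IT circuit from the
# three nested power converter currents, per the equations above.
def magnet_currents(i_rqx: float, i_rtqx1: float, i_rtqx2: float) -> dict:
    """Return per-magnet currents in amperes."""
    return {
        'I_Q1': i_rqx + i_rtqx1,  # Q1 sees RQX + RTQX1
        'I_Q2': i_rqx + i_rtqx2,  # Q2a/Q2b see RQX + RTQX2
        'I_Q3': i_rqx,            # Q3 carries only RQX
    }

print(magnet_currents(6800.0, 550.0, 4600.0))
```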
The two magnets Q1 and Q3 are type MQXA and the two combined magnets Q2a and Q2b are type MQXB. Q1 is located towards the interaction point.
Note that the ITs at points 2 and 8 have a slightly higher nominal operating current than the ITs at points 1 and 5, see Table 1.
|Circuit|I\_PNO RQX|I\_PNO RTQX2|I\_PNO RTQX1|
|-------|----------|------------|------------|
|RQX.L2, RQX.R2, RQX.L8, RQX.R8|7180 A| 4780 A|550 A|
|RQX.L1, RQX.R1, RQX.L5, RQX.R5|6800 A| 4600 A|550 A|
Table 1: Nominal operating currents at 7 TeV of the three PCs, as given in the LHC Design Report, Volume I. For the nominal current during HWC see EDMS 1375861.
source: Test Procedure and Acceptance Criteria for the Inner Triplet Circuits in the LHC, MP3 Procedure, <a href="https://edms.cern.ch/document/874886/2.1">https://edms.cern.ch/document/874886/2.1</a>
%% Cell type:markdown id: tags:
# Analysis Assumptions
- We consider standard analysis scenarios, i.e., all signals can be queried. If a signal is missing, an analysis can either raise a warning and continue, or raise an error and abort.
- If a signal required by a particular analysis is not available, that analysis is skipped. In other words, all of its input signals have to be available in order to perform an analysis.
- It is recommended to execute the cells one after another. However, since the signals are queried prior to analysis, any order of execution is allowed. If an analysis cell is aborted, the following ones may not be executable (e.g., I\_MEAS not present).
# Plot Convention
- Scales are labeled with signal name followed by a comma and a unit in square brackets, e.g., I_MEAS, [A].
- If a reference signal is present, it is represented with a dashed line.
- If the main current is present, its axis is on the left. Remaining signals are attached to the axis on the right. The legend of these signals is located on the lower left and upper right, respectively.
- The grid comes from the left axis.
- The title contains the timestamp, circuit name, and signal name, allowing one to re-access the signal.
- The plots assigned to the left scale have colors: blue (C0) and orange (C1). Plots presented on the right have colors green (C2) and red (C3).
- Each plot has an individual time-synchronization mentioned explicitly in the description.
- If an axis has a single signal, then the color of the label matches the signal's color. Otherwise, the label color is black.
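The convention above can be illustrated with a generic matplotlib sketch (this is an assumed illustration, not the notebook's plotting code; signal values and the title are synthetic):

``` python
import matplotlib
matplotlib.use('Agg')  # headless backend for a script
import matplotlib.pyplot as plt
import numpy as np

t = np.linspace(-0.1, 0.5, 200)
i_meas = 6800.0 * np.exp(-np.maximum(t, 0.0) / 0.2)  # synthetic decaying current
i_ref = np.where(t < 0.0, 6800.0, 0.0)               # synthetic reference
u_res = np.where(t > 0.0, 0.2, 0.0)                  # synthetic resistive voltage

fig, ax_left = plt.subplots()
ax_right = ax_left.twinx()

# Main current on the left axis: C0 solid; its reference C1 dashed.
ax_left.plot(t, i_meas, color='C0', label='I_MEAS')
ax_left.plot(t, i_ref, color='C1', linestyle='--', label='I_REF')
ax_left.set_ylabel('I_MEAS, [A]')   # two signals share the axis -> black label
ax_left.legend(loc='lower left')
ax_left.grid(True)                  # the grid comes from the left axis

# Remaining signal on the right axis: single signal -> label in its color.
ax_right.plot(t, u_res, color='C2', label='U_RES')
ax_right.set_ylabel('U_RES, [V]', color='C2')
ax_right.legend(loc='upper right')

ax_left.set_title('2018-08-30 12:00:00, RQX.R1, I_MEAS')  # timestamp, circuit, signal
fig.savefig('plot_convention_demo.png')
```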
%% Cell type:markdown id: tags:
# 0. Initialise Working Environment
## 0.1. Import Necessary Packages
%% Cell type:code id: tags:
``` python
import io
import re
import sys
import pandas as pd
import numpy as np
from datetime import datetime
import time
from IPython.display import display, Javascript
from lhcsmapi.Time import Time
from lhcsmapi.Timer import Timer
from lhcsmapi.pyedsl.QueryBuilder import QueryBuilder
from lhcsmapi.analysis.ItCircuitQuery import ItCircuitQuery
from lhcsmapi.analysis.ItCircuitAnalysis import ItCircuitAnalysis
from lhcsmapi.analysis.expert_input import get_expert_decision
from lhcsmapi.analysis.report_template import apply_report_template
# GUI
from lhcsmapi.gui.qh.DateTimeBaseModule import DateTimeBaseModule
from lhcsmapi.gui.pc.FgcPmSearchModuleMediator import FgcPmSearchModuleMediator
from lhcsmapi.gui.pc.ItFgcPmSearchBaseModule import ItFgcPmSearchBaseModule
analysis_start_time = datetime.now().strftime("%Y.%m.%d_%H%M%S.%f")
```
%% Cell type:markdown id: tags:
## 0.2. LHCSMAPI Version
%% Cell type:code id: tags:
``` python
import lhcsmapi
print('Analysis executed with lhcsmapi version: {}'.format(lhcsmapi.__version__))
```
%% Cell type:markdown id: tags:
## 0.3. Notebook Version
%% Cell type:code id: tags:
``` python
with io.open("../__init__.py", "rt", encoding="utf8") as f:
    version = re.search(r'__version__ = "(.*?)"', f.read()).group(1)
print('Analysis executed with lhc-sm-hwc notebooks version: {}'.format(version))
```
%% Cell type:markdown id: tags:
# 1. Select FGC Post Mortem Entry
%% Cell type:markdown id: tags:skip_cell
In order to perform the analysis of an FPA in an IT circuit, please:
1. Select a circuit name (e.g., RQX.R1)
2. Choose the start and end time
3. Choose the analysis mode (Automatic by default)
Once these inputs are provided, click the 'Find FGC PM entries' button. This triggers a search of the PM database for timestamps of FGC events associated with the selected circuit name in the provided period of time. Select one timestamp from the 'FGC PM Entries' list to be processed by the following cells.
%% Cell type:code id: tags:
``` python
circuit_type = 'IT'
fgc_pm_search = FgcPmSearchModuleMediator(DateTimeBaseModule(start_date_time='2018-08-29 00:00:00+01:00',
                                                             end_date_time='2018-08-31 00:00:00+01:00'),
                                          ItFgcPmSearchBaseModule(), circuit_type=circuit_type)
display(fgc_pm_search.widget)
```
%% Cell type:markdown id: tags:
# 2. Query All Signals Prior to Analysis
%% Cell type:code id: tags:skip_output
``` python
circuit_name = fgc_pm_search.get_circuit_name()
timestamp_fgc = fgc_pm_search.get_fgc_timestamp()
author = fgc_pm_search.get_author()
is_automatic = fgc_pm_search.is_automatic_mode()
it_query = ItCircuitQuery(circuit_type, circuit_name, max_executions=12)
with Timer():
    # PIC
    timestamp_pic = it_query.find_timestamp_pic(timestamp_fgc, spark=spark)

    # PC
    source_timestamp_fgc_df = QueryBuilder().with_pm() \
        .with_duration(t_start=timestamp_fgc, duration=[(1, 's'), (1, 's')]) \
        .with_circuit_type(circuit_type) \
        .with_metadata(circuit_name=circuit_name, system='PC', source='*') \
        .event_query().df

    source_fgc_rqx, timestamp_fgc_rqx = it_query.split_source_timestamp_fgc(source_timestamp_fgc_df, 'RQX')
    source_fgc_rtqx1, timestamp_fgc_rtqx1 = it_query.split_source_timestamp_fgc(source_timestamp_fgc_df, 'RTQX1')
    source_fgc_rtqx2, timestamp_fgc_rtqx2 = it_query.split_source_timestamp_fgc(source_timestamp_fgc_df, 'RTQX2')

    i_meas_rqx_df, i_ref_rqx_df, i_a_rqx_df, i_earth_rqx_df = it_query.query_pc_pm_with_source(
        timestamp_fgc_rqx, timestamp_fgc_rqx, source_fgc_rqx, signal_names=['I_MEAS', 'I_REF', 'I_A', 'I_EARTH'])
    i_meas_rtqx1_df, i_ref_rtqx1_df, i_a_rtqx1_df, i_earth_rtqx1_df = it_query.query_pc_pm_with_source(
        timestamp_fgc_rtqx1, timestamp_fgc_rtqx1, source_fgc_rtqx1, signal_names=['I_MEAS', 'I_REF', 'I_A', 'I_EARTH'])
    i_meas_rtqx2_df, i_ref_rtqx2_df, i_a_rtqx2_df, i_earth_rtqx2_df = it_query.query_pc_pm_with_source(
        timestamp_fgc_rtqx2, timestamp_fgc_rtqx2, source_fgc_rtqx2, signal_names=['I_MEAS', 'I_REF', 'I_A', 'I_EARTH'])

    i_meas_q1_df = pd.DataFrame(index=i_meas_rqx_df.index, data=i_meas_rqx_df.values + i_meas_rtqx1_df.values, columns=['I_MEAS_Q1'])
    i_meas_q2_df = pd.DataFrame(index=i_meas_rqx_df.index, data=i_meas_rqx_df.values + i_meas_rtqx2_df.values, columns=['I_MEAS_Q2'])
    i_meas_q3_df = pd.DataFrame(index=i_meas_rqx_df.index, data=i_meas_rqx_df.values, columns=['I_MEAS_Q3'])

    # QDS
    source_timestamp_qds_df = it_query.find_source_timestamp_qds(timestamp_fgc_rqx, duration=[(2, 's'), (2, 's')])
    timestamp_qds = np.nan if source_timestamp_qds_df.empty else source_timestamp_qds_df.loc[0, 'timestamp']
    u_res_q1_df, u_res_q2_df, u_res_q3_df = it_query.query_qds_pm(timestamp_qds, timestamp_qds, signal_names=['U_RES_Q1', 'U_RES_Q2', 'U_RES_Q3'])
    u_1_q1_df, u_1_q2_df, u_1_q3_df = it_query.query_qds_pm(timestamp_qds, timestamp_qds, signal_names=['U_1_Q1', 'U_1_Q2', 'U_1_Q3'])
    u_2_q1_df, u_2_q2_df, u_2_q3_df = it_query.query_qds_pm(timestamp_qds, timestamp_qds, signal_names=['U_2_Q1', 'U_2_Q2', 'U_2_Q3'])

    # QH
    u_hds_dfss = it_query.query_qh_pm(source_timestamp_qds_df, signal_names=['U_HDS'])
    u_hds_dfs = u_hds_dfss[0] if u_hds_dfss else []

    # Reference
    u_hds_ref_dfss = it_query.query_qh_pm(source_timestamp_qds_df, signal_names=['U_HDS'], is_ref=True)
    u_hds_ref_dfs = u_hds_ref_dfss[0] if u_hds_ref_dfss else []

    # LEADS
    u_hts_dfs = it_query.query_leads(timestamp_fgc, source_timestamp_qds_df, system='LEADS', signal_names=['U_HTS'], spark=spark, duration=[(300, 's'), (900, 's')])
    u_res_dfs = it_query.query_leads(timestamp_fgc, source_timestamp_qds_df, system='LEADS', signal_names=['U_RES'], spark=spark, duration=[(300, 's'), (900, 's')])

    # Results Table
    results_table = it_query.create_report_analysis_template(timestamp_fgc=timestamp_fgc, author=author)
    it_analysis = ItCircuitAnalysis(circuit_type, results_table, is_automatic=is_automatic)
```
%% Cell type:markdown id: tags:
# 3. Timestamps
The analysis for MP3 consists of checking the existence of the PM file and the consistency of the PM timestamps (PC, QPS). The criteria for passing this test are described in detail in 600APIC2.
In short, the following criteria should be checked:
- The PC timestamp (51_self) is within ±20 ms of the QPS timestamp.
- The difference between the QPS board A and board B timestamps is 1 ms.
If one or more of these conditions is not fulfilled, an in-depth analysis has to be performed by the QPS team.
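The two consistency criteria can be sketched as follows; this is an assumed illustration (the notebook builds a timestamp table instead), with timestamps taken as integers in nanoseconds as in PM data, and the 1 ms board criterion read as "within 1 ms":

``` python
# Hypothetical sketch of the timestamp consistency checks described above.
def check_timestamps(t_pc_ns: int, t_qds_a_ns: int, t_qds_b_ns: int) -> dict:
    ms = 1_000_000  # nanoseconds per millisecond
    return {
        'pc_within_20ms_of_qps': abs(t_pc_ns - t_qds_a_ns) <= 20 * ms,
        'qds_a_b_within_1ms': abs(t_qds_a_ns - t_qds_b_ns) <= 1 * ms,
    }

# 5 ms PC-QPS offset and 1 ms between QDS boards: both criteria pass.
print(check_timestamps(1_000_000_000, 995_000_000, 996_000_000))
```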
%% Cell type:code id: tags:
``` python
timestamp_dct = {'FGC_RQX': timestamp_fgc_rqx, 'FGC_RTQX1': timestamp_fgc_rtqx1, 'FGC_RTQX2': timestamp_fgc_rtqx2,
                 'PIC': timestamp_pic,
                 'QDS_A': source_timestamp_qds_df.loc[0, 'timestamp'] if len(source_timestamp_qds_df) > 0 else np.nan,
                 'QDS_B': source_timestamp_qds_df.loc[1, 'timestamp'] if len(source_timestamp_qds_df) > 1 else np.nan}
it_analysis.create_timestamp_table(timestamp_dct)
```
%% Cell type:markdown id: tags:
# 4. PC
## 4.1. Main Current
*QUERY*:
|Variable Name |Variable Type |Variable Unit |Database|Comment
|---------------|---------------|---------------|--------|------|
|i_meas_rq_df |DataFrame |A |PM|Main current of a power converter, I_MEAS_RQX, I_MEAS_RTQX1, I_MEAS_RTQX2|
|i_a_rq_df |DataFrame |A |PM|Actual current of a power converter, I_A_RQX, I_A_RTQX1, I_A_RTQX2|
|i_ref_rq_df |DataFrame |A |PM|Reference current of a power converter, I_REF_RQX, I_REF_RTQX1, I_REF_RTQX2|
Note that **rq** in the table above denotes RQX, RTQX1, and RTQX2, i.e., there are three signals for each power converter.
*ANALYSIS*:
- calculation of the ramp rate
- calculation of the duration of a plateau prior to a quench
- calculation of DCCT MIIts
*GRAPHS*:
- t = 0 s corresponds to the FGC timestamp
- one plot for each power converter
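The MIITs figure computed below is the time integral of the squared current, conventionally expressed in units of 10^6 A²·s. A minimal illustrative sketch (not the lhcsmapi implementation) on a synthetic decay:

``` python
import numpy as np

def miits(time_s: np.ndarray, current_a: np.ndarray) -> float:
    """Trapezoidal integral of I^2 over time, in 1e6 A^2*s."""
    i_sq = current_a ** 2
    dt = np.diff(time_s)
    return float(np.sum(0.5 * (i_sq[1:] + i_sq[:-1]) * dt) / 1e6)

t = np.linspace(0.0, 1.0, 1001)
i = 6800.0 * np.exp(-t / 0.2)  # synthetic exponential current decay
print(round(miits(t, i), 2))
```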
%% Cell type:code id: tags:
``` python
it_analysis.plot_i_meas_pc(circuit_name, timestamp_fgc_rqx, [i_meas_rqx_df, i_ref_rqx_df, i_meas_rtqx1_df, i_ref_rtqx1_df, i_meas_rtqx2_df, i_ref_rtqx2_df], xlim=(-0.1, 0.5))
```
%% Cell type:code id: tags:
``` python
t_quench = it_analysis.find_time_of_quench(i_ref_rqx_df)
it_analysis.plot_i_meas_pc(circuit_name, timestamp_fgc_rqx, [i_meas_rqx_df, i_ref_rqx_df, i_a_rqx_df],
xlim=(-0.1, 0.2), ylim=(i_meas_rqx_df.max().values[0]-1500, i_meas_rqx_df.max().values[0]+500))
it_analysis.calculate_current_slope(i_meas_rqx_df.rename(columns={'STATUS.I_MEAS_RQX': 'STATUS.I_MEAS'}))
it_analysis.calculate_current_miits(i_meas_rqx_df, t_quench, col_name='MITTs_DCCT_RQX')
it_analysis.calculate_quench_current(i_meas_rqx_df, t_quench, col_name='I_RQX')
```
%% Cell type:code id: tags:
``` python
it_analysis.plot_i_meas_pc(circuit_name.replace('RQX', 'RTQX1'), timestamp_fgc_rqx, [i_meas_rtqx1_df, i_ref_rtqx1_df, i_a_rtqx1_df],
xlim=(-0.1, 0.2), ylim=(i_meas_rtqx1_df.max().values[0] - 250, i_meas_rtqx1_df.max().values[0] + 250))
it_analysis.calculate_current_slope(i_meas_rtqx1_df.rename(columns={'STATUS.I_MEAS_RTQX1': 'STATUS.I_MEAS'}))
it_analysis.calculate_current_miits(i_meas_rtqx1_df, t_quench, col_name='MITTs_DCCT_RTQX1')
it_analysis.calculate_quench_current(i_meas_rtqx1_df, t_quench, col_name='I_RTQX1')
```
%% Cell type:code id: tags:
``` python
it_analysis.plot_i_meas_pc(circuit_name.replace('RQX', 'RTQX2'), timestamp_fgc_rqx, [i_meas_rtqx2_df, i_ref_rtqx2_df, i_a_rtqx2_df],
xlim=(-0.1, 0.2), ylim=(i_meas_rtqx2_df.max().values[0] - 1500, i_meas_rtqx2_df.max().values[0] + 500))
it_analysis.calculate_current_slope(i_meas_rtqx2_df.rename(columns={'STATUS.I_MEAS_RTQX2': 'STATUS.I_MEAS'}))
it_analysis.calculate_current_miits(i_meas_rtqx2_df, t_quench, col_name='MITTs_DCCT_RTQX2')
it_analysis.calculate_quench_current(i_meas_rtqx2_df, t_quench, col_name='I_RTQX2')
```
%% Cell type:markdown id: tags:
## 4.2. Magnet Current
*QUERY*:
*No need for a query as the magnet currents are calculated from I_MEAS power converter currents*
|Variable Name |Variable Type |Variable Unit |Comment
|---------------|---------------|---------------|------|
|i_meas_q_df |DataFrame |A |Current of a magnet, I_MEAS_Q1, I_MEAS_Q2, I_MEAS_Q3|
Note that **q** in the table above denotes Q1, Q2, and Q3, i.e., there is one signal for each magnet.
*ANALYSIS*:
- calculation of the magnet MIITs
- calculation of the quench current
*GRAPHS*:
- t = 0 s corresponds to the FGC timestamp
- one plot for each magnet
%% Cell type:code id: tags:
``` python
it_analysis.plot_magnet_current(circuit_name, timestamp_fgc, i_meas_q1_df, i_meas_q2_df, i_meas_q3_df)
it_analysis.calculate_current_miits(i_meas_q1_df, t_quench, col_name='MITTs_Q1')
it_analysis.calculate_quench_current(i_meas_q1_df, t_quench, col_name='I_Q1')
it_analysis.calculate_current_miits(i_meas_q2_df, t_quench, col_name='MITTs_Q2')
it_analysis.calculate_quench_current(i_meas_q2_df, t_quench, col_name='I_Q2')
it_analysis.calculate_current_miits(i_meas_q3_df, t_quench, col_name='MITTs_Q3')
it_analysis.calculate_quench_current(i_meas_q3_df, t_quench, col_name='I_Q3')
```
%% Cell type:markdown id: tags:
## 4.3. Earth Current
*QUERY*:
|Variable Name |Variable Type |Variable Unit |Database|Comment
|---------------|---------------|---------------|--------|-------|
|i_earth_rq_df |DataFrame |A |PM|Earth current of a power converter, I_EARTH_RQX, I_EARTH_RTQX1, I_EARTH_RTQX2|
Note that **rq** in the table above denotes RQX, RTQX1, and RTQX2, i.e., there is one signal for each power converter.
*ANALYSIS*:
- calculation of the maximum earth current
*GRAPHS*:
- t = 0 s corresponds to the FGC timestamp
- one plot for each power converter
%% Cell type:code id: tags:
``` python
it_analysis.plot_i_earth_pc(circuit_name, timestamp_fgc_rqx, i_earth_rqx_df)
it_analysis.calculate_max_i_earth_pc(i_earth_rqx_df, col_name='max_I_EARTH_RQX')
```
%% Cell type:code id: tags:
``` python
it_analysis.plot_i_earth_pc(circuit_name.replace('RQX', 'RTQX1'), timestamp_fgc_rtqx1, i_earth_rtqx1_df)
it_analysis.calculate_max_i_earth_pc(i_earth_rtqx1_df, col_name='max_I_EARTH_RTQX1')
```
%% Cell type:code id: tags:
``` python
it_analysis.plot_i_earth_pc(circuit_name.replace('RQX', 'RTQX2'), timestamp_fgc_rtqx2, i_earth_rtqx2_df)
it_analysis.calculate_max_i_earth_pc(i_earth_rtqx2_df, col_name='max_I_EARTH_RTQX2')
```
%% Cell type:markdown id: tags:
# 5. Quench Detection System
The signal names used for quench detection are shown in the figure below (picture from A. Erokhin).
<img src="https://gitlab.cern.ch/LHCData/lhc-sm-hwc/-/raw/master/figures/it/IT_QPS_Signals.png" width=50%>
A quench in the superconducting circuits is detected and all the quench heaters are fired as soon as one of the following signals exceeds the threshold.
\begin{equation}
U_\text{RES,Q1} = U_\text{1,Q1} + U_\text{2,Q1} \\
U_\text{RES,Q2} = U_\text{1,Q2} + U_\text{2,Q2} \\
U_\text{RES,Q3} = U_\text{1,Q3} + U_\text{2,Q3} \\
\end{equation}
More details on the QPS system and the quench heaters can be found on the MP3 web site (https://cern.ch/MP3).
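The detection logic above can be sketched as follows; the function names are illustrative (not the QPS firmware or lhcsmapi API), and the 100 mV threshold is taken from the quench-origin criterion in Section 5.1:

``` python
# Hypothetical sketch: U_RES per magnet is the sum of the two aperture
# voltages; the heaters fire as soon as any U_RES exceeds the threshold.
def u_res(u_1: float, u_2: float) -> float:
    return u_1 + u_2

def quench_detected(u_res_q1: float, u_res_q2: float, u_res_q3: float,
                    threshold: float = 0.1) -> bool:
    return any(abs(u) > threshold for u in (u_res_q1, u_res_q2, u_res_q3))

# Q2 reaches 150 mV while Q1 and Q3 stay low -> detection triggers.
print(quench_detected(u_res(0.02, 0.03), u_res(0.08, 0.07), u_res(0.01, 0.0)))
```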
%% Cell type:markdown id: tags:
## 5.1. Resistive Voltage
*QUERY*:
|Variable Name |Variable Type |Variable Unit |Database|Comment
|---------------|---------------|---------------|--------|------|
|u_res_q_df |DataFrame |V |PM|Resistive voltage of magnets measured with QPS, U_RES_Q1, U_RES_Q2, U_RES_Q3|
|u_1_q_df |DataFrame |V |PM|Voltage of the first aperture measured with QPS, U_1_Q1, U_1_Q2, U_1_Q3|
|u_2_q_df |DataFrame |V |PM|Voltage of the second aperture measured with QPS, U_2_Q1, U_2_Q2, U_2_Q3|
Note that **q** in the table above denotes Q1, Q2, and Q3, i.e., there are three signals for each magnet.
*ANALYSIS*:
- Initial voltage slope of U_RES signal. The slope is calculated as a ratio of the voltage change from 50 to 200 mV and the corresponding time change.
- Origin of a quench, i.e., name of the first magnet for which the U_RES voltage exceeded the 100 mV threshold.
*GRAPHS*:
t = 0 s corresponds to the QPS timestamp
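The slope definition above (voltage change from 50 to 200 mV divided by the corresponding time change) can be sketched on a synthetic signal; this is an assumed illustration, not the lhcsmapi `calculate_u_res_slope` implementation:

``` python
import numpy as np

def u_res_slope(time_s: np.ndarray, u_res_v: np.ndarray,
                v_low: float = 0.05, v_high: float = 0.2) -> float:
    """Slope of U_RES between the first crossings of v_low and v_high."""
    t_low = time_s[np.argmax(u_res_v >= v_low)]
    t_high = time_s[np.argmax(u_res_v >= v_high)]
    return (v_high - v_low) / (t_high - t_low)

t = np.linspace(0.0, 0.1, 1001)
u = 5.0 * t  # synthetic linear rise of 5 V/s
print(u_res_slope(t, u))
```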
%% Cell type:code id: tags:
``` python
it_analysis.plot_u_res_q1_q2_q3(circuit_name, timestamp_qds, u_res_q1_df, u_res_q2_df, u_res_q3_df)
it_analysis.find_quench_origin(u_res_q1_df, u_res_q2_df, u_res_q3_df)
it_analysis.results_table['Reason'] = get_expert_decision('Reason for FPA: ', ['quench', 'coupling', 'other'])
```
%% Cell type:code id: tags:
``` python
u_res_q1_slope_df = it_analysis.calculate_u_res_slope(u_res_q1_df, col_name='dU_RES_dt_Q1')
it_analysis.plot_u_res_u_res_slope_u_1_u_2(circuit_name, timestamp_qds, u_res_q1_df, u_res_q1_slope_df, u_1_q1_df, u_2_q1_df, suffix='_Q1')
```
%% Cell type:code id: tags:
``` python
u_res_q2_slope_df = it_analysis.calculate_u_res_slope(u_res_q2_df, col_name='dU_RES_dt_Q2')
it_analysis.plot_u_res_u_res_slope_u_1_u_2(circuit_name, timestamp_qds, u_res_q2_df, u_res_q2_slope_df, u_1_q2_df, u_2_q2_df, suffix='_Q2')
```
%% Cell type:code id: tags:
``` python
u_res_q3_slope_df = it_analysis.calculate_u_res_slope(u_res_q3_df, col_name='dU_RES_dt_Q3')
it_analysis.plot_u_res_u_res_slope_u_1_u_2(circuit_name, timestamp_qds, u_res_q3_df, u_res_q3_slope_df, u_1_q3_df, u_2_q3_df, suffix='_Q3')
```
%% Cell type:code id: tags:
``` python
it_analysis.results_table['Reason'] = get_expert_decision('Reason for FPA: ', ['quench', 'coupling', 'other'])
```
%% Cell type:markdown id: tags:
# 6. Quench Heaters
*QUERY*:
|Variable Name |Variable Type |Variable Unit |Database|Comment
|---------------|---------------|---------------|--------|------|
|u_hds_dfs |list of DataFrame| V |PM| A list of U_HDS_1-2/U_HDS_1-4, signals from PM for quenched magnets|
|u_hds_ref_dfs |list of DataFrame| V |PM| A list of U_HDS_1-2 from PM for reference magnets|
*CRITERIA*:
- all characteristic times of an exponential decay calculated with the 'charge' approach for voltage are within ±5 ms of the reference ones
- the initial voltage should be between 810 V and 1000 V
- the final voltage should be between 0 V and 10 V
*GRAPHS*:
t = 0 s corresponds to the start of the pseudo-exponential decay
Voltage view (linear and log)
- the queried and filtered quench heater voltage on the left axis (actual signal continuous, reference dashed), U_HDS
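The 'charge' approach used for the criterion above estimates the decay time constant from the integral of the voltage, as in the `tau_charge` function of the attached `ExpSignalAnalysis` module: for a clean exponential, the integral of U divided by the start-to-end voltage drop equals the time constant. A minimal sketch on a synthetic discharge (illustrative numbers):

``` python
import numpy as np

def tau_charge(time_s: np.ndarray, u_v: np.ndarray) -> float:
    """Decay time constant from integral(U dt) / (U_start - U_end)."""
    integral = float(np.sum(0.5 * (u_v[1:] + u_v[:-1]) * np.diff(time_s)))
    return integral / (u_v[0] - u_v[-1])

t = np.linspace(0.0, 0.5, 5001)
u = 900.0 * np.exp(-t / 0.075)  # synthetic 900 V discharge, tau = 75 ms
print(round(tau_charge(t, u) * 1000, 1))  # in ms
```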
%% Cell type:code id: tags:
``` python
if u_hds_dfs:
    it_analysis.analyze_qh(circuit_name, timestamp_qds, u_hds_dfs, u_hds_ref_dfs)
```
%% Cell type:markdown id: tags:
# 7. Current Leads
*QUERY*:
|Variable Name |Variable Type |Variable Unit |Database|Comment
|---------------|---------------|---------------|--------|------|
|u_res_dfs |list of DataFrame |V |PM|Leads resistive voltage, U_RES|
|u_hts_dfs |list of DataFrame |V |PM|Leads HTS voltage, U_HTS|
*CRITERIA*:
- quench detection for U_HTS for 2 consecutive datapoints above the threshold of 3 mV
- detection for U_RES for 2 consecutive datapoints above the threshold of 100 mV
*GRAPHS*:
- t = 0 s corresponds to the LEADS timestamp.
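The detection rule above (two consecutive datapoints beyond the threshold) can be sketched as follows; this is an assumed illustration, not the lhcsmapi `analyze_leads_voltage` implementation:

``` python
# Hypothetical sketch: flag when two consecutive samples exceed the
# threshold in absolute value, as in the U_HTS (3 mV) and U_RES (100 mV)
# criteria above.
def detect_two_consecutive(values, threshold):
    above = [abs(v) > threshold for v in values]
    return any(a and b for a, b in zip(above, above[1:]))

u_hts = [0.001, 0.004, 0.0005, 0.004, 0.005, 0.001]  # volts
print(detect_two_consecutive(u_hts, 0.003))
```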
%% Cell type:code id: tags:
``` python
it_analysis.analyze_leads_voltage(u_hts_dfs, circuit_name, timestamp_qds, signal='U_HTS', value_min=-0.003, value_max=0.003)
```
%% Cell type:code id: tags:
``` python
it_analysis.analyze_leads_voltage(u_res_dfs, circuit_name, timestamp_qds, signal='U_RES', value_min=-0.1, value_max=0.1)
```
%% Cell type:markdown id: tags:
# 8. Final Report
## 8.1. Display Results Table
%% Cell type:code id: tags:skip_output
``` python
pd.set_option('display.max_columns', None)
it_analysis.results_table
```
%% Cell type:markdown id: tags:
## 8.2. Write Results Table to CSV
%% Cell type:code id: tags:
``` python
file_name = "{}-{}-PM_IT_FPA".format(Time.to_datetime(timestamp_fgc).strftime("%Y.%m.%d_%H%M%S.%f"), analysis_start_time)
```
%% Cell type:code id: tags:
``` python
# Create folder for circuit_name
!mkdir -p /eos/project/l/lhcsm/operation/IT/$circuit_name
full_path = '/eos/project/l/lhcsm/operation/IT/{}/{}_results_table.csv'.format(circuit_name, file_name)
it_analysis.results_table.to_csv(full_path)
print('Full results table saved to (Windows): ' + '\\\\cernbox-smb' + full_path.replace('/', '\\'))
```
%% Cell type:markdown id: tags:
## 8.3. Display MP3 Results Table
%% Cell type:code id: tags:skip_output
``` python
mp3_results_table = it_analysis.create_mp3_results_table()
mp3_results_table
```
%% Cell type:markdown id: tags:
## 8.4. Write MP3 Quench Database Table to CSV
%% Cell type:code id: tags:
``` python
!mkdir -p /eos/project/l/lhcsm/operation/IT/$circuit_name
full_path = '/eos/project/l/lhcsm/operation/IT/{}/{}_mp3_results_table.csv'.format(circuit_name, file_name)
mp3_results_table.to_csv(full_path)
print('MP3 results table saved to (Windows): ' + '\\\\cernbox-smb' + full_path.replace('/', '\\'))
```
%% Cell type:markdown id: tags:
## 8.5. Export Notebook as an HTML File
%% Cell type:code id: tags:
``` python
file_name_html = file_name + '_report.html'
!mkdir -p /eos/project/l/lhcsm/operation/IT/$circuit_name
full_path = '/eos/project/l/lhcsm/operation/IT/{}/{}'.format(circuit_name, file_name_html)
print('Notebook report saved to (Windows): ' + '\\\\cernbox-smb' + full_path.replace('/', '\\'))
display(Javascript('IPython.notebook.save_notebook();'))
time.sleep(5)
!{sys.executable} -m jupyter nbconvert --to html $'AN_IT_FPA.ipynb' --output /eos/project/l/lhcsm/operation/IT/$circuit_name/$file_name_html
```
%% Cell type:markdown id: tags:
## 8.6. Export Compact Notebook as an HTML File
%% Cell type:code id: tags:
``` python
apply_report_template()
file_name_html = file_name + '_compact_report.html'
full_path = '/eos/project/l/lhcsm/operation/IT/{}/{}'.format(circuit_name, file_name_html)
print('Compact notebook report saved to (Windows): ' + '\\\\cernbox-smb' + full_path.replace('/', '\\'))
display(Javascript('IPython.notebook.save_notebook();'))
time.sleep(5)
!{sys.executable} -m jupyter nbconvert --to html $'AN_IT_FPA.ipynb' --output /eos/project/l/lhcsm/operation/IT/$circuit_name/$file_name_html --TemplateExporter.exclude_input=True --TagRemovePreprocessor.remove_all_outputs_tags='["skip_output"]' --TagRemovePreprocessor.remove_cell_tags='["skip_cell"]'
```
%% Cell type:code id: tags:
``` python
```
from scipy.optimize import leastsq
import numpy as np
import pandas as pd
from scipy import signal as s
from lhcsmapi.dbsignal.SignalAnalysis import SignalAnalysis


def tau_charge(df):
    exponent = 1
    slice_squared_df = df.pow(exponent)
    return ExpSignalAnalysis.calculate_characteristic_time_in_seconds_with_integral(slice_squared_df, exponent)


def tau_energy(df):
    exponent = 2
    slice_squared_df = df.pow(exponent)
    return ExpSignalAnalysis.calculate_characteristic_time_in_seconds_with_integral(slice_squared_df, exponent)


def tau_lin_reg(df):
    return calculate_time_constant_from_log(df)


def calculate_time_constant_from_log(df):
    # Fit a straight line to log(U) = p[0] - p[1] * t; the time constant is 1 / p[1].
    right_exp = lambda p, x: -p[1] * x + p[0]
    err = lambda p, x, y: right_exp(p, x) - y

    time = df.index
    rate = np.log(df.values)
    v0 = [0., time.min()]
    out = leastsq(err, v0, args=(time, rate))
    v = out[0]  # fit parameters
    time_constant = 1 / v[1]
    return time_constant


def tau_exp_fit(df):
    return calculate_time_constant_with_optimization(df)


def calculate_time_constant_with_optimization(df):
    # Fit U = p[0] * exp(-p[2] * (t - p[1])); the time constant is 1 / p[2].
    right_exp = lambda p, x: p[0] * np.exp(-p[2] * (x - p[1]))
    err = lambda p, x, y: right_exp(p, x) - y

    time = df.index
    rate = df.values
    v0 = [0., 0., 0.]
    out = leastsq(err, v0, args=(time, rate))
    v = out[0]  # fit parameters
    time_constant = 1 / v[2]
    return time_constant
class ExpSignalAnalysis(object):
    __name_to_function = {"tau_charge": tau_charge,
                          "tau_energy": tau_energy,
                          "tau_lin_reg": tau_lin_reg,
                          "tau_exp_fit": tau_exp_fit}

    @staticmethod
    def calculate_char_time(series, t_start_decay, t_end_decay):
        # ToDo: possible duplicate of tau_energy - to be fixed with the DSL
        index_start_decay = series.index.get_loc(t_start_decay, method='nearest')
        index_end_decay = series.index.get_loc(t_end_decay, method='nearest')
        df_decay = series.iloc[index_start_decay:index_end_decay] - series.min()
        df_decay_squared = df_decay.pow(2)
        df_decay_tau = ExpSignalAnalysis.calculate_characteristic_time_in_seconds_with_integral(
            df_decay_squared, exponent=2)
        return df_decay_tau[0]

    @staticmethod
    def calculate_characteristic_time_in_seconds_with_integral(exp_decay_in_sec, exponent):
        return exponent * np.trapz(exp_decay_in_sec, x=exp_decay_in_sec.index, axis=0) / (
            exp_decay_in_sec.iloc[0] - exp_decay_in_sec.iloc[-1])
    @staticmethod
    def get_characteristic_time_of_exponential_decay(df, function_names_to_apply):
        """Returns a DataFrame with the time constants calculated with the chosen methods.

        Note that the passed DataFrame should contain just the decay; the functions
        get_df_with_decay_only or get_window_with_given_mean_and_std can be used for filtering.

        :type df: DataFrame
        :type function_names_to_apply: list
        :return: DataFrame

        Example:
            import pandas as pd
            import numpy as np
            x = np.arange(100) * 0.001
            time_constant = 0.8
            df = pd.DataFrame(data=np.exp(-x / time_constant), index=x, columns=["Voltage_1"])
            function_names_to_apply = ["tau_charge", "tau_energy", "tau_lin_reg", "tau_exp_fit"]
            ExpSignalAnalysis.get_characteristic_time_of_exponential_decay(df, function_names_to_apply)
            >>>              Voltage_1
            >>> tau_charge         0.8
            >>> tau_energy         0.8
            >>> tau_lin_reg        0.8
            >>> tau_exp_fit        0.8
        """
        functions_to_apply = [ExpSignalAnalysis.__name_to_function[x] for x in function_names_to_apply]
        if isinstance(df, pd.DataFrame) and not df.empty:
            return df.aggregate(functions_to_apply)
        else:
            return df
    @staticmethod
    def calculate_change_in_characteristic_of_exponential_decay(origin_series, window=100):
        """Returns the change in the characteristic time of the exponential decay over time as a series."""
        sampling_time = origin_series.index[1] - origin_series.index[0]
        diff_series = origin_series.diff(periods=window) / sampling_time / window
        tau_series = -origin_series.div(diff_series)
        return tau_series
    @staticmethod
    def find_start_index_of_exponential_decay(exp_df):
        # find start index based on current
        if any('I_HDS' in col for col in exp_df.columns):
            decay_start_index = ExpSignalAnalysis.get_window_with_given_mean_and_std(exp_df.filter(regex='I_HDS_1'))
        else:
            decay_start_index = 0
        if decay_start_index != 0:
            return (decay_start_index, 'current')

        # find start index based on voltage decrease
        decay_start_index = ExpSignalAnalysis.get_window_with_deviation_from_initial_value(exp_df.filter(regex='U_HDS_1'))
        if decay_start_index != 0:
            return (decay_start_index, 'voltage')

        # if not found, then take the full dataframe
        return (0, 'failed')
    @staticmethod
    def get_df_with_decay_only(exp_df):
        """Looks for the beginning of the decay and returns a DataFrame with only the decay."""
        decay_start_index, source = ExpSignalAnalysis.find_start_index_of_exponential_decay(exp_df)
        return exp_df.iloc[(decay_start_index + 1):len(exp_df)]

    @staticmethod
    def has_window_mean_std(x, mean=5, std=0.1):
        # mean=50 => mean=5 to be able to analyse low charge discharges, by Zinur
        return (x.mean() >= mean) and (x.std() <= std)
    @staticmethod
    def get_window_with_given_mean_and_std(origin_series, index_increment=20):
        """Returns the start location of the first window with the required mean and standard deviation."""
        origin_series_short = origin_series[origin_series.index < int(len(origin_series) / 10)]
        t_start = origin_series_short.rolling(index_increment).apply(ExpSignalAnalysis.has_window_mean_std, raw=True) \
            .shift(-index_increment + 1)
        t_start = t_start.dropna()
        if len(t_start[t_start == 1]) == 0:
            return 0
        else:
            return origin_series_short.index.get_loc(t_start[t_start == 1].index[0])
    @staticmethod
    def get_window_with_deviation_from_initial_value(origin_series, initial_value=0, allowed_deviation=0.998):
        """Returns the last index at which the low-pass-filtered series still stays close to its initial value."""
        if initial_value == 0:
            initial_value = np.mean(origin_series.iloc[0:19].rolling(window=3).median()).values[0]

        # get the window with the initial values
        filtered_series = SignalAnalysis.apply_frequency_filter(origin_series)
        initial_series = filtered_series[filtered_series.values > initial_value * allowed_deviation]
        if len(initial_series) > 0:
            return np.flatnonzero(origin_series.index == initial_series.index[-1])[0]
        else:
            return 0