Commit 4125f128 authored by Marcel Rieger's avatar Marcel Rieger

Merge branch 'master' into feature/termynal_docs.

parents d19fe91d 791c2097
Pipeline #3138843 passed with stage in 1 minute and 4 seconds
[flake8]
max-line-length = 100
ignore = E306, E402, E128, E501, E722, E731, E741, F822
ignore = E306, E402, E128, E501, E502, E722, E731, E741, F822, W504
stages:
# - formatting
- docs
# temporarily disabled
# black:
# stage: formatting
# tags:
# - docker
# image: python:3.7
# script:
# - pip install --no-cache-dir --upgrade pip
# - pip install --no-cache-dir black==20.8b1
# - black -l 100 --check --diff dhi
build_image:
stage: docs
when: manual
@@ -30,12 +18,33 @@ deploy_docs:
tags:
- docker
image: ${CI_REGISTRY_IMAGE}:latest
environment:
name: inference-${CI_COMMIT_REF_SLUG}
url: ${PAGE_URL}${CI_ENVIRONMENT_SLUG}
on_stop: delete_docs
script:
- test ! -z "${PAGE_ROOT}"
- export FULL_PAGE_ROOT="${PAGE_ROOT}${CI_ENVIRONMENT_SLUG}"
- echo "FULL_PAGE_ROOT is ${FULL_PAGE_ROOT}"
- cd docs
- mkdocs build --strict
- mkdocs build
- cd site
- tar -czf site.tgz *
- sshpass -v -p ${KRB_PASS} ssh -o "StrictHostKeyChecking=no" ${KRB_USER}@lxplus.cern.ch "rm -rf ${PAGE_ROOT}/*"
- sshpass -v -p ${KRB_PASS} scp -o "StrictHostKeyChecking=no" site.tgz ${KRB_USER}@lxplus.cern.ch:${PAGE_ROOT}
- sshpass -v -p ${KRB_PASS} ssh -o "StrictHostKeyChecking=no" ${KRB_USER}@lxplus.cern.ch "cd ${PAGE_ROOT}; tar -xzf site.tgz; rm site.tgz"
- sshpass -v -p ${KRB_PASS} ssh -o "StrictHostKeyChecking=no" ${KRB_USER}@lxplus.cern.ch "[ ! -d ${FULL_PAGE_ROOT} ] && mkdir ${FULL_PAGE_ROOT} || rm -rf ${FULL_PAGE_ROOT}/*"
- sshpass -v -p ${KRB_PASS} scp -o "StrictHostKeyChecking=no" site.tgz ${KRB_USER}@lxplus.cern.ch:${FULL_PAGE_ROOT}/site.tgz
- sshpass -v -p ${KRB_PASS} ssh -o "StrictHostKeyChecking=no" ${KRB_USER}@lxplus.cern.ch "cd ${FULL_PAGE_ROOT}; tar -xzf site.tgz; rm site.tgz"
delete_docs:
stage: docs
when: manual
tags:
- docker
image: ${CI_REGISTRY_IMAGE}:latest
environment:
name: inference-${CI_COMMIT_REF_SLUG}
action: stop
script:
- test ! -z "${PAGE_ROOT}"
- export FULL_PAGE_ROOT="${PAGE_ROOT}${CI_ENVIRONMENT_SLUG}"
- echo "FULL_PAGE_ROOT is ${FULL_PAGE_ROOT}"
- sshpass -v -p ${KRB_PASS} ssh -o "StrictHostKeyChecking=no" ${KRB_USER}@lxplus.cern.ch "rm -rf ${FULL_PAGE_ROOT}"
@@ -8,12 +8,12 @@
This repository uses submodules, so you should clone it recursively via
```shell
# ssh
# ssh (recommended)
git clone --recursive ssh://git@gitlab.cern.ch:7999/hh/tools/inference.git
# or
# https
# https (discouraged)
git clone --recursive https://gitlab.cern.ch/hh/tools/inference.git
```
@@ -39,7 +39,7 @@ source setup.sh some_name
```
where the value of `some_name` is your choice, and the script interactively guides you through the quick setup process.
To use the same configuration the next time, ==make sure to use the same value you passed before.==
To use the same configuration the next time, **make sure to use the same value you passed before**.
Internally, a file `.setups/some_name.sh` is created which contains export statements, line by line, that you can update anytime.
@@ -57,6 +57,8 @@ git clone --recursive ssh://git@gitlab.cern.ch:7999/hh/results/datacards_run2.gi
(note that using `ssh://...` is recommended), and then use `/your/path` for the `DHI_DATACARDS_RUN2` variable later on.
After that, you can use the names of the HH channels in the `--datacards` parameters of the inference tasks, e.g. `--datacards bbww` or `--datacards bbww,bbbb` to get results of the latest bbWW or bbWW+bbbb channels, respectively.
#### Reinstalling software
@@ -82,6 +84,9 @@ You should see:
indexing tasks in 1 module(s)
loading module 'dhi.tasks', done
module 'law.contrib.cms.tasks', 1 task(s):
- law.cms.BundleCMSSW
module 'law.contrib.git', 1 task(s):
- law.git.BundleGitRepository
@@ -90,46 +95,58 @@ module 'dhi.tasks.combine', 3 task(s):
- CombineDatacards
- CreateWorkspace
module 'dhi.tasks.base', 2 task(s):
module 'dhi.tasks.base', 3 task(s):
- BundleCMSSW
- BundleRepo
- BundleSoftware
module 'dhi.tasks.limits', 6 task(s):
module 'dhi.tasks.snapshot', 1 task(s):
- Snapshot
module 'dhi.tasks.limits', 7 task(s):
- UpperLimits
- PlotUpperLimits
- PlotUpperLimitsAtPoint
- PlotUpperLimits
- PlotUpperLimits2D
- PlotMultipleUpperLimitsByModel
- MergeUpperLimits
- PlotMultipleUpperLimits
- MergeUpperLimits
module 'dhi.tasks.likelihoods', 5 task(s):
- LikelihoodScan
- PlotLikelihoodScan
- PlotMultipleLikelihoodScansByModel
- MergeLikelihoodScan
- PlotMultipleLikelihoodScans
- MergeLikelihoodScan
module 'dhi.tasks.significances', 4 task(s):
- SignificanceScan
- PlotSignificanceScan
- MergeSignificanceScan
- PlotMultipleSignificanceScans
- MergeSignificanceScan
module 'dhi.tasks.pulls_impacts', 3 task(s):
- PullsAndImpacts
- PlotPullsAndImpacts
- MergePullsAndImpacts
module 'dhi.tasks.postfit_shapes', 2 task(s):
- PostfitShapes
module 'dhi.tasks.postfit', 3 task(s):
- FitDiagnostics
- PlotPostfitSOverB
- PlotNuisanceLikelihoodScans
module 'dhi.tasks.gof', 3 task(s):
module 'dhi.tasks.gof', 4 task(s):
- GoodnessOfFit
- PlotGoodnessOfFit
- PlotMultipleGoodnessOfFits
- PlotGoodnessOfFit
- MergeGoodnessOfFit
module 'dhi.tasks.eft', 4 task(s):
- EFTLimitBase
- EFTBenchmarkLimits
- PlotEFTBenchmarkLimits
- MergeEFTBenchmarkLimits
module 'dhi.tasks.test', 1 task(s):
- test.TestPlots
@@ -138,11 +155,14 @@ module 'dhi.tasks.studies.model_selection', 3 task(s):
- study.PlotMorphedDiscriminant
- study.PlotStatErrorScan
module 'dhi.tasks.studies.model_plots', 1 task(s):
- study.PlotSignalEnhancement
module 'dhi.tasks.exclusion', 2 task(s):
- PlotExclusionAndBestFit
- PlotExclusionAndBestFit2D
written 36 task(s) to index file '/your/path/inference/.law/index'
written 46 task(s) to index file '/your/path/inference/.law/index'
```
You can type
@@ -162,10 +182,18 @@ to list all parameters of `SomeTask`.
Now you are done with the setup and can start running the statistical inference!
### Configurable postfit plots
Postfit plots are not yet covered by the tasks listed above but will be provided in the future.
In the meantime, you can use a separate set of scripts that allow you to create fully configurable postfit plots for your analysis channel.
For more info, see the dedicated [README](dhi/scripts/README_postfit_plots.md).
## Documentation
The documentation is hosted at [cern.ch/cms-hh/tools/inference](https://cern.ch/cms-hh/tools/inference).
### For developers
It is built with [MkDocs](https://www.mkdocs.org) using the [material](https://squidfunk.github.io/mkdocs-material) theme and support for [PyMdown](https://facelessuser.github.io/pymdown-extensions) extensions.
@@ -193,9 +221,3 @@ To start a server to browse the pages, run
and open your webbrowser at [http://localhost:8000](http://localhost:8000).
By default, all pages are *automatically rebuilt and reloaded* when a source file is updated.
## Developers
- Peter Fackeldey: peter.fackeldey@cern.ch (email)
- Marcel Rieger: marcel.rieger@cern.ch (email), marcel_r88 (skype)
@@ -4,6 +4,11 @@
Constants such as cross sections and branchings, and common configs such as labels and colors.
"""
import os
import scipy as sp
import scipy.stats
from dhi.util import DotDict, ROOTColorGetter
@@ -56,6 +61,10 @@ br_hh = DotDict(
wwgg=2.0 * br_h.ww * br_h.gg,
)
# aliases
br_hh["bbbb_boosted"] = br_hh.bbbb
br_hh["bbbb_boosted_ggf"] = br_hh.bbbb
br_hh["bbbb_boosted_vbf"] = br_hh.bbbb
br_hh["bbbb_boosted"] = br_hh.bbbb
br_hh["bbwwdl"] = br_hh.bbwwlvlv
br_hh["bbwwllvv"] = br_hh.bbwwlvlv
br_hh["bbwwsl"] = br_hh.bbwwqqlv
@@ -63,14 +72,18 @@ br_hh["bbzz4l"] = br_hh.bbzzllll
# HH branching names (TODO: find prettier abbreviations)
br_hh_names = DotDict(
bbbb=r"bbbb",
bbbb=r"4b",
bbbb_low=r"4b, low $m_{HH}$",
bbbb_boosted=r"4b, high $m_{HH}$",
bbbb_boosted_ggf=r"4b #scale[0.75]{high $m_{HH}$, ggF}",
bbbb_boosted_vbf=r"4b #scale[0.75]{high $m_{HH}$, VBF}",
bbvv=r"bbVV",
bbww=r"bbWW",
bbwwqqlv=r"bbWW, qql$\nu$",
bbwwlvlv=r"bbWW, 2l2$\nu$",
bbzz=r"bbZZ",
bbzzqqll=r"bbZZ, qqll",
bbzzllll=r"bbZZ, 4l",
bbzzllll=r"bbZZ",
bbtt=r"bb$\tau\tau$",
bbgg=r"bb$\gamma\gamma$",
ttww=r"WW$\tau\tau$",
@@ -88,12 +101,13 @@ br_hh_names["bbwwllvv"] = br_hh_names.bbwwlvlv
br_hh_names["bbwwsl"] = br_hh_names.bbwwqqlv
br_hh_names["bbzz4l"] = br_hh_names.bbzzllll
# campaign labels, extended by combinations with HH branching names, and names itself
# campaign labels, extended by combinations with HH branching names
# lumi values from https://twiki.cern.ch/twiki/bin/view/CMS/TWikiLUM?rev=163
campaign_labels = DotDict({
"2016": "2016 (13 TeV)",
"2017": "2017 (13 TeV)",
"2018": "2018 (13 TeV)",
"run2": "Run 2 (13 TeV)",
"2016": "36.3 fb^{-1} (2016, 13 TeV)",
"2017": "41.5 fb^{-1} (2017, 13 TeV)",
"2018": "59.8 fb^{-1} (2018, 13 TeV)",
"run2": "138 fb^{-1} (13 TeV)",
})
for c, c_label in list(campaign_labels.items()):
for b, b_label in br_hh_names.items():
@@ -101,22 +115,24 @@ for c, c_label in list(campaign_labels.items()):
campaign_labels.update(br_hh_names)
# poi defaults (value, range, points, taken from physics model) and labels
# note: C2V and CV are not following kappa notation and are upper case to be consistent with the model
poi_data = DotDict(
r=DotDict(range=(-20.0, 20.0), label="r"),
r_gghh=DotDict(range=(-20.0, 20.0), label="r_{gghh}"),
r_qqhh=DotDict(range=(-20.0, 20.0), label="r_{qqhh}"),
kl=DotDict(range=(-30.0, 30.0), label=r"\kappa_{\lambda}"),
kt=DotDict(range=(-10.0, 10.0), label=r"\kappa_{t}"),
C2V=DotDict(range=(-10.0, 10.0), label=r"\kappa_{VV}"),
CV=DotDict(range=(-10.0, 10.0), label=r"\kappa_{V}"),
r=DotDict(range=(-20.0, 20.0), label="r", sm_value=1.0),
r_gghh=DotDict(range=(-20.0, 20.0), label="r_{gghh}", sm_value=1.0),
r_qqhh=DotDict(range=(-20.0, 20.0), label="r_{qqhh}", sm_value=1.0),
r_vhh=DotDict(range=(-20.0, 20.0), label="r_{vhh}", sm_value=1.0),
kl=DotDict(range=(-30.0, 30.0), label=r"\kappa_{\lambda}", sm_value=1.0),
kt=DotDict(range=(-10.0, 10.0), label=r"\kappa_{t}", sm_value=1.0),
C2V=DotDict(range=(-10.0, 10.0), label=r"\kappa_{2V}", sm_value=1.0),
CV=DotDict(range=(-10.0, 10.0), label=r"\kappa_{V}", sm_value=1.0),
C2=DotDict(range=(-2.0, 3.0), label=r"c_{2}", sm_value=0.0),
CG=DotDict(range=(-2.0, 2.0), label=r"c_{g}", sm_value=0.0),
C2G=DotDict(range=(-2.0, 2.0), label=r"c_{2g}", sm_value=0.0),
)
# add "$" embedded labels
for poi, data in poi_data.items():
data["label_math"] = "${}$".format(data.label)
# nuisance parameters labels
nuisance_labels = {}
# colors
colors = DotDict(
root=ROOTColorGetter(
@@ -144,22 +160,32 @@ colors = DotDict(
blue_cream=38,
blue_signal=(67, 118, 201),
blue_signal_trans=(67, 118, 201, 0.5),
purple=881,
),
)
# color sequence for plots with multiple elements
color_sequence = ["blue", "red", "green", "grey", "pink", "cyan", "orange", "light_green", "yellow"]
color_sequence += 10 * ["black"]
color_sequence += 10 * ["grey"]
# marker sequence for plots with multiple elements
marker_sequence = [21, 22, 23, 24, 25, 26, 32, 27, 33, 28, 34, 29, 30]
marker_sequence = [20, 21, 22, 23, 24, 25, 26, 32, 27, 33, 28, 34, 29, 30]
marker_sequence += 10 * [20]
# cumulative, inverse chi2 values in a mapping "n_dof -> n_sigma -> level"
# for the geometrical determination of errors of nll curves
# (computed with "sp.stats.chi2.ppf(g, n_dof)" with g being the gaussian intervals)
# chi2_levels = {
# 1: {1: 1.000, 2: 4.000},
# 2: {1: 2.296, 2: 6.180},
# ...
# }
get_gaus_interval = lambda sigma: 2 * sp.stats.norm.cdf(sigma) - 1.
get_chi2_level = lambda sigma, ndof: sp.stats.chi2.ppf(get_gaus_interval(sigma), ndof)
chi2_levels = {
1: {1: 1.000, 2: 4.000},
2: {1: 2.296, 2: 6.180},
3: {1: 3.527, 2: 8.025},
ndof: {sigma: get_chi2_level(sigma, ndof) for sigma in range(1, 8 + 1)}
for ndof in range(1, 3 + 1)
}
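Since `chi2.ppf` has textbook closed forms for one and two degrees of freedom, the first rows of the mapping above can be cross-checked with the standard library alone. The identities used below are standard results, not part of the repository:

```python
import math

def gaus_interval(sigma):
    # central Gaussian interval: P(|Z| <= sigma) = erf(sigma / sqrt(2))
    return math.erf(sigma / math.sqrt(2.0))

def chi2_level(sigma, ndof):
    # closed forms of chi2.ppf(g, ndof) for ndof = 1, 2:
    #   ndof = 1: ppf(g, 1) = sigma^2, since P(Z^2 <= s^2) = P(|Z| <= s)
    #   ndof = 2: cdf(x, 2) = 1 - exp(-x / 2)  =>  ppf(g, 2) = -2 ln(1 - g)
    if ndof == 1:
        return float(sigma) ** 2
    if ndof == 2:
        return -2.0 * math.log(1.0 - gaus_interval(sigma))
    raise NotImplementedError("no closed form implemented for ndof > 2")

# reproduces the reference values quoted in the comment above
print(round(chi2_level(1, 1), 3))  # 1.0
print(round(chi2_level(2, 1), 3))  # 4.0
print(round(chi2_level(1, 2), 3))  # 2.296
print(round(chi2_level(2, 2), 3))  # 6.18
```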
# postfix after "CMS" labels in plots, shown when the --paper flag is not used
cms_postfix = os.getenv("DHI_CMS_POSTFIX", "Work in progress")
This diff is collapsed.
# coding: utf-8
"""
Collection of tools for the EFT workflow.
"""
__all__ = []
import re
from dhi.util import make_list
class EFTCrossSectionProvider(object):
"""
Helper class to calculate HH EFT cross sections in units of pb.
Coefficients and formulae are taken from
https://github.com/fabio-mon/HHStatAnalysis/blob/c8fc33d2ae3f7e04cfc83e773e2880657ffdce3b/AnalyticalModels/python/NonResonantModelNLO.py
with credits to F. Monti and P. Mandrik.
"""
def __init__(self):
super(EFTCrossSectionProvider, self).__init__()
# ggf nlo coefficients in pb, converted from fb values
# https://github.com/pmandrik/VSEVA/blob/23daf2b9966ddbcef3fa6622957d17640ab9e133/HHWWgg/reweight/reweight_HH.C#L117
self.coeffs_ggf_nlo_13tev = [0.001 * c for c in [
62.5088, 345.604, 9.63451, 4.34841, 39.0143, -268.644, -44.2924, 96.5595, 53.515,
-155.793, -23.678, 54.5601, 12.2273, -26.8654, -19.3723, -0.0904439, 0.321092, 0.452381,
-0.0190758, -0.607163, 1.27408, 0.364487, -0.499263,
]]
self.ggf_xsec_sm_nnlo = 0.03105 # pb
def get_ggf_xsec_nlo(self, kl=1., kt=1., c2=0., cg=0., c2g=0., coeffs=None):
if coeffs is None:
coeffs = self.coeffs_ggf_nlo_13tev
return (
coeffs[0] * kt**4 +
coeffs[1] * c2**2 +
coeffs[2] * kt**2 * kl**2 +
coeffs[3] * cg**2 * kl**2 +
coeffs[4] * c2g**2 +
coeffs[5] * c2 * kt**2 +
coeffs[6] * kl * kt**3 +
coeffs[7] * kt * kl * c2 +
coeffs[8] * cg * kl * c2 +
coeffs[9] * c2 * c2g +
coeffs[10] * cg * kl * kt**2 +
coeffs[11] * c2g * kt**2 +
coeffs[12] * kl**2 * cg * kt +
coeffs[13] * c2g * kt * kl +
coeffs[14] * cg * c2g * kl +
coeffs[15] * kt**3 * cg +
coeffs[16] * kt * c2 * cg +
coeffs[17] * kt * cg**2 * kl +
coeffs[18] * cg * kt * c2g +
coeffs[19] * kt**2 * cg**2 +
coeffs[20] * c2 * cg**2 +
coeffs[21] * cg**3 * kl +
coeffs[22] * cg**2 * c2g
)
def get_ggf_xsec_nnlo(self, kl=1., kt=1., c2=0., cg=0., c2g=0., coeffs=None):
xsec_bsm_nlo = self.get_ggf_xsec_nlo(kl=kl, kt=kt, c2=c2, cg=cg, c2g=c2g, coeffs=coeffs)
xsec_sm_nlo = self.get_ggf_xsec_nlo(kl=1., kt=1., c2=0., cg=0., c2g=0., coeffs=coeffs)
k_factor = self.ggf_xsec_sm_nnlo / xsec_sm_nlo
return xsec_bsm_nlo * k_factor
#: EFTCrossSectionProvider singleton.
eft_xsec_provider = EFTCrossSectionProvider()
#: Default ggF NLO cross section getter.
get_eft_ggf_xsec_nlo = eft_xsec_provider.get_ggf_xsec_nlo
#: Default ggF NNLO cross section getter.
get_eft_ggf_xsec_nnlo = eft_xsec_provider.get_ggf_xsec_nnlo
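The NNLO normalization above reduces to a single k-factor applied to the NLO parametrization. A minimal sketch, keeping only the three coefficients from the list above that survive at the SM point (kl = kt = 1, c2 = cg = c2g = 0), illustrates the rescaling:

```python
# the only ggf NLO terms that are non-zero at the SM point:
# coeffs[0] * kt^4 + coeffs[2] * kt^2 * kl^2 + coeffs[6] * kl * kt^3
C_KT4, C_KT2KL2, C_KLKT3 = 0.001 * 62.5088, 0.001 * 9.63451, 0.001 * -44.2924
GGF_XSEC_SM_NNLO = 0.03105  # pb, reference value from above

xsec_sm_nlo = C_KT4 + C_KT2KL2 + C_KLKT3  # roughly 0.02785 pb
k_factor = GGF_XSEC_SM_NNLO / xsec_sm_nlo

# by construction, scaling the SM NLO value reproduces the NNLO reference
print(round(xsec_sm_nlo * k_factor, 5))  # 0.03105
```

For any other coupling point, the same constant k-factor multiplies the full 23-term NLO polynomial, which is exactly what `get_ggf_xsec_nnlo` does.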
def sort_eft_benchmark_names(names):
"""
Example order: 1, 2, 3, 3a, 3b, 4, 5, a_string, other_string, z_string
"""
names = make_list(names)
# split into names being a number or starting with one, and pure strings
# store numeric names as tuples as sorted() will do exactly what we want
num_names, str_names = [], []
for name in names:
m = re.match(r"^(\d+)(.*)$", name)
if m:
num_names.append((int(m.group(1)), m.group(2)))
else:
str_names.append(name)
# sort and add
num_names.sort()
str_names.sort()
return ["{}{}".format(*pair) for pair in num_names] + str_names
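The ordering can be exercised with a standalone sketch of the same logic (`make_list` dropped, so a plain list is expected):

```python
import re

def sort_benchmark_names(names):
    # split names into (number, suffix) tuples and pure strings;
    # tuple comparison in sorted() yields 1 < 2 < 3 < 3a < 3b < 10
    num_names, str_names = [], []
    for name in names:
        m = re.match(r"^(\d+)(.*)$", name)
        if m:
            num_names.append((int(m.group(1)), m.group(2)))
        else:
            str_names.append(name)
    return ["{}{}".format(*t) for t in sorted(num_names)] + sorted(str_names)

print(sort_benchmark_names(["3b", "10", "z_string", "2", "3a", "a_string", "3"]))
# ['2', '3', '3a', '3b', '10', 'a_string', 'z_string']
```

Note that a plain string sort would put "10" before "2"; converting the leading digits to an integer first avoids that.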
def extract_eft_scan_parameter(name):
"""
c2_1p5 -> c2
"""
if "_" not in name:
raise ValueError("invalid datacard name '{}'".format(name))
return name.split("_", 1)[0]
def sort_eft_scan_names(scan_parameter, names):
"""
Names have the format "<scan_parameter>_<number>" where the number might have "-" replaced by
"m" and "." replaced by "d", so revert this and simply sort by number.
"""
names = make_list(names)
# extract the scan values
values = []
for name in names:
if not name.startswith(scan_parameter + "_"):
raise ValueError("invalid datacard name '{}'".format(name))
v = name[len(scan_parameter) + 1:].replace("d", ".").replace("m", "-")
try:
v = float(v)
except ValueError:
raise ValueError("invalid scan value '{}' in datacard name '{}'".format(v, name))
values.append((name, v))
# sort by value
values.sort(key=lambda tpl: tpl[1])
return values
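The decoding convention ("m" for the minus sign, "d" for the decimal point) can be checked in isolation; this sketch inlines the relevant part of `sort_eft_scan_names`:

```python
def decode_scan_names(scan_parameter, names):
    # strip the "<scan_parameter>_" prefix, undo the "m" -> "-" and
    # "d" -> "." encoding, and sort the (name, value) pairs by value
    values = []
    for name in names:
        assert name.startswith(scan_parameter + "_")
        v = float(name[len(scan_parameter) + 1:].replace("d", ".").replace("m", "-"))
        values.append((name, v))
    return sorted(values, key=lambda tpl: tpl[1])

print(decode_scan_names("c2", ["c2_1d5", "c2_m2d0", "c2_0d0"]))
# [('c2_m2d0', -2.0), ('c2_0d0', 0.0), ('c2_1d5', 1.5)]
```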
@@ -67,7 +67,9 @@ class HBRscaler(object):
self.modelBuilder = modelBuilder
# use SMHiggsBuilder to build single H XS and BR scalings
datadir = None
if "DHI_SOFTWARE" in os.environ:
if "CMSSW_BASE" in os.environ:
datadir = os.path.expandvars("$CMSSW_BASE/src/HiggsAnalysis/CombinedLimit/data/lhc-hxswg")
elif "DHI_SOFTWARE" in os.environ:
datadir = os.path.expandvars("$DHI_SOFTWARE/HiggsAnalysis/CombinedLimit/data/lhc-hxswg")
self.SMH = SMHiggsBuilder(self.modelBuilder, datadir=datadir)
@@ -192,7 +194,7 @@ class HBRscaler(object):
return proc_f_BR_scalings, BRstr
def findSingleHMatch(self, process):
def findSingleHMatch(self, bin, process):
# single H process names must have the format "PROD_*" where PROD must be in
# SM_SCALED_SINGLE_HIGG_PROD; None is returned when this is not the case
production = process.split("_")[0]
@@ -202,23 +204,24 @@ class HBRscaler(object):
proc_f_H_scaling = "CVktkl_pos_XSscal_%s_%s" % (production, energy)
if proc_f_H_scaling not in self.f_H_scalings:
raise Exception("scaling is not supported for single H process {}, this is likely a misconfiguration".format(
process))
raise Exception("scaling is not supported for single H process {} (bin {}), "
"this is likely a misconfiguration".format(process, bin))
return proc_f_H_scaling
def buildXSBRScalingHH(self, f_XS_scaling, process):
def buildXSBRScalingHH(self, f_XS_scaling, bin, process):
BRscalings, BRstr = self.findBRScalings(process)
# when no scalings were found, print a warning since this might be intentional and return,
# when != 2 scalings were found, this is most likely an error
if not BRscalings:
print("WARNING: the HH process {} does not contain valid decay strings to extract "
"branching ratios to apply the scaling with model parameters".format(process))
print("WARNING: the HH process {} (bin {}) does not contain valid decay strings to "
"extract branching ratios to apply the scaling with model parameters".format(
process, bin))
return None
elif len(BRscalings) != 2:
raise Exception("the HH process {} contains {} valid decay string(s) while two were "
"expected".format(process, len(BRscalings)))
raise Exception("the HH process {} (bin {}) contains {} valid decay string(s) while "
"two were expected".format(process, bin, len(BRscalings)))
f_XSBR_scaling = "%s_BRscal_%s" % (f_XS_scaling, BRstr)
@@ -227,18 +230,19 @@ class HBRscaler(object):
return f_XSBR_scaling
def buildXSBRScalingH(self, f_XS_scaling, process):
def buildXSBRScalingH(self, f_XS_scaling, bin, process):
BRscalings, BRstr = self.findBRScalings(process)
# when no scalings were found, print a warning since this might be intentional and return,
# when != 1 scalings were found, this is most likely an error
if not BRscalings:
print("WARNING: the H process {} does not contain valid decay strings to extract "
"branching ratios to apply the scaling with model parameters".format(process))
print("WARNING: the H process {} (bin {}) does not contain valid decay strings to "
"extract branching ratios to apply the scaling with model parameters".format(
process, bin))
return None
elif len(BRscalings) != 1:
raise Exception("the H process {} contains {} valid decay string(s) while one was "
"expected".format(process, len(BRscalings)))
raise Exception("the H process {} (bin {}) contains {} valid decay string(s) while one "
"was expected".format(process, bin, len(BRscalings)))
f_XSBR_scaling = "%s_BRscal_%s" % (f_XS_scaling, BRstr)
......
This diff is collapsed.
# coding: utf-8
"""
Custom HH physics model implementing the gluon-gluon fusion (ggf / gghh) mode (depending on C2, kl
and kt), and the vector boson fusion (vbf / qqhh) mode.
Authors:
- Torben Lange
- Marcel Rieger
"""
from collections import OrderedDict
import sympy
# wildcard import to have everything available locally, plus specific imports for linting
from hh_model import * # noqa
from hh_model import (
HHSample, HHFormula, VBFFormula, HHModelBase, HHModel as DefaultHHModel, vbf_samples,
create_ggf_xsec_str, ggf_k_factor, create_vbf_xsec_func,
)
####################################################################################################
### c2 based ggf sample
####################################################################################################
class GGFSample(HHSample):
"""
Class describing ggf samples, characterized by values of *kl*, *kt* and *C2*.
"""
label_re = r"^ggHH_kl_([pm0-9]+)_kt_([pm0-9]+)_c2_([pm0-9]+)$"
def __init__(self, kl, kt, C2, xs, label):
super(GGFSample, self).__init__(xs, label)
self.kl = kl
self.kt = kt
self.C2 = C2
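The `label_re` pattern above can be checked against the sample labels defined below. The value decoding shown here ("p" for the decimal point, "m" for a minus sign, inferred from the label convention in `ggf_samples`) is an illustration only, not the model's own parsing code:

```python
import re

label_re = r"^ggHH_kl_([pm0-9]+)_kt_([pm0-9]+)_c2_([pm0-9]+)$"

def parse_ggf_label(label):
    # extract (kl, kt, C2) from a sample label; "p" encodes the decimal
    # point and "m" a minus sign (assumed from the naming convention)
    m = re.match(label_re, label)
    if not m:
        raise ValueError("invalid ggf sample label '{}'".format(label))
    return tuple(float(g.replace("p", ".").replace("m", "-")) for g in m.groups())

print(parse_ggf_label("ggHH_kl_0p00_kt_1p00_c2_0p00"))  # (0.0, 1.0, 0.0)
```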
# ggf samples with keys (kl, kt, C2)
ggf_samples = OrderedDict([
((0.0, 1.0, 0.0), GGFSample(kl=0.0, kt=1.0, C2=0.0, xs=0.069725, label="ggHH_kl_0p00_kt_1p00_c2_0p00")),