LHCb / Urania · Merge request !194

Version 1 of tool to calc error due to calibration sample size

Merged Cayo Mar Costa Sobral requested to merge LBPID-21.PIDCalibErrorTools into master Nov 30, 2018

This merge request adds a pair of Python scripts that report the uncertainty introduced into the PIDCalib procedure by the finite size of the calibration samples. The two scripts differ in how they smear the efficiencies taken from the performance histograms. Both require that the user has already run the MultiTrack tool.

CalibSampleSizeError.py requires only the MultiTrack output file. The track efficiencies are smeared by sampling from a Gaussian with (mean,sigma) = (efficiency,error).
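The Gaussian-smearing toy procedure can be sketched as follows. This is an illustrative stand-in, not the script itself: the efficiency and error values are invented, and the real script reads them from the MultiTrack ROOT output rather than hard-coding arrays.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-track PID efficiencies and their calibration errors
# (illustrative values; the script reads these from the MultiTrack output).
track_eff = np.array([0.95, 0.88, 0.91])
track_err = np.array([0.010, 0.020, 0.015])

n_toys = 1000
avg_event_eff = np.empty(n_toys)
for i in range(n_toys):
    # Smear each track efficiency with a Gaussian of width equal to its error.
    smeared = rng.normal(track_eff, track_err)
    # Event efficiency here taken as the product of the (assumed independent)
    # track efficiencies, clipped to the physical range [0, 1].
    avg_event_eff[i] = np.prod(np.clip(smeared, 0.0, 1.0))

# The spread of the per-toy efficiencies estimates the uncertainty due to
# the finite calibration-sample size.
uncertainty = avg_event_eff.std()
```

In the real tool the per-toy quantity is the average over all signal events, but the smear-and-recompute loop has the same shape.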

CalibSampleSizeError_BetaFunc.py also requires the histograms created by MakePerfHistsRunRange.py. Here the track efficiencies are sampled from a Beta distribution with parameters (a, b) = (nPassed + 1, nTotal - nPassed + 1), where nPassed is the number of calibration-sample events that pass the PID cut and nTotal is the total number of events in the sample.
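The Beta-distribution sampling can be sketched like this (bin counts are invented for illustration; the script takes them from the performance histograms):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-bin counts from the calibration sample: events passing
# the PID cut and total events (illustrative values).
n_passed = np.array([950, 440, 910])
n_total = np.array([1000, 500, 1000])

# Beta parameters (a, b) = (nPassed + 1, nTotal - nPassed + 1). This is the
# posterior of a binomial efficiency under a flat prior, so the sampled
# values are always confined to the physical range (0, 1).
a = n_passed + 1
b = n_total - n_passed + 1

# One toy: draw an efficiency for each bin from its Beta distribution.
sampled_eff = rng.beta(a, b)
```

Unlike the Gaussian approach, this avoids unphysical efficiencies above 1 or below 0, which matters for bins with few events or efficiencies near the boundaries.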

Both scripts write a TTree that stores the average event efficiency calculated in each toy, along with a plot of the distribution of these average event efficiencies with a Gaussian fit overlaid. The width of this Gaussian is the uncertainty the user should quote.
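The final step can be illustrated with a maximum-likelihood Gaussian fit to the per-toy averages. The toy values below are synthetic (the scripts read them back from the TTree, and use ROOT rather than SciPy for the fit):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Stand-in for the average event efficiencies stored in the output TTree
# (one entry per toy; synthetic values for illustration).
toy_avg_eff = rng.normal(loc=0.90, scale=0.004, size=2000)

# Maximum-likelihood Gaussian fit: the fitted width sigma is the
# calibration-sample-size uncertainty to quote.
mu, sigma = norm.fit(toy_avg_eff)
```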

Edited Nov 17, 2019 by Donal Hill