Version 1 of a tool to calculate the error due to the calibration sample size
A pair of Python scripts that report the uncertainty in the PIDCalib procedure arising from the finite size of the calibration samples. The two differ in how the efficiencies taken from the performance histograms are smeared. Both require the user to have run the MultiTrack tool beforehand.
CalibSampleSizeError.py requires only the MultiTrack output file. The track efficiencies are smeared by sampling from a Gaussian with (mean,sigma) = (efficiency,error).
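The idea can be illustrated with a minimal sketch of the Gaussian-smearing step, assuming the per-track efficiencies and their errors have already been read from the MultiTrack output file; the function and variable names here are illustrative, not the script's own, and NumPy stands in for whatever random generator the script actually uses.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def smear_gaussian(efficiency, error):
    """Draw one smeared track efficiency from a Gaussian
    with mean = efficiency and sigma = error."""
    return rng.normal(loc=efficiency, scale=error)

# Example: a track with efficiency 0.95 +/- 0.01
print(smear_gaussian(0.95, 0.01))
```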
CalibSampleSizeError_BetaFunc.py also requires the histograms created by MakePerfHistsRunRange.py. Here, the track efficiencies are sampled from a Beta distribution with parameters (a,b) = (nPassed + 1, nTotal - nPassed + 1), where nPassed is the number of events in the calibration sample that pass the PID cut and nTotal is the total number of calibration events.
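A corresponding sketch of the Beta-distribution approach, assuming the per-bin passed and total counts have been read from the performance histograms made by MakePerfHistsRunRange.py; again the names are illustrative rather than the script's own.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def smear_beta(n_passed, n_total):
    """Draw one smeared track efficiency from
    Beta(nPassed + 1, nTotal - nPassed + 1)."""
    return rng.beta(n_passed + 1, n_total - n_passed + 1)

# Example: a calibration bin with 950 of 1000 events passing the PID cut
print(smear_beta(950, 1000))
```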
Both scripts write a TTree that stores the average event efficiency calculated for each toy, together with a plot of the distribution of these average event efficiencies with a Gaussian fit overlaid. The width of this Gaussian is the uncertainty the user should quote.
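The overall toy procedure can be sketched as follows, assuming for illustration that an event's efficiency is the product of its tracks' smeared efficiencies; the real scripts write a ROOT TTree and plot, whereas this sketch simply fits a Gaussian to the toy distribution and reports its width.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=42)

def run_toys(events, n_toys=1000):
    """events: list of events, each a list of (efficiency, error) per track.
    Returns the average event efficiency obtained in each toy."""
    averages = []
    for _ in range(n_toys):
        event_effs = []
        for tracks in events:
            # Smear each track efficiency, then combine them into an
            # event efficiency (here: simple product, for illustration)
            smeared = [rng.normal(eff, err) for eff, err in tracks]
            event_effs.append(np.prod(smeared))
        averages.append(np.mean(event_effs))
    return np.array(averages)

# Hypothetical input: 100 identical two-track events
toy_averages = run_toys([[(0.95, 0.01), (0.90, 0.02)]] * 100)

# Gaussian fit to the toy distribution; sigma is the uncertainty to quote
mean, sigma = norm.fit(toy_averages)
print(f"average event efficiency = {mean:.4f} +/- {sigma:.4f}")
```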