Neural net TTVA tool
Adds TrackVertexAssociationTool/MVATrackVertexAssociationTool, which evaluates neural nets using lwtnn and inherits from ITrackVertexAssociationTool. Also adds TrackVertexAssociationTool/MVAInputEvaluator for evaluating the network inputs. No recommendations have been provided at this time.
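As a rough illustration of the lwtnn evaluation the tool is built around, here is a minimal, self-contained sketch. The JSON file name and the input variable names are made up for illustration and are not the actual ones used by MVAInputEvaluator:

```cpp
// Minimal sketch (not the actual tool code): evaluating a Keras-trained
// network with lwtnn for a single track. File name and input names are
// illustrative assumptions.
#include <fstream>
#include <map>
#include <string>

#include "lwtnn/LightweightNeuralNetwork.hh"
#include "lwtnn/parse_json.hh"

int main() {
  // Parse the network configuration exported from Keras (hypothetical file name).
  std::ifstream netFile("TTVA_MVA.json");
  lwt::JSONConfig config = lwt::parse_json(netFile);

  // Build the network once; it can then be evaluated per track.
  lwt::LightweightNeuralNetwork nn(config.inputs, config.layers, config.outputs);

  // Hypothetical per-track inputs (names must match the training JSON).
  lwt::ValueMap inputs{
    {"trk_d0",            0.01},
    {"trk_errD0",         0.02},
    {"trk_dzSinTheta",    0.05},
    {"trk_errDzSinTheta", 0.03}
  };

  // compute() returns a map from output node name to score.
  lwt::ValueMap outputs = nn.compute(inputs);
  double score = outputs.begin()->second;     // MVA discriminant
  return score > 0.5 ? 0 : 1;                 // e.g. compare against a TTVA working-point cut
}
```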
I'm not sure how the tool's speed compares to the nominal tool, or how best to measure it, but possible improvements may become apparent from reviewing the code. A comparison of the MVA score as evaluated with lwtnn in Athena versus Keras in Python is detailed here. The difference is small but nonzero, with a slight asymmetry; I'm not sure why. The recommendations were to compare the same histogram in particular regions of phase space, or to plot the MVA score versus the difference in score as a scatter plot, to check whether the discrepant points fall near the TTVA cut threshold.
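One possible way to produce the suggested plots is sketched below as a ROOT macro. The file, tree, and branch names ("validation.root", "tracks", "score_lwtnn", "score_keras") are assumptions, not the actual validation framework:

```cpp
// Sketch of the suggested lwtnn-vs-Keras comparison plots, assuming a TTree
// "tracks" with per-track branches "score_lwtnn" and "score_keras".
#include "TCanvas.h"
#include "TFile.h"
#include "TH1F.h"
#include "TH2F.h"
#include "TTree.h"

void compareScores(const char* fname = "validation.root") {
  TFile f(fname, "READ");
  TTree* tree = static_cast<TTree*>(f.Get("tracks"));

  float lwtnn = 0.f, keras = 0.f;
  tree->SetBranchAddress("score_lwtnn", &lwtnn);
  tree->SetBranchAddress("score_keras", &keras);

  TH1F hDiff("hDiff", ";score(lwtnn) - score(Keras);Tracks", 100, -0.1, 0.1);
  TH2F hCorr("hCorr", ";score(lwtnn);score(lwtnn) - score(Keras)",
             100, 0.0, 1.0, 100, -0.1, 0.1);

  for (Long64_t i = 0; i < tree->GetEntries(); ++i) {
    tree->GetEntry(i);
    hDiff.Fill(lwtnn - keras);
    hCorr.Fill(lwtnn, lwtnn - keras);  // do large differences cluster near the TTVA cut?
  }

  TCanvas c("c", "c", 1200, 600);
  c.Divide(2, 1);
  c.cd(1); hDiff.Draw();
  c.cd(2); hCorr.Draw("COLZ");
  c.SaveAs("lwtnn_vs_keras.pdf");
}
```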
@vcairo @kostyuk @keli @npetters @goblirsc
Edit: everything seems to be working now, imo.
Activity
added InnerDetector master review-pending-level-1 labels
added 1 commit
- aaad5302 - Revert "Update test alg in anticipation of MVA tool testing"
CI Result: FAILURE (hash a78804f5). [Per-project status table (Athena, AthSimulation, AthGeneration, AnalysisBase × externals, cmake, make, required tests, optional tests) omitted.] Full details available on the CI monitor view.
Athena: number of compilation errors 0, warnings 0
AthSimulation: number of compilation errors 0, warnings 0
AthGeneration: number of compilation errors 0, warnings 0
AnalysisBase: number of compilation errors 1, warnings 10
For experts only: Jenkins output [CI-MERGE-REQUEST-CC7 20623]
CI Result: FAILURE (hash aaad5302). [Per-project status table omitted.] Full details available on the CI monitor view.
Athena: number of compilation errors 0, warnings 0
AthSimulation: number of compilation errors 0, warnings 0
AthGeneration: number of compilation errors 0, warnings 0
AnalysisBase: number of compilation errors 1, warnings 11
For experts only: Jenkins output [CI-MERGE-REQUEST-CC7 20625]
removed review-pending-level-1 label
added 2 commits
Hi all, there was indeed a mistake in my validation yesterday; in fact there were two issues. One was a bug in the input evaluator: I didn't apply the square root to the uncertainty on dz*sinTheta, so the MVA was using the uncertainty squared. The other was a mistake in the validation itself: as Max already asked about, I was using inputs calculated in different contexts when evaluating the MVA score for a given track in my ntuple alg. Both have been fixed, and I now believe the inputs the network sees in lwtnn are the same as those seen in Keras (inasmuch as I can verify that the inputs in C++ are the same; the assumption is that my ntuple alg correctly dumps these inputs and that my Python framework correctly reads them in, referenced to the best hard-scatter vertex determined using the truth validation package).
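For reference, a sketch of the kind of fix involved is below, assuming the uncertainty on z0*sin(theta) is propagated from the track's defining-parameters covariance matrix. The function name is made up, and the actual definition of dz and its uncertainty in MVAInputEvaluator may differ (e.g. it may also involve the vertex position):

```cpp
// Sketch of propagating the uncertainty on z0*sin(theta) from the track
// covariance matrix (indices follow the usual xAOD convention: d0=0, z0=1,
// phi=2, theta=3, qOverP=4). Illustrative only, not the tool's actual code.
#include <cmath>
#include "xAODTracking/TrackParticle.h"

double errDzSinTheta(const xAOD::TrackParticle& trk) {
  const auto& cov = trk.definingParametersCovMatrix();
  const double z0   = trk.z0();
  const double sinT = std::sin(trk.theta());
  const double cosT = std::cos(trk.theta());

  // Var(z0*sinTheta) = sin^2(theta)*Var(z0) + z0^2*cos^2(theta)*Var(theta)
  //                    + 2*z0*sin(theta)*cos(theta)*Cov(z0, theta)
  const double var = sinT * sinT * cov(1, 1)
                   + z0 * z0 * cosT * cosT * cov(3, 3)
                   + 2.0 * z0 * sinT * cosT * cov(1, 3);

  // The earlier bug amounted to returning var (sigma^2) instead of sigma.
  return std::sqrt(var);
}
```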
I've attached some plots of the lwtnn and Keras output discriminant shapes; they look very similar. I've also attached a plot of the difference: the spread ends up being relatively significant (~0.05), but the distribution is symmetric, which is more consistent with numerical/precision differences. The 2D plot shows that the difference mostly occurs at lwtnn scores < 0.1, so it should not be a problem around our TTVA threshold cut (which should always be > 0.5). So I would now call things validated, but let me know if you disagree.