Set up a detailed evaluation for the RTA presentation
I had previously set up a detailed evaluation, as a function of the track geometry, in the change_learning_rate evaluation branch. This detailed evaluation will be reproduced with the new changes introduced by !2 (merged).
For the next presentation:

- Reproduce the same training dataset
- Run the overall inference pipeline
- Define functions for the detailed evaluation, as it was done in the change_learning_rate evaluation branch
- Produce figures for tomorrow's presentation
- Add new comparisons
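The detailed evaluation as a function of the track geometry could be sketched as a simple binned-efficiency helper. This is only an illustration, not the MonteTracko API: the function name, the boolean `matched` flags, and the `geometry` values (e.g., distance to the z-axis) are all assumptions for the sketch.

```python
from bisect import bisect_right

def efficiency_vs_geometry(matched, geometry, bin_edges):
    """Matching efficiency in bins of a track-geometry variable.

    Hypothetical helper (names are illustrative, not the real API):
    `matched` is a sequence of booleans (track reconstructed or not),
    `geometry` the per-track geometry value (e.g. distance to the z-axis),
    `bin_edges` a sorted list of bin edges.
    """
    n_bins = len(bin_edges) - 1
    found = [0] * n_bins
    total = [0] * n_bins
    for ok, g in zip(matched, geometry):
        i = bisect_right(bin_edges, g) - 1  # bin index for this track
        if 0 <= i < n_bins:
            total[i] += 1
            found[i] += bool(ok)
    # efficiency per bin; None where the bin is empty
    return [f / t if t else None for f, t in zip(found, total)]
```

Plotting the returned per-bin efficiencies against the bin centres then gives the "metric vs. geometry" figures mentioned below.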
New features:

- Update MonteTracko: essentially, fix all type-hint bugs
- Plot some metrics (e.g., efficiency) as a function of the track geometry (distance to line, distance to z-axis, etc.)
- Fix a few bugs
- Properly implement `reproduce=False` everywhere: at every step, from preprocessing to track building, the output folder is not reproduced if it is not empty
- Steps 2 and 4 can now be run directly from the model
- Add appropriate test samples
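The `reproduce=False` behaviour described above can be sketched as a small guard run before each pipeline step. This is a minimal sketch, assuming a per-step output folder; `should_run_step` is a hypothetical name, not the actual pipeline function.

```python
from pathlib import Path

def should_run_step(output_dir, reproduce=False):
    """Decide whether a pipeline step must run.

    Sketch of the `reproduce=False` behaviour: when `reproduce` is False
    and the step's output folder already contains files, the step is
    skipped instead of being recomputed. Applies at every step, from
    preprocessing to track building.
    """
    output_dir = Path(output_dir)
    if reproduce:
        return True  # always recompute when explicitly requested
    # run only if the folder is missing or empty
    return not output_dir.exists() or not any(output_dir.iterdir())
```

With this guard, re-running the whole pipeline is cheap: completed steps are detected from their non-empty output folders and skipped.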
Edited by Anthony Correia