
Add a constant function to access UT Sectors

Merged Monir Hadji requested to merge mhadji-add-getutsector into master

PrStoreUTHit was calling findSector for each UT cluster; that function does a binary search over a vector<pair<key, sector>>, so each lookup costs O(log n).

A second std::vector<sector*> already existed, filled in an order that is nearly right for computing offsets during DeUTDetector::initialize; with those offsets precomputed, each access becomes essentially constant time.

It may be possible to do something similar with findMat (used by PrStoreFTHit), if that one is also an O(log n) getter and turns out to be a hotspot.

Edited by Marco Cattaneo

  • Monir Hadji added 1 commit

    • 18675853 - Replace vector by array for offsets and add some asserts to check UT geometry

  • @cattanem yes, it should be transparent in principle. Do you want me to try it before un-WIPping, or shall we just use the regular nightlies?

  • @sponce we need to establish a procedure for validating the performance (both physics and computing) of such MRs. Are the nightlies sufficient?

  • @cattanem for things relating to UT SW we have been asked by the project management to always keep @tskwarni in the loop (not a criticism, I'm just not sure this information has been propagated very widely)

    I think this is a good test case where a quick presentation to T&A or HLT would be in order, showing the throughput improvement and output of prChecker/signal efficiencies.

  • For the physics performance, I have no clue what the procedure should be, but again, this MR should not change it at all. If it does, that is not foreseen and should be debugged.

    Concerning the computing performance, improvements nowadays are far too numerous to do a full analysis of the gain brought by each single commit. But if this one is particularly interesting or worrying, we can do it.

  • It makes sense to me to use some merge request as a test case, even if it might be overkill for this one. Ideally the presentation would consist of simply browsing to a few websites which provide numbers and plots.

  • And concerning a presentation, the same comment applies: in principle we don't do one per commit, but it can be done. I would suggest that @mhadji present this, but we will have to wait until he is back from holidays in 2 weeks.

  • See my comment above, I would see it more as defining the procedure. We have to start at some point.

  • Okay that gives us two weeks to define the plots and numbers we want :).

  • We cannot really afford running the benchmarks for each commit. We do it once per day and get several commits in each time. But modulo this, in principle you can indeed use the nightlies/PR2.

    Just out of curiosity, why is this MR so controversial?

  • @sponce I don't think this MR is especially controversial at all, I was simply reacting to @cattanem's request to "establish a procedure" with a suggestion of how to proceed. I tend to agree with what @sstahl has said, we should start somewhere.

    Agree we cannot run things per commit, we will need to group things in bunches.

  • Indeed this MR was perhaps the wrong one to pick on, and perhaps demonstrates the other part of the procedure: that we should exercise some judgment as to which MRs need to go through the whole validation process. Given the comments here, I should probably go ahead and merge this just on the basis of the nightly tests, as usual. I suggest that we use Rec!937 (merged), which depends on this, to commission a validation procedure.

  • Agreed. My curiosity was indeed triggered by the fact that we started the discussion on this one, while Rec!937 (merged) looked to have a much bigger impact.

  • I appreciate Vava keeping me in the loop on this. I understand that you have sped up the UT code a lot by reprogramming how sector ID is calculated? Since UT code was never optimized for speed, I am not surprised that you find such opportunities.

    Somewhat related: we are just revising the UT geometry definition a bit, and this is perhaps the last chance to change the LHCbID addressing scheme for the UT. Previously sectors were defined as geometrical volumes, which has the unfortunate consequence of splitting the inner sensors into two volumes of silicon, whereas in reality there is only one. We want to undo this to avoid potential problems in the alignment procedures. However, tentatively we have decided not to change the LHCbID addressing scheme, so we will keep sectors as a virtual concept (this keeps the same number of channels in each sector). If people think there might be a better addressing scheme, one that might make the code faster, the time to discuss it is now.
