It looks like the $ECMENERGY environment variable is not set correctly. In this type of transform the ECM is not required and is not written into the log, which is probably the underlying issue. Any ideas on how to detect this?
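For concreteness, a minimal sketch of one way a missing/empty variable like this could be detected, assuming the check happens in a small Python helper (the function name and the error message are hypothetical, not existing code):

```python
import os

def read_ecm_energy_gev():
    """Return the centre-of-mass energy (GeV) from the environment, or None.

    Looking the variable up explicitly distinguishes "unset" from
    "set but empty", which is the likely failure mode when the transform
    never exports it.
    """
    value = os.environ.get("ECMENERGY")
    if value is None or not value.strip():
        return None
    return float(value)

ecm = read_ecm_energy_gev()
if ecm is None:
    raise RuntimeError("ECMENERGY is unset or empty; cannot determine the ECM for this transform")
```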
As an aside, I noticed that the log.generate.short is claiming an ecmEnergy of 13000 GeV, though the transform was run with 13600 GeV (not that it matters...). I guess the 13000 is a default somewhere, which is maybe causing some issues?
Cheers,
Matthew
OK @mgignac, I found the bug and will fix it, but if the ecmEnergy reported in log.generate is wrong then I should see where this comes from and decide what to do. Currently the code expects the scope to be determined from the ecmEnergy reported in log.generate.
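For reference, a rough sketch of how that ecmEnergy could be read back out of log.generate, assuming a simple line-based scan (the regex and the exact log format are assumptions, not the actual parsing code):

```python
import re

def ecm_from_log(log_path="log.generate"):
    """Scan a transform log for the reported ecmEnergy (GeV); return None if absent."""
    pattern = re.compile(r"ecmEnergy\D*(\d+(?:\.\d+)?)")
    with open(log_path) as log:
        for line in log:
            match = pattern.search(line)
            if match:
                return float(match.group(1))
    return None
```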
@mgignac no, I don't, so the basic question is: how can we know the scope?
I suppose what I can do is hardcode it: if it sees mc21 files it should always assume that the scope is mc21_13p6TeV, and if there are more cases we can extend the hardcoded scopes. Otherwise we'd need a change in the transform, I guess, which would anyway only work in specific releases.
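Purely for illustration, a minimal sketch of that kind of fallback (the names and the mapping here are hypothetical, not the actual code): prefer the ecmEnergy-based scope when it is available, and fall back to a hardcoded campaign map otherwise.

```python
# Hypothetical fallback map; extend it if more campaigns need hardcoding.
HARDCODED_SCOPES = {
    "mc21": "mc21_13p6TeV",
}

def format_ecm(ecm_energy_gev):
    """Render e.g. 13600 GeV as '13p6TeV' and 13000 GeV as '13TeV'."""
    return ("%g" % (ecm_energy_gev / 1000.0)).replace(".", "p") + "TeV"

def guess_scope(campaign, ecm_energy_gev=None):
    """Build the scope from the reported ecmEnergy, else fall back to the hardcoded map."""
    if ecm_energy_gev is not None:
        return "%s_%s" % (campaign, format_ecm(ecm_energy_gev))
    return HARDCODED_SCOPES.get(campaign)
```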
Thanks Spyros! Indeed, I think hardcoding this is the only real option short of changing Gen_tf. I wouldn't anticipate 13 TeV requests in mc21 for quite some time, but this might change at some point.
It looks like there was a change here: atlas/athena@7f59f6af related to the line that throws this exception.
Do I understand correctly that it is no longer possible to do --jobConfig=../123456? @ewelina
If so, how is this supposed to run? It looks like you have to run it from within 700xxx, which would then make a mess of the directory structure?
Oh, the change was required by some other modifications and I forgot that people may want to give a (partial) path as jobConfig. I will fix it ASAP, but for the affected releases we will somehow have to skip the checks ...
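For illustration only, a small sketch of how a path-like --jobConfig value could be reduced to the bare DSID directory; the helper is hypothetical and the real transform-side fix may well look different:

```python
import os

def dsid_from_jobconfig(job_config):
    """Return the DSID directory name from a --jobConfig value.

    Accepts either a bare DSID ('123456') or a relative path ('../123456').
    """
    return os.path.basename(os.path.normpath(job_config))

assert dsid_from_jobconfig("../123456") == "123456"
assert dsid_from_jobconfig("123456") == "123456"
```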
Are you suggesting letting all E2E Athena jobs fail (including releases going forward)? Or only for specific releases? Maybe we should raise at least a warning in the CI?
This will only affect (i.e. lead to failed CI jobs for) the E2E jobs in the releases to which the change in atlas/athena@7f59f6af was applied: 22.6.12-22.6.15.
Previous releases should run fine (we have run E2E jobs in the CI before), as will subsequent releases once Ewelina fixes this problem in the transform.
OK, great. Then I would suggest that I manually skip this CI test, but we otherwise keep letting jobs fail, which will only happen in very few cases. For those we can manually skip things after confirming all is well. Does that work?
Yes. And for your open MR, which I used for testing, I wonder: do you maybe want to wait for the fix to go in and then try to run it with the new release (i.e. >=22.6.16) to make sure everything works?
For this MR, it's best we don't wait. It's needed urgently for Run-3 timelines. There are a few other E2E requests that will come through in the next few days, which are less urgent. We can use those to test. I'll tag you to keep you in the loop.