Define a workflow to launch reconstruction on many runs, with data stored in EOS.
Currently, the `batch.sh` script can be used to launch reconstruction (i.e. any p348reco-based executable) on files stored in EOS. The script has to be executed from the folder where p348reco is installed:

```
cd p348reco
./batch.sh executable_name ...
```
The default behavior is to create a local folder where the output produced by the executable is stored at the end of the job. This can be problematic when the job output is large. Specifically, at the moment I have p348reco installed in my work folder on AFS, which has only 100 GB of quota. I also tried to install it in EOS, but when I ran `batch.sh` from there I got the error:
```
ERROR: Failed to commit job submission into the queue.
ERROR: EOS Submission is not currently supported by the HTCondor Service:
https://batchdocs.web.cern.ch/troubleshooting/eos.html#no-eos-submission-allowed
```
We should add an option to `batch.sh` so that the output files from reconstruction are saved to EOS, while the log files stay on AFS (see https://batchdocs.web.cern.ch/tutorial/exercise11.html).
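Following the approach described in the CERN batch docs linked above, a minimal sketch of what `batch.sh` could add to the HTCondor submit file it generates (the EOS path below is a placeholder, and exact behavior of stdout/stderr with `output_destination` should be double-checked against the docs):

```
# Sketch of a submit-file fragment: ship job output files to EOS via
# xrootd instead of transferring them back to the (AFS) submit directory.
output_destination = root://eosuser.cern.ch//eos/user/<initial>/<username>/p348reco_output/
MY.XRDCP_CREATE_DIR = True

# The HTCondor job log is written on the submit side, so it stays on AFS:
log    = job.$(ClusterId).log
output = job.$(ClusterId).$(ProcId).out
error  = job.$(ClusterId).$(ProcId).err
```

With this, the submission itself still happens from AFS (avoiding the "no EOS submission" error), while the large reconstruction output lands directly in EOS.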
@akorneev, what do you think?