Commit 84cc58d9 authored by James Frost's avatar James Frost Committed by Graeme Stewart

reenable DRAW in mu TCT (Tier0ChainTests-00-02-51)

parent a9e614d5
Version 1 - Max Baak, 20090303
Instructions for Tools/Tier0ChainTests package:
-----------------------------------------------
... are found below.
test/ directory:
----------------
Contains the xml files for tct tests defined for specific atlas projects. Eg:
####
# test/Tier0ChainTests_DefaultConfiguration.xml
# test/Tier0ChainTests_devval_AtlasOffline.xml
When installing the Tier0ChainTests package, cmt searches for the file with name:
> echo "test/Tier0ChainTests_`echo $AtlasArea | cut -d "/" -f 8,9 | sed 's/\//_/'`.xml"
(best to give this command a try yourself ;-)
If this file is not found, the fall-back file is:
test/Tier0ChainTests_DefaultConfiguration.xml
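The resolution can be traced by hand. A minimal sketch, using a hypothetical nightly area (the path below is illustrative, not a real installation):

```shell
# Hypothetical $AtlasArea for a devval/AtlasOffline nightly (illustrative path)
AtlasArea=/afs/cern.ch/atlas/software/builds/nightlies/devval/AtlasOffline/rel_4
# cut picks path fields 8 and 9; sed turns the "/" between them into "_"
echo "test/Tier0ChainTests_$(echo $AtlasArea | cut -d / -f 8,9 | sed 's/\//_/').xml"
# -> test/Tier0ChainTests_devval_AtlasOffline.xml
```

Since that file exists in the test/ directory, cmt installs it; for an unknown project the fall-back above is used instead.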
scripts/ directory:
-------------------
Contains wrapper run scripts called by the tct xml files.
Just calling a script will print a usage line.
Each script has a dryrun option that can be used for testing purposes. Do:
> tct_script -d arg1 arg2 arg3 etc
When the dryrun option is given, the tct_script wrapper is run, setting up the entire configuration,
but the underlying command is not executed.
####
# scripts/tct_getAmiTag.py
Script for getting the latest Recotrf.py configuration from ami.
Usage:
> tct_getAmiTag.py latest ami_recotrf.cmdargs
or eg
> tct_getAmiTag.py f85 myami_recotrf.cmdargs
The run arguments (eg preinclude, etc.) are then stored in the file: ami_recotrf.cmdargs.
This file can then be used by the script: scripts/tct_recotrf.sh
####
# scripts/tct_recotrf.sh
Script that calls Recotrf.py.
Usage:
> tct_recotrf.sh 0 IDCosmic 3 500 /castor/cern.ch/grid/atlas/DAQ/2008/91890 ami_recotrf.cmdargs
where:
- 0: is the job id number
- IDCosmic: is the selected trigger stream
- 3: is the number of input bs collections per job
- 500: is the number of events to run over
- /castor/cern.ch/grid/atlas/DAQ/2008/91890: is the castor directory to get collections from.
In this example files are taken from: /castor/cern.ch/grid/atlas/DAQ/2008/91890/physics_IDCosmic,
where IDCosmic is the chosen trigstream.
- ami_recotrf.cmdargs: are the command arguments obtained from running tct_getAmiTag.py.
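The input directory is derived from the castor directory and the trigger stream. An illustrative sketch of the composition (values taken from the example above):

```shell
# Illustrative: the bs collections are looked up under
# <castordir>/physics_<trigstream>
castordir=/castor/cern.ch/grid/atlas/DAQ/2008/91890
trigstream=IDCosmic
echo "${castordir}/physics_${trigstream}"
# -> /castor/cern.ch/grid/atlas/DAQ/2008/91890/physics_IDCosmic
```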
####
# scripts/tct_generate_copyscript.py
Called by scripts/tct_recotrf.sh. This script generates a dynamic script called:
copyscript.sh
which copies a unique set of input collections from castor to local disk.
These are the input collection for Recotrf.
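The naming of the local copies follows python/CastorFileTool.py: the castor basename is prefixed with "tcf_" and placed under the tmp directory. A minimal sketch of that mapping (the input file name below is hypothetical):

```shell
# Illustrative: map a castor input path to its local staged name, as done
# in CastorFileTool.StageCollections ("tcf_" prefix under the tmp dir).
input=/castor/cern.ch/grid/atlas/DAQ/2008/91890/physics_IDCosmic/daq.0091890._0001.data
echo "${TMPDIR:-/tmp}/tcf_$(basename $input)"
# -> /tmp/tcf_daq.0091890._0001.data  (when TMPDIR is unset)
```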
####
# scripts/tct_recotrf_Prod.sh
# scripts/tct_recotrf_Tier0.sh
At the time of writing, these are copies of scripts/tct_recotrf.sh.
It is foreseen that AtlasTier0 or AtlasProduction may occasionally require a different Recotrf.py.
Use these scripts to account for such differences.
####
# scripts/tct_mergeMonHistograms.sh
Script for merging monitoring histograms:
Usage:
> tct_mergeMonHistograms.sh myMergedMonitoring.root myMergedMonitoring_IDCosmic_0.root myMergedMonitoring_IDCosmic_1.root etc
where:
- myMergedMonitoring.root: is the output file, and
- myMergedMonitoring_IDCosmic_0.root myMergedMonitoring_IDCosmic_1.root etc
are all input files to be merged.
####
# scripts/tct_dqwebdisplay.sh
Script for setting up the dq webdisplay. Usage:
> tct_dqwebdisplay.sh myMergedMonitoring.root Tier0ChainTests.TCTDQWebDisplayCosmics cosmics_run.latest.hcfg cosmics_minutes10.latest.hcfg cosmics_minutes30.latest.hcfg
where
- myMergedMonitoring.root : is the output produced by scripts/tct_mergeMonHistograms.sh
- Tier0ChainTests.TCTDQWebDisplayCosmics: is the dqwebdisplay configuration setup, found in the python/ directory.
Default setup is for local running: Tier0ChainTests.LocalDQWebDisplayCosmics
- cosmics_run.latest.hcfg cosmics_minutes10.latest.hcfg cosmics_minutes30.latest.hcfg: are han configuration files.
If not set, the fall-back han files are:
hanconfigfile=/afs/cern.ch/user/a/atlasdqm/dqmdisk/tier0/han_config/Cosmics/cosmics_run.1.80.hcfg
hanconfigfile=/afs/cern.ch/user/a/atlasdqm/dqmdisk/tier0/han_config/Cosmics/cosmics_minutes10.1.12.hcfg
hanconfigfile=/afs/cern.ch/user/a/atlasdqm/dqmdisk/tier0/han_config/Cosmics/cosmics_minutes30.1.9.hcfg
####
# scripts/tct_CreateDB.py
Script for setting up a dummy cool db, used for testing the upload of dq flags.
This script is used inside: scripts/tct_dqwebdisplay.sh
####
# scripts/tct_dqutil.py
Utility script used by: scripts/tct_dqwebdisplay.sh
Specifically, used for interpreting the dqwebdisplay configuration, eg Tier0ChainTests.TCTDQWebDisplayCosmics
####
# scripts/tct_fixhanrootnames.py
Utility script used by: scripts/tct_dqwebdisplay.sh
Specifically, used for renaming produced han root files.
####
# scripts/tct_tagupload.sh
Script for testing the upload of tag files to the db.
(In tct, called after AODtoTAG_trf.py command.)
Usage:
tct_tagupload.sh myTag.pool.root
####
# scripts/tct_dqupdatereferences.sh
Script to create new han config files based on old han config files and new han root files.
Usage:
> tct_dqupdatereferences.sh cosmics_minutes30.latest.hcfg run__minutes30_1_han.root new_cosmics_minutes30.latest.hcfg 91890
where:
- cosmics_minutes30.latest.hcfg: is the input han config file
- run__minutes30_1_han.root: is the input han root file
- new_cosmics_minutes30.latest.hcfg: is the new han config file
- 91890: is the run number the han root file corresponds to.
####
# scripts/tct_changehanreferences.py
Utility script called by scripts/tct_dqupdatereferences.sh
Does the actual replacing of histogram references and creates the new han config file.
####
# scripts/tct_finishedMail.py
Send a mail when finished. Usage:
> scripts/tct_finishedMail.py email@cern.ch messagebody
####
# scripts/tct_runAll.sh
Script for running the entire tct chain in your shell.
Usage:
> tct_runAll.sh 2 IDCosmic 1 100 /castor/cern.ch/grid/atlas/DAQ/2008/0096544
to run two reco jobs, each over 1 bs collection and 100 events, using
IDCosmic collections from the castor directory
/castor/cern.ch/grid/atlas/DAQ/2008/0096544
python/ directory:
------------------
####
# python/CastorFileTool.py
Utility class used by scripts/tct_generate_copyscript.py
####
# python/LocalDQWebDisplayCosmics.py
# python/TCTDQWebDisplayCosmics.py
Two possible DQ webdisplay configuration settings. Local is the default, used for local running. TCT is run by the rtt.
package Tier0ChainTests
author Max Baak <mbaak@cern.ch>
use AtlasPolicy AtlasPolicy-*
apply_pattern declare_scripts files="../scripts/tct_*.py ../scripts/tct_*.sh"
#apply_pattern declare_joboptions files="../share/*.py"
apply_pattern declare_python_modules files="../python/*.py"
## note that the cut command picks path fields 6 and 7, needed for the build directory
## /build/atnight/localbuilds/nightlies/14.5.X.Y-T0/AtlasTier0/rel_0
## path_append TCTXMLFILE "../test/Tier0ChainTests_`echo $CMTPATH | sed 's/-VAL//' | sed 's/-T0//' | sed 's/-Prod//' | cut -d "/" -f 6,7 | sed 's/\//_/'`.xml"
path_append TCTXMLFILE "../test/Tier0ChainTests_`echo $CMTPATH | awk -F "nightlies" '{ print $2 }' | sed 's/-VAL//' | sed 's/-T0//' | sed 's/-Prod//' | cut -d "/" -f 2,3 | sed 's/\//_/'`.xml"
macro Tier0ChainTests_TestConfiguration "../../../InstallArea/share/Tier0ChainTests_TestConfiguration.xml"
apply_pattern declare_runtime extras="`if [ -f ${TCTXMLFILE} ]; then cp -f $TCTXMLFILE ../Tier0ChainTests_TestConfiguration.xml ; echo ../Tier0ChainTests_TestConfiguration.xml; else echo ../test/Tier0ChainTests_TestConfiguration.xml; fi`"
## For example, for $CMTPATH=/afs/cern.ch/atlas/software/builds/nightlies/devval/AtlasOffline/rel_4, the following command:
## > echo "../test/Tier0ChainTests_`echo $AtlasArea | sed 's/-VAL//' | sed 's/-T0//' | sed 's/-Prod//' | cut -d "/" -f 8,9 | sed 's/\//_/'`.xml"
## will result in: ../test/Tier0ChainTests_devval_AtlasOffline.xml
## if this file is not found, the fall-back file is: ../test/Tier0ChainTests_TestConfiguration.xml
private
use TestPolicy TestPolicy-*
public
private
apply_pattern validate_xml
public
# Copyright (C) 2002-2017 CERN for the benefit of the ATLAS collaboration
import os
from datetime import datetime
class CastorFileTool:
def __init__(self, baseCastorDir="/castor/cern.ch/grid/atlas/DAQ/2008/91890", inStream=["IDCosmic"], tmpdir="${TMPDIR}/",
nJobs=30, nFilesPerJob=7, nFilesNeeded=-1, linkdir="./", useBaseSubdirs=True,
runSubdirToSearch = "", staticRuns = [], lumiBlock = None):
# need this conversion map somewhere ...
self.monthMap = {}
self.monthMap["Jan"] = "01"
self.monthMap["Feb"] = "02"
self.monthMap["Mar"] = "03"
self.monthMap["Apr"] = "04"
self.monthMap["May"] = "05"
self.monthMap["Jun"] = "06"
self.monthMap["Jul"] = "07"
self.monthMap["Aug"] = "08"
self.monthMap["Sep"] = "09"
self.monthMap["Oct"] = "10"
self.monthMap["Nov"] = "11"
self.monthMap["Dec"] = "12"
self.twogblimit = 2000000000
self.baseCastorDir = baseCastorDir
self.inStream = inStream
if ( tmpdir.rfind("/")!=(len(tmpdir)-1) ):
tmpdir+="/"
self.tmpdir = tmpdir
self.nJobs = nJobs
self.nFilesPerJob = nFilesPerJob
self.nFilesNeeded = self.nJobs * self.nFilesPerJob
if (nFilesNeeded>0): self.nFilesNeeded = nFilesNeeded
self.fileIdx = 0
self.fileIdxEndOfRun = 0
self.fileList = []
self.fileListEndOfRun = []
if ( linkdir.rfind("/")!=(len(linkdir)-1) ):
linkdir+="/"
self.linkdir = linkdir
self.useBaseSubdirs = useBaseSubdirs
self.runSubdirToSearch = runSubdirToSearch
self.staticRuns = staticRuns
self.lumiBlock = lumiBlock
def init(self):
self.GetLatestRunFilesFromCastor()
def ListFromLocal(self,localDir):
timenow = datetime.today().strftime("%d%b%y.%Hh%M")
os.system("ls -l "+localDir+" > filelist.temp")
FileList=open("filelist.temp","r").readlines()
FileList1=[localDir+"/"+file.split()[8]
for file in FileList
if file.split()[4]!="0"]
os.system("rm -f filelist.temp")
return FileList1
def ListFromCastor(self,castorDir,stream="Express"):
timenow = datetime.today().strftime("%d%b%y.%Hh%M")
os.system("nsls -l "+castorDir+" | grep "+stream+" > filelist.temp")
FileList=open("filelist.temp","r").readlines()
FileList1=["rfio:"+castorDir+file.split()[8]
for file in FileList
if file.split()[4]!="0"]
os.system("rm filelist.temp")
return FileList1
def GetTmpdir(self):
# default
defaultTmpdir = ""
try:
defaultTmpdir = os.environ['TMPDIR']
except Exception,inst:
pass
# cern lxbatch machines
hostname = ""
try:
hostname = os.environ['HOSTNAME']
except Exception,inst:
pass
#hostname = os.environ['HOSTNAME']
hostPieces = hostname.split(".")
nhostPieces = len(hostPieces)
if(nhostPieces>=3):
if (hostPieces[0].find("lxb")!=-1) and (hostPieces[nhostPieces-2]=="cern"):
pwddir = os.environ['PWD']
if (pwddir.find("pool")!=-1): defaultTmpdir = pwddir
# use given tmpdir
if (len(self.tmpdir)>0): defaultTmpdir = self.tmpdir
# fall back option
if (len(defaultTmpdir)==0): defaultTmpdir="/tmp"
return defaultTmpdir
def StageCollections(self,inputFiles=[],stageFirstCollection=False):
baseTmpdir = self.GetTmpdir()
if ( baseTmpdir.rfind("/")!=(len(baseTmpdir)-1) ):
baseTmpdir+="/"
outputFiles = []
for input in inputFiles:
filePiece = input.split("/")
nPieces = len(filePiece)
tmpFile = baseTmpdir + "tcf_" + filePiece[nPieces-1]
outputFiles.append(tmpFile)
# stage first file ...
if(len(inputFiles)>=1 and stageFirstCollection):
rfioPiece = inputFiles[0].split(":")
nPieces = len(rfioPiece)
inFile = rfioPiece[nPieces-1]
outFile = outputFiles[0]
print "StageCollections() : Waiting till <%s> is staged." % (inputFiles[0])
os.system("rfcp "+inFile+" "+outFile)
os.system("touch "+outFile+"_stage.out")
os.system("touch "+outFile+"_stage.err")
os.system("ls -l -h %s" % (baseTmpdir))
return outputFiles
def GetFileTimeInt(self, castorLine):
# get current year
#from datetime import datetime
thisyearstr = datetime.today().strftime("%Y")
linesplit = castorLine.split()
# reconstruct directory creation time for sorting
monthstr = linesplit[5]
daystr = linesplit[6]
hourminstr = linesplit[7]
yearstr = ""
if (hourminstr.find(":")!=-1):
yearstr = thisyearstr
else:
yearstr = hourminstr
hourminstr = "00:00"
hourstr = hourminstr.split(":")[0]
minstr = hourminstr.split(":")[1]
# sortable time integer
timeint = int(yearstr+self.monthMap[monthstr]+daystr+hourstr+minstr)
return timeint
def GetRunNumber(self, fileName):
nameSplit = fileName.split(".")
# use first number in filename as runnumber
runString = ""
runNumber = -1
idx = 0
while ( idx<len(nameSplit) ):
try:
runNumber = int(nameSplit[idx]) # replace
except Exception,inst:
pass
if( runNumber!=-1 ): break
idx+=1
return runNumber
def LatestSubdirsFromCastor(self, castorDir, recursiveSearch=False):
if ( castorDir.rfind("/")!=(len(castorDir)-1) ):
castorDir+="/"
# dirs in given castor directory
timenow = datetime.today().strftime("%d%b%y.%Hh%M")
dirlist = "dirlist.temp" + timenow
os.system("nsls -l " + castorDir + " > " + dirlist)
FileList=open(dirlist,"r").readlines()
# construct map with subdirectories
dirList = {}
for line in FileList:
linesplit = line.split()
# filesize == 0 for a directory
if (linesplit[4]=="0"):
# sortable time integer
timeint = self.GetFileTimeInt(line)
timeFound = False
for key in dirList.keys():
if (timeint==key):
timeFound = True
break
if (not timeFound): dirList[timeint] = []
dirList[timeint].append(castorDir+linesplit[8])
# Be sure to delete tmpfile before next iteration.
os.system("rm -f "+dirlist)
keys = dirList.keys()
keys.sort()
# default is given castordir
if (len(keys)==0):
dirList[0] = [ castorDir ]
print "Using basedir : "
print castorDir
recursiveSearch = False
# return
if (recursiveSearch):
return self.LatestSubdirsFromCastor(dirList[keys[len(keys)-1]][0],recursiveSearch)
else: return dirList
def RunFilesFromCastor(self, castorDir, streamStr=["Express"]):
if ( castorDir.rfind("/")!=(len(castorDir)-1) ):
castorDir+="/"
if ( self.runSubdirToSearch != "" ):
if ( self.runSubdirToSearch.rfind("/")!=(len(self.runSubdirToSearch)-1) ):
self.runSubdirToSearch += "/"
castorDir += self.runSubdirToSearch
# dirs in given castor directory
timenow = datetime.today().strftime("%d%b%y.%Hh%M")
filelist = "filelist.temp" + timenow
if self.lumiBlock != None and ',' in self.lumiBlock:
FileList = []
for iblock in range(int(self.lumiBlock.split(',')[0]),int(self.lumiBlock.split(',')[1])+1):
cmd = "nsls -l " + castorDir
for grepStr in streamStr:
cmd += " | grep " + grepStr
pass
cmd += " | grep _lb%0.4d" % iblock
cmd += " > " + filelist
print "RunFilesFromCastor : running command",cmd
os.system(cmd)
FileList += open(filelist,"r").readlines()
pass
else:
cmd = "nsls -l " + castorDir
for grepStr in streamStr:
cmd += " | grep " + grepStr
if self.lumiBlock != None: cmd += " | grep _lb%0.4d" % int(self.lumiBlock)
cmd += " > " + filelist
print "RunFilesFromCastor : running command",cmd
os.system(cmd)
FileList=open(filelist,"r").readlines()
pass
# construct map with subdirectories
runMap = {}
for line in FileList:
linesplit = line.split()
# filesize != 0 for a file
if (linesplit[4]!="0"):
# run number
runNumber = self.GetRunNumber(linesplit[8])
runFound = False
for key in runMap.keys():
if (runNumber==key):
runFound = True
break
if (not runFound): runMap[runNumber] = []
# two gb webserver limit
if (self.nFilesNeeded==5 and self.nFilesPerJob==1 and int(linesplit[4])>=self.twogblimit): continue
runMap[runNumber].append(castorDir+linesplit[8])
# Be sure to delete tmpfile before next iteration.
os.system("rm -f " + filelist)
return runMap
def GetLatestRunFilesFromCastor(self,nFilesNeeded=-1):
if (nFilesNeeded>0): self.nFilesNeeded = nFilesNeeded
print "CastorTool.GetLatestRunFilesFromCastor() : INFO"
print ">> Now retrieving names of latest <", self.nFilesNeeded, "> runfiles on Castor. This may take some time."
print ">> Searching in latest subdirector(y/ies) of <", self.baseCastorDir, ">, matching <", self.inStream, ">, and runsubdir <", self.runSubdirToSearch, ">."
fileList = []
fileListEndOfRun = []
dirMap = {}
# retrieve castor directories, with timestamps
if (self.useBaseSubdirs):
dirMap = self.LatestSubdirsFromCastor( self.baseCastorDir, recursiveSearch=False )
else: dirMap[len(self.staticRuns)] = [ self.baseCastorDir ]
# sort directories in time
timeints = dirMap.keys()
timeints.sort()
# add static runs to the top
idx = 1
for staticRunNumber in self.staticRuns:
dirMap[timeints[len(timeints)-1]+idx] = [ self.baseCastorDir + "/" + str(staticRunNumber)]
idx+=1
# sort directories again in time
timeints = dirMap.keys()
timeints.sort()
# loop over latest castor directories, till enough files found
noFilesInRunMap = 0
idx = len(timeints)-1
while (idx>=0):
timeint = timeints[idx]
# collect runfiles from next-to-latest directory
print "Now retrieving files from directory : " + dirMap[timeint][0],timeint
runMapTmp = self.RunFilesFromCastor( dirMap[timeint][0], self.inStream )
#print runMapTmp
# sort runs
runs = runMapTmp.keys()
runs.sort()
#if len(self.staticRuns)>0:
# runs = self.staticRuns + runs
# add enough files from next-to-latest run
jdx = len(runs)-1
while (jdx>=0 and noFilesInRunMap<self.nFilesNeeded):
run = runs[jdx]
print ">> Now including run <", run, ">"
noLoop = 0
if (len(runMapTmp[run]) > self.nFilesNeeded-noFilesInRunMap): # add only number of files needed
noLoop = self.nFilesNeeded-noFilesInRunMap
else: noLoop = len(runMapTmp[run]) # add everything
kdx = 0
while (kdx<noLoop):
#while (kdx<self.nFilesNeeded-noFilesInRunMap):
fileList.append(runMapTmp[run][kdx]) # reverse map for later use
fileListEndOfRun.append(runMapTmp[run][len(runMapTmp[run])-(kdx+1)])
kdx+=1
noFilesInRunMap += noLoop
jdx-=1
# stop when enough files have been found
if (noFilesInRunMap>=self.nFilesNeeded): break
idx-=1
self.fileList = fileList
self.fileListEndOfRun = fileListEndOfRun
self.nJobs = int(len(fileList)/self.nFilesPerJob)
print ">> Retrieved <", len(fileList), "> file names. Enough for <", self.nJobs, "> jobs with <", self.nFilesPerJob, "> files per job."
return fileList,fileListEndOfRun
def GetFilesForJob(self, nFiles=-1, fileList=[], fileListEndOfRun=[], addFilesEndOfRun=False, jobIdx=-1):
if (fileList==[]): fileList = self.fileList
if (fileListEndOfRun==[] and addFilesEndOfRun):
fileListEndOfRun = self.fileListEndOfRun
if (nFiles<=0): nFiles = self.nFilesPerJob
fileIdxTemp = self.fileIdx
if (jobIdx>=0): self.fileIdx = jobIdx * nFiles
if (self.fileIdx+nFiles>=len(fileList)):
print "CastorTool.GetFilesForJob() : WARNING"
print "fileindex > number of files",self.fileIdx,len(fileList),nFiles
self.fileIdx = (self.fileIdx+nFiles)%(len(fileList))
print "new fileindex is",self.fileIdx