LHCb / LHCb · Commits · 82eb106e

Commit 82eb106e, authored 1 year ago by Marco Clemencic
Make the FileSummaryRecord.periodic_flush test more reliable
Parent: 485cb0e7
1 merge request: !4061 "Make the FileSummaryRecord.periodic_flush test more reliable"
Pipeline #5383135 passed 1 year ago (stages: test, .post)
Showing 1 changed file with 17 additions and 17 deletions:
Kernel/FileSummaryRecord/tests/python/FSRTests/periodic_flush.py (+17 −17)
 ###############################################################################
-# (c) Copyright 2022 CERN for the benefit of the LHCb Collaboration           #
+# (c) Copyright 2022-2023 CERN for the benefit of the LHCb Collaboration      #
 #                                                                             #
 # This software is distributed under the terms of the GNU General Public      #
 # Licence version 3 (GPL Version 3), copied verbatim in the file "COPYING".   #
...
@@ -28,37 +28,37 @@ def checkDiff(a, b):
 def keep_intermediate_files(filename):
-    from time import sleep
-
-    wait_time = 0.1
-    max_time = 90
-    max_iter = int(max_time / wait_time)
-    counter = 0
-    while max_iter and counter < 6:
+    print("===> starting thread to copy FSR dumps", flush=True)
+    from time import sleep, time
+
+    wait_time = 0.01
+    max_time = time() + 300
+    counter = -1
+    # when to stop looking for new FSR dumps after event 5
+    # or if we have been running for too long
+    while counter < 5 and time() < max_time:
         try:
             data = open(filename).read()
             parsed_data = json.loads(data)  # check that the file can be parsed
-            # we make a copy of the FSR file on the first appearance and whenever
-            # we see a change, but only after the guid of the output file is included
-            # (if the test job is very slow we may end up with an FSR file with counter
-            # of 1 and no guid followed by a file with counter of 1 and a guid, but the
-            # validation fails if we have a file with counter of 1 and no guid)
-            if counter == 0 or (data != open(f"{filename}.~{counter - 1}~").read()
-                                and "guid" in parsed_data):
+            # we make a copy of the FSR file for each event, only if the output file guid is filled
+            if not parsed_data.get("guid"):
+                continue
+            counter = parsed_data.get("EvtCounter.count", {}).get("nEntries", -1)
+            if counter >= 0 and not os.path.exists(f"{filename}.~{counter}~"):
                 with open(f"{filename}.~{counter}~", "w") as f:
                     f.write(data)
                 print("=== FSR begin ===")
                 print(data.rstrip())
                 print("=== FSR end ===")
                 print(f"---> wrote {filename}.~{counter}~", flush=True)
-                counter += 1
         except FileNotFoundError:
             pass
         except json.JSONDecodeError:
             continue  # the file is corrupted (it might be being written)
         sleep(wait_time)
-        max_iter -= 1
+    print("===> stopping thread to copy FSR dumps", flush=True)


 def config():
     ...
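The pattern introduced by this change can be summarized as: poll a JSON file that another process keeps rewriting, skip snapshots that fail to parse or are not yet complete, and key each saved copy to a counter read from the file's own content instead of a local loop counter. A minimal standalone sketch of that pattern follows; the function name `snapshot_json_file` and the `"complete"`/`"count"` keys are illustrative stand-ins (the actual test checks `"guid"` and `"EvtCounter.count"/"nEntries"`), not part of the LHCb code:

```python
import json
import os
import time


def snapshot_json_file(filename, max_counter=5, timeout=30.0, wait_time=0.01):
    """Poll `filename` and save one copy per distinct counter value found in it.

    Copies are written as "<filename>.~<counter>~", mirroring the naming
    scheme of the FSR test.  Returns the last counter seen (-1 if none).
    """
    deadline = time.time() + timeout
    counter = -1
    while counter < max_counter and time.time() < deadline:
        time.sleep(wait_time)  # give the writer a chance to make progress
        try:
            data = open(filename).read()
            parsed = json.loads(data)  # check that the file can be parsed
        except FileNotFoundError:
            continue  # the writer has not created the file yet
        except json.JSONDecodeError:
            continue  # the file is mid-write; retry on the next iteration
        # skip dumps that are not yet complete (the FSR test checks "guid")
        if not parsed.get("complete"):
            continue
        # take the counter from the file content, not from the loop
        counter = parsed.get("count", -1)
        copy_name = f"{filename}.~{counter}~"
        if counter >= 0 and not os.path.exists(copy_name):
            with open(copy_name, "w") as f:
                f.write(data)
    return counter
```

Keying copies to the content-derived counter makes the watcher idempotent: re-reading an unchanged file cannot produce duplicate or out-of-order snapshots, which is what made the original fixed-iteration loop flaky on slow test machines.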