Commit bf9fd85e authored by Scott Snyder

Finish unit testing note.

parent 558f7aca
#+MACRO: version 0.1
#+TITLE: Unit testing in ATLAS
#+AUTHOR: Scott Snyder
# Put a frame around examples in LaTeX.
#+LaTeX_HEADER: \usepackage{fancyvrb}
#+LaTeX_HEADER: \RecustomVerbatimEnvironment{verbatim}{Verbatim}{frame=single}
#+LaTeX_HEADER: \usepackage{lineno}
#+LaTeX_HEADER: \linenumbers
#+LaTeX_HEADER: \usepackage{fancyhdr}
#+LaTeX_HEADER: \pagestyle{fancy}
#+LaTeX_HEADER: \rfoot{Version {{{version}}}}
#+LaTeX_HEADER: \lhead{}
* Introduction
An often-neglected practice in ATLAS is writing unit tests.
A ``unit test'' refers to a test that exercises one particular
software component. Ideally, it should execute all the code
within the component and try out all defined use cases.
Having good unit tests brings many advantages:
- First, it gives you confidence when writing the component that it is
actually doing what you intended.
- When someone is later modifying the component, perhaps to add new features,
the unit test can ensure that the changes do not inadvertently
break existing functionality.
- If some other part of the software on which your component depends
is changed in such a way that it breaks your component, then a unit test
can detect this early.
- Finally, a unit test can serve as an example of how the component
is supposed to work.
While ``integration tests'' that run, say, the entire reconstruction
can also detect problems, unit tests typically allow you to identify
problems faster, more reliably, and with less effort.
For a quick summary of what you need to know to get started writing
unit tests for Athena, see the ``Basic usage'' section below. Read the
rest of this note for further details.
* Unit tests in offline builds using =UnitTest_run=
The simplest way to write a unit test for the Atlas offline builds
is to use the =UnitTest_run= pattern from the =TestTools= package.
Suppose you want to write a unit
test for a class called =MyClass= that lives in =MyPackage=.
1. Write the test itself as a small C++ program, conventionally
=test/MyClass_test.cxx= within your package.
2. Add the test to the =requirements= file. You should have something
like this:
use TestTools TestTools-* AtlasTest
apply_pattern UnitTest_run unit_test=MyClass
Once you've added this, running =make check= should compile and
run your test. The output of the test will be written to a log file.
After the test completes, its output is compared with the reference file.
However, not all lines are compared; the output is filtered
to remove lines that look like they contain timestamps, pointer values,
and other things that can vary from run to run. This is done by the
script in =TestTools/share/=; look there to find the complete list
of patterns that are ignored.
Additional patterns to be ignored in the output comparison can be
given with the =extrapatterns= argument (the pattern shown here is
a placeholder):
apply_pattern UnitTest_run unit_test=MyClass \
    extrapatterns="lines to ignore"
(If you need to embed a double-quote mark in a pattern, use the special
cmt macro =$(q)=.)
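For instance, a hypothetical =requirements= fragment combining an extra
ignore pattern with embedded quotes might look like this (the pattern
text is a placeholder):

```
use TestTools TestTools-* AtlasTest
apply_pattern UnitTest_run unit_test=MyClass \
    extrapatterns="$(q)ignore this|and that$(q)"
```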
Messages produced using the =errorcheck= facilities in AthenaKernel
include the full name of the source file in which the message was generated.
This can also affect reference file comparisons. If this is a problem,
use this to suppress the file names from the messages:
#include "AthenaKernel/errorcheck.h"
int main()
{
  // Suppress printing of source file names in errorcheck messages.
  errorcheck::ReportMessage::hideErrorLocus();
  // ... run the tests ...
}
** Helpers for unit testing
A few small utilities are available in TestTools to help with writing
unit tests. These include:
- =FLOATassert.h=: Provides =floatEQ=, =floatNEQ=, and =isEqual= to compare
floating-point numbers to within a threshold.
- =expect_exception.h=: Provides the macro =EXPECT_EXCEPTION=, which
will trigger an assertion failure if an exception is _not_ raised.
For example,
#include "TestTools/expect_exception.h"
std::vector<int> foo;
// Check that raises std::out_of_range.
EXPECT_EXCEPTION (std::out_of_range,;
- =random.h=: This file provides a very simple random number generator
that can be used in regression tests. The quality of the random numbers
will not be very good (it's a 32-bit LCG), but the sequence should be
the same on all platforms.
This provides both free functions to generate random numbers
as well as classes compatible with the STL =RandomNumberGenerator=
and =UniformRandomNumberGenerator= interfaces.
- =initGaudi.h=: If your test needs to use Gaudi components,
you can use this to get Gaudi initialized. It takes as input
a _text_ (not Python) Gaudi job options file. Example:
#include "TestTools/initGaudi.h"
ISvcLocator* svcloc = 0;
if (!Athena_test::initGaudi ("test.txt", svcloc)) {
  std::cerr << "Can't initialize Gaudi.\n";
  return 1;
}
The job options file is searched for first in the current directory,
then in =../share=. An example job options file is:
// common job opts for SG unit tests
ApplicationMgr.DLLs += { "StoreGate" };
MessageSvc.OutputLevel = 2;
ApplicationMgr.ExtSvc += {"IncidentSvc", "ChronoStatSvc", "AuditorSvc"};
In many cases, it may be more useful to use =setupStoreGate=
from the =StoreGate= package. This will initialize Gaudi, and
if a job options file is not supplied, will automatically create
one that will set up =StoreGateSvc=. For example:
#include "StoreGate/setupStoreGate.h"
if (!Athena_test::setupStoreGate (argv[0])) {
  std::cerr << "Can't initialize Gaudi.\n";
  return 1;
}
* Athena tests
Sometimes, it is most convenient to test a component in the context of a full
run of Athena. It could be, for example, that the component depends
on something like the full detector description, which is not straightforward
to set up outside of Athena. There is another pattern,
=athenarun_test= (in =TestPolicy=) that you can use to run such tests.
Generally, what you want to do is write your test as an Athena
algorithm. This can be done either in C++, in which case the algorithm
should be included as part of the library, or in Python, in which case
the test algorithm can be included directly in the test job options file.
You should then create in the =share= directory a job options file
which will run your test. For an example of a test written in C++,
you can look at =CaloUtils/src/CaloTowerBuilderToolTestAlg.cxx=,
with the corresponding job options file
=CaloUtils/share/=. For an example of a test
written in Python, see =CaloTriggerTool/share/=.
In either case, you'd declare the test in the requirements file
with something that looks like this:
use TestPolicy TestPolicy-*
use TestTools TestTools-* AtlasTest -no_auto_imports
apply_pattern athenarun_test \
name="CaloSuperCellIDTool_test" \
options="CaloTriggerTool/" \
pre_script="../cmt/" \
post_script="${TESTTOOLSROOT}/share/ CaloSuperCellIDTool_test"
Here, =options= gives the job options file that defines the test.
The argument =pre_script= is a script that will run before the test runs.
This can be used to set up paths to input files, and so on; however,
it's often better just to do that within the job options file itself,
to make it self-contained. If you don't need any special setup,
then you can just use the standard =cmt= setup script, as above.
The =post_script= argument gives a script to run after the test completes.
This is where the output will be compared to the reference file.
The example above uses the script from TestTools; this will compare
the output from the test against a reference file in the =share= directory.
(For the above example, the reference file will be called
=share/CaloSuperCellIDTool_test.ref=.) As mentioned above, the test output
is filtered to remove lines which likely vary from run to run.
If you need to add additional patterns to ignore, you can add them
as an additional argument to
post_script="${TESTTOOLSROOT}/share/ My_test \
$(q)ignore this|and that$(q)"
Note that you need to use =$(q)= to embed a quote in the cmt argument.
Alternatively, you can write your own script to check the test results.
For an example, see the package =Control/DataModelTest/DataModelRunTests=.
Use =make check= as before to run these tests.
To run the tests automatically during the build, create a test xml file
as outlined above.
* Python tests
One can also write tests for Python components. As an additional feature,
Python components can also be checked for coverage; that is, the test will
fail if there are some executable lines of the component that did not
get run by the test.
The recommended way to set up tests for Python is with the doctest framework.
This is fully described in the Python library documentation, but a brief
summary will be given here. The idea is that within the documentation
string at the start of any function, you can give a series of Python
commands and the corresponding expected output. When the test is run,
each line of input will be fed to the Python interpreter. For example:
def sanitize_hname(s):
    """Make a string safe to use as a histogram name.

    Root does bad things if you put / in a histogram name,
    so we remove them.

    >>> print sanitize_hname('foo')
    foo
    >>> print sanitize_hname('foo/bar')
    foo DIV bar
    """
    return s.replace ('/', ' DIV ')
When you're writing your component, it's a good idea to add such tests
wherever they make sense.
Sometimes, though, the tests may be too involved to put directly in your
source code. In that case, tests can also be put in a separate file.
For example, if you have a source file =python/, you can create
a test file =test/ In this file, you can define additional
functions with just tests. For example,
def _regr_basic():
    """Very basic root_pickle test.

    >>> import ROOT
    >>> from PyAnalysisUtils import root_pickle
    >>> h1 = ROOT.TH1F ('h1', 'h1', 10, 0, 10)
    >>> h2 = ROOT.TH1F ('h2', 'h2', 10, 0, 10)
    >>> l1 = [h1, h2]
    >>> root_pickle.dump_root (l1, 'test.root')
    >>> l2 = root_pickle.load_root ('test.root')
    >>> assert len(l2) == 2
    >>> print [h.GetName() for h in l2]
    ['h1', 'h2']
    >>> import os
    >>> os.remove ('test.root')
    """
To enable the tests, you can put the following lines at the bottom of
the test file:
from PyUtils import coverage
c = coverage.Coverage (MyPackage.mycomp)
c.doctest_cover ()
(replacing =MyPackage= and =mycomp= as appropriate). This will run all
doctests in both and, and will generate a warning
if there are any executable lines in that didn't get executed.
To suppress this warning for specific lines, use this special comment:
if error:
do_something() #pragma: NO COVER
To just run the tests in (without checking coverage), you can use:
import doctest
doctest.testmod()
See the doctest documentation for further options.
The file can be executed interactively in order to run
the tests. To add it to cmt, use a statement like the following:
use TestPolicy TestPolicy-*
use TestTools TestTools-* AtlasTest
document athenarun_launcher mycomp_utest -group=check \
athenarun_exe="python" \
athenarun_pre="'source ../cmt/'" \
athenarun_opt="../test/" \
athenarun_out="' >& mycomp_t.log'" \
athenarun_post="'${TESTTOOLSROOT}/share/ mycomp_t'"
Then =make check= should run the tests. To have the tests run automatically
during a build, you need to add a test xml file as outlined above.
For some examples of testing Python components, see the package
# LocalWords: LaTeX usepackage fancyvrb RecustomVerbatimEnvironment
# LocalWords: lineno linenumbers fancyhdr pagestyle rfoot UnitTest
# LocalWords: TestTools MyClass MyPackage AtlasTest xml atn mailto
# LocalWords: MyPackageTest makecheck timelimit errorMessage ok cxx
# LocalWords: warningMessage successMessage returnValue undef test1
# LocalWords: NDEBUG iostream cassert test2 extrapatterns cmt egrep
# LocalWords: IncidentSvc JobOptionsSvc errorcheck AthenaKernel LCG
# LocalWords: FLOATassert floatEQ floatNEQ isEqual STL initGaudi SG
# LocalWords: RandomNumberGenerator UniformRandomNumberGenerator h1
# LocalWords: Gaudi ISvcLocator svcloc txt ApplicationMgr DLLs argv
# LocalWords: StoreGate MessageSvc OutputLevel ExtSvc ChronoStatSvc
# LocalWords: AuditorSvc setupStoreGate StoreGateSvc athenarun pre
# LocalWords: TestPolicy CaloUtils CaloTriggerTool TESTTOOLSROOT h2
# LocalWords: CaloSuperCellIDTool doctest hname 'foo regr TH1F 'h1
# LocalWords: PyAnalysisUtils 'h2 l1 'test l2 len GetName 'h2' os '
# LocalWords: mycomp py PyUtils doctests pragma testmod utest exe '
# LocalWords: 'source log' PhysicsAnalysis
Check fwd/bwd quotes in conversion to html.
Some topics to add later.
- Move constructors / assignments