#+TITLE: Unit testing in ATLAS
#+AUTHOR: Scott Snyder
* Unit tests in offline builds using UnitTest_run
The simplest way to write a unit test for the Atlas offline builds
is to use the =UnitTest_run= pattern from the =TestTools= package.
** Basic usage
Here is an outline of the steps required to write a unit test.
For the sake of concreteness, suppose we want to write a unit
test for a class called =MyClass= that lives in =MyPackage=.
1. Create the unit test itself, in the file =MyPackage/test/MyClass_test.cxx=.
   If this test exits with a non-zero status code, the test fails.
   Further, the test may produce output. This will be compared
   with a reference file, and the test fails if there are differences.
2. Add the test to the =requirements= file. You should have something
   like this:
   #+BEGIN_EXAMPLE
   private
   use TestTools TestTools-* AtlasTest
   apply_pattern UnitTest_run unit_test=MyClass
   #+END_EXAMPLE
   Once you've added this, running =make check= should compile and
   run your test. The output of the test will be written to
   =MyPackage/run/MyClass_test.log=.
3. Put the expected output of your test in the file
   =MyPackage/share/MyClass_test.ref=. Usually, you can just copy
   the log file produced by running the test. (A typical workflow
   is sketched just after this list.)
4. To get the tests to run as part of the release, you need to create
   a file =MyPackage/test/MyPackage.xml=. This should look something
   like this:
   #+BEGIN_EXAMPLE
   <?xml version="1.0"?>
   <atn>
     <TEST name="MyPackageTest" type="makecheck" suite="Examples">
       <package>Full/Path/To/MyPackage</package>
       <timelimit>10</timelimit>
       <author> Joe Random Programmer </author>
       <mailto> j.r.programmer@cern.ch </mailto>
       <expectations>
         <errorMessage>Athena exited abnormally</errorMessage>
         <errorMessage>differ</errorMessage>
         <warningMessage> # WARNING_MESSAGE : post.sh> ERROR</warningMessage>
         <successMessage>check ok</successMessage>
         <returnValue>0</returnValue>
       </expectations>
     </TEST>
   </atn>
   #+END_EXAMPLE
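Putting steps 2 and 3 together, a typical interactive cycle looks
something like this (a sketch; the exact paths depend on how your
package is checked out and built):
#+BEGIN_EXAMPLE
cd MyPackage/cmt
make check                     # compile and run the test
less ../run/MyClass_test.log   # inspect the output
# Once the output looks right, accept it as the reference:
cp ../run/MyClass_test.log ../share/MyClass_test.ref
#+END_EXAMPLE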
** Some finer points of unit testing
You can write your tests in one of two ways. The check can be done
within the test itself, with the test returning a non-zero error code
on failure. Alternatively, the test program can produce output which
is then compared to the reference file. Note that these alternatives
are not exclusive; a given test may use both methods.

Generally, it is preferable to do as much checking as possible within
the test program itself, and to limit the amount of output produced.
This makes it easier to try out the tests interactively: if the test
program crashes on a failure, that is immediately obvious, while
a failure that shows up only as differing output may not be noticed.
Also, limiting the amount of output produced by the test program
makes it easier to maintain the reference files.
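As a sketch of the two styles side by side (reusing the hypothetical
=MyClass= from above):
#+BEGIN_EXAMPLE
#undef NDEBUG
#include "MyPackage/MyClass.h"
#include <iostream>
#include <cassert>

int main()
{
  MyClass obj;

  // Style 1: check within the test itself.  A failed assertion
  // aborts the program, which then exits with a non-zero status.
  assert (obj.something() == 0);

  // Style 2: print the value and rely on the comparison against
  // the reference file to catch any change.
  std::cout << "something: " << obj.something() << "\n";
  return 0;
}
#+END_EXAMPLE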
In writing tests, it is convenient to use the =assert= macro for
the checks. However, remember that assertions are turned off by default
in optimized builds. Thus, the =_test.cxx= file should usually start
with the line (before any =#include= directives):
#+BEGIN_EXAMPLE
#undef NDEBUG
#+END_EXAMPLE
Following this, the next line should usually be the =#include= for the component
being tested. Putting this first ensures that you'll get a compilation
error if any needed =#include= directives are missing from the
component's header.
While it's a good idea to keep output from the tests to a minimum,
it can be useful to print a heading before each major step of the test.
This helps localize the failing piece of code when there's a crash.
So a complete test might look something like this:
#+BEGIN_EXAMPLE
#undef NDEBUG
#include "MyPackage/MyClass.h"
#include <iostream>
#include <cassert>

void test1()
{
  std::cout << "test1\n";
  MyClass obj;
  assert (obj.something() == 0);
}

void test2()
{
  std::cout << "test2\n";
  MyClass obj ("data");
  assert (obj.something_else() == "123");
}

int main()
{
  test1();
  test2();
  return 0;
}
#+END_EXAMPLE
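For the complete test above, the matching reference file
=MyPackage/share/MyClass_test.ref= would contain just the two
headings (plus whatever extra lines your build environment adds
to the log):
#+BEGIN_EXAMPLE
test1
test2
#+END_EXAMPLE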
After the test completes, its output is compared with the reference file.
However, not all lines are compared; the output is filtered
to remove lines that look like they contain timestamps, pointer values,
and other things that can vary from run to run. This is done by the
script =TestTools/share/post.sh=; look there to find the complete list
of patterns that are ignored.
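The kind of lines this removes looks something like the following
(hypothetical log lines for illustration, not the actual pattern list;
see =post.sh= for that):
#+BEGIN_EXAMPLE
ApplicationMgr      INFO Application Manager Started (Tue Sep  1 12:34:56 2015)
MyClass             DEBUG created object at 0x7f3a2c004560
#+END_EXAMPLE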
You can add more patterns to be ignored on a test-by-test basis using the
optional =extrapatterns= argument of the cmt pattern. These should be
=egrep=-style patterns; lines that match will be ignored. For example:
#+BEGIN_EXAMPLE
apply_pattern UnitTest_run unit_test=MyClass \
extrapatterns="^IncidentSvc +DEBUG|^JobOptionsSvc +INFO"
#+END_EXAMPLE
(If you need to embed a double-quote mark in a pattern, use the special
cmt macro =$(q)=.)
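For instance, a hypothetical pattern matching a message that contains
a quoted file name:
#+BEGIN_EXAMPLE
apply_pattern UnitTest_run unit_test=MyClass \
extrapatterns="could not open $(q)myfile.txt$(q)"
#+END_EXAMPLE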