#+TITLE: Unit testing in ATLAS
#+AUTHOR: Scott Snyder
* Unit tests in offline builds using UnitTest_run
The simplest way to write a unit test for the ATLAS offline builds
is to use the =UnitTest_run= pattern from the =TestTools= package.
** Basic usage
Here is an outline of the steps required to write a unit test.
For the sake of concreteness, suppose we want to write a unit
test for a class called =MyClass= that lives in =MyPackage=.
1. Create the unit test itself, in the file =MyPackage/test/MyClass_test.cxx=.
If this test exits with a non-zero status code, the test fails.
Further, the test may produce output. This will be compared
with a reference file, and the test fails if there are differences.
2. Add the test to the =requirements= file. You should have something
like this:
#+BEGIN_EXAMPLE
private
use TestTools TestTools-* AtlasTest
apply_pattern UnitTest_run unit_test=MyClass
#+END_EXAMPLE
Once you've added this, running =make check= should compile and
run your test. The output of the test will be written to
=MyPackage/run/MyClass_test.log=.
3. Put the expected output of your test in the file
=MyPackage/share/MyClass_test.ref=. Usually, you can just copy
the log file produced by running the test (see the workflow sketch
after this list).
4. To get the tests to run as part of the release, you need to create
a file =MyPackage/test/MyPackage.xml=. This should look something
like this:
#+BEGIN_EXAMPLE
<?xml version="1.0"?>
<atn>
  <TEST name="MyPackageTest" type="makecheck" suite="Examples">
    <package>Full/Path/To/MyPackage</package>
    <timelimit>10</timelimit>
    <author> Joe Random Programmer </author>
    <mailto> j.r.programmer@cern.ch </mailto>
    <expectations>
      <errorMessage>Athena exited abnormally</errorMessage>
      <errorMessage>differ</errorMessage>
      <warningMessage> # WARNING_MESSAGE : post.sh> ERROR</warningMessage>
      <successMessage>check ok</successMessage>
      <returnValue>0</returnValue>
    </expectations>
  </TEST>
</atn>
#+END_EXAMPLE
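Putting steps 2 and 3 together, the interactive workflow for a new
test looks something like this (a sketch that assumes the usual CMT
layout, where builds are run from the package's =cmt= directory):
#+BEGIN_EXAMPLE
cd MyPackage/cmt
make check                        # compile and run the test
less ../run/MyClass_test.log      # inspect the output
cp ../run/MyClass_test.log ../share/MyClass_test.ref  # install as reference
#+END_EXAMPLE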
** Some finer points of unit testing
You can write your tests in one of two ways. The check can be done
within the test itself, with the test returning a non-zero error code
on failure. Alternatively, the test program can produce output which
is then compared to the reference files. Note that these alternatives
are not exclusive; a given test may use both methods. Generally,
it is preferable to do as much checking as possible within the
test program itself, and to limit the amount of output produced.
This makes it easier to try out the tests interactively: if the test
program crashes on a failure, then it is immediately obvious, while
a failure that shows up as differing output may not be noticed.
Also, limiting the amount of output produced by the test program
makes it easier to maintain the reference files.
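As a small illustration of the two styles (a sketch, using a
hypothetical object =obj=):
#+BEGIN_EXAMPLE
// Preferred: check the result within the test itself.  A failure
// aborts immediately with a non-zero exit status.
assert (obj.value() == 42);

// Alternative: print the result and rely on the comparison
// against the reference file to catch any change.
std::cout << "value: " << obj.value() << "\n";
#+END_EXAMPLE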
In writing tests, it is convenient to use the =assert= macro for the
individual checks. However, remember that assertions are turned off
by default in optimized builds. Thus, the =_test.cxx= file should
usually start with this line (before any =#include= directives):
#+BEGIN_EXAMPLE
#undef NDEBUG
#+END_EXAMPLE
Following this, the next line should usually be the =#include= for the component
being tested. Putting this first ensures that you'll get a compilation
error if any needed =#include= directives are missing from the
component's header.
While it's a good idea to keep output from the tests to a minimum,
it can be useful to print out a heading before major steps of the test.
This aids in localizing the piece of code that failed when there's
a crash.
So a complete test might look something like this:
#+BEGIN_EXAMPLE
#undef NDEBUG
#include "MyPackage/MyClass.h"
#include <iostream>
#include <cassert>

void test1()
{
  std::cout << "test1\n";
  MyClass obj;
  assert (obj.something() == 0);
}

void test2()
{
  std::cout << "test2\n";
  MyClass obj ("data");
  assert (obj.something_else() == "123");
}

int main()
{
  test1();
  test2();
  return 0;
}
#+END_EXAMPLE
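Assuming both checks pass, the matching reference file
=MyPackage/share/MyClass_test.ref= for this example would contain
just the two headings:
#+BEGIN_EXAMPLE
test1
test2
#+END_EXAMPLE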
After the test completes, its output is compared with the reference file.
However, not all lines are compared; the output is filtered
to remove lines that look like they contain timestamps, pointer values,
and other things that can vary from run to run. This is done by the
script =TestTools/share/post.sh=; look there to find the complete list
of patterns that are ignored.
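For example, a log line like the following (a made-up example) would
differ from run to run because of the timestamp and the pointer value,
so lines like it would typically be filtered out before the comparison:
#+BEGIN_EXAMPLE
MyService     INFO created Thu Sep  6 10:15:23 2018 (object @0x7f3a2c001230)
#+END_EXAMPLE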
You can add more patterns to be ignored on a test-by-test basis with the
optional =extrapatterns= argument to the cmt pattern. These should be
=egrep=-style patterns; lines that match will be ignored. For example:
#+BEGIN_EXAMPLE
apply_pattern UnitTest_run unit_test=MyClass \
extrapatterns="^IncidentSvc +DEBUG|^JobOptionsSvc +INFO"
#+END_EXAMPLE
(If you need to embed a double-quote mark in a pattern, use the special
cmt macro =$(q)=.)
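For instance, to ignore lines containing a quoted word (a hypothetical
pattern, shown just to illustrate the quoting):
#+BEGIN_EXAMPLE
apply_pattern UnitTest_run unit_test=MyClass \
  extrapatterns="^WARNING.*$(q)not found$(q)"
#+END_EXAMPLE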