Stand-alone HTTP based interface for ATLAS TDAQ Information Service
This is a stand-alone library, depending only on curl and nlohmann/json, for reading data from and writing data to the Information Service of the ATLAS TDAQ system.
It is normally part of tdaq-common so that ATLAS software has easy access to it.
Building
Note: it is not necessary to set up a TDAQ or ATLAS software release. This package can be compiled as stand-alone software.
git clone https://gitlab.cern.ch/rhauser/webdaq.git
mkdir build
cd build
cmake ../webdaq
make
If the CURL and nlohmann-json development headers cannot be found, the build will download and compile them itself.
Configuration
The library requires the TDAQ_WEBDAQ_BASE environment variable to be set. It should contain the protocol, hostname and, if required, the port number of the webis_server to access.
E.g. inside Point 1 this could be:
export TDAQ_WEBDAQ_BASE=http://pc-tdq-mon-01.cern.ch:5000
It also supports a syntax of the form:
export TDAQ_WEBDAQ_BASE=unix:/path/to/socket
where the path is expected to be a Unix domain socket.
The TDAQ_WEBDAQ_COOKIEFILE environment variable can point to a file with cookies that will be used by libcurl.
Trying it out
On a machine which has access to a running TDAQ infrastructure, start the webis_server:
cm_setup nightly
webis_server --port=8080 --writable &
In the terminal where you built the webdaq client:
export TDAQ_WEBDAQ_BASE=http://$(hostname):8080
./test/test-webdaq
See the source code of the test program for how to use the functions.
With a Unix domain socket
cm_setup nightly
webis_server --unix=/tmp/webis.socket --writable
export TDAQ_WEBDAQ_BASE=unix:/tmp/webis.socket
./test/test-webdaq
In this case the server and the client must, of course, run on the same host.
Other configuration options
To set libcurl into verbose mode:
export TDAQ_WEBDAQ_VERBOSE=1
To have the webdaq
library use a proxy:
export TDAQ_WEBDAQ_PROXY=https://atlasgw.cern.ch:3182
To use GSSAPI (kerberos) authentication for the proxy:
export TDAQ_WEBDAQ_PROXY_NEGOTIATE=1
To disable TLS verification:
export TDAQ_WEBDAQ_VERIFY=0
API
See the main header file. All the functions described in the following have a variant that takes a CURL* handle as the first argument. To use these you have to include webdaq/webdaq-curl.hpp instead.
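For example, a single handle can be reused across several requests. This is only a sketch: the handle-first convention comes from the description above, the rest of the call mirrors the OKS example further down, and whether the curl variants live in the same namespaces is an assumption.

#include <webdaq/webdaq-curl.hpp>
#include <curl/curl.h>
#include <nlohmann/json.hpp>
#include <iostream>

CURL *handle = curl_easy_init();   // create once, reuse for many requests
nlohmann::json result;
// the same call as in the OKS section below, with the CURL handle prepended
if(oks::get(handle, "ATLAS", "Partition", "ATLAS", result)) {
    std::cout << result << std::endl;
}
curl_easy_cleanup(handle);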
Information system
Apart from the obvious parameters like partition, server and name, the webdaq::is::get() function takes a boolean parameter meta (false by default). If set to true, the result will be an array of four values:
[ "object name", "object type", "time stamp", { ... } ]
The last value is the actual IS object.
If meta is false, only the fourth entry in the array above is returned. This should be the most common use case.
The webdaq::is::put() function requires the user to specify the object's type.
The last optional parameter for both functions is a tag for the object (-1 by default, i.e. the last tag).
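As an illustration, a read and a write might look as follows. The signatures are an assumption modelled on the oh:: and oks:: examples below (result passed by reference, boolean return value), the parameter order of put() is a guess, and the server, object and type names are invented.

#include <webdaq/webdaq.hpp>
#include <nlohmann/json.hpp>
#include <iostream>

nlohmann::json value;
// meta defaults to false: only the IS object itself is returned
if(webdaq::is::get("ATLAS", "DF", "HLTSV.Events", value)) {
    std::cout << value << std::endl;
}

// with meta set to true the full [ name, type, time stamp, object ] array comes back
nlohmann::json full;
webdaq::is::get("ATLAS", "DF", "HLTSV.Events", full, true);

// writing requires the object's type to be given explicitly ("HLTSVInfo" is made up)
nlohmann::json update{ { "Events", 42 } };
webdaq::is::put("ATLAS", "DF", "MyApp.Counters", "HLTSVInfo", update);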
Online Histograms
The data format for histograms is the JSRoot format. It is up to the application to convert any ROOT histograms from and to this format.
// retrieve the histogram as a JSON string and convert it back to a ROOT object
std::string str;
if(oh::get("ATLAS", "Histogramming.HLTSV.ProcessingTime", str)) {
    TObject *obj = TBufferJSON::ConvertFromJSON(str.c_str());
    if(TH1F *h = dynamic_cast<TH1F*>(obj)) {
        h->Draw();
    }
}
// publish
TH1F *h = new TH1F(...);
h->Fill(...);
std::string data = TBufferJSON::ConvertToJSON(h, 23);
oh::put("ATLAS", "Histogramming.MyProvider.Test", data);
OKS
Data is returned as nlohmann::json objects.
nlohmann::json result;
if(oks::get("ATLAS", "Partition", "ATLAS", result)) {
    std::cout << result << std::endl;
}
ERS
The ERS issue is encoded in JSON.
json issue{
    { "sev", "ERROR" },
    { "message", "This is an error" },
    ...
};
ers::send("ATLAS", issue);
The full list of supported keys and JSON types for the values:
"msg" : "class_name"
"sev" : "severity level" // LOG,DEBUG,INFO,WARNING,ERROR,FATAL
"time": "%Y-%b-%d %H:%M:%S" // formatted time string
"message": "The free text error message"
"params: {
{ "param1", "value1" },
{ "param2", "value2" },
...
}
"qual" : [ "qual1", "qual2" ,... ]
"app" : "application_name"
"host" : "host_name"
"file" : "file_name.cxx"
"func" : "function_name()"
"pkg" : "package_name"
"line" : 32
"user" : "user_name"
"pid" : 12345
Event Monitoring
The subscription is specified in JSON:
json sub{
    { "sampler_type", "dcm" },
    { "stream_tag", "physics" },
    { "stream_names", { ... } }
};
auto id = emon::subscribe("ATLAS", sub);
if(!id.empty()) {
std::vector<char> data;
emon::nextEvent(id, data);
...
}
The full list of keys and the JSON types of their values:
"sampler_type" : "dcm"
"sampler_keys" : [ "key1", "key2", ...]
"sampler_number" : 10
"status_word" : 0
"lvl1_trigger_type" : 0
"lvl1_trigger_bits" : [ 1, 10, 245 ]
"lvl1_trigger_logic" : "IGNORE" // "AND", "OR"
"lvl1_trigger_origin : "AFTER_VETO" // "BEFORE_PRESCALES", "AFTER_PRESCALES"
"stream_tag" : "physics"
"stream_names" : [ "stream1", "stream2", ...]
"stream_logic" : "IGNORE" // "AND", "OR"
An event is returned as a std::vector<char>:
std::vector<char> event;
// optimization if you know the approximate event size
event.reserve(2 * 1024 * 1024);
if(emon::nextEvent(id, event, std::chrono::seconds(10))) {
    // interpret the raw event data as 32-bit words
    uint32_t *raw = reinterpret_cast<uint32_t*>(event.data());
} // else no event or timeout