Add new C++ functors that work a little differently to the existing LoKi ones
This is an attempt to add new C++ functors that work a little differently to the existing LoKi ones, with a few key goals:
- Creating functors that can act container-wise instead of object-wise, with flexibility on the container type used (a toy sketch of this idea follows this list)
  - Among other advantages, this should minimise the number of virtual function calls...
  - Currently `vector<T> -> vector<T>`, `Pr::Selection<T> -> Pr::Selection<T>` and `Zipping::SelectionView<T> -> Zipping::ExportedSelection<>` are supported
  - It would be interesting to explore whether the generic functor code is flexible enough that @ahennequ's vectorised IP cut could be implemented as a specialisation, but this has not been done...
- Ensuring that even complex, composite functors are seen as one entity by the C++ compiler and can be inlined/optimised/etc. (this aspect is similar to LHCb!1302 (closed))
- Keeping the functors templated until "the last minute" -- the functor argument type is only specified when the functor factory is called from whatever algorithm (`Pr::Filter<T>` is the example) will actually receive a cut string
  - This should make it easy to use different event model types in different places; it should "just work" as long as we are consistent with accessor names...
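
To make the first and last of these goals concrete, here is a rough, self-contained sketch using toy types only: `ToyTrack`, `PtAbove`, `filter` and `select` are hypothetical stand-ins invented for this example, not the real event-model, `Pr::Selection` or `Zipping` classes. The predicate is written once, object-wise and templated, and is only instantiated when a container-wise driver is called; the output container can be an owning copy or a non-owning index selection.

```cpp
#include <cstddef>
#include <vector>

// toy track type with a pt() accessor (hypothetical, not an LHCb event-model class)
struct ToyTrack { float pt_; float pt() const { return pt_; } };

// object-wise predicate, kept templated "until the last minute": it is only
// instantiated for a concrete track type when one of the drivers below is called
struct PtAbove {
  float threshold;
  template <typename TrackLike>
  bool operator()( TrackLike const& t ) const { return t.pt() > threshold; }
};

// container-wise driver: vector<T> -> vector<T> (owning copy of the passing objects)
template <typename T, typename Predicate>
std::vector<T> filter( std::vector<T> const& input, Predicate const& pred ) {
  std::vector<T> output;
  for ( auto const& object : input )
    if ( pred( object ) ) output.push_back( object );
  return output;
}

// container-wise driver: vector<T> -> toy "selection" (non-owning indices into the input)
template <typename T, typename Predicate>
std::vector<std::size_t> select( std::vector<T> const& input, Predicate const& pred ) {
  std::vector<std::size_t> indices;
  for ( std::size_t i = 0; i < input.size(); ++i )
    if ( pred( input[i] ) ) indices.push_back( i );
  return indices;
}

int main() {
  std::vector<ToyTrack> tracks{ { 500.f }, { 1500.f }, { 2500.f } };
  PtAbove               cut{ 1000.f };
  auto copied   = filter( tracks, cut ); // vector -> vector
  auto selected = select( tracks, cut ); // vector -> index-based selection
  return static_cast<int>( copied.size() + selected.size() );
}
```

The real machinery generates the container-wise drivers from the configured input/output types (see the boilerplate outline below) instead of hand-writing them.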
There are only a few functors and predicates defined, but there should be examples of most "types" of functor, and the grammar is (at least) mostly defined, so expressions like `Functor > 5`, `MinimumImpactParameterCut( 0.1, "Path/To/PVs" )` and `math::pow(PseudoRapidity - 1, 2)` should all work.

Functors are compiled just-in-time using ROOT's cling interface, or taken from a pre-compiled cache. cling does not support optimisation at the moment, so the cache should be used in any performance tests. The cache works, but needs some accompanying changes in Brunel!790 (merged). The machinery for functors with data-dependencies is implemented and demonstrated in the impact parameter related functors.
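
As an illustration of how such a grammar can be built so that the whole expression collapses to a single, inlinable type, here is a minimal sketch. The `FunctorBase` tag, `GreaterThan`, `LogicalAnd` and `ToyTrack` names are invented for this example and are not the classes used in this MR (the real code uses a `Function<...>` CRTP base and richer machinery):

```cpp
#include <type_traits>
#include <utility>

// tag base so the operator overloads below only apply to "our" functors
struct FunctorBase {};

template <typename T>
inline constexpr bool is_functor_v = std::is_base_of_v<FunctorBase, T>;

// a toy object-wise functor
struct PseudoRapidity : FunctorBase {
  template <typename TrackLike>
  auto operator()( TrackLike const& t ) const { return t.pseudoRapidity(); }
};

// composed functors hold their sub-functors by value, so the whole expression
// is one concrete type with no virtual dispatch
template <typename F>
struct GreaterThan : FunctorBase {
  F      f;
  double threshold;
  template <typename Arg>
  bool operator()( Arg const& x ) const { return f( x ) > threshold; }
};

template <typename F, typename G>
struct LogicalAnd : FunctorBase {
  F f;
  G g;
  template <typename Arg>
  bool operator()( Arg const& x ) const { return f( x ) && g( x ); }
};

// the "grammar": overloads that build the composed types
template <typename F, typename = std::enable_if_t<is_functor_v<F>>>
auto operator>( F f, double threshold ) {
  return GreaterThan<F>{ {}, std::move( f ), threshold };
}

template <typename F, typename G,
          typename = std::enable_if_t<is_functor_v<F> && is_functor_v<G>>>
auto operator&( F f, G g ) {
  return LogicalAnd<F, G>{ {}, std::move( f ), std::move( g ) };
}

// toy track type for the usage example
struct ToyTrack { double eta; double pseudoRapidity() const { return eta; } };

int main() {
  // the whole right-hand side is a single LogicalAnd<GreaterThan<...>, GreaterThan<...>>
  auto cut = ( PseudoRapidity{} > 2.5 ) & ( PseudoRapidity{} > 1.0 );
  return cut( ToyTrack{ 3.0 } ) ? 0 : 1;
}
```

The real grammar is richer (arithmetic, `math::pow`, data-dependent functors), but the key point is the same: the composed expression is a single concrete type that the compiler can see through.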
The Python interface also works a little differently from LoKi's: it is a pure-Python wrapper that allows functors to be written directly in options files (instead of specifying cut strings, as LoKi does) with minimal overhead. The Python-side implementation generates a valid C++ string, which is passed into the C++ algorithm for JIT/cache compilation. Concretely, you might specify:
```python
from Functors import ETA, MINIPCUT
from GaudiKernel.SystemOfUnits import mm

myFilter = setupAlgorithm('PrFilter__Track_v2',
                          Functor=MINIPCUT(IPCut=0.1*mm, Vertices=VertexLocation) & (ETA > 2.5),
                          ...)
```
in the Python configuration (with the advantage that you will immediately get an error if you misspell a functor name or forget to specify an argument), which would generate (in Python) the string

```cpp
::Functors::Track::MinimumImpactParameterCut( 0.1, "/Event/Rec/FutureVertex" ) & ( ::Functors::Track::PseudoRapidity{} > 2.5 )
```

that gets passed (as a property) into a C++ algorithm and JIT-compiled (or loaded from the cache).
The outline structure of the new functors is:
```cpp
OutputType MyFunctor::operator()( InputType const& input ) const {
  auto outside_loop1 = subfunctor1.prepare();
  auto outside_loop2 = subfunctor2.prepare();
  // ... however many more subfunctors
  OutputType output;
  for ( auto const& object : input ) {
    // the condition here is some arbitrary combination of and/or/not based on the functor that was specified
    // this example is typical of the functor being something like: (ETA > 2.5) | (MINIP( "Path/To/PVs" ) > 0.1*mm)
    if ( subfunctor1.apply_single( object, outside_loop1 ) || subfunctor2.apply_single( object, outside_loop2 ) ) {
      output.emplace_back( object ); // this is a dumb implementation for something like vector<Track> -> vector<Track>...
    }
  }
  return output;
}
```
where this boilerplate is automatically generated based on the input and output types and an arbitrary number of functors being logically combined. The purpose of the `outside_loopN` variables is to allow functors' data dependencies (such as the container of primary vertices for a minimum-IP functor) to be read in from the TES once-per-container instead of once-per-contained-object. This object is (supposed to be) assembled and compiled in such a way that all the `prepare` and `apply_single` function definitions are visible to the compiler and can be inlined.
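
A minimal, self-contained sketch of the `prepare`/`apply_single` split follows. The `ToyVertex`, `ToyTrack` and `ToyMinIP` types and the |delta z| "impact parameter" are all invented for illustration; the real functor resolves its vertex container from the TES at the configured location and computes a proper impact parameter.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct ToyVertex { float z; };
struct ToyTrack  { float z; };

struct ToyMinIP {
  // stand-in for the real data dependency: in the framework this would be the
  // primary-vertex container read from the TES at the configured location
  std::vector<ToyVertex> const* m_vertices = nullptr;

  // called once per input container ("outside the loop")
  std::vector<ToyVertex> const& prepare() const { return *m_vertices; }

  // called once per contained object, reusing the prepared state
  float apply_single( ToyTrack const& track, std::vector<ToyVertex> const& vertices ) const {
    // toy "impact parameter": just the smallest |delta z| to any vertex
    float best = std::numeric_limits<float>::max();
    for ( auto const& pv : vertices ) best = std::min( best, std::abs( track.z - pv.z ) );
    return best;
  }
};

int main() {
  std::vector<ToyVertex> pvs{ { 0.f }, { 5.f } };
  std::vector<ToyTrack>  tracks{ { 0.2f }, { 7.f } };
  ToyMinIP               mip{ &pvs };

  auto const& prepared = mip.prepare(); // data dependency resolved once per container
  int         passed   = 0;
  for ( auto const& t : tracks )
    if ( mip.apply_single( t, prepared ) > 1.f ) ++passed;
  return passed;
}
```

In the generated boilerplate above, `prepare()` corresponds to the `outside_loopN` lines and `apply_single()` to the calls inside the loop.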
The definition of the boilerplate for different Input/Output types is separate from the actual functor definitions, which are themselves templated on the contained object type. As a trivial example, the `PT` functor is defined simply as:
```cpp
struct TransverseMomentum : public Function<TransverseMomentum> {
  template <typename TrackLike, typename Internal>
  auto apply_single( TrackLike const& track, Internal& ) const {
    return track.pt();
  }
};
```
which makes the minimal assumption that the relevant type has an accessor called `pt()`. This could be a large `v2::Track` object, or some non-owning proxy view into an SoA container.
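
To spell out that last point, here is a hedged, self-contained sketch in which the same `TransverseMomentum` functor (with a stubbed-out `Function` base) is applied both to a fat AoS-style track and to a non-owning proxy into a toy SoA container; the `FatTrack`, `SoATracks` and `Proxy` types are invented for this example:

```cpp
#include <cstddef>
#include <vector>

// stub for the CRTP base used by the real functors
template <typename T>
struct Function {};

// the PT functor exactly as above
struct TransverseMomentum : public Function<TransverseMomentum> {
  template <typename TrackLike, typename Internal>
  auto apply_single( TrackLike const& track, Internal& ) const {
    return track.pt();
  }
};

// a "large", AoS-style track (hypothetical)
struct FatTrack { double m_pt; double pt() const { return m_pt; } };

// a toy SoA container plus a non-owning proxy view into one element (hypothetical)
struct SoATracks {
  std::vector<float> pts;
  struct Proxy {
    SoATracks const* parent;
    std::size_t      index;
    float pt() const { return parent->pts[index]; }
  };
  Proxy operator[]( std::size_t i ) const { return { this, i }; }
};

int main() {
  TransverseMomentum PT;
  int                dummy_state = 0; // stands in for the prepared per-container state

  FatTrack  fat{ 1200. };
  SoATracks soa{ { 400.f, 2500.f } };

  auto a = PT.apply_single( fat, dummy_state );    // works via FatTrack::pt()
  auto b = PT.apply_single( soa[1], dummy_state ); // works via the SoA proxy's pt()
  return ( a > 1000. && b > 1000.f ) ? 0 : 1;
}
```

Which concrete type is used is only decided when the functor factory is instantiated by the algorithm, as described above.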
A lot of the code is adapted from LoKi (although there are not very many direct dependencies/inheritances left) -- credit where it's due!
The new machinery is used to create a zip of tracks and muonPIDs (`Tr/TrackUtils/src/MakeView.cpp`) and then apply an `ISMUON | MINIPCUT( 1.0, "..." )` cut on this zip.
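
As a closing illustration of why an accessor-based cut can act directly on such a zip, here is a toy sketch; `ToyTrack`, `ToyMuonPID`, `TrackMuonZip` and their accessors are invented for this example and are not the real event-model or `Zipping` classes. The zipped proxy simply exposes the accessors of both underlying objects, so functors written against accessor names keep working.

```cpp
#include <cstddef>
#include <vector>

// hypothetical stand-ins, not the real event-model or Zipping classes
struct ToyTrack   { float pt_; float pt() const { return pt_; } };
struct ToyMuonPID { bool is_muon_; bool IsMuon() const { return is_muon_; } };

// non-owning view over two parallel containers
struct TrackMuonZip {
  std::vector<ToyTrack> const*   tracks;
  std::vector<ToyMuonPID> const* pids;

  // the zipped proxy exposes the accessors of both underlying objects, which is
  // all an accessor-based functor needs
  struct Proxy {
    ToyTrack const*   t;
    ToyMuonPID const* m;
    float pt() const { return t->pt(); }
    bool  IsMuon() const { return m->IsMuon(); }
  };

  Proxy       operator[]( std::size_t i ) const { return { &( *tracks )[i], &( *pids )[i] }; }
  std::size_t size() const { return tracks->size(); }
};

int main() {
  std::vector<ToyTrack>   tracks{ { 500.f }, { 2000.f } };
  std::vector<ToyMuonPID> pids{ { true }, { false } };
  TrackMuonZip            zip{ &tracks, &pids };

  // stand-in for an `ISMUON | (PT > 1000)`-style cut applied object-wise over the zip
  int selected = 0;
  for ( std::size_t i = 0; i < zip.size(); ++i ) {
    auto const proxy = zip[i];
    if ( proxy.IsMuon() || proxy.pt() > 1000.f ) ++selected;
  }
  return selected;
}
```

The real zip is created in `Tr/TrackUtils/src/MakeView.cpp`, and the selection loop corresponds to the generated boilerplate shown earlier.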