Integrate cta-frontend-grpc into CTA repo and RPMs
Problem to solve
Currently the API to the CTA Frontend is defined using Google protocol buffers, with XRootD SSI as the transport.
The dCache developers have implemented a version of the CTA Frontend which uses gRPC, in order to use CTA as the tape back-end to dCache. The implementation includes the main operations of archival, retrieval and deletion. However, it is not complete, as it does not include the admin interface.
The CTA team are interested in moving to gRPC as the main supported protocol in future. XRootD SSI has not been widely adopted and is therefore a technology risk. gRPC is already used elsewhere in EOS and in CTA, so by switching from SSI to gRPC we can drop one dependency.
The CTA team would like to support a gRPC version of the Frontend, but with the following constraints:
- The protocol buffer definitions should be kept the same as far as possible, with minimal changes to accommodate dCache (for example, a 64-bit integer is not sufficient to store the dCache disk file ID).
- The gRPC Frontend should be a complete implementation which supports all the functionality of the current XRootD SSI implementation.
Stakeholders
- dCache team: `cta-frontend-grpc` will be a first-class citizen and will be maintained alongside the rest of CTA.
- CTA team: this gives us a migration pathway away from XRootD SSI, which has had low adoption, and moves us to a more widely-supported protocol for metadata operations.
Proposal
- Merge the existing work done by the dCache team into the CTA repo. This will form the basis of `cta-frontend-grpc`.
- Rename the existing CTA Frontend to `cta-frontend-ssi`.
- Create a new base class to handle requests/responses, with separate subclass implementations for EOS and dCache to handle the few differences between them. This class should accept Requests and return Responses, as defined in the protocol buffers, and should be agnostic to the underlying transport (XRootD SSI or gRPC). This code can mostly be lifted from `CTA/xroot_plugins/XrdSsiCtaRequestMessage.cpp`. Do the EOS implementation first, to separate the protocol buffer layer from the transport layer. (A sketch of this class hierarchy follows the list.)
- Amend the protocol buffer definitions in `xrootd-ssi-protobuf-interface/eos_cta/protobuf` to include the few changes needed for dCache (see https://gitlab.cern.ch/cta/CTA/-/issues/1240 for details). Later this code can be moved into a separate submodule to decouple it from the XRootD SSI implementation.
- Create the dCache implementation of the Request/Response handling class, using the expanded protocol buffer definitions and the code contributed by the dCache team. (A sketch of how the gRPC transport could delegate to this class also follows the list.)
- Create a new `cta-admin-grpc`.
- Rename `cta-admin` to `cta-admin-ssi`.
- Set up both variants as alternatives; each system should run one or the other.
- Implement all the remaining `cta-admin` functions in `cta-frontend-grpc`. Most commands (those with a simple Request/Response format) should already be covered by the Request/Response class mentioned above. Some additional work is needed to implement gRPC streaming for the `ls` commands, or for any command that has an arbitrarily long response (see the streaming sketch after this list).
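As a rough illustration of the request/response handling class described above, the following C++ sketch shows one possible shape for the transport-agnostic handler and its EOS and dCache subclasses. The header name, namespace and class names are assumptions for illustration only; the message types are intended to be the Request/Response messages generated from the existing protocol buffer definitions.

```cpp
// Sketch only: a transport-agnostic request handler. Header name, namespace
// and class names are illustrative, not the final CTA API.
#include "cta_frontend.pb.h"   // assumed name of the generated protobuf header

namespace cta::frontend {

// Accepts a protocol buffer Request and fills in a Response. No XRootD SSI
// or gRPC types appear in this interface, so the same handler can be driven
// by either transport.
class RequestHandler {
public:
  virtual ~RequestHandler() = default;
  virtual void process(const cta::xrd::Request &request,
                       cta::xrd::Response &response) = 0;
};

// EOS-specific handling; the body would mostly be lifted from the logic
// currently in CTA/xroot_plugins/XrdSsiCtaRequestMessage.cpp.
class EosRequestHandler : public RequestHandler {
public:
  void process(const cta::xrd::Request &request,
               cta::xrd::Response &response) override;
};

// dCache-specific handling (e.g. the wider disk file ID type).
class DCacheRequestHandler : public RequestHandler {
public:
  void process(const cta::xrd::Request &request,
               cta::xrd::Response &response) override;
};

} // namespace cta::frontend
```

Both the SSI plugin and the gRPC frontend would then construct the appropriate subclass and pass incoming Requests to it, keeping all protocol buffer level logic in one place.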
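The next sketch shows how the gRPC transport could delegate to that handler, assuming a unary RPC wrapping the Request/Response messages. The `CtaFrontend` service name, its `Process` RPC and the header names are hypothetical stand-ins; the actual service definition contributed by the dCache team may differ.

```cpp
// Sketch only: a gRPC adapter around the transport-agnostic handler.
// The CtaFrontend service, its Process RPC and the header names are
// hypothetical, not the dCache-contributed service definition.
#include <grpcpp/grpcpp.h>
#include "cta_frontend.grpc.pb.h"   // assumed name of the generated gRPC header
#include "RequestHandler.hpp"       // assumed header for the handler sketched above

class GrpcFrontendService final : public cta::xrd::CtaFrontend::Service {
public:
  explicit GrpcFrontendService(cta::frontend::RequestHandler &handler)
    : m_handler(handler) {}

  grpc::Status Process(grpc::ServerContext *context,
                       const cta::xrd::Request *request,
                       cta::xrd::Response *response) override {
    // All protocol buffer level logic stays in the handler; this method only
    // adapts it to the gRPC calling convention.
    m_handler.process(*request, *response);
    return grpc::Status::OK;
  }

private:
  cta::frontend::RequestHandler &m_handler;
};
```

The service would be registered with a `grpc::ServerBuilder` in the `cta-frontend-grpc` daemon, while the XRootD SSI plugin would call the same handler from its own request callback.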
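For the `ls` commands, something along the lines of the following server-side streaming sketch would be needed. The `CtaAdmin` service, `TapeLs` RPC, message types and the `lookUpTapes()` helper are all hypothetical; only the streaming pattern itself is the point.

```cpp
// Sketch only: server-side gRPC streaming for an arbitrarily long "ls"
// response. Service, RPC, message types and lookUpTapes() are hypothetical.
#include <grpcpp/grpcpp.h>
#include <vector>
#include "cta_admin.grpc.pb.h"   // assumed name of the generated gRPC header

class CtaAdminStreamService final : public cta::admin::CtaAdmin::Service {
public:
  grpc::Status TapeLs(grpc::ServerContext *context,
                      const cta::admin::TapeLsRequest *request,
                      grpc::ServerWriter<cta::admin::TapeLsItem> *writer) override {
    // Write each matching record to the stream as it becomes available,
    // so the full response is never held in a single message.
    for (const auto &item : lookUpTapes(*request)) {   // hypothetical catalogue query
      if (!writer->Write(item)) {
        break;   // the client has gone away; stop streaming
      }
    }
    return grpc::Status::OK;
  }

private:
  std::vector<cta::admin::TapeLsItem> lookUpTapes(const cta::admin::TapeLsRequest &request);
};
```

On the client side, `cta-admin-grpc` would simply read items from the stream until the server closes it.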
Finally, `cta-frontend-grpc` should be a complete replacement for all the functions of `cta-frontend-ssi`. The choice of SSI or gRPC can be configured using alternatives (though there is nothing to stop you deploying both at the same time).
Once this is done, what remains is some integration work in EOS to use the new gRPC transport. The choice of SSI or gRPC should be configurable, at least until SSI is deprecated (which is not foreseen on the timescale of Run 3).