
add inference pipeline

This merge request introduces the core logic for the pipeline stage of the ACORN framework, covering the orchestration of graph construction, inference, and evaluation. The main class, PipeLineStage, is defined in pipeline_stage.py and is responsible for loading event data, processing it into PyG Data objects, and managing the inference workflow (with evaluation planned for the future). The class is designed to be flexible, supporting different data formats and processing regimes, and includes mechanisms for data validation and feature handling. The EventDatasetPipe2 class is also provided as a custom dataset loader for GNN-based workflows.
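The responsibilities described above can be sketched with a simplified, framework-free stand-in. This is illustrative only: the real PipeLineStage operates on PyG Data objects and the field names, Event class, and method signatures below are assumptions, not the actual API.

```python
from dataclasses import dataclass, field


@dataclass
class Event:
    # Simplified stand-in for a PyG Data object: per-node feature
    # arrays plus an edge list (names are illustrative).
    features: dict
    edges: list = field(default_factory=list)


class PipeLineStage:
    """Sketch of the orchestration role: load event data, validate
    required features, and keep only well-formed events for inference."""

    def __init__(self, required_features):
        self.required_features = list(required_features)

    def validate(self, event: Event) -> bool:
        # Feature handling / data validation: every required feature must
        # be present, and all feature arrays must have equal length
        # (one value per node).
        if not all(f in event.features for f in self.required_features):
            return False
        lengths = {len(v) for v in event.features.values()}
        return len(lengths) <= 1

    def load_events(self, raw_events) -> list:
        # Build Event objects and drop any that fail validation,
        # mirroring the validation mechanism described above.
        events = [Event(features=raw) for raw in raw_events]
        return [e for e in events if self.validate(e)]
```

A loader like EventDatasetPipe2 would wrap the same validated events in a dataset interface so a GNN training or inference loop can iterate over them.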

The pipeline.py file implements the main inference pipeline for the ACORN framework, orchestrating the execution of multiple configurable stages such as edge classification, graph construction, and track building. It provides robust device management, logging, and profiling, supporting both GPU and CPU execution with dynamic resource allocation. The PipeLine class extends the base pipeline stage, handling model loading, configuration parsing, and the sequential application of inference models to event data. The pipeline collects and logs detailed metrics, manages memory usage, and outputs results in a structured format, making it a central component for running and analyzing complex multi-stage inference workflows in high-energy physics experiments.
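The sequential stage application and metric collection described above can be sketched as follows. This is a minimal illustration, not the actual PipeLine implementation: the stage names, the `(name, callable)` stage representation, and the metrics format are assumptions, and the real class additionally handles model loading, device management, and profiling.

```python
import time


class PipeLine:
    """Sketch of a multi-stage inference pipeline: apply configurable
    stages in sequence to event data, timing each stage."""

    def __init__(self, stages):
        # stages: ordered list of (name, callable) pairs, e.g.
        # edge classification -> graph construction -> track building.
        # (This representation is an illustrative assumption.)
        self.stages = stages
        self.metrics = {}

    def run(self, event):
        # Feed each stage's output into the next, recording per-stage
        # wall-clock time, mirroring the metric logging described above.
        for name, stage_fn in self.stages:
            start = time.perf_counter()
            event = stage_fn(event)
            self.metrics[name] = {"seconds": time.perf_counter() - start}
        return event


# Usage with dummy stages standing in for trained models:
pipe = PipeLine([
    ("edge_classification", lambda ev: {**ev, "edge_scores": [0.9, 0.2]}),
    ("graph_construction", lambda ev: {**ev, "edges_kept": 1}),
    ("track_building", lambda ev: {**ev, "tracks": [[0, 1]]}),
])
result = pipe.run({"hits": 2})
```

In the real pipeline each callable would be a loaded model moved to GPU when available, with CPU fallback.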

The configuration file is located at `examples/CTD23_new_naming_scheme/pipeline_infer.yaml`.
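A configuration of this kind might look like the fragment below. Every key and value here is a hypothetical illustration of how a multi-stage pipeline could be declared; consult the actual pipeline_infer.yaml for the real schema.

```yaml
# Hypothetical illustration only -- not the actual ACORN config schema.
# Stages are listed in execution order; each names a model checkpoint.
stages:
  - name: edge_classification   # illustrative stage name
    checkpoint: path/to/edge_classifier.ckpt
  - name: graph_construction
    checkpoint: path/to/graph_builder.ckpt
  - name: track_building
    checkpoint: path/to/track_builder.ckpt
device: auto   # hypothetical key: pick GPU if available, else CPU
```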
