From 43a9e876a4aef7579c028f9845e7cfe8d41c65c2 Mon Sep 17 00:00:00 2001
From: Domenico Giordano <domenico.giordano@cern.ch>
Date: Fri, 29 Nov 2024 17:06:40 +0100
Subject: [PATCH] [BMK-1540] add description

---
 cms/flowsim/cms-flowsim/DESCRIPTION | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/cms/flowsim/cms-flowsim/DESCRIPTION b/cms/flowsim/cms-flowsim/DESCRIPTION
index dd456036..f138fb80 100644
--- a/cms/flowsim/cms-flowsim/DESCRIPTION
+++ b/cms/flowsim/cms-flowsim/DESCRIPTION
@@ -1 +1,17 @@
-Code from https://github.com/francesco-vaselli/FlowSim/tree/benchmark
\ No newline at end of file
+The present benchmark replicates the inference step of the CMS FlashSim simulation framework.
+An ML model is deployed to simulate an arbitrary number of reconstructed particle jets starting from a collection of generator-level jets.
+Models can also be deployed sequentially to replicate the simulation of distinct, correlated object collections in FlashSim.
+Code from https://github.com/francesco-vaselli/FlowSim/tree/benchmark
+
+The tracked figure of merit is the throughput, measured in objects simulated per second. Additional statistics on CPU/GPU usage are also logged.
+
+The script accepts the following arguments:
+
+      '--n-objects': number of objects to simulate
+      '--batch-size': batch size for inference
+      '--device': device to run on (cpu/cuda/vulkan)
+      '--n-model-instances': number of model instances to run
+      '--gpu-memory-limit': GPU memory limit in GB
+      '--num-threads': number of PyTorch threads to use
+      '--monitor-interval': resource monitoring interval in seconds
+      '--gpu-id': GPU device ID to use (default: 0)
-- 
GitLab