diff --git a/cms/flowsim/cms-flowsim/DESCRIPTION b/cms/flowsim/cms-flowsim/DESCRIPTION
index dd45603683adf8adf00d97737640449b35e38719..f138fb80dd3bfbb1449bd9b30a95335b9804a770 100644
--- a/cms/flowsim/cms-flowsim/DESCRIPTION
+++ b/cms/flowsim/cms-flowsim/DESCRIPTION
@@ -1 +1,17 @@
-Code from https://github.com/francesco-vaselli/FlowSim/tree/benchmark
\ No newline at end of file
+The present benchmark replicates the inference step of the CMS FlashSim simulation framework.
+An ML model is deployed to simulate an arbitrary number of reconstructed particle jets starting from a collection of generator-level jets.
+Models can also be deployed sequentially to replicate the simulation of different, correlated object collections in FlashSim.
+Code from https://github.com/francesco-vaselli/FlowSim/tree/benchmark
+
+The figure of merit being tracked is throughput, measured in objects simulated per second. A number of other statistics related to CPU/GPU usage are also logged.
+
+The script accepts the following arguments:
+
+      '--n-objects': the number of objects to simulate
+      '--batch-size': the batch size for inference
+      '--device': the device to run on (cpu/cuda/vulkan)
+      '--n-model-instances': the number of model instances to run
+      '--gpu-memory-limit': the GPU memory limit in GB
+      '--num-threads': the number of PyTorch threads to use
+      '--monitor-interval': the resource monitoring interval in seconds
+      '--gpu-id': the GPU device ID to use (default: 0)
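
The argument list above could be wired up with `argparse` roughly as follows. This is an illustrative sketch, not the actual script from the repository: apart from `--gpu-id` defaulting to 0, every default, type, and the `build_parser` name are assumptions.

```python
import argparse

def build_parser():
    # Sketch of the CLI described in the DESCRIPTION; types and defaults
    # (except --gpu-id's default of 0) are assumptions, not the real script.
    p = argparse.ArgumentParser(description="FlashSim inference benchmark")
    p.add_argument("--n-objects", type=int, required=True,
                   help="Number of objects to simulate")
    p.add_argument("--batch-size", type=int, required=True,
                   help="Batch size for inference")
    p.add_argument("--device", choices=["cpu", "cuda", "vulkan"],
                   default="cpu", help="Device to run on")
    p.add_argument("--n-model-instances", type=int, default=1,
                   help="Number of model instances to run")
    p.add_argument("--gpu-memory-limit", type=float, default=None,
                   help="GPU memory limit in GB")
    p.add_argument("--num-threads", type=int, default=1,
                   help="Number of PyTorch threads to use")
    p.add_argument("--monitor-interval", type=float, default=1.0,
                   help="Resource monitoring interval in seconds")
    p.add_argument("--gpu-id", type=int, default=0,
                   help="GPU device ID to use")
    return p

if __name__ == "__main__":
    # Example invocation; argparse maps --n-objects to args.n_objects, etc.
    args = build_parser().parse_args(
        ["--n-objects", "100000", "--batch-size", "4096", "--device", "cuda"])
    print(args.n_objects, args.device, args.gpu_id)
```

Throughput, the figure of merit, would then simply be `args.n_objects` divided by the wall-clock time of the inference loop.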