
Adding setting to handle unexpectedly large charge deposits by propagating more charges together from those deposit points

Added a max_charge_groups setting to [GenericPropagation] and [TransientPropagation], to reduce the negative performance impact of unexpectedly large energy deposits.
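For illustration, a minimal configuration excerpt enabling the new setting might look like the following. The parameter values here are examples only; max_charge_groups = 0 is the default and disables the cap.

```ini
# Hypothetical excerpt from a main configuration file (example values).
[GenericPropagation]
charge_per_step = 5
# Cap on the number of charge groups propagated from a single deposit.
# A value of 0 (the default) disables the cap:
max_charge_groups = 1000
```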

The setting allows a user to decide on a maximum number of charge groups that can be propagated from a single deposit. For any deposit that exceeds the product of charge_per_step and max_charge_groups, charge_per_step is overridden and increased accordingly. This speeds up the handling of large deposits, at the cost of reduced simulation precision for them, and thus sets a sort of "maximum effort" spent on propagating a single deposit. The default value of 0 leaves the behaviour as it was before this MR (i.e. no dynamic changing of charge_per_step).

An output histogram showing the charge_per_step distribution has been added, along with a LOG(INFO) message whenever a deposit exceeds the maximum allowed number of charge groups, and a log message at the end stating the percentage of deposits that exceeded it.

Here are some example timing results for a run of 1000 events with different values of max_charge_groups on my local machine (using a charge_per_step value of 5):

| max_charge_groups | Time of simulation run |
| --- | --- |
| 10 000 | 4 minutes 14 seconds |
| 1 000 | 4 minutes 11 seconds |
| 100 | 3 minutes 25 seconds |
| 10 | 1 minute 35 seconds |

Looking at just one large event (this one is a 500 MeV pion, depositing 3 676 712 electron-hole pairs), I get the following:

| max_charge_groups | Time of simulation run |
| --- | --- |
| 10 | 4 seconds |
| 100 | 12 seconds |
| 1 000 | 1 minute 24 seconds |
| 10 000 | 1 minute 28 seconds |
| 100 000 | 2 minutes 12 seconds |
| 1 000 000 | 2 minutes 31 seconds |

With 10 000, there are actually only two deposits that are affected at all, so this event is a bit of an outlier. But setting max_charge_groups to 10 000 definitely makes this event manageable, instead of freezing the simulation and eating up memory.

Testing it with a setup and event provided by @agsponer (see this forum post), I get the following with charge_per_step = 100 for a single massive event:

| max_charge_groups | Time of simulation run |
| --- | --- |
| 10 | 3 minutes 43 seconds |
| 50 | 14 minutes 0 seconds |
| 100 | 23 minutes 29 seconds |
| 100 000 | 1 hour 28 minutes 1 second |

The figures below show a comparison of some final observables when using different values of max_charge_groups in [GenericPropagation], for a thin monolithic sensor with rectangular pixels in a 5 GeV electron beam, 500 000 events. The noise added in the [DefaultDigitizer] is 10 electrons.

[Figures: chargeGroupsComp_meanEfficiency, chargeGroupsComp_meanClusterSize, residual RMS comparison]

There is little difference. The difference only becomes prominent in the residual RMS (resolution) at high thresholds. The red curve is always under the green. For the max_charge_groups = 10 run, 40.7% of deposits would have exceeded the given max value of 10 if a charge_per_step value of 5 had been used. For the max_charge_groups = 100 run, this fraction is 2.5%.

The figures below show the group_size_histo for the two situations, with max_charge_groups = 10 on the left and 100 on the right. Note the different x-axis scales. Both peak at 5, which is the most common group size used if the maximum number of charge groups is not reached.

[Figures: group_size_histo for max_charge_groups = 10 (left) and max_charge_groups = 100 (right)]

Below the same images are shown with a logarithmic scale on the y-axis:

[Figures: the same group_size_histo distributions with logarithmic y-axis]

The bin on the very right represents overflow.

I have asked @mvanrijn to see if this fixes her problem as well, when she has time. The issue there seems to be exactly that an abnormally large deposit causes the simulation to stop in its tracks for quite a while.

Edited by Haakan Wennloef
