Via e-mail it was reported that the EUDAQ online monitor does not work since the trigger veto feature (ECR during AZ) was added for the SYNC flavor. We know that some events look bad when sending ECR, so a change might be needed in the online event building for EUDAQ.
Actually, the problem does not occur when the veto signal is activated; it occurs while the AZ is being performed, even though the veto is disabled.
Reviewing the code, the problem comes from the latest implementations that were made in the FW for the AZ. I have found that the signal EXT_START_VETO has been included in the signal EXT_START in the pulse_gen module for the trigger command; this way no triggers are sent during the AZ for the SYNC FE. If I remove this signal from the assignment, the problem disappears, but I do not know how harmful that is.
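For illustration only, a minimal behavioral model (in Python, not the actual Verilog) of what the veto described above is meant to achieve; `ext_start` and `az_veto` are placeholder names for EXT_START and EXT_START_VETO:

```python
# Simplified behavioral model of the trigger veto, NOT the firmware itself:
# the assumption is that folding EXT_START_VETO into the pulse_gen start
# suppresses trigger pulses while the AZ veto is asserted.

def trigger_sent(ext_start: bool, az_veto: bool) -> bool:
    """Return True if a trigger pulse would be issued."""
    return ext_start and not az_veto

assert trigger_sent(ext_start=True, az_veto=True) is False  # no triggers during AZ
assert trigger_sent(ext_start=True, az_veto=False) is True  # normal operation
```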
Can you describe again in detail what the problem is (a log file from which the exact error can be seen would be helpful)? It would also be nice if you could send us the raw data so that we can check it.
In general, it is not a good idea to remove this signal. The idea behind it is that we need to veto triggers during AZ. Otherwise the hits recorded during AZ will get lost due to the ECR, resulting in too low efficiencies.
It is also known that the SYNC flavor often produces trash data, but data interpretation should of course not fail.
This is raw data taken with the SYNC FE during the test beam, with the AZ veto disabled. We can see some duplicate BCIDs, and the trigger words do not follow the proper format.
Good that you uploaded raw data. Could you please upload the raw data where you see the problems? The uploaded file and your printout do not belong to the same raw data.
I checked the event sending with your raw data file and everything works as intended:
- We send raw data to EUDAQ for each event
- One event's data is just the data between two trigger words plus the preceding trigger number
- The number of send calls corresponds to the number of triggers in your data
I also added a unit test using your raw data to prevent regressions.
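As a sketch of the event-building rule listed above (not the actual bdaq53 code): the helpers `is_trigger_word()` and `trigger_number()` are hypothetical, since the real word encoding is defined by the readout firmware.

```python
# Sketch: split a raw word stream into events at trigger words, attaching
# the preceding trigger number to each event.

def is_trigger_word(word: int) -> bool:
    return bool(word & 0x80000000)   # assumed trigger-word flag bit (illustrative)

def trigger_number(word: int) -> int:
    return word & 0x7FFFFFFF         # assumed trigger-number field (illustrative)

def build_events(raw_words):
    """Yield (trigger_number, event_words): the data between two trigger
    words plus the preceding trigger number."""
    trg, event = None, []
    for word in raw_words:
        if is_trigger_word(word):
            if trg is not None:
                yield trigger_number(trg), event
            trg, event = word, []
        elif trg is not None:
            event.append(word)
    if trg is not None:
        yield trigger_number(trg), event
```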
As a conclusion: if you see no correlations when using SYNC, the event building within EUDAQ is likely wrong; BDAQ53 works fine. I suggest you talk to the developer (https://github.com/duartej) of the EUDAQ BDAQ53 converter.
then the online correlation will not work in EUDAQ. Most of the time this is related to the EUDAQ TLU firmware. I am not sure if you saw this during data taking...
Currently we are not correcting the event number by the trigger number, but this might be needed if this warning is raised often. If that is the case, or you have other observations to discuss, just open a new issue.
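If such a correction is ever needed, it could look roughly like this (purely illustrative; the counter width and all names are assumptions, not the actual producer code):

```python
# Hypothetical sketch of "correcting the event number by the trigger number":
# if triggers were skipped, advance the local event counter by the gap in the
# (wrapping) trigger number so both stay aligned.

TRG_MAX = 1 << 15  # assumed width of the trigger counter before wrap-around

def corrected_event_number(event_number: int, last_trg: int, trg: int) -> int:
    """Advance event_number by the trigger-number gap, handling wrap-around."""
    gap = (trg - last_trg) % TRG_MAX
    return event_number + gap
```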
> I checked the event sending with your raw data file and everything works as intended:
>
> - We send raw data to EUDAQ for each event
> - One event's data is just the data between two trigger words plus the preceding trigger number
> - The number of send calls corresponds to the number of triggers in your data
We don't see any issue with the data processed by the EUDAQ plugin; we just force the interpretation of the data coming from the chip to follow a precise format, and otherwise crash (or dump an error), because we don't understand why the data comes the way it does.
Therefore, the EUDAQ decoder likewise expects an event to be just the data between two trigger words plus the preceding trigger number, but it expects 32 (or the number defined in the scan file) data headers (one per BC), not 3x32 data headers, as you can see in
So the problem is not actually that EUDAQ is unable to deal with it (we can fix that easily), but that the data format when you activate the SYNC FE does not follow the same data format as, for instance, the linear FE. The linear FE always follows the pattern: trigger word + 32 data headers (with the hits between the corresponding DHs). Why do you expect the SYNC FE to follow a different pattern, then?
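For reference, the check we apply boils down to something like this (a sketch under the stated assumptions; `is_data_header()` is hypothetical and the bit positions are illustrative):

```python
# Sketch of the format expectation: one trigger word followed by exactly
# n_bcid data headers (32, or the value defined in the scan file).

def is_data_header(word: int) -> bool:
    # Hypothetical selector; the assumed position of the DH code is illustrative.
    return (word >> 25) == 0x01

def event_is_well_formed(event_words, n_bcid: int = 32) -> bool:
    """True if the event contains exactly n_bcid data headers."""
    return sum(1 for w in event_words if is_data_header(w)) == n_bcid
```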
There are also some other things we don't understand: some data headers sometimes seem to be duplicated and placed where you don't expect them. See for instance the lines pasted by Esther several comments before, and look for this:
So, actually I just read that: !186 (comment 1805547)
What @hemperek reported there is what we see: a TRG word + 32 data headers + 32 data headers + 32 DH + TRG word + ...
And that is a "wrong" format for us. So, can this be safely ignored? I mean, just use the first 32 DHs, for instance?
You are right, you will see more than 32 data headers in one event, and only for the SYNC flavor. That is because of a digital bug in the SYNC flavor (a stuck state machine, as far as I remember). To prevent this state we send an ECR for every AZ. The chip really does not like this workaround, so it vomits a colorful mixture of old and new event data into the data stream, and we have to deal with it in software to actually use this flavor in the test beam. We can likely not avoid this! I recommend that you handle this case in your converter: just take the events as I send them and mark the events with too many data headers as bad. Tomek's comment is not really related; the issue where we investigate this is #178 (closed)
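One possible way to follow this recommendation in the converter (a sketch, not the actual plugin code; `is_data_header()` is again a hypothetical selector):

```python
# Keep the event boundaries exactly as sent; only flag SYNC events whose
# data-header count exceeds the configured number of BCs, instead of failing.

def is_data_header(word: int) -> bool:
    return (word >> 25) == 0x01  # hypothetical DH selector, as above

def flag_bad_sync_events(events, n_bcid: int = 32):
    """Yield (trigger_number, words, is_bad); bad events are marked, not dropped."""
    for trg, words in events:
        n_dh = sum(1 for w in words if is_data_header(w))
        yield trg, words, n_dh > n_bcid
```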