dev2main1
- Use `torch.cat((tensor1, tensor2))` instead of `torch.cat([tensor1, tensor2])`, because it works better with type hints (see the first sketch after this list).
- Run `build_embedding` on the CPU, which is faster because of the kNN (sketch below).
- Move the `building` parameter from the `metric_learning` section to the `processing` section.
- Add `GNNLazyDataset` to load the events during GNN training and apply a processing beforehand. Here, I apply a processing that makes the graph bidirectional; it will be removed later because I don't care about bidirectional graphs anymore (sketch below).
- Add the `on_step` hyperparameter to log the loss at each step instead of each epoch, just for debugging. Probably useless (sketch below).
- Use `DataFrameLoader` during preprocessing. Even though the 700,000 events are scattered across 350 folders, this loader makes it possible to loop over the events almost as if they all belonged to a single file (sketch below).
- Add the `compute_n_unique_planes` custom processing to `ModelBase` (sketch below).
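For the `torch.cat` point, a minimal sketch of the tuple form; how much this actually helps depends on the torch stubs and the type checker in use, and `concat_pair` is only an illustrative helper, not a function from this repository.

```python
import torch
from torch import Tensor


def concat_pair(tensor1: Tensor, tensor2: Tensor) -> Tensor:
    """Concatenate two tensors along the first dimension.

    The tensors are passed as a tuple: tuples are covariant in their
    element type, so static type checkers tend to accept them more
    readily than lists.
    """
    return torch.cat((tensor1, tensor2), dim=0)


a = torch.zeros(2, 3)
b = torch.ones(4, 3)
assert concat_pair(a, b).shape == (6, 3)
```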
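For the `build_embedding` point, a sketch of what keeping the kNN graph building on the CPU can look like. Everything here is an assumption: `build_knn_edges` is a hypothetical helper, and scikit-learn's `NearestNeighbors` stands in for whatever kNN backend the pipeline really uses.

```python
import numpy as np
import torch
from sklearn.neighbors import NearestNeighbors


def build_knn_edges(embeddings: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Build a kNN edge list from hit embeddings on the CPU.

    The embeddings are moved to the CPU first: the kNN search below runs
    through scikit-learn on NumPy arrays, so keeping the whole step on
    the CPU avoids device transfers inside the search.
    """
    emb = embeddings.detach().cpu().numpy()
    nn = NearestNeighbors(n_neighbors=k + 1).fit(emb)
    _, neighbors = nn.kneighbors(emb)          # (n_hits, k + 1); first column is the hit itself
    senders = np.repeat(np.arange(emb.shape[0]), k)
    receivers = neighbors[:, 1:].reshape(-1)   # drop the self-neighbour
    return torch.as_tensor(np.stack([senders, receivers]), dtype=torch.long)


edges = build_knn_edges(torch.randn(100, 8), k=5)
assert edges.shape == (2, 500)
```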
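A sketch of what a lazy event dataset with a pre-processing hook can look like, assuming one PyTorch Geometric `Data` object saved per event with `torch.save`; the file pattern, the `to_bidirectional` helper and the constructor signature are assumptions, not the actual `GNNLazyDataset`.

```python
from pathlib import Path
from typing import Callable, Optional

import torch
from torch.utils.data import Dataset
from torch_geometric.data import Data


def to_bidirectional(event: Data) -> Data:
    """Duplicate every edge in the opposite direction."""
    event.edge_index = torch.cat((event.edge_index, event.edge_index.flip(0)), dim=1)
    return event


class GNNLazyDataset(Dataset):
    """Load one event from disk per __getitem__ call instead of keeping
    the whole sample in memory, and apply an optional processing step."""

    def __init__(self, event_dir: str, processing: Optional[Callable[[Data], Data]] = None):
        self.paths = sorted(Path(event_dir).glob("event_*.pt"))  # assumed file layout
        self.processing = processing

    def __len__(self) -> int:
        return len(self.paths)

    def __getitem__(self, idx: int) -> Data:
        event = torch.load(self.paths[idx])
        if self.processing is not None:
            event = self.processing(event)
        return event


# dataset = GNNLazyDataset("data/gnn_input", processing=to_bidirectional)
```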
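Assuming the training loop is a PyTorch Lightning module, the `on_step` hyperparameter can simply be forwarded to `self.log`, which already supports per-step versus per-epoch logging. The module below is a toy example, not the real model.

```python
import torch
from pytorch_lightning import LightningModule


class ToyModule(LightningModule):
    """Minimal module showing how an ``on_step`` hyperparameter could be
    forwarded to Lightning's logger."""

    def __init__(self, on_step: bool = False):
        super().__init__()
        self.on_step = on_step
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x).squeeze(-1), y)
        # Log every step when on_step=True (debugging), otherwise once per epoch.
        self.log("train_loss", loss, on_step=self.on_step, on_epoch=not self.on_step)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```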
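A sketch of the idea behind `DataFrameLoader`: collect the file paths once, then yield one event DataFrame at a time so downstream code can loop over all 700,000 events as if they sat in a single file. The folder layout, file pattern and class interface are assumptions.

```python
from pathlib import Path
from typing import Iterator

import pandas as pd


class DataFrameLoader:
    """Iterate over event DataFrames spread across many folders as if
    they were stored in a single file."""

    def __init__(self, root: str, pattern: str = "*/event_*.csv"):
        # One CSV per event, grouped into sub-folders (assumed layout).
        self.paths = sorted(Path(root).glob(pattern))

    def __len__(self) -> int:
        return len(self.paths)

    def __iter__(self) -> Iterator[pd.DataFrame]:
        for path in self.paths:
            # Only one event is held in memory at a time.
            yield pd.read_csv(path)


# for event in DataFrameLoader("data/raw"):
#     preprocess(event)
```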
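A guess at what the `compute_n_unique_planes` processing could compute: the number of distinct detector planes each particle leaves hits on. The column names and the DataFrame-in/DataFrame-out signature are assumptions.

```python
import pandas as pd


def compute_n_unique_planes(hits: pd.DataFrame) -> pd.DataFrame:
    """Add an ``n_unique_planes`` column: for every particle, the number
    of distinct detector planes it left hits on (column names assumed)."""
    n_planes = hits.groupby("particle_id")["plane_id"].transform("nunique")
    return hits.assign(n_unique_planes=n_planes)


hits = pd.DataFrame({"particle_id": [1, 1, 1, 2, 2], "plane_id": [0, 0, 1, 3, 4]})
print(compute_n_unique_planes(hits))
```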