Improve ONNX robustness and remove torch.nan_to_num

Closes #14. I didn't see any observable slowdown in training when switching from `torch.nan_to_num` to `masked_fill`. While the `masked_fill` operation is around 20% slower in isolated benchmarks, the difference appears negligible alongside the model's other operations. @mleigh let me know if you saw something different.
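For reference, a minimal sketch of the kind of swap involved (tensor shapes, names, and the mask convention are illustrative, not the repo's actual code):

```python
import torch

x = torch.randn(2, 3, 4)                    # (batch, tracks, features)
pad = torch.tensor([[False, False, True],
                    [False, True, True]])   # True = padded track

# before: sanitise NaNs produced downstream of padded inputs
out_old = torch.nan_to_num(x)

# after: zero the padded positions explicitly before they propagate
out_new = x.masked_fill(pad.unsqueeze(-1), 0.0)
```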

The ONNX robustness improvement comes from adding a padded track to the ONNX model, meaning we never operate on empty inputs in ONNX. The export script now checks consistency down to zero tracks. @dguest might be interested, and @jabarr might want to test this for the GN1 VR export.
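A rough sketch of the padded-track idea, assuming a `(batch, tracks, features)` layout and a boolean mask where `True` marks padding (the function and variable names here are hypothetical):

```python
import torch

def add_padded_track(tracks: torch.Tensor, mask: torch.Tensor):
    """Append one fully padded track so the exported graph never
    operates on an empty track axis, even with zero real tracks."""
    batch, _, feats = tracks.shape
    dummy = torch.zeros(batch, 1, feats, dtype=tracks.dtype)
    dummy_mask = torch.ones(batch, 1, dtype=torch.bool)  # True = padded
    return torch.cat([tracks, dummy], dim=1), torch.cat([mask, dummy_mask], dim=1)
```

The export-time consistency check can then loop the track multiplicity down to zero and compare the ONNX session output against the PyTorch output with `torch.allclose` at a suitable tolerance.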

Also adds proper masking in the softmax of the GAP.
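For context, a masked softmax along these lines (a sketch assuming `True` marks padded tracks; not the exact implementation):

```python
import torch
import torch.nn.functional as F

def masked_softmax(scores: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
    """Give padded tracks exactly zero attention weight by pushing
    their scores to -inf before the softmax. A row with every entry
    masked would still yield NaNs (softmax over all -inf), so callers
    must guarantee at least one unmasked entry."""
    scores = scores.masked_fill(pad_mask, float("-inf"))
    return F.softmax(scores, dim=-1)
```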
