Add a flag to save floats at half precision
Many ML applications don't need 32-bit floats: GPUs, for example, often run everything in 16 bit. While the conversion isn't a problem on the ML side, storing full-precision values in output datasets wastes space.
This adds a flag to save float-typed data as 16-bit.
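A minimal sketch of the idea, assuming NumPy-style arrays (the `maybe_downcast` helper and the `half_precision` flag name are illustrative, not the actual implementation):

```python
import numpy as np

def maybe_downcast(arr, half_precision=False):
    """Downcast floating-point arrays to 16 bit when the flag is set.

    Non-float dtypes (ints, bools, ...) are left untouched, since the
    flag only concerns float data.
    """
    if half_precision and np.issubdtype(arr.dtype, np.floating):
        return arr.astype(np.float16)
    return arr

# Saving would then look something like:
#   np.save(path, maybe_downcast(arr, half_precision=args.half_precision))
```

Note that float16 has roughly 3 decimal digits of precision and a max value around 65504, so the flag should stay opt-in for datasets where that loss matters.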