Commit 19855b87 authored by Thea Aarrestad: adding README
# Efficient inference: Model compression and hls4ml
In this part, we start from the data and model you trained in Part 1. We will train a new, quantization-aware model with QKeras, compare its performance to the Part 1 baseline, and build the FPGA firmware for this quantized, sparse model with hls4ml.
We assume you have completed Part 1; if you have not, you can copy the necessary files from
```/eos/home-t/thaarres/cms_mlatl1t_tutorial/part1/```
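To build intuition for what quantization-aware training constrains, here is a minimal sketch of signed fixed-point quantization, roughly the rounding-and-clipping that QKeras's `quantized_bits(bits, integer)` quantizers and hls4ml's `ap_fixed` types perform. The function name and the one-sign-bit convention are illustrative assumptions, not the tutorial's code:

```python
def quantize_fixed(x, bits=6, int_bits=0):
    """Round x to a signed fixed-point grid with `bits` total bits,
    `int_bits` integer bits, and one sign bit (a simplified sketch)."""
    frac_bits = bits - int_bits - 1      # remaining bits hold the fraction
    scale = 2 ** frac_bits               # grid spacing is 1 / scale
    max_val = (2 ** (bits - 1) - 1) / scale   # largest representable value
    min_val = -(2 ** (bits - 1)) / scale      # most negative representable value
    return max(min_val, min(max_val, round(x * scale) / scale))

print(quantize_fixed(0.3))    # snaps to the nearest 1/32 step
print(quantize_fixed(2.0))    # saturates at the positive limit
```

During quantization-aware training, weights and activations are passed through such a quantizer in the forward pass, so the network learns to tolerate the reduced precision it will have on the FPGA.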
Assuming you have already followed the setup instructions (```bash start_notebooks.sh```), go ahead and run through ```part2_compression.ipynb```!
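For orientation before opening the notebook: hls4ml steers precision and parallelism through a configuration dictionary. A minimal sketch is below; the keys follow hls4ml's model-level config format, but the values are illustrative, not the tutorial's settings:

```python
# Sketch of an hls4ml model-level configuration (illustrative values).
hls_config = {
    "Model": {
        "Precision": "ap_fixed<16,6>",  # fixed-point type: 16 bits total, 6 integer
        "ReuseFactor": 1,               # 1 = fully parallel multipliers (lowest latency)
        "Strategy": "Latency",          # optimize for latency rather than resources
    }
}
print(hls_config["Model"]["Precision"])
```

A dictionary like this is what gets passed (as `hls_config`) to `hls4ml.converters.convert_from_keras_model` when converting the trained QKeras model to HLS firmware; the notebook walks through the actual values used.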
This commit also updates the tail of the setup script, which now reads:
```bash
cd $SCRIPT_DIR
# put the HLS tools on the PATH
echo "ML@L1T Setup: prepending $SCRIPT_DIR/bin to PATH"
export PATH=$SCRIPT_DIR/bin:$PATH
export MLATL1T_DIR=$SCRIPT_DIR
```