# Efficient inference: Model compression and hls4ml
In this part, we will start from the data and model you trained in Part 1. We will train a new, quantization-aware model using QKeras, compare its performance to that of the Part 1 model, and build the FPGA firmware of this quantized, sparse model using hls4ml.
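As a rough illustration, quantization-aware training in QKeras means swapping standard Keras layers for their quantized counterparts. The sketch below assumes a small fully connected classifier with 6-bit weights and activations; the input shape, layer widths, and bit choices are placeholders rather than the tutorial's actual architecture, and pruning (which makes the model sparse) is omitted here.
```python
# Minimal quantization-aware model sketch with QKeras.
# Layer sizes, bit widths, and input shape are illustrative placeholders.
import tensorflow as tf
from qkeras import QDense, QActivation, quantized_bits, quantized_relu

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    # Quantize weights and biases to 6 bits (0 integer bits, scale alpha=1)
    QDense(32,
           kernel_quantizer=quantized_bits(6, 0, alpha=1),
           bias_quantizer=quantized_bits(6, 0, alpha=1)),
    QActivation(quantized_relu(6)),  # 6-bit quantized ReLU
    QDense(5,
           kernel_quantizer=quantized_bits(6, 0, alpha=1),
           bias_quantizer=quantized_bits(6, 0, alpha=1)),
    tf.keras.layers.Activation('softmax'),  # final activation kept at full precision
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(...) then proceeds exactly as with a plain Keras model.
```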
We assume that you have already participated in Part 1, but if you have not, you can copy the necessary files from
```/eos/home-t/thaarres/cms_mlatl1t_tutorial/part1/```
Assuming you have already followed the setup instructions (```bash start_notebooks.sh```), go ahead and run through ```part2_compression.ipynb```!
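For orientation, converting the trained model to firmware with hls4ml follows the general pattern sketched below. The notebook covers the tutorial's actual configuration, so treat the FPGA part, precision settings, and output directory here as placeholders.
```python
# Minimal hls4ml conversion sketch; part name and output_dir are placeholders.
import hls4ml

# Derive a per-layer precision/reuse configuration from the Keras model
config = hls4ml.utils.config_from_keras_model(model, granularity='name')

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='hls4ml_prj',
    part='xcvu9p-flga2104-2L-e',  # placeholder FPGA part; use the tutorial's
)

hls_model.compile()            # build the C++ emulation for bit-accurate validation
# y_hls = hls_model.predict(X_test)  # compare against the Keras predictions
# hls_model.build(csim=False)        # run HLS synthesis to generate firmware
```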