Branches of the fastmachinelearning/hls4ml repository
This project is mirrored from https://*****@github.com/fastmachinelearning/hls4ml.git. Pull mirroring was last updated Jun 28, 2024.
| Branch | Commit | Commit message | Date | Merge request |
| --- | --- | --- | --- | --- |
| zw/addPytorch | 91f81e5c | Sublayer support for Pytorch | Nov 19, 2018 | !58 |
| gdg/compression | 82048d24 | Use clang compiler for both simulation and co-simulation | Dec 04, 2018 | |
| offset_bug | 572d98a3 | runs for dense | Dec 26, 2018 | |
| nvt/phil-large-mlp | 676c6347 | Some fixes from Phil | Jan 10, 2019 | |
| jmgd/bigconv2d | 2441936c | testing conv2d | Jan 17, 2019 | |
| github/fork/violatingcp/large_mlp | 794ab8f7 | fixes to keep low latency at low reuse | Jan 18, 2019 | !128 |
| nvt/large_mlp_v2 | 02e10ea2 | adding giuseppes model | Jan 29, 2019 | |
| large_mlp_v2 | e0012f93 | alternative option that doesn't scale nicely at low reuse | Jan 30, 2019 | |
| ejk/add-brams-to-parallel-mode_large_v0 | bf6d6f6b | first version not latency tuned | Jan 30, 2019 | |
| nvt/large_mlp_A | f9d97516 | a first try with a reuse loop | Jan 30, 2019 | |
| nvt/large_mlp_smallBlocks | 5a11a83c | small blocks of mat-vec operations, still fails | Jan 31, 2019 | |
| nvt/large_mlp_wDep | da25bf53 | a working thing with dependence pragma that controls II but needs... | Jan 31, 2019 | |
| large_mlp | 0e0fab6f | now fixed to be more robust | Feb 01, 2019 | |
| ejk/add-brams-to-parallel-mode_large_v1 | f84bd9ad | updated to function approach | Feb 02, 2019 | |
| ejk/add-brams-to-parallel-mode_large_v2 | 95082b13 | Format that minimizes LUT usage but goes to large networks | Feb 02, 2019 | |
| nvt/large_mlp_wDep_v2 | ab8a6fec | adding latency to the accum | Feb 02, 2019 | |
| nvt/large_mlp_wDep_v3 | 097b0be9 | remove the separate matvec function | Feb 02, 2019 | |
| nvt/large_mlp_wDep_v5 | 7acbb938 | Improve stress test script (if something goes wrong with any intermediate step) | Feb 26, 2019 | |
| bjk/bn_cast | 0970bcd2 | remove explicit casting with wrong precedence | Mar 02, 2019 | !135 |
| dep4 | faf646b3 | Merge pull request #135 from hls-fpga-machine-learning/bjk/bn_cast | Mar 04, 2019 | |