This repository contains an HLS-based CNN training accelerator in floating-point format, serving as a reference design. Training uses the back-propagation algorithm with the SGD optimizer.
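A minimal PyTorch sketch of the training step the accelerator implements in hardware (forward pass, back-propagation, SGD update); the model and hyperparameters below are placeholders for illustration, not taken from this repo:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder model; the repo's designs target ResNet-20 / VGG on CIFAR.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)  # plain SGD, as in the repo

x = torch.randn(8, 3, 32, 32)        # dummy CIFAR-sized batch
y = torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(x), y)  # forward pass
opt.zero_grad()
loss.backward()                      # back-propagation of all gradients
opt.step()                           # w <- w - lr * dL/dw
```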
- `pytorch`: verification of the back-propagation derivatives in PyTorch, covering Conv, transposed Conv, dilated Conv, BN, ReLU, average pooling, and FC layers (a gradient-check sketch follows this list).
  - `BP_function.py`
  - `fc_test.py`
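The kind of check these scripts perform is to compare a hand-derived backward pass against PyTorch autograd. A hedged sketch (the exact structure of `BP_function.py` is not reproduced here), using the identity that the input gradient of a stride-1 convolution is a transposed convolution with the same weights, which is why transposed Conv appears in the list above:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 4, 8, 8, requires_grad=True)
w = torch.randn(6, 4, 3, 3)
y = F.conv2d(x, w, padding=1)
g = torch.randn_like(y)   # upstream gradient dL/dy
y.backward(g)             # autograd reference for dL/dx

# Hand-derived dL/dx: transposed convolution with the same weights.
dx_manual = F.conv_transpose2d(g, w, padding=1)
print(torch.allclose(x.grad, dx_manual, atol=1e-5))  # True
```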
- `resnet20`: HLS design of the accelerator with both input- and output-channel tiling (a tiling sketch follows the file listing).
  - Design source files
    - `bnn.h`
    - `conv_weights.h`
    - `dimension_def.h`
    - `layer.h`
    - `resnet20.cc` or `vgg.cc`
    - `typedefs.h`
  - Testbench files
    - `conv_weights_tb.h`
    - `tb.cc`
    - `weights_tb.h`
    - `data_batch_1.bin`, `train.bin` (CIFAR-10 image data from http://www.cs.toronto.edu/~kriz/cifar.html)
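Channel tiling blocks the convolution loop nest over input and output channels, so only a small Tn x Tm tile of weights and partial sums has to be buffered on-chip at a time. A NumPy sketch of the loop structure (dimensions and tile sizes below are illustrative, not the repo's parameters):

```python
import numpy as np

N, M, H, W, K = 16, 32, 8, 8, 3       # in-channels, out-channels, spatial, kernel
Tn, Tm = 4, 8                         # channel tile sizes (assumed values)
x = np.random.randn(N, H + 2, W + 2)  # input feature map, already padded
w = np.random.randn(M, N, K, K)
y = np.zeros((M, H, W))

for mo in range(0, M, Tm):            # tile loop over output channels
    for no in range(0, N, Tn):        # tile loop over input channels
        # Everything below touches only a Tm x Tn slice of the weights,
        # which is what bounds the on-chip buffer sizes in an HLS design.
        for m in range(mo, mo + Tm):
            for n in range(no, no + Tn):
                for i in range(H):
                    for j in range(W):
                        y[m, i, j] += np.sum(w[m, n] * x[n, i:i+K, j:j+K])

# Sanity check against the untiled loop nest: results are identical.
y_ref = np.zeros_like(y)
for m in range(M):
    for n in range(N):
        for i in range(H):
            for j in range(W):
                y_ref[m, i, j] += np.sum(w[m, n] * x[n, i:i+K, j:j+K])
print(np.allclose(y, y_ref))  # True
```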
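`data_batch_1.bin` follows the standard CIFAR-10 binary layout documented at the URL above: each record is one label byte followed by 3072 image bytes, stored as three 32x32 colour planes (R, then G, then B). A small reader sketch (whether `train.bin` uses the same layout is an assumption):

```python
import numpy as np

def read_cifar10_bin(path):
    """Read a CIFAR-10 binary batch: records of 1 label byte + 3072 pixel bytes."""
    raw = np.fromfile(path, dtype=np.uint8).reshape(-1, 3073)
    labels = raw[:, 0].astype(np.int64)
    images = raw[:, 1:].reshape(-1, 3, 32, 32)  # CHW planes: R, G, B
    return images, labels

images, labels = read_cifar10_bin("data_batch_1.bin")
print(images.shape, labels[:5])  # (10000, 3, 32, 32) and the first few labels
```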
If you use this repository in your research, please cite:

```bibtex
@inproceedings{guo2023boost,
  title        = {BOOST: Block Minifloat-Based On-Device CNN Training Accelerator with Transfer Learning},
  author       = {Guo, Chuliang and Lou, Binglei and Liu, Xueyuan and Boland, David and Leong, Philip H. W. and Zhuo, Cheng},
  booktitle    = {2023 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)},
  pages        = {1--9},
  year         = {2023},
  organization = {IEEE}
}
```