Replies: 1 comment
-
Hi, BatchNorm is supported indirectly: the normalization/scaling operations (along with quantization and activation functions) are streamlined into FINN MultiThreshold operators. For quantizing below 8 bits (which you should do for FINN), it is best to build your model in Brevitas, perform quantization-aware training, and then export to QONNX. Your topology will not work out of the box with FINN, but you might be able to make it work with a few custom build steps and custom operators. There are multiple open PRs (mostly by @mdanilow) that add functionality required for YOLOv8 (split, concat, upsample, etc.). Maybe some of this can also help you with YOLOv5.
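To illustrate the streamlining idea (this is only a minimal sketch of the math, not FINN's actual implementation; the function names and the example values are illustrative assumptions): an affine BatchNorm followed by a uniform quantizer can be folded away by moving the BN through the quantizer's thresholds, leaving a single MultiThreshold-style comparison.

```python
import math

def multithreshold(x, thresholds):
    """Integer activation: count how many thresholds the input crosses."""
    return sum(1 for t in thresholds if x >= t)

def absorb_batchnorm(thresholds, gamma, beta, mean, var, eps=1e-5):
    """Fold BN y = gamma*(x - mean)/std + beta into the thresholds by
    solving y >= t for x (assumes gamma > 0; negative gamma would also
    flip the comparison direction)."""
    std = math.sqrt(var + eps)
    return [(t - beta) * std / gamma + mean for t in thresholds]

# Thresholds of a 2-bit quantizer applied to the BN output,
# plus some made-up BN parameters for one channel.
thresholds = [0.5, 1.5, 2.5]
gamma, beta, mean, var = 1.7, 0.3, 0.9, 4.0

folded = absorb_batchnorm(thresholds, gamma, beta, mean, var)

# BN-then-quantize gives the same integer as quantizing with folded thresholds.
for x in [-3.0, -0.2, 0.9, 2.4, 7.5]:
    bn_out = gamma * (x - mean) / math.sqrt(var + 1e-5) + beta
    assert multithreshold(bn_out, thresholds) == multithreshold(x, folded)
```

Because the folded thresholds are computed once at build time, no BatchNorm arithmetic remains in the deployed dataflow graph.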
-
I want to inquire about implementing YOLOv5s using the FINN framework. Does the tool support batch normalization layers, or would the model need editing before deployment? I am using a ZCU102 board. Another question: is there a tool guide for QONNX for quantizing the network to a precision of less than 8 bits, as is the case with ONNX?