-
Hi, when you say "the quantized brevitas model works", do you run inference still within PyTorch, or do you execute the exported .onnx model? To find out whether the error is introduced somewhere during the transformation steps, you can use FINN's cppsim or rtlsim infrastructure to simulate the .onnx at different steps. If everything looks fine up until RTL simulation, but it's still broken on hardware, then you might have an error in your driver (e.g. input data packing and output data unpacking).
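To illustrate the driver-side pitfall mentioned above: quantized activations are usually bit-packed before being sent over DMA, and unpacking them with a different element order than the packer used silently corrupts every value. This is a minimal NumPy sketch of the idea (illustrative only, not FINN's actual `finn.util.data_packing` code), assuming unsigned 4-bit values packed two per byte:

```python
import numpy as np

def pack_uint4(values):
    """Pack pairs of 4-bit values into bytes, first element in the low nibble."""
    v = np.asarray(values, dtype=np.uint8)
    return (v[0::2] | (v[1::2] << 4)).astype(np.uint8)

def unpack_uint4(packed, high_first=False):
    """Unpack bytes into 4-bit values; the order MUST match the packer."""
    p = np.asarray(packed, dtype=np.uint8)
    lo, hi = p & 0xF, p >> 4
    pairs = (hi, lo) if high_first else (lo, hi)
    return np.stack(pairs, axis=1).reshape(-1)

vals = np.array([1, 2, 3, 4], dtype=np.uint8)
packed = pack_uint4(vals)
print(unpack_uint4(packed))                   # → [1 2 3 4] (order matches)
print(unpack_uint4(packed, high_first=True))  # → [2 1 4 3] (order mismatched)
```

A mismatch like the second call is exactly the kind of bug that passes rtlsim (which uses the correct packing) but produces garbage, or a constant output, on real hardware.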
-
When trying to run my custom model on a PYNQ-Z2 using FINN, I'm encountering a weird issue. I made a small CNN to classify a satellite image as "iceberg" or "ship". When I run my model on the PL, I always get 0 (ship) on the output, whatever the input is. To debug this, I tried removing the TopK layer so I could see what the output of my model is before executing the LabelSelect_Batch. However, instead of the expected 1D array containing 2 values, I get a 2D (2x2) array with all zeros, as can be seen in the screenshot below.
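For reference, the TopK layer (which FINN converts into a LabelSelect stage) effectively performs an argmax over the class logits, so for a two-class model the pre-LabelSelect output should be one row of 2 scores per image. A small NumPy sketch with hypothetical logit values:

```python
import numpy as np

# Hypothetical raw output for one image, before LabelSelect/TopK:
logits = np.array([[3.5, -1.2]])        # shape (1, 2): [iceberg, ship] scores
label = np.argmax(logits, axis=-1)      # what LabelSelect with k=1 computes
print(label)  # → [0], i.e. class 0 ("iceberg")
```

An all-zero (2x2) array instead of a (1, 2) row of scores suggests the data never reaches the accelerator correctly, or the output shape/unpacking in the driver is wrong.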
My quantized Brevitas model, however, does work as expected when I test it.
My model is shown below:

```python
import torch.nn as nn

class Network(nn.Module):
    def __init__(self):
        # Define all the parameters of the ModelOne
        super(Network, self).__init__()
```
Any help is welcome, as I have been debugging this for over a week now and can't seem to get anywhere.