-
Is there a specific way to do the streamlining for ConvTranspose layers on an autoencoder while working with FINN? I am currently working with an autoencoder for the purpose of reducing noise in images. Brevitas offers QuantConvTranspose layers and supports training with them. However, I have run into problems when exporting to FINN that may prevent me from completing the streamlining of the model and the conversion to hw layers, since Im2Col only works with regular convolutional layers, which can then be easily converted into ConvolutionInputGenerators. Has anyone had experience dealing with this kind of layer?
Replies: 2 comments
-
Maybe @hleblevec can help you; he contributed this PR for supporting transposed convolutions: #835. I believe a more efficient implementation with a modified ConvolutionInputGenerator instead of upsampling/padding is also on the roadmap.
-
Hi @Mounice97 and thanks @fpjentzsch for tagging me.
For the moment, transposed convolutions are supported by implementing them as fractionally-strided convolutions. The streamlining and conversion to hw layers can be done through the InferPixelPaddingDeconv transformation. You can find an example of using this transformation in a custom conversion-to-hw-layers step here: https://github.com/Xilinx/finn-examples/blob/7a672f89553882854fedb0c864272ff8f0f9975d/build/espcn/custom_steps.py#L134
It works, but it's not very hardware efficient. A more efficient implementation is currently being developed, but you can use the current implementation in the meantime. Please don't hesitate to ask for help if you encounter issues with using the transformation.
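To make the idea concrete: a fractionally-strided convolution rewrites a transposed convolution as zero-insertion (upsampling) plus a regular convolution, which is what lets the usual streamlining/Im2Col machinery apply. Here is a minimal pure-Python 1D sketch of that equivalence (illustrative only; the function names are made up and are not part of FINN or Brevitas, and it ignores channels, batching, and padding options):

```python
def conv_transpose1d(x, w, stride):
    """Direct transposed convolution (no padding): each input sample
    scatters a scaled copy of the kernel into the output."""
    k = len(w)
    out = [0.0] * ((len(x) - 1) * stride + k)
    for i, xi in enumerate(x):
        for j, wj in enumerate(w):
            out[i * stride + j] += xi * wj
    return out


def conv_transpose1d_as_conv(x, w, stride):
    """Same result expressed as a fractionally-strided convolution:
    insert (stride - 1) zeros between input samples, pad, then run a
    regular sliding-window correlation with the flipped kernel."""
    k = len(w)
    # zero-insertion upsampling
    up = [0.0] * ((len(x) - 1) * stride + 1)
    for i, xi in enumerate(x):
        up[i * stride] = xi
    # pad k-1 zeros on both sides so the window covers every output position
    padded = [0.0] * (k - 1) + up + [0.0] * (k - 1)
    wf = w[::-1]  # correlation with flipped kernel == convolution
    return [sum(padded[j + t] * wf[t] for t in range(k))
            for j in range(len(padded) - k + 1)]


x = [1.0, 2.0, 3.0]
w = [1.0, 1.0, 1.0]
print(conv_transpose1d(x, w, 2))          # → [1.0, 1.0, 3.0, 2.0, 5.0, 3.0, 3.0]
print(conv_transpose1d_as_conv(x, w, 2))  # → [1.0, 1.0, 3.0, 2.0, 5.0, 3.0, 3.0]
```

The second function is, roughly, what the pixel-padding approach does in hardware, and the extra zeros it multiplies by are exactly why it is not very efficient compared to a dedicated input generator.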