Replies: 6 comments
-
Hi Marc,
We don't support these operations directly, as they are not "layers" but lower-level operations. How you intend to use them will dictate how we can add support for them. In principle, support for reducing an array given a reduction operator (max, avg) already exists (implemented for the pooling layers), and it can easily be generalized and expanded to include the missing "min" operation. This works without problems in both Vivado and Vitis HLS, so getting both supported would not be an issue.
The most important question is: how do you use this? Are you using Keras, with these operations as part of a custom layer? Or are you using PyTorch, with them simply part of the module? Since we currently don't support this in the HLS backends, there is no support in the Keras/PyTorch/QONNX frontend converters either. The QONNX flow is useful if there is no other way of getting your model into hls4ml, or if you plan on using it for something else on your side.
Cheers,
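To make the pooling connection concrete: a reduce over the spatial axis is numerically identical to a global pooling layer, which hls4ml already supports. A minimal TensorFlow sketch (the shapes here are purely illustrative):

```python
import numpy as np
import tensorflow as tf

x = tf.random.uniform((1, 16, 8))  # (batch, steps, channels)

# The lower-level reduce op over the steps axis...
via_reduce = tf.reduce_max(x, axis=1)
# ...matches the already-supported global pooling layer.
via_pool = tf.keras.layers.GlobalMaxPooling1D()(x)

assert np.allclose(via_reduce.numpy(), via_pool.numpy())
```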
-
Hi Vladimir, thank you for the answer! I fully agree with your points and also with the trade-off between operations and layers.
For inference, batch_dim would of course be equal to 1. I'm not sure whether adding each of these as a dedicated layer is sensible for hls4ml, but implementing each of these functions manually seems cumbersome as well. Looking forward to your thoughts.
-
Parsing Lambda layers is not possible, as they serialize a "pointer" to a function defined in Python, so we can't extract their contents. You should wrap your operation in a custom layer and save the model as such. You can then use hls4ml's extension API to register that layer and the corresponding HLS implementation without needing to change the internals of hls4ml. See this or this for examples of this flow.
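For reference, a minimal sketch of such a wrapper layer, assuming TensorFlow/Keras (the class name `ReduceMean` and the fixed-axis design are illustrative choices, not an hls4ml API):

```python
import tensorflow as tf

class ReduceMean(tf.keras.layers.Layer):
    """Wraps tf.reduce_mean so the op serializes as a named layer
    instead of a Lambda's opaque function pointer."""

    def __init__(self, axis=-1, **kwargs):
        super().__init__(**kwargs)
        self.axis = axis

    def call(self, inputs):
        return tf.reduce_mean(inputs, axis=self.axis)

    def get_config(self):
        # Makes the layer survive save/load round trips.
        config = super().get_config()
        config.update({"axis": self.axis})
        return config
```

When loading a saved model, pass the class via `custom_objects` so Keras can reconstruct it.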
-
Hi Vladimir, thank you again for the quick answer. I agree that implementing this as a custom layer is definitely an option. I still have a suggestion for an extension when considering the hls4ml ONNX interface: in ONNX, pooling layers and Reduce operators are both implemented as operators. Would you be interested in a reduce implementation that is compatible with the standard ONNX functions? If there is any interest, we are happy to share our code, either in the form of "custom layers" or as an extension of the framework, as soon as we are finished. Let me know what you think. Kind regards, Marc
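To make the ONNX side concrete, a minimal sketch building a standalone `ReduceMean` node with `onnx.helper`, pinned to opset 13 where `axes` is still an attribute (from opset 18 onward it becomes an input):

```python
import onnx
from onnx import TensorProto, helper

# ReduceMean is an ordinary graph operator, just like the pooling ops.
node = helper.make_node(
    "ReduceMean", inputs=["x"], outputs=["y"], axes=[1], keepdims=0
)
graph = helper.make_graph(
    [node],
    "reduce_example",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 16, 8])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 8])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model)
```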
-
As you'll see from the examples, you'll need to create a custom layer in Keras and a layer in hls4ml to which it maps. You can make the latter general and contribute it, so that in the future it may map to an "official" operator from the ONNX opset. Note that the current ONNX parser in hls4ml is not maintained and we're replacing it with a newer one (see #832), along with adding support for QONNX nodes.
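For orientation, a rough sketch of the hls4ml side of that mapping, following the pattern in hls4ml's extension API documentation (the class and handler names here are hypothetical, and exact signatures may differ between hls4ml versions):

```python
import hls4ml

class HReduceMean(hls4ml.model.layers.Layer):
    """Hypothetical hls4ml layer that the custom Keras layer maps to."""

    def initialize(self):
        inp = self.get_input_variable()
        # Reducing over the last axis drops that dimension.
        self.add_output_variable(inp.shape[:-1], inp.dim_names[:-1])

def parse_reduce_mean(keras_layer, input_names, input_shapes, data_reader):
    # Translate the parsed Keras config into an hls4ml layer dict.
    layer = {
        "class_name": "HReduceMean",
        "name": keras_layer["config"]["name"],
    }
    if input_names is not None:
        layer["inputs"] = input_names
    output_shape = input_shapes[0][:-1]
    return layer, output_shape

# Register the frontend handler and the hls4ml layer class.
hls4ml.converters.register_keras_layer_handler("ReduceMean", parse_reduce_mean)
hls4ml.model.layers.register_layer("HReduceMean", HReduceMean)
```

A complete flow would also register the HLS function and config templates (and the C++ source) with the chosen backend; the extension API examples walk through those steps.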
-
Thanks for pointing me to the current PR; this helps a lot. I'll get back to you as soon as possible.
-
Hello everyone,
we are currently developing models for a typical hls4ml application (L1 Trigger) which require reduce_mean, reduce_max, and reduce_min layers. To the best of my knowledge, these layers are currently not supported (link).
As we plan to implement such layers in any case, we'd be happy to contribute to hls4ml. Before opening an issue, I have some questions:
Looking forward to your feedback.