Environment (Hardware)
iMX8 Plus platform (Yocto-generated Linux image)
Information
I have a Qt6 application (CMake based) and have included the InferenceHelper with TensorFlow Lite support. I include this Qt project in a Yocto project that generates a Linux image for the iMX8 Plus platform. The project compiles fine, and the InferenceHelper runs with the INFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK setting.
In Yocto, tensorflow-lite and tensorflow-lite-vx-delegate for the iMX8P are integrated. If I use the delegate-enabled command from the installed examples, TensorFlow Lite uses the NPU hardware acceleration. Important for this are the USE_GPU_INFERENCE environment variable and the --external_delegate_path=/usr/lib/libvx_delegate.so option.
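(A typical invocation looks like the sketch below; the example binary and model file are illustrative assumptions, only the delegate path and USE_GPU_INFERENCE come from the setup described above.)

```sh
# Illustrative only: benchmark_model and the model file are assumptions; the
# essential parts are USE_GPU_INFERENCE=0 and --external_delegate_path.
USE_GPU_INFERENCE=0 ./benchmark_model \
    --graph=mobilenet_v1_1.0_224_quant.tflite \
    --external_delegate_path=/usr/lib/libvx_delegate.so
```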
Question
Is it possible to include USE_GPU_INFERENCE and --external_delegate_path=/usr/lib/libvx_delegate.so in the XNNPACK settings? Or do I need to create a completely custom delegate integration?
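What I have in mind is the equivalent of TensorFlow Lite's plain external-delegate API, sketched below (this is stock TFLite code, not InferenceHelper; the model path is a placeholder):

```cpp
// Sketch: loading libvx_delegate.so at runtime through TensorFlow Lite's
// generic external-delegate API.
#include <cstdlib>
#include <memory>
#include "tensorflow/lite/delegates/external/external_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Assumption: the VX delegate reads USE_GPU_INFERENCE from the environment
  // (0 selects the NPU, 1 the GPU on the iMX8P BSP).
  setenv("USE_GPU_INFERENCE", "0", 1);

  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  // Load the VX delegate shared library and hand the graph over to it.
  TfLiteExternalDelegateOptions options =
      TfLiteExternalDelegateOptionsDefault("/usr/lib/libvx_delegate.so");
  TfLiteDelegate* delegate = TfLiteExternalDelegateCreate(&options);
  if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) {
    // Delegate could not be applied; execution stays on the CPU.
  }
  interpreter->AllocateTensors();
  // ... fill inputs, interpreter->Invoke(), read outputs ...

  interpreter.reset();                     // destroy the interpreter first,
  TfLiteExternalDelegateDelete(delegate);  // the delegate must outlive it
  return 0;
}
```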