iMX8 Plus NPU delegate #39

Open
WalterPrech opened this issue Mar 29, 2024 · 0 comments
Environment (Hardware)

  • Hardware: iMX8 Plus SoC with NPU.
  • Software: Yocto, Qt, CMake

Information

I have a Qt6 application (CMake based) and have included the InferenceHelper with TensorFlow Lite support.
I include this Qt project in a Yocto project, generating a Linux image for the iMX8 Plus platform.

The project compiles without problems, and the InferenceHelper runs with the INFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK setting.
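
For reference, this is roughly how I enable the helper from the application's CMakeLists.txt. The option name comes from the project; the subdirectory path and target names below are placeholders for this sketch, not the exact project layout:

```cmake
# Sketch of the application-side CMake wiring.
# The add_subdirectory path and target names are placeholders.
set(INFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK ON CACHE BOOL "" FORCE)
add_subdirectory(third_party/InferenceHelper/inference_helper inference_helper)
target_link_libraries(my_qt_app PRIVATE InferenceHelper)
```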

external_delegate_path

In the Yocto image, tensorflow-lite and tensorflow-lite-vx-delegate for the iMX8 Plus are integrated.
If I run the following command from the installed examples:

USE_GPU_INFERENCE=0 ./label_image -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt --external_delegate_path=/usr/lib/libvx_delegate.so

TensorFlow Lite then uses the NPU hardware acceleration. The important parts for this are:

  • USE_GPU_INFERENCE=0
  • --external_delegate_path=/usr/lib/libvx_delegate.so
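
For context, the same delegate can also be loaded programmatically through TensorFlow Lite's external-delegate C API, which is what label_image does for --external_delegate_path. A minimal sketch using the plain TFLite API (not the InferenceHelper wrapper):

```cpp
// Minimal sketch: load libvx_delegate.so via TFLite's external-delegate API.
#include <cstdlib>
#include <memory>

#include "tensorflow/lite/delegates/external/external_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Select the NPU instead of the GPU, as in the label_image invocation.
  setenv("USE_GPU_INFERENCE", "0", /*overwrite=*/1);

  auto model = tflite::FlatBufferModel::BuildFromFile(
      "mobilenet_v1_1.0_224_quant.tflite");
  if (!model) return 1;

  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter) return 1;

  // Equivalent of --external_delegate_path=/usr/lib/libvx_delegate.so
  TfLiteExternalDelegateOptions options =
      TfLiteExternalDelegateOptionsDefault("/usr/lib/libvx_delegate.so");
  TfLiteDelegate* delegate = TfLiteExternalDelegateCreate(&options);
  if (!delegate) return 1;

  if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) return 1;
  if (interpreter->AllocateTensors() != kTfLiteOk) return 1;

  // ... fill input tensor(s), then run ...
  interpreter->Invoke();

  // The delegate must outlive the interpreter.
  interpreter.reset();
  TfLiteExternalDelegateDelete(delegate);
  return 0;
}
```

Note the teardown order: the delegate has to outlive the interpreter, hence the explicit reset before deleting it.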

Question

Is it possible to include USE_GPU_INFERENCE and --external_delegate_path=/usr/lib/libvx_delegate.so in the XNNPACK settings?
Or do I need to create a completely custom delegate integration?
