
Could you share some details about running inference on RK3588 NPU? #33

hlacikd opened this issue Jun 14, 2023 · 7 comments

@hlacikd

hlacikd commented Jun 14, 2023

Hi guys,

I have an RK3588 board, which you used in your paper as the edge device to run on.

I have trained my weights using edgeyolo_tiny_lrelu, converted them to ONNX, then to RKNN using rknn-toolkit 1.4.

The command used for the ONNX export was:

python export.py --onnx-only --weights /workspaces/rocm-ml/edgeyolo/output/train/edgeyolo_lp_2/best.pth --opset 12

However, I am currently unable to use QUANTIZE_ON during the ONNX → RKNN conversion. I have used the same dataset as for validation during training, trying calibration sets ranging from 10 to 50 images, without success: the resulting RKNN model always outputs bogus detections.
You mention QUANTIZATION enabled in your paper; could you share some details about how you managed to make it work?
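
For reference, this is roughly the conversion step I am using — a minimal sketch with the rknn-toolkit2 Python API; the paths and mean/std values are placeholders from my own setup, and dataset.txt is a plain-text list of calibration image paths, one per line:

# Sketch: ONNX -> RKNN conversion with quantization (rknn-toolkit2, host PC).
# Paths, normalization values and the calibration list are placeholders.
from rknn.api import RKNN

rknn = RKNN()
# Normalization must match what the ONNX model expects at its input.
rknn.config(mean_values=[[0, 0, 0]],
            std_values=[[255, 255, 255]],
            target_platform='rk3588')
rknn.load_onnx(model='best.onnx')
# do_quantization=True is the "QUANTIZE_ON" case that produces bogus output
# for me; dataset.txt lists the 10-50 calibration images, one path per line.
rknn.build(do_quantization=True, dataset='./dataset.txt')
rknn.export_rknn('best.rknn')
rknn.release()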

I am running it on an RK3588 (Radxa ROCK 5B) and my inference speed is around 11 FPS.

You mention 32 FPS in your paper.

  • I am curious whether you implemented multi-threaded inference on the RK3588 using all 3 NPU cores yourself, which would give 3 × 11 FPS? (See the sketch after this list.)
  • Also, did you use edgeyolo_tiny_lrelu.yaml as the model_cfg?
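
In case it helps, this is roughly the pattern I have in mind — a sketch assuming one RKNNLite context pinned to each NPU core; the model path, input shape and frame count are placeholders from my setup, not from this repo:

# Sketch: one RKNNLite context per NPU core on RK3588 (rknn-toolkit-lite2).
# Model path and input are placeholders; error handling omitted.
import threading
import numpy as np
from rknnlite.api import RKNNLite

CORE_MASKS = [RKNNLite.NPU_CORE_0, RKNNLite.NPU_CORE_1, RKNNLite.NPU_CORE_2]

def worker(core_mask):
    rknn = RKNNLite()
    rknn.load_rknn('best.rknn')             # placeholder model path
    rknn.init_runtime(core_mask=core_mask)  # pin this context to one NPU core
    frame = np.zeros((1, 640, 640, 3), dtype=np.uint8)  # dummy NHWC frame
    for _ in range(100):
        outputs = rknn.inference(inputs=[frame])
    rknn.release()

threads = [threading.Thread(target=worker, args=(m,)) for m in CORE_MASKS]
for t in threads:
    t.start()
for t in threads:
    t.join()

Whether three pinned contexts actually scale to ~3 × 11 FPS is exactly what I am asking about.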

Could you please share some more details about running EdgeYOLO on the RK3588? Neither the paper nor the code here gives any insight into running it on that edge device.

Thank you in advance~

@susanin1970

@hlacikd, hello
I also tried to experiment with converting EdgeYOLO from ONNX to RKNN for inference on an RK3588S board by Firefly (it was on loan to me).
I converted an EdgeYOLO-Tiny 320x320 trained on my own data (without LReLU) to ONNX with opset 11, and converted that to RKNN in two modes: with quantization and without.

Then I tried inference on images on the RK3588S board using RKNNLite2 and got wrong detections that went outside the frame.

I also tried converting my own EdgeYOLO-Tiny 320x320 to RKNN on the fly using RKNN Toolkit 2 installed on a host PC, and ran inference on images using the simulator. In this case I got correct detections.

In both cases, I used the same pre- and post-processing methods
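
For clarity, the host-side simulator run looked roughly like this — a sketch with placeholder paths; calling init_runtime() without a target uses the RKNN Toolkit 2 simulator on the host PC, while init_runtime(target='rk3588') would run on a connected board instead:

# Sketch: host-PC inference through the RKNN Toolkit 2 simulator.
# Paths and preprocessing values are placeholders from my experiments.
import cv2
from rknn.api import RKNN

rknn = RKNN()
rknn.config(mean_values=[[0, 0, 0]],
            std_values=[[255, 255, 255]],
            target_platform='rk3588')
rknn.load_onnx(model='edgeyolo_tiny_320.onnx')  # placeholder model name
rknn.build(do_quantization=False)

rknn.init_runtime()  # no target -> simulator; target='rk3588' -> real board

img = cv2.imread('test.jpg')  # placeholder image
img = cv2.cvtColor(cv2.resize(img, (320, 320)), cv2.COLOR_BGR2RGB)
outputs = rknn.inference(inputs=[img])
rknn.release()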

So it would be nice to see recommendations here for converting EdgeYOLO to RKNN format and running inference on Rockchip boards, given that EdgeYOLO is aimed at edge devices.

@susanin1970

I'm not sure if EdgeYOLO with Leaky ReLU can be correctly converted from ONNX to RKNN, since the latest version of the LeakyRelu node in ONNX is defined in opset 16, while RKNN Toolkit 2 requires the source ONNX model to have an opset no higher than 12: https://github.com/onnx/onnx/blob/main/docs/Operators.md#LeakyRelu
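
As a quick sanity check, the declared opset of an exported model (and whether it contains LeakyRelu nodes at all) can be inspected with the onnx Python package — a small sketch, with a placeholder file name:

# Sketch: inspect the declared opset and LeakyRelu usage of an ONNX export.
import onnx

model = onnx.load('best.onnx')  # placeholder path
for opset in model.opset_import:
    print(opset.domain or 'ai.onnx', opset.version)

leaky = [n.name for n in model.graph.node if n.op_type == 'LeakyRelu']
print(len(leaky), 'LeakyRelu node(s) found')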

@hlacikd

hlacikd commented Jul 18, 2023

I'm not sure if EdgeYOLO with Leaky ReLU can be correctly converted from ONNX to RKNN, since the latest version of the LeakyRelu node in ONNX is defined in opset 16, while RKNN Toolkit 2 requires the source ONNX model to have an opset no higher than 12: https://github.com/onnx/onnx/blob/main/docs/Operators.md#LeakyRelu

hey @susanin1970, I am using edgeyolo_tiny_lrelu on the RK3588 successfully (without RKNN quantization; it predicts bogus output when quantize=on).

What happens to me, though, is that after around 1000 frames I get several false detections, which is something I am fighting with right now and have no solution for yet.

I am using rknn-toolkit2 1.4, since 1.5 is totally broken.

@susanin1970

@hlacikd, can you please tell me why rknn-toolkit2 1.5.0 is totally broken?

Because I was just using rknn-toolkit2 version 1.5.0 in my experiments 😄

@hlacikd

hlacikd commented Jul 18, 2023

It throws this error after each inference run:

rockchip-linux/rknn-toolkit2#168

@LSH9832

LSH9832 commented Dec 6, 2023

@hlacikd @susanin1970
hi guys, thanks for all of your work on the RK3588 (RK3588S). The model actually needs some adjustments when exporting to ONNX if you want to convert it to an RKNN model. I've released the newest code and models for RK3588, and it reaches 55 FPS (Orange Pi 5 Plus) for edgeyolo_tiny_lrelu. If you are still interested, please click here and see.

@LSH9832

LSH9832 commented Dec 6, 2023

By the way, I didn't say anything about the RK3588 in my paper; the edge device I used in my paper is the Jetson AGX Xavier.

