Could you share some details about running inference on RK3588 NPU? #33
Comments
@hlacikd, hello! I tried to run inference on images on an RK3588S board using RKNNLite2 and got wrong detections that went outside the frame. I also tried converting my own EdgeYOLO-Tiny 320x320 model to RKNN on the fly using RKNN Toolkit 2 installed on the host PC and running inference on images in the simulator, and in that case the detections were correct. In both cases I used the same pre- and post-processing. So it would be nice to see recommendations here for converting EdgeYOLO to RKNN format and running inference on Rockchip boards, given that EdgeYOLO is aimed at use on edge devices.
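For reference, the host-side conversion and simulator run roughly followed the standard RKNN Toolkit 2 flow, something like the sketch below (file names, input size, and mean/std values are placeholders and have to match your own pre-processing):

```python
import cv2
from rknn.api import RKNN

rknn = RKNN()

# Normalization here assumes no mean/std scaling; adjust to whatever the model was trained with.
rknn.config(mean_values=[[0, 0, 0]], std_values=[[1, 1, 1]], target_platform='rk3588')

rknn.load_onnx(model='edgeyolo_tiny_320.onnx')
rknn.build(do_quantization=False)            # float model, no calibration dataset needed
rknn.export_rknn('edgeyolo_tiny_320.rknn')

# Without a connected board, init_runtime() runs on the host-side simulator.
rknn.init_runtime()

img = cv2.imread('test.jpg')
img = cv2.resize(img, (320, 320))
outputs = rknn.inference(inputs=[img])       # raw head outputs; decoding and NMS happen in post-processing
rknn.release()
```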
I'm not sure whether EdgeYOLO with Leaky ReLU can be correctly converted from ONNX to RKNN, since the LeakyRelu node in ONNX was last updated in opset 16, while RKNN Toolkit 2 requires the source ONNX model to use an opset no higher than 12: https://github.com/onnx/onnx/blob/main/docs/Operators.md#LeakyRelu
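If it helps, the opset an exported model actually declares can be checked with the onnx package (the file name below is just a placeholder):

```python
import onnx

# Print the opset(s) recorded in the exported model; the default ("ai.onnx") domain
# is the one RKNN Toolkit 2 cares about.
model = onnx.load("edgeyolo_tiny_lrelu.onnx")
for imp in model.opset_import:
    print(imp.domain or "ai.onnx", imp.version)
```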
Hey @susanin1970, I am using edgeyolo_tiny_lrelu on the RK3588 successfully (without RKNN quantization; with quantization on it predicts bogus results). What happens to me, though, is that after around 1000 frames I get several false detections, which is something I am fighting with right now and have no solution for yet. I am using rknn-toolkit2 1.4, since 1.5 is totally broken.
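On the board itself I run the converted model through rknn-toolkit-lite2, roughly like this (a minimal sketch; file names, input size, and the core mask are placeholders):

```python
import cv2
from rknnlite.api import RKNNLite

rknn_lite = RKNNLite()
rknn_lite.load_rknn('edgeyolo_tiny_lrelu.rknn')

# RK3588 has three NPU cores; pick a single core or let the runtime decide.
rknn_lite.init_runtime(core_mask=RKNNLite.NPU_CORE_0)

img = cv2.imread('frame.jpg')
img = cv2.resize(img, (320, 320))            # must match the size the model was exported with
outputs = rknn_lite.inference(inputs=[img])  # post-processing (decode + NMS) is the same as for the ONNX model
rknn_lite.release()
```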
@hlacikd, can you please tell me why rknn-toolkit2 1.5.0 is totally broken? Because I was just using rknn-toolkit2 version 1.5.0 in my experiments 😄
It throws this error after each inference run:
@hlacikd @susanin1970
By the way, I didn't say anything about the RK3588 in my paper; the edge device I used in the paper is a Jetson AGX Xavier.
Hi guys,
I have an RK3588 board, which you used as the edge device in your paper.
I have trained my own weights using edgeyolo_tiny_lrelu, converted them to ONNX, then to RKNN using rknn-toolkit2 1.4.
The command used for export to ONNX was:
python export.py --onnx-only --weights /workspaces/rocm-ml/edgeyolo/output/train/edgeyolo_lp_2/best.pth --opset 12
However, I am currently unable to use QUANTIZE_ON during the ONNX -> RKNN conversion. I have used the same dataset as for validation during training, trying anywhere from 10 to 50 calibration images, without success: the resulting RKNN model always outputs bogus detections.
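For context, the quantized build I am attempting roughly follows the standard toolkit flow (a sketch; paths and mean/std values are placeholders, and dataset.txt is a plain-text list with one calibration image path per line):

```python
from rknn.api import RKNN

rknn = RKNN()
rknn.config(mean_values=[[0, 0, 0]], std_values=[[1, 1, 1]], target_platform='rk3588')
rknn.load_onnx(model='best.onnx')

# dataset.txt lists the calibration images (I tried anywhere from 10 to 50 of them)
rknn.build(do_quantization=True, dataset='./dataset.txt')
rknn.export_rknn('best_int8.rknn')
rknn.release()
```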
You mention having quantization enabled in your paper; could you share some details about how you managed to make it work?
I am running it on an RK3588 (a Radxa Rock 5B), and my inference speed is around 11 FPS.
You mention 32 FPS in your paper.
Could you please share some more details about running EdgeYOLO on the RK3588? Neither the paper nor the code here gives much insight into running it on that edge device.
Thank you in advance~