
Why is int8 mode performance worse on Jetson TX2? #62

Open
jfangah opened this issue Aug 21, 2019 · 1 comment

Comments


jfangah commented Aug 21, 2019

Thanks for your work!
But I'm confused about why the int8 model performs worse on the Jetson TX2.
The inference time of the fp32 416 model is about 250 ms and that of the fp16 416 model is about 200 ms, but the inference time of the int8 model is about 300 ms.
I want to know why the int8 model works on x86 but fails on the TX2.


ElonKou commented Sep 6, 2019

It seems that the TX2 doesn't support INT8 (see: int8 calibration support on TX2).

I also tested the yolo3-416 (fp16) speed on TX2; it's about 211 ms. The same config runs at about 14 ms per image on my GTX1060. Have you tested tiny-yolo3-trt performance on TX2?
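
For anyone hitting the same slowdown, one way to check this before building an INT8 engine is to ask the TensorRT builder whether the platform has fast INT8/FP16 paths. A minimal sketch (assuming a TensorRT install linkable as `nvinfer`; the `Logger` class is just a stand-in):

```cpp
#include <iostream>
#include "NvInfer.h"

// Minimal logger the TensorRT builder requires.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
};

int main() {
    Logger logger;
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);

    // Query platform capabilities before requesting a precision mode.
    std::cout << std::boolalpha
              << "fast FP16: " << builder->platformHasFastFp16() << "\n"
              << "fast INT8: " << builder->platformHasFastInt8() << std::endl;

    builder->destroy();  // TensorRT 5.x-era API; newer releases allow plain delete
    return 0;
}
```

If `platformHasFastInt8()` reports false (as expected on TX2, since it only has fast FP16), requesting INT8 gains nothing and can end up slower than FP16, which would be consistent with the timings above.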
