Thanks for your work!
But I'm confused about why the INT8 model performs worse on the Jetson TX2.
The inference time of the FP32 416 model is about 250 ms and the FP16 416 model is about 200 ms, but the INT8 model takes about 300 ms.
I want to know why the INT8 model works well on x86 but not on the TX2.
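One thing worth checking is whether the TX2 actually has a hardware-accelerated INT8 path at all; if it doesn't, INT8 kernels can fall back to slower code and end up behind FP16. Below is a minimal sketch (not from this repo) that queries TensorRT's platform capability flags, assuming the TensorRT Python bindings are available on the device:

```python
# Minimal sketch: query whether the current GPU has fast FP16 / INT8 paths.
# On a device where platform_has_fast_int8 is False, INT8 engines can run
# slower than FP16 ones, which would match the numbers reported above.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

print("fast FP16 supported:", builder.platform_has_fast_fp16)
print("fast INT8 supported:", builder.platform_has_fast_int8)
```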
I also tested the yolo3-416 (FP16) speed on the TX2; it's about 211 ms. The same config runs at about 14 ms per image on my GTX 1060. Have you tested tiny-yolo3-trt performance on the TX2?
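For comparing these numbers fairly, it helps to time many iterations after a warm-up so one-time allocations and context creation are excluded. A generic timing helper like the sketch below could be used; `run_inference` is a hypothetical stand-in for whatever single-image inference call the repo exposes:

```python
# Hypothetical timing helper: average per-call latency of any inference callable.
import time

def benchmark(run_inference, warmup=10, iters=100):
    for _ in range(warmup):           # warm-up runs, excluded from timing
        run_inference()
    start = time.perf_counter()
    for _ in range(iters):
        run_inference()
    return (time.perf_counter() - start) * 1000.0 / iters  # ms per call
```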