
The model works fine in fp32 and fp16, but accuracy drops a lot under int8. Is this normal? #21

Open
pjpanadas opened this issue Oct 19, 2023 · 2 comments


@pjpanadas

fp32 and fp16 give basically identical results, around 120 detection boxes. With int8 I only get about 10 detection boxes.

Nothing else was changed; the only change was converting to an int8 engine on the Orin board, using --best.
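For what it's worth, trtexec's --best flag only enables all precisions (fp16 and int8 together) and lets the builder pick the fastest tactic per layer; if no calibration cache is supplied, the int8 scales are not derived from real data and are meant for performance benchmarking only, which would explain an accuracy collapse like this. A minimal sketch of what --best amounts to in the builder API (assuming the TensorRT 8.x C++ API):

```cpp
#include <NvInfer.h>

// Sketch: roughly what trtexec --best enables via the builder config.
// Without setInt8Calibrator() or explicit per-tensor dynamic ranges,
// the int8 scales do not come from real data, so accuracy is undefined.
void enableBestPrecision(nvinfer1::IBuilderConfig* config) {
    config->setFlag(nvinfer1::BuilderFlag::kFP16);  // allow fp16 kernels
    config->setFlag(nvinfer1::BuilderFlag::kINT8);  // allow int8 kernels
}
```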

@zmmhz commented Dec 14, 2023

Hi, when you converted the model, did you calibrate it with your own dataset?
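For anyone hitting the same drop: int8 accuracy in TensorRT generally depends on calibrating with representative data. A minimal sketch of such a calibrator follows (assuming the TensorRT 8.x C++ API; the EntropyCalibrator name, the dummy batch loader, and the single-input assumption are illustrative placeholders, not code from this repo):

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iterator>
#include <vector>

// Minimal sketch of an int8 entropy calibrator (TensorRT 8.x API).
// A real calibrator would stream preprocessed samples from your own
// dataset instead of the dummy batches produced by loadNextBatch().
class EntropyCalibrator : public nvinfer1::IInt8EntropyCalibrator2 {
public:
    EntropyCalibrator(int32_t batchSize, size_t inputBytes, const char* cacheFile)
        : mBatchSize(batchSize), mInputBytes(inputBytes), mCacheFile(cacheFile) {
        cudaMalloc(&mDeviceInput, mInputBytes);
    }
    ~EntropyCalibrator() override { cudaFree(mDeviceInput); }

    int32_t getBatchSize() const noexcept override { return mBatchSize; }

    bool getBatch(void* bindings[], const char* /*names*/[],
                  int32_t /*nbBindings*/) noexcept override {
        std::vector<float> host;
        if (!loadNextBatch(host)) {
            return false;  // no more data: calibration finishes here
        }
        cudaMemcpy(mDeviceInput, host.data(), mInputBytes, cudaMemcpyHostToDevice);
        bindings[0] = mDeviceInput;  // assumes a single input binding
        return true;
    }

    // Reuse a cache from a previous run so calibration is not repeated.
    const void* readCalibrationCache(size_t& length) noexcept override {
        mCache.clear();
        std::ifstream in(mCacheFile, std::ios::binary);
        if (in) {
            mCache.assign(std::istreambuf_iterator<char>(in),
                          std::istreambuf_iterator<char>());
        }
        length = mCache.size();
        return mCache.empty() ? nullptr : mCache.data();
    }

    void writeCalibrationCache(const void* ptr, size_t length) noexcept override {
        std::ofstream out(mCacheFile, std::ios::binary);
        out.write(static_cast<const char*>(ptr), length);
    }

private:
    // Hypothetical loader: a real one would fill `host` with one
    // preprocessed batch from your dataset; this stub emits dummy
    // data and stops after mMaxBatches batches.
    bool loadNextBatch(std::vector<float>& host) {
        if (mBatchCount++ >= mMaxBatches) return false;
        host.assign(mInputBytes / sizeof(float), 0.5f);
        return true;
    }

    int32_t mBatchSize;
    size_t mInputBytes;
    const char* mCacheFile;
    void* mDeviceInput{nullptr};
    std::vector<char> mCache;
    int32_t mBatchCount{0};
    int32_t mMaxBatches{100};
};
```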

@vilon888

> fp32 and fp16 give basically identical results, around 120 detection boxes. With int8 I only get about 10 detection boxes.
>
> Nothing else was changed; the only change was converting to an int8 engine on the Orin board, using --best.

@pjpanadas Hi, did you convert the bevdet_one_lt_d.onnx model to int8 on the Orin board? Did you use the trtexec command, or did you modify the export.cu program to run calibration? On my side, converting to int8 reports these errors:
UNKNOWN: *************** Autotuning format combination: Int8(27,1,1,1) -> Int8(1024,1,1,1) ***************
UNKNOWN: Deleting timing cache: 6080 entries, served 19730 hits since creation.
ERROR: 2: [weightConvertors.cpp::quantizeBiasCommon::337] Error Code 2: Internal Error (Assertion getter(i) != 0 failed. )
ERROR: 2: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
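The quantizeBiasCommon assertion (getter(i) != 0) appears to mean a quantization scale of zero was encountered while quantizing bias terms, which is commonly triggered by missing or degenerate calibration data. If you go the modified-export.cu route, attaching a calibrator looks roughly like this (assuming the TensorRT 8.x API; configureInt8 is a hypothetical helper name, and the calibrator class is the sketch above):

```cpp
#include <NvInfer.h>

// Sketch: wiring an int8 calibrator into the builder config when
// building the engine from C++ rather than through trtexec.
void configureInt8(nvinfer1::IBuilderConfig* config,
                   nvinfer1::IInt8Calibrator* calibrator) {
    config->setFlag(nvinfer1::BuilderFlag::kINT8);
    config->setFlag(nvinfer1::BuilderFlag::kFP16);  // keep an fp16 fallback
    config->setInt8Calibrator(calibrator);          // scales from real data
}
```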
