FP32 and FP16 give essentially the same results, around 120 detection boxes; with INT8, only about 10 boxes are detected.
Nothing else was changed — the only difference is that the engine was converted to INT8 on the Orin board, using --best.
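One thing worth ruling out: according to the trtexec documentation, when INT8 is enabled through --int8 or --best without supplying a calibration cache (and the ONNX carries no Q/DQ nodes), trtexec builds the engine with placeholder dynamic ranges that are intended only for performance benchmarking, not for accuracy. If no calibration data was provided on the Orin side, that by itself can explain the drop from ~120 boxes to ~10; a cache produced from representative samples can be passed via --calib=<cache file> together with --int8.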
Hi, when you converted the model, did you calibrate it with your own dataset?
@pjpanadas Hi, did you convert bevdet_one_lt_d.onnx to an INT8 engine on the Orin board? Did you use the trtexec command, or did you modify the export.cu program to run calibration? On my side the INT8 conversion fails with the following errors:

```
UNKNOWN: *************** Autotuning format combination: Int8(27,1,1,1) -> Int8(1024,1,1,1) ***************
UNKNOWN: Deleting timing cache: 6080 entries, served 19730 hits since creation.
ERROR: 2: [weightConvertors.cpp::quantizeBiasCommon::337] Error Code 2: Internal Error (Assertion getter(i) != 0 failed.)
ERROR: 2: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed.)
```
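For anyone hitting the same wall: the quantizeBiasCommon assertion (getter(i) != 0) is typically reported when some tensor ends up with a zero quantization scale during calibration, for example when the calibration batches are too few or unrepresentative of the deployment data. Below is a minimal sketch of a dataset-driven calibrator using the TensorRT 8.x C++ API. It is not the repo's export.cu: loadCalibrationBatch, the calib_<idx>.bin file naming, and the single input binding are hypothetical simplifications (bevdet_one_lt_d.onnx has multiple inputs, each needing its own device buffer), so treat it as a starting point under those assumptions.

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Minimal dataset-driven INT8 entropy calibrator (TensorRT 8.x C++ API).
class BevCalibrator : public nvinfer1::IInt8EntropyCalibrator2
{
public:
    BevCalibrator(int32_t nbBatches, size_t inputVolume, std::string cacheFile)
        : mNbBatches(nbBatches), mVolume(inputVolume), mCacheFile(std::move(cacheFile))
    {
        cudaMalloc(&mDeviceInput, mVolume * sizeof(float));
    }
    ~BevCalibrator() override { cudaFree(mDeviceInput); }

    int32_t getBatchSize() const noexcept override { return 1; }

    bool getBatch(void* bindings[], char const* names[], int32_t nbBindings) noexcept override
    {
        if (mCurBatch >= mNbBatches) return false;   // calibration finished
        std::vector<float> host(mVolume);
        if (!loadCalibrationBatch(mCurBatch, host.data())) return false;
        cudaMemcpy(mDeviceInput, host.data(), mVolume * sizeof(float), cudaMemcpyHostToDevice);
        bindings[0] = mDeviceInput;                  // single-input simplification
        ++mCurBatch;
        return true;
    }

    void const* readCalibrationCache(size_t& length) noexcept override
    {
        // Reuse an existing cache so repeated engine builds skip calibration.
        std::ifstream in(mCacheFile, std::ios::binary);
        mCache.assign(std::istreambuf_iterator<char>(in), std::istreambuf_iterator<char>{});
        length = mCache.size();
        return mCache.empty() ? nullptr : mCache.data();
    }

    void writeCalibrationCache(void const* cache, size_t length) noexcept override
    {
        std::ofstream(mCacheFile, std::ios::binary)
            .write(static_cast<char const*>(cache), length);
    }

private:
    // Hypothetical loader: reads one preprocessed sample stored as raw float32
    // in calib_<idx>.bin; a real version must apply BEVDet's own preprocessing.
    bool loadCalibrationBatch(int32_t idx, float* dst)
    {
        std::ifstream f("calib_" + std::to_string(idx) + ".bin", std::ios::binary);
        f.read(reinterpret_cast<char*>(dst), mVolume * sizeof(float));
        return static_cast<bool>(f);
    }

    int32_t mNbBatches{0}, mCurBatch{0};
    size_t mVolume{0};
    std::string mCacheFile;
    void* mDeviceInput{nullptr};
    std::vector<char> mCache;
};
```

To use it, set nvinfer1::BuilderFlag::kINT8 on the builder config and register the calibrator with config->setInt8Calibrator(&calibrator) before building; once the cache file exists it can also be reused with trtexec via --calib. If the assertion persists even with a proper calibration dataset, it may instead point at an all-zero weight channel in the ONNX that some TensorRT versions fail to quantize, which would be a question for the TensorRT side rather than this repo.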