I'd like to use something implemented with TensorRT (C++) for this. Has anyone implemented it?
You need to convert the model first to be able to build the TensorRT engines. Several conversion paths are available in the TensorRT framework; for more info, please refer to the documentation at the following link: https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#conversion
IMO the easiest conversion path is to export the model to an ONNX graph, then use the exported ONNX file to build the TensorRT engine.
Others are having problems exporting the model to ONNX using `torch.onnx.export` (see the related issue: #79).
Maybe using ONNX opset 20 will work, as suggested here: pytorch/pytorch#100790 (comment)
If you successfully build the TensorRT engine, then you can load it and use it in C++ or Python.
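For the C++ side, loading a prebuilt engine looks roughly like the sketch below. This is a minimal illustration, not code from this project; the file name `model.engine` is a placeholder and error handling is deliberately thin:

```cpp
// Hypothetical sketch: deserializing a TensorRT engine in C++.
#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <memory>
#include <vector>

// TensorRT requires an ILogger implementation to be supplied.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
};

int main() {
    // Read the serialized engine produced earlier (e.g. by trtexec).
    std::ifstream file("model.engine", std::ios::binary | std::ios::ate);
    if (!file) { std::cerr << "engine file not found\n"; return 1; }
    const auto size = file.tellg();
    file.seekg(0);
    std::vector<char> blob(size);
    file.read(blob.data(), size);

    Logger logger;
    auto runtime = std::unique_ptr<nvinfer1::IRuntime>(
        nvinfer1::createInferRuntime(logger));
    auto engine = std::unique_ptr<nvinfer1::ICudaEngine>(
        runtime->deserializeCudaEngine(blob.data(), blob.size()));
    if (!engine) { std::cerr << "deserialization failed\n"; return 1; }

    // An execution context holds per-inference state; after binding the
    // input/output device buffers, inference is enqueued on it.
    auto context = std::unique_ptr<nvinfer1::IExecutionContext>(
        engine->createExecutionContext());
    return context ? 0 : 1;
}
```

The same engine file can equally be loaded from Python via the `tensorrt` module's `Runtime.deserialize_cuda_engine`.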