To Reproduce

Steps to reproduce the behavior:
```python
import torch
import torch_tensorrt as torch_trt
from torchvision import models

# Input shape is not shown in the original report; a standard ImageNet-sized
# input for ResNet-101 is assumed here.
inputs = [torch.randn(1, 3, 224, 224).to("cuda")]

model = models.resnet101(pretrained=False).eval().to("cuda")
exp_program = torch.export.export(model, tuple(inputs))

enabled_precisions = {torch.float}
debug = False
workspace_size = 20 << 30
min_block_size = 0
use_python_runtime = False
torch_executed_ops = {}

trt_gm = torch_trt.dynamo.compile(
    exp_program,
    tuple(inputs),
    use_python_runtime=use_python_runtime,
    enabled_precisions=enabled_precisions,
    debug=debug,
    min_block_size=min_block_size,
    torch_executed_ops=torch_executed_ops,
    make_refitable=True,
)  # Output is a torch.fx.GraphModule

expected_outputs, compiled_outputs = model(*inputs), trt_gm(*inputs)
for expected_output, compiled_output in zip(expected_outputs, compiled_outputs):
    assert torch.allclose(
        expected_output, compiled_output, 1e-2, 1e-2
    ), "Compilation result is not correct. Compilation failed"
print("Compilation successful!")
```
Expected behavior

The error should be smaller.
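For reference, the assertion in the reproduction script passes rtol=1e-2 and atol=1e-2 positionally to torch.allclose. A minimal pure-Python sketch of that elementwise criterion (allclose_scalar is a hypothetical helper, not part of any library):

```python
def allclose_scalar(actual, expected, rtol=1e-2, atol=1e-2):
    # Mirrors torch.allclose's elementwise criterion:
    # |actual - expected| <= atol + rtol * |expected|
    return abs(actual - expected) <= atol + rtol * abs(expected)

# With rtol = atol = 1e-2, a reference value of 1.0 may deviate by up to 0.02:
print(allclose_scalar(1.019, 1.0))  # True
print(allclose_scalar(1.021, 1.0))  # False
```

So a failure here means at least one output element deviates from the eager result by more than 0.01 + 0.01 times its magnitude, a fairly loose bound for FP32 inference.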
Environment

Build information about Torch-TensorRT can be found by turning on debug messages.

How you installed PyTorch (conda, pip, libtorch, source): source
Not sure about the actual culprit, but using resnet101(pretrained=True) instead of resnet101(pretrained=False) does not exhibit the accuracy issue.
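If it helps triage, reporting the actual magnitude of the mismatch (rather than just the failed allclose) might narrow this down. A minimal sketch using a hypothetical max_abs_error helper; for real tensors, torch.max(torch.abs(expected - actual)) serves the same purpose:

```python
def max_abs_error(expected, actual):
    # Largest elementwise absolute difference between two flat sequences.
    return max(abs(e - a) for e, a in zip(expected, actual))

# Example with two small output vectors:
err = max_abs_error([0.10, 0.25, 0.65], [0.11, 0.24, 0.65])
print(round(err, 6))  # 0.01
```

Comparing that number between the pretrained=True and pretrained=False runs would show whether the randomly initialized weights merely amplify an error that is present in both cases.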