Fix upsample converter not properly registered #2683
Conversation
Thanks for the analysis and for pointing out the above! I looked at it, and it looks like in the above case the AOT trace is returning the decomposition for
After torch.export or the AOT trace, the graph decomposes into a big graph.
As far as I understand the
@apbose - does the decomposition into that large set of operators you showed still occur if we remove the following two lines (but don't add anything back): TensorRT/py/torch_tensorrt/dynamo/lowering/_decomposition_groups.py, lines 160 to 161 at ad74a73?
@gs-olive, yes, the above operation decomposes into the large set of ops when those two lines are commented out.
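As a rough illustration of the exchange above (a plain-Python mock with made-up names, not the actual Torch-TensorRT API): commenting out an op's entry in the decomposition table is equivalent to filtering it out before lowering, so the op is left intact for a registered converter to claim.

```python
# Plain-Python mock (NOT the real Torch-TensorRT API) of how a lowering
# stage might build its decomposition table: start from a full map of
# aten ops to decomposition routines, then drop the ops that a
# registered converter should handle natively.
FULL_DECOMPOSITIONS = {
    "aten.upsample_bilinear2d.default": "decompose_to_index_ops",
    "aten.upsample_bilinear2d.vec": "decompose_to_index_ops",
    "aten.hardswish.default": "decompose_to_mul_clamp",
}

def get_decompositions(disabled):
    """Return the decomposition table minus any op in the disabled set."""
    return {op: fn for op, fn in FULL_DECOMPOSITIONS.items() if op not in disabled}

# Removing (or commenting out) the two upsample entries amounts to
# disabling them here, so they survive lowering undecomposed:
disabled = {
    "aten.upsample_bilinear2d.default",
    "aten.upsample_bilinear2d.vec",
}
table = get_decompositions(disabled)
print(sorted(table))  # only the ops that will still be decomposed
```

This only models the table-filtering logic being discussed; the real project keeps such entries in _decomposition_groups.py.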
Description
Partially addresses #2665
Even though the operator is properly registered (with #2681 applied as well), it is still decomposed into lower-level operators instead of being converted by this converter, just as in #2665 (comment). Adding aten.upsample_bilinear2d.default and aten.upsample_bilinear2d.vec to torch_disabled_decompositions doesn't help. Compiling the model inside a with torch.inference_mode() block also doesn't help. In the end, I found that I had to remove these two lines and this line in PyTorch to bypass the decomposition; only then does this converter finally work.

Type of change
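As a rough sanity check of the bypass described in the description (using made-up node-target strings, not the actual FX graph API), one can scan the lowered graph's call targets to see whether the original op survived or was decomposed away:

```python
# Hypothetical check: represent a lowered graph as a list of call-target
# names and ask whether the converter-eligible op is still present.
def op_survives(graph_targets, op):
    """True if the op was not decomposed away during lowering."""
    return op in graph_targets

# If the decomposition still runs, upsample is rewritten into index math
# (illustrative op names only):
decomposed = ["aten.arange.default", "aten.index.Tensor", "aten.mul.Tensor"]

# If the decomposition is bypassed, the original op remains in the graph
# and the registered converter can claim it:
bypassed = ["aten.upsample_bilinear2d.default"]

print(op_survives(decomposed, "aten.upsample_bilinear2d.default"))  # False
print(op_survives(bypassed, "aten.upsample_bilinear2d.default"))    # True
```

In the real workflow one would inspect node.target over the exported program's graph nodes instead of a list of strings.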
Checklist: