
✨[Feature] Preserve Higher-Level Operators in Torch-TRT Dynamo Paths #2729

Open

gs-olive opened this issue Apr 5, 2024 · 1 comment
Labels: feature request (New feature or request)

gs-olive (Collaborator) commented Apr 5, 2024

Is your feature request related to a problem? Please describe.
In a few cases, torch.nn.Module operations are decomposed by AOT further than desired. For instance, see #2683 (comment): there, torch.nn.functional.interpolate should either be decomposed only as far as aten.upsample_*, or left as-is so it can be converted directly to TRT.
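
For reference, a minimal sketch of where this surfaces (the `Upsample` module, input shapes, and compile settings are illustrative, not taken from #2683): a module calling torch.nn.functional.interpolate compiled through the Dynamo path.

```python
import torch
import torch_tensorrt

class Upsample(torch.nn.Module):
    def forward(self, x):
        # Ideally this lowers to aten.upsample_nearest2d (or is converted
        # directly), rather than being broken into lower-level aten ops by AOT.
        return torch.nn.functional.interpolate(x, scale_factor=2, mode="nearest")

model = Upsample().eval().cuda()
inputs = [torch.randn(1, 3, 32, 32, device="cuda")]
trt_model = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)
```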

Describe the solution you'd like
Use a pre-AOT pass to replace such nn operators with a custom op, or explicitly exclude these operators from AOT decomposition (it needs investigation whether this is possible for both Dynamo paths). See the related issues and work below.
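
As an illustration of the second option only (not a working patch; the exact hook point inside the Dynamo paths is the part that needs investigation), a sketch that filters the upsample entries out of the decomposition table before it is handed to AOT. The op set here is illustrative; the real list would come from the registered converters.

```python
import torch
from torch._decomp import core_aten_decompositions

# Higher-level ops we want to keep intact so the TRT converters can claim them.
# (Illustrative set; assumed for this sketch.)
DO_NOT_DECOMPOSE = {
    torch.ops.aten.upsample_nearest2d.default,
    torch.ops.aten.upsample_bilinear2d.default,
}

def filtered_decompositions():
    # Start from the core ATen decomposition table and drop the entries that
    # should survive AOT, so the corresponding nodes reach conversion as-is.
    decomps = dict(core_aten_decompositions())
    for op in DO_NOT_DECOMPOSE:
        decomps.pop(op, None)
    return decomps
```

The custom-op route would instead be a pre-AOT FX pass that swaps the torch.nn.functional.interpolate call for an op registered via torch.library, which then gets its own converter.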

Additional context
Related to #1894, #1979

HolyWu (Contributor) commented Apr 28, 2024

Relevant issue upstream: pytorch/pytorch#116684
