Milestone2.2: Optimize transposes in XNNPACK partition by removing redundant to_copy ops #11316


Open · leafs1 wants to merge 4 commits into main from milestone2.2

Conversation

leafs1 (Contributor) commented on Jun 3, 2025

Summary

Optimize transposes in the XNNPACK partition by removing redundant to_copy ops.

Test plan

Constructed graphs containing multiple redundant to_copy ops and asserted that the pass removes them.
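
A minimal sketch of this test strategy, for illustration only: the module below produces a pair of opposite `_to_copy` ops after export, and the helper counts them so a test can assert the count drops once the pass runs. The harness details are assumptions, not the PR's actual test code.

```python
import torch
from executorch.exir.dialects._ops import ops as exir_ops


class RedundantToCopyModule(torch.nn.Module):
    """Round-trips through channels_last and back, yielding two
    opposite _to_copy nodes in the exported edge graph."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = x.to(memory_format=torch.channels_last)
        y = y.to(memory_format=torch.contiguous_format)
        return y + 1


def count_to_copy(gm: torch.fx.GraphModule) -> int:
    # Count edge-dialect _to_copy nodes; a test would assert this
    # reaches zero after the redundant pair is removed.
    return sum(
        1
        for n in gm.graph.nodes
        if n.target == exir_ops.edge.aten._to_copy.default
    )
```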

leafs1 requested review from digantdesai and mcr229 as code owners on June 3, 2025 at 17:28
pytorch-bot commented on Jun 3, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/11316

Note: Links to docs will display an error until the docs builds have been completed.

❌ 4 New Failures

As of commit 6a47b46 with merge base 911fb0b:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label on Jun 3, 2025
leafs1 force-pushed the milestone2.2 branch 3 times, most recently from 15d05eb to 2d02c41, on June 3, 2025 at 19:09
leafs1 (Contributor, Author) commented on Jun 4, 2025

@pytorchbot label "release notes: none"

pytorch-bot added the release notes: none label on Jun 4, 2025
leafs1 changed the title from Milestone2.2 to "Milestone2.2: Optimize transposes in XNNPACK partition by removing redundant to_copy ops" on Jun 5, 2025
```python
def input_dim_order(
    self, input_node: torch.fx.Node, input_order: InputDimOrder
) -> bool:
    if input_node.name == "x":
```
Member: Can you replace this with checking if the input_node is a placeholder?
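
A sketch of what the reviewer is suggesting, assuming standard torch.fx node semantics (graph inputs are nodes with op == "placeholder"); the body shown is illustrative, not the PR's final code:

```python
def input_dim_order(
    self, input_node: torch.fx.Node, input_order: InputDimOrder
) -> bool:
    # Graph inputs in torch.fx always have op == "placeholder", so this
    # check works for any graph input, not just one named "x".
    if input_node.op == "placeholder":
        ...
```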

```python
from executorch.exir.passes.memory_format_ops_pass import DimOrderOpsRevertPass


class TestChannelsLastTaggedReshapePass(unittest.TestCase):
```
Member: Can we add a test that includes implicitly created dim order conversions? This will check that both user-created and pass-created converts get optimized out correctly. I expect it will work, but it would be nice to cover it since this is a common use case.

Maybe something like:

```
to_channels_last
upsample_nearest2d (not partitioned)
to_channels_first
conv
```
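
A hedged sketch of a module exercising that sequence (the module, shapes, and channel counts are assumptions for illustration):

```python
import torch


class ImplicitDimOrderModule(torch.nn.Module):
    """User-created channels_last round trip around an op that XNNPACK
    does not partition (upsample_nearest2d), followed by a conv that
    makes the pass insert its own dim order conversions."""

    def __init__(self) -> None:
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = x.to(memory_format=torch.channels_last)  # to_channels_last
        # upsample_nearest2d (not partitioned by XNNPACK)
        y = torch.nn.functional.interpolate(y, scale_factor=2.0, mode="nearest")
        y = y.to(memory_format=torch.contiguous_format)  # to_channels_first
        return self.conv(y)  # conv
```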


```python
# If we encounter a to_copy node, check if it is preceded by an opposite to_copy node
if node.target == exir_ops.edge.aten._to_copy.default:
    if prev and ChannelsLastTaggedReshapePass.is_nchw_node(
```
Member: I think there may be cases where the node visited previously in iteration order is not actually the first arg, especially in more complex graphs. Can you try replacing prev with node.args[0]? That should be sound in all cases.
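
A sketch of the suggested change, reconstructed from the snippet above (the surrounding pass logic and the truncated condition are assumptions):

```python
# Use the to_copy node's actual data input rather than the node visited
# immediately before it in iteration order; in complex graphs the two
# can differ.
if node.target == exir_ops.edge.aten._to_copy.default:
    input_node = node.args[0]
    if ChannelsLastTaggedReshapePass.is_nchw_node(input_node):
        ...
```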

Labels: CLA Signed, release notes: none

3 participants