
🐛 [Bug] require_full_compilation never reaches partitioner #3171

Open

dgcnz opened this issue Sep 21, 2024 · 0 comments

dgcnz commented Sep 21, 2024

I'm not sure if this is intended, but it seems odd that both the fast and global partitioners contain logic that checks `require_full_compilation` (see snippets 1 and 2), yet `compile_module` never passes that flag to either partitioning call (see snippet 3). As a result, the partitioners always run with their default value of the flag.

Snippets

Snippet 1 (fast partitioner):

```python
if not full_support and self.require_full_compilation:
    raise AssertionError(
        "require_full_compilation=True was specified, but model is not fully supported"
    )
```

Snippet 2 (global partitioner):

```python
if not full_support and self.require_full_compilation:
    raise AssertionError(
        "require_full_compilation=True was specified, but model is not fully supported"
    )
```

Snippet 3 (`compile_module`):

```python
if settings.use_fast_partitioner:
    try:
        logger.info("Partitioning the graph via the fast partitioner")
        partitioned_module, supported_ops = partitioning.fast_partition(
            gm,
            verbose=settings.debug,
            min_block_size=settings.min_block_size,
            torch_executed_ops=settings.torch_executed_ops,
        )
    except torch.fx.passes.splitter_base.FxNetSplitterInternalError:
        logger.error(
            "Partitioning failed on the subgraph with fast partition. See trace above. "
            "Retrying with global partition.",
            exc_info=True,
        )
        fast_partitioner_failed = True
        settings.use_fast_partitioner = False
```
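For illustration, here is a minimal, self-contained sketch (with hypothetical `Partitioner` and `compile_module` names, not the actual Torch-TensorRT code) of the pattern being reported: the partitioner accepts a `require_full_compilation` flag and guards on it, but the caller never forwards the user's setting, so the default of `False` always wins and the `AssertionError` in snippets 1 and 2 is unreachable.

```python
class Partitioner:
    """Hypothetical stand-in for the fast/global partitioners."""

    def __init__(self, require_full_compilation: bool = False):
        self.require_full_compilation = require_full_compilation

    def partition(self, full_support: bool) -> str:
        # Mirrors the guard in snippets 1 and 2.
        if not full_support and self.require_full_compilation:
            raise AssertionError(
                "require_full_compilation=True was specified, "
                "but model is not fully supported"
            )
        return "partitioned"


def compile_module(settings: dict) -> str:
    # Mirrors snippet 3: the settings object carries the user's flag,
    # but it is never passed to the partitioner, so the partitioner's
    # default (False) is always used.
    partitioner = Partitioner()  # settings["require_full_compilation"] is dropped
    return partitioner.partition(full_support=False)


# Even with the flag set and an unsupported model, no error is raised:
result = compile_module({"require_full_compilation": True})
print(result)  # -> "partitioned", although an AssertionError was expected
```

Forwarding `require_full_compilation=settings["require_full_compilation"]` into the `Partitioner` constructor would make the guard effective; presumably the real fix is the analogous change in `compile_module`.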

@dgcnz dgcnz changed the title require_full_compilation never reaches partitioner 🐛 [Bug] require_full_compilation never reaches partitioner Sep 21, 2024