
Fix nested StableHLO composite regions #9385


Open

Carlomus wants to merge 5 commits into master

Conversation

Carlomus

Fix export failure when StableHLO regions are nested (e.g. SubModule inside Model), follow-up to issue #6978

This PR adds support for nested StableHLO composite regions. Currently, trying to export a model along these lines:

import torch
from torch_xla.stablehlo import exported_program_to_stablehlo
from torch_xla.experimental.mark_pattern_utils import StableHLOCompositeBuilder

class SubModule(torch.nn.Module):

    def __init__(self):
        super().__init__()

    def forward(self, x, y):
        builder = StableHLOCompositeBuilder("abc.SubModule")
        x, y = builder.mark_inputs(x, y)
        out = x + y
        out = builder.mark_outputs(out)
        return out


class Model(torch.nn.Module):

    def __init__(self):
        super().__init__()
        self.submodule = SubModule()

    def forward(self, x, y):
        builder = StableHLOCompositeBuilder("abc.Model")
        x, y = builder.mark_inputs(x, y)
        a = x + y
        b = x - y
        c = self.submodule(a, b)
        a, b, c = builder.mark_outputs(a, b, c)
        return a + b + c 

sample_input = (torch.randn(1, 1, 32, 32), torch.randn(1, 1, 32, 32))
exported = torch.export.export(Model(), sample_input)
stablehlo_program = exported_program_to_stablehlo(exported)

raises an error. With this PR, the same export produces correct MLIR:

module @IrToHlo.32 attributes {mhlo.cross_program_prefetches = [], mhlo.input_output_alias = [], mhlo.is_dynamic = false, mhlo.use_auto_spmd_partitioning = false} {
  func.func @main(%arg0: tensor<1x1x32x32xf32>, %arg1: tensor<1x1x32x32xf32>) -> tensor<1x1x32x32xf32> {
    %0:3 = stablehlo.composite "abc.Model" %arg1, %arg0 {decomposition = @abc.Model.impl} : (tensor<1x1x32x32xf32>, tensor<1x1x32x32xf32>) -> (tensor<1x1x32x32xf32>, tensor<1x1x32x32xf32>, tensor<1x1x32x32xf32>)
    %1 = stablehlo.add %0#0, %0#1 : tensor<1x1x32x32xf32>
    %2 = stablehlo.add %1, %0#2 : tensor<1x1x32x32xf32>
    return %2 : tensor<1x1x32x32xf32>
  }
  func.func private @abc.SubModule.impl(%arg0: tensor<1x1x32x32xf32>, %arg1: tensor<1x1x32x32xf32>) -> tensor<1x1x32x32xf32> {
    %0 = stablehlo.add %arg0, %arg1 : tensor<1x1x32x32xf32>
    return %0 : tensor<1x1x32x32xf32>
  }
  func.func private @abc.Model.impl(%arg0: tensor<1x1x32x32xf32>, %arg1: tensor<1x1x32x32xf32>) -> (tensor<1x1x32x32xf32>, tensor<1x1x32x32xf32>, tensor<1x1x32x32xf32>) {
    %0 = stablehlo.add %arg0, %arg1 : tensor<1x1x32x32xf32>
    %1 = stablehlo.subtract %arg0, %arg1 : tensor<1x1x32x32xf32>
    %2 = stablehlo.composite "abc.SubModule" %0, %1 {decomposition = @abc.SubModule.impl} : (tensor<1x1x32x32xf32>, tensor<1x1x32x32xf32>) -> tensor<1x1x32x32xf32>
    return %0, %1, %2 : tensor<1x1x32x32xf32>, tensor<1x1x32x32xf32>, tensor<1x1x32x32xf32>
  }
}
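Note how the nesting is preserved: the abc.SubModule composite call appears inside abc.Model.impl's decomposition, and both decomposition functions are emitted as private functions at module scope, mirroring the nesting of the two builders in the repro.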

Cause

  1. Boundary groups weren’t processed in topological order, so inner regions were wrapped after their parents.
  2. Nodes moved into a region weren’t pruned from the parent graph, leaving dangling ops.

Solution

  1. Region ordering: compute last_order for each boundary group and sort ascending before wrapping, so inner regions are wrapped before the regions that contain them.
  2. Dead-op cleanup: after moving ops into a region, delete ops that are trivially dead or composite calls that are no longer used (a sketch of both steps follows).
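
To make the two fixes concrete, here is a minimal, self-contained Python sketch. It is not the PR's actual code (the real pass operates on the lowered graph inside torch_xla); Op, wrap_group_as_composite, and graph_outputs below are hypothetical stand-ins for the structures in the real pass.

from dataclasses import dataclass, field

@dataclass(eq=False)  # eq=False: identity hashing, so Ops can key dicts/sets
class Op:
    name: str
    operands: list = field(default_factory=list)  # ops this op reads
    users: set = field(default_factory=set)       # ops that read this op

def wrap_group_as_composite(group):
    # Hypothetical stand-in: the real pass outlines `group` into a
    # decomposition function and replaces it with a stablehlo.composite call.
    ...

def wrap_and_prune(all_ops, boundary_groups, graph_outputs):
    # all_ops is the parent graph in topological order; each boundary group
    # is the list of ops between one builder's mark_inputs and mark_outputs.
    order = {op: i for i, op in enumerate(all_ops)}

    # Fix 1 (region ordering): sort groups by their last op's position,
    # ascending. A nested region ends before its enclosing region, so this
    # wraps inner composites before the composites that contain them.
    boundary_groups.sort(key=lambda g: max(order[op] for op in g))
    for group in boundary_groups:
        wrap_group_as_composite(group)

    # Fix 2 (dead-op cleanup): erase ops left dangling in the parent graph
    # after their bodies moved into a decomposition. Reverse topological
    # order ensures an op's users are erased before the op itself.
    for op in reversed(list(all_ops)):
        if not op.users and op not in graph_outputs:
            for operand in op.operands:
                operand.users.discard(op)
            all_ops.remove(op)

Sorting ascending by last_order is what guarantees inner-before-outer: in the repro above, SubModule's mark_outputs runs before Model's, so the abc.SubModule group has the smaller last_order and is wrapped first.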

@bhavya01 requested a review from lsy323 on June 24, 2025
Renamed _impl in test, so that the first impl is 'impl', the second is 'impl_0', and so on
@lsy323 (Collaborator) commented on Jul 1, 2025

Thank you for the fix! Pending on CI

@Carlomus (Author) commented on Jul 1, 2025

Hi, happy this was approved! Apparently the last check is failing with "Secret TORCH_XLA_BOT_TOKEN is required, but not provided while calling." I'm guessing that isn't something wrong on my end? If I need to change anything, let me know!
