This repository has been archived by the owner on Aug 7, 2024. It is now read-only.

Use foreach in sync_float8_scales #213

Open · wants to merge 1 commit into main from inductor_throws_error_with_foreach

Conversation

@drisspg (Contributor) commented on Feb 14, 2024

Summary

Based on PR #211.
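
For context, this presumably replaces the per-tensor Python loop in `sync_float8_scales` with `torch._foreach_*` ops, which batch the many tiny scalar updates into a few fused kernel launches. Below is a minimal sketch of that pattern; the buffer names and the simplified amax-to-scale formula are illustrative, not the actual float8_experimental code:

```python
import torch

# Illustrative buffers: one scalar amax and one scale per Float8 layer.
amaxes = [torch.rand(()) + 0.1 for _ in range(8)]
scales = [torch.empty(()) for _ in range(8)]

E4M3_MAX = 448.0  # largest representable float8_e4m3fn value

# Loop version: one tiny kernel launch per tensor.
for amax, scale in zip(amaxes, scales):
    scale.copy_(E4M3_MAX / torch.clamp(amax, min=1e-12))

# Foreach version: a handful of fused launches covering the whole list.
clamped = torch._foreach_maximum(amaxes, 1e-12)  # elementwise max with a scalar
new_scales = torch._foreach_mul(torch._foreach_reciprocal(clamped), E4M3_MAX)
torch._foreach_copy_(scales, new_scales)
```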

Making this update raises the following error in test/test_fsdp_compile.py when compiling the sync function:

    compiled_fn = create_aot_dispatcher_function(
  File "/home/drisspg/meta/pytorch/torch/_dynamo/utils.py", line 250, in time_wrapper
    r = func(*args, **kwargs)
  File "/home/drisspg/meta/pytorch/torch/_functorch/aot_autograd.py", line 604, in create_aot_dispatcher_function
    compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
  File "/home/drisspg/meta/pytorch/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 434, in aot_wrapper_dedupe
    return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
  File "/home/drisspg/meta/pytorch/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 639, in aot_wrapper_synthetic_base
    return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
  File "/home/drisspg/meta/pytorch/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 97, in aot_dispatch_base
    compiled_fw = compiler(fw_module, updated_flat_args)
  File "/home/drisspg/meta/pytorch/torch/_dynamo/utils.py", line 250, in time_wrapper
    r = func(*args, **kwargs)
  File "/home/drisspg/meta/pytorch/torch/_inductor/compile_fx.py", line 1249, in fw_compiler_base
    return inner_compile(
  File "/home/drisspg/meta/pytorch/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper
    inner_compiled_fn = compiler_fn(gm, example_inputs)
  File "/home/drisspg/meta/pytorch/torch/_inductor/debug.py", line 304, in inner
    return fn(*args, **kwargs)
  File "/home/drisspg/miniconda3/envs/dev/lib/python3.10/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/home/drisspg/miniconda3/envs/dev/lib/python3.10/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/home/drisspg/meta/pytorch/torch/_inductor/compile_fx.py", line 423, in compile_fx_inner
    compiled_graph = fx_codegen_and_compile(
  File "/home/drisspg/meta/pytorch/torch/_inductor/compile_fx.py", line 689, in fx_codegen_and_compile
    compiled_fn = graph.compile_to_fn()
  File "/home/drisspg/meta/pytorch/torch/_inductor/graph.py", line 1224, in compile_to_fn
    return self.compile_to_module().call
  File "/home/drisspg/meta/pytorch/torch/_dynamo/utils.py", line 250, in time_wrapper
    r = func(*args, **kwargs)
  File "/home/drisspg/meta/pytorch/torch/_inductor/graph.py", line 1176, in compile_to_module
    mod = PyCodeCache.load_by_key_path(
  File "/home/drisspg/meta/pytorch/torch/_inductor/codecache.py", line 2081, in load_by_key_path
    exec(code, mod.__dict__, mod.__dict__)
  File "/tmp/torchinductor_drisspg/75/c75wvi3focccss5zw3mvgbxdkclrrrxpdjz6gdsaunmilgxiquaw.py", line 198, in <module>
    async_compile.wait(globals())
  File "/home/drisspg/meta/pytorch/torch/_inductor/codecache.py", line 2621, in wait
    scope[key] = result.result()
  File "/home/drisspg/meta/pytorch/torch/_inductor/codecache.py", line 2428, in result
    self.future.result()
  File "/home/drisspg/miniconda3/envs/dev/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/home/drisspg/miniconda3/envs/dev/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
CompilationError: at 10:30:def triton_(in_ptr0, in_ptr1, in_ptr2, in_ptr3, out_ptr0, out_ptr1, out_ptr3, out_ptr4, out_ptr5, out_ptr7, out_ptr8, out_ptr9, out_ptr11):
    xpid = tl.program_id(0)
    XBLOCK: tl.constexpr = 1024
    if xpid >= 0 and xpid < 1:
        xpid_offset = xpid - 0
        xnumel = 1
        xoffset = xpid_offset * XBLOCK
        xindex = xoffset + tl.arange(0, XBLOCK)[:, None]
        xmask = xindex < xnumel
        rindex = tl.arange(0, RBLOCK)[None, :]
                              ^
NameError('RBLOCK is not defined')

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True
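
Reading the generated code above: this is Inductor's combined foreach kernel (note the `xpid` range dispatch), and the failing sub-kernel performs a reduction — it indexes with `rindex = tl.arange(0, RBLOCK)` but `RBLOCK` is never declared as a `tl.constexpr`, hence the `NameError`. A hypothetical minimal repro in the spirit of the failing test, mixing per-tensor reductions with a foreach write-back; not guaranteed to hit the exact codegen path above:

```python
import torch

@torch.compile(backend="inductor")
def sync(amax_buffers, weights):
    # Per-tensor reductions feeding a foreach write-back, roughly the
    # shape of the sync function that fails to compile.
    new_amaxes = [w.abs().max() for w in weights]
    torch._foreach_copy_(amax_buffers, new_amaxes)

weights = [torch.randn(16, 16, device="cuda") for _ in range(4)]
amaxes = [torch.zeros((), device="cuda") for _ in range(4)]
sync(amaxes, weights)
```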

@drisspg force-pushed the inductor_throws_error_with_foreach branch from f2074ce to 97a509e on February 15, 2024