Poor error trace - Error converting op #2349

Open
darylsew opened this issue Sep 26, 2024 · 2 comments
Labels
bug Unexpected behaviour that should be corrected (type)

Comments

@darylsew

🐞Describing the bug

  • Make sure to only create an issue here for bugs in the coremltools Python package. If this is a bug with the Core ML Framework or Xcode, please submit your bug here: https://developer.apple.com/bug-reporting/
  • Provide a clear and concise description of the bug.

Stack Trace

ERROR - converting 'callmethod' op (located at: '0'):

Converting PyTorch Frontend ==> MIL Ops:   2%|█▌                                                                                           | 857/50346 [00:00<00:15, 3185.51 ops/s]
Traceback (most recent call last):
  File "/Users/daryl/tangia/E2TTS/lucidrain-fork/export_model.py", line 217, in <module>
    model = coremltools.convert(
  File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/_converters_entry.py", line 635, in convert
    mlmodel = mil_convert(
  File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 188, in mil_convert
    return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
  File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 212, in _mil_convert
    proto, mil_program = mil_convert_to_proto(
  File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 288, in mil_convert_to_proto
    prog = frontend_converter(model, **kwargs)
  File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 108, in __call__
    return load(*args, **kwargs)
  File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 87, in load
    return _perform_torch_convert(converter, debug)
  File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 131, in _perform_torch_convert
    raise e
  File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 123, in _perform_torch_convert
    prog = converter.convert()
  File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 1293, in convert
    convert_nodes(self.context, self.graph, early_exit=not has_states)
  File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 92, in convert_nodes
    raise e     # re-raise exception
  File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 87, in convert_nodes
    convert_single_node(context, node)
  File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 117, in convert_single_node
    raise RuntimeError(
RuntimeError: PyTorch convert function for op 'callmethod' not implemented.

To Reproduce

  • Please add a minimal code example that can reproduce the error when running it.
 # model code:
 x = nn.Linear(4, 4)   # nn.Linear needs in/out feature sizes
 x(torch.randn(1, 4))  # calling the module directly, not x.forward(...)

This seems to happen any time we call a submodule. My real issue is that I cannot tell from the error message which line of Python code corresponds to the error. I think it goes away in this case if I replace x(...) with x.forward(..)

  • If the model conversion succeeds, but there is a numerical mismatch in predictions, please include the code used for comparisons.

System environment (please complete the following information):

  • coremltools version:
  • OS (e.g. MacOS version or Linux type):
  • Any other relevant version information (e.g. PyTorch or TensorFlow version):

Additional context

  • Add anything else about the problem here that you want to share.
@darylsew darylsew added the bug Unexpected behaviour that should be corrected (type) label Sep 26, 2024
@darylsew
Author

By cloning the repo and dropping in a pdb breakpoint, I was able to get the info I needed.

-> scope_names = node.get_scope_info()[0]
(Pdb) node
  %3934 = callmethod[name=forward](%proj.1, %input.19)

Two questions:

  1. Why wouldn't forward calls be supported? Am I expected to have my entire graph be flat, without any classes?
  2. Would there be any issue with having this exception in ops.py print str(node)? I might make a PR, as node.get_scope_info() is kinda useless: (Pdb) node.get_scope_info()[0] returns ['0', '3934'].
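To make question 2 concrete, here is a rough sketch of the kind of error message I have in mind. The Node class and convert_single_node function below are stand-ins for illustration, not the real coremltools torch frontend code:

```python
# Hypothetical sketch of the ops.py change question 2 proposes: include
# str(node) in the RuntimeError raised when an op has no convert function.
# Node and convert_single_node are made-up stand-ins, not coremltools code.

class Node:
    def __init__(self, kind, outputs, inputs):
        self.kind = kind        # e.g. "callmethod[name=forward]"
        self.outputs = outputs  # output value names
        self.inputs = inputs    # input value names

    def __str__(self):
        # Mimic the TorchScript IR rendering seen in the pdb session above.
        return "%{} = {}({})".format(
            ", %".join(self.outputs), self.kind, ", ".join(self.inputs)
        )

def convert_single_node(node, registry):
    convert_fn = registry.get(node.kind.split("[")[0])
    if convert_fn is None:
        # str(node) names the exact inputs/outputs involved, which the bare
        # scope info (['0', '3934']) does not.
        raise RuntimeError(
            "PyTorch convert function for op '{}' not implemented.\n"
            "Offending node: {}".format(node.kind, node)
        )
    return convert_fn(node)

node = Node("callmethod[name=forward]", ["3934"], ["%proj.1", "%input.19"])
try:
    convert_single_node(node, registry={})
except RuntimeError as err:
    msg = str(err)
print(msg)
```

With that change, the error trace would show which graph node failed instead of only the opaque scope indices.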

@TobyRoseman
Collaborator

I'm not understanding the issue here. Please add a complete example that reproduces the issue, including all necessary import statements and the call to coremltools.convert.
