`cumsum` op with CoreML backend fails export #6201
Labels: bug, module: coreml, partner: apple, triaged
🐛 Describe the bug
I'm currently working on creating an example for ExecuTorch with EfficientSAM. I've developed a runner that successfully exports the model in `.pte` format. However, I'm encountering a problem: there seems to be a mismatch in the number of arguments the `cumsum` operation expects on the CoreML side. The function is declared to expect 3 arguments, but only two are actually used. Consequently, I receive the error `ValueError: node aten_cumsum_default (cumsum) got 2 input(s), expected [3]`. When I manually adjust this line to `expected=2`, the export works fine. I created an issue mentioning this on the CoreML GitHub, but I received a response (here) that I was testing on a version of PyTorch that CoreML doesn't support. So I downgraded, which led to another issue: some functionality used in ExecuTorch requires PyTorch versions (2.5.0+) that CoreML does not support.

For now, I've "hacked" around it by replacing each call of `cumsum` with an equivalent reimplementation. This works, but the problem is still valid, since the op is no longer optimized. I understand this is more of a problem on the CoreML side, but I've already filed an issue there and want to check whether there is a better way to approach this.
Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241007
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.0.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: version 3.30.4
Libc version: N/A
Python version: 3.10.0 (default, Mar 3 2022, 03:54:28) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-15.0.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] executorch==0.5.0a0+cb3a546
[pip3] executorchcoreml==0.0.1
[pip3] numpy==1.21.3
[pip3] torch==2.6.0.dev20241007
[pip3] torchaudio==2.5.0.dev20241007
[pip3] torchsr==1.0.4
[pip3] torchvision==0.20.0.dev20241007
[conda] executorch 0.5.0a0+cb3a546 pypi_0 pypi
[conda] executorchcoreml 0.0.1 pypi_0 pypi
[conda] numpy 1.21.3 pypi_0 pypi
[conda] torch 2.6.0.dev20241007 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241007 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241007 pypi_0 pypi