cumsum op with CoreML backend fails export #6201

Closed
@msluszniak

Description

🐛 Describe the bug

I'm currently working on an example for ExecuTorch with EfficientSAM. I've developed a runner that successfully exports the model to the `.pte` format. However, I'm encountering a problem: there seems to be a mismatch in the number of arguments the `cumsum` operation expects on the CoreML side. The function is declared to expect 3 arguments, but only two are actually passed, so I receive the error `ValueError: node aten_cumsum_default (cumsum) got 2 input(s), expected [3]`. When I manually adjust this line to `expected=2`, the export works fine. I opened an issue mentioning this on the CoreML GitHub, but the response (here) was that I was testing on a version of PyTorch that CoreML doesn't support. So I downgraded it, but that led to another issue: some functionality used in ExecuTorch requires PyTorch versions (2.5.0+) that CoreML doesn't support.

For now, I've "hacked" around it by replacing each call to `cumsum` with the following code:

def vectorized_cumsum(input_tensor, dim):
    # Cumulative sum along `dim` without calling aten::cumsum:
    # add each slice in place to the previous, already-accumulated slice.
    output = input_tensor.clone()
    slices = [slice(None)] * input_tensor.dim()
    for i in range(1, input_tensor.size(dim)):
        slices[dim] = i
        minus_one_slices = slices.copy()
        minus_one_slices[dim] = i - 1
        output[tuple(slices)] += output[tuple(minus_one_slices)]
    return output
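As a quick sanity check (a sketch I used, not part of the export pipeline), the workaround can be compared element-for-element against `torch.cumsum`; the function is repeated here so the snippet runs standalone:

```python
import torch

def vectorized_cumsum(input_tensor, dim):
    # Same workaround as above: in-place sliced accumulation along `dim`.
    output = input_tensor.clone()
    slices = [slice(None)] * input_tensor.dim()
    for i in range(1, input_tensor.size(dim)):
        slices[dim] = i
        minus_one_slices = slices.copy()
        minus_one_slices[dim] = i - 1
        output[tuple(slices)] += output[tuple(minus_one_slices)]
    return output

x = torch.arange(12, dtype=torch.float32).reshape(3, 4)
# The workaround should match the built-in op along every dimension.
assert torch.equal(vectorized_cumsum(x, dim=0), torch.cumsum(x, dim=0))
assert torch.equal(vectorized_cumsum(x, dim=1), torch.cumsum(x, dim=1))
```

The results match, but the loop over `input_tensor.size(dim)` is why this is so much slower than the native op.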

So it works, but the problem remains valid, as this op is not optimized. I understand this is more of a problem on the CoreML side, but I've already filed an issue there, and I want to check whether there are better ways to approach it.

Versions

Collecting environment information...
PyTorch version: 2.6.0.dev20241007
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 15.0.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: version 3.30.4
Libc version: N/A

Python version: 3.10.0 (default, Mar 3 2022, 03:54:28) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-15.0.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Apple M3 Pro

Versions of relevant libraries:
[pip3] executorch==0.5.0a0+cb3a546
[pip3] executorchcoreml==0.0.1
[pip3] numpy==1.21.3
[pip3] torch==2.6.0.dev20241007
[pip3] torchaudio==2.5.0.dev20241007
[pip3] torchsr==1.0.4
[pip3] torchvision==0.20.0.dev20241007
[conda] executorch 0.5.0a0+cb3a546 pypi_0 pypi
[conda] executorchcoreml 0.0.1 pypi_0 pypi
[conda] numpy 1.21.3 pypi_0 pypi
[conda] torch 2.6.0.dev20241007 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241007 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241007 pypi_0 pypi

Metadata

Labels

module: coreml — Issues related to Apple's Core ML delegation and code under backends/apple/coreml/
partner: apple — For backend delegation, kernels, demo, etc. from the 3rd-party partner, Apple
triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module