Can it run on Apple Silicon by any chance? #101

Open
ahmetkca opened this issue Oct 30, 2024 · 1 comment
Labels
type: experiment One off experiment to test something

Comments

@ahmetkca

No description provided.

@ahmetkca ahmetkca added the type: experiment One off experiment to test something label Oct 30, 2024
@ahmetkca
Author

I tried setting the device to `mps`, and I am getting the following error:

NotImplementedError: Could not run 'aten::empty.memory_format' with arguments from the 'SparseMPS' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty.memory_format' is only available for these backends: [CPU, MPS, Meta, QuantizedCPU, QuantizedMeta, MkldnnCPU, SparseCPU, SparseMeta, SparseCsrCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

Both `torch.backends.mps.is_available()` and `torch.backends.mps.is_built()` return `True`:

Python 3.12.6 (main, Sep 19 2024, 00:27:59) [Clang 15.0.0 (clang-1500.1.0.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.backends.mps.is_available())
True
>>> print(torch.backends.mps.is_built())
True
>>>
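The dispatcher error names the `SparseMPS` backend, which suggests a sparse tensor is being created on the MPS device; sparse tensor ops are not implemented on that backend, even though MPS itself is available and built. One workaround is to keep sparse workloads on CPU and use MPS only for dense ones. A minimal sketch of that fallback policy (the `pick_device` helper and `uses_sparse` flag are hypothetical, not part of this project):

```python
def pick_device(mps_available: bool, mps_built: bool, uses_sparse: bool) -> str:
    """Choose a torch device string, falling back to CPU for sparse workloads.

    Hypothetical helper: sparse ops (the SparseMPS backend in the error
    above) are not implemented on MPS, so any code path that creates
    sparse tensors must stay on CPU.
    """
    if mps_available and mps_built and not uses_sparse:
        return "mps"
    return "cpu"


# With torch installed, this would be wired up roughly as:
#   import torch
#   device = pick_device(torch.backends.mps.is_available(),
#                        torch.backends.mps.is_built(),
#                        uses_sparse=True)  # this model hits SparseMPS
print(pick_device(True, True, True))   # sparse workload -> cpu
print(pick_device(True, True, False))  # dense workload -> mps
```

Alternatively, if the sparse tensors are small, converting them with `Tensor.to_dense()` before moving to the `mps` device may avoid the unimplemented sparse kernels entirely.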
