Tts_angular NotImplementedError: The operator 'aten::norm.dtype_out' is not currently implemented for the XPU device. #495

Closed
mengfei25 opened this issue Jun 27, 2024 · 3 comments

Comments


🐛 Describe the bug

Suite: torchbench_amp_fp16_training
Scenario: xpu train tts_angular
```
Traceback (most recent call last):
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/common.py", line 2294, in validate_model
    self.model_iter_fn(model, example_inputs)
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/torchbench.py", line 456, in forward_and_backward_pass
    pred = mod(*cloned_inputs)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1566, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1575, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/benchmark/torchbenchmark/models/tts_angular/model.py", line 61, in forward
    d = torch.nn.functional.normalize(d[:, -1], p=2, dim=1)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/nn/functional.py", line 4816, in normalize
    denom = input.norm(p, dim, keepdim=True).clamp_min(eps).expand_as(input)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_tensor.py", line 768, in norm
    return torch.norm(self, p, dim, keepdim, dtype=dtype)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/functional.py", line 1858, in norm
    return _VF.norm(input, p, _dim, keepdim=keepdim)  # type: ignore[attr-defined]
NotImplementedError: The operator 'aten::norm.dtype_out' is not currently implemented for the XPU device. Please open a feature on https://github.com/intel/torch-xpu-ops/issues. You can set the environment variable PYTORCH_ENABLE_XPU_FALLBACK=1 to use the CPU implementation as a fallback for XPU unimplemented operators. WARNING: this will bring unexpected performance compared with running natively on XPU.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/common.py", line 4177, in run
    ) = runner.load_model(
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/torchbench.py", line 380, in load_model
    self.validate_model(model, example_inputs)
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/common.py", line 2296, in validate_model
    raise RuntimeError("Eager run failed") from e
RuntimeError: Eager run failed
```

Result: eager_fail_to_run
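For anyone triaging, a minimal sketch that should reach the same dispatch path outside the benchmark harness (assumes an XPU-enabled PyTorch build and an available XPU device; the tensor shape is illustrative, not taken from the model):

```python
# Minimal repro sketch: assumes an XPU-enabled PyTorch build and a reachable
# XPU device. The tensor shape is illustrative, not from tts_angular.
import torch

x = torch.randn(8, 256, device="xpu", dtype=torch.float16)
# F.normalize computes input.norm(p, dim, keepdim=True) internally; on a
# half-precision XPU tensor this reaches the unimplemented
# aten::norm.dtype_out overload and raises the NotImplementedError above.
y = torch.nn.functional.normalize(x, p=2, dim=1)
```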

Versions

torch-xpu-ops: 31c4001
pytorch: 0f81473d7b4a1bf09246410712df22541be7caf3 + PRs: 127277,129120
device: PVC 1100, 803.61, 0.5.1
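As the error message itself suggests, the CPU fallback can serve as an interim workaround until the operator lands; a sketch, with the message's own caveat that fallback ops cost performance relative to running natively on XPU:

```python
# Interim workaround from the error message: route XPU operators that are not
# yet implemented to the CPU implementation. The variable must be set before
# torch is imported in the process.
import os
os.environ["PYTORCH_ENABLE_XPU_FALLBACK"] = "1"

import torch  # subsequent aten::norm.dtype_out calls fall back to the CPU kernel
```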


retonym commented Jul 15, 2024

This operator is not implemented in the XPU backend. @fengyuan14, please help check this issue. Thanks.

retonym assigned fengyuan14 and unassigned themselves on Jul 15, 2024
@fengyuan14 (Contributor)

@mengfei25 (Contributor, Author)

Passes in the latest weekly test.
