Support lazy initialization for empty_xpu (#1115)
# Motivation
Fix pytorch/pytorch#140877
Some PyTorch C++ users call the empty op directly. In that situation, lazy
initialization has not yet been triggered, so we add `lazyInitDevice` here,
which also aligns with the CUDA
[convention](https://github.com/pytorch/pytorch/blob/150ffb6e07f3802f8d0d3e843486e77a872803cf/aten/src/ATen/cuda/EmptyTensor.cpp#L13).
guangyey authored Nov 25, 2024
1 parent 27ebbf8 commit 3af6f73
1 change: 1 addition & 0 deletions src/ATen/xpu/EmptyTensor.cpp
@@ -12,6 +12,7 @@ TensorBase empty_xpu(
     ScalarType dtype,
     c10::optional<Device> device_opt,
     c10::optional<c10::MemoryFormat> memory_format_opt) {
+  at::globalContext().lazyInitDevice(c10::DeviceType::XPU);
   const auto device = device_or_default(device_opt);
   TORCH_INTERNAL_ASSERT(device.is_xpu());
   const c10::DeviceGuard device_guard(device);
