
Add aten::_nested_from_padded #1045

Open
wants to merge 7 commits into base: main

Conversation

min-jean-cho (Contributor)

  • _nested_from_padded

@min-jean-cho min-jean-cho changed the title [Draft] Add aten::_nested_from_padded Add aten::_nested_from_padded Nov 5, 2024
@min-jean-cho min-jean-cho requested a review from xytintel November 5, 2024 23:53
@min-jean-cho (Contributor, Author) commented Nov 6, 2024

Hi @daisyden, could you please help port the following UTs to torch-xpu-ops? Thanks!
test_nested_tensor_from_padded: https://github.com/pytorch/pytorch/blob/main/test/test_nestedtensor.py#L2961-L2975
test_nested_tensor_from_padded_fused: https://github.com/pytorch/pytorch/blob/main/test/test_nestedtensor.py#L2977-L2991
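For context on what the tests above exercise: `_nested_from_padded` converts a dense padded batch into the ragged storage a nested tensor wraps. A minimal pure-C++ sketch of that semantics (the function name, types, and layout here are simplified stand-ins, not the actual ATen kernel):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: given a dense [batch, max_len] buffer and the
// per-row valid lengths, strip the trailing padding from each row to
// produce the ragged (nested) representation.
std::vector<std::vector<float>> nested_from_padded(
    const std::vector<float>& padded,
    const std::vector<std::size_t>& lengths,
    std::size_t max_len) {
  std::vector<std::vector<float>> nested;
  for (std::size_t b = 0; b < lengths.size(); ++b) {
    const float* row = padded.data() + b * max_len;
    // Keep only the first lengths[b] valid elements of this row.
    nested.emplace_back(row, row + lengths[b]);
  }
  return nested;
}
```

The real kernel performs this copy on device over contiguous storage; the sketch only illustrates the padded-to-ragged mapping the unit tests verify.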

}

template <typename T>
void remove_padding_kernelLauncher(
Contributor
Please fix the inconsistent naming. In PyTorch, the usual convention is `launch_xxx_kernel`.

Contributor Author
Thanks, updated.

} else {
AT_ERROR("Only support fp32/fp16 for padded input");
}
return at::detail::make_tensor<at::native::NestedTensorImpl>(
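The `AT_ERROR` branch above rejects any padded input that is not fp32/fp16. A hedged sketch of that guard pattern (the `ScalarType` enum and function name here are simplified stand-ins for the ATen types):

```cpp
#include <stdexcept>

// Simplified stand-in for at::ScalarType.
enum class ScalarType { Float, Half, Double, Long };

// Sketch of the dtype guard: only fp32/fp16 padded inputs are dispatched
// to the kernel; any other dtype raises, mirroring the AT_ERROR branch.
void check_padded_dtype(ScalarType t) {
  if (t != ScalarType::Float && t != ScalarType::Half) {
    throw std::runtime_error("Only support fp32/fp16 for padded input");
  }
}
```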
Contributor
I remember we have not upstreamed DispatchKey::NestedTensorXPU yet, so this should not work.

Contributor Author
@fengyuan14 , I created #1141 to keep track of PRs related to NestedTensor support for xpu.

@fengyuan14 (Contributor) commented Nov 7, 2024

I think we can leave the Nested Tensor related operators for now:

  1. I asked the workload team; there is no usage of Nested Tensor operators in real popular workloads.
  2. We need a full plan to support Nested Tensor: 1) upstream the dispatch key first; 2) implement ~80 operators; 3) add a new test suite.
  3. Why does IPEX have these operators? It was an internal requirement to align op coverage with HPU.

Given the timeline, it is a bit tight for us to catch up with 2.6 if we need to upstream the dispatch key first. We can treat Nested Tensor support as a complete feature for 2.7.

@EikanWang Please comment and decide.

@min-jean-cho min-jean-cho changed the title Add aten::_nested_from_padded [Draft] Add aten::_nested_from_padded Nov 8, 2024
@xytintel xytintel marked this pull request as draft November 13, 2024 04:58
@min-jean-cho min-jean-cho marked this pull request as ready for review December 10, 2024 23:32
@min-jean-cho (Contributor, Author)

Depends on #1140, #1142.

@min-jean-cho min-jean-cho changed the title [Draft] Add aten::_nested_from_padded Add aten::_nested_from_padded Dec 10, 2024