
Swap IntNBit TBE Kernel with SSD Embedding DB TBE Kernel for SSD Inference Enablement #3134


Open · wants to merge 1 commit into main

Conversation

faran928
Contributor

Summary:
For SSD inference, we have added EmbeddingDB as a custom in-house storage backend that is not exposed to OSS. We leverage the TGIF stack to rewrite the IntNBit TBE kernel as the SSD EmbeddingDB TBE kernel, since the SSD TBE embedding kernel can't be exposed within the TorchRec code base.

Additionally, SSD support is only provided in di_sharding_pass, and SSD can be enabled without adding extra DI shards. In that case, the tables assigned to the CPU host can simply use table-wise (TW) sharding. Added the TW sharding logic accordingly.
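The TW fallback described above can be sketched as follows. This is a minimal illustrative sketch, not TorchRec's actual planner API: `Table`, `assign_sharding`, and the `"row_wise"` default are hypothetical names standing in for the real sharding-plan machinery.

```python
# Hypothetical sketch of the TW-sharding fallback: with SSD enabled and no
# additional DI shards, tables placed on the CPU host get table-wise (TW)
# sharding. Table and assign_sharding are illustrative, not TorchRec APIs.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Table:
    name: str
    device: str  # where the planner placed the table: "cpu" or "cuda"


def assign_sharding(tables: List[Table], ssd_enabled: bool) -> Dict[str, str]:
    """Choose a sharding type per table.

    When SSD is enabled without dedicated DI shards, CPU-hosted tables
    fall back to table-wise sharding; everything else keeps an assumed
    default ("row_wise" here is a placeholder).
    """
    plan: Dict[str, str] = {}
    for table in tables:
        if ssd_enabled and table.device == "cpu":
            plan[table.name] = "table_wise"
        else:
            plan[table.name] = "row_wise"
    return plan
```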

Differential Revision: D76953960

@facebook-github-bot added the "CLA Signed" label on Jun 24, 2025. (This label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed.)
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D76953960

faran928 added a commit to faran928/torchrec that referenced this pull request Jun 24, 2025
…rnece Enablement (pytorch#3134)



@@ -224,6 +224,7 @@ def __init__(
self._is_weighted: bool = module.is_weighted()
self._lookups: List[nn.Module] = []
self._create_lookups(fused_params, device)
self._fused_params = fused_params
Member

Suggested change
```diff
-self._fused_params = fused_params
+self.fused_params = fused_params
```

looks like this attr is used externally, remove the _
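The review suggestion follows the usual Python convention: a leading underscore signals an internal attribute, so anything read by external code (here, presumably the kernel-swap pass) should be public. A minimal sketch, with a hypothetical `ShardedQuantModule` standing in for the real class:

```python
# Illustrative only: ShardedQuantModule is a stand-in for the class in the
# diff above. The attribute is public (no leading underscore) because
# external code, such as a kernel-rewrite pass, reads it.
class ShardedQuantModule:
    def __init__(self, fused_params: dict) -> None:
        # Public by convention: consumed outside this class.
        self.fused_params = fused_params


def read_fused_params(module: ShardedQuantModule) -> dict:
    """An external consumer accessing the public attribute."""
    return module.fused_params
```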

