Enabling EmbeddingQuantizer and SharedEmbeddingQuantizer #1525
Conversation
…aredEmbeddingQuantizer
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/1525
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: there is currently 1 active SEV. If your PR is affected, please view it below.
❌ 4 New Failures, 1 Cancelled Job as of commit 49039c7 with merge base ecdb4e3.
NEW FAILURES: the following jobs have failed.
CANCELLED JOB: the following job was cancelled; please retry.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Looks like the imports aren't happy. I wonder if we need a torchao pin bump?
```python
weight_dtype = getattr(torch, f"int{bit_width}")
try:
    quantize_(
        model,
        int8_dynamic_activation_intx_weight(
            weight_dtype=weight_dtype,
            granularity=granularity,
```
granularity => weight_granularity
has_weight_zeros=True => weight_mapping_type=MappingType.ASYMMETRIC
has_weight_zeros=False => weight_mapping_type=MappingType.SYMMETRIC
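For illustration, a minimal sketch of the call after those renames, reusing `model`, `weight_dtype`, and `granularity` from the excerpt above. The import locations and exact keyword set are assumptions about the pinned torchao commit, not a definitive API:

```python
# Sketch only: applies the reviewer's renames; import paths and exact
# signatures may differ across torchao versions.
from torchao.quantization.quant_api import quantize_
from torchao.quantization.quant_primitives import MappingType
from torchao.experimental.quant_api import int8_dynamic_activation_intx_weight

quantize_(
    model,
    int8_dynamic_activation_intx_weight(
        weight_dtype=weight_dtype,
        weight_granularity=granularity,              # was: granularity=
        weight_mapping_type=MappingType.ASYMMETRIC,  # was: has_weight_zeros=True
        # weight_mapping_type=MappingType.SYMMETRIC  # was: has_weight_zeros=False
    ),
)
```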
torchchat/utils/quantize.py
```diff
@@ -154,45 +170,86 @@ def quantize_model(
            print(f"Encountered error during quantization: {e}")
            print("Trying with PlainLayout")
```
Use QDQLayout instead
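A hedged sketch of that fallback with QDQLayout in place of PlainLayout; the QDQLayout import path and the `layout=` keyword are assumptions about the torchao version in use:

```python
# Sketch only: QDQLayout fallback per the review. The QDQLayout import
# path and the layout= keyword are assumptions; they vary across
# torchao versions.
from torchao.quantization.quant_api import quantize_
from torchao.experimental.quant_api import int8_dynamic_activation_intx_weight
from torchao.experimental.q_dq_layout import QDQLayout  # assumed location


def quantize_with_fallback(model, weight_dtype, granularity):
    """Try the default layout first; fall back to QDQLayout on failure."""
    try:
        quantize_(
            model,
            int8_dynamic_activation_intx_weight(
                weight_dtype=weight_dtype,
                weight_granularity=granularity,
            ),
        )
    except Exception as e:
        print(f"Encountered error during quantization: {e}")
        print("Trying with QDQLayout")
        quantize_(
            model,
            int8_dynamic_activation_intx_weight(
                weight_dtype=weight_dtype,
                weight_granularity=granularity,
                layout=QDQLayout(),  # fallback layout suggested in review
            ),
        )
```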
Yeah, you will need to update the torchao pin to something more recent (just pick the latest commit in torchao): https://github.com/pytorch/torchchat/blob/main/install/.pins/torchao-pin.txt
…und of PR comments. Fixes to usage of EmbeddingQuantizer and SharedEmbeddingQuantizer
Overview
This PR enables the use of EmbeddingQuantizer and SharedEmbeddingQuantizer as quantization configuration options.
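For context, here is a rough sketch of how these quantizers could be applied via torchao's experimental API. The class names come from this PR; the import paths, constructor arguments, and the `.quantize()` entry point shown here are assumptions that may differ from the pinned torchao version:

```python
# Sketch only: illustrative usage, not the definitive API of this PR.
# Import paths, constructor arguments, and .quantize() are assumptions.
import torch
from torchao.experimental.quant_api import (
    EmbeddingQuantizer,
    SharedEmbeddingQuantizer,
)
from torchao.quantization.granularity import PerGroup

# model: a loaded torch.nn.Module with embedding (and tied unembedding) layers.

# Quantize the embedding tables on their own...
EmbeddingQuantizer(
    weight_dtype=torch.int4,
    granularity=PerGroup(32),
).quantize(model)

# ...or share one quantization between tied embedding and unembedding
# (lm_head) weights.
SharedEmbeddingQuantizer(
    weight_dtype=torch.int4,
    granularity=PerGroup(32),
).quantize(model)
```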
Running lintrunner appears to have changed several lines in this file. However, the edits made strictly to enable these new experimental quantizers are on the following lines:
- Lines 46-49: Imports for EmbeddingQuantizer and SharedEmbeddingQuantizer
- Lines 202-234: Logic for setting the EmbeddingQuantizer and SharedEmbeddingQuantizer options
- Lines 1033-1034: Options mapping quantization config types to the corresponding quantizers