fix a typo in code comment (bitsandbytes-foundation#1063)
was pointing to wrong class
nairbv authored Feb 14, 2024
1 parent 5b28fd3 commit ceae150
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion bitsandbytes/nn/modules.py
@@ -275,7 +275,7 @@ class Linear4bit(nn.Linear):
     compute datatypes such as FP4 and NF4.
     In order to quantize a linear layer one should first load the original fp16 / bf16 weights into
-    the Linear8bitLt module, then call `quantized_module.to("cuda")` to quantize the fp16 / bf16 weights.
+    the Linear4bit module, then call `quantized_module.to("cuda")` to quantize the fp16 / bf16 weights.
     Example:
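For context, the corrected docstring describes the 4-bit quantization workflow: load the original fp16 / bf16 weights into a `Linear4bit` module, then move the module to CUDA to trigger quantization. Below is a minimal sketch of that workflow; the layer sizes, the `nn.Sequential` wrapper, and the use of `load_state_dict` are illustrative assumptions, not part of this commit.

```python
import torch
import torch.nn as nn
import bitsandbytes as bnb

# An ordinary model whose (fp16 / bf16 in practice) weights we want to quantize to 4-bit.
fp16_model = nn.Sequential(
    nn.Linear(64, 64),
    nn.Linear(64, 64),
)

# Mirror the architecture with Linear4bit, the class this docstring belongs to.
quantized_model = nn.Sequential(
    bnb.nn.Linear4bit(64, 64, compute_dtype=torch.float16),
    bnb.nn.Linear4bit(64, 64, compute_dtype=torch.float16),
)

# First load the original weights into the Linear4bit modules...
quantized_model.load_state_dict(fp16_model.state_dict())

# ...then move to CUDA; the .to("cuda") call is what quantizes the weights.
quantized_model = quantized_model.to("cuda")
```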
