Checklist

1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.

Describe the bug
An error occurs when loading the gemma model with the command below.
First of all, bitsandbytes_stacked_params_mapping is not defined for the Gemma model, so I added it myself; a sketch of what I added is below. (https://github.com/sgl-project/sglang/blob/v0.4.0.post1/python/sglang/srt/model_loader/loader.py#L908-L912)
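For reference, here is a minimal sketch of the mapping I added to the Gemma model class, patterned after the bitsandbytes_stacked_params_mapping that other models define; the exact entries and shard indices are my assumption based on Gemma's stacked qkv_proj and gate_up_proj layers, not code taken from sglang:

```python
# Hypothetical addition to the Gemma model class: maps each checkpoint
# shard name to the stacked parameter it is packed into and its shard
# index, so the bitsandbytes loader can resolve quantized shards.
bitsandbytes_stacked_params_mapping = {
    # checkpoint shard name: (stacked param name, shard index)
    "q_proj": ("qkv_proj", 0),
    "k_proj": ("qkv_proj", 1),
    "v_proj": ("qkv_proj", 2),
    "gate_proj": ("gate_up_proj", 0),
    "up_proj": ("gate_up_proj", 1),
}
```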
When loading in 4-bit with bitsandbytes, a KeyError occurs where weight is renamed to qweight in the code below. Are there any cases where weight should be changed to qweight?

weight_name = weight_name.replace(".weight", ".qweight")

https://github.com/sgl-project/sglang/blob/v0.4.0.post1/python/sglang/srt/model_loader/loader.py#L839-L844
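To illustrate the failure mode I am seeing, here is a minimal, self-contained sketch (the params_dict contents and helper name are hypothetical, not sglang's actual loader code) of how this rename can raise a KeyError when the module only registers a .weight parameter:

```python
# Standalone sketch of the suspected failure mode; the parameter names
# below are illustrative, not taken from the actual Gemma state dict.
params_dict = {
    "model.layers.0.self_attn.qkv_proj.weight": "tensor placeholder",
}

def lookup_quantized(weight_name: str):
    # The loader rewrites ".weight" to ".qweight" for bitsandbytes
    # 4-bit checkpoints before looking the parameter up.
    quant_name = weight_name.replace(".weight", ".qweight")
    # If the module never registered a ".qweight" parameter, the lookup
    # fails with a KeyError like the one reported above.
    return params_dict[quant_name]

lookup_quantized("model.layers.0.self_attn.qkv_proj.weight")
# -> KeyError: 'model.layers.0.self_attn.qkv_proj.qweight'
```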
Reproduction
Are there any cases where weight should be changed to qweight?
Environment