Hi,
We fixed this issue in the latest release.
In the previous version, we loaded all models in FP16 by default. However, some models use BF16 by default, and the rotation is sensitive to weight precision. We now first load the model in FP32, which is compatible with both FP16 and BF16, and perform the weight rotation there. We then load the FP16 model and replace the corresponding weights with the rotated ones.
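A minimal sketch of this flow (not the project's actual code): the model is loaded twice, the rotation happens on the FP32 copy, and the rotated weights are cast down into the FP16 copy. The `rotate_weights` helper below is a toy stand-in that right-multiplies each linear weight by a random orthogonal matrix just to illustrate the precision handling, not a functional equivalent of the real rotation, and the model ID is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM

MODEL_ID = "facebook/opt-125m"  # placeholder; substitute your model

# 1) Load in FP32: safe whether the checkpoint was stored in FP16 or
#    BF16, and keeps the rotation numerically accurate.
model_fp32 = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float32
)

# Toy stand-in for the real rotation step: right-multiply each linear
# weight by a random orthogonal matrix (QR of a Gaussian matrix).
# The actual method applies paired rotations that preserve the model's
# function; this only demonstrates the dtype handling.
@torch.no_grad()
def rotate_weights(model):
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            q, _ = torch.linalg.qr(
                torch.randn(module.in_features, module.in_features)
            )
            module.weight.copy_(module.weight @ q)

rotate_weights(model_fp32)

# 2) Reload the model in FP16 and overwrite its weights with the
#    rotated FP32 weights, cast back down to half precision.
model_fp16 = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16
)
model_fp16.load_state_dict(
    {k: (v.half() if v.is_floating_point() else v)
     for k, v in model_fp32.state_dict().items()}
)
```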
Hi! Thanks for your work. Could you explain why the weights are rotated again in the code above?