
HfModel: Disable use_exllama by default for GPTQ models #1474

Merged 1 commit into main on Nov 11, 2024

Conversation

jambayk (Contributor) commented Nov 9, 2024

Describe your changes

The default value for use_exllama in transformers is True. However, a model loaded with the exllama kernels cannot be placed on CPU (which is needed for model export) and does not have a backward pass implemented, so it cannot be finetuned.
Since the main uses for GPTQ-quantized models in Olive are export and finetuning, we disable use_exllama by default. Users can pass use_exllama=True in the loading args if they want to enable exllama for inference, etc. (see the sketch below the checklist).
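For reference, this is roughly what the behavior looks like from the transformers side; a minimal sketch, assuming a GPTQ-quantized checkpoint (the model_id below is hypothetical) and using the GPTQConfig option that transformers exposes for exllama:

```python
from transformers import AutoModelForCausalLM, GPTQConfig

# Hypothetical GPTQ-quantized checkpoint; substitute any GPTQ model.
model_id = "some-org/some-gptq-model"

# With use_exllama left at its default (True), loading on CPU for export or
# running a backward pass for finetuning fails. Disabling it keeps the
# quantized linear layers on the plain (non-exllama) kernels.
quantization_config = GPTQConfig(bits=4, use_exllama=False)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
)
```

If inference with the exllama kernels is still wanted, use_exllama=True can be passed back in through the model's loading args (for example via HfModel's load_kwargs), as noted in the description above.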

Checklist before requesting a review

  • Add unit tests for this change.
  • Make sure all tests can pass.
  • Update documents if necessary.
  • Lint and apply fixes to your code by running lintrunner -a
  • Is this a user-facing change? If yes, give a description of this change to be included in the release notes.
  • Is this PR including examples changes? If yes, please remember to update example documentation in a follow-up PR.

(Optional) Issue link

@jambayk jambayk merged commit 4009def into main Nov 11, 2024
25 checks passed
@jambayk jambayk deleted the jambayk/dont-use-exllama branch November 11, 2024 20:46