[Test] Save local model path in PEFT adapter config #2026
It was pointed out in #2025 that the one-liner `AutoModelForCausalLM.from_pretrained(model_id)` doesn't work for our models trained with LoRA: instead you need to call `AutoModelForCausalLM.from_pretrained` on a hub model, followed by `PeftModel.from_pretrained` on the LoRA-trained model.
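For concreteness, a minimal sketch of that two-step flow (the model ID and adapter path below are placeholders, not taken from this PR):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Step 1: load the base model from the hub.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Step 2: load the LoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(base, "/path/to/lora/checkpoint")
```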
This PR is a demo of saving the checkpointer's directory as part of our `adapter_config.json` file, which should enable the one-liner to work.

Edit: after discussing with @pbontrager, this won't integrate well with e.g. push to hub, since it assumes a local path. So I'm not sure we actually want to land this as is; the better solution may be to make some changes to `tune download`.
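A rough sketch of the idea, not the PR's exact diff: point the standard `base_model_name_or_path` field of `adapter_config.json` at the local directory the checkpointer wrote. The paths below are hypothetical.

```python
import json
from pathlib import Path

checkpoint_dir = "/tmp/torchtune/llama2_7b/base"        # hypothetical checkpointer output dir
adapter_dir = Path("/tmp/torchtune/llama2_7b/adapter")  # hypothetical adapter dir

config_path = adapter_dir / "adapter_config.json"
config = json.loads(config_path.read_text())

# Record where the base model weights live so downstream loading can find them.
config["base_model_name_or_path"] = checkpoint_dir
config_path.write_text(json.dumps(config, indent=2))
```

With `peft` installed, `AutoModelForCausalLM.from_pretrained` on a directory containing an `adapter_config.json` should resolve the base model from `base_model_name_or_path` before applying the adapter, which is why a local path there works on the machine that trained the model but breaks once the adapter is pushed to the hub.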