Added example with Workflow interface for finetuning Llama2 LLM #905
base: develop
Conversation
Thanks @manuelhsantana. This PR is very close. Three small changes to address, then this is ready to merge.
openfl-tutorials/experimental/Workflow_Interface_501_FineTuning_LLAMA2.ipynb
Fix typos and unused comments
I am trying to run it on CPU, but am running into an error: @manuelhsantana, did you happen to run into this? I can try to triage a bit more, but from a quick investigation it seems to be expecting CUDA drivers, which I believe is currently a requirement for bitsandbytes. For easier testing, I reduced the dataset size by replacing the data cell with this:
I changed the optimizer to avoid the warning on CPU.
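The exact optimizer change is not shown here. A plausible sketch, assuming the warning came from bitsandbytes' paged AdamW variants (which expect CUDA), is to fall back to the standard PyTorch `torch.optim.AdamW` on CPU; the model and learning rate below are placeholders, not the notebook's Llama2 setup:

```python
import torch

# Placeholder model standing in for the Llama2 fine-tuning setup.
model = torch.nn.Linear(8, 2)

# Standard AdamW runs fine on CPU, unlike bitsandbytes' paged AdamW
# variants, which expect CUDA and warn (or fail) without a GPU.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)

# Minimal sanity check: one backward pass and optimizer step on CPU.
loss = model(torch.randn(4, 8)).sum()
loss.backward()
optimizer.step()
```

When using the Hugging Face `Trainer`, the equivalent change would be selecting a CPU-compatible value for the `optim` training argument instead of a paged bitsandbytes optimizer.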
Thanks - this resolved the issue on my end. Everything looks good to me. Great contribution!
This PR introduces a new example of fine-tuning the Llama2 Language Model (LLM), using the workflow interface.
The main objective of this PR is to provide users and developers with a practical guide on how to fine-tune the Llama2 LLM for their specific use cases.
The added example includes:
A few things to keep in mind:
This tutorial serves as a basic example, and users are encouraged to adapt and expand upon it to suit their specific needs and requirements.
Users are invited to explore this example and provide feedback, which will be invaluable in refining and expanding our set of examples.
Please review the changes and provide your valuable feedback.