
not able to Load the Fine Tuned Model and Run Inference in Fine_Tune_Llama_2_by_generating_data_from_the_LLM_OpenAI #7

Open
Opperessor opened this issue Nov 4, 2023 · 0 comments

Comments

@Opperessor

Hi, very helpful tutorial. I followed all the steps, but I'm not able to complete step 12, "Load the Fine Tuned Model and Run Inference on GPU", in Fine_Tune_Llama_2_by_generating_data_from_the_LLM_OpenAI.
It throws an out-of-memory error.
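For context on why this step can run out of GPU memory, here is a minimal sketch that estimates the memory footprint of the model weights alone at different precisions. The 7B parameter count is an assumption (the notebook fine-tunes Llama 2, but the exact variant isn't stated in this issue); activations and the KV cache would require additional memory on top of these figures.

```python
# Rough estimate of GPU memory needed just to hold the model weights.
# Assumes a 7B-parameter Llama 2 model; activations and KV cache add more.

def weight_memory_gib(n_params: float, bits_per_param: int) -> float:
    """Memory for weights in GiB at the given precision."""
    return n_params * bits_per_param / 8 / 1024**3

N_PARAMS = 7e9  # Llama-2-7B (assumed)
for name, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("4-bit", 4)]:
    print(f"{name}: ~{weight_memory_gib(N_PARAMS, bits):.1f} GiB")
```

Under these assumptions, fp16 weights alone need roughly 13 GiB, which already exceeds common consumer GPUs, while a 4-bit quantized load needs around 3.3 GiB, which is why quantized loading (e.g. via bitsandbytes) is a common workaround for this kind of error.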
