terminate called after throwing an instance of 'std::runtime_error' | what(): unexpectedly reached end of file | Aborted (core dumped) #6

Open
anhbsn opened this issue Oct 20, 2023 · 0 comments

anhbsn commented Oct 20, 2023

Hello, I am running the llama-2-7b-chat.ggmlv3.q4_0.bin model with Run_llama2_local_cpu_upload.
The server runs Ubuntu 20.04. On my local computer (Windows) it works very well, but when I run it on the other machine (the server), it does not work.

I use this model with the code from https://github.com/MuhammadMoinFaisal/LargeLanguageModelsProjects/tree/main/Run_llama2_local_cpu_upload
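
For context, my loading step looks roughly like this (a minimal sketch assuming that repo's use of langchain's CTransformers wrapper for GGML models on CPU; the exact parameters may differ from the actual notebook):

```python
# Minimal sketch of the loading step, assuming langchain's
# CTransformers wrapper for running a GGML model on CPU
# (exact arguments may differ from the repo's notebook).
from langchain.llms import CTransformers

llm = CTransformers(
    model="llama-2-7b-chat.ggmlv3.q4_0.bin",  # path to the local GGML file
    model_type="llama",
    config={"max_new_tokens": 256, "temperature": 0.7},
)

print(llm("What is the capital of France?"))
```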

Error:

```
terminate called after throwing an instance of 'std::runtime_error'
  what():  unexpectedly reached end of file
Aborted (core dumped)
```
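
Could this mean the model file on the server is truncated (for example, an incomplete download or copy)? A quick sanity check I could run, as a hypothetical sketch:

```python
import os

# Hypothetical check: compare the on-disk size of the model file on the
# server against the size of the original download; a truncated copy is
# a common cause of "unexpectedly reached end of file" in GGML loaders.
path = "llama-2-7b-chat.ggmlv3.q4_0.bin"
print(f"{os.path.getsize(path) / 1024**3:.2f} GiB")
```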

If you have any solution, please share it with me. Thank you so much!
