stuck #41

Open
C0deXG opened this issue May 10, 2023 · 2 comments

Comments

@C0deXG commented May 10, 2023

I'm following the instructions to install, but I had to change the numpy version to 1.19.0 to make it work. I also needed a model to run, since there is no model bundled with this repo, so I downloaded vicuna.bin from the FastChat repo, created a 7B folder inside the models folder, and put ggml-vocab.bin there. When I run this command from the instructions: ./main -m models/7B/ggml-vocab.bin -p "the sky is" I get this:

command: ./main -m models/7B/ggml-vocab.bin -p "the sky is"

error:
main: build = 526 (e6a46b0)
main: seed = 1683697939
llama.cpp: loading model from models/7B/ggml-vocab.bin
error loading model: missing tok_embeddings.weight
llama_init_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'models/7B/ggml-vocab.bin'
main: error: unable to load model

Can I skip this part, and how can I move forward if there is no model bundled with this repo?

Please help if you can. Also, if I try to host the backend as an API, how is that possible, since I'm just using localhost:8080 as the backend endpoint?

@keldenl (Owner) commented May 12, 2023

You need an actual model – ggml-vocab.bin isn't a model (it only contains the tokenizer vocabulary). You can download one online; there are plenty on Hugging Face, just make sure it's labelled ggml.
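As a concrete sketch of the suggestion above – the `<user>/<repo>` path and the filename are placeholders, not a real Hugging Face repository; substitute any model whose files are labelled ggml:

```shell
# Hypothetical example: download a ggml-format model into llama.cpp's
# models/ directory and point ./main at it instead of ggml-vocab.bin.
mkdir -p models/7B
curl -L -o models/7B/ggml-model-q4_0.bin \
  "https://huggingface.co/<user>/<repo>/resolve/main/ggml-model-q4_0.bin"
./main -m models/7B/ggml-model-q4_0.bin -p "the sky is"
```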

@C0deXG (Author) commented May 12, 2023

> You need an actual model – ggml-vocab.bin isn't a model (it only contains the tokenizer vocabulary). You can download one online; there are plenty on Hugging Face, just make sure it's labelled ggml.

I'm getting an npm error:
npm install

[email protected] postinstall
npm run updateengines && cd InferenceEngine/embeddings/all-mpnet-base-v2 && python -m pip install -r requirements.txt

[email protected] updateengines
git submodule foreach git pull

sh: python: command not found
npm ERR! code 127
npm ERR! path /Users/khederyusuf/Desktop/llama.cpp/gpt-llama.cpp
npm ERR! command failed
npm ERR! command sh -c npm run updateengines && cd InferenceEngine/embeddings/all-mpnet-base-v2 && python -m pip install -r requirements.txt

npm ERR! A complete log of this run can be found in:
npm ERR! /Users/khederyusuf/.npm/_logs/2023-05-12T10_54_16_481Z-debug-0.log
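The actual failure in the log above is `sh: python: command not found` – recent macOS no longer ships a bare `python` binary, and the postinstall script calls `python` rather than `python3`. A minimal workaround sketch, assuming `python3` is already installed (e.g. via Homebrew), is to expose it as `python` on PATH before re-running the install:

```shell
# Hypothetical workaround: symlink python3 into a temporary directory
# under the name "python", then prepend that directory to PATH so the
# npm postinstall script (which invokes plain `python`) can find it.
PYBIN="$(mktemp -d)"
ln -s "$(command -v python3)" "$PYBIN/python"
PATH="$PYBIN:$PATH" npm install
```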
