
Unresolved reference 'llama' #305

Open
12dc32d opened this issue Aug 6, 2024 · 2 comments

Comments

@12dc32d

12dc32d commented Aug 6, 2024

Hello:
I downloaded the llama3 repository and the 8B model weights from GitHub, but ran into problems when setting up the virtual environment.
After installing the packages from requirements.txt, the imports `from llama.tokenizer import ChatFormat, Tokenizer` and `from llama.model import ModelArgs, Transformer` in generation.py and test_tokenizer.py are not recognized.
Likewise, `llama` is not the name of a standalone package, so it cannot be installed directly.
Does anyone understand this problem? Please reply; I will be waiting.

@ChiruChirag

There are a few things that might have happened here:
1. Make sure your repository structure is intact and your files are in the proper locations (try relative imports if they are not).
2. Ensure you have the necessary dependencies installed.
3. If you are working in VS Code, check that the proper interpreter is selected (the one from the virtual environment where you installed the dependencies).
4. Refer to the documentation or README file of the llama repository for specific setup instructions.

I don't know the exact problem; these are some possible fixes. Explain in more detail for more specific help.
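Points 1–3 above come down to a single question: can the interpreter the IDE is actually running see a `llama` package at all? A small diagnostic sketch (plain Python; nothing repo-specific is assumed beyond the `llama` package name from the issue) can confirm which interpreter is in use and whether the package resolves:

```python
# Diagnostic for "Unresolved reference 'llama'": print which interpreter is
# running, and check whether the `llama` package is importable from it.
import importlib.util
import sys


def can_import(name: str) -> bool:
    """Return True if `name` can be found by the current interpreter."""
    return importlib.util.find_spec(name) is not None


if __name__ == "__main__":
    print("interpreter:", sys.executable)
    print("llama importable:", can_import("llama"))
    # If this prints False, the interpreter in use (e.g. the one configured
    # in PyCharm/VS Code) is not the environment where the repo was set up.
```

Running this with the same interpreter the IDE is configured to use tells you whether the problem is the environment selection or the installation itself.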

@12dc32d
Author

12dc32d commented Aug 7, 2024


Hello bro:
When I was setting up the virtual environment in PyCharm, I used `pip install -r requirements.txt` to install the packages. But there is an error about the `llama` module at the top of test_tokenizer.py:

import os
from unittest import TestCase
from llama.tokenizer import ChatFormat, Tokenizer

The `llama` in `from llama.tokenizer import ChatFormat, Tokenizer` is not recognized, and the errors are:

Unresolved reference 'llama'
Unresolved reference 'ChatFormat'
Unresolved reference 'Tokenizer'

I tried to install a `llama` module into the project before, but could not find one. The project was downloaded directly from GitHub.
Could you share how to solve such a problem?
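The likely root cause here is that requirements.txt installs the dependencies but not the repository itself: `llama` is the package inside the checkout, and (if I recall the meta-llama READMEs correctly) it is meant to be installed with `pip install -e .` from the top-level directory, the one containing setup.py. A hedged sketch of doing that with the current interpreter's pip — the checkout path below is a placeholder, not a real path:

```python
# Sketch: install a local checkout in editable mode using this interpreter's
# pip, so that `from llama.tokenizer import ...` resolves afterwards.
import subprocess
import sys
from pathlib import Path


def editable_install(repo_root: Path) -> int:
    """Run `pip install -e <repo_root>` with the current interpreter's pip.

    Raises FileNotFoundError if the directory has no setup.py/pyproject.toml;
    otherwise returns pip's exit code (0 on success).
    """
    if not (repo_root / "setup.py").exists() and not (repo_root / "pyproject.toml").exists():
        raise FileNotFoundError(f"no setup.py or pyproject.toml in {repo_root}")
    return subprocess.call(
        [sys.executable, "-m", "pip", "install", "-e", str(repo_root)]
    )


if __name__ == "__main__":
    repo = Path("path/to/llama3")  # placeholder: point this at your clone
    if repo.exists():
        editable_install(repo)
```

From a terminal the equivalent is simply running `pip install -e .` in the repo root with the virtual environment activated; in PyCharm, also make sure the project interpreter is set to that same environment.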
