
OSError: F:\ai\MagicQuill\models\llava-v1.5-7b-finetune-clean does not appear to have a file named config.json. Checkout 'https://huggingface.co/F:\ai\MagicQuill\models\llava-v1.5-7b-finetune-clean/None' for available files. #111

dimagod101 opened this issue Jan 31, 2025 · 0 comments

Type of Issue:
Bug

Summary:
Whenever I try running `python gradio_run.py`, it fails with the error in the title:

```
(MagicQuill) F:\ai\MagicQuill>python gradio_run.py
Total VRAM 12281 MB, total RAM 49053 MB
pytorch version: 2.1.2+cu118
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 SUPER : native
Using pytorch cross attention
['F:\\ai\\MagicQuill', 'C:\\Users\\PC\\.conda\\envs\\MagicQuill\\python310.zip', 'C:\\Users\\PC\\.conda\\envs\\MagicQuill\\DLLs', 'C:\\Users\\PC\\.conda\\envs\\MagicQuill\\lib', 'C:\\Users\\PC\\.conda\\envs\\MagicQuill', 'C:\\Users\\PC\\.conda\\envs\\MagicQuill\\lib\\site-packages', '__editable__.llava-1.2.2.post1.finder.__path_hook__', 'F:\\ai\\MagicQuill\\MagicQuill']
Traceback (most recent call last):
  File "F:\ai\MagicQuill\gradio_run.py", line 24, in <module>
    llavaModel = LLaVAModel()
  File "F:\ai\MagicQuill\MagicQuill\llava_new.py", line 26, in __init__
    self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
  File "F:\ai\MagicQuill\MagicQuill\LLaVA\llava\model\builder.py", line 116, in load_pretrained_model
    tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
  File "C:\Users\PC\.conda\envs\MagicQuill\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 773, in from_pretrained
    config = AutoConfig.from_pretrained(
  File "C:\Users\PC\.conda\envs\MagicQuill\lib\site-packages\transformers\models\auto\configuration_auto.py", line 1100, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "C:\Users\PC\.conda\envs\MagicQuill\lib\site-packages\transformers\configuration_utils.py", line 634, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "C:\Users\PC\.conda\envs\MagicQuill\lib\site-packages\transformers\configuration_utils.py", line 689, in _get_config_dict
    resolved_config_file = cached_file(
  File "C:\Users\PC\.conda\envs\MagicQuill\lib\site-packages\transformers\utils\hub.py", line 356, in cached_file
    raise EnvironmentError(
OSError: F:\ai\MagicQuill\models\llava-v1.5-7b-finetune-clean does not appear to have a file named config.json. Checkout 'https://huggingface.co/F:\ai\MagicQuill\models\llava-v1.5-7b-finetune-clean/None' for available files.
```
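In case it helps with debugging, here is a minimal check (a sketch using the same model path from the traceback; nothing MagicQuill-specific) to confirm whether the directory and its `config.json` actually exist, since the error suggests the model files were never downloaded or ended up in the wrong place:

```python
# Minimal sketch, assuming the model path from the traceback above:
# check that the local model directory exists and list what it contains,
# then test for the config.json that transformers is failing to find.
import os

model_path = r"F:\ai\MagicQuill\models\llava-v1.5-7b-finetune-clean"

print("directory exists:", os.path.isdir(model_path))
if os.path.isdir(model_path):
    print("contents:", os.listdir(model_path))
print("config.json present:",
      os.path.isfile(os.path.join(model_path, "config.json")))
```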

Sorry if my issue is a bit messy; it's my first time writing an issue on GitHub.
