OSError: F:\ai\MagicQuill\models\llava-v1.5-7b-finetune-clean does not appear to have a file named config.json. Checkout 'https://huggingface.co/F:\ai\MagicQuill\models\llava-v1.5-7b-finetune-clean/None' for available files.
#111 · Open · dimagod101 opened this issue on Jan 31, 2025 · 0 comments
Type of Issue:
Bug
Summary:
Whenever I run python gradio_run.py, it fails with the error in the title:
(MagicQuill) F:\ai\MagicQuill>python gradio_run.py
Total VRAM 12281 MB, total RAM 49053 MB
pytorch version: 2.1.2+cu118
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 SUPER : native
Using pytorch cross attention
['F:\ai\MagicQuill', 'C:\Users\PC\.conda\envs\MagicQuill\python310.zip', 'C:\Users\PC\.conda\envs\MagicQuill\DLLs', 'C:\Users\PC\.conda\envs\MagicQuill\lib', 'C:\Users\PC\.conda\envs\MagicQuill', 'C:\Users\PC\.conda\envs\MagicQuill\lib\site-packages', 'editable.llava-1.2.2.post1.finder.path_hook', 'F:\ai\MagicQuill\MagicQuill']
Traceback (most recent call last):
  File "F:\ai\MagicQuill\gradio_run.py", line 24, in <module>
    llavaModel = LLaVAModel()
  File "F:\ai\MagicQuill\MagicQuill\llava_new.py", line 26, in __init__
    self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
  File "F:\ai\MagicQuill\MagicQuill\LLaVA\llava\model\builder.py", line 116, in load_pretrained_model
    tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
  File "C:\Users\PC\.conda\envs\MagicQuill\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 773, in from_pretrained
    config = AutoConfig.from_pretrained(
  File "C:\Users\PC\.conda\envs\MagicQuill\lib\site-packages\transformers\models\auto\configuration_auto.py", line 1100, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "C:\Users\PC\.conda\envs\MagicQuill\lib\site-packages\transformers\configuration_utils.py", line 634, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "C:\Users\PC\.conda\envs\MagicQuill\lib\site-packages\transformers\configuration_utils.py", line 689, in _get_config_dict
    resolved_config_file = cached_file(
  File "C:\Users\PC\.conda\envs\MagicQuill\lib\site-packages\transformers\utils\hub.py", line 356, in cached_file
    raise EnvironmentError(
OSError: F:\ai\MagicQuill\models\llava-v1.5-7b-finetune-clean does not appear to have a file named config.json. Checkout 'https://huggingface.co/F:\ai\MagicQuill\models\llava-v1.5-7b-finetune-clean/None' for available files.
Sorry if this issue is a bit messy; it's my first time writing an issue on GitHub.
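
Note: the traceback shows transformers resolving F:\ai\MagicQuill\models\llava-v1.5-7b-finetune-clean as a local directory and failing because no config.json exists at its top level, which usually means the checkpoint was never downloaded into that folder, or was placed one level too deep. A minimal check, assuming only the path taken from the log above:

from pathlib import Path

# Path copied from the error message; nothing else is assumed.
model_dir = Path(r"F:\ai\MagicQuill\models\llava-v1.5-7b-finetune-clean")

if not model_dir.is_dir():
    print(f"{model_dir} does not exist -- the checkpoint was never downloaded")
else:
    # A complete checkpoint should contain config.json alongside the
    # tokenizer files and weight shards.
    print(sorted(p.name for p in model_dir.iterdir()))
    print("config.json present:", (model_dir / "config.json").exists())

The '/None' in the suggested URL is a side effect of the same failure: once the local lookup misses, transformers formats the path as if it were a Hub repo id with no revision, so the broken link itself can be ignored.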