Hi Team,

I'm trying to evaluate VideoLLaMA2 on MVBench. When I run inference_video_mcqa_mvbench.py, the following traceback occurs:
Traceback (most recent call last):
File "/***/VideoLLaMA2/videollama2/eval/inference_video_mcqa_mvbench.py", line 203, in <module>
run_inference(args)
File "/***/VideoLLaMA2/videollama2/eval/inference_video_mcqa_mvbench.py", line 164, in run_inference
for i, line in enumerate(tqdm(val_loader)):
File "/***/python3.11/site-packages/tqdm/std.py", line 1178, in __iter__
for obj in iterable:
File "/***/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
data = self._next_data()
^^^^^^^^^^^^^^^^^
File "/***/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1345, in _next_data
return self._process_data(data)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/***/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
data.reraise()
File "/***/lib/python3.11/site-packages/torch/_utils.py", line 694, in reraise
raise exception
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/***/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
^^^^^^^^^^^^^^^^^^^^
File "/***/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/***/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
~~~~~~~~~~~~^^^^^
File "/***/VideoLLaMA2/videollama2/eval/inference_video_mcqa_mvbench.py", line 50, in __getitem__
torch_imgs = self.processor(video_path, s=bound[0], e=bound[1])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/***/VideoLLaMA2/./videollama2/mm_utils.py", line 202, in process_video
video = processor.preprocess(images, return_tensors='pt')['pixel_values']
^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'preprocess'
I find that the processor in VideoLLaMA2/videollama2/model/__init__.py (Lines 193 to 208 in 42bf9fe):

# NOTE: videollama2 adopts the same processor for processing image and video.
processor = vision_tower.image_processor
if hasattr(model.config, "max_sequence_length"):
    context_len = model.config.max_sequence_length
else:
    context_len = 2048
return tokenizer, model, processor, context_len

is initialized as None. For model_type=mistral in the config.json of VideoLLaMA2-7B and VideoLLaMA2-7B-16F, the processor stays None, which likely causes the traceback above. Could you please help me address the problem? Thanks!
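For reference, the failure mode itself is plain Python: an attribute access on a processor that was never assigned. Below is a minimal, self-contained sketch of how an early guard could turn the opaque AttributeError inside a DataLoader worker into an immediate, actionable error. The FakeVisionTower class and load_processor_sketch helper are hypothetical illustrations, not VideoLLaMA2 code.

```python
class FakeVisionTower:
    """Hypothetical stand-in for a vision tower; may or may not carry a processor."""

    def __init__(self, image_processor=None):
        self.image_processor = image_processor


def load_processor_sketch(vision_tower):
    # Mirrors the pattern in videollama2/model/__init__.py: take the
    # processor from the vision tower, but fail loudly if it is missing
    # instead of returning None to the caller.
    processor = getattr(vision_tower, "image_processor", None)
    if processor is None:
        raise RuntimeError(
            "vision processor is None -- the branch that builds the vision "
            "tower probably never ran for this checkpoint's model_type"
        )
    return processor


# A tower with a processor passes through; a bare one raises here,
# rather than failing later in a worker process with
# "'NoneType' object has no attribute 'preprocess'".
ok = load_processor_sketch(FakeVisionTower(image_processor=object()))
try:
    load_processor_sketch(FakeVisionTower())
except RuntimeError as err:
    print(err)
```

The point of the sketch is only that the None should be caught at load time, where the model_type branching happens, not at dataset __getitem__ time.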
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("visual-question-answering", model="DAMO-NLP-SG/VideoLLaMA2-7B")
Transformers returns the following traceback:
ValueError: The checkpoint you are trying to load has model type `videollama2_mistral` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
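That ValueError is Transformers' auto-class machinery rejecting a model_type it has no registered config class for. A toy sketch of the lookup pattern (the REGISTRY dict and resolve_config helper are illustrative stand-ins, not the actual Transformers internals):

```python
# Illustrative subset of a model-type registry; the real table in
# Transformers maps many more model_type strings to config classes.
REGISTRY = {"mistral": "MistralConfig", "llama": "LlamaConfig"}


def resolve_config(model_type):
    # Custom architectures such as `videollama2_mistral` are absent from
    # the registry, so the lookup fails with the error quoted above.
    try:
        return REGISTRY[model_type]
    except KeyError:
        raise ValueError(
            f"The checkpoint you are trying to load has model type `{model_type}` "
            "but Transformers does not recognize this architecture."
        )
```

If this reading is right, the stock pipeline helper cannot instantiate the checkpoint on its own, which would be why the repository ships its own loading code in videollama2/model/__init__.py.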