RuntimeError: The shape of the 2D attn_mask is torch.Size([77, 77]), but should be (128, 128). #36
Comments
pip install open-clip-torch==2.24.0
Hi @ruiming46zrm, thanks for the reply. pip install open-clip-torch==2.24.0 didn't solve the issue; I got another error after installing it. [08/11 03:23:43 fcclip.data.datasets.register_cityscapes_panoptic]: 3 cities found in '/home/jovyan/Desktop/shared/fc_clip_vamsi/detectron2/datasets/cityscapes/leftImg8bit/val'.
It's another issue; you can debug and check your key names. I think the attn_mask issue is solved.
Thanks! Those are the default keys. To bypass the issue, I just assigned 0 to class_id for now.
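For reference, a quick way to check whether the installed open_clip tokenizer and text transformer agree on the context length (a minimal sketch, assuming the convnext_large_d_320 config that the ConvNeXt-Large FC-CLIP checkpoint builds on; attn_mask is the causal-mask buffer open_clip registers on the CLIP model):

```python
# Minimal consistency check (sketch): compare the tokenizer's padded length
# with the text transformer's causal attn_mask in the installed open_clip.
# Assumption: the "convnext_large_d_320" model config; timm must be installed.
import open_clip

print("open_clip version:", open_clip.__version__)  # the thread pins 2.24.0

model, _, _ = open_clip.create_model_and_transforms("convnext_large_d_320", pretrained=None)
tokenizer = open_clip.get_tokenizer("convnext_large_d_320")

tokens = tokenizer(["a photo of a car"])                  # shape: (1, context_length)
print("token sequence length:", tokens.shape[1])          # expected: 77
print("attn_mask shape:", tuple(model.attn_mask.shape))   # expected: (77, 77)
# If these disagree (e.g. 128 vs 77), the tokenizer and text transformer are
# mismatched, which is the shape error in this issue; pinning
# open-clip-torch==2.24.0 keeps them consistent for FC-CLIP.
```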
Hi bytedance,
I was trying to reproduce the Cityscapes evaluation results from the paper (test only, Table 2).
I have done the necessary setup.
When I try to run:
python train_net.py \
  --config-file configs/coco/panoptic-segmentation/fcclip/fcclip_convnext_large_eval_cityscapes.yaml \
  --eval-only MODEL.WEIGHTS FC-CLIP_ConvNeXt-Large/fcclip_cocopan.pth
below are the logs I got:
[08/11 02:18:13 fcclip.data.datasets.register_cityscapes_panoptic]: 3 cities found in '/home/jovyan/Desktop/shared/fc_clip_vamsi/detectron2/datasets/cityscapes/leftImg8bit/val'.
[08/11 02:18:13 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(1024, 1024), max_size=2560, sample_style='choice')]
[08/11 02:18:13 d2.data.common]: Serializing the dataset using: <class 'detectron2.data.common._TorchSerializedList'>
[08/11 02:18:13 d2.data.common]: Serializing 500 elements to byte tensors and concatenating them all ...
[08/11 02:18:13 d2.data.common]: Serialized dataset takes 0.81 MiB
[08/11 02:18:13 d2.evaluation.evaluator]: Start inference on 500 batches
[08/11 02:18:13 d2.evaluation.cityscapes_evaluation]: Writing cityscapes results to temporary directory /tmp/cityscapes_eval_fnp4wh2e ...
[08/11 02:18:13 d2.evaluation.cityscapes_evaluation]: Writing cityscapes results to temporary directory /tmp/cityscapes_eval_e0osk9d3 ...
Traceback (most recent call last):
File "train_net.py", line 340, in
launch(
File "/home/jovyan/shared/fc_clip_vamsi/detectron2/detectron2/engine/launch.py", line 84, in launch
main_func(*args)
File "train_net.py", line 325, in main
res = Trainer.test(cfg, model)
File "/home/jovyan/shared/fc_clip_vamsi/detectron2/detectron2/engine/defaults.py", line 621, in test
results_i = inference_on_dataset(model, data_loader, evaluator)
File "/home/jovyan/shared/fc_clip_vamsi/detectron2/detectron2/evaluation/evaluator.py", line 165, in inference_on_dataset
outputs = model(inputs)
File "/opt/conda/envs/fcclip/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jovyan/shared/fc_clip_vamsi/fc-clip/fcclip/fcclip.py", line 324, in forward
text_classifier, num_templates = self.get_text_classifier()
File "/home/jovyan/shared/fc_clip_vamsi/fc-clip/fcclip/fcclip.py", line 208, in get_text_classifier
text_classifier.append(self.backbone.get_text_classifier(self.test_class_names[idx:idx+bs], self.device).detach())
File "/home/jovyan/shared/fc_clip_vamsi/fc-clip/fcclip/modeling/backbone/clip.py", line 211, in get_text_classifier
text_features = self.encode_text(text_tokens, normalize=False)
File "/home/jovyan/shared/fc_clip_vamsi/fc-clip/fcclip/modeling/backbone/clip.py", line 95, in encode_text
x = self.clip_model.transformer(x, attn_mask=self.clip_model.attn_mask)
File "/opt/conda/envs/fcclip/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/fcclip/lib/python3.8/site-packages/open_clip/transformer.py", line 363, in forward
x = r(x, attn_mask=attn_mask)
File "/opt/conda/envs/fcclip/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/fcclip/lib/python3.8/site-packages/open_clip/transformer.py", line 263, in forward
x = q_x + self.ls_1(self.attention(q_x=self.ln_1(q_x), k_x=k_x, v_x=v_x, attn_mask=attn_mask))
File "/opt/conda/envs/fcclip/lib/python3.8/site-packages/open_clip/transformer.py", line 250, in attention
return self.attn(
File "/opt/conda/envs/fcclip/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/envs/fcclip/lib/python3.8/site-packages/torch/nn/modules/activation.py", line 1031, in forward
attn_output, attn_output_weights = F.multi_head_attention_forward(
File "/opt/conda/envs/fcclip/lib/python3.8/site-packages/torch/nn/functional.py", line 4992, in multi_head_attention_forward
raise RuntimeError(f"The shape of the 2D attn_mask is {attn_mask.shape}, but should be {correct_2d_size}.")
RuntimeError: The shape of the 2D attn_mask is torch.Size([77, 77]), but should be (128, 128).
Please let me know if you need more info.
Thanks!
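For context on what the error means: F.multi_head_attention_forward requires a 2D attn_mask to match the token sequence length, so a fixed 77x77 CLIP causal mask fails once the encoded prompts are padded to 128 tokens. A minimal, illustrative sketch of that check (the tensor sizes below are made up; this is not FC-CLIP's code):

```python
# Illustrative repro (sketch) of the 77-vs-128 attn_mask shape check.
import torch
import torch.nn.functional as F

embed_dim, num_heads, seq_len = 64, 4, 128        # prompts padded to 128 tokens
attn_mask = torch.zeros(77, 77)                    # fixed 77x77 causal mask

q = k = v = torch.randn(seq_len, 1, embed_dim)     # (seq_len, batch, embed_dim)
in_proj_w = torch.randn(3 * embed_dim, embed_dim)
in_proj_b = torch.zeros(3 * embed_dim)
out_proj_w = torch.randn(embed_dim, embed_dim)
out_proj_b = torch.zeros(embed_dim)

# Raises: RuntimeError: The shape of the 2D attn_mask is torch.Size([77, 77]),
# but should be (128, 128); the mask must match the sequence length.
F.multi_head_attention_forward(
    q, k, v, embed_dim, num_heads,
    in_proj_w, in_proj_b, None, None, False, 0.0,
    out_proj_w, out_proj_b, attn_mask=attn_mask,
)
```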