Well, you can find the checkpoint link together with its matching config link in the table, where the entries appear as "config/log/ckpt" — the config paired with each checkpoint is listed right next to it.
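For anyone hitting a similar error: the message means the checkpoint tensor holds 9216 = 96 × 96 elements, while the model built from the chosen config expects a [192, 96] weight (18432 elements), so the checkpoint and config do not match. A quick pre-load sanity check can catch this before `load_state_dict` fails. Below is a minimal sketch using plain shape dictionaries; the parameter name is made up for illustration and is not a real VMamba key:

```python
# Sketch: flag parameters whose checkpoint shape cannot be .view()-ed
# into the model's expected shape (element counts differ).
from math import prod

def find_shape_mismatches(ckpt_shapes, model_shapes):
    """Return params present in both dicts whose element counts differ."""
    mismatches = {}
    for name, expected in model_shapes.items():
        found = ckpt_shapes.get(name)
        if found is not None and prod(found) != prod(expected):
            mismatches[name] = (found, expected)
    return mismatches

# Hypothetical key, shapes taken from the error above:
# checkpoint has a 96x96 tensor, the config builds a [192, 96] weight.
ckpt = {"layers.0.op.weight": (96, 96)}
model = {"layers.0.op.weight": (192, 96)}
print(find_shape_mismatches(ckpt, model))
# -> {'layers.0.op.weight': ((96, 96), (192, 96))}
```

In practice you would fill the two dictionaries from `torch.load(path)["model"]` and `model.state_dict()` by mapping each key to `tensor.shape`; any entry this check reports points to a checkpoint/config mismatch.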
When I run with
"python -m torch.distributed.launch --nnodes 1 --node_rank 0 --nproc_per_node 1 --master_port 29502 main.py --master_port "127.0.0.1" --cfg configs/vssm/vmambav2_tiny_224.yaml --batch-size 64 --data-path /nfsv4/23039356r/data/defect_seg/dataset_new --output [my dataset path] --pretrained /nfsv4/23039356r/repository/VMamba-main/ckpt/vssm1_tiny_0230s_ckpt_epoch_264.pth",
I encountered a shape error: "RuntimeError: shape '[192, 96]' is invalid for input of size 9216".
Could you please help me solve this problem? Thanks.
Detailed log:
"
[rank0]: Traceback (most recent call last):
[rank0]: File "/nfsv4/23039356r/repository/VMamba-main/classification/main.py", line 440, in <module>
[rank0]: main(config, args)
[rank0]: File "/nfsv4/23039356r/repository/VMamba-main/classification/main.py", line 181, in main
[rank0]: load_pretrained_ema(config, model_without_ddp, logger, model_ema)
[rank0]: File "/nfsv4/23039356r/repository/VMamba-main/classification/utils/utils.py", line 64, in load_pretrained_ema
[rank0]: msg = model.load_state_dict(checkpoint['model'], strict=False)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/nfsv4/23039356r/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 2175, in load_state_dict
[rank0]: load(self, state_dict)
[rank0]: File "/nfsv4/23039356r/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 2163, in load
[rank0]: load(child, child_state_dict, child_prefix) # noqa: F821
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/nfsv4/23039356r/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 2163, in load
[rank0]: load(child, child_state_dict, child_prefix) # noqa: F821
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/nfsv4/23039356r/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 2163, in load
[rank0]: load(child, child_state_dict, child_prefix) # noqa: F821
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: [Previous line repeated 3 more times]
[rank0]: File "/nfsv4/23039356r/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 2157, in load
[rank0]: module._load_from_state_dict(
[rank0]: File "/nfsv4/23039356r/repository/VMamba-main/classification/models/vmamba.py", line 48, in _load_from_state_dict
[rank0]: state_dict[prefix + "weight"] = state_dict[prefix + "weight"].view(self.weight.shape)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: RuntimeError: shape '[192, 96]' is invalid for input of size 9216
"
Update:
when I used "--cfg configs/vssm/vmambav2v_tiny_224.yaml", the error disappeared. Is there something wrong with your README?