Missing key 'pad_shape' in img_meta #12279

Open
LaiXuanyu opened this issue Dec 25, 2024 · 0 comments
Thanks for your error report and we appreciate it a lot.

Checklist

  1. I have searched related issues but cannot get the expected help.
  2. I have read the FAQ documentation but cannot get the expected help.
  3. The bug has not been fixed in the latest version.

Describe the bug
I installed the latest mmdetection from source with pip and use the Mask R-CNN architecture, but during training the following bug appears: img_meta does not contain the key 'pad_shape'.
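A quick check I can run (a sketch only; the config filename is a placeholder and the default 'mmdet' scope is assumed) is to build the training dataset from the config and print which metainfo keys a packed sample actually carries:

# Hypothetical debug sketch: inspect the metainfo keys a packed training
# sample carries straight out of the data pipeline.
from mmengine.config import Config
from mmengine.registry import init_default_scope
from mmdet.registry import DATASETS

cfg = Config.fromfile('my_mask_rcnn_uiis_config.py')  # placeholder path
init_default_scope('mmdet')

dataset = DATASETS.build(cfg.train_dataloader.dataset)
sample = dataset[0]
# With the stock pipeline, 'pad_shape' is not packed at this stage; it is
# normally added later by the model's DetDataPreprocessor.
print(sample['data_samples'].metainfo.keys())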

Reproduction

  1. What command or script did you run?

[screenshot of the training command]

  2. Did you make any modifications on the code or config? Did you understand what you have modified?

I use a new backbone and a new dataset. The model builds and loads successfully, which I take to mean the changes themselves are reasonable (see the pipeline sketch below).

  3. What dataset did you use?

The UIIS dataset in COCO format.
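For reference, the stock COCO-style Mask R-CNN training pipeline in MMDetection 3.x looks like the sketch below (shown here as an assumption, since my actual pipeline is not attached); keeping these transforms plus PackDetInputs unchanged is the easiest way to rule out the data side as the source of missing metainfo keys.

# Stock Mask R-CNN COCO training pipeline (sketch; my actual pipeline may
# differ). PackDetInputs packs the image metainfo that later stages expect
# to find in img_meta.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PackDetInputs')
]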

Environment

sys.platform: linux
Python: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0]
CUDA available: False
MUSA available: False
numpy_random_seed: 2147483648
GCC: gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-18)
PyTorch: 2.1.2
PyTorch compiling details: PyTorch built with:

  • GCC 9.3
  • C++ Version: 201703
  • Intel(R) oneAPI Math Kernel Library Version 2022.1-Product Build 20220311 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v3.1.1 (Git Hash 64f6bcbcbab628e96f33a62c3e975f8535a7bde4)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • LAPACK is enabled (usually provided by MKL)
  • NNPACK is enabled
  • CPU capability usage: AVX512
  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.8, CUDNN_VERSION=8.7.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-unused-private-field -Wno-aligned-allocation-unavailable -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.1.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,

TorchVision: 0.16.2
OpenCV: 4.10.0
MMEngine: 0.10.5
MMDetection: 3.3.0+cfd5d3a

Error traceback
The traceback is as follows:

Traceback (most recent call last):
File "/rds/general/user/xl4423/home/AutoSAM/mmdetection/tools/train.py", line 123, in
main()
File "/rds/general/user/xl4423/home/AutoSAM/mmdetection/tools/train.py", line 119, in main
runner.train()
File "/rds/general/user/xl4423/home/anaconda3/envs/AutoSAM/lib/python3.11/site-packages/mmengine/runner/runner.py", line 1777, in train
model = self.train_loop.run() # type: ignore
^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/xl4423/home/anaconda3/envs/AutoSAM/lib/python3.11/site-packages/mmengine/runner/loops.py", line 98, in run
self.run_epoch()
File "/rds/general/user/xl4423/home/anaconda3/envs/AutoSAM/lib/python3.11/site-packages/mmengine/runner/loops.py", line 115, in run_epoch
self.run_iter(idx, data_batch)
File "/rds/general/user/xl4423/home/anaconda3/envs/AutoSAM/lib/python3.11/site-packages/mmengine/runner/loops.py", line 131, in run_iter
outputs = self.runner.model.train_step(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/xl4423/home/anaconda3/envs/AutoSAM/lib/python3.11/site-packages/mmengine/model/base_model/base_model.py", line 114, in train_step
losses = self._run_forward(data, mode='loss') # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/xl4423/home/anaconda3/envs/AutoSAM/lib/python3.11/site-packages/mmengine/model/base_model/base_model.py", line 361, in _run_forward
results = self(**data, mode=mode)
^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/xl4423/home/anaconda3/envs/AutoSAM/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/xl4423/home/anaconda3/envs/AutoSAM/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/xl4423/home/AutoSAM/mmdetection/mmdet/models/detectors/base.py", line 92, in forward
return self.loss(inputs, data_samples)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/xl4423/home/AutoSAM/mmdetection/mmdet/models/detectors/two_stage.py", line 175, in loss
rpn_losses, rpn_results_list = self.rpn_head.loss_and_predict(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/xl4423/home/AutoSAM/mmdetection/mmdet/models/dense_heads/base_dense_head.py", line 165, in loss_and_predict
losses = self.loss_by_feat(*loss_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/xl4423/home/AutoSAM/mmdetection/mmdet/models/dense_heads/rpn_head.py", line 125, in loss_by_feat
losses = super().loss_by_feat(
^^^^^^^^^^^^^^^^^^^^^
File "/rds/general/user/xl4423/home/AutoSAM/mmdetection/mmdet/models/dense_heads/anchor_head.py", line 500, in loss_by_feat
anchor_list, valid_flag_list = self.get_anchors(
^^^^^^^^^^^^^^^^^
File "/rds/general/user/xl4423/home/AutoSAM/mmdetection/mmdet/models/dense_heads/anchor_head.py", line 196, in get_anchors
featmap_sizes, img_meta['pad_shape'], device)
~~~~~~~~^^^^^^^^^^^^^
KeyError: 'pad_shape'

Bug fix
I am trying to fix it.
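In MMDetection 3.x the 'pad_shape' that anchor_head.get_anchors reads is normally written into each data sample's metainfo by the model's DetDataPreprocessor, so my current guess (an assumption, since I have not posted the full config here) is that my modified config replaced or dropped that preprocessor. A minimal sketch of the relevant block, with values taken from the standard Mask R-CNN config rather than from my own:

# Sketch of the data_preprocessor block (standard Mask R-CNN values; an
# assumption, not my actual config). DetDataPreprocessor pads each batch and
# writes 'pad_shape' and 'batch_input_shape' into every data sample's
# metainfo, which is what the RPN head later looks up as img_meta['pad_shape'].
model = dict(
    type='MaskRCNN',
    data_preprocessor=dict(
        type='DetDataPreprocessor',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        bgr_to_rgb=True,
        pad_mask=True,
        pad_size_divisor=32),
    # custom backbone / neck / heads go here, unchanged by this sketch
)

If the config does keep DetDataPreprocessor, the other place worth checking is any custom transform or dataset wrapper that rebuilds data_samples and drops their metainfo.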
