How should I fix the input size during testing? #238
I have modified the backbone of Mask2Former to VMamba, which requires the input size of my model to be fixed, e.g., 640x640. This is not an issue during training, because the train_dataloader outputs cropped images and I just need to specify the crop parameters. However, I ran into a problem during testing: the width and height of the test images are not equal, with only one of them being 640. I am not sure exactly how the test_dataloader operates (I am not very familiar with the detectron2 framework and could not find the relevant code). Which part of the code should I modify so that the input images to the model are 640x640 during testing? I don't need any other data augmentation. I would greatly appreciate an answer.
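For what it's worth, a common way to control test-time preprocessing in detectron2 is to build the test loader with a custom `DatasetMapper`. A minimal sketch, assuming detectron2's standard data API (`build_fixed_size_test_loader` and the dataset name argument are illustrative, not part of any repo):

```python
from detectron2.data import DatasetMapper, build_detection_test_loader
import detectron2.data.transforms as T

def build_fixed_size_test_loader(cfg, dataset_name):
    # Replace the default test-time ResizeShortestEdge (controlled by
    # cfg.INPUT.MIN_SIZE_TEST / cfg.INPUT.MAX_SIZE_TEST, which is why only
    # one side comes out as 640) with a hard resize to 640x640.
    mapper = DatasetMapper(
        cfg,
        is_train=False,
        augmentations=[T.Resize((640, 640))],  # (height, width)
    )
    return build_detection_test_loader(cfg, dataset_name, mapper=mapper)
```

Note that a hard 640x640 resize distorts the aspect ratio; resizing the shorter edge and then padding to 640x640 would preserve it.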
Same question. I resize the images in the forward function during inference, but it is not elegant :(
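A minimal sketch of that workaround, assuming a plain PyTorch model (the wrapper class is hypothetical, not part of Mask2Former):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FixedSizeWrapper(nn.Module):
    """Hypothetical wrapper that resizes inputs to a fixed size inside forward()."""

    def __init__(self, model, size=(640, 640)):
        super().__init__()
        self.model = model
        self.size = size

    def forward(self, x):
        h, w = x.shape[-2:]
        # Resize every input to the fixed size the backbone expects.
        x = F.interpolate(x, size=self.size, mode="bilinear", align_corners=False)
        out = self.model(x)
        # For dense outputs (e.g., segmentation logits), resize back.
        if torch.is_tensor(out) and out.dim() == 4:
            out = F.interpolate(out, size=(h, w), mode="bilinear", align_corners=False)
        return out
```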
I use HUST's ViM as the backbone (https://github.com/hustvl/Vim/blob/main/vim/models_mamba.py), in which PatchEmbed specifies the input size. Following Swin Transformer, I added a padding operation so that non-fixed input sizes can be used. Fortunately, neither ViM nor Mask2Former's pixel decoder places many requirements on the input size. You can try modifying PatchEmbed in the same way.
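A minimal sketch of that padding change, loosely following Swin's handling of input sizes that are not multiples of the patch size (this is illustrative, not the exact PatchEmbed from models_mamba.py):

```python
import torch.nn as nn
import torch.nn.functional as F

class PaddedPatchEmbed(nn.Module):
    """Patch embedding that pads H and W up to a multiple of patch_size
    instead of asserting a fixed img_size."""

    def __init__(self, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.patch_size = patch_size
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        _, _, h, w = x.shape
        # Pad on the right/bottom so both sides divide evenly by patch_size.
        pad_h = (self.patch_size - h % self.patch_size) % self.patch_size
        pad_w = (self.patch_size - w % self.patch_size) % self.patch_size
        if pad_h or pad_w:
            x = F.pad(x, (0, pad_w, 0, pad_h))
        x = self.proj(x)                     # (B, C, H', W')
        return x.flatten(2).transpose(1, 2)  # (B, N, C)
```

Anything downstream that assumes a fixed token count (e.g., absolute position embeddings) would have to be adapted as well, for instance by interpolating the embeddings to the padded grid.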
Thanks!