
Pre-training method differences between code and paper #17

Open
jakobamb opened this issue Oct 30, 2024 · 4 comments

@jakobamb

Hi,

thank you for releasing your work.

In the paper you write that you use the SimMIM method with mean-image masks and spatial frequency masking. The same approach is shown in the figure in this repository.

However, the code and description of this repo describe pre-training based on BEiT, not SimMIM, and do not mention the frequency masking. Are you in the process of updating the repo?

Are the weights you published from the BEiT pretraining, or from the USFM pretraining described in the MIA paper?
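For reference, my understanding of the mean-image masking the paper describes is roughly the following sketch (function name, shapes, and the batch-mean fill are my assumptions, not taken from this repo):

```python
import numpy as np

def apply_mean_image_mask(images, mask_ratio=0.5, patch=16, rng=None):
    """Replace a random subset of patches with the corresponding
    region of the mean image (SimMIM-style masking, mean-image fill).

    images: (B, H, W) grayscale batch; H and W assumed divisible by `patch`.
    Returns the masked images and the boolean per-patch mask (B, H//patch, W//patch).
    """
    rng = rng or np.random.default_rng()
    b, h, w = images.shape
    ph, pw = h // patch, w // patch
    # Mean image over the batch, used as fill for masked patches
    mean_img = images.mean(axis=0)
    mask = rng.random((b, ph, pw)) < mask_ratio
    out = images.copy()
    for i in range(b):
        for y in range(ph):
            for x in range(pw):
                if mask[i, y, x]:
                    out[i, y * patch:(y + 1) * patch,
                        x * patch:(x + 1) * patch] = \
                        mean_img[y * patch:(y + 1) * patch,
                                 x * patch:(x + 1) * patch]
    return out, mask
```

The model would then be trained to reconstruct the original pixels at the masked positions, which is where this differs from BEiT's discrete-token prediction target.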

Thanks

Jakob

@BodongDu

BodongDu commented Dec 3, 2024

Hello,

I had the same question. The pre-training models in the code are ViT and SegViT. Where did you find that the model is BEiT? I see ViT and SegViT in task/Cls.yaml and task/Seg.yaml, which are not consistent with the paper. I am looking forward to your reply.

Best Regards,
Bodong Du

@BodongDu

BodongDu commented Dec 3, 2024

I think I understand now: what we were talking about is a specific architecture, while the method described in the paper is a self-supervised pre-training method that can be applied to many architectures, such as BEiT or ViT.

@jakobamb
Author

jakobamb commented Dec 5, 2024

BEiT is also an SSL pre-training method; they all use ViTs as the model. I'm not sure where I read about BEiT. The code has been partially updated since then, and now I cannot find any pretraining-related code anymore.

@BodongDu

BodongDu commented Dec 5, 2024 via email
