Data preparation #13
Comments
Thank you for your interest. We used the COCO 2014 training, validation, and test images and the corresponding annotations.
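A minimal sketch of fetching those splits, in case the various download options on the official site are confusing. The URLs are the standard public COCO 2014 download links; the target directory `./data/coco` is an assumption chosen to match the `--data_root` used later in this thread.

```python
# Hedged sketch: download the COCO 2014 splits mentioned above into ./data/coco.
import urllib.request
from pathlib import Path

COCO_ZIPS = [
    "http://images.cocodataset.org/zips/train2014.zip",
    "http://images.cocodataset.org/zips/val2014.zip",
    "http://images.cocodataset.org/zips/test2014.zip",
    "http://images.cocodataset.org/annotations/annotations_trainval2014.zip",
]

dest = Path("./data/coco")
dest.mkdir(parents=True, exist_ok=True)

for url in COCO_ZIPS:
    target = dest / url.rsplit("/", 1)[-1]
    if not target.exists():  # skip archives that are already downloaded
        print(f"Downloading {url} -> {target}")
        urllib.request.urlretrieve(url, str(target))
```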
Thank you very much for your timely reply. Excuse me again: may I ask how to obtain the multi-modal, multi-task datasets used in your training? Since each dataset is stored in a different format, my main problem is that I did not fully understand the contents of DATASET.md. I'm sorry to have taken up your time, and please accept my apologies again!
You can gather all the necessary multi-modal data for the various tasks by following the instructions in DATASET.md. This data must be generated before starting training; during the training phase, the model draws on data from the different tasks. To begin, you can run the following command:
`python build_data/format_dataset_rp.py --save_root './image_pairs_train' --tasks ['det'] --data_root './data/coco'`
Afterwards, you can modify the training configuration accordingly. Let me know if you have any further questions.
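A minimal sketch of generating the data for several tasks in one go. The script path and the `--save_root`, `--tasks`, and `--data_root` flags come from the command above; the loop wrapper and the placeholder task list are assumptions, and the actual task names should be taken from DATASET.md.

```python
# Hedged sketch: invoke the repository's data-formatting script once per task,
# mirroring the single-task command given in the reply above.
import subprocess

TASKS = ["det"]  # placeholder: extend with the other task names from DATASET.md

for task in TASKS:
    subprocess.run(
        [
            "python", "build_data/format_dataset_rp.py",
            "--save_root", "./image_pairs_train",
            "--tasks", f"['{task}']",
            "--data_root", "./data/coco",
        ],
        check=True,  # stop early if formatting any task fails
    )
```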
Thank you for your extraordinary work! I would like to know how to download the correct dataset, given the various choices on the official website.