
How to build a semi-supervised data set #12126

Open · 1999luodi opened this issue Dec 11, 2024 · 5 comments

@1999luodi:

Where does the unlabeled2017 data of the semi-supervised dataset come from? Is it independent of the train data?

At the beginning, the COCO dataset layout:

```
mmdetection
├── data
│   ├── coco
│   │   ├── annotations
│   │   │   ├── image_info_unlabeled2017.json
│   │   │   ├── instances_train2017.json
│   │   │   ├── instances_val2017.json
│   │   ├── test2017
│   │   ├── train2017
│   │   ├── unlabeled2017
│   │   ├── val2017
```
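For reference: unlabeled2017 and image_info_unlabeled2017.json are separate parts of the official COCO 2017 release and are disjoint from train2017/val2017, so yes, they are independent of the train data. A minimal download sketch using the official COCO URLs (the destination filenames are illustrative):

```python
import urllib.request

# Official COCO 2017 downloads; unlabeled2017 is an annotation-free image
# set that does not overlap train2017 or val2017.
urls = [
    "http://images.cocodataset.org/zips/unlabeled2017.zip",
    "http://images.cocodataset.org/annotations/image_info_unlabeled2017.zip",
]
for url in urls:
    filename = url.rsplit("/", 1)[-1]
    print(f"Downloading {url} -> {filename}")
    urllib.request.urlretrieve(url, filename)
```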

After generating the semi-supervised annotation JSONs:

```
mmdetection
├── data
│   ├── coco
│   │   ├── annotations
│   │   │   ├── image_info_unlabeled2017.json
│   │   │   ├── instances_train2017.json
│   │   │   ├── instances_val2017.json
│   │   ├── semi_anns
│   │   │   ├── instances_train2017.1@1.json
│   │   │   ├── instances_train2017.1@1-unlabeled.json
│   │   │   ├── instances_train2017.1@2.json
│   │   │   ├── instances_train2017.1@2-unlabeled.json
│   │   │   ├── instances_train2017.1@5.json
│   │   │   ├── instances_train2017.1@5-unlabeled.json
│   │   │   ├── instances_train2017.1@10.json
│   │   │   ├── instances_train2017.1@10-unlabeled.json
│   │   │   ├── instances_train2017.2@1.json
│   │   │   ├── instances_train2017.2@1-unlabeled.json
│   │   ├── test2017
│   │   ├── train2017
│   │   ├── unlabeled2017
│   │   ├── val2017
```
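For reference: the semi_anns files above are produced from instances_train2017.json (in mmdetection this is done by tools/misc/split_coco.py), by randomly sampling a labeled subset at a given percentage for each fold. A minimal sketch of the split logic, with illustrative function and argument names, assuming standard COCO JSON structure:

```python
import json
import random

def split_coco(ann_file, out_prefix, percent=10, fold=1, seed=0):
    """Split a COCO annotation file into labeled/unlabeled subsets.

    Illustrative re-implementation of what mmdetection's
    tools/misc/split_coco.py does; names and defaults are assumptions.
    """
    with open(ann_file) as f:
        anns = json.load(f)

    # Sample `percent`% of the images as the labeled split; each `fold`
    # uses a different seed so the folds can serve as cross-validation.
    random.seed(seed + fold)
    images = anns['images']
    n_labeled = int(len(images) * percent / 100)
    labeled_ids = {img['id'] for img in random.sample(images, n_labeled)}

    def subset(keep_labeled):
        imgs = [im for im in images if (im['id'] in labeled_ids) == keep_labeled]
        ids = {im['id'] for im in imgs}
        return {
            'images': imgs,
            'annotations': [a for a in anns['annotations'] if a['image_id'] in ids],
            'categories': anns['categories'],
        }

    with open(f'{out_prefix}.{fold}@{percent}.json', 'w') as f:
        json.dump(subset(True), f)
    with open(f'{out_prefix}.{fold}@{percent}-unlabeled.json', 'w') as f:
        json.dump(subset(False), f)

# e.g. writes instances_train2017.1@10.json and
# instances_train2017.1@10-unlabeled.json into data/coco/semi_anns/
split_coco('data/coco/annotations/instances_train2017.json',
           'data/coco/semi_anns/instances_train2017', percent=10, fold=1)
```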

The dataset layout used in the end:

```
mmdetection
├── data
│   ├── coco
│   │   ├── annotations
│   │   │   ├── image_info_unlabeled2017.json
│   │   │   ├── instances_train2017.json
│   │   │   ├── instances_unlabeled2017.json
│   │   │   ├── instances_val2017.json
│   │   ├── test2017
│   │   ├── train2017
│   │   ├── unlabeled2017
│   │   ├── val2017
```

Among unlabeled2017, image_info_unlabeled2017.json, and instances_unlabeled2017.json, how are these files made? And why does the semi_anns folder disappear?
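For reference: unlabeled2017 and image_info_unlabeled2017.json come straight from the official COCO download (see the sketch above), and instances_unlabeled2017.json is derived from them. The semi_anns folder does not really disappear; the last tree appears to describe a different setting (full train2017 as labeled data plus unlabeled2017 as extra unlabeled data), while semi_anns serves the partially-labeled 1%/2%/5%/10% splits. Because image_info_unlabeled2017.json has no 'categories' field, which CocoDataset expects, the usual fix is to copy the categories over from instances_train2017.json, roughly like this (paths are assumptions):

```python
import json

# image_info_unlabeled2017.json lists the unlabeled images but carries no
# 'categories' (and no annotations); borrow the categories from the train
# file so CocoDataset can load it as instances_unlabeled2017.json.
with open('data/coco/annotations/instances_train2017.json') as f:
    categories = json.load(f)['categories']
with open('data/coco/annotations/image_info_unlabeled2017.json') as f:
    unlabeled = json.load(f)
unlabeled['categories'] = categories
with open('data/coco/annotations/instances_unlabeled2017.json', 'w') as f:
    json.dump(unlabeled, f)
```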

@1999luodi (Author):

When I run train.py, training crashes at 150 or 300 iterations with an error saying it can't find loss_cla. I then ran tools/analysis_tools/browse_dataset.py, and it also reports an error: warnings.warn(f'Failed to add {vis_backend.__class__}, ').
```
[0/6305, elapsed: 0s, ETA:
Traceback (most recent call last):
  File "tools/analysis_tools/browse_dataset.py", line 89, in <module>
    main()
  File "tools/analysis_tools/browse_dataset.py", line 57, in main
    img = item['inputs'].permute(1, 2, 0).numpy()
AttributeError: 'dict' object has no attribute 'permute'
```
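A plausible cause for the browse_dataset.py crash: with a semi-supervised MultiBranch pipeline, item['inputs'] is a dict holding one tensor per branch (the keys, e.g. 'sup', 'unsup_teacher', 'unsup_student', depend on the config), not a single tensor, so .permute() fails. A hedged workaround for the script, under that assumption:

```python
inputs = item['inputs']
if isinstance(inputs, dict):
    # MultiBranch datasets return one tensor per branch; visualize the
    # first branch (branch key names depend on the config, e.g. 'sup').
    inputs = next(iter(inputs.values()))
img = inputs.permute(1, 2, 0).numpy()
```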

@1999luodi (Author):

Could any expert please help?

@1999luodi (Author):

@Keiku @jbwang1997

@1999luodi (Author):

God, help please!

@1999luodi (Author):

I also ran browse_dataset.py on the raw dataset (not converted to the semi-supervised format). It works, but emits some warnings: UserWarning: Warning: polygon out of bounds, plotted polygons may not be in the image and UserWarning: Warning: box out of bounds, plotted boxes may not be in the image.
