Instance segmentation for anime characters based on CondInst and SOLOv2, using the implementation from AdelaiDet and detectron2.
Many thanks to AniSeg by jerryli27: part of the dataset originates from the segmentation data provided in that repository. The rest of the dataset was retrieved from Pixiv and manually annotated.
A model based on SOLOv2 has been added, which generally outperforms the previous CondInst model. Evaluation results may be added soon.
A newer version of the model is still under development; stay tuned if you are interested.
Both AdelaiDet and detectron2 are required. Please refer to the official installation guides for AdelaiDet and detectron2. A Colab tutorial is provided.
- Download the pretrained model and use the corresponding config file:
  - CondInst: model weights, config.
  - SOLOv2: model weights, config.
- Run inference with:

```bash
python AdelaiDet/demo/demo.py \
    --config-file path/to/config.yaml \
    --input input1.jpg input2.jpg \
    --opts MODEL.WEIGHTS path/to/pretrained/model
```
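If you prefer to call the model from Python rather than through the demo script, the snippet below shows one way to do it. It is a minimal sketch, assuming AdelaiDet is installed so that `adet.config.get_cfg` is importable; the config, weight, and image paths are placeholders for the files you downloaded.

```python
import cv2
from adet.config import get_cfg            # AdelaiDet's config, which extends detectron2's
from detectron2.engine import DefaultPredictor

# Placeholder paths: substitute the downloaded config file and model weights.
cfg = get_cfg()
cfg.merge_from_file("path/to/config.yaml")
cfg.MODEL.WEIGHTS = "path/to/pretrained/model"

predictor = DefaultPredictor(cfg)

image = cv2.imread("input1.jpg")            # detectron2 expects BGR images as read by cv2
outputs = predictor(image)

# Per-instance masks and confidence scores for the detected characters.
instances = outputs["instances"].to("cpu")
print(instances.pred_masks.shape, instances.scores)
```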
Training uses transfer learning from models pretrained on COCO Instance Segmentation. Parameters can be found in the config file.
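For reference, a generic detectron2-style fine-tuning run on a custom COCO-format dataset might look like the sketch below. This is a minimal sketch, not this repo's training script: the dataset name, annotation paths, checkpoint path, and solver values are illustrative assumptions, and the released config files remain the source of truth for the actual parameters.

```python
import os

from adet.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Hypothetical dataset registration: a COCO-format JSON with segmentation annotations.
register_coco_instances("anime_train", {},
                        "datasets/anime/train.json", "datasets/anime/images")

cfg = get_cfg()
cfg.merge_from_file("path/to/config.yaml")          # placeholder: the released config
cfg.DATASETS.TRAIN = ("anime_train",)
cfg.DATASETS.TEST = ()
cfg.MODEL.WEIGHTS = "path/to/coco_pretrained.pth"   # placeholder: COCO-pretrained checkpoint
cfg.SOLVER.IMS_PER_BATCH = 4                        # illustrative solver values only
cfg.SOLVER.BASE_LR = 0.0025
cfg.SOLVER.MAX_ITER = 20000
cfg.OUTPUT_DIR = "./output"
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```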
The dataset is augmented by compositing segmented characters onto plain backgrounds. Models are trained with multi-scale augmentation.
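To illustrate the background-compositing augmentation, the sketch below pastes an RGBA character cutout onto a random solid-color background at a random scale and position. It is a simplified stand-in for the actual pipeline; the file names are placeholders, and the real augmentation would also update the corresponding masks and annotations.

```python
import random

from PIL import Image

def composite_on_plain_background(cutout_path: str, size=(640, 640)) -> Image.Image:
    """Paste an RGBA character cutout onto a random solid-color background."""
    cutout = Image.open(cutout_path).convert("RGBA")

    # Random plain background color.
    color = tuple(random.randint(0, 255) for _ in range(3)) + (255,)
    background = Image.new("RGBA", size, color)

    # Random scale (a rough stand-in for multi-scale augmentation) and placement.
    scale = random.uniform(0.5, 1.0)
    new_w = max(1, int(cutout.width * scale))
    new_h = max(1, int(cutout.height * scale))
    cutout = cutout.resize((new_w, new_h))
    x = random.randint(0, max(0, size[0] - cutout.width))
    y = random.randint(0, max(0, size[1] - cutout.height))

    # The cutout's alpha channel doubles as the paste mask (and the instance mask).
    background.paste(cutout, (x, y), mask=cutout)
    return background.convert("RGB")

# Placeholder file names.
augmented = composite_on_plain_background("character_cutout.png")
augmented.save("augmented_sample.jpg")
```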