A Lightweight Custom Foreground Segmentation Model Trained on Modified COCO.
Results Notebook · Report Bug
- Table of Contents
- About The Project
- Jupyter Notebooks - nbViewer
- Dataset Information
- Dataset Preprocessing
- Results
- How to Run
- Changelog
- Contributing
- License
- Contact
While working on synthetic data generation (like Cut, Paste and Learn: ArXiv Paper), one of the many challenges I faced was finding an easy way to extract the foreground object of interest from the background image. Having worked with simple UNet-like (Paper) architectures in the past, I wanted to test the hypothesis that an autoencoder should be able to learn to differentiate foreground from background objects, and that, if trained on enough variety, it would be able to do the same for unseen objects as well (classes it has not seen during training).
The goal was to train an efficient end-to-end CNN capable of segmenting out the foreground objects, even for classes that are not present in the training set.
Even though I have used the COCO dataset for this experiment, a good amount of preprocessing has been done to convert the dataset to a format that suits the need. Hence, if you want to replicate it, be sure to check that part out.
In summary: the results are quite impressive, as the model clearly displays the potential to accurately extract foreground objects, both for seen and unseen classes. Please read on to find out more!
The notebooks do not render properly on GitHub, hence please use the nbviewer links provided below to see the results.
- Dataset Preparation - Extracting the per-object masks from the COCO dataset
- Model - Model Notebook containing Data Loader and Architecture: Training
- Results/Testing - Test Results on Validation Set and a collection of Gun Images from Google
- The model is trained on the COCO 2017 Dataset.
- Dataset Splits Used:
- Train: COCO 2017 Train Images + Train Annotations - `instances_train2017.json`
- Val: COCO 2017 Val Images + Val Annotations - `instances_val2017.json`
- Dataset Download: https://cocodataset.org/#download
- Dataset Format Information: https://cocodataset.org/#format-data
- API to parse COCO: https://github.com/philferriere/cocoapi
- COCO contains instance segmentation annotations, and that was the primary reason behind selecting this dataset.
- We do not want more than one segmentation mask in a given image, since our aim is to build a network capable of segmenting out the foreground object.
- We thus process the dataset to crop out the region surrounding each instance segmentation, and save that crop and the corresponding instance mask as a training input and target respectively (a minimal sketch of this step follows the list).
- Though this does create quite a few bad samples, they have been deliberately left in the dataset to add some noise and because this is just an experimental project (it was almost morning and I was sleepy, got lazy).
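The exact cropping logic lives in the Dataset Preparation notebook; the snippet below is only a minimal sketch of that step, built on the standard pycocotools API. The annotation path, output directories, and padding amount are illustrative assumptions, not the notebook's actual values.

```python
# Minimal sketch: crop one image/mask pair per COCO instance annotation.
# Paths and the padding value are assumptions for illustration.
import os
import numpy as np
from PIL import Image
from pycocotools.coco import COCO

coco = COCO("annotations/instances_train2017.json")
img_dir, out_img_dir, out_mask_dir = "train2017", "images/train", "masks/train"
os.makedirs(out_img_dir, exist_ok=True)
os.makedirs(out_mask_dir, exist_ok=True)

for ann_id in coco.getAnnIds(iscrowd=False):
    ann = coco.loadAnns([ann_id])[0]
    img_info = coco.loadImgs([ann["image_id"]])[0]
    image = np.array(Image.open(os.path.join(img_dir, img_info["file_name"])).convert("RGB"))
    mask = coco.annToMask(ann)  # binary mask for this single instance

    # Crop a loose box around the instance so each sample contains
    # exactly one foreground object.
    x, y, w, h = (int(v) for v in ann["bbox"])
    pad = 20  # hypothetical padding around the object
    x0, y0 = max(x - pad, 0), max(y - pad, 0)
    x1, y1 = min(x + w + pad, image.shape[1]), min(y + h + pad, image.shape[0])

    Image.fromarray(image[y0:y1, x0:x1]).save(os.path.join(out_img_dir, f"{ann_id}.jpg"))
    Image.fromarray((mask[y0:y1, x0:x1] * 255).astype(np.uint8)).save(
        os.path.join(out_mask_dir, f"{ann_id}.jpg")
    )
```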
There are two sets of results:
- First: Results from the Validation Set of the COCO Dataset (after preprocessing). This contains data from classes which the model has already seen before (present in the training set). For example, dog, human, car, cup, etc.
- Second: The most interesting results are for a class that the model has never seen before (not present in the training set). To test this, I collected a variety of gun images from Google Images and passed them through the network for inference.
I'll let the results speak for themselves.
Images (Left to Right): Input Image, Predicted Image, Thresholded Mask @ 0.5, Ground Truth Mask
Inference Results on a Collected Dataset of Guns from Google (A class similar to this was not present in the COCO dataset)
Images (Left to Right): Input Image, Predicted Image, Thresholded Mask @ 0.5, Masked Background (Segmented Object)
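Producing the thresholded mask and the segmented object from a prediction is a simple post-processing step. Below is a minimal sketch, assuming the model outputs a per-pixel probability map in [0, 1] at the input resolution; the function name is hypothetical.

```python
# Minimal sketch: threshold a predicted probability map at 0.5 and use it
# to keep only the foreground pixels of the input image.
import numpy as np

def apply_mask(image: np.ndarray, pred: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Zero out background pixels, keeping only the segmented foreground object."""
    binary_mask = (pred >= threshold).astype(image.dtype)  # H x W, values in {0, 1}
    return image * binary_mask[..., np.newaxis]            # broadcast over the RGB channels
```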
The experiment should be fairly reproducible. However, a GPU is recommended for training; for inference, a CPU system would suffice.
- CPU: AMD Ryzen 7 3700X - 8 Cores 16 Threads
- GPU: Nvidia GeForce RTX 2080 Ti 11 GB
- RAM: 32 GB DDR4 @ 3200 MHz
- Storage: 1 TB NVMe SSD (This is not important, even a normal SSD would suffice)
- OS: Ubuntu 20.10
Alternative Option: Google Colaboratory - GPU Kernel
- Use the COCO API to extract the masks from the dataset. (Refer: Dataset Preparation.ipynb Notebook)
- Save the masks in a directory as `.jpg` images.
- Example Directory Structure:
```
.
├── images
│   ├── train
│   │   └── *.jpg
│   └── val
│       └── *.jpg
└── masks
    ├── train
    │   └── *.jpg
    └── val
        └── *.jpg
```
The dependencies are a simple list of deep learning libraries. The main architecture/model is developed with Keras, which comes as part of TensorFlow 2.x.
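For orientation, here is a minimal sketch of what a UNet-like encoder-decoder looks like in Keras (TensorFlow 2.x). The layer widths, depth, input resolution, and loss are illustrative assumptions; the actual architecture and training setup are in the Model notebook.

```python
# Minimal UNet-like encoder-decoder sketch in Keras (TensorFlow 2.x).
# Layer sizes and input shape are assumptions, not the notebook's exact values.
from tensorflow.keras import layers, Model

def build_segmenter(input_shape=(224, 224, 3)):
    inputs = layers.Input(shape=input_shape)

    # Encoder: downsample while increasing channel depth.
    x1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D()(x1)
    x2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(x2)
    b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)

    # Decoder: upsample and concatenate skip connections (UNet-style).
    u2 = layers.Concatenate()([layers.UpSampling2D()(b), x2])
    u2 = layers.Conv2D(64, 3, padding="same", activation="relu")(u2)
    u1 = layers.Concatenate()([layers.UpSampling2D()(u2), x1])
    u1 = layers.Conv2D(32, 3, padding="same", activation="relu")(u1)

    # Single-channel sigmoid output: per-pixel foreground probability.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(u1)
    return Model(inputs, outputs)

model = build_segmenter()
model.compile(optimizer="adam", loss="binary_crossentropy")
```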
Since this is a Proof of Concept Project, I am not maintaining a CHANGELOG.md at the moment. However, the primary goal is to improve the architecture to make the predicted masks more accurate.
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the Copyleft License: GNU AGPL v3. See LICENSE for more information.
- Website: Animikh Aich - Website
- LinkedIn: animikh-aich
- Email: [email protected]
- Twitter: @AichAnimikh