This repository contains the code of DISIR: Deep Image Segmentation with Interactive Refinement. In a nutshell, it consists of neural networks trained to perform semantic segmentation with human guidance. Please refer to our paper for detailed explanations.
This repository is divided into two parts:
- `train`, which contains the training code of the networks (README);
- `qgs_plugin`, which contains the code of the QGIS plugin used to perform the interactive segmentation (README).
```sh
conda create -n disir python=3.7 rtree gdal=2.4 opencv scipy shapely -c 'conda-forge'
conda activate disir
pip install -r requirements.txt
```
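To confirm that the environment resolved correctly, a short import test such as the sketch below can help. The exact package set is an assumption; in particular, PyTorch is presumed to be installed through `requirements.txt`.

```python
# sanity_check.py -- minimal sketch verifying that the DISIR dependencies import.
# Run inside the activated "disir" environment: python sanity_check.py
from osgeo import gdal   # GDAL 2.4 bindings installed via conda
import cv2               # OpenCV
import rtree             # spatial indexing
import scipy
import shapely
import torch             # assumed to be installed via requirements.txt

print("GDAL:", gdal.__version__)
print("OpenCV:", cv2.__version__)
print("SciPy:", scipy.__version__)
print("Shapely:", shapely.__version__)
print("PyTorch:", torch.__version__)
```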
Please note that this repository has been tested on Ubuntu 18.04, QGIS 3.8 and Python 3.7 only.
- Download a segmentation dataset such as ISPRS Potsdam or the INRIA dataset.
- Prepare this dataset according to *Dataset preprocessing* in `train/README.md`.
- Train a model and convert it to a TorchScript model, still following `train/README.md` (see the sketch after this list).
- Install the QGIS plugin following `qgs_plugin/README.md`.
- Follow *How to start* in `qgs_plugin/README.md` and start segmenting your data!
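The TorchScript export itself is handled by the scripts described in `train/README.md`; the snippet below is only a minimal sketch of what such a conversion with `torch.jit.trace` looks like. The toy network, the extra annotation channels, the tile size and the output filename are placeholders for illustration, not the repository's actual settings.

```python
import torch
import torch.nn as nn

# Placeholder network standing in for a trained DISIR model
# (the real architecture and checkpoint come from train/README.md).
model = nn.Sequential(
    nn.Conv2d(3 + 6, 16, kernel_size=3, padding=1),  # RGB + assumed per-class annotation channels
    nn.ReLU(),
    nn.Conv2d(16, 6, kernel_size=1),                  # one output map per class
)
model.eval()

# Trace with a dummy tile: batch of 1, 3 + 6 channels, 256x256 pixels (illustrative shape).
dummy = torch.randn(1, 3 + 6, 256, 256)
scripted = torch.jit.trace(model, dummy)
scripted.save("disir_model.pt")  # hypothetical filename for the model loaded by the QGIS plugin
```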
If you use this work for your projects, please take the time to cite our ISPRS Congress conference paper:
```bibtex
@Article{isprs-annals-V-2-2020-877-2020,
  AUTHOR = {Lenczner, G. and Le Saux, B. and Luminari, N. and Chan-Hon-Tong, A. and Le Besnerais, G.},
  TITLE = {DISIR: DEEP IMAGE SEGMENTATION WITH INTERACTIVE REFINEMENT},
  JOURNAL = {ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences},
  VOLUME = {V-2-2020},
  YEAR = {2020},
  PAGES = {877--884},
  URL = {https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/V-2-2020/877/2020/},
  DOI = {10.5194/isprs-annals-V-2-2020-877-2020}
}
```
Code is released under the MIT license for non-commercial and research purposes only. For commercial purposes, please contact the authors.
See LICENSE for more details.
See AUTHORS.md for the list of authors.
This work has been jointly conducted at Delair and ONERA-DTIS.