PolSAR classification / segmentation using complex-valued neural networks.
To cite the code, please use Zenodo:
"J Agustin Barrachina. (2022). NEGU93/polsar_cvnn: Antology of CVNN for PolSAR applications (1.0.0). Zenodo. https://doi.org/10.5281/zenodo.5821229"
```bibtex
@software{j_agustin_barrachina_2022_5821229,
  author    = {J Agustin Barrachina},
  title     = {{NEGU93/polsar\_cvnn: Antology of CVNN for PolSAR applications}},
  month     = jan,
  year      = 2022,
  publisher = {Zenodo},
  version   = {1.0.0},
  doi       = {10.5281/zenodo.5821229},
  url       = {https://doi.org/10.5281/zenodo.5821229}
}
```
- Install all dependencies, including `cvnn`.
- Clone this repository.
- Run the file `principal_simulation.py`. Optional parameters are as follows:
```
usage: principal_simulation.py [-h] [--dataset_method DATASET_METHOD]
                               [--tensorflow] [--epochs EPOCHS]
                               [--model MODEL] [--early_stop [EARLY_STOP]]
                               [--balance BALANCE] [--real_mode [REAL_MODE]]
                               [--dropout DROPOUT DROPOUT DROPOUT]
                               [--coherency] [--dataset DATASET]

optional arguments:
  -h, --help            show this help message and exit
  --dataset_method DATASET_METHOD
                        One of:
                        - random (default): randomly select the train and val sets
                        - separate: first split the image into sections and select the sets from there
                        - single_separated_image: as separate, but do not apply the sliding window operation
                          (no batches, only one image per set).
                          Only possible with segmentation models
  --tensorflow          Use the tensorflow library
  --epochs EPOCHS       (int) epochs to be done
  --model MODEL         deep model to be used. Options:
                        - fcnn
                        - cnn
                        - mlp
                        - 3d-cnn
  --early_stop [EARLY_STOP]
                        Apply early stopping to training
  --balance BALANCE     Deal with an unbalanced dataset by:
                        - loss: weighted loss
                        - dataset: balance the dataset by randomly removing pixels of predominant classes
                        - any other string will be treated as not balanced
  --real_mode [REAL_MODE]
                        run a real model instead of a complex one.
                        If [REAL_MODE] is used it should be one of:
                        - real_imag
                        - amplitude_phase
                        - amplitude_only
                        - real_only
  --dropout DROPOUT DROPOUT DROPOUT
                        dropout rates to be used on the downsampling, bottleneck, and
                        upsampling sections (in that order).
                        Example: python main.py --dropout 0.1 None 0.3 will use 10%
                        dropout on the downsampling part, 30% on the upsampling part,
                        and no dropout on the bottleneck.
  --coherency           Use the coherency matrix instead of s
  --dataset DATASET     dataset to be used. Available options:
                        - SF-AIRSAR
                        - SF-RS2
                        - OBER
```
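For example, a hypothetical invocation combining some of these options (the flag values are illustrative):

```
python principal_simulation.py --model cnn --epochs 100 --dataset SF-AIRSAR --balance loss --early_stop
```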
- Once the simulations are done, the program will create a folder inside `log/<date>/run-<time>/` that will contain the following information:
    - `tensorboard`: files to be visualized with TensorBoard.
    - `checkpoints`: saved weights of the model with the lowest validation loss obtained.
    - `prediction.png`: the image predicted by the best model.
    - `model_summary.txt`: information about the simulation that was run.
    - `history_dict.csv`: the dictionary of all losses and metrics per epoch, as returned by `Model.fit()`.
    - `<dataset>_confusion_matrix.csv`: confusion matrices for the different datasets.
    - `evaluate.csv`: loss and all metrics for all datasets and the full image.
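Since `history_dict.csv` is a plain CSV of the Keras training history, it can be inspected with any tool you like; a minimal sketch with pandas (the run folder is illustrative, and the column names assume the usual Keras `loss`/`val_loss` naming):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical path: replace <date>/run-<time> with an actual run folder.
history = pd.read_csv("log/2022-01-01/run-12h30m00/history_dict.csv")
history[["loss", "val_loss"]].plot()  # assumes standard Keras column names
plt.xlabel("epoch")
plt.show()
```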
- Download the San Francisco dataset. The labels and images are well described in this paper. It is important that the folder structure matches the structure of this repository.
- Change `root_path` in the file `San Francisco/sf_data_reader.py` to the path where the dataset was downloaded.
- Download the labels from this repository.
- Download the image from the European Space Agency (ESA) website.
- Change `root_path` in the file `Oberpfaffenhofen/oberpfaffenhofen_dataset.py` to the path where the dataset was downloaded.
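In both reader files the change amounts to pointing one variable at your local copy; a hypothetical example (the path is illustrative, and this assumes `root_path` is a plain assignment in those files):

```python
# In San Francisco/sf_data_reader.py (analogous for Oberpfaffenhofen/oberpfaffenhofen_dataset.py)
root_path = "/home/user/datasets/PolSar/San Francisco"  # path where you downloaded the dataset
```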
For using your own dataset:
- Create a new class that inherits from `PolsarDatasetHandler` (a sketch is shown after this list). At least two methods should be implemented:
    - `get_image`: returns a numpy array of the 3D image (height, width, channels); the channels are usually complex-valued, in the form of a coherency matrix or a Pauli vector representation.
    - `get_sparse_labels`: returns an array with the labels in sparse mode (NOT one-hot encoded).
- Inside `principal_simulation.py`:
    - Import your class.
    - Add your dataset metadata to `DATASET_META`.
    - Add your dataset to `_get_dataset_handler`.
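A minimal sketch of such a handler; the import path, file paths, and dtypes below are assumptions, only the base class and the two method names come from the steps above:

```python
import numpy as np
from dataset_reader import PolsarDatasetHandler  # import path is an assumption


class MyDatasetHandler(PolsarDatasetHandler):
    """Hypothetical handler for a custom PolSAR dataset."""

    def get_image(self) -> np.ndarray:
        # 3D image of shape (height, width, channels); channels are usually
        # complex-valued (coherency matrix or Pauli vector representation).
        return np.load("/path/to/my_image.npy").astype(np.complex64)

    def get_sparse_labels(self) -> np.ndarray:
        # Labels in sparse mode (integer class ids, NOT one-hot encoded).
        return np.load("/path/to/my_labels.npy").astype(np.int32)
```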
Currently, the following models are supported:
- FCNN from Cao et al.
- CNN from Zhang et al., later used (to some extent) in Sun et al.; Zhao et al.; Qin et al.
- MLP from Hänsch et al., present in all these papers: 1; 2; 3
- 3D-CNN from Tan et al.
To create your own model it suffices to:
- Create your own `Tensorflow` model (using `cvnn` if needed) and write a function or class that returns it (already compiled); see the sketch below.
- Add it to `_get_model` inside `principal_simulation.py`.
- Add your model name to `MODEL_META` to be able to call the script with your new model parameter.
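A minimal sketch of such a function; the layer sizes, input shape, and number of classes are illustrative, while the layer names follow the `cvnn` library's documented Keras-style API:

```python
import tensorflow as tf
import cvnn.layers as complex_layers


def get_my_model(input_shape=(128, 128, 3), num_classes=4):
    """Toy complex-valued CNN, returned already compiled."""
    model = tf.keras.models.Sequential()
    model.add(complex_layers.ComplexInput(input_shape=input_shape))
    model.add(complex_layers.ComplexConv2D(16, (3, 3), activation='cart_relu'))
    model.add(complex_layers.ComplexFlatten())
    # The last activation must map to real values: the loss cannot be complex.
    model.add(complex_layers.ComplexDense(num_classes,
                                          activation='convert_to_real_with_abs'))
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    return model
```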