PolSAR CVNN


PolSAR classification / segmentation using complex-valued neural networks.

Cite

To cite the code, please use Zenodo:

"J Agustin Barrachina. (2022). NEGU93/polsar_cvnn: Antology of CVNN for PolSAR applications (1.0.0). Zenodo. https://doi.org/10.5281/zenodo.5821229"

@software{j_agustin_barrachina_2022_5821229,
  author       = {J Agustin Barrachina},
  title        = {{NEGU93/polsar\_cvnn: Antology of CVNN for PolSAR 
                   applications}},
  month        = jan,
  year         = 2022,
  publisher    = {Zenodo},
  version      = {1.0.0},
  doi          = {10.5281/zenodo.5821229},
  url          = {https://doi.org/10.5281/zenodo.5821229}
}

Code usage

  1. Install all dependencies, including the cvnn library.
  2. Clone this repository.
  3. Run principal_simulation.py. The optional parameters are as follows:
usage: principal_simulation.py [-h] [--dataset_method DATASET_METHOD]
                               [--tensorflow] [--epochs EPOCHS]
                               [--model MODEL] [--early_stop [EARLY_STOP]]
                               [--balance BALANCE] [--real_mode [REAL_MODE]]
                               [--dropout DROPOUT DROPOUT DROPOUT]
                               [--coherency] [--dataset DATASET]

optional arguments:
  -h, --help            show this help message and exit
  --dataset_method DATASET_METHOD
                        One of:
                        	- random (default): randomly select the train and val set
                        	- separate: split first the image into sections and select the sets from there
                        	- single_separated_image: as separate, but do not apply the sliding window operation 
                        		(no batches, only one image per set). 
                        		Only possible with segmentation models
  --tensorflow          Use tensorflow library
  --epochs EPOCHS       (int) number of epochs to run
  --model MODEL         deep model to be used. Options:
                        	- fcnn
                        	- cnn
                        	- mlp
                        	- 3d-cnn
  --early_stop [EARLY_STOP]
                        Apply early stopping to training
  --balance BALANCE     Deal with unbalanced dataset by:
                        	- loss: weighted loss
                        	- dataset: balance the dataset by randomly removing pixels of predominant classes
                        	- any other string will be considered as not balanced
  --real_mode [REAL_MODE]
                        run real model instead of complex.
                        If [REAL_MODE] is used it should be one of:
                        	- real_imag
                        	- amplitude_phase
                        	- amplitude_only
                        	- real_only
  --dropout DROPOUT DROPOUT DROPOUT
                        dropout rate to be used on the downsampling, bottleneck and upsampling sections (in that order). Example: `python main.py --dropout 0.1 None 0.3` will use 10% dropout on the downsampling part, no dropout on the bottleneck and 30% on the upsampling part.
  --coherency           Use the coherency matrix instead of s
  --dataset DATASET     dataset to be used. Available options:
                        	- SF-AIRSAR
                        	- SF-RS2
                        	- OBER

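For example, a run combining only the flags documented above (the chosen values are illustrative):

python principal_simulation.py --dataset SF-AIRSAR --model cnn --epochs 100 --balance loss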

  4. Once the simulation is done, the program will create a folder log/<date>/run-<time>/ containing the following:
    • tensorboard: Files to be visualized with TensorBoard.
    • checkpoints: Saved weights of the model with the lowest validation loss obtained.
    • prediction.png: The predicted image produced by the best model.
    • model_summary.txt: Information about the simulation that was run.
    • history_dict.csv: The loss and metric values per epoch, as returned by Model.fit().
    • <dataset>_confusion_matrix.csv: Confusion matrices for the different datasets.
    • evaluate.csv: Loss and all metrics for all datasets and the full image.
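As a hypothetical example of inspecting these outputs (assuming pandas is installed and that history_dict.csv uses the standard Keras val_loss key):

import pandas as pd

# Load the per-epoch training history saved by the run.
history = pd.read_csv("log/<date>/run-<time>/history_dict.csv")

# Report the epoch with the lowest validation loss (the checkpointed model).
best_epoch = history["val_loss"].idxmin()
print(f"Best epoch: {best_epoch}, val_loss: {history['val_loss'][best_epoch]}")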

Datasets

San Francisco

  1. Download the San Francisco dataset. The labels and images are well described in this paper. It is important that the folder layout matches the structure used in this repository.
  2. Change root_path in the file San Francisco/sf_data_reader.py to the path where the dataset was downloaded, for example:
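A hypothetical example (the exact assignment inside the file may differ):

# In San Francisco/sf_data_reader.py: point root_path to the downloaded data.
root_path = "/path/to/san_francisco_dataset"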

Oberpfaffenhofen

  1. Download the labels from this repository.
  2. Download the image from the European Space Agency (ESA) website.
  3. Change root_path in the file Oberpfaffenhofen/oberpfaffenhofen_dataset.py to the path where the dataset was downloaded, in the same way as for San Francisco.

Own dataset

For using your own dataset:

  1. Create a new class that inherits from PolsarDatasetHandler. At least two methods must be implemented (see the sketch after this list):
    • get_image: Return a numpy array with the 3D image (height, width, channels); channels are usually complex-valued, in the form of the coherency matrix or the Pauli vector representation.
    • get_sparse_labels: Return an array with the labels in sparse mode (NOT one-hot encoded).
  2. Inside principal_simulation.py
    • Import your class.
    • Add your dataset metadata into DATASET_META.
    • Add your dataset into _get_dataset_handler.
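A minimal sketch of such a class (the import location, file paths and dtypes are illustrative assumptions, not the repository's exact API):

import numpy as np

from dataset_reader import PolsarDatasetHandler  # assumed import location

class MyDatasetHandler(PolsarDatasetHandler):

    def get_image(self) -> np.ndarray:
        # Full 3D image of shape (height, width, channels); channels are
        # usually complex-valued (coherency matrix or Pauli vector elements).
        image = np.load("/path/to/my_image.npy")  # hypothetical path
        return image.astype(np.complex64)

    def get_sparse_labels(self) -> np.ndarray:
        # 2D array (height, width) of integer class labels in sparse mode
        # (NOT one-hot encoded).
        return np.load("/path/to/my_labels.npy").astype(np.int32)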

Models

Currently, the supported models are those accepted by the --model option above: fcnn, cnn, mlp and 3d-cnn.

To create your own model, it suffices to (see the sketch after this list):

  1. Create your own TensorFlow model (using cvnn if needed) and write a function or class that returns it already compiled.
  2. Add it to _get_model inside principal_simulation.py.
  3. Add your model name to MODEL_META to be able to call the script with your new model as parameter.
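A minimal sketch of such a function, using plain tf.keras for brevity (complex-valued layers from the cvnn library could be used instead); the architecture, shapes and loss below are illustrative assumptions, not one of the repository's models:

import tensorflow as tf

def get_my_model(input_shape=(128, 128, 6), num_classes=3):
    # Simple per-pixel (segmentation-style) classifier, returned compiled.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(num_classes, 1, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model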
