Comparing changes

base repository: LBaudoux/MLULC (base: main)
head repository: ThomasRieutord/MT-MLULC (compare: main)

Able to merge. These branches can be automatically merged.

Commits on Sep 1, 2023

  1. Packaging: move modules into the mmt directory + create setup + empty __init__. TODO: change imports (ThomasRieutord, 969ba8e)
  2. 765340c
  3. 4ef760e
  4. e60b17d
  5. bd8cc5d
  6. Black formatting (ThomasRieutord, a86f048)

Commits on Sep 4, 2023

  1. b8814d2
  2. fd3905b
  3. cfbd05a
  4. 3e0711e
  5. db9e68a

Commits on Sep 5, 2023

  1. 0f5991a
  2. a7d387b
  3. 354e640
  4. 49cba16
  5. 755c7d9
  6. 2a6e2e8
  7. de335ca

Commits on Sep 7, 2023

  1. 58dcb29
  2. 6e38b2b
  3. Minor changes (ThomasRieutord, 57031ee)

Commits on Sep 13, 2023

  1. 36af1bb
  2. 479a720
  3. Proposal: attention unet with skip connections + sort resizes to reduce memory need in forward pass (Thomas Rieutord, 95b1d8e)

Commits on Sep 15, 2023

  1. a73badf
  2. Small testings (ThomasRieutord, 569ee2c)
  3. 8e324d2
  4. d16d578
  5. ac026db
  6. cd126f3
  7. 0e1da3d
  8. Simply export to ONNX (ThomasRieutord, 5de52a9)
  9. Add model ShortUNet which is the exact copy of AttentionUNetSC with convolution instead of attention (ThomasRieutord, c19f57d)
  10. 8ad35cc
  11. c7a65ec
  12. 5f3815e

Commits on Sep 18, 2023

  1. 3eeff71
  2. 70ec955

Commits on Sep 19, 2023

  1. 90f3f2c

Commits on Sep 20, 2023

  1. Minor changes (ThomasRieutord, 948f726)
  2. 9610d7f
  3. e554846

Commits on Sep 22, 2023

  1. 3c88038
  2. 90164f7
  3. fe35fd5
  4. Minor changes (ThomasRieutord, e729451)
  5. 93bdf83
  6. MAJOR[Fixes observed missing data over forest]: ESAWC transform changed (all labels+1) because the tree labels was at 0, thus seen as missing (ThomasRieutord, 809cb29)
  7. 6478f51
  8. b9e1ce2

Showing 86 changed files with 12,021 additions and 2,614 deletions.
  1. +6 −0 .gitignore
  2. +22 −0 LICENSE.txt
  3. +0 −1 README
  4. +222 −18 README.md
  5. +0 −367 agents/TranslatingUnet_vf.py
  6. +0 −10 agents/__init__.py
  7. +0 −66 agents/base.py
  8. +0 −422 agents/multiLULC.py
  9. BIN assets/illustration_map_translation.png
  10. +46 −0 configs/new_config_template.yaml
  11. +46 −0 configs/test_config.yaml
  12. +9 −7 configs/universal_embedding.json
  13. +85 −0 data-downloads.sh
  14. +0 −10 datasets/__init__.py
  15. +0 −249 datasets/landcover_to_landcover.py
  16. +0 −85 datasets/transforms.py
  17. +165 −0 drafts/get_landcover_on_mera_domain.py
  18. +317 −0 drafts/influence_of_qscore_and_divscore_on_selected_patches.py
  19. +196 −0 drafts/lucas_comparison.py
  20. +32 −0 drafts/plot_colorbar.py
  21. +64 −0 drafts/plot_patches_hdf5_files.py
  22. +79 −0 drafts/plot_patches_hdf5_files2.py
  23. 0 {data → figures}/.keep
  24. +0 −10 graphs/__init__.py
  25. +0 −10 graphs/models/__init__.py
  26. +0 −22 graphs/models/custom_layers/double_conv.py
  27. +0 −35 graphs/models/custom_layers/down_block.py
  28. +0 −53 graphs/models/custom_layers/up_block.py
  29. +0 −304 graphs/models/translating_unet.py
  30. +0 −247 graphs/models/universal_embedding.py
  31. +15 −12 main.py
  32. +37 −0 pyproject.toml
  33. +1 −1 run.sh
  34. +0 −10 run_translating_unet.sh
  35. +209 −0 scripts/download_ecoclimapsg.py
  36. +136 −0 scripts/inference_and_merging.py
  37. +130 −0 scripts/look_at_map.py
  38. +157 −0 scripts/prepare_hdf5_ds1.py
  39. +218 −0 scripts/prepare_hdf5_ds2.py
  40. +111 −0 scripts/qualitative_evaluation.py
  41. +136 −0 scripts/scores_from_inference.py
  42. +98 −0 scripts/show_esgml_ensemble.py
  43. +118 −0 scripts/show_infres_ensemble.py
  44. +204 −0 scripts/stats_on_labels.py
  45. +17 −0 src/mmt/__init__.py
  46. +6 −0 src/mmt/agents/__init__.py
  47. +65 −0 src/mmt/agents/base.py
  48. +614 −0 src/mmt/agents/multiLULC.py
  49. +6 −0 src/mmt/datasets/__init__.py
  50. +1,271 −0 src/mmt/datasets/landcover_to_landcover.py
  51. +1,257 −0 src/mmt/datasets/landcovers.py
  52. +289 −0 src/mmt/datasets/transforms.py
  53. +6 −0 src/mmt/graphs/__init__.py
  54. +6 −0 src/mmt/graphs/models/__init__.py
  55. +1,010 −0 src/mmt/graphs/models/attention_autoencoder.py
  56. 0 { → src/mmt}/graphs/models/custom_layers/__init__.py
  57. +40 −0 src/mmt/graphs/models/custom_layers/double_conv.py
  58. +53 −0 src/mmt/graphs/models/custom_layers/down_block.py
  59. +84 −0 src/mmt/graphs/models/custom_layers/up_block.py
  60. +45 −0 src/mmt/graphs/models/position_encoding.py
  61. +561 −0 src/mmt/graphs/models/universal_embedding.py
  62. +6 −0 src/mmt/inference/__init__.py
  63. +366 −0 src/mmt/inference/io.py
  64. +796 −0 src/mmt/inference/translators.py
  65. +6 −0 src/mmt/utils/__init__.py
  66. +150 −0 src/mmt/utils/aliases.py
  67. +145 −0 src/mmt/utils/config.py
  68. +527 −0 src/mmt/utils/domains.py
  69. +521 −0 src/mmt/utils/misc.py
  70. +324 −0 src/mmt/utils/plt_utils.py
  71. +501 −0 src/mmt/utils/scores.py
  72. +26 −0 tests/export_test.py
  73. +97 −0 tests/import_test.py
  74. +126 −0 tests/is_data_there.py
  75. +57 −0 tests/landcovers_test.py
  76. +1 −0 tests/mmt-weights-v2.0.ckpt.sha256
  77. +54 −0 tests/run_all_tests.sh
  78. +97 −0 tests/transforms_test.py
  79. +32 −0 tests/translators_test.py
  80. +0 −10 utils/__init__.py
  81. +0 −102 utils/config.py
  82. +0 −17 utils/dirs.py
  83. +0 −173 utils/image_type.py
  84. +0 −54 utils/misc.py
  85. +0 −309 utils/plt_utils.py
  86. +0 −10 utils/tensorboardx_utils.py
6 changes: 6 additions & 0 deletions .gitignore
@@ -1,2 +1,8 @@
**/__pycache__/**
.idea/**
data
experiments/*
configs/*
figures/*
*.onnx
*.egg-info/
22 changes: 22 additions & 0 deletions LICENSE.txt
@@ -0,0 +1,22 @@
MIT License

Software and Data Copyright (c) 2023 Thomas Rieutord
Original Pytorch template Copyright (c) 2022 Luc Baudoux https://github.com/LBaudoux/MLULC

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
1 change: 0 additions & 1 deletion README

This file was deleted.

240 changes: 222 additions & 18 deletions README.md
@@ -1,28 +1,232 @@
# Multi Land-use/Land-cover Translation Network
This repo is still subject to change. In particular, we will improve the README installation procedure, the code comments and the documentation of each parameter's influence.
This GitHub repository follows the structure proposed by [Hager Rady](https://github.com/hagerrady13/) and [Mo'men AbdelRazek](https://github.com/moemen95). For more information on the repository structure and how to launch the code, refer to this [github repo](https://github.com/moemen95/Pytorch-Project-Template).
Multiple Map Translation
========================
This repo was forked from [MLULC](https://github.com/LBaudoux/MLULC).
The main purpose of this repository is to provide the source code that was used to produce the ECOCLIMAP-SG-ML land cover map, which is described in [Rieutord et al. (2024)](https://doi.org/10.3390/land13111875).
Land cover maps are translated using auto-encoders, as illustrated in the figure below.
ECOCLIMAP-SG-ML is obtained by map translation from ESA World Cover to ECOCLIMAP-SG+.

## Installation
<img src="assets/illustration_map_translation.png" width="600" />

1. Clone the repository
2. Inside the data folder, download and unzip the dataset from the [zenodo archive](https://doi.org/10.5281/zenodo.5843595). Read the dataset README for more information.
3. Either use Anaconda to install the required Python modules, or replicate the environment used for our experiments with the provided environement.yml file (note that this environment contains some unnecessary modules).
Installation
------------

## launch
### Software

- default parameters used for our experimentation are provided in the config folder
- to launch an experiment use the run.sh file after choosing the desired config file.
The main dependencies of this repository are PyTorch, TorchGeo, NumPy, Pandas, h5py, netCDF4 and Matplotlib.
This code has been used with [Conda](https://docs.conda.io/projects/conda/en/latest/index.html) environments.
1. Create a new environment with `conda create -n mmt python=3.11; conda activate mmt`
2. Clone the package locally and install it with `pip install -e .`

### Data
The program `data-downloads.sh` is provided to help download and unpack the data.
Copy it into the directory that will receive the data (the `data` directory, or another directory linked as `data`) and execute it there:
```
bash data-downloads.sh
```
The data produced for this work is available in this [Zenodo archive](https://doi.org/10.5281/zenodo.11242911).
It contains the TIF files of ECOCLIMAP-SG-ML, the HDF5 files for training and testing and the weights of the neural network.

Note that the [ECOCLIMAP-SG](https://opensource.umr-cnrm.fr/projects/ecoclimap-sg/wiki) land cover is downloaded and extracted with a Python program.
From the package root directory, and after having installed the software, the command is as follows (also given at the end of `data-downloads.sh`):
```
python scripts/download_ecoclimapsg.py --landingdir data/tiff_data/ECOCLIMAP-SG
```

All data is assumed to be located in the `data` folder of the repository.
We recommend using symbolic links to adapt it to your file system.
The `data` folder should be organised as follows:

```
data
├── outputs -> where the inference output will be stored
|
├── saved_models -> where the model checkpoints are stored.
|
├── tiff_data -> where the original land cover maps are stored in TIF format
| ├── ECOCLIMAP-SG
| ├── ECOCLIMAP-SG-ML
| ├── ECOCLIMAP-SG-plus
| └── ESA-WorldCover-2021
|
└── hdf5_data -> where the training data is stored
├── ecosg.hdf5
├── ecosg-train.hdf5
├── ecosg-test.hdf5
├── ecosg-val.hdf5
├── esawc.hdf5
└── ...
```

The full program takes approximately 4 hours to run.
The volume downloaded (for all data) is approximately 56 GB.
Once unzipped, the data occupies approximately 370 GB, distributed as follows:
```
0 ./outputs
12M ./saved_models
266G ./tiff_data
103G ./hdf5_data
369G .
```
The amount of data can be reduced depending on how you intend to use this repository: remove the parts you do not need from `data-downloads.sh`.


### Check the installation

To check the software installation:
```
python tests/import_test.py
```
To check the data installation:
```
python tests/is_data_there.py [--tiff] [--weights] [--hdf5] [--all]
```
Usage
------
### Visualize maps
Once the land cover maps are available in the `data/tiff_data` folder, they can be visualized using the `look_at_map.py` program.
For example, to look at ECOCLIMAP-SG-ML over the EURAT domain with a resolution of 0.1 degrees, the command is:
```
python -i scripts/look_at_map.py --lcname=EcoclimapSGML --domainname=eurat --res=0.1
```
See the header of `look_at_map.py` for more examples.
Alternatively, you can export maps in various formats (netCDF, DIR/HDR) using the `export` method of the land cover classes.
See the documentation of the method for more information.
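
For programmatic use, the same land cover classes can be manipulated directly from the package. The following is a minimal sketch, assuming that `EcoclimapSGML` can be instantiated without arguments (reading the TIF files under `data/tiff_data`) and that `export` takes an output path; the actual signatures are given in the class documentation.

```python
# Minimal sketch; the constructor and export() signatures are assumptions,
# check the docstrings in mmt.datasets.landcovers for the real ones.
from mmt.datasets import landcovers

lc = landcovers.EcoclimapSGML()   # assumed to read TIF files from data/tiff_data
print(lc.crs, lc.res)             # standard torchgeo RasterDataset attributes
lc.export("data/outputs/ecosgml_extract.nc")  # hypothetical call: export to netCDF
```
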
### Make inference
Once the land cover maps and the weights are correctly installed, you can perform inference on any domain for which ESA World Cover is available.
The program to make the inference is `scripts/inference_and_merging.py`.
```
python scripts/inference_and_merging.py
python -i scripts/look_at_map.py --lcname=<path given by the previous program>
```
See the documentation inside to run it.
### Reproduce results
The results presented in the manuscript can be reproduced with the programs `scripts/scores_from_inference.py` and `scripts/qualitative_evaluation.py`.
```
python -i scripts/qualitative_evaluation.py
python -i scripts/scores_from_inference.py
```
See the documentation and variables inside these programs.
### Train the model
To train the model, make sure you have set the correct parameters in a config file (a template is provided in the `configs` directory).
Point to this config file in the `run.sh` program.
Then, just launch `./run.sh`.
More information
-----------
### Repository organisation
The repository has the following directories:
* `assets`: contains images for the documentation
* `configs`: contains the various configuration (YAML files) for the training
* `data`: contains all the data, as described earlier in this README
* `drafts`: contains draft programs using the package
* `experiments`: contains all the files created when training a model (logs, checkpoints, visualizations...)
* `mmt`: contains the source code of the MMT package
* `tests`: contains programs to test the installation
* `scripts`: contains programs ready for use
Specifically, the `mmt` folder defines the organisation of the MMT package into the following modules and sub-modules:
```
mmt
├── agents
│   ├── __init__.py
│   ├── base.py
│   └── multiLULC.py
├── datasets
│   ├── __init__.py
│   ├── landcovers.py
│   ├── landcover_to_landcover.py
│   └── transforms.py
├── graphs
│   ├── __init__.py
│   └── models
│      ├── __init__.py
│      ├── custom_layers
│      │   ├── __init__.py
│      │   ├── double_conv.py
│      │   ├── down_block.py
│      │   └── up_block.py
│      ├── attention_autoencoder.py
│      ├── position_encoding.py
│      └── universal_embedding.py
├── inference
│   ├── __init__.py
│   ├── io.py
│   └── translators.py
└── utils
    ├── __init__.py
    ├── aliases.py
    ├── config.py
    ├── domains.py
    ├── misc.py
    ├── plt_utils.py
    └── scores.py
```
The modules `agents`, `graphs`, `datasets` and `utils` are mostly inherited from the MLULC repository.
The other modules are specific additions for the ECOCLIMAP-SG-ML generation.
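
For orientation, the sub-modules listed above can be imported directly once the package and its dependencies are installed. The sketch below only imports modules that appear in the tree (in the spirit of `tests/import_test.py`) and makes no assumption about their contents.

```python
# Import the sub-modules listed in the tree above.
from mmt.datasets import landcover_to_landcover, landcovers, transforms
from mmt.graphs.models import attention_autoencoder, universal_embedding
from mmt.inference import io, translators
from mmt.utils import aliases, config, domains, misc, plt_utils, scores

print("mmt sub-modules imported successfully")
```
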
### Class diagrams
Two modules contain customised families of classes, for which we provide inheritance diagrams here.
Landcovers are used to access the data from multiple TIF files:
```
mmt.datasets.landcovers
└── torchgeo.datasets.RasterDataset (-> https://torchgeo.readthedocs.io/en/v0.4.1/api/datasets.html#rasterdataset)
    ├── _TorchgeoLandcover
    |   ├── ESAWorldCover
    |   ├── EcoclimapSG
    |   |   ├── SpecialistLabelsECOSGplus
    |   |   ├── InferenceResults
    |   |   └── EcoclimapSGML
    |   └── _CompositeMap
    |       ├── EcoclimapSGplus
    |       └── EcoclimapSGMLcomposite
    |
    ├── _ScoreMap
    |   └── ScoreECOSGplus
    |
    └── _ProbaLandcover
        └── InferenceResultsProba
```
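
Because these classes inherit from `torchgeo.datasets.RasterDataset`, a patch can in principle be read by indexing the dataset with a `torchgeo` `BoundingBox` expressed in the dataset CRS. The sketch below illustrates this under assumptions: the constructor is assumed to take no mandatory arguments, the query coordinates are arbitrary, and the keys of the returned sample depend on the MMT subclass.

```python
# Sketch of querying a land cover as a torchgeo RasterDataset.
# Constructor arguments and sample keys are assumptions; see mmt.datasets.landcovers.
from torchgeo.datasets import BoundingBox
from mmt.datasets import landcovers

esawc = landcovers.ESAWorldCover()   # assumed to index TIF files under data/tiff_data
b = esawc.bounds                     # full extent of the indexed files
query = BoundingBox(b.minx, b.minx + 0.1, b.miny, b.miny + 0.1, b.mint, b.maxt)
sample = esawc[query]                # dict-like sample (label key may be "mask" or similar)
print(list(sample.keys()))
```
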
Translators are used to perform map translation in inference mode:
```
mmt.inference.translators
└── _MapTranslator
    ├── MapMerger
    └── EsawcToEsgp
        ├── EsawcToEsgpMembers
        ├── EsawcToEsgpProba
        └── EsawcToEsgpAsMap -- landcovers.InferenceResults
            └── EsawcToEsgpShowEnsemble
```
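
In practice, a translator is built from trained weights and then applied to a geographical domain; `scripts/inference_and_merging.py` shows the actual usage. The sketch below is purely illustrative: apart from the class and module names above, the argument name, the method name and the domain name are hypothetical.

```python
# Illustrative only: checkpoint_path, predict_from_domain and the domain name
# are hypothetical; see scripts/inference_and_merging.py for the real interface.
from mmt.inference import translators
from mmt.utils import domains

translator = translators.EsawcToEsgp(
    checkpoint_path="data/saved_models/mmt-weights-v2.0.ckpt",  # hypothetical argument
)
esgp_labels = translator.predict_from_domain(domains.dublin_city)  # hypothetical method/domain
```
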
### Tips for custom datasets
If you want to use a custom dataset, you need to:
1. Crop your maps into tiles of reasonable width (large tiles will not fit in memory, small ones provide little spatial context).
2. Either store them in an HDF5 file with the same attributes as those described in the [Zenodo archive](https://doi.org/10.5281/zenodo.5843595) README, or adapt the `datasets/landcover_to_landcover.py` file to read your folder; a tiling sketch is given below.
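
As an illustration of these two steps, the snippet below tiles a label array and writes the tiles to an HDF5 file with `h5py`. It is only a sketch: the dataset layout and the attribute names are placeholders, not the actual schema expected by `landcover_to_landcover.py` (the required attributes are described in the Zenodo archive README).

```python
# Placeholder tiling sketch; the attribute names are NOT the actual schema,
# see the Zenodo archive README for the attributes expected by the dataloader.
import h5py
import numpy as np

labels = np.random.randint(0, 33, size=(6000, 6000), dtype=np.uint8)  # stand-in for your map
tile = 600  # tile width in pixels: too large exhausts memory, too small loses spatial context

with h5py.File("custom-landcover.hdf5", "w") as f:
    k = 0
    for i in range(0, labels.shape[0] - tile + 1, tile):
        for j in range(0, labels.shape[1] - tile + 1, tile):
            d = f.create_dataset(str(k), data=labels[i:i + tile, j:j + tile])
            d.attrs["x_coor"] = j  # placeholder attribute names
            d.attrs["y_coor"] = i
            k += 1
```
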
### Acknowledgement
* The French National Research agency as a part of the MAESTRIA project (grant ANR-18-CE23-0023) for funding.
* The AI4GEO project (http://www.ai4geo.eu/) for material support.
* [Hager Rady](https://github.com/hagerrady13/) and [Mo'men AbdelRazek](https://github.com/moemen95) for the repository template
Thanks to
* [Geoffrey Bessardon](https://github.com/gbessardon) for creating the ECOCLIMAP-SG+ map and providing early releases, used as a reference in this work.
* [Luc Baudoux](https://github.com/LBaudoux) for the initial implementation of the map translation network and the training data.
* [Met Éireann](https://www.met.ie/about-us) for providing the computing facilities for this work.
### License
This project is licensed under the MIT License. See the LICENSE.txt file for details.