Add pre-commit configuration for linting and formatting (#6)
- Add linting and formatting using pre-commit hooks
- Trigger pre-commit on pushes to the main branch
- Change the line length limit to 80
- Fix existing formatting issues
- Add a README section about development with pre-commit
---------

Co-authored-by: joeloskarsson <[email protected]>
sadamov and joeloskarsson authored Feb 1, 2024
1 parent c14b6b4 commit 474bad9
Showing 23 changed files with 2,318 additions and 1,122 deletions.
28 changes: 28 additions & 0 deletions .github/workflows/pre-commit.yml
@@ -0,0 +1,28 @@
name: Run pre-commit in blueprint

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  blueprint-pre-commit:
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash -l {0}
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.11.7
      - name: Install pre-commit hooks
        run: |
          pip install -r requirements.txt
      - name: Run pre-commit hooks
        run: |
          pre-commit run --all-files
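The workflow installs whatever `requirements.txt` provides and then runs the hooks, so the CI result can be reproduced locally. A minimal sketch, assuming `requirements.txt` supplies `pre-commit` together with the hook tools the configuration expects on the PATH:

``` bash
# Reproduce the CI job from the repository root
pip install -r requirements.txt  # provides pre-commit and the hook tools
pre-commit run --all-files       # the same command the workflow runs
```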
1 change: 0 additions & 1 deletion .gitignore
@@ -72,4 +72,3 @@ tags

# Coc configuration directory
.vim

51 changes: 51 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,51 @@
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: check-ast
      - id: check-case-conflict
      - id: check-docstring-first
      - id: check-symlinks
      - id: check-toml
      - id: check-yaml
      - id: debug-statements
      - id: end-of-file-fixer
      - id: trailing-whitespace
  - repo: local
    hooks:
      - id: codespell
        name: codespell
        description: Check for spelling errors
        language: system
        entry: codespell
  - repo: local
    hooks:
      - id: black
        name: black
        description: Format Python code
        language: system
        entry: black
        types_or: [python, pyi]
  - repo: local
    hooks:
      - id: isort
        name: isort
        description: Group and sort Python imports
        language: system
        entry: isort
        types_or: [python, pyi, cython]
  - repo: local
    hooks:
      - id: flake8
        name: flake8
        description: Check Python code for correctness, consistency and adherence to best practices
        language: system
        entry: flake8 --max-line-length=80 --ignore=E203,F811,I002,W503
        types: [python]
  - repo: local
    hooks:
      - id: pylint
        name: pylint
        entry: pylint -rn -sn
        language: system
        types: [python]
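All of the tool hooks above are declared with `language: system`, so pre-commit calls the `codespell`, `black`, `isort`, `flake8`, and `pylint` executables found on the PATH rather than building isolated environments. Hooks can also be run one at a time by id; a small usage sketch:

``` bash
# Run a single configured hook across the whole repository
pre-commit run flake8 --all-files

# Run all hooks, but only against files staged for commit
pre-commit run
```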
16 changes: 13 additions & 3 deletions README.md
@@ -54,9 +54,9 @@ See the issues https://github.com/joeloskarsson/neural-lam/issues/2, https://git
Below follows instructions on how to use Neural-LAM to train and evaluate models.

## Installation
-Follow the steps below to create the neccesary python environment.
+Follow the steps below to create the necessary python environment.

-1. Install GEOS for your system. For example with `sudo apt-get install libgeos-dev`. This is neccesary for the Cartopy requirement.
+1. Install GEOS for your system. For example with `sudo apt-get install libgeos-dev`. This is necessary for the Cartopy requirement.
2. Use python 3.9.
3. Install version 2.0.1 of PyTorch. Follow instructions on the [PyTorch webpage](https://pytorch.org/get-started/previous-versions/) for how to set this up with GPU support on your system.
4. Install required packages specified in `requirements.txt`.
@@ -160,7 +160,7 @@ python train_model.py --model hi_lam --graph hierarchical ...
```

### Hi-LAM-Parallel
-A version of Hi-LAM where all message passing in the hierarchical mesh (up, down, inter-level) is ran in paralell.
+A version of Hi-LAM where all message passing in the hierarchical mesh (up, down, inter-level) is run in parallel.
Not included in the paper as initial experiments showed worse results than Hi-LAM, but could be interesting to try in more settings.

To train Hi-LAM-Parallel use
@@ -270,6 +270,16 @@ In addition, hierarchical mesh graphs (`L > 1`) feature a few additional files w
These files have the same list format as the ones above, but each list has length `L-1` (as these edges describe connections between levels).
Entries 0 in these lists describe edges between the lowest levels 1 and 2.

# Development and Contributing
Any push or pull request to the main branch will trigger a selection of pre-commit hooks.
These hooks run a series of checks on the code, such as formatting and linting.
If any of these checks fail, the push or PR will be rejected.
To test whether your code passes these checks before pushing, run
``` bash
pre-commit run --all-files
```
from the root directory of the repository.
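The hooks can additionally be installed as a local git hook so that the same checks run automatically on every commit; a minimal sketch, assuming `pre-commit` is already installed via `requirements.txt`:

``` bash
# One-time setup from the repository root: installs into .git/hooks
pre-commit install
```

After this, each `git commit` runs the configured checks on the staged files and blocks the commit if any of them fail.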

# Contact
If you are interested in machine learning models for LAM, have questions about our implementation or ideas for extending it, feel free to get in touch.
You can open a github issue on this page, or (if more suitable) send an email to [[email protected]](mailto:[email protected]).
51 changes: 34 additions & 17 deletions create_grid_features.py
@@ -1,42 +1,59 @@
# Standard library
import os
-from tqdm import tqdm
from argparse import ArgumentParser

# Third-party
import numpy as np
import torch


def main():
-    parser = ArgumentParser(description='Training arguments')
-    parser.add_argument('--dataset', type=str, default="meps_example",
-                        help='Dataset to compute weights for (default: meps_example)')
+    """
+    Pre-compute all static features related to the grid nodes
+    """
+    parser = ArgumentParser(description="Training arguments")
+    parser.add_argument(
+        "--dataset",
+        type=str,
+        default="meps_example",
+        help="Dataset to compute weights for (default: meps_example)",
+    )
    args = parser.parse_args()

    static_dir_path = os.path.join("data", args.dataset, "static")

    # -- Static grid node features --
-    grid_xy = torch.tensor(np.load(os.path.join(static_dir_path, "nwp_xy.npy")
-                                   ))  # (2, N_x, N_y)
-    grid_xy = grid_xy.flatten(1,2).T  # (N_grid, 2)
+    grid_xy = torch.tensor(
+        np.load(os.path.join(static_dir_path, "nwp_xy.npy"))
+    )  # (2, N_x, N_y)
+    grid_xy = grid_xy.flatten(1, 2).T  # (N_grid, 2)
    pos_max = torch.max(torch.abs(grid_xy))
    grid_xy = grid_xy / pos_max  # Divide by maximum coordinate

-    geopotential = torch.tensor(np.load(os.path.join(static_dir_path,
-                                        "surface_geopotential.npy")))  # (N_x, N_y)
-    geopotential = geopotential.flatten(0,1).unsqueeze(1)  # (N_grid,1)
+    geopotential = torch.tensor(
+        np.load(os.path.join(static_dir_path, "surface_geopotential.npy"))
+    )  # (N_x, N_y)
+    geopotential = geopotential.flatten(0, 1).unsqueeze(1)  # (N_grid,1)
    gp_min = torch.min(geopotential)
    gp_max = torch.max(geopotential)
    # Rescale geopotential to [0,1]
-    geopotential = (geopotential - gp_min)/(gp_max - gp_min)  # (N_grid, 1)
+    geopotential = (geopotential - gp_min) / (gp_max - gp_min)  # (N_grid, 1)

-    grid_border_mask = torch.tensor(np.load(os.path.join(static_dir_path,
-                                            "border_mask.npy")), dtype=torch.int64)  # (N_x, N_y)
-    grid_border_mask = grid_border_mask.flatten(0, 1).to(
-        torch.float).unsqueeze(1)  # (N_grid, 1)
+    grid_border_mask = torch.tensor(
+        np.load(os.path.join(static_dir_path, "border_mask.npy")),
+        dtype=torch.int64,
+    )  # (N_x, N_y)
+    grid_border_mask = (
+        grid_border_mask.flatten(0, 1).to(torch.float).unsqueeze(1)
+    )  # (N_grid, 1)

    # Concatenate grid features
-    grid_features = torch.cat((grid_xy, geopotential, grid_border_mask),
-                              dim=1)  # (N_grid, 4)
+    grid_features = torch.cat(
+        (grid_xy, geopotential, grid_border_mask), dim=1
+    )  # (N_grid, 4)

    torch.save(grid_features, os.path.join(static_dir_path, "grid_features.pt"))


if __name__ == "__main__":
    main()
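For reference, a usage sketch of the script above (assuming `data/meps_example/static/` contains the `nwp_xy.npy`, `surface_geopotential.npy`, and `border_mask.npy` inputs the code expects):

``` bash
# Writes data/meps_example/static/grid_features.pt with shape (N_grid, 4)
python create_grid_features.py --dataset meps_example
```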