Add flake8 linter and isort workflow to aucmedi #228

Status: Open · wants to merge 27 commits into base: development

Commits (27):
4519bb9
ci: add import and code linter workflow
DanielHieber Sep 4, 2024
b079e26
style: update isort config and lint imports in automl module
DanielHieber Sep 4, 2024
61a87ec
style: lint import order in data_processing
DanielHieber Sep 4, 2024
60116c3
style: lint import order in ensemble
DanielHieber Sep 4, 2024
c0baa4c
style: lint import order in evaluation
DanielHieber Sep 4, 2024
b58eec3
style: lint import order in neural_networks
DanielHieber Sep 4, 2024
396440d
style: lint import order in sampling
DanielHieber Sep 4, 2024
a0edfad
style: lint import order in utils
DanielHieber Sep 4, 2024
af7b5fa
style: lint imports in xai
DanielHieber Sep 4, 2024
1b4d21f
style: fix import order placeholder error with isort
DanielHieber Sep 4, 2024
04a1f97
style: fix flake8 errors in multiple init files
DanielHieber Sep 4, 2024
d106f98
refactor: remove nested if else auoml/block_pred and lint files
DanielHieber Sep 5, 2024
4aeb707
style: lint automl according to flake8 config
DanielHieber Sep 5, 2024
39dc1e8
style: lint sampling and utils according to flake8 config
DanielHieber Sep 5, 2024
392694b
style: lint xai module according to flake8 config
DanielHieber Sep 8, 2024
ab6ac2f
style: lint neural_network toplevel modules as as defined in flake8 c…
DanielHieber Sep 8, 2024
d4ba906
style: lint neural_network/architectures/volume files
DanielHieber Sep 8, 2024
917bc69
style: finish linting of neural_network module
DanielHieber Sep 8, 2024
e1155fa
style: lint evaluation
DanielHieber Sep 8, 2024
ecf8321
style: lint ensemble
DanielHieber Sep 8, 2024
8c2e5e1
style: finish initial linting
DanielHieber Sep 8, 2024
349fac5
fix: fix import errors in __init__.py files
DanielHieber Sep 8, 2024
9b303de
fix: fix flake8 workflow command and isort errors
DanielHieber Sep 8, 2024
7310fa5
fix: revert refactoring for 3.9 downwards compatibility
DanielHieber Sep 20, 2024
ba9dc75
chore: Merge branch 'development' into chore.add_linter
DanielHieber Oct 8, 2024
3c5fb86
fix: lint merged files according to config
DanielHieber Oct 8, 2024
c5ce8a2
fix: cahnge if back to elif
DanielHieber Dec 22, 2024
8 changes: 8 additions & 0 deletions .github/.isort.cfg
@@ -0,0 +1,8 @@
[settings]
src_paths=aucmedi
line_length=120
known_first_party=aucmedi
import_heading_stdlib=Python Standard Library
import_heading_thirdparty=Third Party Libraries
import_heading_firstparty=Internal Libraries
skip=__init__.py
17 changes: 17 additions & 0 deletions .github/flake8.cfg
@@ -0,0 +1,17 @@
[flake8]
extend-ignore =
E124,
E127,
E128,
# all ignore visual indent errors
E701,
# ignore multi-statement per line (because of 'name: type' error)
E731,
# allow lambdas
E265,
# ignore block comment should start with '# '
E231,
# ignore missing whitespace after ','
W503
# ignore line break before binary operator
max-line-length = 120
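The ignored rules correspond to concrete patterns kept throughout the codebase. A minimal sketch (illustrative only, not taken from the PR) of code that flake8 would normally flag but this config permits:

```python
# E731: assigning a lambda to a name is allowed by this config
square = lambda x: x * x

# E701: multiple statements on one line (after a colon) are allowed
def sign(x):
    if x >= 0: return 1
    return -1

#block comments without a space after '#' also pass, since E265 is ignored
print(square(4))   # 16
print(sign(-3))    # -1
```

Under flake8's defaults, each of these would produce a violation; the `extend-ignore` list above suppresses exactly these codes while keeping the rest of the ruleset active.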
44 changes: 41 additions & 3 deletions .github/workflows/code-quality.yml
@@ -1,5 +1,6 @@
# This workflow will install Python dependencies and run tests for computing code coverage
# Results will be uploaded to codecov
# The results will be uploaded to codecov
# The workflow further executes commitlint, isort, and flake8 linters to ensure a common code style

name: Code Quality

@@ -14,7 +15,7 @@ jobs:
name: Coverage (codecov)
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
- name: Set up Python 3.10
uses: actions/setup-python@v2
with:
@@ -44,9 +45,46 @@ jobs:
name: Commit Convention
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: wagoid/commitlint-github-action@v4
with:
configFile: .github/commitlint.config.js

isort-lint:
runs-on: ubuntu-latest
name: Import Order
steps:
- uses: actions/checkout@v4
- name: Set up Python 3.10
uses: actions/setup-python@v2
with:
python-version: '3.10'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install isort
- name: isort Lint
run: |
python -m isort --settings-path .github --check-only aucmedi

flake8-lint:
runs-on: ubuntu-latest
name: Lint
steps:
- uses: actions/checkout@v4
- name: Set up Python 3.10
uses: actions/setup-python@v2
with:
python-version: '3.10'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install flake8
- name: Install package
run: |
python -m pip install .
- name: Run flake8 linter
run: |
python -m flake8 --config .github/flake8.cfg aucmedi
12 changes: 11 additions & 1 deletion aucmedi/__init__.py
@@ -68,7 +68,7 @@
# Run model inference for unknown samples
preds = model.predict(test_gen)
```
"""
""" # noqa E501
#-----------------------------------------------------#
# Library imports #
#-----------------------------------------------------#
@@ -78,3 +78,13 @@
VolumeAugmentation, \
BatchgeneratorsAugmentation
from aucmedi.neural_network.model import NeuralNetwork


__all__ = [
"input_interface",
"DataGenerator",
"ImageAugmentation",
"VolumeAugmentation",
"BatchgeneratorsAugmentation",
"NeuralNetwork"
]
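Defining `__all__` pins down exactly which names `from aucmedi import *` exports. A self-contained sketch of the mechanism using a throwaway module (all names here are hypothetical, chosen only for illustration):

```python
import sys
import types

# Build a dummy module with one public and one unexported attribute.
demo = types.ModuleType("demo_pkg")
demo.DataGeneratorLike = object   # hypothetical stand-in for a public name
demo.helper = object              # intentionally left out of __all__
demo.__all__ = ["DataGeneratorLike"]
sys.modules["demo_pkg"] = demo

# Star import only pulls in the names listed in __all__.
ns = {}
exec("from demo_pkg import *", ns)
print("DataGeneratorLike" in ns)  # True
print("helper" in ns)             # False
```

This is also why the flake8 job no longer reports "imported but unused" (F401) style issues for these re-exports: listing a name in `__all__` marks it as a deliberate part of the package's public API.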
11 changes: 10 additions & 1 deletion aucmedi/automl/__init__.py
@@ -41,7 +41,7 @@
| `evaluation` | [CLI - Evaluation][aucmedi.automl.cli.cli_evaluation] | [Block - Evaluate][aucmedi.automl.block_eval] |

More information can be found in the docs: [Documentation - AutoML](../../automl/overview/)
"""
""" # noqa E501
#-----------------------------------------------------#
# Library imports #
#-----------------------------------------------------#
@@ -52,3 +52,12 @@
# Parser
from aucmedi.automl.parser_yaml import parse_yaml
from aucmedi.automl.parser_cli import parse_cli


__all__ = [
"block_train",
"block_predict",
"block_evaluate",
"parse_yaml",
"parse_cli"
]
37 changes: 24 additions & 13 deletions aucmedi/automl/block_eval.py
@@ -19,15 +19,19 @@
#-----------------------------------------------------#
# Library imports #
#-----------------------------------------------------#
# External libraries
# Python Standard Library
import os
import pandas as pd
import numpy as np
import re
# Internal libraries
from aucmedi import *

# Third Party Libraries
import numpy as np
import pandas as pd

# Internal Libraries
from aucmedi import input_interface
from aucmedi.evaluation import evaluate_performance


#-----------------------------------------------------#
# Building Blocks for Evaluation #
#-----------------------------------------------------#
@@ -44,14 +48,20 @@ def block_evaluate(config):

Attributes:
path_imagedir (str): Path to the directory containing the ground truth images.
path_gt (str): Path to the index/class annotation file if required. (only for 'csv' interface).
path_gt (str): Path to the index/class annotation file if required
(only for 'csv' interface).
path_pred (str): Path to the input file in which predicted csv file is stored.
path_evaldir (str): Path to the directory in which evaluation figures and tables should be stored.
ohe (bool): Boolean option whether annotation data is sparse categorical or one-hot encoded.
path_evaldir (str): Path to the directory in which evaluation figures and tables should be
stored.
ohe (bool): Boolean option whether annotation data is sparse categorical or one-hot
encoded.
"""
# Obtain interface
if config["path_gt"] is None : config["interface"] = "directory"
else : config["interface"] = "csv"
if config["path_gt"] is None:
config["interface"] = "directory"
else:
config["interface"] = "csv"

# Peak into the dataset via the input interface
ds = input_interface(config["interface"],
config["path_imagedir"],
@@ -73,7 +83,6 @@
df_gt_data = pd.DataFrame(data=class_ohe, columns=class_names)
df_gt = pd.concat([df_index, df_gt_data], axis=1, sort=False)


# Verify - maybe there is a file path encoded in the index?
if os.path.sep in df_gt.iloc[0,0]:
samples_split = df_gt["SAMPLE"].str.split(pat=os.path.sep,
@@ -94,8 +103,10 @@
data_gt = df_merged.iloc[:, (class_n+1):].to_numpy()

# Identify task (multi-class vs multi-label)
if np.sum(data_pd) > (class_ohe.shape[0] + 1.5) : multi_label = True
else : multi_label = False
if np.sum(data_pd) > (class_ohe.shape[0] + 1.5):
multi_label = True
else:
multi_label = False

# Evaluate performance via AUCMEDI evaluation submodule
evaluate_performance(data_pd, data_gt,
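The `np.sum(data_pd) > (class_ohe.shape[0] + 1.5)` check that this hunk reformats is a counting heuristic: in one-hot multi-class data every row sums to 1, so a grand total clearly above the number of samples implies overlapping labels. A dependency-free sketch of the same idea (the real code operates on NumPy arrays; `is_multi_label` is a hypothetical name):

```python
def is_multi_label(one_hot_rows) -> bool:
    # One-hot multi-class rows each sum to 1, so the grand total equals the
    # sample count; a total clearly above that indicates multi-label data.
    # The +1.5 margin tolerates small prediction noise.
    n_samples = len(one_hot_rows)
    total = sum(sum(row) for row in one_hot_rows)
    return total > n_samples + 1.5

multi_class = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # total 3, threshold 4.5
multi_label = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]  # total 6, threshold 4.5
print(is_multi_label(multi_class))  # False
print(is_multi_label(multi_label))  # True
```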
66 changes: 39 additions & 27 deletions aucmedi/automl/block_pred.py
Original file line number Diff line number Diff line change
Expand Up @@ -19,17 +19,21 @@
#-----------------------------------------------------#
# Library imports #
#-----------------------------------------------------#
# External libraries
import os
# Python Standard Library
import json
import os

# Third Party Libraries
import pandas as pd
# Internal libraries
from aucmedi import *

# Internal Libraries
from aucmedi import DataGenerator, NeuralNetwork, input_interface
from aucmedi.data_processing.io_loader import image_loader, sitk_loader
from aucmedi.data_processing.subfunctions import *
from aucmedi.ensemble import *
from aucmedi.data_processing.subfunctions import Chromer, Crop, Padding, Standardize
from aucmedi.ensemble import Composite, predict_augmenting
from aucmedi.xai import xai_decoder


#-----------------------------------------------------#
# Building Blocks for Inference #
#-----------------------------------------------------#
@@ -46,10 +50,12 @@ def block_predict(config):

Attributes:
path_imagedir (str): Path to the directory containing the images for prediction.
path_modeldir (str): Path to the model directory in which fitted model weights and metadata are stored.
path_modeldir (str): Path to the model directory in which fitted model weights and metadata are
stored.
path_pred (str): Path to the output file in which predicted csv file should be stored.
xai_method (str or None): Key for XAI method.
xai_directory (str or None): Path to the output directory in which predicted image xai heatmaps should be stored.
xai_directory (str or None): Path to the output directory in which predicted image xai heatmaps should be
stored.
batch_size (int): Number of samples inside a single batch.
workers (int): Number of workers/threads which preprocess batches during runtime.
"""
@@ -102,23 +108,26 @@
"shuffle": False,
"grayscale": False,
}
if not meta_training["three_dim"] : paras_datagen["loader"] = image_loader
else : paras_datagen["loader"] = sitk_loader
if not meta_training["three_dim"]:
paras_datagen["loader"] = image_loader
else:
paras_datagen["loader"] = sitk_loader

# Apply MIC pipelines
if meta_training["analysis"] == "minimal":
# Setup neural network
if not meta_training["three_dim"]:
arch_dim = "2D." + meta_training["architecture"]
else : arch_dim = "3D." + meta_training["architecture"]
else:
arch_dim = "3D." + meta_training["architecture"]
model = NeuralNetwork(architecture=arch_dim, **nn_paras)

# Build DataGenerator
pred_gen = DataGenerator(samples=index_list,
labels=None,
resize=model.meta_input,
standardize_mode=model.meta_standardize,
**paras_datagen)
labels=None,
resize=model.meta_input,
standardize_mode=model.meta_standardize,
**paras_datagen)
# Load model
path_model = os.path.join(config["path_modeldir"], "model.last.keras")
model.load(path_model)
@@ -128,15 +137,16 @@
# Setup neural network
if not meta_training["three_dim"]:
arch_dim = "2D." + meta_training["architecture"]
else : arch_dim = "3D." + meta_training["architecture"]
else:
arch_dim = "3D." + meta_training["architecture"]
model = NeuralNetwork(architecture=arch_dim, **nn_paras)

# Build DataGenerator
pred_gen = DataGenerator(samples=index_list,
labels=None,
resize=model.meta_input,
standardize_mode=model.meta_standardize,
**paras_datagen)
labels=None,
resize=model.meta_input,
standardize_mode=model.meta_standardize,
**paras_datagen)
# Load model
path_model = os.path.join(config["path_modeldir"],
"model.best_loss.keras")
@@ -147,19 +157,21 @@
# Build multi-model list
model_list = []
for arch in meta_training["architecture"]:
if not meta_training["three_dim"] : arch_dim = "2D." + arch
else : arch_dim = "3D." + arch
if not meta_training["three_dim"]:
arch_dim = "2D." + arch
else:
arch_dim = "3D." + arch
model_part = NeuralNetwork(architecture=arch_dim, **nn_paras)
model_list.append(model_part)
el = Composite(model_list, metalearner=meta_training["metalearner"],
k_fold=len(meta_training["architecture"]))
k_fold=len(meta_training["architecture"]))

# Build DataGenerator
pred_gen = DataGenerator(samples=index_list,
labels=None,
resize=None,
standardize_mode=None,
**paras_datagen)
labels=None,
resize=None,
standardize_mode=None,
**paras_datagen)
# Load composite model directory
el.load(config["path_modeldir"])
# Start model inference via ensemble learning
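The `arch_dim = "2D." + ...` / `"3D." + ...` branches that these hunks expand from one-liners all build AUCMEDI architecture keys from the data dimensionality. A small sketch of that key-building pattern (`resolve_architecture` is a hypothetical helper, not part of the PR; the architecture names are examples):

```python
def resolve_architecture(architecture: str, three_dim: bool) -> str:
    # AUCMEDI architecture keys are prefixed with "2D." or "3D." depending
    # on whether the model consumes 2D images or 3D volumes.
    prefix = "3D." if three_dim else "2D."
    return prefix + architecture

print(resolve_architecture("ResNet50", False))    # 2D.ResNet50
print(resolve_architecture("DenseNet121", True))  # 3D.DenseNet121
```

Factoring the repeated branch into a helper like this would also remove the duplication that the flake8-driven reformatting makes visible across `block_predict`.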