The repository is structured in the following directories:

- autonnunet: Main Python package
  - analysis: Plotting and DeepCAVE utilities
  - datasets: MSD dataset handling
  - evaluation: Prediction tools for the MSD test set
  - experiment_planning: Extensions to nnU-Net
  - hnas: Hierarchical NAS integration
  - inference: Inference logic
  - utils: Utility functions
- data: MSD-related data
- output: Locally generated results
- results_zipped: Pre-compressed results
- runscripts: Scripts for experiments
- submodules: External dependencies (nnU-Net, hypersweeper, etc.)
- tests: Unit tests
- paper: Paper plots and tables
🧪 Tested on Rocky Linux 9.5 and CUDA 12.4 (not on Windows)
Auto-nnU-Net provides a pre-built Docker container on Docker Hub.
- Pull the Docker image
docker pull becktepe/autonnunet:latest
- Run the container
docker run -it --rm becktepe/autonnunet:latest
- (Optional) With CUDA
docker run -it --rm --gpus all becktepe/autonnunet:latest
In this container, Auto-nnU-Net and all dependencies are installed into the global Python environment.
The repository is located at /tmp/autonnunet.
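To work with your own data or keep results outside the container, you can mount host directories into the container. A minimal sketch, assuming the default directory layout under /tmp/autonnunet (the host paths are placeholders):

# Mount local data and output directories into the repository inside the container
docker run -it --rm --gpus all \
  -v /path/to/data:/tmp/autonnunet/data \
  -v /path/to/output:/tmp/autonnunet/output \
  becktepe/autonnunet:latest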
⚠️ This method is more brittle due to Python/package version constraints.
- Clone the repository
git clone https://github.com/automl/AutoNNUnet.git autonnunet
cd autonnunet
- Create and activate a conda environment
conda create -n autonnunet python=3.10
conda activate autonnunet
- Install via make
make install
If that fails, install the submodules manually:
cd submodules/batchgenerators && git checkout master && git pull && pip install . && cd ../../
cd submodules/hypersweeper && git checkout dev && git pull && pip install . && cd ../../
cd submodules/MedSAM && git checkout MedSAM2 && git pull && pip install . && cd ../../
cd submodules/neps && git checkout master && git pull && pip install . && cd ../../
cd submodules/nnUNet && git checkout dev && git pull && pip install . && cd ../../
pip install -e ".[dev]"
pip install deepcave
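As a quick smoke test of the installation (a sketch; an import error here indicates a failed install), verify that the package can be imported:

python -c "import autonnunet; print(autonnunet.__file__)"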
Use runscripts/configs/cluster to configure SLURM or local execution: use cluster=gpu for SLURM and cluster=local for local execution.
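For example, the same training command can be dispatched to either backend by appending the corresponding override (a sketch, assuming train.py accepts the cluster option like the other runscripts below):

# Run locally ...
python runscripts/train.py -m "dataset=Dataset001_BrainTumour" "fold=0" "cluster=local"
# ... or submit to SLURM
python runscripts/train.py -m "dataset=Dataset001_BrainTumour" "fold=0" "cluster=gpu"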
Single dataset:
python autonnunet/datasets/msd_dataset.py --dataset_name=Dataset001_BrainTumour
All datasets:
./runscripts/download_msd.sh
⚠️ Preprocessing has to be executed on the same cluster/compute environment that is later used for training in order to obtain the correct nnU-Net configurations, e.g. by appending cluster=gpu.
python runscripts/convert_and_preprocess_nnunet.py -m "dataset=glob(*)" "cluster=gpu"
nnU-Net Conv
python runscripts/train.py -m "dataset=glob(*)" "fold=range(5)"
nnU-Net ResM
python runscripts/train.py -m "dataset=glob(*)" "fold=range(5)" "hp_config.encoder_type=ResidualEncoderM"
nnU-Net ResL
python runscripts/train.py -m "dataset=glob(*)" "fold=range(5)" "hp_config.encoder_type=ResidualEncoderL"
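In these commands, -m starts a Hydra multirun: dataset=glob(*) sweeps over all MSD datasets and fold=range(5) runs folds 0 through 4. To train a single model, pass concrete values instead, e.g.:

python runscripts/train.py -m "dataset=Dataset001_BrainTumour" "fold=0"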
⚠️ Before you can run the MedSAM2 fine-tuning for a dataset, you need to run the training of at least one of the nnU-Net models on it, as these trainings create the dataset splits.
Preprocess (must run locally)
⚠️ The pre-processing for MedSAM2 must be executed locally, i.e. it cannot be submitted to a SLURM cluster, due to compatibility issues between pickle and multiprocessing.
python runscripts/convert_and_preprocess_medsam2.py -m "dataset=glob(*)" "cluster=local"
Download checkpoint
cd submodules/MedSAM && mkdir checkpoints && cd checkpoints
wget https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_tiny.pt
cd ../../../
Fine-tune
python runscripts/finetune_medsam2.py -m "dataset=glob(*)" "fold=range(5)" "cluster=gpu"
python runscripts/determine_hyperband_budgets.py --b_min=10 --b_max=1000 --eta=3
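For intuition, with eta=3 the budget levels roughly form a geometric ladder between b_min and b_max (an illustrative sketch of the standard Hyperband budget geometry, not necessarily the exact values the script prints):

# Illustrative only: budgets of the form b_min * eta^k, capped at b_max.
# With b_min=10, b_max=1000, eta=3 this yields 10, 30, 90, 270, 810, 1000.
python -c "print([min(10 * 3**k, 1000) for k in range(6)])"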
Auto-nnU-Net (HPO + NAS)
python runscripts/train.py --config-name=tune_hpo_nas -m "dataset=Dataset001_BrainTumour"
HPO Ablation
python runscripts/train.py --config-name=tune_hpo -m "dataset=Dataset001_BrainTumour"
HPO + HNAS Ablation
python runscripts/train.py --config-name=tune_hpo_hnas -m "dataset=Dataset001_BrainTumour"
Incumbent configurations are stored in runscripts/configs/incumbent. Our incumbent configurations are already included in this directory. If you want to re-create them after running the experiments, run:
python runscripts/extract_incumbents.py --approach=hpo_nas
python runscripts/extract_incumbents.py --approach=hpo
python runscripts/extract_incumbents.py --approach=hpo_hnas
Using these configs, you can then run the training of the incumbent configurations using the following command:
python runscripts/train.py -m "dataset=Dataset001_BrainTumour" "+incumbent=Dataset001_BrainTumour_hpo_nas" "fold=range(5)"
Here, the incumbent parameter defines the dataset and approach as <dataset_name>_<approach>.
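For example, to train the HPO-only incumbent of the same dataset (assuming you have extracted it with --approach=hpo above):

python runscripts/train.py -m "dataset=Dataset001_BrainTumour" "+incumbent=Dataset001_BrainTumour_hpo" "fold=range(5)"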
ℹ️ Please note that you could also use the incumbent model saved during the optimization. In our experiments, we did not store model checkpoints in the respective run directories to reduce memory consumption.
To train all datasets with the incumbent configuration of another dataset:
./runscripts/train_cross_eval.sh Dataset001_BrainTumour
Run inference:
python runscripts/run_inference.py --approach=hpo_nas
Or via SLURM:
sbatch runscripts/run_inference.sh hpo_nas
The MSD submission will be saved to output/msd_submissions.
Generate paper plots:
python runscripts/plot.py
Results will be saved in output/paper.
If PyTorch crashes with a JSON error, clear the cache:
rm -rf ~/.cache/torch
rm -rf ~/.cache/triton/
rm -rf ~/.nv/ComputeCache
License: BSD-3-Clause
If you use Auto-nnU-Net, please cite:
@inproceedings{becktepe2025autonnunet,
title={Auto-nnU-Net: Towards Automated Medical Image Segmentation},
author={Jannis Becktepe and Leona Hennig and Steffen Oeltze-Jafra and Marius Lindauer},
booktitle={AutoML 2025 ABCD Track},
year={2025},
url={https://openreview.net/forum?id=XSTIEVoEa2}
}
This package was created with Cookiecutter using the audreyr/cookiecutter-pypackage template.