[Champion Solution for FLARE24 Task1] A simple baseline based on nnU-Netv2 (a 5.6M-parameter 3D U-Net)


🏆 We Won First Place in the MICCAI FLARE 2024 Challenge Task1!

🔗 Key Links

  • FLARE24 Official Website: Task1

  • Docker Download: Baidu NetDisk or Google Drive

Place all the .nii.gz files you want to segment in the inputs directory, then run the following commands to start inference:

    docker load -i gmai.tar.gz
    docker container run --gpus "device=all" --name gmai --rm \
      -v $PWD/inputs/:/workspace/inputs/ \
      -v $PWD/outputs/:/workspace/outputs/ \
      gmai:latest /bin/bash -c "sh predict.sh"
  • Slides Download: View our technical presentation and findings via Baidu NetDisk or Google Drive

  • Chinese Introduction: Zhihu

🔍 A 5.6M-Parameter U-Net for Efficient Whole-body Tumor Segmentation

🆚 Key Differences Between Our Method and the default nnU-Net

Configuration            Our Method 🚀    nnUNetV2 🗿
Number of Stages ⬇️       4                6
Training Epochs ⏱️        2000             1000
Learning Rate 📊          0.001            0.01
Target Spacing 📐 (mm)    2 × 2 × 3        0.8 × 0.8 × 3

💡 Our method achieves efficient segmentation with fewer stages and a lower resampling resolution.

📂 Key Files in this Repository

You only need to focus on two files:

  1. nnUNetTrainer_varianst.py

    • Custom nnUNet trainer with modified training parameters (2000 epochs, learning rate 1e-3); a minimal sketch is shown after this list
  2. plans.json

    • Network architecture and configuration settings
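
For orientation, here is a minimal sketch of what such a trainer variant typically looks like in nnU-Net v2 (the constructor signature follows nnUNet v2.2's nnUNetTrainer; the authoritative implementation is in nnUNetTrainer_varianst.py):

import torch
from nnunetv2.training.nnUNetTrainer.nnUNetTrainer import nnUNetTrainer

# Sketch only: override the two training parameters our method changes.
class nnUNetTrainer_Epoch2000_Lr1e3(nnUNetTrainer):
    def __init__(self, plans: dict, configuration: str, fold: int,
                 dataset_json: dict, unpack_dataset: bool = True,
                 device: torch.device = torch.device('cuda')):
        super().__init__(plans, configuration, fold, dataset_json,
                         unpack_dataset, device)
        self.num_epochs = 2000   # nnU-Net default: 1000
        self.initial_lr = 1e-3   # nnU-Net default: 1e-2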

How to Reproduce?

🛠️ Environment Setup

First, ensure you have PyTorch 2.0 or later installed with CUDA support. We conducted our experiments with nnUNet v2.2, which may differ slightly from the latest version, nnUNet v2.4. Clone this repository and set up your environment from its root (we pin Python 3.10 below as an example; any Python version supported by nnUNet v2.2 should work):

conda create -n FLARE24_gmai python=3.10
conda activate FLARE24_gmai
pip install -e .

📂 Data Preparation

  • 📁 Dataset: FLARE24 Task1
  • 🔢 Data Usage: 5,000 partially labeled cases (treated as fully labeled, without any special handling)
  • ⚙️ Preprocessing: Default nnUNet procedure

Organize your labeled data in nnUNet_raw in the following structure:

Dataset024_FLARE24_Task1/
├── imagesTr/
│   ├── coronacases_001_0000.nii.gz
│   ├── ...
│   └── (all 5,000 labeled images ending with .nii.gz)
├── labelsTr/
│   ├── coronacases_001.nii.gz
│   └── ...
└── dataset.json

You can find the dataset.json file here.
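
If you want to generate it programmatically, here is a minimal sketch in nnU-Net v2's dataset.json format (the "tumor" label name is our assumption; defer to the linked file for the exact content):

import json

# Sketch only: minimal nnU-Net v2 dataset.json. The label name "tumor" is an
# assumption -- check the dataset.json linked above for the real labels.
dataset = {
    "channel_names": {"0": "CT"},
    "labels": {"background": 0, "tumor": 1},
    "numTraining": 5000,
    "file_ending": ".nii.gz",
}
with open("Dataset024_FLARE24_Task1/dataset.json", "w") as f:
    json.dump(dataset, f, indent=4)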

📝 Data Preprocessing

1. 🧬 Extract Fingerprints and Plan the Experiment:

nnUNetv2_extract_fingerprint -d 24
nnUNetv2_plan_experiment -d 24

2. 🛠️ Modify the Plans:

Edit the plans file in your nnUNet_preprocessed directory, using our plans.json as a reference. We modified the "patch_size" and "spacing" under "3d_fullres" and created a new configuration, "3d_fullres_S4D2W32", as sketched below.
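
A rough sketch of that edit (keys follow nnU-Net v2's plans format; the spacing value comes from the comparison table above, and the new configuration also changes the network topology, presumably 4 stages and a base width of 32 per its name, so treat our plans.json as the ground truth):

import json

# Sketch only: derive the new configuration from 3d_fullres and override the
# target spacing. Our plans.json also changes "patch_size" and the network
# topology; copy those values from there.
plans_path = "nnUNet_preprocessed/Dataset024_FLARE24_Task1/nnUNetPlans.json"
with open(plans_path) as f:
    plans = json.load(f)

cfg = dict(plans["configurations"]["3d_fullres"])
cfg["spacing"] = [2.0, 2.0, 3.0]   # target spacing in mm, per the table above

plans["configurations"]["3d_fullres_S4D2W32"] = cfg
with open(plans_path, "w") as f:
    json.dump(plans, f, indent=4)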

3. ⚙️ Preprocess the Data:

nnUNetv2_preprocess -d 24 -c 3d_fullres -np 4

🚀 Training and Inference

🔧 Training the Model:

Train the network using the following command:

nnUNetv2_train 24 3d_fullres_S4D2W32 all -tr nnUNetTrainer_Epoch2000_Lr1e3

Alternatively, you can use our pre-trained model by copying it into your nnUNet_results folder; you can find our trained model here. The expected layout is sketched below.
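
Under nnU-Net v2's standard results layout, the checkpoint should end up roughly here (folder names follow nnU-Net's trainer__plans__configuration convention; fold_all corresponds to -f all):

nnUNet_results/
└── Dataset024_FLARE24_Task1/
    └── nnUNetTrainer_Epoch2000_Lr1e3__nnUNetPlans__3d_fullres_S4D2W32/
        └── fold_all/
            └── checkpoint_final.pth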

🔍 Inference:

To perform inference, run:

nnUNetv2_predict -i ./inputs -o ./outputs -c 3d_fullres_S4D2W32 -f all -d 24 -tr nnUNetTrainer_Epoch2000_Lr1e3
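
If you prefer to script inference instead, here is a minimal sketch using nnU-Net v2's Python API (assuming v2.2's nnUNetPredictor; paths mirror the command above):

import torch
from nnunetv2.inference.predict_from_raw_data import nnUNetPredictor

# Sketch only: build a predictor, load the trained model folder (see the
# layout above), and segment everything in ./inputs into ./outputs.
predictor = nnUNetPredictor(
    tile_step_size=0.5,
    use_gaussian=True,
    use_mirroring=True,
    device=torch.device("cuda", 0),
)
predictor.initialize_from_trained_model_folder(
    "nnUNet_results/Dataset024_FLARE24_Task1/"
    "nnUNetTrainer_Epoch2000_Lr1e3__nnUNetPlans__3d_fullres_S4D2W32",
    use_folds=("all",),
    checkpoint_name="checkpoint_final.pth",
)
predictor.predict_from_files(
    "./inputs",
    "./outputs",
    save_probabilities=False,
    overwrite=True,
)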

Results

📊 Quantitative Results

Methods   Public Validation                Online Validation      Testing
          DSC (%)          NSD (%)         DSC (%)    NSD (%)     DSC (%)   NSD (%)
Ours      25.34 ± 31.56    24.40 ± 27.80   -          -           -         -

🖼️ Visualization

Two examples with good segmentation results and two examples with bad segmentation results from the validation set.
