Tensorflow implementation #22

Open · wants to merge 11 commits into base: master
4 changes: 3 additions & 1 deletion .gitignore
100644 → 100755
@@ -1 +1,3 @@
-.DS_Store
+*.DS_Store
+*.pyc
+*.p
Mode changed 100644 → 100755 (file contents unchanged):
* .gitmodules
* Cascaded-FCN.pdf
* LICENSE.md
* README.md
* models/cascadedfcn/step1/README.md
* models/cascadedfcn/step1/step1_deploy.prototxt
* models/cascadedfcn/step2/README.md
* models/cascadedfcn/step2/step2_deploy.prototxt
* notebooks/cascaded_unet_inference.ipynb
82 changes: 82 additions & 0 deletions tensorflow-unet/README.md
@@ -0,0 +1,82 @@
# Cascaded-FCN - TensorFlow implementation

This repository contains the source code for a Cascaded-FCN that segments the liver and its lesions from axial CT images.

The network used in this implementation is derived from the paper ([arXiv link](https://arxiv.org/pdf/1704.07239.pdf)) titled:

```
Automatic Liver Lesion Segmentation Using a Deep Convolutional Neural Network Method
```

### The network used (with small alterations)
![alt text](https://raw.githubusercontent.com/IBBM/Cascaded-FCN/tensorflow-implementation/tensorflow-unet/wiki/network.png)

The network takes multiple neighboring axial slices as input; the inputs are exclusively grayscale images. The segmentation is learned against the label map of the middle slice. The benefit of feeding multiple slices is that spatial context along the axial direction is preserved. For further information on the network, please read the paper linked above.
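The following is a minimal sketch of how neighboring slices can be stacked into input channels; the function name and the boundary clamping are illustrative assumptions, not code from this repository:

```python
import numpy as np

def stack_neighboring_slices(volume, index, n_slices=3):
    """Stack n_slices neighboring axial slices as one network input.

    volume: grayscale CT array of shape (depth, height, width).
    The training target would be the label map of slice `index`.
    """
    half = n_slices // 2
    # Clamp at the volume boundaries so edge slices are simply repeated.
    idx = np.clip(np.arange(index - half, index + half + 1), 0, volume.shape[0] - 1)
    return np.stack([volume[i] for i in idx], axis=-1)  # shape (height, width, n_slices)
```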

### Description
This work uses two cascaded UNets:

1. In step 1, a UNet segments the liver from an axial abdominal CT slice. The segmentation output is a binary mask with bright pixels denoting the segmented object. By segmenting all slices of a volume we obtain a 3D segmentation.
2. In step 2, another UNet takes an enlarged liver slice and segments its lesions (a minimal inference sketch follows below).
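A minimal sketch of the cascade at inference time, assuming hypothetical `liver_unet` and `lesion_unet` callables and a hypothetical `crop_to_liver` helper that enlarges the liver region to the lesion network's input size:

```python
def cascade_inference(ct_slice, liver_unet, lesion_unet, crop_to_liver):
    # Step 1: the liver network produces a per-pixel probability map.
    liver_prob = liver_unet(ct_slice)
    liver_mask = liver_prob > 0.5  # binarize into the output mask

    # Step 2: enlarge the liver region and segment lesions inside it.
    liver_roi = crop_to_liver(ct_slice, liver_mask)
    lesion_prob = lesion_unet(liver_roi)
    return liver_mask, lesion_prob
```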

#### Liver Network
* Input: 400x400 CT image (during training additionally a 400x400 label map in which lesion and liver labels are merged; both downsized from 512x512)
* Output: 400x400 label probability map
* Batch size: 4
* Neighboring slices: 1
* Augmentations: rotation, zoom, translation
* Postprocessing before saving to a .nii file (a sketch follows this list):
  1. Only the largest connected component in the 3D volume is kept, which should always be the liver.
  2. Small segmentations (<16 px area) are discarded.
  3. The output probability map is smoothed.
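A minimal sketch of these postprocessing steps, assuming a binary 3D mask and a matching probability volume; the per-slice interpretation of the <16 px rule and the Gaussian smoothing width are assumptions:

```python
import numpy as np
from scipy import ndimage

def postprocess_liver(mask, prob, min_area=16, sigma=1.0):
    """mask: binary (depth, height, width) liver mask; prob: matching probability volume."""
    mask = np.asarray(mask).astype(bool)
    # 1. Keep only the largest connected component in 3D, assumed to be the liver.
    labeled, n = ndimage.label(mask)
    if n > 1:
        sizes = ndimage.sum(mask, labeled, range(1, n + 1))
        mask = labeled == (np.argmax(sizes) + 1)
    # 2. Discard small segmentations (< min_area pixels), checked per 2D slice here.
    for z in range(mask.shape[0]):
        slice_labels, m = ndimage.label(mask[z])
        for i in range(1, m + 1):
            component = slice_labels == i
            if component.sum() < min_area:
                mask[z][component] = False
    # 3. Smooth the output probability map.
    prob = ndimage.gaussian_filter(prob, sigma=sigma)
    return mask, prob
```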

#### Lesion Network
* Input: 256x256 CT image (during training additionally a 256x256 label map; both downsized from 512x512)
* Output: 256x256 label probability map
* Batch size: 8
* Neighboring slices: 3
* Augmentations: rotation, zoom, translation
* Postprocessing before saving to a .nii file (steps 2 and 3 of the liver postprocessing above):
  1. Small segmentations (<16 px area) are discarded.
  2. The output probability map is smoothed.

All of these values are configurable in the main code file, experiment.py.

### Environment
To run this repository, we provide a Docker container that has all dependencies preinstalled:
https://hub.docker.com/r/chrisheinle/lits/

Run the container:
```bash
sudo GPU=0 nvidia-docker run -it --volume /optional_data_mount/:/data/ --volume /code_mount/:/code/ --net=host chrisheinle/lits bash
```


### Step-By-Step
1. Clone this repo.
2. To obtain the training data, register at http://lits-challenge.com and follow the download instructions.
3. Configure the train and test directory paths in experiment.py (TODO: make these configurable from the command line).
4. Uncomment the liver configuration in experiment.py, comment out the lesion configuration, and run:
```bash
python experiment.py --logdir /path/which/exists
```
5. Train the network until convergence (you can monitor the training statistics by running `tensorboard --logdir /path/which/exists`).
6. Identify the best model in the TensorBoard dice-score summary and note its snapshot name.
7. Uncomment the lesion configuration in experiment.py, comment out the liver configuration, and run:
```bash
python experiment.py --logdir /path/which/exists
```
8. Run both prediction scripts:
```bash
python generate_predictions.py --data_directory test_directory --out_postfix '__prediction' --model /path/which/exists/Run_x_liver/snapshots/unet-model-*bestmodelwithoutindexending* --liver 1
```
```bash
python generate_predictions.py --data_directory test_directory --out_postfix '__prediction_les' --model /path/which/exists/Run_x_lesion/snapshots/unet-model-*bestmodelwithoutindexending*
```
9. Take the *__prediction_les* files and move them to a separate directory.
10. Zip them with the `-j` flag so the archive contains no directory paths (e.g. `zip -j submission.zip *.nii`).
11. Upload the archive on the lits-challenge website to obtain the evaluation result.

### Expected results
* Dice score per case with the current configuration: 0.477
* Training steps: 150,000 for each network
Empty file added: tensorflow-unet/__init__.py
109 changes: 109 additions & 0 deletions tensorflow-unet/experiment.py
@@ -0,0 +1,109 @@
import argparse
import os
import sys

import tensorflow as tf

from segmenter import Segmenter

# Command Line Arguments

parser = argparse.ArgumentParser()

parser.add_argument('--model', '-m', help='the model file', default=None)
parser.add_argument('--reset_counter', '-r', action='store_true')
parser.add_argument('--logdir', '-l', help='parent directory where logfiles, checkpoints, graphs are stored', default='/code/litsruns')
parser.add_argument('--runname', '-name', help='name of the run. if empty, number will be chosen', default=None)


args = parser.parse_args()
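# Example invocations (mirroring the README; paths and snapshot names are illustrative):
#   python experiment.py --logdir /path/which/exists
#   python experiment.py --logdir /path/which/exists --model /path/which/exists/Run_0/snapshots/unet_model-XXXX --reset_counter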

# Training Parameters

load_path = args.model
log_dir = args.logdir
run_name = args.runname

if not os.path.exists(log_dir):
    os.makedirs(log_dir)

if run_name is not None:
    data_path = log_dir + '/' + run_name

    if os.path.exists(data_path):
        raise Exception('Error: Data for the run ' + run_name + ' already exists. Please delete it or choose another name.')
else:
    run_name = 'Run_' + str(len([name for name in os.listdir(log_dir) if os.path.isdir(log_dir + "/" + name)]))
    data_path = log_dir + '/' + run_name

print ("Starting run: ", run_name)

snapshot_dir = data_path + '/snapshots/'
summaries_dir = data_path + '/summaries/'
snapshot_path = os.path.join(snapshot_dir, 'unet_model')

if not os.path.exists(snapshot_dir):
    os.makedirs(snapshot_dir)

if not os.path.exists(summaries_dir):
    os.makedirs(summaries_dir)

train_data_dir = "/code/LITS/TRB2/"
test_data_dir = "/code/LITS/TRB1/"

# Liver config
# segmentor_instance = Segmenter(validation_examples=100,
#                                validation_interval=800,
#                                max_steps=1000000000,
#                                batch_size=4,
#                                n_neighboringslices=1,
#                                input_size=400,
#                                output_size=400,
#                                slice_type='axial',
#                                oversample=False,
#                                load_path=load_path,
#                                reset_counter=args.reset_counter,
#                                summaries_dir=summaries_dir,
#                                snapshot_path=snapshot_path,
#                                label_of_interest=1,
#                                label_required=0,
#                                magic_number=26.91,  # 16.4
#                                max_slice_tries_val=0,
#                                max_slice_tries_train=2,
#                                fuse_labels=True,
#                                apply_crop=False)

# Lesion config
segmentor_instance = Segmenter(validation_examples=200,
                               validation_interval=800,
                               max_steps=1000000000,
                               batch_size=8,
                               n_neighboringslices=3,
                               input_size=256,
                               output_size=256,
                               slice_type='axial',
                               oversample=False,
                               load_path=load_path,
                               reset_counter=args.reset_counter,
                               summaries_dir=summaries_dir,
                               snapshot_path=snapshot_path,
                               label_of_interest=2,
                               label_required=1,
                               magic_number=8.5,
                               max_slice_tries_val=0,
                               max_slice_tries_train=0,
                               fuse_labels=False,
                               apply_crop=True)

# Instantiate the preprocessing
segmentor_instance.setup_preprocessing(train_data_dir, test_data_dir)

print ("Preprocessing setup done.")

# Fill the validation set with slices
segmentor_instance.setup_validation()

print ("Validation setup done.")

# Initiate the training
segmentor_instance.go()