MICCAI - MLMI 2023: "RoFormer for Position Aware Multiple Instance Learning in Whole Slide Image Classification"
Run
source setup.sh
to install miniconda, torch==2.0 and other packages.
Data should be organized by class:
└- data
   |- Class 1
   |  |- Image1.svs
   |  |- Image2.svs
   |  └- ...
   |
   └- Class 2
      |- Image1.svs
      |- Image2.svs
      └- ...
Pre-processing can be run with
python scripts/preprocessing_pipeline.py
python scripts/new_create_splits.py
scripts/preprocessing_pipeline.py runs the CLAM preprocessing to tile the slides and extract ResNet50 features. The data folder and some parameters can be set in conf/preprocessing.yaml.
scripts/new_create_splits.py runs stratified train/test splitting. Parameters can be set in conf/create_splits.yaml.
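Since configuration is handled through Hydra (see the dependency list below), the YAML files can also be inspected programmatically. A minimal sketch, assuming the conf/ files are plain OmegaConf-compatible YAML; the key names are not documented here, so none of them should be taken as given:

```python
from omegaconf import OmegaConf

# Load the preprocessing configuration to see which parameters are exposed.
# The keys printed here depend on conf/preprocessing.yaml; none are guaranteed.
cfg = OmegaConf.load("conf/preprocessing.yaml")
print(OmegaConf.to_yaml(cfg))

# Hydra also accepts command-line overrides, e.g. (key name hypothetical):
#   python scripts/preprocessing_pipeline.py data_dir=/path/to/data
```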
The code relies on:
- PyTorch Lightning - for boilerplate deep learning code/metrics
- Hydra - for configuration file management
- xFormers - for memory-efficient attention (a minimal usage sketch follows this list)
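For reference, this is roughly what a call to xFormers' memory-efficient attention looks like on a bag of patch features. The shapes, dtype and CUDA device below are illustrative only and are not taken from this repository:

```python
import torch
from xformers.ops import memory_efficient_attention

# Toy bag of 4096 patch embeddings, 8 heads of dimension 64, in fp16 on GPU.
q = torch.randn(1, 4096, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 4096, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 4096, 8, 64, device="cuda", dtype=torch.float16)

# Computes softmax(q @ k^T / sqrt(64)) @ v without materializing the full
# attention matrix, which matters for slides with tens of thousands of patches.
out = memory_efficient_attention(q, k, v)  # shape (1, 4096, 8, 64)
```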
- Model parameters can be set in conf/model_dict.yaml
- Training hyperparameters can be set in conf/training.yaml
- Modeling code is found in romil/models (a generic rotary-embedding sketch follows this list)
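The model builds on RoFormer-style rotary position embeddings applied to patch tokens. The function below is a generic, standalone sketch of 1-D rotary embeddings to convey the idea; it is not the implementation in romil/models, which works on whole-slide patch coordinates:

```python
import torch

def apply_rope(x: torch.Tensor, positions: torch.Tensor) -> torch.Tensor:
    """Apply 1-D rotary position embeddings to query/key features.

    x:         (num_tokens, dim) features, dim must be even.
    positions: (num_tokens,) token positions.
    Generic RoFormer-style sketch; not the code in romil/models.
    """
    dim = x.shape[-1]
    inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    angles = positions.float()[:, None] * inv_freq[None, :]   # (num_tokens, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]                        # paired channels
    rotated = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
    return rotated.flatten(-2)                                 # back to (num_tokens, dim)
```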
python scripts/train.py
will trigger a training run on the K folds, leveraging pytorch-lightning for boilerplate code and mlflow for experiment tracking (easily configurable in conf/training.yaml under training_args.trainer.logger).
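The logger behind training_args.trainer.logger is a standard pytorch-lightning logger object. A hedged sketch using the built-in MLflow logger; the experiment name and tracking URI are placeholders, and the actual values come from conf/training.yaml:

```python
import pytorch_lightning as pl
from pytorch_lightning.loggers import MLFlowLogger

# Placeholder values; the real logger is configured through
# conf/training.yaml under training_args.trainer.logger.
logger = MLFlowLogger(experiment_name="roformer_mil", tracking_uri="file:./mlruns")
trainer = pl.Trainer(max_epochs=1, logger=logger)
```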
If you find our work useful in your research, please consider citing our paper:
Pochet, E., Maroun, R., Trullo, R. RoFormer for Position Aware Multiple Instance Learning in Whole Slide Image Classification. Machine Learning in Medical Imaging. MLMI 2023.
@InProceedings{pochetroformer23,
  author="Pochet, Etienne and Maroun, Rami and Trullo, Roger",
  title="RoFormer for Position Aware Multiple Instance Learning in Whole Slide Image Classification",
  booktitle="Machine Learning in Medical Imaging",
  year="2024",
}