[Paper] [Project page] [Video]
This is the official PyTorch implementation of Morphology-Aware Interactive Keypoint Estimation.
Diagnosis based on medical images, such as X-ray images, often involves manual annotation of anatomical keypoints. However, this process requires significant human effort and can thus become a bottleneck in the diagnostic workflow. To fully automate this procedure, deep-learning-based methods have been widely proposed and have achieved high performance in detecting keypoints in medical images. However, these methods still have clinical limitations: accuracy cannot be guaranteed for all cases, and doctors must double-check every model prediction. In response, we propose a novel deep neural network that, given an X-ray image, automatically detects and refines anatomical keypoints through a user-interactive system in which doctors can fix mispredicted keypoints with fewer clicks than manual revision requires. Using our own collected data and the publicly available AASCE dataset, we demonstrate through extensive quantitative and qualitative results that the proposed method effectively reduces annotation costs.
The code was developed with Python 3.8 on Ubuntu 18.04.
Both training and evaluation were performed on a single GeForce RTX 3090 GPU.
Install the following dependencies:
- Python 3.8
- torch == 1.8.0
- albumentations == 1.1.0
- munch
- tensorboard
- pytz
- tqdm
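
Before moving on, a quick sanity check like the one below (not part of the repository) can confirm that the expected package versions are installed and that a GPU is visible to PyTorch:

```python
# Optional environment sanity check; not part of this repository.
import torch
import albumentations

print("torch:", torch.__version__)                    # expected: 1.8.0
print("albumentations:", albumentations.__version__)  # expected: 1.1.0
print("CUDA available:", torch.cuda.is_available())   # should be True for GPU training
```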
```
.
├── code
│   ├── data
│   ├── pretrained_models
│   ├── data_preprocessing.py
│   ├── train.sh
│   ├── test.sh
│   └── ...
├── save
└── ...
```
- Clone this repository into the `code` folder:
  ```
  git clone https://github.com/seharanul17/interactive_keypoint_estimation code
  ```
- Create the `code/pretrained_models` and `save` folders:
  ```
  mkdir code/pretrained_models
  mkdir save
  ```
- To train our model using the pretrained HRNet backbone, download the model file from the HRNet GitHub repository and place it in the `pretrained_models` folder. The related code line can be found here. A quick way to sanity-check the downloaded weights is sketched after this list.
- To test our pre-trained model, download our model file and config file from here, and place the downloaded folder containing the files into the `save` folder. The related code line can be found here.
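
As a quick check that the downloaded backbone weights are readable, something like the following can be run from the `code` folder. This is only a sketch, not part of the repository; the file name below is a hypothetical example and should be replaced with the HRNet checkpoint you actually downloaded.

```python
# Hypothetical sanity check for the downloaded HRNet backbone weights.
# The file name is an example; use the checkpoint you placed in pretrained_models/.
import torch

state_dict = torch.load("pretrained_models/hrnet_w32_imagenet_pretrained.pth",
                        map_location="cpu")
print(f"Loaded {len(state_dict)} parameter tensors")
```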
We provide the code to conduct experiments on a public dataset, the AASCE challenge dataset.
- Create the `data` folder inside the `code` folder:
  ```
  cd code
  mkdir data
  ```
- Download the data and place it inside the `data` folder.
  - The AASCE challenge dataset can be obtained from SpineWeb.
  - It corresponds to "Dataset 16: 609 spinal anterior-posterior x-ray images" on the webpage.
- Preprocess the downloaded data.
  - The related code line is here.
  - Run the following command:
    ```
    python data_preprocessing.py
    ```
- To run the training code, run the following command:
  ```
  bash train.sh
  ```
- To test the pre-trained model:
  - Locate the pre-trained model in the `../save/` folder.
  - Run the test code:
    ```
    bash test.sh
    ```
- To test your own model:
  - Change the value of the argument `--only_test_version {your_model_name}` in the `test.sh` file.
  - Run the test code:
    ```
    bash test.sh
    ```
When the evaluation ends, the mean radial error (MRE) of the model prediction and of the manual revision is reported.
The `sargmax_mm_MRE` value corresponds to the MRE reported in Fig. 4.
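
For reference, mean radial error is the average Euclidean distance between predicted and ground-truth keypoints. The sketch below illustrates the metric; the repository reports it in millimetres, and its pixel-to-millimetre conversion is not reproduced here.

```python
# Minimal sketch of mean radial error (MRE), assuming keypoints are given as
# (x, y) coordinates. The repository's own implementation may differ in details
# such as the pixel-to-millimetre conversion.
import numpy as np

def mean_radial_error(pred, gt):
    """pred, gt: arrays of shape (num_keypoints, 2)."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    return float(np.linalg.norm(pred - gt, axis=-1).mean())
```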
The following table compares the refinement performance of our proposed interactive model with manual revision. Both start from the same initial predictions of our model, and the number of user modifications increases from zero (initial prediction) to five. Performance is measured as the mean radial error on the AASCE dataset. For more information, please see Fig. 4 in our main manuscript.
- "Ours (model revision)" indicates results automatically revised by the proposed interactive keypoint estimation approach.
- "Ours (manual revision)" indicates results fully manually revised by a user, without the assistance of an interactive model.
Mean radial error by number of user modifications:

| Method | 0 (initial prediction) | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|
| Ours (model revision) | 58.58 | 35.39 | 29.35 | 24.02 | 21.06 | 17.67 |
| Ours (manual revision) | 58.58 | 55.85 | 53.33 | 50.90 | 48.55 | 47.03 |
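
To put the gap in concrete terms, the small script below (not part of the repository) computes the relative MRE reduction at each modification count directly from the table above:

```python
# Relative MRE reduction per number of user modifications, using the values
# from the table above.
model_revision  = [58.58, 35.39, 29.35, 24.02, 21.06, 17.67]
manual_revision = [58.58, 55.85, 53.33, 50.90, 48.55, 47.03]

for clicks, (m, h) in enumerate(zip(model_revision, manual_revision)):
    print(f"{clicks} modifications: "
          f"model revision -{100 * (1 - m / model_revision[0]):.1f}%, "
          f"manual revision -{100 * (1 - h / manual_revision[0]):.1f}%")
```

After five modifications, model revision reduces the initial MRE by roughly 70%, whereas manual revision reduces it by roughly 20%.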
If you find this work or code helpful in your research, please cite:
```
@inproceedings{kim2022morphology,
  title={Morphology-Aware Interactive Keypoint Estimation},
  author={Kim, Jinhee and
          Kim, Taesung and
          Kim, Taewoo and
          Choo, Jaegul and
          Kim, Dong-Wook and
          Ahn, Byungduk and
          Song, In-Seok and
          Kim, Yoon-Ji},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={675--685},
  year={2022},
  organization={Springer}
}
```