This is the official repository for the paper Implicit Neural Representations for Robust Joint Sparse-View CT Reconstruction.
Implicit Neural Representations for Robust Joint Sparse-View CT Reconstruction,
Jiayang Shi, Junyi Zhu, Daniel M. Pelt, K. Joost Batenburg, Matthew B. Blaschko. TMLR, 2024.
We introduce a novel Bayesian framework for joint reconstruction of multiple objects from sparse-view CT scans using Implicit Neural Representations (INRs) to improve reconstruction quality. By capturing shared patterns across multiple objects with latent variables, our method enhances the reconstruction of each object, increases robustness to overfitting, and accelerates the learning process.
(Figure panels: Node 1 | Node 5 | Node 9 | Prior Mean)
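To make the idea above concrete, here is a minimal, self-contained PyTorch sketch of joint reconstruction with a shared prior: several small coordinate MLPs are fit jointly while each one's weights are pulled toward a shared, learned prior mean. Everything in it, including the network size, the quadratic weight prior, and the placeholder data term, is an illustrative assumption, not this repository's implementation or the paper's exact model.

import torch
import torch.nn as nn

class TinyINR(nn.Module):
    # Coordinate MLP: maps (x, y) in [-1, 1]^2 to an attenuation value.
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):
        return self.net(coords)

num_objects, sigma2 = 3, 0.1                      # hypothetical object count and prior variance
inrs = [TinyINR() for _ in range(num_objects)]    # one INR per object

# Shared latent "prior mean" over INR weights, learned jointly with the INRs.
prior_mean = [nn.Parameter(torch.zeros_like(p)) for p in inrs[0].parameters()]

opt = torch.optim.Adam([p for m in inrs for p in m.parameters()] + prior_mean, lr=1e-3)

def data_term(inr):
    # Placeholder data fit: a real version would forward-project the INR's image
    # and compare it against the object's measured sparse-view sinogram.
    coords = torch.rand(128, 2) * 2 - 1
    target = torch.zeros(128, 1)                  # dummy measurements for illustration
    return ((inr(coords) - target) ** 2).mean()

for step in range(100):
    opt.zero_grad()
    loss = 0.0
    for inr in inrs:
        loss = loss + data_term(inr)
        # Gaussian-style prior: pull each object's weights toward the shared mean,
        # which is what couples the per-object reconstructions.
        for p, m in zip(inr.parameters(), prior_mean):
            loss = loss + (p - m).pow(2).sum() / (2 * sigma2)
    loss.backward()
    opt.step()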
Create and activate a Conda environment:
conda env create -f environment.yml
conda activate inr4ct
We provide 10 LungCT slices in imgs.tif (taken from the LungCT dataset) for easy testing. You can also use your own data by loading it in the same format; a minimal loading sketch is shown below.
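One option for preparing your own data is to write it out as a TIFF stack like the provided file. This is a minimal sketch, assuming the tifffile package and that main.py expects a float32 stack of 2D slices; the filename my_scan.tif and the normalization step are hypothetical, so adapt them to how the code actually reads imgs.tif.

import numpy as np
import tifffile

slices = tifffile.imread("my_scan.tif")       # hypothetical input; expected shape (num_slices, H, W)
slices = slices.astype(np.float32)
# Optional rescaling to [0, 1]; whether main.py expects normalized values is an assumption.
slices = (slices - slices.min()) / (slices.max() - slices.min() + 1e-8)
tifffile.imwrite("imgs.tif", slices)          # write in the same layout as the provided imgs.tif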
To run the joint reconstruction code, use:
python main.py
We also provide implementations of several comparison methods for joint reconstruction:
- Meta-Learning (MAML)
- Federated Averaging (FedAvg)
- INR-in-the-Wild (INRWild)
- A single-INR reconstruction baseline (SingleINR)
You can run these methods using:
python meta.py
python fedavg.py
python inr_wild.py
python single_inr.py
If you find our paper useful, please cite:
@article{shi2024implicit,
  title={Implicit Neural Representations for Robust Joint Sparse-View CT Reconstruction},
  author={Shi, Jiayang and Zhu, Junyi and Pelt, Daniel M and Batenburg, K Joost and Blaschko, Matthew B},
  journal={arXiv preprint arXiv:2405.02509},
  year={2024}
}