Our model is trained on a diverse dataset encompassing various tumor types, sizes, and locations, capturing the inherent heterogeneity of brain tumors encountered in clinical practice. The semantic segmentation architecture enables accurate localization and differentiation of tumor regions from surrounding healthy brain tissues, providing a valuable tool for early detection and characterization of lesions.
The integration of this deep learning-based segmentation approach into clinical workflows holds great promise for advancing the field of neuro-oncology. By automating the tumor detection process, our methodology not only expedites diagnosis but also provides clinicians with a reliable tool for precise delineation and monitoring of brain tumors, contributing to improved patient outcomes and treatment planning.
This project is running live on 🤗 Hugging Face. You can run the app on your machine by pulling the Docker 🐋 image from Docker Hub.
A tutorial notebook is available on Kaggle.
- Hyperparameters
# STAGE_NAME = 'Training'
# MODEL_NAME = 'UNet'
# feature_layers = [64, 128, 256, 512]
EPOCH = 50
BATCH_SIZE = 32
LEARNING_RATE = 1e-4
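These values are consumed by the training stage. As a rough illustration of how they map onto a training loop, here is a minimal sketch assuming a PyTorch setup; the dataset and model below are toy stand-ins, not this project's actual code.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

EPOCH = 50
BATCH_SIZE = 32
LEARNING_RATE = 1e-4

# Toy stand-ins; the real dataset and UNet are built by this project's
# Data Ingestion and Training stages.
images = torch.rand(64, 3, 128, 128)                     # MRI slices
masks = torch.randint(0, 2, (64, 1, 128, 128)).float()   # binary tumour masks
loader = DataLoader(TensorDataset(images, masks), batch_size=BATCH_SIZE, shuffle=True)

model = nn.Conv2d(3, 1, kernel_size=3, padding=1)        # placeholder for the UNet
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(EPOCH):
    for x, y in loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
```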
Model Architecture
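The exact implementation lives in this repository; for orientation, a U-Net with the feature widths listed above looks roughly like the following. This is a minimal sketch assuming a PyTorch implementation with 3-channel input slices and a single-channel mask; layer names and details are illustrative, not the project's actual code.

```python
import torch
from torch import nn

class DoubleConv(nn.Module):
    """Two 3x3 conv + BatchNorm + ReLU blocks, the basic U-Net building block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class UNet(nn.Module):
    def __init__(self, in_ch=3, out_ch=1, features=(64, 128, 256, 512)):
        super().__init__()
        self.downs = nn.ModuleList()
        self.ups = nn.ModuleList()
        self.pool = nn.MaxPool2d(2)
        # Encoder: double conv then downsample at each feature width
        for f in features:
            self.downs.append(DoubleConv(in_ch, f))
            in_ch = f
        self.bottleneck = DoubleConv(features[-1], features[-1] * 2)
        # Decoder: upsample, concatenate the skip connection, double conv
        for f in reversed(features):
            self.ups.append(nn.ConvTranspose2d(f * 2, f, 2, stride=2))
            self.ups.append(DoubleConv(f * 2, f))
        self.head = nn.Conv2d(features[0], out_ch, 1)

    def forward(self, x):
        skips = []
        for down in self.downs:
            x = down(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        for i in range(0, len(self.ups), 2):
            x = self.ups[i](x)                  # upsample
            skip = skips[-(i // 2 + 1)]         # matching encoder feature map
            x = self.ups[i + 1](torch.cat([skip, x], dim=1))
        return self.head(x)

# Quick shape check
print(UNet()(torch.rand(1, 3, 128, 128)).shape)  # torch.Size([1, 1, 128, 128])
```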
SET | METRIC | VALUE |
---|---|---|
Training | Loss | 0.0054 |
Training | Dice Score | 0.8824 |
Validation | Loss | 0.006 |
Validation | Dice Score | 0.898 |
Test | Loss | 0.012 |
Test | Dice Score | 0.87 |
Note: These are the results after 50 epochs. When the model was trained for 100 epochs, the validation Dice score reached 0.91, but the test Dice score remained around ~0.90.
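The Dice score reported above measures the overlap between the predicted mask and the ground-truth mask. A minimal sketch of the standard computation, shown for illustration only (the project's own metric code may differ in detail):

```python
import torch

def dice_score(logits, target, eps=1e-8):
    """Dice = 2*|P ∩ G| / (|P| + |G|) between predicted and ground-truth masks."""
    pred = (torch.sigmoid(logits) > 0.5).float()   # binarise the raw model output
    intersection = (pred * target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)

# A perfect prediction scores 1.0
mask = torch.randint(0, 2, (1, 1, 128, 128)).float()
logits = mask * 20 - 10          # large positive inside the mask, negative outside
print(dice_score(logits, mask))  # tensor(1.)
```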
Plotting Results

Original | Mask | Predicted
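A side-by-side panel like the one above can be reproduced with matplotlib. This is a minimal sketch with dummy arrays standing in for the MRI slice, ground-truth mask, and predicted mask:

```python
import matplotlib.pyplot as plt
import numpy as np

# Dummy data standing in for an MRI slice, its ground-truth mask, and the model's prediction
image = np.random.rand(128, 128)
mask = np.zeros((128, 128))
mask[40:80, 50:90] = 1
predicted = mask.copy()

fig, axes = plt.subplots(1, 3, figsize=(9, 3))
for ax, (title, img) in zip(axes, [("Original", image), ("Mask", mask), ("Predicted", predicted)]):
    ax.imshow(img, cmap="gray")
    ax.set_title(title)
    ax.axis("off")
plt.tight_layout()
plt.show()
```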
- Clone the repository
git clone https://github.com/mishra-18/MRI-Segmentation.git
cd MRI-Segmentation
- Log in to Docker Hub
sudo docker login -u <user-name>
- Pull the Docker image
sudo docker pull mishra45/mris:latest
- Run the Streamlit app
sudo docker run -p 8080:8501 mishra45/mris:latest
Please follow the API configuration instructions in config/configure.py, then run:
python main.py
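The exact fields expected by config/configure.py are defined in the repository itself; check that file for the real names. As a hypothetical illustration only, the Kaggle credentials the download step relies on usually amount to the standard Kaggle API settings:

```python
# Hypothetical illustration only -- the real keys live in config/configure.py
# and may be named differently there. The Kaggle API reads credentials either
# from ~/.kaggle/kaggle.json or from these environment variables:
import os

os.environ["KAGGLE_USERNAME"] = "<your-kaggle-username>"
os.environ["KAGGLE_KEY"] = "<your-kaggle-api-key>"
```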
- The dataset used for training is Brain MRI Segmentation. You don't need to download anything; the Data Ingestion stage will do it for you, provided you have configured your Kaggle username under config/.
- Data Ingestion Stage: Downloads the data into data/, preprocesses it, and prepares the dataloaders.
- Training Stage: Trains the model.
- After training finishes, the model weights are stored under src/model/ for inference. The project is already deployed on a Hugging Face Space, so you can perform inference there, or else run
streamlit run app.py
after training.
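For inference outside the Streamlit app, the saved weights can be loaded back into the model. This is a minimal sketch assuming a PyTorch checkpoint; the checkpoint filename and the UNet class below are placeholders (the file actually written to src/model/ by the training stage and the project's own model class should be used instead):

```python
import torch

# Placeholder checkpoint name and model class -- substitute the file actually
# written to src/model/ by the training stage and the project's UNet class
# (see the architecture sketch above).
model = UNet()
model.load_state_dict(torch.load("src/model/unet.pth", map_location="cpu"))
model.eval()

with torch.no_grad():
    image = torch.rand(1, 3, 128, 128)                  # a preprocessed MRI slice
    mask = (torch.sigmoid(model(image)) > 0.5).float()  # predicted tumour mask
```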