
Bachelor Thesis

Federated Learning for Multi-Institutional Medical Image Segmentation.

Deep Learning has been widely used for medical image segmentation, and a large number of papers have documented its success in this field. The performance of Deep Learning models strongly depends on the amount and diversity of the training data. In medical imaging, acquiring large and diverse datasets is a significant challenge: unlike natural photographs, labeling medical images requires expert knowledge. Ideally, collaboration between institutions could address this challenge, but sharing medical data with a centralized location faces legal, privacy, technical, and data-ownership obstacles. This is a significant barrier to scientific collaboration across transnational medical research institutions.

Traditionally, Artificial Intelligence techniques require centralized data collection and processing, which may be infeasible in realistic healthcare scenarios due to the challenges above. In recent years, Federated Learning has emerged as a distributed, collaborative AI paradigm that enables multiple clients (e.g., medical institutions) to jointly train Deep Learning models without sharing raw data. Although Federated Learning was initially designed for mobile edge devices, it has attracted increasing attention in the healthcare domain because of its privacy-preserving treatment of patient information.

In Federated Learning, each client trains its own model on local data, and only the model updates are sent to a central server. The server aggregates the individual updates into a global model and then sends the new shared parameters back to the clients for further training. In this way, the training data remains private to each client and is never shared during the learning process; only model updates are exchanged, keeping patient data private while enabling multi-institutional collaboration.
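The train-locally-then-aggregate loop described above corresponds to the Federated Averaging (FedAvg) scheme. Below is a minimal sketch on a toy linear-regression task, assuming plain gradient descent as each client's local trainer and size-weighted averaging at the server; the function names and hyperparameters are illustrative, not taken from this repository:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, lr=0.1, epochs=5):
    """One client's local update: a few steps of gradient descent on MSE."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(global_w, clients):
    """One federated round: every client trains locally on its own data,
    then the server averages the resulting weights, weighted by each
    client's dataset size. Raw data never leaves the client."""
    n_total = sum(len(y) for _, y in clients)
    new_w = np.zeros_like(global_w)
    for X, y in clients:
        local_w = local_train(global_w.copy(), X, y)
        new_w += (len(y) / n_total) * local_w
    return new_w

# Toy setup: three "institutions" holding samples of the same linear task.
true_w = np.array([2.0, -1.0])
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(50):
    w = fedavg_round(w, clients)
# After enough rounds, the global model recovers the shared solution.
```

In a real deployment each client would train a segmentation network rather than a linear model, but the communication pattern (weights out, aggregated weights back) is the same.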


Federated Learning Architecture


Models

Dataset

BraTS 2020

Institutions Involved

  • Cancer Imaging Program, NCI, National Institutes of Health (NIH), USA
  • Center for Biomedical Image Computing and Analytics (CBICA), SBIA, UPenn, PA, USA
  • University of Alabama at Birmingham, AL, USA
  • University of Bern, Switzerland
  • University of Debrecen, Hungary
  • MD Anderson Cancer Center, TX, USA
  • Washington University School of Medicine in St. Louis, MO, USA
  • Heidelberg University, Germany
  • Tata Memorial Centre, Mumbai, India

The sub-regions of tumor considered for evaluation are:

  1. The "enhancing tumor" (ET)
  2. The "tumor core" (TC)
  3. The "whole tumor" (WT)

The provided segmentation labels have values of 1 for NCR & NET (necrotic and non-enhancing tumor core), 2 for ED (peritumoral edema), 4 for ET (enhancing tumor), and 0 for everything else.
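The three evaluation regions are unions of these raw labels. The composition below follows the usual BraTS convention (WT = labels {1, 2, 4}, TC = {1, 4}, ET = {4}); it is the standard mapping but is stated here as an assumption, since the text above does not spell it out:

```python
import numpy as np

def brats_masks(seg):
    """Derive the three BraTS evaluation regions from a label map.

    Label convention (from the dataset description):
      1 = NCR & NET, 2 = ED, 4 = ET, 0 = background.
    Region composition (assumed standard BraTS mapping):
      WT = {1, 2, 4}, TC = {1, 4}, ET = {4}.
    """
    return {
        "WT": np.isin(seg, (1, 2, 4)),  # whole tumor
        "TC": np.isin(seg, (1, 4)),     # tumor core
        "ET": seg == 4,                 # enhancing tumor
    }

# Tiny example label map: one voxel of each class.
seg = np.array([[0, 1],
                [2, 4]])
masks = brats_masks(seg)
```

Dice scores for WT, TC, and ET are then computed on these binary masks rather than on the raw label values.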
