# Virtual staining of cellular compartments from label-free images

Predicting sub-cellular landmarks such as nuclei and membranes from label-free (e.g. phase) images
can improve imaging throughput and ease experiment design.
However, training a model directly for segmentation requires laborious manual annotation.
We use fluorescent markers as a proxy for supervision with human-annotated labels,
turning this instance segmentation problem into a paired image-to-image translation (I2I) problem.
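To make the paired-supervision idea concrete, here is a minimal, dependency-free sketch (not the VisCy API; the `mse` helper is hypothetical): each label-free input image is paired with its fluorescence image, and a model is trained to minimize a pixel-wise loss between its prediction and that fluorescence target.

```python
# Illustrative only: paired I2I supervision compares a predicted image
# against the co-registered fluorescence image, pixel by pixel.

def mse(pred, target):
    """Mean squared error between two equally sized images (nested lists)."""
    flat_p = [v for row in pred for v in row]
    flat_t = [v for row in target for v in row]
    return sum((p - t) ** 2 for p, t in zip(flat_p, flat_t)) / len(flat_p)

# A 2x2 prediction from a phase image vs. the paired fluorescence target.
pred = [[0.0, 1.0], [1.0, 0.0]]
target = [[0.0, 1.0], [0.0, 1.0]]
loss = mse(pred, target)  # 0.5: half the pixels disagree by 1.0
```

Because the fluorescence channel is acquired alongside the label-free channel, no manual annotation is needed to produce these training pairs.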

VisCy features an end-to-end pipeline to design, train, and evaluate I2I models in a declarative manner.
It supports 2D, 2.5D (3D encoder, 2D decoder), and 3D U-Nets,
as well as 3D networks with anisotropic filters (UNeXt2).

## Overview of the pipeline

```mermaid
flowchart LR
    subgraph sp[Signal Processing]
        Registration --> Reconstruction --> Resampling
    end
    subgraph viscy["Computer Vision (viscy)"]
        subgraph Preprocessing
            Normalization -.-> fd[Feature Detection]
        end
        subgraph Training
            arch[Model Architecting]
            hyper[Hyperparameter Tuning]
            val[Performance Validation]
            compute[Acceleration]
            arch <--> hyper <--> compute <--> val <--> arch
        end
        subgraph Testing
            regr[Regression Metrics]
            segm[Instance Segmentation Metrics]
            cp[CellPose]
            cp --> segm
        end
        Preprocessing --> Training --> Testing
        Testing --> test{"Performance?"}
        test -- good --> Deployment
        test -- bad --> Training
    end
    subgraph Segmentation
        Cellpose ~~~ aicssegmentation
    end
    input[(Raw Images)] --> sp --> stage{"Training?"}
    stage -.- no -.-> model{{Virtual Staining Model}}
    stage -- yes --> viscy
    viscy --> model
    model --> vs[(Predicted Images)]
    vs --> Segmentation --> output[Biological Analysis]
```
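The testing stage in the chart evaluates both regression and instance segmentation metrics. As a hedged, dependency-free illustration (the `pearson` helper below is hypothetical, not VisCy's implementation), one common regression metric for virtual staining is the Pearson correlation between predicted and measured fluorescence intensities:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two flat intensity lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

pred = [0.1, 0.4, 0.35, 0.8]
meas = [0.2, 0.5, 0.45, 0.9]  # exactly pred shifted by 0.1
print(round(pearson(pred, meas), 6))  # 1.0: perfect linear agreement
```

Correlation-style metrics are insensitive to affine intensity offsets, which is why the pipeline also checks instance segmentation metrics on masks derived from the predictions (e.g. via CellPose).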

## Model architectures

Reported in the [2024 preprint](https://www.biorxiv.org/content/10.1101/2024.05.31.596901):

Reported in the [2020 paper](https://elifesciences.org/articles/55502v1):

![2.5D U-Net light](https://github.com/mehta-lab/VisCy/blob/main/docs/figures/2_5d_unet_light.svg?raw=true#gh-light-mode-only)
![2.5D U-Net dark](https://github.com/mehta-lab/VisCy/blob/main/docs/figures/2_5d_unet_dark.svg?raw=true#gh-dark-mode-only)
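The shape arithmetic behind the 2.5D design (3D encoder, 2D decoder) can be sketched without any deep-learning library. In this hypothetical illustration (kernel sizes are assumptions, not the published architecture's exact values), unpadded 3D convolutions in the encoder shrink the z dimension until a single slice remains, after which the decoder operates purely in 2D:

```python
# Hedged sketch: how a stack of "valid" 3D convolutions collapses a
# z-stack to a single slice, turning the rest of the network into 2D.

def z_after_valid_conv(z, kz):
    """Depth remaining after one unpadded convolution with z-kernel size kz."""
    return z - kz + 1

def encode_depth(z, z_kernels):
    """Apply successive valid convolutions along z; return the final depth."""
    for kz in z_kernels:
        z = z_after_valid_conv(z, kz)
    return z

# A 5-slice input collapsed to one slice by two convolutions of z-depth 3.
print(encode_depth(5, [3, 3]))  # 1
```

This is why the 2.5D model can aggregate 3D context while keeping the decoder, and the output, two-dimensional.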