DO NOT MERGE: Minimal example for representation #178
Closed
Changes from all commits (19 commits)

- `cea0fb3` delete unrelated files (ziw-liu)
- `3bc4761` move example configs (ziw-liu)
- `8896d38` edit license (ziw-liu)
- `374b659` remove docs (ziw-liu)
- `56c5be0` remove application scripts (ziw-liu)
- `0ebe292` edit example CTC configs (ziw-liu)
- `e87a7d0` edit readme for reproduction (ziw-liu)
- `4b8f47f` remove files containing paths (ziw-liu)
- `810d7a0` placeholder authors (ziw-liu)
- `d795307` remove url (ziw-liu)
- `f77ded2` move plotting script (ziw-liu)
- `e835e5b` remove plotting script that needs annotations (ziw-liu)
- `4b036e1` hardcode version (ziw-liu)
- `45f3122` rename example configs (ziw-liu)
- `8551c8e` fix config path (ziw-liu)
- `b2ca7bd` remove dynamic version (ziw-liu)
- `7b24ea1` install in editable mode (ziw-liu)
- `b24ff57` bump torch (ziw-liu)
- `4fe0b92` add env test detail (Soorya19Pradeep)
@@ -1,137 +1,82 @@

# VisCy

VisCy (abbreviation of `vision` and `cyto`) is a deep learning pipeline for training and deploying computer vision models for image-based phenotyping at single-cell resolution.

This repository provides a pipeline for the following:

- Image translation
  - Robust virtual staining of landmark organelles
- Image classification
  - Supervised learning of cell state (e.g. state of infection)
- Image representation learning
  - Self-supervised learning of the cell state and organelle phenotypes

> **Note:**
> VisCy has been extensively tested for the image translation task. The code for other tasks is under active development. Frequent breaking changes are expected in the main branch as we unify the codebase for the above tasks. If you are looking for a well-tested version for virtual staining, please use release `0.2.1` from PyPI.
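
For example, pinning that release with pip (standard version-pinning syntax; `viscy` is the PyPI package name used later in this README):

```sh
# install the last well-tested virtual staining release from PyPI
pip install "viscy==0.2.1"
```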

## Virtual staining

### Demos

- [Virtual staining exercise](https://github.com/mehta-lab/VisCy/blob/46beba4ecc8c4f312fda0b04d5229631a41b6cb5/examples/virtual_staining/dlmbl_exercise/solution.ipynb):
  Notebook illustrating how to use VisCy to train, predict and evaluate the VSCyto2D model. This notebook was developed for the [DL@MBL2024](https://github.com/dlmbl/DL-MBL-2024) course and uses the UNeXt2 architecture.

- [Image translation demo](https://github.com/mehta-lab/VisCy/blob/92215bc1387316f3af49c83c321b9d134d871116/examples/virtual_staining/img2img_translation/solution.ipynb): Fluorescence images can be predicted from label-free images. Can we predict label-free images from fluorescence? Find out using this notebook.

- [Training Virtual Staining Models via CLI](https://github.com/mehta-lab/VisCy/wiki/virtual-staining-instructions):
  Instructions for how to train and run inference on VisCy's virtual staining models (*VSCyto3D*, *VSCyto2D* and *VSNeuromast*). A minimal sketch of the CLI pattern is shown below.
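
A minimal sketch of that CLI pattern, assuming the LightningCLI-style subcommands that appear later in this README; the config and checkpoint names below are placeholders, and the linked wiki page documents the real ones:

```sh
# hypothetical config and checkpoint names, for illustration only
viscy fit -c fit_vscyto2d.yml
viscy predict -c predict_vscyto2d.yml --ckpt_path checkpoints/vscyto2d.ckpt
```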

### Gallery

Below are some examples of virtually stained images (click to play videos).
See the full gallery [here](https://github.com/mehta-lab/VisCy/wiki/Gallery).

| VSCyto3D | VSNeuromast | VSCyto2D |
|:---:|:---:|:---:|
| [![HEK293T](https://github.com/mehta-lab/VisCy/blob/dde3e27482e58a30f7c202e56d89378031180c75/docs/figures/svideo_1.png?raw=true)](https://github.com/mehta-lab/VisCy/assets/67518483/d53a81eb-eb37-44f3-b522-8bd7bddc7755) | [![Neuromast](https://github.com/mehta-lab/VisCy/blob/dde3e27482e58a30f7c202e56d89378031180c75/docs/figures/svideo_3.png?raw=true)](https://github.com/mehta-lab/VisCy/assets/67518483/4cef8333-895c-486c-b260-167debb7fd64) | [![A549](https://github.com/mehta-lab/VisCy/blob/dde3e27482e58a30f7c202e56d89378031180c75/docs/figures/svideo_5.png?raw=true)](https://github.com/mehta-lab/VisCy/assets/67518483/287737dd-6b74-4ce3-8ee5-25fbf8be0018) |

### Reference

The virtual staining models and training protocols are reported in our recent [preprint on robust virtual staining](https://www.biorxiv.org/content/10.1101/2024.05.31.596901).

This package evolved from the [TensorFlow version of virtual staining pipeline](https://github.com/mehta-lab/microDL), which we reported in [this paper in 2020](https://elifesciences.org/articles/55502).

<details>
<summary>Liu, Hirata-Miyasaki et al., 2024</summary>

<pre><code>
@article{Liu2024.05.31.596901,
  author = {Liu, Ziwen and Hirata-Miyasaki, Eduardo and Pradeep, Soorya and Rahm, Johanna and Foley, Christian and Chandler, Talon and Ivanov, Ivan and Woosley, Hunter and Lao, Tiger and Balasubramanian, Akilandeswari and Liu, Chad and Leonetti, Manu and Arias, Carolina and Jacobo, Adrian and Mehta, Shalin B.},
  title = {Robust virtual staining of landmark organelles},
  elocation-id = {2024.05.31.596901},
  year = {2024},
  doi = {10.1101/2024.05.31.596901},
  publisher = {Cold Spring Harbor Laboratory},
  URL = {https://www.biorxiv.org/content/early/2024/06/03/2024.05.31.596901},
  eprint = {https://www.biorxiv.org/content/early/2024/06/03/2024.05.31.596901.full.pdf},
  journal = {bioRxiv}
}
</code></pre>
</details>

<details>
<summary>Guo, Yeh, Folkesson et al., 2020</summary>

<pre><code>
@article{10.7554/eLife.55502,
  article_type = {journal},
  title = {Revealing architectural order with quantitative label-free imaging and deep learning},
  author = {Guo, Syuan-Ming and Yeh, Li-Hao and Folkesson, Jenny and Ivanov, Ivan E and Krishnan, Anitha P and Keefe, Matthew G and Hashemi, Ezzat and Shin, David and Chhun, Bryant B and Cho, Nathan H and Leonetti, Manuel D and Han, May H and Nowakowski, Tomasz J and Mehta, Shalin B},
  editor = {Forstmann, Birte and Malhotra, Vivek and Van Valen, David},
  volume = 9,
  year = 2020,
  month = {jul},
  pub_date = {2020-07-27},
  pages = {e55502},
  citation = {eLife 2020;9:e55502},
  doi = {10.7554/eLife.55502},
  url = {https://doi.org/10.7554/eLife.55502},
  keywords = {label-free imaging, inverse algorithms, deep learning, human tissue, polarization, phase},
  journal = {eLife},
  issn = {2050-084X},
  publisher = {eLife Sciences Publications, Ltd},
}
</code></pre>
</details>

### Library of virtual staining (VS) models

The robust virtual staining models (i.e. *VSCyto2D*, *VSCyto3D*, *VSNeuromast*) and fine-tuned models can be found [here](https://github.com/mehta-lab/VisCy/wiki/Library-of-virtual-staining-(VS)-Models).

### Pipeline

A full illustration of the virtual staining pipeline can be found [here](https://github.com/mehta-lab/VisCy/blob/dde3e27482e58a30f7c202e56d89378031180c75/docs/virtual_staining.md).

# DynaCLR

Implementation for ICLR 2025 submission:
Contrastive learning of cell state dynamics in response to perturbations.

## Installation

> **Note**:
> The full functionality is tested on Linux `x86_64` with NVIDIA Ampere/Hopper GPUs (CUDA 12.4).
> The CTC example configs are also tested on macOS with Apple M1 Pro SoCs (macOS 14.7).
> Apple Silicon users need to make sure that they use
> the `arm64` build of Python to use MPS acceleration.
> Tested to work on Linux on the High Performance cluster, and may not work in other environments.
> The commands below assume a Unix-like system.
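
For the Apple Silicon point above, a quick generic check (not a project-specific command) that the active interpreter is an `arm64` build:

```sh
# should print 'arm64' on Apple Silicon with a native Python build, not 'x86_64'
python -c "import platform; print(platform.machine())"
```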

1. We recommend using a new Conda/virtual environment.

```sh
conda create --name viscy python=3.10
# OR specify a custom path since the dependencies are large:
# conda create --prefix /path/to/conda/envs/viscy python=3.10
conda create --name dynaclr python=3.10
```

2. Install a released version of VisCy from PyPI:
2. Install the package with `pip`:

```sh
pip install viscy
conda activate dynaclr
# in the project root directory
# i.e. where this README is located
pip install -e ".[visual,metrics]"
```

If evaluating virtually stained images for segmentation tasks,
install additional dependencies:
3. Verify installation by accessing the CLI help message:

```sh
pip install "viscy[metrics]"
viscy --help
```
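
As an optional extra check beyond the documented steps, you can confirm which accelerator PyTorch detects, which is what the CUDA/MPS note above refers to (assumes the environment from step 2 is activated):

```sh
# report the installed torch version and the available accelerator backends
python -c "import torch; print(torch.__version__); print('CUDA:', torch.cuda.is_available()); print('MPS:', torch.backends.mps.is_available())"
```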

Visualizing the model architecture requires `visual` dependencies:

```sh
pip install "viscy[visual]"
```

For development installation, see [the contributing guide](./CONTRIBUTING.md).

3. Verify installation by accessing the CLI help message:

```sh
viscy --help
```

## Reproducing DynaCLR

Due to anonymity requirements during the review process,
we cannot host the large custom datasets used in the paper.
Here we demonstrate how to train and evaluate the DynaCLR models with a small public dataset.
We use the training split of a HeLa cell DIC dataset from the
[Cell Tracking Challenge](http://data.celltrackingchallenge.net/training-datasets/DIC-C2DH-HeLa.zip)
and convert it to OME-Zarr for convenience (`../Hela_CTC.zarr`).
This dataset has 2 FOVs, and we use a 1:1 split for training and validation.
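
If the raw data still needs to be fetched, a minimal sketch using the link above (assumes `wget` and `unzip` are available; converting the extracted TIFF series to `Hela_CTC.zarr` is a separate step that is not shown here):

```sh
# download and extract the CTC DIC-C2DH-HeLa training split
wget http://data.celltrackingchallenge.net/training-datasets/DIC-C2DH-HeLa.zip
unzip DIC-C2DH-HeLa.zip -d DIC-C2DH-HeLa
```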

Verify the dataset download by running the following command.
You may need to modify the path in the configuration file to point to the correct dataset location.

```sh
# modify the path in the configuration file
# to use the correct dataset location
iohub info /path/to/Hela_CTC.zarr
```

It should print something like:

```text
=== Summary ===
Format: omezarr v0.4
Axes: T (time); C (channel); Z (space); Y (space); X (space);
Channel names: ['DIC', 'labels']
Row names: ['0']
Column names: ['0']
Wells: 1
```
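
The same path has to be set in the training configuration; a generic way to locate the entries to edit, assuming the dataset path appears with a `.zarr` suffix in the config:

```sh
# list the lines of the example training config that mention a zarr path
grep -n "zarr" ./examples/fit_ctc.yml
```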

Training can be performed with the following command:

```sh
python -m viscy.cli.contrastive_triplet fit -c ./examples/fit_ctc.yml
```

For development installation, see [the contributing guide](https://github.com/mehta-lab/VisCy/blob/main/CONTRIBUTING.md).
The TensorBoard logs and model checkpoints will be saved to the `./lightning_logs` directory.
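
To monitor training, the logs can be viewed with TensorBoard (assuming the `tensorboard` package is installed in this environment; it is not listed explicitly above):

```sh
# serve the Lightning logs locally and open the printed URL in a browser
tensorboard --logdir ./lightning_logs
```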

## Additional Notes

The pipeline is built using the [PyTorch Lightning](https://www.pytorchlightning.ai/index.html) framework.
The [iohub](https://github.com/czbiohub-sf/iohub) library is used
for reading and writing data in [OME-Zarr](https://www.nature.com/articles/s41592-021-01326-w) format.
The full functionality is tested on Linux `x86_64` with NVIDIA Ampere GPUs (CUDA 12.4).
Some features (e.g. mixed precision and distributed training) may not be available with other setups,
see [PyTorch documentation](https://pytorch.org) for details.

Prediction of features on the entire dataset using the trained model can be done with:

```sh
python -m viscy.cli.contrastive_triplet predict -c ./examples/predict_ctc.yml
```
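
The prediction settings, including where results are written, are presumably defined in `./examples/predict_ctc.yml`; to list the options that can be overridden on the command line (assuming the standard LightningCLI `--help` behavior):

```sh
python -m viscy.cli.contrastive_triplet predict --help
```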

Review comment: This is the same as line 9. Also it won't be clear to the reader what 'the High Performance cluster' is.