diff --git a/.codecov.yml b/.codecov.yml
deleted file mode 100644
index 6694b2a5..00000000
--- a/.codecov.yml
+++ /dev/null
@@ -1,14 +0,0 @@
-coverage:
- precision: 2
- round: down
- range: "70...100"
-
- status:
- project: yes
- patch: no
- changes: no
-
-comment:
- layout: "header, reach, diff, flags, files, footer"
- behavior: default
- require_changes: no
diff --git a/CITATION.cff b/CITATION.cff
deleted file mode 100644
index 8e3c763a..00000000
--- a/CITATION.cff
+++ /dev/null
@@ -1,37 +0,0 @@
-# This CITATION.cff file was generated with cffinit.
-# Visit https://bit.ly/cffinit to generate yours today!
-
-cff-version: 1.2.0
-title: VisCy
-message: >-
- If you use this software, please cite it using the
- metadata from this file.
-type: software
-authors:
- - given-names: Ziwen
- family-names: Liu
- email: ziwen.liu@czbiohub.org
- affiliation: Chan Zuckerberg Biohub San Francisco
- orcid: 'https://orcid.org/0000-0001-7482-1299'
- - given-names: Eduardo
- family-names: Hirata-Miyasaki
- affiliation: Chan Zuckerberg Biohub San Francisco
- orcid: 'https://orcid.org/0000-0002-1016-2447'
- - given-names: Christian
- family-names: Foley
- - given-names: Soorya
- family-names: Pradeep
- affiliation: Chan Zuckerberg Biohub San Francisco
- orcid: 'https://orcid.org/0000-0002-0926-1480'
- - given-names: Shalin
- family-names: Mehta
- affiliation: Chan Zuckerberg Biohub San Francisco
- orcid: 'https://orcid.org/0000-0002-2542-3582'
-repository-code: 'https://github.com/mehta-lab/VisCy'
-url: 'https://github.com/mehta-lab/VisCy'
-abstract: computer vision models for single-cell phenotyping
-keywords:
- - machine-learning
- - computer-vision
- - bioimage-analysis
-license: BSD-3-Clause
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
deleted file mode 100644
index 44db5bbc..00000000
--- a/CONTRIBUTING.md
+++ /dev/null
@@ -1,31 +0,0 @@
-# Contributing to viscy
-
-## Development installation
-
-Clone or fork the repository,
-then make an editable installation with all the optional dependencies:
-
-```sh
-# in project root directory (parent folder of pyproject.toml)
-pip install -e ".[dev,visual,metrics]"
-```
-
-## CI requirements
-
-Lint with Ruff:
-
-```sh
-ruff check viscy
-```
-
-Format the code with Black:
-
-```sh
-black viscy
-```
-
-Run tests with `pytest`:
-
-```sh
-pytest -v
-```
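The deleted CONTRIBUTING.md lists the lint, format, and test commands separately. For convenience, here is a minimal pre-push sketch that chains those same checks; the `set -e` wrapper and Black's `--check` flag are illustrative additions, not part of the original file.

```sh
#!/usr/bin/env sh
# Hypothetical pre-push helper chaining the CI checks from the deleted CONTRIBUTING.md.
# Assumes an editable install with dev extras: pip install -e ".[dev,visual,metrics]"
set -e                 # stop at the first failing check
ruff check viscy       # lint
black --check viscy    # verify formatting without rewriting files
pytest -v              # run the test suite
```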
diff --git a/LICENSE b/LICENSE
index 4520d7a9..485e6b04 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,6 +1,6 @@
BSD 3-Clause License
-Copyright (c) 2023, CZ Biohub SF
+Copyright (c) 2023, DynaCLR authors
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
diff --git a/README.md b/README.md
index 11c54191..45cb71c4 100644
--- a/README.md
+++ b/README.md
@@ -1,137 +1,82 @@
-# VisCy
-
-VisCy (abbreviation of `vision` and `cyto`) is a deep learning pipeline for training and deploying computer vision models for image-based phenotyping at single-cell resolution.
-
-This repository provides a pipeline for the following.
-- Image translation
- - Robust virtual staining of landmark organelles
-- Image classification
- - Supervised learning of cell state (e.g. state of infection)
-- Image representation learning
- - Self-supervised learning of the cell state and organelle phenotypes
-
-> **Note:**
-> VisCy has been extensively tested for the image translation task. The code for other tasks is under active development. Frequent breaking changes are expected in the main branch as we unify the codebase for the above tasks. If you are looking for a well-tested version for virtual staining, please use release `0.2.1` from PyPI.
-
-
-## Virtual staining
-
-### Demos
-- [Virtual staining exercise](https://github.com/mehta-lab/VisCy/blob/46beba4ecc8c4f312fda0b04d5229631a41b6cb5/examples/virtual_staining/dlmbl_exercise/solution.ipynb):
-Notebook illustrating how to use VisCy to train, predict and evaluate the VSCyto2D model. This notebook was developed for the [DL@MBL2024](https://github.com/dlmbl/DL-MBL-2024) course and uses UNeXt2 architecture.
-
- [Image translation demo](https://github.com/mehta-lab/VisCy/blob/92215bc1387316f3af49c83c321b9d134d871116/examples/virtual_staining/img2img_translation/solution.ipynb): Fluorescence images can be predicted from label-free images. Can we predict a label-free image from fluorescence? Find out using this notebook.
-
-- [Training Virtual Staining Models via CLI](https://github.com/mehta-lab/VisCy/wiki/virtual-staining-instructions):
-Instructions for how to train and run inference on VisCy's virtual staining models (*VSCyto3D*, *VSCyto2D* and *VSNeuromast*); a hedged CLI sketch follows this list.
-
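The wiki page linked above documents the CLI workflow in full; the sketch below is a rough illustration only. It assumes VisCy exposes a LightningCLI-style `viscy` entry point with `fit` and `predict` subcommands, and the config file names are hypothetical placeholders.

```sh
# Hedged sketch only: assumes a LightningCLI-style `viscy` entry point with
# `fit`/`predict` subcommands; config file names are hypothetical placeholders.
# Defer to the "Training Virtual Staining Models via CLI" wiki page for the
# authoritative commands.
viscy fit --config fit_vscyto2d.yml          # train a virtual staining model
viscy predict --config predict_vscyto2d.yml  # run inference from a trained checkpoint
```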
-### Gallery
-Below are some examples of virtually stained images (click to play videos).
-See the full gallery [here](https://github.com/mehta-lab/VisCy/wiki/Gallery).
-
-| VSCyto3D | VSNeuromast | VSCyto2D |
-|:---:|:---:|:---:|
-| [![HEK293T](https://github.com/mehta-lab/VisCy/blob/dde3e27482e58a30f7c202e56d89378031180c75/docs/figures/svideo_1.png?raw=true)](https://github.com/mehta-lab/VisCy/assets/67518483/d53a81eb-eb37-44f3-b522-8bd7bddc7755) | [![Neuromast](https://github.com/mehta-lab/VisCy/blob/dde3e27482e58a30f7c202e56d89378031180c75/docs/figures/svideo_3.png?raw=true)](https://github.com/mehta-lab/VisCy/assets/67518483/4cef8333-895c-486c-b260-167debb7fd64) | [![A549](https://github.com/mehta-lab/VisCy/blob/dde3e27482e58a30f7c202e56d89378031180c75/docs/figures/svideo_5.png?raw=true)](https://github.com/mehta-lab/VisCy/assets/67518483/287737dd-6b74-4ce3-8ee5-25fbf8be0018) |
-
-### Reference
-
-The virtual staining models and training protocols are reported in our recent [preprint on robust virtual staining](https://www.biorxiv.org/content/10.1101/2024.05.31.596901).
-
-
-This package evolved from the [TensorFlow version of virtual staining pipeline](https://github.com/mehta-lab/microDL), which we reported in [this paper in 2020](https://elifesciences.org/articles/55502).
-
-Liu, Hirata-Miyasaki et al., 2024
-
-
-
- @article {Liu2024.05.31.596901,
- author = {Liu, Ziwen and Hirata-Miyasaki, Eduardo and Pradeep, Soorya and Rahm, Johanna and Foley, Christian and Chandler, Talon and Ivanov, Ivan and Woosley, Hunter and Lao, Tiger and Balasubramanian, Akilandeswari and Liu, Chad and Leonetti, Manu and Arias, Carolina and Jacobo, Adrian and Mehta, Shalin B.},
- title = {Robust virtual staining of landmark organelles},
- elocation-id = {2024.05.31.596901},
- year = {2024},
- doi = {10.1101/2024.05.31.596901},
- publisher = {Cold Spring Harbor Laboratory},
- URL = {https://www.biorxiv.org/content/early/2024/06/03/2024.05.31.596901},
- eprint = {https://www.biorxiv.org/content/early/2024/06/03/2024.05.31.596901.full.pdf},
- journal = {bioRxiv}
- }
-
-Guo, Yeh, Folkesson et al., 2020
-
-
-
- @article {10.7554/eLife.55502,
- article_type = {journal},
- title = {Revealing architectural order with quantitative label-free imaging and deep learning},
- author = {Guo, Syuan-Ming and Yeh, Li-Hao and Folkesson, Jenny and Ivanov, Ivan E and Krishnan, Anitha P and Keefe, Matthew G and Hashemi, Ezzat and Shin, David and Chhun, Bryant B and Cho, Nathan H and Leonetti, Manuel D and Han, May H and Nowakowski, Tomasz J and Mehta, Shalin B},
- editor = {Forstmann, Birte and Malhotra, Vivek and Van Valen, David},
- volume = 9,
- year = 2020,
- month = {jul},
- pub_date = {2020-07-27},
- pages = {e55502},
- citation = {eLife 2020;9:e55502},
- doi = {10.7554/eLife.55502},
- url = {https://doi.org/10.7554/eLife.55502},
- keywords = {label-free imaging, inverse algorithms, deep learning, human tissue, polarization, phase},
- journal = {eLife},
- issn = {2050-084X},
- publisher = {eLife Sciences Publications, Ltd},
- }
-
-    "Add `--host <your-server-name>` to the tensorboard command below. `<your-server-name>` is the address of your compute node that ends in amazonaws.com.\n",
-    "\n",
-    "Open `http://localhost:{port_number_assigned}` in your browser (find `port_number_assigned` in the Ports tab).\n",
-    "\n",
-    "Run `iohub info -v \"path-to-ome-zarr\"` to inspect the dataset.\n",
-    "\n",
-    "Run `iohub info --help` to see the help menu.\n",