Convert to jupyter-book (#66)
* renamed to markdown

* converted RMarkdown to Markdown

* initial commit

* removed bookdown files

* deploy on jupyter branch push

* removed old files

* updated ready for cytomining PR

* removed bibtex ref

Co-authored-by: callum-jpg <[email protected]>
callum-jpg authored Apr 1, 2022
1 parent d9b9f5a commit 06142f2
Showing 32 changed files with 423 additions and 1,449 deletions.
1 change: 0 additions & 1 deletion .Rprofile

This file was deleted.

42 changes: 42 additions & 0 deletions .github/workflows/deploy.yml
@@ -0,0 +1,42 @@
```yaml
name: deploy

# Only run this when the master branch changes
on:
  push:
    branches:
    - master
    # If your git repository has the Jupyter Book within some-subfolder next to
    # unrelated files, you can make this run only if a file within that specific
    # folder has been modified.
    #
    # paths:
    # - some-subfolder/**

# This job installs dependencies, builds the book, and pushes it to `gh-pages`
jobs:
  deploy-book:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2

    # Install dependencies
    - name: Set up Python 3.8
      uses: actions/setup-python@v2
      with:
        python-version: 3.8

    - name: Install dependencies
      run: |
        pip install jupyter-book

    # Build the book
    - name: Build the book
      run: |
        jupyter-book build .

    # Push the book's HTML to github-pages
    - name: GitHub Pages action
      uses: peaceiris/[email protected]
      with:
        github_token: ${{ secrets.GITHUB_TOKEN }}
        publish_dir: ./_build/html
```
17 changes: 0 additions & 17 deletions .travis.yml

This file was deleted.

56 changes: 22 additions & 34 deletions 01-overview.Rmd → 01-overview.md
@@ -1,85 +1,73 @@
# Introduction

This handbook describes the process of running a Cell Painting experiment. While the code here will describe doing so in the context of running [Distributed-CellProfiler](https://github.com/CellProfiler/Distributed-CellProfiler) on AWS on images generated by a PerkinElmer microscope, then collating the data with [cytominer-database](https://github.com/cytomining/cytominer-database) and analyzing it with [pycytominer](https://github.com/cytomining/pycytominer), the basic procedure for running a Cell Painting experiment is the same no matter the microscope or the processing platform.

Briefly, the steps for any and every platform are:

## Collect your software

For the specific use case here, this involves:

- [Distributed-CellProfiler](https://github.com/CellProfiler/Distributed-CellProfiler)
- [pe2loaddata](https://github.com/broadinstitute/pe2loaddata)
- [cytominer-database](https://github.com/cytomining/cytominer-database)
- [pycytominer](https://github.com/cytomining/pycytominer)

along with their dependencies. Almost certainly, you will also need a GUI version of CellProfiler, installed locally to your images, that matches the version you want to run on your cluster; "locally" might mean on a local machine or a VM.

## Collect your pipelines

You will minimally require these pipelines:

- `illum` (illumination correction)
- `analysis` (segmentation and feature extraction)

But you may also want pipelines for:

- Z projection
- QC
- assay development

## Determine how to get your image lists to CellProfiler

CellProfiler needs to understand image sets: for each field-of-view that was captured, how many channels there were, what you would like to name each channel, and which file names correspond to each channel. If you are using an Opera Phenix or Operetta, you can use the `pe2loaddata` program to generate a CSV that contains a list of image sets in an automated fashion, which can be passed to CellProfiler via the LoadData module. Otherwise, you have a couple of different options:

1. You can create a similar CSV using a script that you write yourself that handles the files from your microscope. Minimally, you need a `FileName` and `PathName` column for each channel (e.g. `FileName_OrigDNA`), and a Metadata column for each piece of metadata CellProfiler needs (e.g. `Metadata_Plate`, `Metadata_Well`, and `Metadata_Site`); see the sketch after this list.
2. You can use a local copy of the files CellProfiler will be running, configure the 4 input modules of CellProfiler to create your image sets, then export CSVs using CellProfiler's "Export Image Set Listing" option, and feed those into the pipelines to be run on your cluster.
3. You can alter all of your pipelines to use the 4 input modules of CellProfiler rather than the LoadData module, add and configure the CreateBatchFiles module in each pipeline, and use the resulting batch files in your cluster environment.

These options are ordered from most to least scripting proficiency required, and from least to most CellProfiler proficiency required.
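
For option 1, the shape of such a script is simple: group files into image sets, then write one row per field-of-view. The sketch below assumes a hypothetical filename convention (`PlateA_B02_s1_w1.tif`, encoding plate, well, site, and channel) and five channels; the regex and channel map will need to be adapted to your microscope's naming scheme.

```python
import csv
import re
from pathlib import Path

# A minimal sketch of option 1. Assumes hypothetical filenames like
# "PlateA_B02_s1_w1.tif" (plate_well_site_channel); adjust to your microscope.
IMAGE_DIR = Path("/path/to/images")
CHANNELS = {"w1": "OrigDNA", "w2": "OrigER", "w3": "OrigRNA",
            "w4": "OrigAGP", "w5": "OrigMito"}
PATTERN = re.compile(
    r"(?P<plate>.+)_(?P<well>[A-P]\d{2})_s(?P<site>\d+)_(?P<channel>w\d)\.tif$"
)

# Group files into image sets keyed by (plate, well, site)
image_sets = {}
for path in sorted(IMAGE_DIR.glob("*.tif")):
    match = PATTERN.match(path.name)
    if not match:
        continue
    key = (match["plate"], match["well"], match["site"])
    image_sets.setdefault(key, {})[match["channel"]] = path

with open("load_data.csv", "w", newline="") as f:
    header = (["Metadata_Plate", "Metadata_Well", "Metadata_Site"]
              + [f"FileName_{name}" for name in CHANNELS.values()]
              + [f"PathName_{name}" for name in CHANNELS.values()])
    writer = csv.writer(f)
    writer.writerow(header)
    for (plate, well, site), files in image_sets.items():
        if set(files) != set(CHANNELS):  # skip incomplete image sets
            continue
        writer.writerow([plate, well, site]
                        + [files[ch].name for ch in CHANNELS]
                        + [str(files[ch].parent) for ch in CHANNELS])
```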

## Execute your CellProfiler pipelines

### (Optional) Z projection

If your images were taken with multiple planes, you will need to Z-project them. All subsequent steps should be run on the projected images.
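
Z projection is normally handled by a dedicated CellProfiler pipeline, but if you would rather project outside CellProfiler, a maximum-intensity projection is only a few lines. This sketch assumes a hypothetical multi-plane TIFF per channel per site, with the Z planes on the first axis.

```python
import tifffile

# A sketch of a maximum-intensity Z projection outside CellProfiler.
# Assumes a hypothetical multi-plane TIFF with Z planes on axis 0.
stack = tifffile.imread("PlateA_B02_s1_w1.tif")  # shape: (z, y, x)
projection = stack.max(axis=0)
tifffile.imwrite("PlateA_B02_s1_w1_maxproj.tif", projection)
```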

### (Optional) QC

You may want to run a quality control pipeline to determine your imaging plate quality. You can choose to run this locally or on your cluster compute environment. You will need to evaluate the results of this pipeline somehow, in CellProfiler-Analyst, KNIME, SpotFire, etc. You may run illumination correction and assay development steps in the meantime, but should hold analysis steps until the results are evaluated.

### Illumination correction

You need to run a pipeline that is grouped by plate and creates an illumination correction function. Since it is grouped by plate, you don't need very many CPUs to run this, but it will take 6-24 hours depending on settings and image size. Assay development and analysis require this step to complete.
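
As a sketch of what "grouped by plate" means in practice, here is a hypothetical headless invocation for a single plate's group, using CellProfiler's command-line flags (`-c` headless, `-r` run, `-g` group filter); Distributed-CellProfiler wraps the equivalent invocation for you on AWS. The pipeline and paths are assumptions.

```python
import subprocess

# A sketch of running the illum pipeline headless for one plate's group.
# Assumes an illum.cppipe that uses LoadData with a Metadata_Plate column;
# the pipeline, CSV, plate name, and output path are hypothetical.
subprocess.run([
    "cellprofiler", "-c", "-r",
    "-p", "illum.cppipe",
    "--data-file", "load_data.csv",
    "-g", "Metadata_Plate=PlateA",  # process only this plate's image group
    "-o", "output/illum/PlateA",
], check=True)
```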

### (Optional) Assay Development

If desired, you can run a pipeline that executes on one image per well and carries out your segmentation but not measurement steps, producing one or more images that you can use to evaluate the quality of the segmentation (either individually or by stitching them together first). This is not required but allows you to ensure that your segmentation parameters look reasonable across the variety of phenotypes present in your data.

If being run, the final step should be held until this step can be evaluated.

## Analysis

This pipeline segments the cells and measures the whole images and cells, and creates output CSVs (or can dump to a MySQL host if configured). This is typically run on each image site in parallel and thus can be sped up by using a large number of CPUs.
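
As an illustration of the per-site parallelism, each worker can be handed a disjoint slice of image sets from the LoadData CSV via CellProfiler's `-f`/`-l` flags (first and last image set); Distributed-CellProfiler performs the equivalent bookkeeping across AWS workers. A hypothetical local sketch:

```python
import subprocess

# A local sketch of per-site parallelism: each worker takes a disjoint
# slice of image sets from the LoadData CSV via -f/-l (first/last).
# Pipeline and paths are hypothetical.
def run_slice(first, last):
    subprocess.run([
        "cellprofiler", "-c", "-r",
        "-p", "analysis.cppipe",
        "--data-file", "load_data.csv",
        "-f", str(first), "-l", str(last),
        "-o", f"output/analysis/{first}-{last}",
    ], check=True)

run_slice(1, 100)  # e.g. the first hundred image sets
```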

## Aggregate your data

Since the analysis is run in parallel, unless using a MySQL host you will have a number of sets of CSVs that need to be turned into a single file per plate. This is currently done with the `cytominer-database` program.
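
As a sketch of the collation step, the `ingest` command walks a folder of per-site CSV subfolders and writes a single backend file per plate. The paths and config file name here are hypothetical, and the exact option names should be checked against `cytominer-database ingest --help`.

```python
import subprocess

# A sketch of collating one plate's per-site CSVs into a single SQLite
# backend with cytominer-database; paths and the config file are
# hypothetical, and the config names the CSVs to ingest (Image.csv, etc.).
subprocess.run([
    "cytominer-database", "ingest",
    "analysis/PlateA",                  # folder of per-site CSV subfolders
    "sqlite:///backend/PlateA.sqlite",  # target database
    "-c", "ingest_config.ini",
], check=True)
```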

## Create and manipulate per-well profiles

The final step is to create per-well profiles, annotate them with metadata, and do steps such as plate normalization and feature selection. These are accomplished via a "profiling recipe" using `pycytominer`.
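
As a sketch of what the recipe does under the hood, the core pycytominer calls look roughly like the following; file names are hypothetical, argument names follow pycytominer's public API, and in practice the profiling recipe drives these steps from a configuration file.

```python
import pandas as pd
from pycytominer import aggregate, annotate, normalize, feature_select

# A compressed sketch of the profiling steps; file names are hypothetical.
single_cells = pd.read_csv("PlateA_single_cells.csv.gz")

# Median-collapse single cells into per-well profiles
profiles = aggregate(
    single_cells,
    strata=["Metadata_Plate", "Metadata_Well"],
    operation="median",
)

# Attach plate-map metadata (perturbation identities, doses, ...)
annotated = annotate(
    profiles,
    platemap=pd.read_csv("PlateA_platemap.csv"),
    join_on=["Metadata_well_position", "Metadata_Well"],
)

# Normalize features per plate, then prune uninformative ones
normalized = normalize(annotated, method="mad_robustize")
selected = feature_select(
    normalized,
    operation=["variance_threshold", "correlation_threshold",
               "blocklist", "drop_na_columns"],
)
selected.to_csv("PlateA_normalized_feature_selected.csv.gz", index=False)
```
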
42 changes: 17 additions & 25 deletions 02-config.Rmd → 02-config.md
@@ -1,52 +1,44 @@
# (PART) Configuration {-}

# Configure Environment for Full Profiling Pipeline

This workflow assumes you have already set up an AWS account with an S3 bucket and EFS, and created a VM per the instructions in the link below.

(02-config:aws)=
## Launch an AWS Virtual Machine for making CSVs and running Distributed-CellProfiler

Launch an EC2 node using AMI `cytomining/images/hvm-ssd/cytominer-bionic-trusty-18.04-amd64-server-*`, created using [cytominer-vm](https://github.com/cytomining/cytominer-vm).

You will need to create an AMI for your own infrastructure because the provisioning includes mounting S3 and EFS, which is account specific. We recommend using an `m4.xlarge` instance, with an 8Gb EBS volume.

Note: Proper configuration is essential to mount the S3 bucket. The following configuration provides an example, named `imaging-platform` (modifications will be necessary).

- Launch an ec2 instance on AWS
- AMI: `cytomining/images/hvm-ssd/cytominer-ubuntu-trusty-18.04-amd64-server-1529668435`
- Instance Type: m4.xlarge
- Network: vpc-35149752
- Subnet: Default (imaging platform terraform)
- IAM role: `s3-imaging-platform-role`
- No Tags
- Select Existing Security Group: `SSH_HTTP`
- Review and Launch
- `ssh -i <USER>.pem ubuntu@<Public DNS IPv4>`

After starting the instance, ensure that the S3 bucket is mounted on `~/bucket`. If not, run `sudo mount -a`.

Log in to the EC2 instance.


Enter your AWS credentials:

```sh
aws configure
```

The infrastructure is configured with one S3 bucket. Mount this S3 bucket (if it is not automatically mounted)

```sh
sudo mount -a
```

Check that the bucket was mounted. This path should exist:

```sh
ls ~/bucket/projects
```

@@ -60,7 +52,7 @@

You will want to retain environment variables once defined, and for processes to continue running if your connection is interrupted, so work inside a `tmux` session:

```sh
tmux new -s sessionname
```

You can detach from this session at any time by typing `Ctrl+b`, then `d`.
To reattach to an existing session, type `tmux a -t sessionname`.

You can list existing sessions with `tmux list-sessions` and kill any poorly-behaving session with `tmux kill-session -t sessionname`.
