This pipeline is designed for automated end-to-end quality control and processing of ATAC-seq or DNase-seq data. It can be run on compute clusters with job submission engines or on standalone machines, and it inherently makes use of parallelized/distributed computing. Installation is also easy, as most dependencies are installed automatically. The pipeline can be run end-to-end, i.e. starting from raw FASTQ files all the way to peak calling and signal track generation, or it can be started from intermediate stages (e.g. alignment files). It supports single-end or paired-end ATAC-seq or DNase-seq data, with or without replicates. The pipeline produces formatted HTML reports that include quality control measures specifically designed for ATAC-seq and DNase-seq data, analysis of reproducibility, stringent and relaxed thresholding of peaks, and fold-enrichment and p-value signal tracks. It also supports detailed error reporting and easy resumption of runs. The pipeline has been tested on human, mouse and yeast ATAC-seq data, and on human and mouse DNase-seq data.
The ATAC-seq pipeline specification is also the official pipeline specification of the Encyclopedia of DNA Elements (ENCODE) consortium. The ATAC-seq pipeline protocol definition is here. Some parts of the ATAC-seq pipeline were developed in collaboration with Jason Buenrostro, Alicia Schep and Will Greenleaf at Stanford.
- Portability: Support for many cloud platforms (Google/DNAnexus) and cluster engines (SLURM/SGE/PBS).
- User-friendly HTML report: tabulated quality metrics including alignment/peak statistics and FRiP along with many useful plots (IDR/cross-correlation measures).
- ATAqC: Annotation-based analysis including TSS enrichment and comparison to Roadmap DNase.
- Genomes: Pre-built databases for GRCh38, hg19, mm10 and mm9, with additional support for custom genomes.
- Install Caper. Caper is a Python wrapper for Cromwell. Make sure that you have Python 3 (>3.4.1) installed on your system.
$ pip install caper
- Read through Caper's README carefully.
- Run a pipeline with Caper.
Caper uses the Cromwell workflow execution engine to run the workflow on the platform you specify. We recommend using Caper, but if you want to run Cromwell directly without Caper, you can learn about that here.
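For reference, a direct Cromwell run looks roughly like the sketch below. The Cromwell jar name is a placeholder for whatever release you download; check Cromwell's documentation for the exact options.
$ java -jar cromwell-<VER>.jar run atac.wdl --inputs examples/caper/ENCSR356KRQ_subsampled.json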
You can also run our pipeline on DNAnexus without using Caper or Cromwell. There are two ways to build a workflow on DNAnexus based on our WDL.
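For example, one route is to compile the WDL into a native DNAnexus workflow with the dxWDL compiler. This is only a sketch: the jar name and project ID are placeholders, and flag names may differ between dxWDL versions, so check dxWDL's documentation.
$ java -jar dxWDL-<VER>.jar compile atac.wdl -project [YOUR_DX_PROJECT_ID]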
We no longer recommend Conda for resolving dependencies and plan to phase out Conda support. Instead, we recommend Docker or Singularity. You can install Singularity and use it for our pipeline with Caper by adding `--use-singularity` to the command line arguments. Please see this instruction.
Make sure that you have configured Caper correctly.
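For example, recent Caper versions ship a helper that writes a starter configuration to `~/.caper/default.conf`, which you then edit for your platform. This is a sketch; see Caper's README for the authoritative list of backends.
$ caper init local
$ vi ~/.caper/default.conf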
WARNING: Do not run Caper on HPC login nodes. Your jobs can be killed.
Run it. Due to `--deepcopy`, all files (HTTP URLs) in `examples/caper/ENCSR356KRQ_subsampled.json` will be recursively copied into Caper's temporary folder (`--tmp-dir`).
$ caper run atac.wdl -i examples/caper/ENCSR356KRQ_subsampled.json --deepcopy --use-singularity
If you use Docker, then replace `--use-singularity` with `--use-docker`.
$ caper run atac.wdl -i examples/caper/ENCSR356KRQ_subsampled.json --deepcopy --use-docker
If you use Conda, then remove `--use-singularity` from the command line and activate the pipeline's Conda environment before running the pipeline.
$ conda activate encode-atac-seq-pipeline
$ caper run atac.wdl -i examples/caper/ENCSR356KRQ_subsampled.json --deepcopy
To run it on an HPC (e.g. Stanford Sherlock or SCG), see the details in Caper's README.
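For example, on a SLURM cluster you might submit the Caper leader process as a batch job instead of running it on a login node. The script below is only a sketch: the resource values are illustrative and the SLURM backend must already be set up in your Caper configuration.
#!/bin/bash
#SBATCH --job-name=atac-pipeline
#SBATCH --time=48:00:00
#SBATCH --cpus-per-task=2
#SBATCH --mem=4G
# Caper itself is lightweight; it submits the heavy pipeline tasks as separate jobs.
caper run atac.wdl -i examples/caper/ENCSR356KRQ_subsampled.json --deepcopy --use-singularity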
An input JSON file includes all genomic data files, input parameters and metadata for running the pipeline. Always use absolute paths in an input JSON.
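For illustration, a minimal input JSON for a paired-end run might look like the sketch below. Parameter names follow the pipeline's `atac.*` convention, but treat them as assumptions and check the pipeline's input documentation for the authoritative list; all paths are placeholders and must be absolute.
{
    "atac.title" : "Example run",
    "atac.pipeline_type" : "atac",
    "atac.genome_tsv" : "/abs/path/to/genome.tsv",
    "atac.paired_end" : true,
    "atac.fastqs_rep1_R1" : ["/abs/path/to/rep1_R1.fastq.gz"],
    "atac.fastqs_rep1_R2" : ["/abs/path/to/rep1_R2.fastq.gz"]
}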
- Install Croo. Make sure that you have Python 3 (>3.4.1) installed on your system.
$ pip install croo
- Find a `metadata.json` in Caper's output directory.
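If you are not sure where Caper wrote it, a simple search works (the output directory name depends on your Caper configuration):
$ find [CAPER_OUT_DIR] -name metadata.json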
$ croo [METADATA_JSON_FILE]
There are some useful tools to post-process outputs of the pipeline.
This tool recursively finds and parses all `qc.json` files (the pipeline's final output) under a specified root directory. It generates a TSV file with all quality metrics tabulated in rows, one per experiment and replicate. The tool also estimates the overall quality of a sample from a criteria definition JSON file, which can be a good guideline for QC'ing experiments.
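If you just want a quick look without this tool, a rough equivalent with standard utilities is sketched below. It uses `jq`, and the `.general.title` field path is only an assumed example; actual field names inside `qc.json` vary by pipeline version.
$ find [ROOT_DIR] -name qc.json -exec jq -r '[input_filename, (.general.title // "NA")] | @tsv' {} \; > qc_summary.tsv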
This tool downloads data of any type (FASTQ, BAM, PEAK, ...) from the ENCODE portal. It also generates a metadata JSON file per experiment, which is very useful for building an input JSON file for the pipeline.
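For orientation, the same metadata is available from the ENCODE portal's REST API, so you can inspect an experiment before downloading anything. A sketch, using the experiment from the examples above (field names are as currently exposed by the portal and may change):
$ curl -s -H "Accept: application/json" https://www.encodeproject.org/experiments/ENCSR356KRQ/ | jq '.files[].href'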