This repository is an interactive introduction to using Nextflow on HPC within the CIP.
It assumes that you are familiar with a Linux operating system and are proficient with Linux command-line tools.
This tutorial was created with the San Diego Supercomputer Center's Expanse supercomputer in mind.
After completing this introduction, you should:
- Be able to run a simple Nextflow pipeline on HPC (a minimal example is sketched after this list)
- Know how to use conda environments with Nextflow
- Know how to use Nextflow with Singularity containers
- Be able to perform simple data analysis and visualizations
- Be able to use basic git commands
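To give a first taste of what these objectives look like in practice, below is a minimal, hypothetical Nextflow (DSL2) pipeline sketch. The file name `hello.nf` and its contents are illustrative only and are not part of this repository's examples:

```nextflow
#!/usr/bin/env nextflow
// hello.nf -- minimal illustrative pipeline (not part of this repository)

nextflow.enable.dsl = 2

// A single process that writes a greeting for each input value to a file
process SAY_HELLO {
    input:
    val name

    output:
    path "${name}.txt"

    script:
    """
    echo "Hello, ${name}!" > ${name}.txt
    """
}

// The workflow block feeds a channel of names into the process
workflow {
    Channel.of('Expanse', 'Nextflow', 'SLURM') | SAY_HELLO
}
```

Such a pipeline can be launched with `nextflow run hello.nf`; the sections below show how the same pipeline can be dispatched to SLURM instead of running everything locally.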
Running Nextflow on top of SLURM offers several advantages:
- Accessibility: Nextflow abstracts the complexities of SLURM, making it more accessible and user-friendly (see the configuration sketch after this list).
- Simplified Workflow Management: Users can define and execute workflows using simple, high-level commands.
- Efficient Execution: Facilitates the efficient and scalable execution of complex data analysis pipelines.
- No In-depth SLURM Knowledge Required: Allows leveraging SLURM's powerful resource management capabilities without needing detailed knowledge of its configurations.
- Optimized Resource Utilization: Ensures optimal utilization of HPC resources by integrating workflow management with SLURM's job scheduling.
- Enhanced Workflow Development: Simplifies the overall workflow development and execution process.
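To illustrate how little SLURM-specific knowledge is needed, here is a hedged sketch of a `nextflow.config` that routes all tasks to SLURM. The partition name, account, and resource values are placeholders and must be adapted to your Expanse allocation:

```groovy
// nextflow.config -- illustrative sketch; partition/account values are placeholders
process {
    executor = 'slurm'                    // submit each task as a SLURM job
    queue    = 'shared'                   // placeholder partition name
    cpus     = 1
    memory   = '2 GB'
    time     = '30m'
    clusterOptions = '--account=ABC123'   // placeholder allocation/account
}
```

With a configuration like this in place, running a pipeline submits each task as its own SLURM job, so you never write per-step sbatch scripts by hand.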
This repository contains a series of examples that demonstrate how to use Nextflow on HPC with SLURM.
To get started, navigate to the appropriate example directory and follow the instructions provided in the README file.
```bash
mkdir -p ~/a/
cd ~/a
git clone https://github.com/hovo1990/CIP_Nextflow_on_HPC.git
```
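On most clusters it is good practice to run the Nextflow head process inside a batch job rather than on a login node. The following is a hedged sketch of such a launcher script; the module name, partition, account, and pipeline file are all placeholders and will differ on Expanse:

```bash
#!/bin/bash
#SBATCH --job-name=nextflow-head      # name of the head job
#SBATCH --partition=shared            # placeholder partition
#SBATCH --account=ABC123              # placeholder allocation
#SBATCH --cpus-per-task=2
#SBATCH --mem=4G
#SBATCH --time=02:00:00

# Load Nextflow if the cluster provides it as a module
# (module name is a placeholder; it may differ or be absent on your cluster)
module load nextflow

# Launch the pipeline; Nextflow then submits the individual tasks to SLURM
nextflow run hello.nf -c nextflow.config
```

Submitted with `sbatch` (e.g., `sbatch run_nextflow.sh`, a hypothetical file name), this keeps long-running workflow coordination off the login nodes.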
Unfortunately, Nextflow is sometimes not available on the cluster and must be installed manually.
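A common way to install Nextflow manually is the self-installing script provided by the Nextflow project (it requires a Java runtime to be available, often via a `module load` on HPC systems). The install directory below (`~/bin`) is just one reasonable choice:

```bash
# Download the self-installing Nextflow launcher (requires Java on the system)
curl -s https://get.nextflow.io | bash

# Move the launcher somewhere on your PATH (install directory is a placeholder)
mkdir -p ~/bin
mv nextflow ~/bin/
export PATH="$HOME/bin:$PATH"

# Verify the installation
nextflow -version
```

Alternatively, Nextflow can often be installed into a conda environment (for example from the bioconda channel), which fits naturally with the conda-based examples later in this introduction.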