Running a Nextflow pipeline on Slurm using srun and MPI support #3649
Unanswered
BioInf2305 asked this question in Q&A
Replies: 2 comments 4 replies
-
As far as I understand, your Nextflow workflow is a standard one.
1 reply
-
I use this guide to run my pipelines, modifying the required resources in the sbatch script.
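A minimal sketch of such a submission wrapper. The partition name, module name, and resource values are assumptions to adjust for your site; the key point is that the head job's time limit must cover the whole run, since Nextflow stays alive to submit and monitor all the task jobs:

```shell
#!/bin/bash
#SBATCH --job-name=nextflow-head
#SBATCH --partition=long        # assumed partition name; use one your site allows
#SBATCH --time=14-00:00:00      # head job must outlive the entire workflow
#SBATCH --cpus-per-task=2       # the head process itself is lightweight
#SBATCH --mem=8G

# Load Nextflow however your site provides it (module name is an assumption).
module load nextflow

# Launch the workflow. With process.executor = 'slurm' in the config,
# Nextflow submits each task as its own Slurm job; -resume lets a
# requeued or restarted head job pick up cached results.
nextflow run main.nf -profile slurm -resume
```

Submitted like any other batch job, e.g. `sbatch run_nextflow.sbatch`.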
3 replies
-
Hi,
Using a Nextflow pipeline (developed on 22.10.6), I want to run ~4000 jobs, each of which can take 1-2 days depending on the coverage. I intend to run this workflow on our university's Linux clusters. This HPC has a standard set-up: the login node should only be used to submit jobs to one of several clusters and partitions. Depending on queue times and the limits imposed by the HPC administration's policies, completing the pipeline can take at least a few weeks. Furthermore, except for the login node, all queues support spawning jobs with MPI or srun from an sbatch file. In this pipeline, I set the executor to "slurm" (in the Nextflow config file), with the computational resources specified in base.config. In the past, I have run this pipeline (with 50-100 jobs) by launching the nextflow command directly inside "screen" on the login node (though this is not the recommended way on our HPC system).
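For reference, a setup like the one described (executor "slurm" with per-process resources in base.config) typically looks roughly like the following config fragment; the queue name and resource values here are placeholders, not the asker's actual settings:

```groovy
// nextflow.config / base.config (sketch; values are assumptions)
process {
    executor = 'slurm'
    queue    = 'normal'   // assumed partition name
    cpus     = 4
    memory   = '16 GB'
    time     = '2d'       // jobs can take 1-2 days
}
```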
I would greatly appreciate it if anyone could answer these questions: (1) how can I wrap the "nextflow run" command in a shell script and submit it as a regular job? (2) can I run a Nextflow job with MPI and srun? From an extensive search of past issues in this GitHub repository, I gather that this was possible in older Nextflow releases (20.XX), but that the current version no longer supports it.
Thanks.