#!/bin/bash
#!
#! SLURM job script to run generate.R (adapted from the example Darwin (Sandy Bridge, ConnectX3) template)
#!
#!#############################################################
#!#### Modify the options in this section as appropriate ######
#!#############################################################
#! sbatch directives begin here ###############################
#! Name of the job:
#SBATCH -J process_hpc_batch
#! Which project should be charged:
#SBATCH -A MRC-BSU-SL2-CPU
#! How many whole nodes should be allocated?
#SBATCH --nodes=1
#! How many (MPI) tasks will there be in total? (<= nodes*16)
#SBATCH --ntasks=1
#! How many CPUs will each task require?
#SBATCH --cpus-per-task=8
#! How much wallclock time will be required?
#SBATCH --time=1:30:00
#! What types of email messages do you wish to receive?
#SBATCH --mail-type=FAIL
#SBATCH --mail-type=TIME_LIMIT
#SBATCH --mail-type=END
#! Prevent the job from being requeued (e.g. if interrupted by node failure or
#! system downtime):
#SBATCH --no-requeue
#SBATCH --verbose
#SBATCH --array=1
#SBATCH -p cclake-himem
#!SBATCH --qos=covid0
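#! (sbatch only parses lines beginning exactly with #SBATCH, so the #!SBATCH qos
#! line above is effectively disabled.)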
#! sbatch directives end here (put any additional directives above this line)
#SBATCH --output=process_model_runs_20201206_%a.out
#SBATCH --error=process_model_runs_20201206_%a.err
#! Notes:
#! Charging is determined by node number*walltime. Allocation is in entire nodes.
#! The --ntasks value refers to the number of tasks to be launched by SLURM only. This
#! usually equates to the number of MPI tasks launched. Reduce this from nodes*16 if
#! demanded by memory requirements, or if OMP_NUM_THREADS>1.
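#! e.g. this job requests 1 node for 1:30:00, so it is charged roughly
#! 1 node x 1.5 h = 1.5 node-hours, however many of the node's cores it uses.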
#! Number of nodes and tasks per node allocated by SLURM (do not change):
numnodes=$SLURM_JOB_NUM_NODES
numtasks=$SLURM_NTASKS
mpi_tasks_per_node=$(echo "$SLURM_TASKS_PER_NODE" | sed -e 's/^\([0-9][0-9]*\).*$/\1/')
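#! (SLURM_TASKS_PER_NODE is reported as e.g. "1" or "16(x2)"; the sed above keeps
#! only the leading task count.)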
#! ############################################################
#! Modify the settings below to specify the application's environment, location
#! and launch method:
#! Optionally modify the environment seen by the application
#! (note that SLURM reproduces the environment at submission irrespective of ~/.bashrc):
. /etc/profile.d/modules.sh # Leave this line (enables the module command)
#! Insert additional module load commands after this line if needed:
module load pandoc # Needed to generate the Stan model code
# PATH=$PATH:$HOME/bin:$HOME/myPython/bin # Get access to the html2text function needed by the Rscript.
module load gcc/9 intel/compilers/2020.2 gsl-2.4-intel-17.0.4-etauzbm
# # Navigate to the correct working directory
# dirs=(*/)
# # echo ${dirs[SLURM_ARRAY_TASK_ID]}
# cd ${dirs[SLURM_ARRAY_TASK_ID]}
# pwd
#! Full path to application executable:
application="Rscript"
options="generate.R"
# #! Are you using OpenMP (NB this is unrelated to OpenMPI)? If so increase this
# #! safe value to no more than 16:
# export OMP_NUM_THREADS=1
# export R_PROFILE=$RMPI_RPROFILE
#! Number of MPI tasks to be started by the application per node and in total (do not change):
np=$((numnodes * mpi_tasks_per_node))
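#! (For this job np=1, i.e. 1 node x 1 task per node; np is not referenced by the
#! plain Rscript command below.)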
# #! The following variables define a sensible pinning strategy for Intel MPI tasks
# #! this should be suitable for both pure MPI and hybrid MPI/OpenMP jobs:
# export I_MPI_PIN_DOMAIN=omp:compact # Domains are $OMP_NUM_THREADS cores in size
# export I_MPI_PIN_ORDER=scatter # Adjacent domains have minimal sharing of caches/sockets
# #! Notes:
# #! 1. These variables influence Intel MPI only.
# #! 2. Domains are non-overlapping sets of cores which map 1-1 to MPI tasks.
# #! 3. I_MPI_PIN_PROCESSOR_LIST is ignored if I_MPI_PIN_DOMAIN is set.
# #! 4. If MPI tasks perform better when sharing caches/sockets, try I_MPI_PIN_ORDER=compact.
# # Get array ID
# i=${SLURM_ARRAY_TASK_ID}
#! Uncomment one choice for CMD below (add mpirun/mpiexec options if necessary):
#! Choose this for an MPI code (possibly using OpenMP) using Intel MPI.
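#! A sketch of the Intel MPI form, assuming mpirun is provided by a loaded Intel MPI
#! module (kept commented out: this job runs a single Rscript task):
# CMD="mpirun -ppn $mpi_tasks_per_node -np $np $application $options"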
# You can use any arbitrary set of Linux commands here
CMD="$application $options"
# Or for example:
# CMD="Rscript myScript.R"
eval $CMD
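#! Submit from the directory containing generate.R, for example:
#!   sbatch submit_generate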