The number of cores in ssgsea-cli.R is hard-coded and should instead be set via a command line argument.
Currently, parallel processing is set to use all available cores, i.e.:
```r
## in ssgsea-cli.R
# hard-coded parameters
spare.cores <- 0 # use all available cpus

## then in ssGSEA.2.0.R
## register cores
cl <- parallel::makeCluster(detectCores() - spare.cores)
doParallel::registerDoParallel(cl)
```
This is problematic when using a job scheduler in an HPC environment, since `detectCores()` returns the number of CPU cores on the current machine regardless of how many cores the job has been allotted by the scheduler.
Would it be possible to add a command line argument `--cores` (or some other name) to specify the number of cores to use, and then in ssGSEA.2.0.R call `cl <- parallel::makeCluster(cores)`?
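For illustration, a minimal sketch of what this could look like, assuming an optparse-style interface (the actual argument handling in ssgsea-cli.R may differ, and the option name `--cores` is just a suggestion):

```r
library(optparse)
library(parallel)
library(doParallel)

# hypothetical --cores option; defaults to all available CPUs,
# preserving the current behavior when the flag is omitted
opt.list <- list(
  make_option("--cores", type = "integer", default = detectCores(),
              help = "number of CPU cores to use [default: all available]")
)
opt <- parse_args(OptionParser(option_list = opt.list))

# register only the requested number of workers instead of detectCores()
cl <- makeCluster(opt$cores)
registerDoParallel(cl)
```

That way a scheduler job can pass, e.g., `--cores 4` to match its allocation, while interactive use stays unchanged.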
Thanks!