
Commit

Update gpu_hpc.md
mselensky authored Aug 9, 2024
1 parent 5d50778 commit deeaf19
Showing 1 changed file with 5 additions and 5 deletions.
10 changes: 5 additions & 5 deletions docs/Documentation/Development/Programming_Models/gpu_hpc.md
@@ -66,11 +66,11 @@ The following examples are generic templates that NREL HPC users can adapt for t

The following are some GPU-relevant environment variables you can set in your submission scripts to Slurm.

- | Variable | Description |
- | :-- | :-- |
- | SLURM_GPUS_ON_NODE | Quantity of GPU devices available to a Slurm job. Set by Slurm. |
- | SLURM_JOB_GPUS | GPU device ID(s) available to a Slurm job. Starts with `0`. Set by Slurm. |
- | CUDA_VISIBLE_DEVICES | GPU device ID(s) available to a CUDA process. Starts with `0`. This is a variable that you might need to set, depending on the application. |
+ | Variable | Description |
+ | :-- | :-- |
+ | `SLURM_GPUS_ON_NODE` | Quantity of GPU devices available to a Slurm job. Set by Slurm. |
+ | `SLURM_JOB_GPUS` | GPU device ID(s) available to a Slurm job. Starts with `0`. Set by Slurm. |
+ | `CUDA_VISIBLE_DEVICES` | GPU device ID(s) available to a CUDA process. Starts with `0`. This is a variable that you might need to set, depending on the application. If `CUDA_VISIBLE_DEVICES` isn't already set in your shell session, you can set it with `export CUDA_VISIBLE_DEVICES=$SLURM_JOB_GPUS`. |

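As a sketch of how these variables might fit together in a submission script (the `#SBATCH` directives, account placeholder, and the fallback to device `0` are illustrative assumptions, not part of the table above):

```shell
#!/bin/bash
# Hypothetical submission-script sketch; directive values are placeholders.
#SBATCH --account=<your-account>
#SBATCH --gpus=2
#SBATCH --time=00:10:00

# SLURM_GPUS_ON_NODE and SLURM_JOB_GPUS are set by Slurm when the job starts.
echo "GPUs on node: ${SLURM_GPUS_ON_NODE:-unset}"
echo "Device IDs:   ${SLURM_JOB_GPUS:-unset}"

# Mirror Slurm's device list for applications that read CUDA_VISIBLE_DEVICES;
# the fallback to device 0 (an assumption here) covers runs outside a Slurm job.
export CUDA_VISIBLE_DEVICES=${SLURM_JOB_GPUS:-0}
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
```

Run outside a Slurm allocation, the Slurm variables report as unset and `CUDA_VISIBLE_DEVICES` falls back to `0`; inside a job, it inherits the device list Slurm assigned.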
### Software containers

