If you think your job will need more than 4GB of memory, request one CPU for every 4GB required.
Why does requesting more CPU help with allocating more memory?
FH Gizmo has class J nodes, each with 24 cores and 384GB of memory, and class K nodes, each with 36 cores and 768GB of memory. So, if you think your job will need 100GB of memory, by this rule of thumb you would request 25 cores. Since a class J node only has 24 cores, you would be assigned to a class K node and occupy 25 of its 36 cores; other users can use the remaining 11 cores. Everyone on the node shares the 768GB of memory, so you are relying on the other users not taking up the memory you need: the more cores you occupy on a node, the fewer users will be competing for its memory. It's an imprecise system, and SciComp is interested in making memory allocation more precise in the future.
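For concreteness, a request following this rule of thumb might look like the sketch below. The job name, walltime, and script path are placeholders, not taken from this issue, and the exact flag (`--cpus-per-task` vs. `--ntasks`) depends on whether the workload is multi-threaded or multi-process:

```bash
#!/bin/bash
#SBATCH --job-name=bigmem-job      # hypothetical job name
#SBATCH --cpus-per-task=25         # ~100GB needed / 4GB per core => 25 cores
#SBATCH --time=12:00:00            # placeholder walltime

# The 25-core request cannot fit on a 24-core class J node, so the
# scheduler places the job on a 36-core class K node; the extra cores
# act as a proxy for reserving a larger share of that node's memory.

./run_analysis.sh                  # placeholder for the actual workload
```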
Note: on other SLURM systems, `sbatch --mem` does have implications for memory allocation.
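On clusters where memory limits are actually enforced, a minimal sketch of requesting memory directly would look like this (values and script name are illustrative only):

```bash
# Request 100GB for the whole job instead of over-requesting cores:
sbatch --mem=100G --cpus-per-task=4 run_analysis.sh

# Or request memory per core, which scales with the CPU request:
sbatch --mem-per-cpu=4G --cpus-per-task=25 run_analysis.sh
```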
Check in with Michael about how to set memory higher, and resolve some other open questions.