Update ansys.md about unlimited license #735
base: gh-pages
Conversation
@@ -37,8 +37,6 @@ However, the performance may be slow and certain display features may behave une
## Running a Single-Node COMSOL Model in Batch Mode
You can save your model built in FastX+GUI mode into a file such as `myinputfile.mph`. Once that's available, the following job script shows how to run a single process multithreaded job in batch mode:

???+ example "Example Submission Script"
These headers should be left in. They render as the drop down boxes in the site
(https://squidfunk.github.io/mkdocs-material/reference/admonitions/)
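For context, a minimal single-node submission script of the kind this admonition wraps might look like the following. This is a sketch, not the repository's actual script: the module name, account handle, walltime, and core count (assuming a 104-core Kestrel CPU node) are all assumptions.

```shell
#!/bin/bash
#SBATCH --job-name=comsol_single      # hypothetical job name
#SBATCH --account=<your_allocation>   # replace with your allocation handle
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1           # single COMSOL process
#SBATCH --cpus-per-task=104           # assumes a 104-core Kestrel CPU node
#SBATCH --time=01:00:00               # walltime is an assumption

module load comsol                    # module name is an assumption

inputfile=myinputfile.mph             # model file saved from FastX+GUI mode
outputfile=myoutputfile.mph           # hypothetical output file name
logfile=mybatch.log                   # hypothetical log file name

# One process, multithreaded: -np sets the thread count of the single rank
comsol batch -np $SLURM_CPUS_PER_TASK -inputfile $inputfile \
       -outputfile $outputfile -batchlog $logfile
```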
@@ -79,8 +77,6 @@ Once this script file (e.g., `submit_single_node_job.sh`) is saved, it can be su

## Running a Multi-Node COMSOL Model in Batch Mode
To configure a COMSOL job with multiple MPI ranks, required for any job where the number of nodes >1, you can build on the following template:

???+ example "Example Multiprocess Submission Script"
These headers should be left in. They render as the drop down boxes in the site
(https://squidfunk.github.io/mkdocs-material/reference/admonitions/)
@@ -119,4 +115,43 @@ To configure a COMSOL job with multiple MPI ranks, required for any job where th

The job script can be submitted to SLURM just the same as above for the single-node example. The option `-mpibootstrap slurm` helps COMSOL to deduce runtime parameters such as `-nn`, `-nnhost` and `-np`. For large jobs that require more than one node, this approach, which uses MPI and/or OpenMP, can be used to efficiently utilize the available resources. Note that in this case, we choose 32 MPI ranks, 8 per node, and each rank using 13 threads for demonstration purposes, but *not* as an optimal performance recommendation. The optimal configuration depends on your particular problem, workload, and choice of solver, so some experimentation may be required.
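The rank/thread layout described above (32 MPI ranks, 8 per node, 13 threads per rank, i.e. 4 nodes at 104 cores each) might be expressed in SLURM terms as sketched below; the account handle, module name, walltime, and file names are assumptions, not the repo's actual values.

```shell
#!/bin/bash
#SBATCH --account=<your_allocation>   # replace with your allocation handle
#SBATCH --nodes=4                     # 4 nodes x 8 ranks = 32 MPI ranks
#SBATCH --ntasks-per-node=8           # 8 ranks per node, as in the text
#SBATCH --cpus-per-task=13            # 13 threads per rank (8 x 13 = 104 cores/node)
#SBATCH --time=04:00:00               # walltime is an assumption

module load comsol                    # module name is an assumption

inputfile=myinputfile.mph
outputfile=myoutputfile.mph
logfile=mybatch.log

# -mpibootstrap slurm lets COMSOL deduce -nn, -nnhost and -np
# from the SLURM allocation above
comsol batch -mpibootstrap slurm -inputfile $inputfile \
       -outputfile $outputfile -batchlog $logfile
```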
## Running COMSOL Model with GPU
Suggested change:
- ## Running COMSOL Model with GPU
+ ## Running a COMSOL Model with GPU
#SBATCH --time=00:20:00
#SBATCH --gres=gpu:1             # request 1 GPU per node; each GPU has 80 GB of memory
#SBATCH --mem-per-cpu=2G         # requested memory per CPU core
#SBATCH --ntasks-per-node=64
Is this many cores necessary? This will then charge for half a gpu node even though the job is only using 1 GPU.
comsol -nn 128 -nnhost 64 batch -np 1 -inputfile $inputfile -outputfile $outputfile -batchlog $logfile
```
Note, When launching a GPU job on Kestrel, be sure to do so from one of its dedicated GPU login nodes (ssh to Kestrel from the NREL network using kestrel-gpu.hpc.nrel.gov). |
Suggested change:
- Note, When launching a GPU job on Kestrel, be sure to do so from one of its dedicated GPU login nodes (ssh to Kestrel from the NREL network using kestrel-gpu.hpc.nrel.gov).
+ Note, when launching a GPU job on Kestrel, be sure to do so from one of its dedicated [GPU login nodes](../Systems/Kestrel/index.md).
@@ -119,4 +115,43 @@ To configure a COMSOL job with multiple MPI ranks, required for any job where th
## Running COMSOL Model with GPU
In COMSOL Multiphysics®, GPU acceleration can significantly increase performance for time-dependent simulations that use the discontinuous Galerkin (dG) method, such as those using the Pressure Acoustics, Time Explicit interface, and for training deep neural network (DNN) surrogate models. The following is a job script example used to run COMSOL jobs on GPU nodes.
???+ example "Example GPU Job Script"
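Pieced together from the fragments shown in this diff, the full GPU job script might look like the sketch below. The `#SBATCH` directives and the `comsol` command line come from the diff itself; the account handle, module name, file names, and the 2-node count (inferred from `-nn 128` with `-nnhost 64`) are assumptions. Note the review comment above questioning whether 64 tasks per node are necessary for a 1-GPU job.

```shell
#!/bin/bash
#SBATCH --account=<your_allocation>   # replace with your allocation handle
#SBATCH --time=00:20:00
#SBATCH --nodes=2                     # inferred: -nn 128 with -nnhost 64 implies 2 hosts
#SBATCH --ntasks-per-node=64          # from the diff; see review note on core count
#SBATCH --gres=gpu:1                  # request 1 GPU per node (80 GB memory each)
#SBATCH --mem-per-cpu=2G              # requested memory per CPU core

module load comsol                    # module name is an assumption

inputfile=myinputfile.mph
outputfile=myoutputfile.mph
logfile=mybatch.log

# 128 MPI ranks total, 64 per host, 1 thread per rank
comsol -nn 128 -nnhost 64 batch -np 1 -inputfile $inputfile \
       -outputfile $outputfile -batchlog $logfile
```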