Update ansys.md about unlimited license #735

Open · wants to merge 3 commits into `gh-pages`
5 changes: 2 additions & 3 deletions docs/Documentation/Applications/ansys.md
@@ -1,11 +1,10 @@
## Ansys

- The NREL Computational Science Center (CSC) maintains an Ansys license pool for general use, including two seats of CFD, one seat of Ansys Mechanical, and four Ansys HPC Packs to support running a model on many cores/parallel solves.
+ The current Ansys license is an unlimited license that covers all Ansys products, with no restrictions on quantities. However, since Ansys cannot provide a license file that includes all products in unlimited quantities, we have requested licenses based on our anticipated needs. You can check the available licenses on Kestrel with the command `lmstat.ansys`. If the module you need is not listed, please submit a ticket by emailing [[email protected]](mailto:[email protected]) so that we can request an updated license that includes the specific module you require.
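For instance, you could filter the license status output for a feature of interest before launching a job. This is a usage sketch only; the feature name `cfd` is an illustration, not a verified feature string:

```bash
# List the Ansys license status on Kestrel and filter for a feature of interest
lmstat.ansys | grep -i cfd
```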

The main workflow that we support has two stages. The first is interactive graphical usage, e.g., for interactively building meshes or visualizing boundary geometry. For this, Ansys should be run on a [FastX desktop](https://nrel.github.io/HPC/Documentation/Viz_Analytics/virtualgl_fastx/). The second stage is batch (i.e., non-interactive) parallel processing, which should be run on compute nodes via a Slurm job script. Of course, if you have Ansys input from another location ready to run in batch mode, the first stage is not needed. We unfortunately cannot support running parallel jobs on the DAV nodes, nor launching parallel jobs from interactive sessions on compute nodes.
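As a sketch of the second (batch) stage, a parallel Fluent job might be submitted with a Slurm script along these lines. The module name, journal file, solver options, and core count are assumptions for illustration, not site-verified settings:

```bash
#!/bin/bash
#SBATCH --job-name=fluent-batch
#SBATCH --nodes=1
#SBATCH --ntasks=104            # assumes a 104-core Kestrel CPU node
#SBATCH --time=01:00:00
#SBATCH --account=<allocation handle>

cd $SLURM_SUBMIT_DIR
module load ansys               # assumed module name

# Run Fluent headless (-g) on a 3D double-precision case,
# driving the solver from a journal file
fluent 3ddp -g -t$SLURM_NTASKS -i run.jou
```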

### Shared License Etiquette
- License usage can be checked on Kestrel with the command `lmstat.ansys`. Network floating licenses are a shared resource. Whenever you open an Ansys Fluent window, a license is pulled from the pool and becomes unavailable to other users. *Please do not keep idle windows open if you are not actively using the application*, close it and return the associated licenses to the pool. Excessive retention of software licenses falls under the inappropriate use policy.
+ Network floating licenses are a shared resource. Whenever you open an Ansys Fluent window, a license is pulled from the pool and becomes unavailable to other users. *Please do not keep idle windows open if you are not actively using the application*; close it and return the associated licenses to the pool. Excessive retention of software licenses falls under the inappropriate use policy.

### A Note on Licenses and Job Scaling
HPC Pack licenses are used to distribute Ansys batch jobs to run in parallel across many compute cores. The HPC Pack model is designed to enable exponentially more computational resources with each additional license, roughly 2×4^(num_hpc_packs) cores: one pack enables 8 cores, two packs 32, three packs 128, and four packs 512. A table summarizing this relationship is shown below.
43 changes: 39 additions & 4 deletions docs/Documentation/Applications/comsol.md
@@ -37,8 +37,6 @@ However, the performance may be slow and certain display features may behave une
## Running a Single-Node COMSOL Model in Batch Mode
You can save your model built in FastX+GUI mode into a file such as `myinputfile.mph`. Once that's available, the following job script shows how to run a single process multithreaded job in batch mode:

- ???+ example "Example Submission Script"

> **Collaborator:** These headers should be left in. They render as the drop-down boxes in the site (https://squidfunk.github.io/mkdocs-material/reference/admonitions/).
```
#!/bin/bash
#SBATCH --job-name="comsol-batch-single-node"
```

@@ -79,8 +77,6 @@ Once this script file (e.g., `submit_single_node_job.sh`) is saved, it can be su
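The body of this script is collapsed in the diff. For reference only, a minimal single-node sketch might look like the following; the module name, time limit, core count, and file names are assumptions, not NREL-verified settings:

```bash
#!/bin/bash
#SBATCH --job-name="comsol-batch-single-node"
#SBATCH --nodes=1                # single-node job
#SBATCH --ntasks=1               # one COMSOL process
#SBATCH --cpus-per-task=104      # threads for that process (assumes a 104-core Kestrel node)
#SBATCH --time=01:00:00
#SBATCH --account=<allocation handle>

cd $SLURM_SUBMIT_DIR
module load comsol               # assumed module name

# Single process, multithreaded batch run; -np sets the thread count
comsol batch -np $SLURM_CPUS_PER_TASK \
    -inputfile myinputfile.mph \
    -outputfile myoutput.mph \
    -batchlog mylog.log
```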

## Running a Multi-Node COMSOL Model in Batch Mode
To configure a COMSOL job with multiple MPI ranks, which is required for any job running on more than one node, you can build on the following template:

- ???+ example "Example Multiprocess Submission Script"

> **Collaborator:** These headers should be left in. They render as the drop-down boxes in the site (https://squidfunk.github.io/mkdocs-material/reference/admonitions/).
```
#!/bin/bash
```

@@ -119,4 +115,43 @@

The job script can be submitted to SLURM just the same as above for the single-node example. The option `-mpibootstrap slurm` helps COMSOL deduce runtime parameters such as `-nn`, `-nnhost`, and `-np`. For large jobs that require more than one node, this approach, which uses MPI and/or OpenMP, can efficiently utilize the available resources. Note that in this case we choose 32 MPI ranks, 8 per node, with each rank using 13 threads for demonstration purposes, but *not* as an optimal performance recommendation. The optimal configuration depends on your particular problem, workload, and choice of solver, so some experimentation may be required.
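To illustrate how those numbers map onto Slurm resources: 32 ranks at 8 per node implies 4 nodes, and 8 ranks × 13 threads fills the 104 cores of a Kestrel CPU node. The directives below are a sketch consistent with that arithmetic, not the original (collapsed) script:

```bash
#!/bin/bash
#SBATCH --job-name=comsol-batch-multi-node
#SBATCH --time=01:00:00
#SBATCH --nodes=4              # 32 ranks / 8 ranks per node = 4 nodes
#SBATCH --ntasks-per-node=8    # 8 MPI ranks per node
#SBATCH --cpus-per-task=13     # 13 threads per rank; 8 x 13 = 104 cores per node
#SBATCH --account=<allocation handle>

cd $SLURM_SUBMIT_DIR
module load comsol             # assumed module name

# COMSOL deduces -nn, -nnhost, and -np from the Slurm allocation
comsol batch -mpibootstrap slurm \
    -inputfile myinputfile.mph \
    -outputfile myoutput.mph
```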

## Running COMSOL Model with GPU

> **Collaborator suggested change:** `## Running COMSOL Model with GPU` → `## Running a COMSOL Model with GPU`

In COMSOL Multiphysics®, GPU acceleration can significantly increase performance for time-dependent simulations that use the discontinuous Galerkin (dG) method, such as those using the Pressure Acoustics, Time Explicit interface, and for training deep neural network (DNN) surrogate models. The following is a job script example used to run COMSOL jobs on GPU nodes.

> **Collaborator suggested change:** add the admonition header `???+ example "Example GPU Job Script"` above the following script.

```
#!/bin/bash
#SBATCH --job-name=comsol-batch-GPUs
#SBATCH --time=00:20:00
#SBATCH --gres=gpu:1 # request 1 GPU per node; each GPU has 80 GB of memory
#SBATCH --mem-per-cpu=2G # requested memory per CPU core
#SBATCH --ntasks-per-node=64
# Reviewer note (Collaborator): Is this many cores necessary? This will
# then charge for half a GPU node even though the job is only using 1 GPU.

#SBATCH --nodes=2
#SBATCH --account=<allocation handle>
#SBATCH --output=comsol-%j.out
#SBATCH --error=comsol-%j.err

# This helps ensure your job runs from the directory
# from which you ran the sbatch command
cd $SLURM_SUBMIT_DIR

# Set up environment, and list to stdout for verification
module load comsol
echo " "
module list
echo " "

inputfile=$SLURM_SUBMIT_DIR/myinputfile.mph
outputfile=$SLURM_SUBMIT_DIR/myoutputfilename
logfile=$SLURM_SUBMIT_DIR/mylogfilename

# Run a 2-node, 128-rank parallel COMSOL job with 1 thread per rank and 1 GPU per node
# -nn = total number of MPI ranks
# -nnhost = number of MPI ranks per host
# -np = number of threads per rank

comsol -nn 128 -nnhost 64 batch -np 1 -inputfile $inputfile -outputfile $outputfile -batchlog $logfile
```

Note, When launching a GPU job on Kestrel, be sure to do so from one of its dedicated GPU login nodes (ssh to Kestrel from the NREL network using kestrel-gpu.hpc.nrel.gov).
> **Collaborator suggested change:** Note, when launching a GPU job on Kestrel, be sure to do so from one of its dedicated [GPU login nodes](../Systems/Kestrel/index.md).
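For reference, connecting from the NREL network might look like this; `<username>` is a placeholder:

```bash
# Connect to a dedicated Kestrel GPU login node from the NREL network
ssh <username>@kestrel-gpu.hpc.nrel.gov
```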


The Complex Systems Simulation and Optimization group has hosted introductory and advanced COMSOL trainings. The introductory training covered how to use the COMSOL GUI and run COMSOL in batch mode on Kestrel. The advanced training showed how to do a parametric study using different sweeps (running an interactive session is also included) and introduced equation-based simulation and parameter estimation. To learn more about using COMSOL on Kestrel, please refer to these trainings. The recording can be accessed at [Computational Sciences Tutorials](https://nrel.sharepoint.com/sites/ComputationalSciencesTutorials/Lists/Computational%20Sciences%20Tutorial%20Recordings/AllItems.aspx?viewid=7b97e3fa%2Dedf6%2D48cd%2D91d6%2Df69848525ba4&playlistLayout=playback&itemId=75), and the slides and models used in the training can be downloaded from [GitHub](https://github.com/NREL/HPC/tree/master/applications/comsol/comsol-training).