diff --git a/docs/Announcements/2022-04-06-announcement.md b/docs/Announcements/2022-04-06-announcement.md index 150cab507..2a50b841d 100644 --- a/docs/Announcements/2022-04-06-announcement.md +++ b/docs/Announcements/2022-04-06-announcement.md @@ -9,7 +9,7 @@ brief: FY23 Allocations, Documentation, Eagle Login Nodes, CSC Tutorial The Eagle allocation process for FY23 is scheduled to open up on May 11, with applications due June 8. The application process will be an update of the process used in FY22, with additional information requested to help manage the transition from Eagle to Kestrel. HPC Operations will host a webinar on May 17 to explain the application process. Watch for announcements. # Documentation -We would like to announce our user-contributed [documentation repository](https://github.com/NREL/HPC) and [website](https://nrel.github.io/HPC/) for Eagle and other NREL HPC systems that is open to both NREL and non-NREL users. This repository serves as a collection of code examples, executables, and utilities to benefit the NREL HPC community. It also hosts a site that provides more verbose documentation and examples. If you would like to contribute or recommend a topic to be covered please open an issue or a pull request in the repository. Our [contribution guidelines](https://github.com/NREL/HPC/blob/master/CONTRIBUTING.md) offer more detailed instructions on how to add content to the pages. +We would like to announce our user-contributed [documentation repository](https://github.com/NREL/HPC) and [website](https://nrel.github.io/HPC/) for Eagle and other NREL HPC systems that are open to both NREL and non-NREL users. This repository serves as a collection of code examples, executables, and utilities to benefit the NREL HPC community. It also hosts a site that provides more verbose documentation and examples. If you would like to contribute or recommend a topic to be covered, please open an issue or a pull request in the repository. 
Our [contribution guidelines](https://github.com/NREL/HPC/blob/code-examples/CONTRIBUTING.md) offer more detailed instructions on how to add content to the pages. # Eagle login node etiquette Eagle logins are shared resources that are heavily utilized. We have some controls in place to limit per-user memory and CPU consumption that will ramp down your processes' usage over time. We recommend any sustained heavy usage of memory and CPU take place on compute nodes, where these limits aren't in place. If you only need a node for an hour, nodes in the debug partition are available. We permit compiles and file operations on the logins, but discourage multi-threaded operations or long, sustained operations against the file system. We cannot put the same limits on file system operations as on memory and CPU; therefore, if you slow the file system on the login node, you slow it for everyone on that login. Lastly, FastX, the remote windowing package on the ED nodes, is a licensed product. When you are done using FastX, please log all the way out to ensure licenses are available for all users. diff --git a/docs/Announcements/2022-05-04-announcement.md b/docs/Announcements/2022-05-04-announcement.md index d310c710a..908dbc17c 100644 --- a/docs/Announcements/2022-05-04-announcement.md +++ b/docs/Announcements/2022-05-04-announcement.md @@ -41,4 +41,4 @@ The configuration file will also apply to command-line ssh in Windows. The Lustre file system that hosts /projects, /scratch, /shared-projects and /datasets works most efficiently when it is under 80% full. Please do your part to keep the file system under 80% by cleaning up your /projects, /scratch and /shared-projects spaces. # Documentation -We would like to announce our user-contributed [documentation repository](https://github.com/NREL/HPC) and [website](https://nrel.github.io/HPC/) for Eagle and other NREL HPC systems that is open to both NREL and non-NREL users. 
This repository serves as a collection of code examples, executables, and utilities to benefit the NREL HPC community. It also hosts a site that provides more verbose documentation and examples. If you would like to contribute or recommend a topic to be covered please open an issue or a pull request in the repository. Our [contribution guidelines](https://github.com/NREL/HPC/blob/master/CONTRIBUTING.md) offer more detailed instructions on how to add content to the pages. +We would like to announce our user-contributed [documentation repository](https://github.com/NREL/HPC) and [website](https://nrel.github.io/HPC/) for Eagle and other NREL HPC systems that are open to both NREL and non-NREL users. This repository serves as a collection of code examples, executables, and utilities to benefit the NREL HPC community. It also hosts a site that provides more verbose documentation and examples. If you would like to contribute or recommend a topic to be covered, please open an issue or a pull request in the repository. Our [contribution guidelines](https://github.com/NREL/HPC/blob/code-examples/CONTRIBUTING.md) offer more detailed instructions on how to add content to the pages. diff --git a/docs/Documentation/MachineLearning/ReinforcementLearning/index.md b/docs/Documentation/MachineLearning/ReinforcementLearning/index.md index 8b992814f..422ad4a2a 100644 --- a/docs/Documentation/MachineLearning/ReinforcementLearning/index.md +++ b/docs/Documentation/MachineLearning/ReinforcementLearning/index.md @@ -71,7 +71,7 @@ If everything works correctly, you will see an output similar to: RL algorithms are notorious for the amount of data they need to collect in order to learn policies. The more data collected, the better the training will (usually) be. The best way to do this is to run many Gym instances in parallel and collect experience, and this is where RLlib assists. 
-[RLlib](https://docs.ray.io/en/master/rllib/index.html) is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications. It supports all known deep learning frameworks such as Tensorflow, Pytorch, although most parts are framework-agnostic and can be used by either one. +[RLlib](https://docs.ray.io/en/master/rllib/index.html) is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications. It supports the major deep learning frameworks, such as TensorFlow and PyTorch, although most parts are framework-agnostic and can be used with either one. The RL policy learning examples provided in this tutorial demonstrate the RLlib abilities. For convenience, the `CartPole-v0` OpenAI Gym environment will be used. @@ -84,7 +84,7 @@ Begin the trainer by importing the `ray` package: import ray from ray import tune ``` -`Ray` consists of an API readily available for building [distributed applications](https://docs.ray.io/en/master/index.html). On top of it, there are several problem-solving libraries, one of which is RLlib. +`Ray` provides an API readily available for building [distributed applications](https://docs.ray.io/en/master/index.html). On top of it, there are several problem-solving libraries, one of which is RLlib. `Tune` is also one of `Ray`'s libraries for scalable hyperparameter tuning. All RLlib trainers (scripts for RL agent training) are compatible with the Tune API, making experimenting easy and streamlined. @@ -139,7 +139,7 @@ tune.run( ``` The RLlib trainer is ready! -Except the aforementioned default hyperparameters, [every RL algorithm](https://docs.ray.io/en/master/rllib-algorithms.html#available-algorithms-overview) provided by RLlib has its own hyperparameters and their default values that can be tuned in advance. 
+Besides the aforementioned default hyperparameters, [every RL algorithm](https://docs.ray.io/en/master/rllib-algorithms.html#available-algorithms-overview) provided by RLlib has its own hyperparameters and their default values that can be tuned in advance. The code of the trainer in this example can be found [in the tutorial repo](https://github.com/erskordi/HPC/blob/HPC-RL/languages/python/openai_rllib/simple-example/simple_trainer.py). @@ -402,7 +402,7 @@ Function `register_env` takes two arguments: env_name = "custom-env" register_env(env_name, lambda config: BasicEnv()) ``` -Once again, RLlib provides [detailed explanation](https://docs.ray.io/en/master/rllib-env.html) of how `register_env` works. +Once again, RLlib provides a [detailed explanation](https://docs.ray.io/en/master/rllib-env.html) of how `register_env` works. The `tune.run` function uses the `env_name` defined above instead of `args.name_env`. diff --git a/docs/Documentation/Software_Tools/Jupyter/index.md b/docs/Documentation/Software_Tools/Jupyter/index.md index 5cbcc4ad8..100783286 100644 --- a/docs/Documentation/Software_Tools/Jupyter/index.md +++ b/docs/Documentation/Software_Tools/Jupyter/index.md @@ -160,17 +160,17 @@ Automation makes life better! ### Auto-launching with an sbatch script -Full directions included in the [Jupyter repo](https://github.com/NREL/HPC/tree/master/general/Jupyterhub/jupyter). +Full directions are included in the [Jupyter repo](https://github.com/NREL/HPC/tree/code-examples/general/Jupyterhub/jupyter). 
-Download [sbatch_jupyter.sh](https://github.com/NREL/HPC/blob/master/general/Jupyterhub/jupyter/sbatch_jupyter.sh) and [auto_launch_jupyter.sh](https://github.com/NREL/HPC/blob/master/general/Jupyterhub/jupyter/auto_launch_jupyter.sh) +Download [sbatch_jupyter.sh](https://github.com/NREL/HPC/blob/code-examples/general/Jupyterhub/jupyter/sbatch_jupyter.sh) and [auto_launch_jupyter.sh](https://github.com/NREL/HPC/blob/code-examples/general/Jupyterhub/jupyter/auto_launch_jupyter.sh) -Edit [sbatch_jupyter.sh](https://github.com/NREL/HPC/blob/master/general/Jupyterhub/jupyter/sbatch_jupyter.sh) to change: +Edit [sbatch_jupyter.sh](https://github.com/NREL/HPC/blob/code-examples/general/Jupyterhub/jupyter/sbatch_jupyter.sh) to change: `--account=*yourallocation*` `--time=*timelimit*` -Run [auto_launch_jupyter.sh](https://github.com/NREL/HPC/blob/master/general/Jupyterhub/jupyter/auto_launch_jupyter.sh) and follow directions +Run [auto_launch_jupyter.sh](https://github.com/NREL/HPC/blob/code-examples/general/Jupyterhub/jupyter/auto_launch_jupyter.sh) and follow the directions That's it! @@ -278,17 +278,17 @@ You can also run shell commands inside a cell. 
For example: [Awesome Jupyterlab](https://github.com/mauhai/awesome-jupyterlab) -[Plotting with matplotlib](https://nbviewer.jupyter.org/github/jrjohansson/scientific-python-lectures/blob/master/Lecture-4-Matplotlib.ipynb) +[Plotting with matplotlib](https://nbviewer.jupyter.org/github/jrjohansson/scientific-python-lectures/blob/master/Lecture-4-Matplotlib.ipynb) -[Python for Data Science](https://nbviewer.jupyter.org/github/gumption/Python_for_Data_Science/blob/master/Python_for_Data_Science_all.ipynb) +[Python for Data Science](https://nbviewer.jupyter.org/github/gumption/Python_for_Data_Science/blob/master/Python_for_Data_Science_all.ipynb) -[Numerical Computing in Python](https://nbviewer.jupyter.org/github/phelps-sg/python-bigdata/blob/master/src/main/ipynb/numerical-slides.ipynb) +[Numerical Computing in Python](https://nbviewer.jupyter.org/github/phelps-sg/python-bigdata/blob/master/src/main/ipynb/numerical-slides.ipynb) -[The Sound of Hydrogen](https://nbviewer.jupyter.org/github/Carreau/posts/blob/master/07-the-sound-of-hydrogen.ipynb) +[The Sound of Hydrogen](https://nbviewer.jupyter.org/github/Carreau/posts/blob/master/07-the-sound-of-hydrogen.ipynb) [Plotting Pitfalls](https://anaconda.org/jbednar/plotting_pitfalls/notebook) -[GeoJSON Extension](https://github.com/jupyterlab/jupyter-renderers/tree/master/packages/geojson-extension) +[GeoJSON Extension](https://github.com/jupyterlab/jupyter-renderers/tree/master/packages/geojson-extension) ## Happy Notebooking! 
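An editor's aside on the auto-launch workflow above: the launcher scripts ultimately give you an ssh port-forward command to reach the Jupyter server on a compute node. As a hedged sketch (not part of the repo scripts; the node name, port, and login host below are illustrative placeholders), that command can be assembled like this:

```python
# Hypothetical helper, NOT from the NREL/HPC repo scripts: build the ssh
# command that forwards a local port to a Jupyter server on a compute node.
# The node name, port, and login host are illustrative assumptions.
def tunnel_command(node: str, port: int, login_host: str = "eagle.hpc.nrel.gov") -> str:
    """Return an ssh command forwarding local `port` to the same `port` on `node`."""
    return f"ssh -L {port}:{node}:{port} {login_host}"

print(tunnel_command("r1i7n24", 8888))
# -> ssh -L 8888:r1i7n24:8888 eagle.hpc.nrel.gov
```

Running the printed command on your workstation would make the notebook reachable at `localhost:8888`, assuming the placeholder host and node names are replaced with real ones.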
diff --git a/docs/Documentation/Systems/Swift/running.md b/docs/Documentation/Systems/Swift/running.md index 0c9ffce96..0303ebc56 100644 --- a/docs/Documentation/Systems/Swift/running.md +++ b/docs/Documentation/Systems/Swift/running.md @@ -365,7 +365,7 @@ ml openmpi gcc vasp #### get input and set it up #### This is from an old benchmark test -#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2 +#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2 mkdir $SLURM_JOB_ID cp input/* $SLURM_JOB_ID diff --git a/docs/Documentation/Systems/Vermillion/running.md b/docs/Documentation/Systems/Vermillion/running.md index 0b23d24df..0f00efece 100644 --- a/docs/Documentation/Systems/Vermillion/running.md +++ b/docs/Documentation/Systems/Vermillion/running.md @@ -355,9 +355,9 @@ There are actually several builds of Vasp on Vermilion, including builds of VASP The run times and additional information can be found in the file /nopt/nrel/apps/210929a/example/vasp/versions. The run on the GPU nodes is considerably faster than the CPU node runs. -The data set for these runs is from a standard NREL vasp benchmark. See [https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2]() This is a system of 519 atoms (Ag504C4H10S1). +The data set for these runs is from a standard NREL VASP benchmark. See [https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2](https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2). This is a system of 519 atoms (Ag504C4H10S1). -There is a NREL report that discuss running the this test case and also a smaller test case with with various setting of nodes, tasks-per-nodes and OMP_NUM_THREADS. It can be found at: [https://github.com/NREL/HPC/tree/master/applications/vasp/Performance%20Study%202](https://github.com/NREL/HPC/tree/master/applications/vasp/Performance%20Study%202) +There is an NREL report that discusses running this test case, and also a smaller test case, with various settings of nodes, tasks-per-node, and OMP_NUM_THREADS. 
It can be found at: [https://github.com/NREL/HPC/tree/code-examples/applications/vasp/Performance%20Study%202](https://github.com/NREL/HPC/tree/code-examples/applications/vasp/Performance%20Study%202) ### Running multi-node VASP jobs on Vermilion @@ -446,15 +446,15 @@ ml wget #### get input and set it up #### This is from an old benchmark test -#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2 +#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2 mkdir input -wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/INCAR?token=AAAALJZRV4QFFTS7RC6LLGLBBV67M -q -O INCAR -wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POTCAR?token=AAAALJ6E7KHVTGWQMR4RKYTBBV7SC -q -O POTCAR -wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POSCAR?token=AAAALJ5WKM2QKC3D44SXIQTBBV7P2 -q -O POSCAR -wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/KPOINTS?token=AAAALJ5YTSCJFDHUUZMZY63BBV7NU -q -O KPOINTS +wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/INCAR?token=AAAALJZRV4QFFTS7RC6LLGLBBV67M -q -O INCAR +wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POTCAR?token=AAAALJ6E7KHVTGWQMR4RKYTBBV7SC -q -O POTCAR +wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POSCAR?token=AAAALJ5WKM2QKC3D44SXIQTBBV7P2 -q -O POSCAR +wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/KPOINTS?token=AAAALJ5YTSCJFDHUUZMZY63BBV7NU -q -O KPOINTS # mpirun is recommended (necessary for multi-node calculations) I_MPI_OFI_PROVIDER=tcp mpirun -iface ens7 -np 16 vasp_std @@ -536,15 +536,15 @@ ml wget #### get input and set it up #### This is from an old benchmark test -#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2 +#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2 mkdir input -wget 
https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/INCAR?token=AAAALJZRV4QFFTS7RC6LLGLBBV67M -q -O INCAR -wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POTCAR?token=AAAALJ6E7KHVTGWQMR4RKYTBBV7SC -q -O POTCAR -wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POSCAR?token=AAAALJ5WKM2QKC3D44SXIQTBBV7P2 -q -O POSCAR -wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/KPOINTS?token=AAAALJ5YTSCJFDHUUZMZY63BBV7NU -q -O KPOINTS +wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/INCAR?token=AAAALJZRV4QFFTS7RC6LLGLBBV67M -q -O INCAR +wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POTCAR?token=AAAALJ6E7KHVTGWQMR4RKYTBBV7SC -q -O POTCAR +wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POSCAR?token=AAAALJ5WKM2QKC3D44SXIQTBBV7P2 -q -O POSCAR +wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/KPOINTS?token=AAAALJ5YTSCJFDHUUZMZY63BBV7NU -q -O KPOINTS # mpirun is recommended (necessary for multi-node calculations) I_MPI_OFI_PROVIDER=tcp mpirun -iface ens7 -np 16 vasp_std @@ -621,15 +621,15 @@ ml wget #### get input and set it up #### This is from an old benchmark test -#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2 +#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2 mkdir input -wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/INCAR?token=AAAALJZRV4QFFTS7RC6LLGLBBV67M -q -O INCAR -wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POTCAR?token=AAAALJ6E7KHVTGWQMR4RKYTBBV7SC -q -O POTCAR -wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POSCAR?token=AAAALJ5WKM2QKC3D44SXIQTBBV7P2 -q -O POSCAR -wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/KPOINTS?token=AAAALJ5YTSCJFDHUUZMZY63BBV7NU -q -O KPOINTS +wget 
https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/INCAR?token=AAAALJZRV4QFFTS7RC6LLGLBBV67M -q -O INCAR +wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POTCAR?token=AAAALJ6E7KHVTGWQMR4RKYTBBV7SC -q -O POTCAR +wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POSCAR?token=AAAALJ5WKM2QKC3D44SXIQTBBV7P2 -q -O POSCAR +wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/KPOINTS?token=AAAALJ5YTSCJFDHUUZMZY63BBV7NU -q -O KPOINTS srun --mpi=pmi2 -n 16 vasp_std @@ -701,15 +701,15 @@ ml wget #### get input and set it up #### This is from an old benchmark test -#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2 +#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2 mkdir input -wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/INCAR?token=AAAALJZRV4QFFTS7RC6LLGLBBV67M -q -O INCAR -wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POTCAR?token=AAAALJ6E7KHVTGWQMR4RKYTBBV7SC -q -O POTCAR -wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POSCAR?token=AAAALJ5WKM2QKC3D44SXIQTBBV7P2 -q -O POSCAR -wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/KPOINTS?token=AAAALJ5YTSCJFDHUUZMZY63BBV7NU -q -O KPOINTS +wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/INCAR?token=AAAALJZRV4QFFTS7RC6LLGLBBV67M -q -O INCAR +wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POTCAR?token=AAAALJ6E7KHVTGWQMR4RKYTBBV7SC -q -O POTCAR +wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POSCAR?token=AAAALJ5WKM2QKC3D44SXIQTBBV7P2 -q -O POSCAR +wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/KPOINTS?token=AAAALJ5YTSCJFDHUUZMZY63BBV7NU -q -O KPOINTS mpirun -npernode 1 vasp_std > vasp.$SLURM_JOB_ID ``` diff --git 
a/docs/Documentation/languages/bash/bash-starter.md b/docs/Documentation/languages/bash/bash-starter.md index 8ec7ab754..a9c58edb4 100644 --- a/docs/Documentation/languages/bash/bash-starter.md +++ b/docs/Documentation/languages/bash/bash-starter.md @@ -227,9 +227,9 @@ see `help declare` at the command line for more information on types that can be Further Resources -------------------------- -[NREL HPC Github](https://github.com/NREL/HPC/tree/master/general/beginner/bash) - User-contributed bash script and examples that you can use on HPC systems. +[NREL HPC Github](https://github.com/NREL/HPC/tree/code-examples/general/beginner/bash) - User-contributed bash scripts and examples that you can use on HPC systems. -[BASH cheat sheet](https://github.com/NREL/HPC/blob/master/general/beginner/bash/cheatsheet.sh) - A concise and extensive list of example commands, built-ins, control structures, and other useful bash scripting material. +[BASH cheat sheet](https://github.com/NREL/HPC/blob/code-examples/general/beginner/bash/cheatsheet.sh) - A concise and extensive list of example commands, built-ins, control structures, and other useful bash scripting material. diff --git a/docs/Documentation/languages/fortran/f90.md b/docs/Documentation/languages/fortran/f90.md index 091710ea1..b48bc0b54 100644 --- a/docs/Documentation/languages/fortran/f90.md +++ b/docs/Documentation/languages/fortran/f90.md @@ -1887,7 +1887,7 @@ end - Mutation - Nothing new in either of these files - [Source and makefile "git"](source) -- [Source and makefile "*tgz"](https://github.com/timkphd/examples/raw/master/fort/90/source/archive.tgz) +- [Source and makefile "*tgz"](https://github.com/timkphd/examples/raw/master/fort/90/source/archive.tgz) - - - - - - @@ -2908,7 +2908,7 @@ end - [http://www.nsc.liu.se/~boein/f77to90/](http://www.nsc.liu.se/~boein/f77to90/) Fortran 90 for the Fortran 77 Programmer - <b>Fortran 90 Handbook Complete ANSI/ISO Reference</b>. 
Jeanne Adams, Walt Brainerd, Jeanne Martin, Brian Smith, Jerrold Wagener - <b>Fortran 90 Programming</b>. T. Ellis, Ivor Philips, Thomas Lahey -- [https://github.com/llvm/llvm-project/blob/master/flang/docs/FortranForCProgrammers.md](https://github.com/llvm/llvm-project/blob/master/flang/docs/FortranForCProgrammers.md) +- [https://github.com/llvm/llvm-project/blob/master/flang/docs/FortranForCProgrammers.md](https://github.com/llvm/llvm-project/blob/master/flang/docs/FortranForCProgrammers.md) - [FFT stuff](../mkl/) - [Fortran 95 and beyond](../95/) diff --git a/docs/blog/2020-12-01-numba.md b/docs/blog/2020-12-01-numba.md index 6b3050fcb..aefc2c243 100644 --- a/docs/blog/2020-12-01-numba.md +++ b/docs/blog/2020-12-01-numba.md @@ -15,4 +15,4 @@ def function_to_be_compiled(): ... ``` -Importantly, many functions require *no changes or refactoring* to gain this speedup. In this [getting-started guide](https://github.com/NREL/HPC/blob/master/languages/python/numba/numba_demo.ipynb), we build an example environment on Eagle, test the performance of a Numba-compiled function using the most common implementation of the `@jit` decorator, and discuss what sorts of functions will see performance improvements when compiled. +Importantly, many functions require *no changes or refactoring* to gain this speedup. In this [getting-started guide](https://github.com/NREL/HPC/blob/code-examples/languages/python/numba/numba_demo.ipynb), we build an example environment on Eagle, test the performance of a Numba-compiled function using the most common implementation of the `@jit` decorator, and discuss what sorts of functions will see performance improvements when compiled. 
diff --git a/docs/blog/2021-05-06-tf.md b/docs/blog/2021-05-06-tf.md index 5aed1ca45..fb120e35a 100644 --- a/docs/blog/2021-05-06-tf.md +++ b/docs/blog/2021-05-06-tf.md @@ -17,4 +17,4 @@ pip install --upgrade --no-deps --force-reinstall /nopt/nrel/apps/wheels/tensorf These builds provide a significant advantage as illustrated below over the standard `conda` install of TensorFlow.  -A recent tutorial was given on this topic, for more information see the [recording](https://web.microsoftstream.com/video/af9b54ae-9158-4075-9f36-9aa2a4412ad0) or checkout the [tutorial materials](https://github.com/NREL/HPC/tree/master/workshops/Optimized_TF) +A recent tutorial was given on this topic; for more information, see the [recording](https://web.microsoftstream.com/video/af9b54ae-9158-4075-9f36-9aa2a4412ad0) or check out the [tutorial materials](https://github.com/NREL/HPC/tree/code-examples/workshops/Optimized_TF) diff --git a/docs/blog/2021-06-18-srun.md b/docs/blog/2021-06-18-srun.md index 97d50eb1e..1905b5aa2 100644 --- a/docs/blog/2021-06-18-srun.md +++ b/docs/blog/2021-06-18-srun.md @@ -43,7 +43,7 @@ This article primarily discusses options for the srun command to enable good par The page -[https://www.nrel.gov/hpc/eagle-batch-jobs.html](https://www.nrel.gov/hpc/eagle-batch-jobs.html) has information about running jobs under Slurm including a link to example batch scripts. The page [https://github.com/NREL/HPC/tree/master/slurm](https://github.com/NREL/HPC/tree/master/slurm) has many slurm examples ranging from simple to complex. This article is based on the second page. +[https://www.nrel.gov/hpc/eagle-batch-jobs.html](https://www.nrel.gov/hpc/eagle-batch-jobs.html) has information about running jobs under Slurm, including a link to example batch scripts. The page [https://github.com/NREL/HPC/tree/code-examples/slurm](https://github.com/NREL/HPC/tree/code-examples/slurm) has many Slurm examples ranging from simple to complex. This article is based on the second page. 
### 3. Why not just use mpiexec/mpirun? @@ -56,10 +56,10 @@ For our srun examples we will use two glorified "Hello World" programs, one in F repository [https://github.com/NREL/HPC.git](https://github.com/NREL/HPC.git) in the slurm/source directory or by running the *wget* commands shown below. -wget [https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/fhostone.f90](https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/fhostone.f90) -<br>wget [https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/mympi.f90](https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/mympi.f90) -<br>wget [https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/phostone.c](https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/phostone.c) -<br>wget [https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/makehello -O makefile](https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/makehello) +wget [https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/fhostone.f90](https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/fhostone.f90) +<br>wget [https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/mympi.f90](https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/mympi.f90) +<br>wget [https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/phostone.c](https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/phostone.c) +<br>wget [https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/makehello -O makefile](https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/makehello) After the files are downloaded, you can build the programs #### using the mpt MPI compilers
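The wget commands above differ only in the file name being fetched. As a sketch of that pattern (the branch and file names are taken from the text above; this helper is illustrative, not part of the repo), the raw-file URLs can be generated in one loop:

```python
# Sketch: generate the raw-file download URLs for the example sources
# listed above. Branch ("code-examples") and file names come from the text.
BASE = "https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source"
SOURCES = ["fhostone.f90", "mympi.f90", "phostone.c", "makehello"]

def raw_urls(base: str = BASE, names=SOURCES) -> list:
    """Return the full download URL for each example source file."""
    return [f"{base}/{n}" for n in names]

for url in raw_urls():
    print(url)
```

Each printed URL can then be passed to `wget` (with `-O makefile` for `makehello`, as the original commands do).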