How to Set Up the Remote Part Automatically Using the Helper Script
- Connect to the remote cluster using a terminal.
- First, download the helper script to your home directory on the cluster with the following command:
wget https://github.com/fiji-hpc/hpc-workflow-manager/raw/master/hpc-workflow-manager-client/src/main/resources/fiji-hpc-helper.sh
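If wget is not available on the login node, the same file can be downloaded with curl instead (this assumes curl is installed on the cluster; the URL is identical):
curl -L -O https://github.com/fiji-hpc/hpc-workflow-manager/raw/master/hpc-workflow-manager-client/src/main/resources/fiji-hpc-helper.sh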
- Run the helper script to get a list of the available options and helpful information:
bash fiji-hpc-helper.sh
- Example output:
* Please select at least one of the two options:
  1) -openMpiModule, install a custom Open MPI module localy.
  2) -parallelTools, install Fiji with the parallel macro and OpenMPI Ops plugins.
  3) -installersRemoval, remove files used during each one of the two above options.
* Note: The script will operate (create folders) in the current working directory (that means /home/dsv)
* Note: With the option (1) it will add files into folder: /home/dsv/Modules/modulefiles
ℹ️ In this example, a cluster with PBS Professional was used. The script is also compatible with IBM Spectrum LSF and the Slurm Workload Manager; the steps are similar.
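If you are not sure which scheduler your cluster runs, a quick way to check is to see which client commands are available on the login node (a simple sketch using the standard command -v; qsub belongs to PBS, sbatch to Slurm, and bsub to LSF):
command -v qsub sbatch bsub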
- Run the script with the first option:
bash fiji-hpc-helper.sh -openMpiModule
- An example of the output on a supercomputer with the PBS Professional scheduler could look like this:
* Note: More details can be monitored in the currently populated log file fiji-hpc-helper-1661966490.log
* Custom Open MPI Environment Module installation selected.
* Found wget!
* Found Environment Modules!
* Found GCC Environment Module!
* Will use the following GCC Environment Module: GCC/8.3.0
* Found OpenPBS or PBS Pro!
* Will use --with-tm=/opt/pbs option in Open MPI configuration.
* Downloading Open MPI. (This might take a while, please wait.)
* Extracting Open MPI archive!
* ERROR: Scheduler directory /opt/pbs must exist and be accessible!
* Generated log file: fiji-hpc-helper-1661966490.log
* Try running this script in an interactive job. In PBS for example run: qsub -q qexp -l select=1 -I
ℹ️ If there are any errors that you do not understand at any stage, you can look at the log file generated by the helper script run, as indicated in its output. In this example, you could view the contents of the log with:
cat fiji-hpc-helper-1661966490.log
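If the log is long, it can help to filter it for errors and warnings first, for example (a minimal sketch using grep; replace the file name with the log generated by your own run):
grep -iE 'error|warning' fiji-hpc-helper-1661966490.log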
In this case the helper script failed to install the custom Open MPI module due to a lack of access rights to the necessary directory where the scheduler is installed.
- To fix this, we only need to follow the instructions in the output: run the helper script in an interactive job on a worker node of the remote cluster. The worker node will have access to the required PBS Professional directory. Run the suggested command:
qsub -q qexp -l select=1 -I
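The qsub command above is specific to PBS Professional, and the qexp queue is specific to the cluster used in this example. On other schedulers an interactive job is requested differently; the commands below are generic sketches, and any queue or partition names must be replaced with those of your own cluster:
# Slurm Workload Manager:
salloc --nodes=1
# IBM Spectrum LSF:
bsub -Is bash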
- Once the interactive job has started and you can enter input in the terminal, rerun the command:
bash fiji-hpc-helper.sh -openMpiModule
- The helper script will resume the installation. It may produce the following output:
* Note: More details can be monitored in the currently populated log file fiji-hpc-helper-1661967054.log
* Custom Open MPI Environment Module installation selected.
* Found wget!
* Found Environment Modules!
* Found GCC Environment Module!
* Will use the following GCC Environment Module: GCC/8.3.0
* Found OpenPBS or PBS Pro!
* Will use --with-tm=/opt/pbs option in Open MPI configuration.
* Open MPI has already been downloaded!
* Extracting Open MPI archive!
* Scheduler directory /opt/pbs found!
* About to configure Open MPI. (This will take a while, please wait.)
* About to install Open MPI. (This WILL take very long, please wait.)
* About to create custom Open MPI Environment Module: /home/dsv/Modules/modulefiles/OpenMpi
* The custom Environment Module >> OpenMpi/4.1.1-GCC8.3.0-CustomModule << should appear in the list bellow:
---------------------------------------------------------- /apps/modules/chem ----------------------------------------------------------
   ORCA/4.2.1-OpenMPI-3.1.4    ORCA/5.0.1-OpenMPI-4.1.1    ORCA/5.0.3-OpenMPI-4.1.1 (D)
----------------------------------------------------------- /apps/modules/mpi -----------------------------------------------------------
   OpenMPI/3.1.4-GCC-6.3.0-2.27                      OpenMPI/4.0.7-NVHPC-21.9-UCX-1.11.2-CUDA-11.4.1
   OpenMPI/4.0.3-GCC-9.3.0                           OpenMPI/4.1.1-GCC-10.2.0-AOCL-3.0.1-AOCC-3.1.0
   OpenMPI/4.0.5-GCC-10.2.0                          OpenMPI/4.1.1-GCC-10.2.0-Java-1.8.0_221
   OpenMPI/4.0.5-gcccuda-2020b                       OpenMPI/4.1.1-GCC-10.2.0
   OpenMPI/4.0.5-iccifort-2020.4.304                 OpenMPI/4.1.1-GCC-10.3.0
   OpenMPI/4.0.5-NVHPC-21.2-CUDA-11.2.2              OpenMPI/4.1.1-GCC-11.2.0
   OpenMPI/4.0.5-NVHPC-21.2-CUDA-11.3.0              OpenMPI/4.1.2-GCC-11.2.0-Java-1.8.0_221
   OpenMPI/4.0.6-NVHPC-21.9-CUDA-11.4.1-v2           OpenMPI/4.1.2-GCC-11.2.0
   OpenMPI/4.0.6-NVHPC-21.9-CUDA-11.4.1              OpenMPI/4.1.2-NVHPC-22.2-CUDA-11.6.0-v2
   OpenMPI/4.0.6-NVHPC-21.11-CUDA-11.4.1-v2          OpenMPI/4.1.2-NVHPC-22.2-CUDA-11.6.0
   OpenMPI/4.0.7-NVHPC-21.9-CUDA-11.4.1              OpenMPI/4.1.4-GCC-11.3.0 (D)
   OpenMPI/4.0.7-NVHPC-21.9-UCX-1.9.0-CUDA-11.4.1
---------------------------------------------------- /home/dsv/Modules/modulefiles -----------------------------------------------------
   OpenMpi/4.1.1-GCC8.3.0-CustomModule

  Where:
   D:  Default Module

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".

* Installation of Custom Open MPI Environment Module finished.
* The new module can be later loaded with the command:
* module load OpenMpi/4.1.1-GCC8.3.0-CustomModule
* Generated log file: fiji-hpc-helper-1661967054.log
- In the case above, we can see that the custom Open MPI module is present in the list of available modules, so the installation was successful.
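To double-check that the new module is usable, you can load it and ask the Open MPI launcher for its version (the module name is the one reported by the helper script; mpirun is part of every Open MPI installation):
module load OpenMpi/4.1.1-GCC8.3.0-CustomModule
mpirun --version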
- Exit the interactive job and return to the login node:
exit
ℹ️ Make a note of the custom module name, OpenMpi/4.1.1-GCC8.3.0-CustomModule. You may need to enter it in the Advanced Settings of the parallel paradigm profile if it is not detected automatically by HPC Workflow Manager.
- Run the script with the second option specified:
bash fiji-hpc-helper.sh -parallelTools
- Here is a sample of the screen output of a successful run:
* Note: More details can be monitored in the currently populated log file fiji-hpc-helper-1661966241.log
* Fiji and parallel macro and OpenMPI Ops plugins installation selected.
* Found wget!
* Found Git!
* WARNING: Did not find Java Developement Kit 8! I will try to find and load an Environment Module!
* Found Java 8 Environment Module: Java/1.8.0_221!
* WARNING: Did not find Maven! I try to find and load a module!
* WARNING: Did not find a Maven Module! I will install it localy!
* Downloading maven!
* Maven installed!
* Downloading Fiji. (This will take a while, please wait.)
* About to install Fiji!
* Fiji installed!
* Cloning parallel macro localy!
* Parallel macro plugin installed!
* Cloning OpenMPI Ops localy!
* OpenMPI Ops plugin installed!
* Installation of Fiji with the parallel macro and OpenMPI Ops plugins finished SUCCESSFULLY!
* Generated log file: fiji-hpc-helper-1661966241.log
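As a quick sanity check you can confirm that the two plugin jars ended up in the Fiji plugins folder. The path below assumes the helper script installed Fiji into a Fiji.app folder inside the working directory it reported (here /home/dsv), and the grep pattern is only a guess at the jar file names; adjust both if your run used different locations or names:
ls /home/dsv/Fiji.app/plugins | grep -iE 'parallel|mpi'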
- The remote parts of the HPC-ParallelTools should now be installed!
- Run the helper script with the clean-up option enabled:
bash fiji-hpc-helper.sh -installersRemoval
- This is a sample of output:
* Note: More details can be monitored in the currently populated log file fiji-hpc-helper-1661969483.log
* Removal of installers selected.
* Removing installers!
* About to delete directory apache-maven-3.8.6.
* Directory apache-maven-3.8.6 deleted!
* About to delete file apache-maven-3.8.6-bin.zip.
* Item apache-maven-3.8.6-bin.zip deleted!
* About to delete file fiji-linux64.zip.
* Item fiji-linux64.zip deleted!
* About to delete directory openmpi-4.1.1.
* Directory openmpi-4.1.1 deleted!
* About to delete file openmpi-4.1.1.tar.gz.
* Item openmpi-4.1.1.tar.gz deleted!
* About to delete directory parallel-macro.
* Directory parallel-macro deleted!
* About to delete directory scijava-parallel-mpi.
* Directory scijava-parallel-mpi deleted!
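To confirm that the files and directories listed above are really gone, you can try to list them explicitly; the names are the ones reported by the clean-up run, and no output means everything was removed:
ls -d apache-maven-3.8.6 apache-maven-3.8.6-bin.zip fiji-linux64.zip openmpi-4.1.1 openmpi-4.1.1.tar.gz parallel-macro scijava-parallel-mpi 2>/dev/null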
- Close the terminal window.
Please report any issues with the helper script, or ask for help, in the issue tracker.
Short Guide Worksheets
- Manually install cluster-side tools
- Note: The cluster-side tools are technically the Parallel Macro and OpenMPI Ops
- Download and use your own cluster
- Note: A small homemade cluster for testing, or when you cannot access a big HPC
- Building your own cluster from scratch and configuring it
- Note: You will learn and understand everything that's behind the scenes
- Additional Useful Information