Building Ginkgo on Perlmutter #1367

Closed · Maxwell-Rosen opened this issue Jul 10, 2023 · 13 comments · Fixed by #1368
@Maxwell-Rosen

I'm a new user of the Ginkgo project, and I'm attempting to build Ginkgo on Perlmutter, a NERSC cluster. Everything in CMake seems to pass and it detects CUDA, but it cannot find certain CUDA libraries, namely cublas, cusparse, curand, and cufft. Upon further investigation, it seems the cuda folder and the math_libs folder contain different sets of libraries.

train371@perlmutter:login17:~/hackathongkyl/gkylzero> ls /opt/nvidia/hpc_sdk/Linux_x86_64/22.7/
cmake/     comm_libs/ compilers/ cuda/      examples/  math_libs/ profilers/ REDIST/

The cublas, cusparse, curand, and cufft libraries are all in math_libs, not in cuda. How can I get Ginkgo to find the libraries in that folder?

train371@perlmutter:login17:~> ls /opt/nvidia/hpc_sdk/Linux_x86_64/22.7/cuda/lib64/
cmake                   libnppc.so.11.7.3.21     libnppif.so.11.7.3.21   libnppisu.so.11.7.3.21     libnvrtc-builtins.so.11.7
libaccinj64.so          libnppc_static.a         libnppif_static.a       libnppisu_static.a         libnvrtc-builtins.so.11.7.50
libaccinj64.so.11.7     libnppial.so             libnppig.so             libnppitc.so               libnvrtc-builtins_static.a
libaccinj64.so.11.7.50  libnppial.so.11          libnppig.so.11          libnppitc.so.11            libnvrtc.so
libcudadevrt.a          libnppial.so.11.7.3.21   libnppig.so.11.7.3.21   libnppitc.so.11.7.3.21     libnvrtc.so.11.2
libcudart.so            libnppial_static.a       libnppig_static.a       libnppitc_static.a         libnvrtc.so.11.7.50
libcudart.so.11.0       libnppicc.so             libnppim.so             libnpps.so                 libnvrtc_static.a
libcudart.so.11.7.60    libnppicc.so.11          libnppim.so.11          libnpps.so.11              libnvToolsExt.so
libcudart_static.a      libnppicc.so.11.7.3.21   libnppim.so.11.7.3.21   libnpps.so.11.7.3.21       libnvToolsExt.so.1
libcufilt.a             libnppicc_static.a       libnppim_static.a       libnpps_static.a           libnvToolsExt.so.1.0.0
libcuinj64.so           libnppidei.so            libnppist.so            libnvjpeg.so               libOpenCL.so
libcuinj64.so.11.7      libnppidei.so.11         libnppist.so.11         libnvjpeg.so.11            libOpenCL.so.1
libcuinj64.so.11.7.50   libnppidei.so.11.7.3.21  libnppist.so.11.7.3.21  libnvjpeg.so.11.7.2.34     libOpenCL.so.1.0
libculibos.a            libnppidei_static.a      libnppist_static.a      libnvjpeg_static.a         libOpenCL.so.1.0.0
libnppc.so              libnppif.so              libnppisu.so            libnvptxcompiler_static.a  stubs
libnppc.so.11           libnppif.so.11           libnppisu.so.11         libnvrtc-builtins.so
train371@perlmutter:login17:~> ls /opt/nvidia/hpc_sdk/Linux_x86_64/22.7/math_libs/lib64/
libcal.so                  libcufftMp.so.10.8.1          libcurand.so.10              libcusolver_static.a      libcutensor.so.1.5.0
libcublasLt.so             libcufft.so                   libcurand.so.10.2.10.50      libcusparse.so            libcutensor_static.a
libcublasLt.so.11          libcufft.so.10                libcurand_static.a           libcusparse.so.11         liblapack_static.a
libcublasLt.so.11.10.1.25  libcufft.so.10.7.2.50         libcusolver_lapack_static.a  libcusparse.so.11.7.3.50  libmetis_static.a
libcublasLt_static.a       libcufft_static.a             libcusolverMg.so             libcusparse_static.a      libnvblas.so
libcublas.so               libcufft_static_nocallback.a  libcusolverMg.so.11          libcutensorMg.so          libnvblas.so.11
libcublas.so.11            libcufftw.so                  libcusolverMg.so.11.3.5.50   libcutensorMg.so.1        libnvblas.so.11.10.1.25
libcublas.so.11.10.1.25    libcufftw.so.10               libcusolverMp.so             libcutensorMg.so.1.5.0    stubs
libcublas_static.a         libcufftw.so.10.7.2.50        libcusolver.so               libcutensorMg_static.a
libcufftMp.so              libcufftw_static.a            libcusolver.so.11            libcutensor.so
libcufftMp.so.10           libcurand.so                  libcusolver.so.11.3.5.50     libcutensor.so.1

Here are the modules I have loaded:

train371@perlmutter:login07:~/ginkgo_test/ginkgo> module list

Currently Loaded Modules:
  1) craype-x86-milan                        7) cpe/23.03              13) cray-libsci/23.02.1.1        19) Nsight-Systems/2022.2.1
  2) libfabric/1.15.2.0                      8) xalt/2.10.2            14) PrgEnv-gnu/8.3.3             20) cudatoolkit/11.7
  3) craype-network-ofi                      9) craype-accel-nvidia80  15) cray-mpich/8.1.25            21) nccl/2.17.1-ofi         (E)
  4) xpmem/2.5.2-2.4_3.48__gd0f7936.shasta  10) gpu/1.0                16) evp-patch
  5) gcc/11.2.0                             11) craype/2.7.20          17) python/3.9-anaconda-2021.11
  6) perftools-base/23.03.0                 12) cray-dsmml/0.2.2       18) Nsight-Compute/2022.1.1

  Where:
   E:  Experimental


Here is the output from my attempt to build Ginkgo:

train371@perlmutter:login07:~/ginkgo_test/ginkgo/build> cmake -G "Unix Makefiles" .. && make
-- The C compiler identification is GNU 11.2.0
-- The CXX compiler identification is GNU 11.2.0
-- Cray Programming Environment 2.7.20 C
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /opt/cray/pe/craype/2.7.20/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Cray Programming Environment 2.7.20 CXX
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /opt/cray/pe/craype/2.7.20/bin/CC - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found OpenMP_C: -fopenmp (found suitable version "4.5", minimum required is "3.0")
-- Found OpenMP_CXX: -fopenmp (found suitable version "4.5", minimum required is "3.0")
-- Found OpenMP: TRUE (found suitable version "4.5", minimum required is "3.0")
-- Enabling OpenMP executor
-- Found MPI_CXX: /opt/cray/pe/craype/2.7.20/bin/CC (found suitable version "3.1", minimum required is "3.1")
-- Found MPI: TRUE (found suitable version "3.1", minimum required is "3.1") found components: CXX
-- Enabling MPI support
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - /opt/nvidia/hpc_sdk/Linux_x86_64/22.7/cuda/11.7/bin/nvcc
-- Enabling CUDA executor
-- The CUDA compiler identification is NVIDIA 11.7.64
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /opt/nvidia/hpc_sdk/Linux_x86_64/22.7/cuda/11.7/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Detected GPU devices of the following architectures: 80
-- The CUDA compiler supports the following architectures: 35;37;50;52;53;60;61;62;70;72;75;80;86;87
-- Found NVTX: /opt/nvidia/hpc_sdk/Linux_x86_64/22.7/cuda/11.7/targets/x86_64-linux/include/nvtx3
-- Found OpenMP_C: -fopenmp (found suitable version "4.5", minimum required is "3.0")
-- Found OpenMP_CXX: -fopenmp (found suitable version "4.5", minimum required is "3.0")
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Performing Test Ginkgo_C_COVERAGE_SUPPORTED
-- Performing Test Ginkgo_C_COVERAGE_SUPPORTED - Success
-- Performing Test Ginkgo_C_TSAN_SUPPORTED
-- Performing Test Ginkgo_C_TSAN_SUPPORTED - Success
-- Performing Test Ginkgo_C_ASAN_SUPPORTED
-- Performing Test Ginkgo_C_ASAN_SUPPORTED - Success
-- Performing Test Ginkgo_C_LSAN_SUPPORTED
-- Performing Test Ginkgo_C_LSAN_SUPPORTED - Success
-- Performing Test Ginkgo_C_UBSAN_SUPPORTED
-- Performing Test Ginkgo_C_UBSAN_SUPPORTED - Success
-- Performing Test Ginkgo_CXX_COVERAGE_SUPPORTED
-- Performing Test Ginkgo_CXX_COVERAGE_SUPPORTED - Success
-- Performing Test Ginkgo_CXX_TSAN_SUPPORTED
-- Performing Test Ginkgo_CXX_TSAN_SUPPORTED - Success
-- Performing Test Ginkgo_CXX_ASAN_SUPPORTED
-- Performing Test Ginkgo_CXX_ASAN_SUPPORTED - Success
-- Performing Test Ginkgo_CXX_LSAN_SUPPORTED
-- Performing Test Ginkgo_CXX_LSAN_SUPPORTED - Success
-- Performing Test Ginkgo_CXX_UBSAN_SUPPORTED
-- Performing Test Ginkgo_CXX_UBSAN_SUPPORTED - Success
-- Performing Test Ginkgo_HIP_COVERAGE_SUPPORTED
-- Performing Test Ginkgo_HIP_COVERAGE_SUPPORTED - Success
-- Performing Test Ginkgo_HIP_TSAN_SUPPORTED
-- Performing Test Ginkgo_HIP_TSAN_SUPPORTED - Success
-- Performing Test Ginkgo_HIP_ASAN_SUPPORTED
-- Performing Test Ginkgo_HIP_ASAN_SUPPORTED - Success
-- Performing Test Ginkgo_HIP_LSAN_SUPPORTED
-- Performing Test Ginkgo_HIP_LSAN_SUPPORTED - Success
-- Performing Test Ginkgo_HIP_UBSAN_SUPPORTED
-- Performing Test Ginkgo_HIP_UBSAN_SUPPORTED - Success
-- GINKGO_BUILD_TESTS is ON, enabling GINKGO_BUILD_REFERENCE
-- Setting build type to 'Release' as none was specified.
-- Looking for C++ include cxxabi.h
-- Looking for C++ include cxxabi.h - found
-- Could NOT find PAPI (missing: PAPI_LIBRARY PAPI_INCLUDE_DIR)
-- Could NOT find VTune (missing: VTune_EXECUTABLE VTune_LIBRARY VTune_INCLUDE_DIR)
-- Could NOT find METIS (missing: METIS_LIBRARY METIS_INCLUDE_DIR)
-- Could NOT find GTest (missing: GTEST_LIBRARY GTEST_INCLUDE_DIR GTEST_MAIN_LIBRARY) (Required is at least version "1.10.0")
-- Looking for HWLOC - found version 2.8.0
-- Looking for hwloc_topology_init
-- Looking for hwloc_topology_init - found
-- Found HWLOC: /usr/lib64/libhwloc.so (found suitable version "2.8", minimum required is "2.1")
-- Fetching external GTest
-- Found Python: /global/common/software/nersc/pm-2022q3/sw/python/3.9-anaconda-2021.11/bin/python3.9 (found version "3.9.7") found components: Interpreter
-- Fetching external GFlags
-- Looking for C++ include unistd.h
-- Looking for C++ include unistd.h - found
-- Looking for C++ include stdint.h
-- Looking for C++ include stdint.h - found
-- Looking for C++ include inttypes.h
-- Looking for C++ include inttypes.h - found
-- Looking for C++ include sys/types.h
-- Looking for C++ include sys/types.h - found
-- Looking for C++ include sys/stat.h
-- Looking for C++ include sys/stat.h - found
-- Looking for C++ include fnmatch.h
-- Looking for C++ include fnmatch.h - found
-- Looking for C++ include stddef.h
-- Looking for C++ include stddef.h - found
-- Check size of uint32_t
-- Check size of uint32_t - done
-- Looking for strtoll
-- Looking for strtoll - found
-- Fetching external RapidJSON
-- Found NUMA: /usr/lib64/libnuma.so
-- No OpenCV found, disabling examples with video output
-- No Kokkos found, disabling examples with Kokkos assembly.
--
---------------------------------------------------------------------------------------------------------
--
--    Summary of Configuration for Ginkgo (version 1.7.0 with tag develop, shortrev 4755c23b4)
--    Ginkgo configuration:
--        CMAKE_BUILD_TYPE:                           Release
--        BUILD_SHARED_LIBS:                          ON
--        CMAKE_INSTALL_PREFIX:                       /usr/local
--        PROJECT_SOURCE_DIR:                         /global/homes/t/train371/ginkgo_test/ginkgo
--        PROJECT_BINARY_DIR:                         /global/homes/t/train371/ginkgo_test/ginkgo/build
--        CMAKE_CXX_COMPILER:                         GNU 11.2.0 on platform Linux x86_64
--                                                    /opt/cray/pe/craype/2.7.20/bin/CC
--    User configuration:
--      Enabled modules:
--        GINKGO_BUILD_OMP:                           ON
--        GINKGO_BUILD_MPI:                           ON
--        GINKGO_BUILD_REFERENCE:                     ON
--        GINKGO_BUILD_CUDA:                          ON
--        GINKGO_BUILD_HIP:                           OFF
--        GINKGO_BUILD_DPCPP:                         OFF
--      Enabled features:
--        GINKGO_MIXED_PRECISION:                     OFF
--        GINKGO_HAVE_GPU_AWARE_MPI:                  OFF
--      Tests, benchmarks and examples:
--        GINKGO_BUILD_TESTS:                         ON
--        GINKGO_FAST_TESTS:                          OFF
--        GINKGO_BUILD_EXAMPLES:                      ON
--        GINKGO_EXTLIB_EXAMPLE:
--        GINKGO_BUILD_BENCHMARKS:                    ON
--        GINKGO_BENCHMARK_ENABLE_TUNING:             OFF
--      Documentation:
--        GINKGO_BUILD_DOC:                           OFF
--        GINKGO_VERBOSE_LEVEL:                       1
--
---------------------------------------------------------------------------------------------------------
--
--      Developer Tools:
--        GINKGO_DEVEL_TOOLS:                         OFF
--        GINKGO_WITH_CLANG_TIDY:                     OFF
--        GINKGO_WITH_IWYU:                           OFF
--        GINKGO_CHECK_CIRCULAR_DEPS:                 OFF
--        GINKGO_WITH_CCACHE:                         ON
---------------------------------------------------------------------------------------------------------
--
--      Components:
--        GINKGO_BUILD_HWLOC:                         ON
--
--  Detailed information (More compiler flags, module configuration) can be found in detailed.log
--
--
--  Now, run  cmake --build .  to compile Ginkgo!
--
---------------------------------------------------------------------------------------------------------

-- Configuring done
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUBLAS
    linked by target "ginkgo_cuda" in directory /global/homes/t/train371/ginkgo_test/ginkgo/cuda
    linked by target "cusparse_linops_d" in directory /global/homes/t/train371/ginkgo_test/ginkgo/benchmark
    linked by target "cusparse_linops_s" in directory /global/homes/t/train371/ginkgo_test/ginkgo/benchmark
    linked by target "cusparse_linops_z" in directory /global/homes/t/train371/ginkgo_test/ginkgo/benchmark
    linked by target "cusparse_linops_c" in directory /global/homes/t/train371/ginkgo_test/ginkgo/benchmark
CUFFT
    linked by target "ginkgo_cuda" in directory /global/homes/t/train371/ginkgo_test/ginkgo/cuda
CURAND
    linked by target "ginkgo_cuda" in directory /global/homes/t/train371/ginkgo_test/ginkgo/cuda
CUSPARSE
    linked by target "ginkgo_cuda" in directory /global/homes/t/train371/ginkgo_test/ginkgo/cuda
    linked by target "cusparse_linops_d" in directory /global/homes/t/train371/ginkgo_test/ginkgo/benchmark
    linked by target "cusparse_linops_s" in directory /global/homes/t/train371/ginkgo_test/ginkgo/benchmark
    linked by target "cusparse_linops_z" in directory /global/homes/t/train371/ginkgo_test/ginkgo/benchmark
    linked by target "cusparse_linops_c" in directory /global/homes/t/train371/ginkgo_test/ginkgo/benchmark

-- Generating done
CMake Generate step failed.  Build files cannot be regenerated correctly.
@yhmtsai (Member) commented Jul 11, 2023

Thanks for reporting this issue.

  • Add ${CUDATOOLKIT_HOME}/../../math_libs/include to CPATH and ${CUDATOOLKIT_HOME}/../../math_libs/lib64 to LIBRARY_PATH
  • Rebuild Ginkgo from scratch (see the sketch below)

That should solve the problem.
The CUDA toolkit shipped with NVHPC places the math libraries in a directory that is not listed in CMake's CMAKE_CUDA_IMPLICIT_LINK_DIRECTORIES, so CMake cannot find them.
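
A minimal sketch of that setup, assuming the cudatoolkit module sets CUDATOOLKIT_HOME (adjust the relative paths if your NVHPC layout differs):

export CPATH="${CUDATOOLKIT_HOME}/../../math_libs/include:$CPATH"
export LIBRARY_PATH="${CUDATOOLKIT_HOME}/../../math_libs/lib64:$LIBRARY_PATH"
rm -rf build && mkdir build && cd build   # rebuild from scratch
cmake -G "Unix Makefiles" .. && make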

@upsj (Member) commented Jul 11, 2023

I'd like to get to the bottom of this, though. We've had some workarounds in place for NVHPC, but it seems Perlmutter's setup deviates from the NVHPC installations we've seen so far. IMO this is something that needs to be fixed at the CMake level; we can't keep adding system-specific workarounds.

@Maxwell-Rosen (Author)

This fixed my issue with CUDA! Thank you so much. Separately, when the compilation was nearly complete, Ginkgo failed to find the MPI headers. I have cray-mpich/8.1.25 loaded, so I'm not sure why this is an issue. I tried adding its include and library directories to CPATH and LIBRARY_PATH and hit the same error. From the configuration output, it does seem like MPI was detected, though with a different version than the one I have loaded.

[ 90%] Building CUDA object examples/custom-matrix-format/CMakeFiles/custom-matrix-format.dir/stencil_kernel.cu.o
In file included from /global/homes/t/train371/ginkgo_test/ginkgo/include/ginkgo/ginkgo.hpp:58,
                 from /global/homes/t/train371/ginkgo_test/ginkgo/examples/custom-matrix-format/stencil_kernel.cu:35:
/global/homes/t/train371/ginkgo_test/ginkgo/include/ginkgo/core/base/mpi.hpp:53:10: fatal error: mpi.h: No such file or directory
   53 | #include <mpi.h>
      |          ^~~~~~~
compilation terminated.
make[2]: *** [examples/custom-matrix-format/CMakeFiles/custom-matrix-format.dir/build.make:90: examples/custom-matrix-format/CMakeFiles/custom-matrix-format.dir/stencil_kernel.cu.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:17629: examples/custom-matrix-format/CMakeFiles/custom-matrix-format.dir/all] Error 2
make: *** [Makefile:166: all] Error 2

@MarcelKoch (Member) commented Jul 12, 2023

@MrQuell From the log in your first post, Ginkgo has correctly detected MPI. The error you see is only in one of our examples. If you don't need that example and just use the library, you can ignore it while we work on a patch.
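
For reference, a minimal sketch that skips the examples entirely, using the GINKGO_BUILD_EXAMPLES option shown in the configuration summary above:

# configure without the examples; the library itself builds fine
cmake -G "Unix Makefiles" -DGINKGO_BUILD_EXAMPLES=OFF .. && make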

@upsj (Member) commented Jul 12, 2023

It seems like something might be going wrong in CMake's MPI detection; the include path should be picked up. Could you attach your CMakeCache.txt, or look up the values of MPI_*_INCLUDE_DIRS in there? For a full deep dive into what is going on, we could also use the output of cmake --debug-find --trace-expand ... run on a fresh build directory.
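
For instance, a sketch of how to capture that output (the directory and log file names are just suggestions):

mkdir debug-build && cd debug-build
cmake --debug-find --trace-expand .. > cmake-debug.log 2>&1
grep "MPI_.*_INCLUDE_DIRS" CMakeCache.txt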

@Maxwell-Rosen (Author) commented Jul 12, 2023

Here are the MPI_*_INCLUDE_DIRS entries in CMakeCache.txt:

train371@perlmutter:login19:~/ginkgo_test/ginkgo/build> grep "MPI_.*_INCLUDE_DIRS" CMakeCache.txt
MPI_CXX_ADDITIONAL_INCLUDE_DIRS:STRING=
MPI_CXX_COMPILER_INCLUDE_DIRS:STRING=
//ADVANCED property for variable: MPI_CXX_ADDITIONAL_INCLUDE_DIRS
MPI_CXX_ADDITIONAL_INCLUDE_DIRS-ADVANCED:INTERNAL=1
//ADVANCED property for variable: MPI_CXX_COMPILER_INCLUDE_DIRS
MPI_CXX_COMPILER_INCLUDE_DIRS-ADVANCED:INTERNAL=1

I am interested in running some examples, but I can complete the build by excluding them. I can run the debug cmake later and paste its output here.

@upsj (Member) commented Jul 17, 2023

A minimal reproducer using /opt/cray/pe/craype/2.7.20/bin/CC as the CXX compiler also fails, but when setting CMAKE_CUDA_HOST_COMPILER=/opt/cray/pe/craype/2.7.20/bin/CC, it succeeds. The same does not work with Ginkgo, because Ginkgo seems to override the host compiler option. I'll investigate further.
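
Roughly, the working standalone invocation looks like this (a sketch of the minimal reproducer, not Ginkgo's build):

# force the Cray compiler wrapper as both the CXX and the CUDA host compiler
cmake -DCMAKE_CXX_COMPILER=/opt/cray/pe/craype/2.7.20/bin/CC \
      -DCMAKE_CUDA_HOST_COMPILER=/opt/cray/pe/craype/2.7.20/bin/CC ..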

@upsj (Member) commented Jul 17, 2023

@MrQuell can you try a fresh build (including the CUDA environment variable fixes) where you add -DGINKGO_BUILD_CUDA=ON to the initial CMake invocation? If I'm correct, that should fix your MPI issues.

@upsj (Member) commented Jul 17, 2023

For context, here is the corresponding CMake issue: https://gitlab.kitware.com/cmake/cmake/-/issues/25093

@Maxwell-Rosen (Author) commented Jul 17, 2023

It does seem like that CMake issue was resolved. However, I still get the error that mpi.h is not found. I am up to date with the GitHub repository and ran all the suggested commands. Perhaps the CMake on Perlmutter does not yet include the latest commit from the issue you raised.

export CPATH="${CUDATOOLKIT_HOME}/../../math_libs/include:$CPATH"
export LIBRARY_PATH="${CUDATOOLKIT_HOME}/../../math_libs/lib64:$LIBRARY_PATH"
mkdir build; cd build
cmake -G "Unix Makefiles" DGINKGO_BUILD_CUDA=ON .. && make

@upsj (Member) commented Jul 17, 2023

Just to make sure: did you use -DGINKGO_BUILD_CUDA=ON or DGINKGO_BUILD_CUDA=ON? It should be the former. By enabling this flag, we sidestep the issue inside CMake.

@Maxwell-Rosen (Author)

I apologize for the confusion. I did use -DGINKGO_BUILD_CUDA=ON. I must have copied an earlier command from my history that failed.

When I run that command now, I get the following at the end of the build:

Consolidate compiler generated dependencies of target custom-matrix-format
[ 90%] Building CUDA object examples/custom-matrix-format/CMakeFiles/custom-matrix-format.dir/stencil_kernel.cu.o
In file included from /global/homes/t/train371/ginkgo_test/ginkgo/include/ginkgo/ginkgo.hpp:58,
                 from /global/homes/t/train371/ginkgo_test/ginkgo/examples/custom-matrix-format/stencil_kernel.cu:35:
/global/homes/t/train371/ginkgo_test/ginkgo/include/ginkgo/core/base/mpi.hpp:53:10: fatal error: mpi.h: No such file or directory
   53 | #include <mpi.h>
      |          ^~~~~~~
compilation terminated.
make[2]: *** [examples/custom-matrix-format/CMakeFiles/custom-matrix-format.dir/build.make:90: examples/custom-matrix-format/CMakeFiles/custom-matrix-format.dir/stencil_kernel.cu.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:17629: examples/custom-matrix-format/CMakeFiles/custom-matrix-format.dir/all] Error 2
make: *** [Makefile:166: all] Error 2
train371@perlmutter:login40:~/ginkgo_test/ginkgo/build> cmake -G "Unix Makefiles" -DGINKGO_BUILD_CUDA=ON .. && make

@upsj (Member) commented Jul 18, 2023

Got it, thanks. Now #1368 should fix all issues in one go; can you give it a try? I also checked on Perlmutter with your modules loaded; the only thing I needed to load on top was cmake, since the older 3.20 version doesn't know the NVHPC directory layout yet.
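
For reference, a sketch of the full sequence on Perlmutter, assuming the cmake module provides a version newer than 3.20 and the branch from #1368 is checked out:

module load cmake                         # newer CMake that knows the NVHPC layout
export CPATH="${CUDATOOLKIT_HOME}/../../math_libs/include:$CPATH"
export LIBRARY_PATH="${CUDATOOLKIT_HOME}/../../math_libs/lib64:$LIBRARY_PATH"
rm -rf build && mkdir build && cd build
cmake -G "Unix Makefiles" -DGINKGO_BUILD_CUDA=ON .. && make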
