This repository was archived by the owner on May 27, 2021. It is now read-only.

Can't access GPUs, get "ERROR: CUDA error: invalid device context (code 201, ERROR_INVALID_CONTEXT)" #620

Closed
la3lma opened this issue Apr 3, 2020 · 5 comments

la3lma commented Apr 3, 2020

Hi

I just got access to a nice machine with plenty of GPUs but they don't seem to be available for Julia:


$ nvidia-smi
Fri Apr  3 14:44:44 2020       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.64       Driver Version: 440.64       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 208...  On   | 00000000:04:00.0 Off |                  N/A |
| 27%   24C    P8     1W / 250W |   1108MiB / 11019MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce RTX 208...  On   | 00000000:05:00.0 Off |                  N/A |
| 27%   24C    P8    21W / 250W |     11MiB / 11019MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce RTX 208...  On   | 00000000:06:00.0 Off |                  N/A |
| 27%   24C    P8    20W / 250W |     11MiB / 11019MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GeForce RTX 208...  On   | 00000000:07:00.0 Off |                  N/A |
| 27%   25C    P8     1W / 250W |     11MiB / 11019MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   4  GeForce RTX 208...  On   | 00000000:08:00.0 Off |                  N/A |
| 27%   24C    P8    20W / 250W |     11MiB / 11019MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   5  GeForce RTX 208...  On   | 00000000:0B:00.0 Off |                  N/A |
| 27%   25C    P8    19W / 250W |     11MiB / 11019MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   6  GeForce RTX 208...  On   | 00000000:0C:00.0 Off |                  N/A |
| 27%   25C    P8    19W / 250W |     11MiB / 11019MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   7  GeForce RTX 208...  On   | 00000000:0D:00.0 Off |                  N/A |
| 27%   23C    P8    18W / 250W |     11MiB / 11019MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   8  GeForce RTX 208...  On   | 00000000:0E:00.0 Off |                  N/A |
| 27%   26C    P8    21W / 250W |     11MiB / 11019MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   9  GeForce RTX 208...  On   | 00000000:0F:00.0 Off |                  N/A |
| 27%   25C    P8     1W / 250W |     11MiB / 11019MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     19629      C   /opt/conda/bin/python                        787MiB |
|    0     26698      C   ...geHD/userHome/rmz/julia-1.4.0/bin/julia   310MiB |
+-----------------------------------------------------------------------------+

... so there should be plenty of hardware available. Having read a few other error reports about similar issues, I also tested this:

 apt list | grep -i cupti

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

libcupti-dev/bionic 9.1.85-3ubuntu1 amd64
libcupti-doc/bionic 9.1.85-3ubuntu1 all
libcupti9.1/bionic 9.1.85-3ubuntu1 amd64

... but back to the main story and the error messages:

$ julia
               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.4.0 (2020-03-21)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

julia> using CuArrays
┌ Warning: Incompatibility detected between CUDA and LLVM 8.0+; disabling debug info emission for CUDA kernels
└ @ CUDAnative ~/.julia/packages/CUDAnative/hfulr/src/CUDAnative.jl:114
[ Info: CUDAnative.jl failed to initialize, GPU functionality unavailable (set JULIA_CUDA_SILENT or JULIA_CUDA_VERBOSE to silence or expand this message)

julia> cu([1,2,3])
ERROR: CUDA error: invalid device context (code 201, ERROR_INVALID_CONTEXT)
Stacktrace:
 [1] throw_api_error(::CUDAdrv.cudaError_enum) at /storageHD/userHome/rmz/.julia/packages/CUDAdrv/b1mvw/src/error.jl:131
 [2] macro expansion at /storageHD/userHome/rmz/.julia/packages/CUDAdrv/b1mvw/src/error.jl:144 [inlined]
 [3] cuMemAlloc_v2 at /storageHD/userHome/rmz/.julia/packages/CUDAdrv/b1mvw/src/libcuda.jl:313 [inlined]
 [4] alloc(::Type{CUDAdrv.Mem.DeviceBuffer}, ::Int32) at /storageHD/userHome/rmz/.julia/packages/CUDAdrv/b1mvw/src/memory.jl:70
 [5] macro expansion at /storageHD/userHome/rmz/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
 [6] macro expansion at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/memory.jl:61 [inlined]
 [7] macro expansion at ./util.jl:234 [inlined]
 [8] actual_alloc(::Int32) at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/memory.jl:60
 [9] actual_alloc at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/memory/binned.jl:55 [inlined]
 [10] macro expansion at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/memory/binned.jl:198 [inlined]
 [11] macro expansion at /storageHD/userHome/rmz/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
 [12] pool_alloc(::Int32, ::Int32) at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/memory/binned.jl:197
 [13] (::CuArrays.BinnedPool.var"#12#13"{Int32,Int32,Set{CuArrays.BinnedPool.Block},Array{CuArrays.BinnedPool.Block,1}})() at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/memory/binned.jl:293
 [14] lock(::CuArrays.BinnedPool.var"#12#13"{Int32,Int32,Set{CuArrays.BinnedPool.Block},Array{CuArrays.BinnedPool.Block,1}}, ::ReentrantLock) at ./lock.jl:161
 [15] alloc(::Int32) at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/memory/binned.jl:292
 [16] macro expansion at /storageHD/userHome/rmz/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
 [17] macro expansion at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/memory.jl:159 [inlined]
 [18] macro expansion at ./util.jl:234 [inlined]
 [19] alloc at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/memory.jl:158 [inlined]
 [20] CuArray{Float32,1,P} where P(::UndefInitializer, ::Tuple{Int32}) at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/array.jl:92
 [21] CuArray at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/array.jl:100 [inlined]
 [22] similar at ./abstractarray.jl:671 [inlined]
 [23] convert at /storageHD/userHome/rmz/.julia/packages/GPUArrays/1wgPO/src/construction.jl:80 [inlined]
 [24] adapt_storage at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/array.jl:239 [inlined]
 [25] adapt_structure at /storageHD/userHome/rmz/.julia/packages/Adapt/m5jFF/src/Adapt.jl:9 [inlined]
 [26] adapt at /storageHD/userHome/rmz/.julia/packages/Adapt/m5jFF/src/Adapt.jl:6 [inlined]
 [27] cu(::Array{Int32,1}) at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/array.jl:314
 [28] top-level scope at REPL[2]:1

julia> using CUDAdrv; CUDAdrv.CuDevice(0)
CuDevice(0): GeForce RTX 2080 Ti

Following the advice to set the JULIA_CUDA_VERBOSE environment variable, I get this result:

$  JULIA_CUDA_VERBOSE=true julia
               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.4.0 (2020-03-21)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

julia> using CuArrays
┌ Warning: Incompatibility detected between CUDA and LLVM 8.0+; disabling debug info emission for CUDA kernels
└ @ CUDAnative ~/.julia/packages/CUDAnative/hfulr/src/CUDAnative.jl:114
┌ Error: CUDAnative.jl failed to initialize
│   exception =
│    Your CUDA installation does not provide libcudadevrt
│    Stacktrace:
│     [1] error(::String) at ./error.jl:33
│     [2] __init__() at /storageHD/userHome/rmz/.julia/packages/CUDAnative/hfulr/src/CUDAnative.jl:146
│     [3] _include_from_serialized(::String, ::Array{Any,1}) at ./loading.jl:697
│     [4] _require_search_from_serialized(::Base.PkgId, ::String) at ./loading.jl:781
│     [5] _tryrequire_from_serialized(::Base.PkgId, ::UInt64, ::String) at ./loading.jl:712
│     [6] _require_search_from_serialized(::Base.PkgId, ::String) at ./loading.jl:770
│     [7] _require(::Base.PkgId) at ./loading.jl:1006
│     [8] require(::Base.PkgId) at ./loading.jl:927
│     [9] require(::Module, ::Symbol) at ./loading.jl:922
│     [10] eval(::Module, ::Any) at ./boot.jl:331
│     [11] eval_user_input(::Any, ::REPL.REPLBackend) at /buildworker/worker/package_linux32/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:86
│     [12] macro expansion at /buildworker/worker/package_linux32/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:118 [inlined]
│     [13] (::REPL.var"#26#27"{REPL.REPLBackend})() at ./task.jl:358
└ @ CUDAnative ~/.julia/packages/CUDAnative/hfulr/src/CUDAnative.jl:190
┌ Warning: CuArrays.jl did not initialize because CUDAdrv.jl or CUDAnative.jl failed to
└ @ CuArrays ~/.julia/packages/CuArrays/A6GUx/src/CuArrays.jl:64

julia> 

... do you have any suggestions about what I should do next? It seems like the text:

exception =
│    Your CUDA installation does not provide libcudadevrt

... is at the crux of the problem, but I don't know how to fix it. Do you have any suggestions?

maleadt (Member) commented Apr 4, 2020

It seems like the text:

exception =
│    Your CUDA installation does not provide libcudadevrt

... is at the crux of the problem

Indeed, once one part of the stack fails to initialize, all subsequent errors are meaningless; only the original initialization failure matters. You should have a libcudadevrt somewhere; if not, your installation is broken or missing pieces. Do you have that file somewhere? If so, CUDAapi.jl is not detecting it; try running with JULIA_DEBUG=CUDAapi. As a last resort, you could also upgrade and use the latest CuArrays/CUDAnative, which install CUDA automatically using artifacts.
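For concreteness, the first check suggested above can be sketched as a shell one-liner (the search prefixes are common defaults, not specific to any one machine):

```shell
# Hedged sketch: look for the CUDA device runtime library under common
# installation prefixes. An empty result suggests the toolkit install is
# missing the file; a hit means CUDAapi.jl is simply not finding it.
hits=$(find /usr/local /opt -name 'libcudadevrt.a' 2>/dev/null)
if [ -n "$hits" ]; then
  printf '%s\n' "$hits"
else
  echo "libcudadevrt.a not found under the default prefixes"
fi
```

If the file does exist, starting Julia as `JULIA_DEBUG=CUDAapi julia` (as suggested above) will log which directories CUDAapi.jl actually searches.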

la3lma (Author) commented Apr 4, 2020

Thank you for your reply. Here is some more information, including what you suggested, but also some commands showing where the libcudadevrt library is located:

$ locate libcudadevrt
/usr/local/cuda-10.0/lib64/libcudadevrt.a
$ ls -la /usr/local/cuda-10.0/lib64/libcudadevrt.a
-rw-r--r-- 1 root root 695156 Feb 20  2019 /usr/local/cuda-10.0/lib64/libcudadevrt.a
$ JULIA_CUDA_VERBOSE=true JULIA_DEBUG=CUDAapi  julia
               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.4.0 (2020-03-21)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

julia> using CuArrays
┌ Warning: Incompatibility detected between CUDA and LLVM 8.0+; disabling debug info emission for CUDA kernels
└ @ CUDAnative ~/.julia/packages/CUDAnative/hfulr/src/CUDAnative.jl:114
┌ Debug: Request to look for binary ptxas
│   locations = 0-element Array{String,1}
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Looking for binary ptxas
│   locations =
│    10-element Array{String,1}:
│     "/usr/local/cuda-10.0/bin"
│     "/usr/local/sbin"
│     "/usr/local/bin"
│     "/usr/sbin"
│     "/usr/bin"
│     "/sbin"
│     "/bin"
│     "/usr/games"
│     "/snap/bin"
│     "/storageHD/userHome/rmz/julia-1.4.0/bin"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Found binary ptxas at /usr/local/cuda-10.0/bin
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/discovery.jl:141
┌ Debug: Looking for CUDA toolkit via ptxas binary
│   path = "/usr/local/cuda-10.0/bin/ptxas"
│   dir = "/usr/local/cuda-10.0"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Request to look for library cudart
│   locations = 0-element Array{String,1}
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Looking for library libcudart.so, libcudart.so.1, libcudart.so.1.0, libcudart.so.1.1, libcudart.so.2, libcudart.so.2.0, libcudart.so.2.1, libcudart.so.2.2, libcudart.so.3, libcudart.so.3.0, libcudart.so.3.1, libcudart.so.3.2, libcudart.so.4, libcudart.so.4.0, libcudart.so.4.1, libcudart.so.4.2, libcudart.so.5, libcudart.so.5.0, libcudart.so.5.5, libcudart.so.6, libcudart.so.6.0, libcudart.so.6.5, libcudart.so.7, libcudart.so.7.0, libcudart.so.7.5, libcudart.so.8, libcudart.so.8.0, libcudart.so.9, libcudart.so.9.0, libcudart.so.9.1, libcudart.so.9.2, libcudart.so.10, libcudart.so.10.0, libcudart.so.10.1, libcudart.so.10.2
│   locations = 0-element Array{String,1}
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Looking for CUDA toolkit via default installation directories
│   dirs =
│    1-element Array{String,1}:
│     "/usr/local/cuda-10.0"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Found CUDA toolkit at /usr/local/cuda-10.0
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/discovery.jl:281
┌ Debug: Request to look for binary nvdisasm
│   locations =
│    1-element Array{String,1}:
│     "/usr/local/cuda-10.0"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Looking for binary nvdisasm
│   locations =
│    12-element Array{String,1}:
│     "/usr/local/cuda-10.0"
│     "/usr/local/cuda-10.0/bin"
│     "/usr/local/cuda-10.0/bin"
│     "/usr/local/sbin"
│     "/usr/local/bin"
│     "/usr/sbin"
│     "/usr/bin"
│     "/sbin"
│     "/bin"
│     "/usr/games"
│     "/snap/bin"
│     "/storageHD/userHome/rmz/julia-1.4.0/bin"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Found binary nvdisasm at /usr/local/cuda-10.0/bin
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/discovery.jl:141
┌ Debug: CUDA toolkit identified as 10.0.130
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/discovery.jl:297
┌ Debug: Request to look for libdevice
│   locations =
│    1-element Array{String,1}:
│     "/usr/local/cuda-10.0"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Look for libdevice
│   locations =
│    2-element Array{String,1}:
│     "/usr/local/cuda-10.0"
│     "/usr/local/cuda-10.0/nvvm/libdevice"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Found unified device library at /usr/local/cuda-10.0/nvvm/libdevice/libdevice.10.bc
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/discovery.jl:327
┌ Debug: Request to look for libcudadevrt 
│   locations =
│    1-element Array{String,1}:
│     "/usr/local/cuda-10.0"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Looking for CUDA device runtime library libcudadevrt.a
│   locations =
│    2-element Array{String,1}:
│     "/usr/local/cuda-10.0"
│     "/usr/local/cuda-10.0/lib"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Error: CUDAnative.jl failed to initialize
│   exception =
│    Your CUDA installation does not provide libcudadevrt
│    Stacktrace:
│     [1] error(::String) at ./error.jl:33
│     [2] __init__() at /storageHD/userHome/rmz/.julia/packages/CUDAnative/hfulr/src/CUDAnative.jl:146
│     [3] _include_from_serialized(::String, ::Array{Any,1}) at ./loading.jl:697
│     [4] _require_search_from_serialized(::Base.PkgId, ::String) at ./loading.jl:781
│     [5] _tryrequire_from_serialized(::Base.PkgId, ::UInt64, ::String) at ./loading.jl:712
│     [6] _require_search_from_serialized(::Base.PkgId, ::String) at ./loading.jl:770
│     [7] _require(::Base.PkgId) at ./loading.jl:1006
│     [8] require(::Base.PkgId) at ./loading.jl:927
│     [9] require(::Module, ::Symbol) at ./loading.jl:922
│     [10] eval(::Module, ::Any) at ./boot.jl:331
│     [11] eval_user_input(::Any, ::REPL.REPLBackend) at /buildworker/worker/package_linux32/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:86
│     [12] macro expansion at /buildworker/worker/package_linux32/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:118 [inlined]
│     [13] (::REPL.var"#26#27"{REPL.REPLBackend})() at ./task.jl:358
└ @ CUDAnative ~/.julia/packages/CUDAnative/hfulr/src/CUDAnative.jl:190
┌ Warning: CuArrays.jl did not initialize because CUDAdrv.jl or CUDAnative.jl failed to
└ @ CuArrays ~/.julia/packages/CuArrays/A6GUx/src/CuArrays.jl:64

julia>

... it seems like it's not looking in the lib64 directory. Is there an easy way to instruct Julia to do that? Or should I ask my admin to add a symbolic link from lib to lib64 (which may be risky for other software on that box, I don't know)?

la3lma (Author) commented Apr 4, 2020

More data: I tried making a fake CUDA home with symbolic links pointing to the real one, and added a link from "lib" to "lib64". That got me one step further; it now blocks on "libcublas.so: wrong ELF class: ELFCLASS64".
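The fake-home trick described above can be sketched as follows; throwaway directories stand in for the real /usr/local/cuda-10.0 root so the sketch is self-contained:

```shell
# Hedged sketch: mirror the real CUDA root through symlinks into a "fake"
# root, and add one extra link so that <root>/lib resolves to the real
# lib64/ directory. Temp directories stand in for the real toolkit here.
real=$(mktemp -d)                      # stand-in for /usr/local/cuda-10.0
mkdir -p "$real/bin" "$real/lib64"
touch "$real/lib64/libcudadevrt.a"

fake=$(mktemp -d)/fake_cuda            # directory later passed as CUDA_PATH
mkdir -p "$fake"
for entry in "$real"/*; do             # mirror every top-level entry
  ln -sfn "$entry" "$fake/$(basename "$entry")"
done
ln -sfn "$real/lib64" "$fake/lib"      # the extra lib -> lib64 link

ls "$fake/lib"                         # libcudadevrt.a is now visible under lib/
```

Pointing CUDA_PATH at such a directory is what the transcript below does.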

$ LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64 JULIA_CUDA_VERBOSE=true CUDA_PATH=fake_cuda JULIA_CUDA_VERBOSE=true JULIA_DEBUG=CUDAapi   julia
               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.4.0 (2020-03-21)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

julia> using CuArrays
┌ Warning: Incompatibility detected between CUDA and LLVM 8.0+; disabling debug info emission for CUDA kernels
└ @ CUDAnative ~/.julia/packages/CUDAnative/hfulr/src/CUDAnative.jl:114
┌ Debug: Looking for CUDA toolkit via environment variables CUDA_PATH
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Request to look for binary nvdisasm
│   locations =
│    1-element Array{String,1}:
│     "fake_cuda"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Looking for binary nvdisasm
│   locations =
│    12-element Array{String,1}:
│     "fake_cuda"
│     "fake_cuda/bin"
│     "/usr/local/cuda-10.0/bin"
│     "/usr/local/sbin"
│     "/usr/local/bin"
│     "/usr/sbin"
│     "/usr/bin"
│     "/sbin"
│     "/bin"
│     "/usr/games"
│     "/snap/bin"
│     "/storageHD/userHome/rmz/julia-1.4.0/bin"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Found binary nvdisasm at fake_cuda/bin
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/discovery.jl:141
┌ Debug: CUDA toolkit identified as 10.0.130
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/discovery.jl:297
┌ Debug: Request to look for libdevice
│   locations =
│    1-element Array{String,1}:
│     "fake_cuda"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Look for libdevice
│   locations =
│    2-element Array{String,1}:
│     "fake_cuda"
│     "fake_cuda/nvvm/libdevice"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Found unified device library at fake_cuda/nvvm/libdevice/libdevice.10.bc
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/discovery.jl:327
┌ Debug: Request to look for libcudadevrt 
│   locations =
│    1-element Array{String,1}:
│     "fake_cuda"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Looking for CUDA device runtime library libcudadevrt.a
│   locations =
│    2-element Array{String,1}:
│     "fake_cuda"
│     "fake_cuda/lib"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Found CUDA device runtime library libcudadevrt.a at fake_cuda/lib
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/discovery.jl:379
┌ Debug: Request to look for library nvToolsExt
│   locations =
│    1-element Array{String,1}:
│     "fake_cuda"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Looking for library libnvToolsExt.so, libnvToolsExt.so.1, libnvToolsExt.so.1.0
│   locations =
│    2-element Array{String,1}:
│     "fake_cuda"
│     "fake_cuda/lib"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Warning: Your CUDA installation does not provide the NVTX library, CUDAnative.NVTX will be unavailable
└ @ CUDAnative ~/.julia/packages/CUDAnative/hfulr/src/CUDAnative.jl:153
┌ Debug: Request to look for library cupti
│   locations =
│    2-element Array{String,1}:
│     "fake_cuda"
│     "fake_cuda/extras/CUPTI"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Looking for library libcupti.so, libcupti.so.10, libcupti.so.10.0
│   locations =
│    4-element Array{String,1}:
│     "fake_cuda"
│     "fake_cuda/lib"
│     "fake_cuda/extras/CUPTI"
│     "fake_cuda/extras/CUPTI/lib"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Warning: Your CUDA installation does not provide the CUPTI library, CUDAnative.@code_sass will be unavailable
└ @ CUDAnative ~/.julia/packages/CUDAnative/hfulr/src/CUDAnative.jl:160
┌ Debug: Looking for CUDA toolkit via environment variables CUDA_PATH
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Request to look for library cublas
│   locations =
│    1-element Array{String,1}:
│     "fake_cuda"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Looking for library libcublas.so, libcublas.so.10, libcublas.so.10.0
│   locations =
│    2-element Array{String,1}:
│     "fake_cuda"
│     "fake_cuda/lib"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Request to look for library cusparse
│   locations =
│    1-element Array{String,1}:
│     "fake_cuda"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Looking for library libcusparse.so, libcusparse.so.10, libcusparse.so.10.0
│   locations =
│    2-element Array{String,1}:
│     "fake_cuda"
│     "fake_cuda/lib"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Request to look for library cusolver
│   locations =
│    1-element Array{String,1}:
│     "fake_cuda"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Looking for library libcusolver.so, libcusolver.so.10, libcusolver.so.10.0
│   locations =
│    2-element Array{String,1}:
│     "fake_cuda"
│     "fake_cuda/lib"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Request to look for library cufft
│   locations =
│    1-element Array{String,1}:
│     "fake_cuda"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Looking for library libcufft.so, libcufft.so.10, libcufft.so.10.0
│   locations =
│    2-element Array{String,1}:
│     "fake_cuda"
│     "fake_cuda/lib"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Request to look for library curand
│   locations =
│    1-element Array{String,1}:
│     "fake_cuda"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Looking for library libcurand.so, libcurand.so.10, libcurand.so.10.0
│   locations =
│    2-element Array{String,1}:
│     "fake_cuda"
│     "fake_cuda/lib"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Request to look for library cudnn
│   locations =
│    1-element Array{String,1}:
│     "fake_cuda"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Looking for library libcudnn.so, libcudnn.so.7, libcudnn.so.7.0
│   locations =
│    2-element Array{String,1}:
│     "fake_cuda"
│     "fake_cuda/lib"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Request to look for library cutensor
│   locations =
│    1-element Array{String,1}:
│     "fake_cuda"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Debug: Looking for library libcutensor.so, libcutensor.so.1, libcutensor.so.1.0
│   locations =
│    2-element Array{String,1}:
│     "fake_cuda"
│     "fake_cuda/lib"
└ @ CUDAapi ~/.julia/packages/CUDAapi/wYUAO/src/CUDAapi.jl:8
┌ Error: CuArrays.jl failed to initialize
│   exception =
│    could not load library "libcublas"
│    libcublas.so: wrong ELF class: ELFCLASS64
│    Stacktrace:
│     [1] dlopen(::String, ::UInt32; throw_error::Bool) at /buildworker/worker/package_linux32/build/usr/share/julia/stdlib/v1.4/Libdl/src/Libdl.jl:109
│     [2] dlopen at /buildworker/worker/package_linux32/build/usr/share/julia/stdlib/v1.4/Libdl/src/Libdl.jl:109 [inlined] (repeats 2 times)
│     [3] (::CuArrays.CUBLAS.var"#509#lookup_fptr#28")() at /storageHD/userHome/rmz/.julia/packages/CUDAapi/wYUAO/src/call.jl:29
│     [4] macro expansion at /storageHD/userHome/rmz/.julia/packages/CUDAapi/wYUAO/src/call.jl:37 [inlined]
│     [5] macro expansion at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/blas/error.jl:65 [inlined]
│     [6] cublasGetProperty at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/blas/libcublas.jl:27 [inlined]
│     [7] cublasGetProperty at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/blas/wrappers.jl:38 [inlined]
│     [8] version() at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/blas/wrappers.jl:42
│     [9] __init__() at /storageHD/userHome/rmz/.julia/packages/CuArrays/A6GUx/src/CuArrays.jl:99
│     [10] _include_from_serialized(::String, ::Array{Any,1}) at ./loading.jl:697
│     [11] _require_search_from_serialized(::Base.PkgId, ::String) at ./loading.jl:781
│     [12] _require(::Base.PkgId) at ./loading.jl:1006
│     [13] require(::Base.PkgId) at ./loading.jl:927
│     [14] require(::Module, ::Symbol) at ./loading.jl:922
│     [15] eval(::Module, ::Any) at ./boot.jl:331
│     [16] eval_user_input(::Any, ::REPL.REPLBackend) at /buildworker/worker/package_linux32/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:86
│     [17] macro expansion at /buildworker/worker/package_linux32/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:118 [inlined]
│     [18] (::REPL.var"#26#27"{REPL.REPLBackend})() at ./task.jl:358
└ @ CuArrays ~/.julia/packages/CuArrays/A6GUx/src/CuArrays.jl:142

julia> 

la3lma (Author) commented Apr 5, 2020

... and it's resolved. The bug was: I had (inadvertently, but there you are) installed the 32-bit version of Julia on a 64-bit system. :-)
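A quick way to catch this kind of mismatch is to inspect the ELF class byte of the binary (offset 4 of the ELF header: 1 means 32-bit, 2 means 64-bit). A hedged sketch, demonstrated on a hand-crafted header so it is self-contained; on a real system you would point it at the julia binary instead:

```shell
# Hedged sketch: print a binary's ELF class byte (1 = 32-bit, 2 = 64-bit).
elf_class() {
  od -An -j4 -N1 -tu1 "$1" | tr -d ' '   # skip 4 bytes, read 1, as decimal
}

sample=$(mktemp)
printf '\177ELF\002' > "$sample"   # minimal 64-bit ELF header prefix
elf_class "$sample"                # prints 2, i.e. a 64-bit binary
```

`file /path/to/julia` reports the same information in human-readable form ("ELF 32-bit" vs "ELF 64-bit").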

maleadt (Member) commented Apr 6, 2020

Ah yeah, CUDA doesn't support 32-bit anymore, so we can't cover that use case with artifacts either. Glad you got it fixed!

@maleadt maleadt closed this as completed Apr 6, 2020