Required prerequisites
Consult the security policy. If reporting a security vulnerability, do not report the bug using this form. Use the process described in the policy to report the issue.
Make sure you've read the documentation. Your issue may be addressed there.
Search the issue tracker to verify that this hasn't already been reported. +1 or comment there if it has.
If possible, make a PR with a failing test to give us a starting point to work on!
Describe the bug
A combination of assumptions in our build system and our code allows CUDA-Q to find nonsensical targets:
[LinkedLibraryHolder.cpp:185] Found Target nvidia-fp64 with config file nvidia-fp64.yml
[LinkedLibraryHolder.cpp:129] CUDA-Q Library Path is /workspace/dev/cuda-quantum/build/lib.
[LinkedLibraryHolder.cpp:140] Skip cusvsim-fp64 simulator for target nvidia-fp64 since it is not available
[LinkedLibraryHolder.cpp:129] CUDA-Q Library Path is /workspace/dev/cuda-quantum/build/lib.
[LinkedLibraryHolder.cpp:140] Skip custatevec-fp64 simulator for target nvidia-fp64 since it is not available
[LinkedLibraryHolder.cpp:195] Found Target: nvidia-fp64 -> (sim=qpp, platform=default)
The above shows the nvidia-fp64 target being backed by the qpp simulator!
The assumptions:
The build system assumes that a user will always build all the targets their system is capable of supporting, and it installs the target configuration files regardless of whether those targets were actually built. For example:
It assumes that nvqir-qpp and nvqir-stim are always built, since the user is quite likely to have a CPU (:
It assumes that GPU-backed targets, nvidia-*, are built if the user has GPUs, CUDA, and cuQuantum installed
Our code seems to assume that the correct default simulator for all targets is qpp, even if the target specifies a different simulator and the runtime simply fails to find its shared library (see the sketch below).
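To make the interaction of these two assumptions concrete, here is a toy Python sketch of the resolution pattern described above. It is not the actual LinkedLibraryHolder logic; the library naming pattern libnvqir-<sim>.so and the helper resolve_simulator are illustrative assumptions only.

```python
# Toy illustration of the fallback pattern described above (not CUDA-Q code).
# Target .yml files are installed unconditionally, so the runtime "finds" the
# target even when none of its simulator libraries were ever built.
import os

def resolve_simulator(target_name, requested_sims, lib_dir):
    """Pick the first requested simulator whose shared library exists."""
    for sim in requested_sims:
        lib = os.path.join(lib_dir, f"libnvqir-{sim}.so")  # naming pattern assumed
        if os.path.exists(lib):
            return sim
        print(f"Skip {sim} simulator for target {target_name} since it is not available")
    return "qpp"  # silent fallback: nvidia-fp64 ends up backed by qpp

print(resolve_simulator("nvidia-fp64",
                        ["cusvsim-fp64", "custatevec-fp64"],
                        "/workspace/dev/cuda-quantum/build/lib"))
```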
Implications:
Not all users build CUDA-Q in the same way. Developers might not want to build the whole world just to test (or benchmark!) specific parts of the system. In these cases, setting a target with cudaq.set_target('nvidia') will silently load qpp.
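A quick way to observe this from Python, assuming the Target object returned by cudaq.get_target() exposes a simulator attribute (hence the getattr guard below):

```python
# Sketch: check which simulator actually backs the requested target.
# Run in a build where the nvidia backends were not built.
import cudaq

cudaq.set_target("nvidia")      # request the GPU-backed target
target = cudaq.get_target()     # currently active target
print(target.name)              # "nvidia"
# The "simulator" attribute is assumed from the Python bindings; guard just in case.
print(getattr(target, "simulator", "<unknown>"))  # reports qpp on such builds
```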
Steps to reproduce the bug
In a freshly configured build directory, build the Python modules: ninja CUDAQuantumPythonModules
You will also need to build the default platform (ninja cudaq-default-platform) and the qpp backend (ninja nvqir-qpp). Then run:
CUDAQ_LOG_LEVEL=info PYTHONPATH=$(pwd)/python python3 -c "import cudaq"
This will output logs showing all targets being "successfully" found and using the qpp simulator.
Expected behavior
Not a silent fallback to the qpp simulator.
Is this a regression? If it is, put the last known working version (or commit) here.
Not a regression
Environment
Suggestions
No response