From cb55c5292fbfc372235a4d7261e3115f7e02513d Mon Sep 17 00:00:00 2001
From: Riyan Jose <149682566+riyan-jose@users.noreply.github.com>
Date: Mon, 22 Apr 2024 12:24:20 +0200
Subject: [PATCH] Update setup_nvidia_support.rst

---
 .../nvidia_docker/setup_nvidia_support.rst | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/docs/docker/nvidia_docker/setup_nvidia_support.rst b/docs/docker/nvidia_docker/setup_nvidia_support.rst
index 78f7471c..622e706e 100644
--- a/docs/docker/nvidia_docker/setup_nvidia_support.rst
+++ b/docs/docker/nvidia_docker/setup_nvidia_support.rst
@@ -198,7 +198,23 @@ If you have only ``X`` in the output from ``nvidia-smi`` than make sure that the
 Take a note that ``nvidia-smi`` command in the docker container is necessary
 test to see if docker has access to the graphic card, but it doesn't shows
 any applications that are using it. You can see on your host if a docker
 application is using graphic card and how much.
-
+Workaround for Error: ``Failed to initialize NVML: Unknown Error``
+--------------------------------------------------------------------
+If you execute ``docker run --rm --gpus all nvidia/cuda:11.7.1-base-ubuntu22.04 nvidia-smi`` and get this error, try the following fix:
+
+In the nvidia-container runtime configuration file
+
+``/etc/nvidia-container-runtime/config.toml``
+
+set the parameter
+
+``no-cgroups = false``
+
+After that, restart Docker and run a test container:
+
+``sudo systemctl restart docker``
+``sudo docker run --rm --gpus all nvidia/cuda:11.7.1-base-ubuntu22.04 nvidia-smi``
+
 References
 """""""""""
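
For reference, the same workaround can be scripted end to end. A minimal
sketch, assuming the stock ``config.toml`` installed by
nvidia-container-toolkit contains an explicit ``no-cgroups = true`` line
(on some versions the key ships commented out, in which case edit the
file by hand instead)::

    # Flip the cgroups setting that breaks NVML inside containers.
    sudo sed -i 's/^no-cgroups = true$/no-cgroups = false/' /etc/nvidia-container-runtime/config.toml

    # Restart Docker so the nvidia-container runtime rereads its config.
    sudo systemctl restart docker

    # Verify that containers can reach the GPU again.
    sudo docker run --rm --gpus all nvidia/cuda:11.7.1-base-ubuntu22.04 nvidia-smi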