I was having the same issue. Are you using Docker Desktop for Windows? Because I was, and I found out that WSL2 + CUDA does not work with Docker Desktop for Windows:
https://forums.developer.nvidia.com/t/hiccups-setting-up-wsl2-cuda/128641
Instead, install Docker manually in WSL2 (as is suggested in the tutorial you linked):
sudo apt update && sudo apt install -y nvidia-docker2
Then make sure you start the docker service:
sudo service docker start
After that, you can verify the CUDA/Docker/WSL2 setup with this:
docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Where you should see some output like this:
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
Windowed mode
Simulation data stored in video memory
Single precision floating point simulation
1 Devices used for simulation
MapSMtoCores for SM 7.5 is undefined. Default to use 64 Cores/SM
GPU Device 0: "GeForce RTX 2060" with compute capability 7.5
Compute 7.5 CUDA device: [GeForce RTX 2060]
30720 bodies, total time for 10 iterations: 52.181 ms
= 180.854 billion interactions per second
= 3617.077 single-precision GFLOP/s at 20 flops per interaction
Despite having 4 GPUs, each with ~20 GB of VRAM, Docker is not able to run with the following command. How can I solve this?
[20:08:28] jalal@echo:~/research/code$ docker run --shm-size 2GB -it --gpus all docurdt/heal
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
ERRO[0000] error waiting for container: context canceled
[20:08:20] jalal@echo:~/research/code$ nvidia-smi
Fri Apr 1 20:08:28 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03 Driver Version: 510.47.03 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 Off | N/A |
| 31% 41C P8 23W / 350W | 301MiB / 24576MiB | 1% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA GeForce ... Off | 00000000:21:00.0 Off | N/A |
| 30% 39C P8 18W / 350W | 14MiB / 24576MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 NVIDIA GeForce ... Off | 00000000:4A:00.0 Off | N/A |
| 30% 32C P8 23W / 350W | 14MiB / 24576MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 NVIDIA GeForce ... Off | 00000000:4B:00.0 Off | N/A |
| 30% 40C P8 18W / 350W | 14MiB / 24576MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
I also have:
$ uname -a
Linux echo 5.4.0-99-generic #112-Ubuntu SMP Thu Feb 3 13:50:55 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
LSB Version: core-11.1.0ubuntu2-noarch:security-11.1.0ubuntu2-noarch
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal
$ docker -v
Docker version 20.10.7, build 20.10.7-0ubuntu5~20.04.2
Also,
$ df -h | grep /dev/shm
tmpfs 126G 199M 126G 1% /dev/shm
and
$ cat /boot/config-$(uname -r) | grep -i seccomp
CONFIG_SECCOMP=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP_FILTER=y
and
[20:33:17] (dpcc) jalal@echo:~$ lspci -vv | grep -i nvidia
01:00.0 VGA compatible controller: NVIDIA Corporation Device 2204 (rev a1) (prog-if 00 [VGA controller])
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia
01:00.1 Audio device: NVIDIA Corporation Device 1aef (rev a1)
21:00.0 VGA compatible controller: NVIDIA Corporation Device 2204 (rev a1) (prog-if 00 [VGA controller])
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia
21:00.1 Audio device: NVIDIA Corporation Device 1aef (rev a1)
4a:00.0 VGA compatible controller: NVIDIA Corporation Device 2204 (rev a1) (prog-if 00 [VGA controller])
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia
4a:00.1 Audio device: NVIDIA Corporation Device 1aef (rev a1)
4b:00.0 VGA compatible controller: NVIDIA Corporation Device 2204 (rev a1) (prog-if 00 [VGA controller])
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia
4b:00.1 Audio device: NVIDIA Corporation Device 1aef (rev a1)
Running docker with --gpus all reports an error:
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
I have searched many articles on the Internet. In summary, it is necessary to install nvidia-container-toolkit or nvidia-container-runtime (which includes nvidia-container-toolkit), but the awkward part is that I can't install nvidia-container-toolkit: it keeps showing E: Unable to locate package nvidia-container-toolkit.
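Before trying the fixes below, it can help to confirm whether apt can see the package at all and whether the NVIDIA repository file was actually created (standard apt checks):
# "Unable to locate" usually means the repository is missing or broken
apt-cache policy nvidia-container-toolkit
# was the repository file added?
ls /etc/apt/sources.list.d/ | grep -i nvidia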
Online solutions:
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
Everyone should be familiar with this; older nvidia-docker installs use it to add the GPG key. Then add the repository. On CentOS:
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo
or on Ubuntu:
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
I have tried all of the above methods. Note the third step: the CentOS and Ubuntu commands are not the same! I still couldn't install using the above commands, so the final resolution process is recorded as follows:
- Change the system's apt source to Alibaba's mirror.
- Re-add the NVIDIA repository:
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
"OK" should be printed normally. Then (my system is Ubuntu 18.04):
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
- Run the update:
sudo apt-get update
This step must complete without problems. In my case several sources were configured more than once, so I edited the repo file and commented out the duplicates (see the illustration after this list):
sudo vim /etc/apt/sources.list.d/nvidia-docker.list
- Install the toolkit:
sudo apt-get install nvidia-container-toolkit
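For reference, duplicated entries in nvidia-docker.list can simply be commented out. The exact contents vary by distribution and repository version, so the following is only an illustration:
# /etc/apt/sources.list.d/nvidia-docker.list (illustrative)
deb https://nvidia.github.io/libnvidia-container/stable/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-container-runtime/stable/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-docker/ubuntu18.04/$(ARCH) /
# duplicates commented out:
# deb https://nvidia.github.io/nvidia-docker/ubuntu18.04/$(ARCH) /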
Summarize
The steps are the same as the standard instructions, so is it really as simple as switching the mirror? In fact, the company's network is very poor and unstable, so many steps (such as sudo apt-get update) could not be executed properly; what should be a normal run ends with an error.
On this page
- WSL + Windows
- With Tensorflow or PyTorch
- Basic installation
- Check info
- Does Docker work with GPU?
- Check cudnn
- Install nvidia-docker2
- Difference: nvidia-container-toolkit vs nvidia-container-runtime
- Using docker-compose?
- Check usage of GPU
- Kill process
- Reset GPU
- Errors with GPU
- Make NVIDIA work in docker (Linux)
- References
👉 Note: All docker notes.
👉 My Dockerfile setup on Github.
WSL + Windows
👉 Note: WSL + Windows
With Tensorflow or PyTorch
👉 Official doc for TF + docker
👉 Note: Docker + TF.
👉 An example of docker pytorch with gpu support.
Basic installation
You must (successfully) install the GPU driver on your (Linux) machine before proceeding with the steps in this note. Go to the "Check info" section to check the availability of your drivers.
(Maybe just for me) It works perfectly on Pop!_OS 20.04; I tried Pop!_OS 21.10 and ran into a lot of problems, so stay with 20.04!
sudo apt update
sudo apt install -y nvidia-container-runtime
# You may need to replace the line above with
sudo apt install nvidia-docker2
sudo apt install nvidia-container-toolkit
sudo apt install -y nvidia-cuda-toolkit
# restart required
If you have problems installing nvidia-docker2, read this section!
Check info
# Verify that your computer has a graphic card
lspci | grep -i nvidia
# First, install drivers and check
nvidia-smi
# output: NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0
# It's the maximum CUDA version that your driver supports
# Check current version of cuda
nvcc --version
# If nvcc is not available, it may be in /usr/local/cuda/bin/
# Add this location to PATH
# modify ~/.zshrc or ~/.bashrc
export PATH=/usr/local/cuda/bin:$PATH
# You may need to install
sudo apt install -y nvidia-cuda-toolkit
If the command below doesn't work, try to install nvidia-docker2 (read this section).
# Install and check nvidia-docker
dpkg -l | grep nvidia-docker
# or
nvidia-docker version
# Verifying --gpus option under docker run
docker run --help | grep -i gpus
# output: --gpus gpu-request GPU devices to add to the container ('all' to pass all GPUs)
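You can also check the NVIDIA container CLI itself (only available if the container toolkit/runtime is installed):
# version of the NVIDIA container library/CLI
nvidia-container-cli -V
# driver/GPU info as seen by the container tooling
nvidia-container-cli info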
Does Docker work with GPU?
# List all GPU devices
docker run -it --rm --gpus all ubuntu nvidia-smi -L
# output: GPU 0: GeForce GTX 1650 (...)
# ERROR ?
# docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown.
# ERROR ?
# Error response from daemon: could not select device driver "" with capabilities: [[gpu]]
# Solution: install nvidia-docker2
# Verifying again with nvidia-smi
docker run -it --rm --gpus all ubuntu nvidia-smi
# Return something like
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.54 Driver Version: 510.54 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 Off | N/A |
| N/A 55C P0 11W / N/A | 369MiB / 4096MiB | 5% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
# and another box like this
It’s archived, but still useful
# Test a working setup container-toolkit
# Update 14/04/2022: the tag "latest" has been deprecated => check your system versions and use
# the corresponding tag
# The following code is for reference only, it no longer works
docker run --rm --gpus all nvidia/cuda nvidia-smi
# Test a working setup container-runtime
# Update 14/04/2022: below code isn't working anymore because nvidia/cuda doesn't have
# the "latest" tag!
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
# Error response from daemon: Unknown runtime specified nvidia.
# Search below for "/etc/docker/daemon.json"
# Maybe it helps.
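A quick way to check whether the "nvidia" runtime is actually registered with the Docker daemon (a small extra check; it assumes nvidia-docker2 or the container toolkit is already installed):
# "nvidia" should appear in the list of runtimes
docker info | grep -i runtimes
# the runtime is usually declared here after installing nvidia-docker2
cat /etc/docker/daemon.json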
Check cudnn
whereis cudnn
# cudnn: /usr/include/cudnn.h
# Check cudnn version
cat /usr/include/cudnn.h | grep CUDNN_MAJOR -A 2
# or try this (it works for me, cudnn 8)
cat /usr/include/cudnn_version.h | grep CUDNN_MAJOR -A 2
Install nvidia-docker2
More information (ref)
This package is the only docker-specific package of any of them. It takes the script associated with the nvidia-container-runtime and installs it into docker’s /etc/docker/daemon.json file for you. This then allows you to run (for example) docker run --runtime=nvidia ... to automatically add GPU support to your containers. It also installs a wrapper script around the native docker CLI called nvidia-docker which lets you invoke docker without needing to specify --runtime=nvidia every single time. It also lets you set an environment variable on the host (NV_GPU) to specify which GPUs should be injected into a container.
👉 (Should follow this for up-to-date instructions) Official guide to install.
Note: (Only for me) Use the commands below.
Command lines (for a quick preview)
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
# NOTE FOR POPOS 20.04
# replace above line with
distribution=ubuntu20.04
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2
👇 Read more about below error.
# Error?
# Read more:
# Depends: nvidia-container-toolkit (>= 1.9.0-1) but 1.5.1-1pop1~1627998766~20.04~9847cf2 is to be installed
# create a new file
sudo nano /etc/apt/preferences.d/nvidia-docker-pin-1002
# with below content
Package: *
Pin: origin nvidia.github.io
Pin-Priority: 1002
# then save
# try again
sudo apt-get install -y nvidia-docker2
# restart docker
sudo systemctl restart docker
# wanna check?
sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
# check version
nvidia-docker version
👉 What’s the difference between the latest nvidia-docker and nvidia container runtime?
In this note, with Docker 19.03+ (docker --version), he says that nvidia-container-toolkit is used for --gpus (in docker run ...), while nvidia-container-runtime is used for --runtime=nvidia (which can also be used in a docker-compose file).
However, if you want to use Kubernetes with Docker 19.03, you actually need to continue using nvidia-docker2 because Kubernetes doesn’t support passing GPU information down to docker through the --gpus flag yet. It still relies on the nvidia-container-runtime to pass GPU information down the stack via a set of environment variables.
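In other words (illustrative commands only; the CUDA image tag is just an example, pick one matching your driver):
# toolkit path: Docker 19.03+ native flag
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
# runtime path: what nvidia-docker2 / Kubernetes rely on
docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all nvidia/cuda:11.0-base nvidia-smi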
👉 Installation Guide — NVIDIA Cloud Native Technologies documentation
Using docker-compose?
Purpose?
# instead of using
docker run \
  --gpus all \
  --name docker_thi_test \
  --rm \
  -v abc:abc \
  -p 8888:8888
# we use this with docker-compose.yml
docker-compose up
# check version of docker-compose
docker-compose --version
# If "version" in docker-compose.yml < 2.3
# Modify: /etc/docker/daemon.json
{
"default-runtime": "nvidia",
"runtimes": {
"nvidia": {
"path": "nvidia-container-runtime",
"runtimeArgs": []
}
}
}
# restart our docker daemon
sudo pkill -SIGHUP dockerd
# If "version" in docker-compose.yml >=2.3
# docker-compose.yml => able to use "runtime"
version: '2.3' # MUST BE >=2.3 AND <3
services:
  testing:
    ports:
      - "8000:8000"
    runtime: nvidia
    volumes:
      - ./object_detection:/object_detection
👉 Check more in my repo my-dockerfiles on Github.
Run the test,
docker pull tensorflow/tensorflow:latest-gpu-jupyter
mkdir ~/Downloads/test/notebooks
Without using docker-compose.yml
(tensorflow) (cf. this note for more)
docker run --name docker_thi_test -it --rm -v $(realpath ~/Downloads/test/notebooks):/tf/notebooks -p 8888:8888 tensorflow/tensorflow:latest-gpu-jupyter
With docker-compose.yml?
# ~/Downloads/test/Dockerfile
FROM tensorflow/tensorflow:latest-gpu-jupyter
# ~/Downloads/test/docker-compose.yml
version: '2'
services:
  jupyter:
    container_name: 'docker_thi_test'
    build: .
    volumes:
      - ./notebooks:/tf/notebooks # notebook directory
    ports:
      - 8888:8888 # exposed port for jupyter
    environment:
      - NVIDIA_VISIBLE_DEVICES=0 # which gpu do you want to use for this container
      - PASSWORD=12345
Then run,
docker-compose run --rm jupyter
Check usage of GPU
# Linux only
nvidia-smi
Return something like this
# |===============================+======================+======================|
# | 0 GeForce GTX 1650 Off | 00000000:01:00.0 Off | N/A |
# | N/A 53C P8 2W / N/A | 3861MiB / 3914MiB | 2% Default |
# | | | N/A |
# +-------------------------------+----------------------+----------------------+
# => 3861MiB / 3914MiB is used!
# +-----------------------------------------------------------------------------+
# | Processes: GPU Memory |
# | GPU PID Type Process name Usage |
# |=============================================================================|
# | 0 3019 C ...e/scarter/anaconda3/envs/tf1/bin/python 3812MiB |
# +-----------------------------------------------------------------------------+
# => Process 3019 is using the GPU
# All processes that use GPU
sudo fuser -v /dev/nvidia*
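nvidia-smi can also list compute processes directly, which is handy when the bottom table is truncated (standard nvidia-smi query options):
# PIDs, names and memory of processes using the GPU
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv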
Kill process
# Kill a single process
sudo kill -9 3019
Reset GPU
# all
sudo nvidia-smi --gpu-reset
# single
sudo nvidia-smi --gpu-reset -i 0
Errors with GPU
# Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
# Function call stack:
# train_function
👉 Check this answer as a reference!
👇 Use a GPU.
# Limit the GPU memory to be used
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Restrict TensorFlow to only allocate 1GB of memory on the first GPU
    try:
        tf.config.set_logical_device_configuration(
            gpus[0],
            [tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
        logical_gpus = tf.config.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Virtual devices must be set before GPUs have been initialized
        print(e)
Problems with pytorch versions: check this.
RuntimeError: cuda runtime error (804) : forward compatibility was attempted on non supported HW at /pytorch/aten/src/THC/THCGeneral.cpp:47 (after updating the system, including nvidia-cli, maybe) => The same problem as below; the computer needs to be restarted.
nvidia-smi: Failed to initialize NVML: Driver/library version mismatch.
This thread: just restart the computer.
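If you want to avoid a full reboot, reloading the NVIDIA kernel modules sometimes fixes the mismatch; this is only a sketch and it will fail if the GPU is still in use (in that case, just reboot):
# stop whatever holds the GPU first (e.g. the display manager), then
sudo rmmod nvidia_uvm nvidia_drm nvidia_modeset nvidia
sudo modprobe nvidia
nvidia-smi # should initialize NVML again if the reload worked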
Make NVIDIA work in docker (Linux)
This section still works (on 26-Oct-2020), but it’s obsolete compared to newer methods.
One idea: Use NVIDIA driver of the base machine, don’t install anything in Docker!
Detail of steps
- First, make sure your base machine has an NVIDIA driver.
# list all gpus
lspci -nn | grep '\[03'
# check nvidia & cuda versions
nvidia-smi
- Install nvidia-container-runtime
curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update
sudo apt-get install nvidia-container-runtime
- Note that we cannot use docker-compose.yml in this case!!!
- Create an image img_datas whose Dockerfile is:
FROM nvidia/cuda:10.2-base
RUN apt-get update && \
    apt-get -y upgrade && \
    apt-get install -y python3-pip python3-dev locales git
# install dependencies
COPY requirements.txt requirements.txt
RUN python3 -m pip install --upgrade pip && \
    python3 -m pip install -r requirements.txt
COPY . .
# default command
CMD [ "jupyter", "lab", "--no-browser", "--allow-root", "--ip=0.0.0.0" ]
- Create a container,
docker run --name docker_thi --gpus all -v /home/thi/folder_1/:/srv/folder_1/ -v /home/thi/folder_1/git/:/srv/folder_2 -dp 8888:8888 -w="/srv" -it img_datas
# -v: volumes
# -w: working dir
# --gpus all: using all gpus on base machine
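If Docker later complains "Unknown runtime specified nvidia" (see the "Does Docker work with GPU?" section above), the runtime may still need to be declared in /etc/docker/daemon.json. A minimal sketch, assuming nvidia-container-runtime is on the default path (this mirrors the daemon.json shown in the docker-compose section):
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
EOF
sudo systemctl restart docker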
This article is also very interesting and helpful in some cases.
References
- Difference between base, runtime and devel in Dockerfile of CUDA.
- Dockerfile on Github of Tensorflow.
Description
I have been trying to install and set up ADE using the .aderc file, but I get a GPU-related error. I post this here because an owner of the AutowareAuto repository on GitLab told me to do so in a comment on the issue I opened.
How to reproduce
I am using Kubuntu 18.04 (bionic) and have installed Docker 19.03 successfully (I can execute docker run hello-world). I have followed the Quick Start section of the first lesson of the Autoware.Auto course. I am using a laptop and have tried it both with and without the dedicated GPU activated, which is an NVIDIA GTX 1060 6GB (the integrated one is the Intel i7 8750H's). I would like to have this running while being able to use the GTX.
I have already searched for updates to the NVIDIA drivers and tried to update everything on my system.
Current Behavior
I get the following error:
$ cd adehome/AutowareAuto/
$ ade start
Starting ade with the following images:
ade | 18425565a9fd | master | registry.gitlab.com/autowarefoundation/autoware.auto/autowareauto/ade:master
ade-atom | v1.39.1 | latest | registry.gitlab.com/apexai/ade-atom:latest
autowareauto | c9745bf2663e | master | registry.gitlab.com/autowarefoundation/autoware.auto/autowareauto:master
ade_registry.gitlab.com_apexai_ade-atom_latest
ade_registry.gitlab.com_autowarefoundation_autoware.auto_autowareauto_master
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
ERROR: Command return non-zero exit code (see above): 125
docker run -h ade --detach --name ade --env COLORFGBG --env DISPLAY --env EMAIL --env GIT_AUTHOR_EMAIL --env GIT_AUTHOR_NAME --env GIT_COMMITTER_EMAIL --env GIT_COMMITTER_NAME --env SSH_AUTH_SOCK --env TERM --env TIMEZONE=Europe/Paris --env USER=jmtc7 --env GROUP=jmtc7 --env USER_ID=1000 --env GROUP_ID=1000 --env VIDEO_GROUP_ID=44 -v /dev/dri:/dev/dri -v /dev/shm:/dev/shm -v /tmp/.X11-unix:/tmp/.X11-unix -v /home/jmtc7/adehome:/home/jmtc7 --env ADE_CLI_VERSION=4.1.0 --env ADE_HOME_HOSTPATH=/home/jmtc7/adehome --label ade_version=4.1.0 -v /home/jmtc7/.ssh:/home/jmtc7/.ssh -v /tmp/ssh-AsnCiCzQcdnx/agent.1739:/tmp/ssh-AsnCiCzQcdnx/agent.1739 --volumes-from ade_registry.gitlab.com_apexai_ade-atom_latest:ro --volumes-from ade_registry.gitlab.com_autowarefoundation_autoware.auto_autowareauto_master:ro --label 'ade_volumes_from=["ade_registry.gitlab.com_apexai_ade-atom_latest", "ade_registry.gitlab.com_autowarefoundation_autoware.auto_autowareauto_master"]' --gpus all --env NVIDIA_VISIBLE_DEVICES=all --env NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics,display --env LD_LIBRARY_PATH=/usr/local/nvidia/lib64 --cap-add=SYS_PTRACE --env ADE_IMAGE_ADE_FQN=registry.gitlab.com/autowarefoundation/autoware.auto/autowareauto/ade:master --env ADE_IMAGE_ADE_COMMIT_SHA=18425565a9fdfd2b5e9a8fd837f18f8bbd99d961 --env ADE_IMAGE_ADE_COMMIT_TAG= --env ADE_IMAGE_ADE_ATOM_FQN=registry.gitlab.com/apexai/ade-atom:latest --env ADE_IMAGE_ADE_ATOM_COMMIT_SHA=41a804c93041bf2ef4fe118676a4b6a84bdeff91 --env ADE_IMAGE_ADE_ATOM_COMMIT_TAG=v1.39.1 --env ADE_IMAGE_AUTOWAREAUTO_FQN=registry.gitlab.com/autowarefoundation/autoware.auto/autowareauto:master --env ADE_IMAGE_AUTOWAREAUTO_COMMIT_SHA=c9745bf2663ecca9d73c20e29d8c4624b58948f5 --env ADE_IMAGE_AUTOWAREAUTO_COMMIT_TAG= --label 'ade_images=[{"fqn": "registry.gitlab.com/autowarefoundation/autoware.auto/autowareauto/ade:master", "commit_sha": "18425565a9fdfd2b5e9a8fd837f18f8bbd99d961", "commit_tag": ""}, {"fqn": "registry.gitlab.com/apexai/ade-atom:latest", "commit_sha": "41a804c93041bf2ef4fe118676a4b6a84bdeff91", "commit_tag": "v1.39.1"}, {"fqn": "registry.gitlab.com/autowarefoundation/autoware.auto/autowareauto:master", "commit_sha": "c9745bf2663ecca9d73c20e29d8c4624b58948f5", "commit_tag": ""}]' registry.gitlab.com/autowarefoundation/autoware.auto/autowareauto/ade:master
Some further information about my installations and system:
$ which ade
/home/jmtc7/.local/bin/ade
$ nvidia-smi
Wed May 13 20:31:59 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.82 Driver Version: 440.82 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id ...
(output truncated)
1 Answer
Solved. Solution:
In case someone finds themselves in the same situation: I just followed the installation instructions for NVIDIA Docker. It seems Docker 19.03 does not really work with GPUs by itself, as I had misunderstood… at least not for me. For convenience, these are the installation commands for Ubuntu/Debian:
# Add the package repositories
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
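After this, ade start should be able to pass --gpus all to Docker. As a quick sanity check (the CUDA image tag is only an example; use one that matches your driver):
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi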
The template below is mostly useful for bug reports and support questions. Feel free to remove anything which doesn’t apply to you and add more information where it makes sense.
Also, before reporting a new issue, please make sure that:
- You read carefully the documentation and frequently asked questions.
- You searched for a similar issue and this is not a duplicate of an existing one.
- This issue is not related to NGC, otherwise, please use the devtalk forums instead.
- You went through the troubleshooting steps.
1. Issue or feature description
I had installed nvidia docker container environment using instructions @ https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html
After installing, everything works fine. But after rebooting/restarting the computer, the nvidia container environment fails with the message below.
sudo docker run --rm --gpus all nvidia/cuda:11.4.0-base-ubuntu20.04 nvidia-smi
[sudo] password for alyaan:
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
2. Steps to reproduce the issue
install the nvidia docker environment and restart the computer
3. Information to attach (optional if deemed irrelevant)
- Some nvidia-container information:
nvidia-container-cli -k -d /dev/tty info
I0508 19:37:25.340722 25974 nvc.c:376] initializing library context (version=1.9.0, build=5e135c17d6dbae861ec343e9a8d3a0d2af758a4f)
I0508 19:37:25.340816 25974 nvc.c:350] using root /
I0508 19:37:25.340829 25974 nvc.c:351] using ldcache /etc/ld.so.cache
I0508 19:37:25.340838 25974 nvc.c:352] using unprivileged user 1000:1000
I0508 19:37:25.340890 25974 nvc.c:393] attempting to load dxcore to see if we are running under Windows Subsystem for Linux (WSL)
I0508 19:37:25.341208 25974 nvc.c:395] dxcore initialization failed, continuing assuming a non-WSL environment
W0508 19:37:25.342611 25975 nvc.c:273] failed to set inheritable capabilities
W0508 19:37:25.342702 25975 nvc.c:274] skipping kernel modules load due to failure
I0508 19:37:25.343372 25976 rpc.c:71] starting driver rpc service
I0508 19:37:25.350127 25977 rpc.c:71] starting nvcgo rpc service
I0508 19:37:25.359326 25974 nvc_info.c:765] requesting driver information with ''
I0508 19:37:25.375408 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvoptix.so.470.103.01
I0508 19:37:25.375460 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-tls.so.470.103.01
I0508 19:37:25.376067 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-rtcore.so.470.103.01
I0508 19:37:25.377424 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.470.103.01
I0508 19:37:25.377959 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-opticalflow.so.470.103.01
I0508 19:37:25.379156 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.470.103.01
I0508 19:37:25.380063 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-ngx.so.470.103.01
I0508 19:37:25.380178 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.470.103.01
I0508 19:37:25.381131 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-ifr.so.470.103.01
I0508 19:37:25.382093 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.470.103.01
I0508 19:37:25.382212 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-glsi.so.470.103.01
I0508 19:37:25.382311 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-glcore.so.470.103.01
I0508 19:37:25.383328 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-fbc.so.470.103.01
I0508 19:37:25.384347 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-encode.so.470.103.01
I0508 19:37:25.384499 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.470.103.01
I0508 19:37:25.385786 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-compiler.so.470.103.01
I0508 19:37:25.385904 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.470.103.01
I0508 19:37:25.386971 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-cbl.so.470.103.01
I0508 19:37:25.388062 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvidia-allocator.so.470.103.01
I0508 19:37:25.389121 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libnvcuvid.so.470.103.01
I0508 19:37:25.390319 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libcuda.so.470.103.01
I0508 19:37:25.390834 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.470.103.01
I0508 19:37:25.391875 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libGLESv2_nvidia.so.470.103.01
I0508 19:37:25.392678 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libGLESv1_CM_nvidia.so.470.103.01
I0508 19:37:25.392802 25974 nvc_info.c:172] selecting /usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.470.103.01
I0508 19:37:25.393447 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libnvidia-tls.so.470.103.01
I0508 19:37:25.394447 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libnvidia-ptxjitcompiler.so.470.103.01
I0508 19:37:25.395236 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libnvidia-opticalflow.so.470.103.01
I0508 19:37:25.396362 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libnvidia-opencl.so.470.103.01
I0508 19:37:25.397533 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libnvidia-ml.so.470.103.01
I0508 19:37:25.398575 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libnvidia-ifr.so.470.103.01
I0508 19:37:25.399740 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libnvidia-glvkspirv.so.470.103.01
I0508 19:37:25.400641 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libnvidia-glsi.so.470.103.01
I0508 19:37:25.401717 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libnvidia-glcore.so.470.103.01
I0508 19:37:25.402701 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libnvidia-fbc.so.470.103.01
I0508 19:37:25.403640 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libnvidia-encode.so.470.103.01
I0508 19:37:25.404579 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libnvidia-eglcore.so.470.103.01
I0508 19:37:25.405618 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libnvidia-compiler.so.470.103.01
I0508 19:37:25.406777 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libnvcuvid.so.470.103.01
I0508 19:37:25.407989 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libcuda.so.470.103.01
I0508 19:37:25.409236 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libGLX_nvidia.so.470.103.01
I0508 19:37:25.410342 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libGLESv2_nvidia.so.470.103.01
I0508 19:37:25.411167 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libGLESv1_CM_nvidia.so.470.103.01
I0508 19:37:25.412162 25974 nvc_info.c:172] selecting /usr/lib/i386-linux-gnu/libEGL_nvidia.so.470.103.01
W0508 19:37:25.412239 25974 nvc_info.c:398] missing library libnvidia-nscq.so
W0508 19:37:25.412257 25974 nvc_info.c:398] missing library libnvidia-fatbinaryloader.so
W0508 19:37:25.412273 25974 nvc_info.c:398] missing library libnvidia-pkcs11.so
W0508 19:37:25.412297 25974 nvc_info.c:398] missing library libvdpau_nvidia.so
W0508 19:37:25.412306 25974 nvc_info.c:402] missing compat32 library libnvidia-cfg.so
W0508 19:37:25.412322 25974 nvc_info.c:402] missing compat32 library libnvidia-nscq.so
W0508 19:37:25.412335 25974 nvc_info.c:402] missing compat32 library libnvidia-fatbinaryloader.so
W0508 19:37:25.412347 25974 nvc_info.c:402] missing compat32 library libnvidia-allocator.so
W0508 19:37:25.412360 25974 nvc_info.c:402] missing compat32 library libnvidia-pkcs11.so
W0508 19:37:25.412377 25974 nvc_info.c:402] missing compat32 library libnvidia-ngx.so
W0508 19:37:25.412392 25974 nvc_info.c:402] missing compat32 library libvdpau_nvidia.so
W0508 19:37:25.412409 25974 nvc_info.c:402] missing compat32 library libnvidia-rtcore.so
W0508 19:37:25.412423 25974 nvc_info.c:402] missing compat32 library libnvoptix.so
W0508 19:37:25.412436 25974 nvc_info.c:402] missing compat32 library libnvidia-cbl.so
I0508 19:37:25.414646 25974 nvc_info.c:298] selecting /usr/bin/nvidia-smi
I0508 19:37:25.414706 25974 nvc_info.c:298] selecting /usr/bin/nvidia-debugdump
I0508 19:37:25.414753 25974 nvc_info.c:298] selecting /usr/bin/nvidia-persistenced
I0508 19:37:25.414825 25974 nvc_info.c:298] selecting /usr/bin/nvidia-cuda-mps-control
I0508 19:37:25.414872 25974 nvc_info.c:298] selecting /usr/bin/nvidia-cuda-mps-server
W0508 19:37:25.415031 25974 nvc_info.c:424] missing binary nv-fabricmanager
I0508 19:37:25.415104 25974 nvc_info.c:342] listing firmware path /usr/lib/firmware/nvidia/470.103.01/gsp.bin
I0508 19:37:25.415167 25974 nvc_info.c:528] listing device /dev/nvidiactl
I0508 19:37:25.415180 25974 nvc_info.c:528] listing device /dev/nvidia-uvm
I0508 19:37:25.415190 25974 nvc_info.c:528] listing device /dev/nvidia-uvm-tools
I0508 19:37:25.415198 25974 nvc_info.c:528] listing device /dev/nvidia-modeset
I0508 19:37:25.415257 25974 nvc_info.c:342] listing ipc path /run/nvidia-persistenced/socket
W0508 19:37:25.415313 25974 nvc_info.c:348] missing ipc path /var/run/nvidia-fabricmanager/socket
W0508 19:37:25.415356 25974 nvc_info.c:348] missing ipc path /tmp/nvidia-mps
I0508 19:37:25.415371 25974 nvc_info.c:821] requesting device information with ''
I0508 19:37:25.423912 25974 nvc_info.c:712] listing device /dev/nvidia0 (GPU-8907c62c-f0d0-f149-5253-cac918b7b2c6 at 00000000:01:00.0)
NVRM version: 470.103.01
CUDA version: 11.4
Device Index: 0
Device Minor: 0
Model: NVIDIA GeForce RTX 3080
Brand: GeForce
GPU UUID: GPU-8907c62c-f0d0-f149-5253-cac918b7b2c6
Bus Location: 00000000:01:00.0
Architecture: 8.6
I0508 19:37:25.424019 25974 nvc.c:430] shutting down library context
I0508 19:37:25.424162 25977 rpc.c:95] terminating nvcgo rpc service
I0508 19:37:25.425430 25974 rpc.c:135] nvcgo rpc service terminated successfully
I0508 19:37:25.427168 25976 rpc.c:95] terminating driver rpc service
I0508 19:37:25.427735 25974 rpc.c:135] driver rpc service terminated successfully
- Kernel version from uname -a
Linux indianTBINfragsWorkstation 5.13.0-40-generic #45~20.04.1-Ubuntu SMP Mon Apr 4 09:38:31 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
- Any relevant kernel output lines from dmesg
- Driver information from nvidia-smi -a
==============NVSMI LOG==============
Timestamp : Mon May 9 01:08:59 2022
Driver Version : 470.103.01
CUDA Version : 11.4
Attached GPUs : 1
GPU 00000000:01:00.0
Product Name : NVIDIA GeForce RTX 3080
Product Brand : GeForce
Display Mode : Enabled
Display Active : Enabled
Persistence Mode : Disabled
MIG Mode
Current : N/A
Pending : N/A
Accounting Mode : Disabled
Accounting Mode Buffer Size : 4000
Driver Model
Current : N/A
Pending : N/A
Serial Number : N/A
GPU UUID : GPU-8907c62c-f0d0-f149-5253-cac918b7b2c6
Minor Number : 0
VBIOS Version : 94.02.26.40.93
MultiGPU Board : No
Board ID : 0x100
GPU Part Number : N/A
Module ID : 0
Inforom Version
Image Version : G001.0000.03.03
OEM Object : 2.0
ECC Object : N/A
Power Management Object : N/A
GPU Operation Mode
Current : N/A
Pending : N/A
GSP Firmware Version : N/A
GPU Virtualization Mode
Virtualization Mode : None
Host VGPU Mode : N/A
IBMNPU
Relaxed Ordering Mode : N/A
PCI
Bus : 0x01
Device : 0x00
Domain : 0x0000
Device Id : 0x220610DE
Bus Id : 00000000:01:00.0
Sub System Id : 0x145510DE
GPU Link Info
PCIe Generation
Max : 3
Current : 1
Link Width
Max : 16x
Current : 16x
Bridge Chip
Type : N/A
Firmware : N/A
Replays Since Reset : 0
Replay Number Rollovers : 0
Tx Throughput : 0 KB/s
Rx Throughput : 17000 KB/s
Fan Speed : 36 %
Performance State : P8
Clocks Throttle Reasons
Idle : Active
Applications Clocks Setting : Not Active
SW Power Cap : Not Active
HW Slowdown : Not Active
HW Thermal Slowdown : Not Active
HW Power Brake Slowdown : Not Active
Sync Boost : Not Active
SW Thermal Slowdown : Not Active
Display Clock Setting : Not Active
FB Memory Usage
Total : 10009 MiB
Used : 350 MiB
Free : 9659 MiB
BAR1 Memory Usage
Total : 256 MiB
Used : 17 MiB
Free : 239 MiB
Compute Mode : Default
Utilization
Gpu : 1 %
Memory : 4 %
Encoder : 0 %
Decoder : 0 %
Encoder Stats
Active Sessions : 0
Average FPS : 0
Average Latency : 0
FBC Stats
Active Sessions : 0
Average FPS : 0
Average Latency : 0
Ecc Mode
Current : N/A
Pending : N/A
ECC Errors
Volatile
SRAM Correctable : N/A
SRAM Uncorrectable : N/A
DRAM Correctable : N/A
DRAM Uncorrectable : N/A
Aggregate
SRAM Correctable : N/A
SRAM Uncorrectable : N/A
DRAM Correctable : N/A
DRAM Uncorrectable : N/A
Retired Pages
Single Bit ECC : N/A
Double Bit ECC : N/A
Pending Page Blacklist : N/A
Remapped Rows : N/A
Temperature
GPU Current Temp : 38 C
GPU Shutdown Temp : 98 C
GPU Slowdown Temp : 95 C
GPU Max Operating Temp : 93 C
GPU Target Temperature : 83 C
Memory Current Temp : N/A
Memory Max Operating Temp : N/A
Power Readings
Power Management : Supported
Power Draw : 18.19 W
Power Limit : 340.00 W
Default Power Limit : 340.00 W
Enforced Power Limit : 340.00 W
Min Power Limit : 100.00 W
Max Power Limit : 340.00 W
Clocks
Graphics : 210 MHz
SM : 210 MHz
Memory : 405 MHz
Video : 555 MHz
Applications Clocks
Graphics : N/A
Memory : N/A
Default Applications Clocks
Graphics : N/A
Memory : N/A
Max Clocks
Graphics : 2115 MHz
SM : 2115 MHz
Memory : 9501 MHz
Video : 1950 MHz
Max Customer Boost Clocks
Graphics : N/A
Clock Policy
Auto Boost : N/A
Auto Boost Default : N/A
Voltage
Graphics : 737.500 mV
Processes
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 1350
Type : G
Name : /usr/lib/xorg/Xorg
Used GPU Memory : 152 MiB
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 1689
Type : G
Name : /usr/bin/gnome-shell
Used GPU Memory : 115 MiB
GPU instance ID : N/A
Compute instance ID : N/A
Process ID : 20709
Type : G
Name : /opt/google/chrome/chrome —type=gpu-process —enable-crashpad —crashpad-handler-pid=20676 —enable-crash-reporter=, —change-stack-guard-on-fork=enable —gpu-preferences=WAAAAAAAAAAgAAAIAAAAAAAAAAAAAAAAAABgAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAIAAAAAAAAAABAAAAAAAAAAgAAAAAAAAACAAAAAAAAAAIAAAAAAAAAA== —shared-files —field-trial-handle=0,i,5650099635971777813,3590146944553519854,131072 —enable-features=BlockInsecurePrivateNetworkRequests,BlockInsecurePrivateNetworkRequestsFromPrivate,BlockInsecurePrivateNetworkRequestsFromUnknown,ClientHintThirdPartyDelegation,ClientHintsMetaHTTPEquivAcceptCH,ClientHintsMetaNameAcceptCH,ClipboardCustomFormats,CookieSameSiteConsidersRedirectChain,CriticalClientHint,DocumentPolicyNegotiation,DocumentReporting,EditContext,EnableCanvas2DLayers,ExperimentalContentSecurityPolicyFeatures,OriginIsolationHeader,OriginPolicy,PrefersColorSchemeClientHintHeader,PrivateNetworkAccessForWorkers,PrivateNetworkAccessRespectPreflightResults,SchemefulSameSite,UserAgentClientHint,UserAgentClientHintFullVersionList
Used GPU Memory : 79 MiB
- Docker version from
docker version
Client: Docker Engine - Community
Version: 20.10.15
API version: 1.41
Go version: go1.17.9
Git commit: fd82621
Built: Thu May 5 13:19:23 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server:
Engine:
Version: 20.10.14
API version: 1.41 (minimum version 1.12)
Go version: go1.16.15
Git commit: 87a90dc
Built: Thu Mar 24 17:15:03 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.5.11
GitCommit: 3df54a852345ae127d1fa3092b95168e4a88e2f8
runc:
Version: 1.0.3
GitCommit:
docker-init:
Version: 0.19.0
GitCommit: de40ad0
- NVIDIA packages version from dpkg -l '*nvidia*' or rpm -qa '*nvidia*'
dpkg -l '*nvidia*'
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-===================================-===========================-============-=========================================================
un libgldispatch0-nvidia (no description available)
ii libnvidia-cfg1-470:amd64 470.103.01-0ubuntu0.20.04.1 amd64 NVIDIA binary OpenGL/GLX configuration library
un libnvidia-cfg1-any (no description available)
un libnvidia-common (no description available)
ii libnvidia-common-455 460.91.03-0ubuntu0.20.04.1 all Transitional package for libnvidia-common-460
ii libnvidia-common-460 470.103.01-0ubuntu0.20.04.1 all Transitional package for libnvidia-common-470
ii libnvidia-common-470 470.103.01-0ubuntu0.20.04.1 all Shared files used by the NVIDIA libraries
un libnvidia-compute (no description available)
ii libnvidia-compute-455:amd64 460.91.03-0ubuntu0.20.04.1 amd64 Transitional package for libnvidia-compute-460
ii libnvidia-compute-460:amd64 470.103.01-0ubuntu0.20.04.1 amd64 Transitional package for libnvidia-compute-470
ii libnvidia-compute-470:amd64 470.103.01-0ubuntu0.20.04.1 amd64 NVIDIA libcompute package
ii libnvidia-compute-470:i386 470.103.01-0ubuntu0.20.04.1 i386 NVIDIA libcompute package
ii libnvidia-container-tools 1.9.0-1 amd64 NVIDIA container runtime library (command-line tools)
ii libnvidia-container1:amd64 1.9.0-1 amd64 NVIDIA container runtime library
un libnvidia-decode (no description available)
ii libnvidia-decode-455:amd64 460.91.03-0ubuntu0.20.04.1 amd64 Transitional package for libnvidia-decode-460
ii libnvidia-decode-460:amd64 470.103.01-0ubuntu0.20.04.1 amd64 Transitional package for libnvidia-decode-470
ii libnvidia-decode-470:amd64 470.103.01-0ubuntu0.20.04.1 amd64 NVIDIA Video Decoding runtime libraries
ii libnvidia-decode-470:i386 470.103.01-0ubuntu0.20.04.1 i386 NVIDIA Video Decoding runtime libraries
un libnvidia-encode (no description available)
ii libnvidia-encode-455:amd64 460.91.03-0ubuntu0.20.04.1 amd64 Transitional package for libnvidia-encode-460
ii libnvidia-encode-460:amd64 470.103.01-0ubuntu0.20.04.1 amd64 Transitional package for libnvidia-encode-470
ii libnvidia-encode-470:amd64 470.103.01-0ubuntu0.20.04.1 amd64 NVENC Video Encoding runtime library
ii libnvidia-encode-470:i386 470.103.01-0ubuntu0.20.04.1 i386 NVENC Video Encoding runtime library
un libnvidia-extra (no description available)
ii libnvidia-extra-470:amd64 470.103.01-0ubuntu0.20.04.1 amd64 Extra libraries for the NVIDIA driver
un libnvidia-fbc1 (no description available)
ii libnvidia-fbc1-455:amd64 460.91.03-0ubuntu0.20.04.1 amd64 Transitional package for libnvidia-fbc1-460
ii libnvidia-fbc1-460:amd64 470.103.01-0ubuntu0.20.04.1 amd64 Transitional package for libnvidia-fbc1-470
ii libnvidia-fbc1-470:amd64 470.103.01-0ubuntu0.20.04.1 amd64 NVIDIA OpenGL-based Framebuffer Capture runtime library
ii libnvidia-fbc1-470:i386 470.103.01-0ubuntu0.20.04.1 i386 NVIDIA OpenGL-based Framebuffer Capture runtime library
un libnvidia-gl (no description available)
ii libnvidia-gl-455:amd64 460.91.03-0ubuntu0.20.04.1 amd64 Transitional package for libnvidia-gl-460
ii libnvidia-gl-460:amd64 470.103.01-0ubuntu0.20.04.1 amd64 Transitional package for libnvidia-gl-470
ii libnvidia-gl-470:amd64 470.103.01-0ubuntu0.20.04.1 amd64 NVIDIA OpenGL/GLX/EGL/GLES GLVND libraries and Vulkan ICD
ii libnvidia-gl-470:i386 470.103.01-0ubuntu0.20.04.1 i386 NVIDIA OpenGL/GLX/EGL/GLES GLVND libraries and Vulkan ICD
un libnvidia-ifr1 (no description available)
ii libnvidia-ifr1-455:amd64 460.91.03-0ubuntu0.20.04.1 amd64 Transitional package for libnvidia-ifr1-460
ii libnvidia-ifr1-460:amd64 470.103.01-0ubuntu0.20.04.1 amd64 Transitional package for libnvidia-ifr1-470
ii libnvidia-ifr1-470:amd64 470.103.01-0ubuntu0.20.04.1 amd64 NVIDIA OpenGL-based Inband Frame Readback runtime library
ii libnvidia-ifr1-470:i386 470.103.01-0ubuntu0.20.04.1 i386 NVIDIA OpenGL-based Inband Frame Readback runtime library
ii libnvidia-ml-dev 10.1.243-3 amd64 NVIDIA Management Library (NVML) development files
un libnvidia-ml.so.1 (no description available)
un libnvidia-ml1 (no description available)
un libnvidia-tesla-418-ml1 (no description available)
un libnvidia-tesla-440-ml1 (no description available)
un libnvidia-tesla-cuda1 (no description available)
un nvidia-384 (no description available)
un nvidia-390 (no description available)
un nvidia-common (no description available)
un nvidia-compute-utils (no description available)
ii nvidia-compute-utils-455:amd64 460.91.03-0ubuntu0.20.04.1 amd64 Transitional package for nvidia-compute-utils-460
ii nvidia-compute-utils-460:amd64 470.103.01-0ubuntu0.20.04.1 amd64 Transitional package for nvidia-compute-utils-470
ii nvidia-compute-utils-470 470.103.01-0ubuntu0.20.04.1 amd64 NVIDIA compute utilities
un nvidia-container-runtime (no description available)
un nvidia-container-runtime-hook (no description available)
ii nvidia-container-toolkit 1.9.0-1 amd64 NVIDIA container runtime hook
ii nvidia-cuda-dev 10.1.243-3 amd64 NVIDIA CUDA development files
ii nvidia-cuda-doc 10.1.243-3 all NVIDIA CUDA and OpenCL documentation
ii nvidia-cuda-gdb 10.1.243-3 amd64 NVIDIA CUDA Debugger (GDB)
ii nvidia-cuda-toolkit 10.1.243-3 amd64 NVIDIA CUDA development toolkit
ii nvidia-dkms-455 460.91.03-0ubuntu0.20.04.1 amd64 Transitional package for nvidia-dkms-460
ii nvidia-dkms-460 470.103.01-0ubuntu0.20.04.1 amd64 Transitional package for nvidia-dkms-470
ii nvidia-dkms-470 470.103.01-0ubuntu0.20.04.1 amd64 NVIDIA DKMS package
un nvidia-dkms-kernel (no description available)
un nvidia-docker (no description available)
ii nvidia-docker2 2.10.0-1 all nvidia-docker CLI wrapper
un nvidia-driver (no description available)
ii nvidia-driver-455 460.91.03-0ubuntu0.20.04.1 amd64 Transitional package for nvidia-driver-460
ii nvidia-driver-460 470.103.01-0ubuntu0.20.04.1 amd64 Transitional package for nvidia-driver-470
ii nvidia-driver-470 470.103.01-0ubuntu0.20.04.1 amd64 NVIDIA driver metapackage
un nvidia-driver-binary (no description available)
un nvidia-kernel-common (no description available)
ii nvidia-kernel-common-455:amd64 460.91.03-0ubuntu0.20.04.1 amd64 Transitional package for nvidia-kernel-common-460
ii nvidia-kernel-common-460:amd64 470.103.01-0ubuntu0.20.04.1 amd64 Transitional package for nvidia-kernel-common-470
ii nvidia-kernel-common-470 470.103.01-0ubuntu0.20.04.1 amd64 Shared files used with the kernel module
un nvidia-kernel-source (no description available)
ii nvidia-kernel-source-455 460.91.03-0ubuntu0.20.04.1 amd64 Transitional package for nvidia-kernel-source-460
ii nvidia-kernel-source-460 470.103.01-0ubuntu0.20.04.1 amd64 Transitional package for nvidia-kernel-source-470
ii nvidia-kernel-source-470 470.103.01-0ubuntu0.20.04.1 amd64 NVIDIA kernel source package
un nvidia-legacy-304xx-vdpau-driver (no description available)
un nvidia-legacy-340xx-vdpau-driver (no description available)
un nvidia-libopencl1 (no description available)
un nvidia-libopencl1-dev (no description available)
ii nvidia-modprobe 455.32.00-0ubuntu1 amd64 Load the NVIDIA kernel driver and create device files
ii nvidia-opencl-dev:amd64 10.1.243-3 amd64 NVIDIA OpenCL development files
un nvidia-opencl-icd (no description available)
un nvidia-persistenced (no description available)
ii nvidia-prime 0.8.14 all Tools to enable NVIDIA’s Prime
ii nvidia-profiler 10.1.243-3 amd64 NVIDIA Profiler for CUDA and OpenCL
ii nvidia-settings 455.32.00-0ubuntu1 amd64 Tool for configuring the NVIDIA graphics driver
un nvidia-settings-binary (no description available)
un nvidia-smi (no description available)
un nvidia-tesla-418-driver (no description available)
un nvidia-tesla-440-driver (no description available)
un nvidia-utils (no description available)
ii nvidia-utils-455:amd64 460.91.03-0ubuntu0.20.04.1 amd64 Transitional package for nvidia-utils-460
ii nvidia-utils-460:amd64 470.103.01-0ubuntu0.20.04.1 amd64 Transitional package for nvidia-utils-470
ii nvidia-utils-470 470.103.01-0ubuntu0.20.04.1 amd64 NVIDIA driver support binaries
un nvidia-vdpau-driver (no description available)
ii nvidia-visual-profiler 10.1.243-3 amd64 NVIDIA Visual Profiler for CUDA and OpenCL
ii xserver-xorg-video-nvidia-455:amd64 460.91.03-0ubuntu0.20.04.1 amd64 Transitional package for xserver-xorg-video-nvidia-460
ii xserver-xorg-video-nvidia-460:amd64 470.103.01-0ubuntu0.20.04.1 amd64 Transitional package for xserver-xorg-video-nvidia-470
ii xserver-xorg-video-nvidia-470 470.103.01-0ubuntu0.20.04.1 amd64 NVIDIA binary Xorg driver
- NVIDIA container library version from nvidia-container-cli -V
cli-version: 1.9.0
lib-version: 1.9.0
build date: 2022-03-18T13:46+00:00
build revision: 5e135c17d6dbae861ec343e9a8d3a0d2af758a4f
build compiler: x86_64-linux-gnu-gcc-7 7.5.0
build platform: x86_64
build flags: -D_GNU_SOURCE -D_FORTIFY_SOURCE=2 -DNDEBUG -std=gnu11 -O2 -g -fdata-sections -ffunction-sections -fplan9-extensions -fstack-protector -fno-strict-aliasing -fvisibility=hidden -Wall -Wextra -Wcast-align -Wpointer-arith -Wmissing-prototypes -Wnonnull -Wwrite-strings -Wlogical-op -Wformat=2 -Wmissing-format-attribute -Winit-self -Wshadow -Wstrict-prototypes -Wunreachable-code -Wconversion -Wsign-conversion -Wno-unknown-warning-option -Wno-format-extra-args -Wno-gnu-alignof-expression -Wl,-zrelro -Wl,-znow -Wl,-zdefs -Wl,--gc-sections
- NVIDIA container library logs (see troubleshooting)
- Docker command, image and tag used