nvidia-smi error


I’m running an AWS EC2 g2.2xlarge instance with Ubuntu 14.04 LTS.
I’d like to observe the GPU utilization while training my TensorFlow models.
I get an error trying to run ‘nvidia-smi’.

ubuntu@ip-10-0-1-213:/etc/alternatives$ cd /usr/lib/nvidia-375/bin
ubuntu@ip-10-0-1-213:/usr/lib/nvidia-375/bin$ ls
nvidia-bug-report.sh     nvidia-debugdump     nvidia-xconfig
nvidia-cuda-mps-control  nvidia-persistenced
nvidia-cuda-mps-server   nvidia-smi
ubuntu@ip-10-0-1-213:/usr/lib/nvidia-375/bin$ ./nvidia-smi
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.


ubuntu@ip-10-0-1-213:/usr/lib/nvidia-375/bin$ dpkg -l | grep nvidia 
ii  nvidia-346                                            352.63-0ubuntu0.14.04.1                             amd64        Transitional package for nvidia-346
ii  nvidia-346-dev                                        346.46-0ubuntu1                                     amd64        NVIDIA binary Xorg driver development files
ii  nvidia-346-uvm                                        346.96-0ubuntu0.0.1                                 amd64        Transitional package for nvidia-346
ii  nvidia-352                                            375.26-0ubuntu1                                     amd64        Transitional package for nvidia-375
ii  nvidia-375                                            375.39-0ubuntu0.14.04.1                             amd64        NVIDIA binary driver - version 375.39
ii  nvidia-375-dev                                        375.39-0ubuntu0.14.04.1                             amd64        NVIDIA binary Xorg driver development files
ii  nvidia-modprobe                                       375.26-0ubuntu1                                     amd64        Load the NVIDIA kernel driver and create device files
ii  nvidia-opencl-icd-346                                 352.63-0ubuntu0.14.04.1                             amd64        Transitional package for nvidia-opencl-icd-352
ii  nvidia-opencl-icd-352                                 375.26-0ubuntu1                                     amd64        Transitional package for nvidia-opencl-icd-375
ii  nvidia-opencl-icd-375                                 375.39-0ubuntu0.14.04.1                             amd64        NVIDIA OpenCL ICD
ii  nvidia-prime                                          0.6.2.1                                             amd64        Tools to enable NVIDIA's Prime
ii  nvidia-settings                                       375.26-0ubuntu1                                     amd64        Tool for configuring the NVIDIA graphics driver
ubuntu@ip-10-0-1-213:/usr/lib/nvidia-375/bin$ lspci | grep -i nvidia
00:03.0 VGA compatible controller: NVIDIA Corporation GK104GL [GRID K520] (rev a1)
ubuntu@ip-10-0-1-213:/usr/lib/nvidia-375/bin$ 

$ inxi -G
Graphics:  Card-1: Cirrus Logic GD 5446 
           Card-2: NVIDIA GK104GL [GRID K520] 
           X.org: 1.15.1 driver: N/A tty size: 80x24 Advanced Data: N/A out of X

$  lspci -k | grep -A 2 -E "(VGA|3D)"
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
    Subsystem: XenSource, Inc. Device 0001
    Kernel driver in use: cirrus
00:03.0 VGA compatible controller: NVIDIA Corporation GK104GL [GRID K520] (rev a1)
    Subsystem: NVIDIA Corporation Device 1014
00:1f.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)

I followed these instructions to install CUDA 7 and cuDNN:

$sudo apt-get -q2 update
$sudo apt-get upgrade
$sudo reboot

=======================================================================

Post reboot, update the initramfs by running ‘$sudo update-initramfs -u’

Now, please edit the /etc/modprobe.d/blacklist.conf file to blacklist nouveau. Open the file in an editor and insert the following lines at the end of the file.

blacklist nouveau
blacklist lbm-nouveau
options nouveau modeset=0
alias nouveau off
alias lbm-nouveau off

Save and exit from the file.
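
If you prefer to append these lines non-interactively, one way to do it (a sketch that assumes the standard Ubuntu blacklist path used above) is with a heredoc:

$sudo tee -a /etc/modprobe.d/blacklist.conf > /dev/null <<'EOF'
# Disable the open-source nouveau driver so the NVIDIA driver can bind to the GPU
blacklist nouveau
blacklist lbm-nouveau
options nouveau modeset=0
alias nouveau off
alias lbm-nouveau off
EOF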

Now install the build essential tools and update the initramfs and reboot again as below:

$sudo apt-get install linux-{headers,image,image-extra}-$(uname -r) build-essential
$sudo update-initramfs -u
$sudo reboot

========================================================================

Post reboot, run the following commands to install the NVIDIA driver and CUDA toolkit.

$sudo wget http://developer.download.nvidia.com/compute/cuda/7_0/Prod/local_installers/cuda_7.0.28_linux.run
$sudo chmod 700 ./cuda_7.0.28_linux.run
$sudo ./cuda_7.0.28_linux.run
$sudo update-initramfs -u
$sudo reboot

========================================================================

Now that the system has come up, verify the installation by running the following.

$sudo modprobe nvidia
$sudo nvidia-smi -q | head

You should see output listing the driver version and the GRID K520 GPU (the original guide showed this as a screenshot, ‘nvidia.png’).

Now run the following commands.

$cd ~/NVIDIA_CUDA-7.0_Samples/1_Utilities/deviceQuery
$make
$./deviceQuery
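
A quick way to confirm the sample actually reached the GPU (my addition, not part of the original instructions) is to check the verdict deviceQuery prints at the end of its report:

$./deviceQuery | grep -i result    # should report PASS when the driver and CUDA runtime are healthy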

However, ‘nvidia-smi’ still doesn’t show GPU activity while TensorFlow is training models:

ubuntu@ip-10-0-1-48:~$ ipython
Python 2.7.11 |Anaconda custom (64-bit)| (default, Dec  6 2015, 18:08:32) 
Type "copyright", "credits" or "license" for more information.

IPython 4.1.2 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import tensorflow as tf 
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.7.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.7.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.7.5 locally



ubuntu@ip-10-0-1-48:~$ nvidia-smi
Thu Mar 30 05:45:26 2017       
+------------------------------------------------------+                       
| NVIDIA-SMI 346.46     Driver Version: 346.46         |                       
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K520           Off  | 0000:00:03.0     Off |                  N/A |
| N/A   35C    P0    38W / 125W |     10MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
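
For reference, once nvidia-smi can communicate with the driver, GPU utilization during training is commonly watched with standard nvidia-smi options such as the following (not quoted from the original post):

$ watch -n 1 nvidia-smi
$ nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1    # machine-readable, refreshed every second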

Problem

Recently, when I re-configured my work environment, I installed the Nvidia GPU driver. But after installation and a reboot, I could not use the nvidia-smi command to get GPU information. The only message was:

NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.

I tried installing different versions from the command line and via the GUI, but that was not the real issue. Finally, I found that I had not used the MOK manager correctly, which is why my driver could not work.


What is MOK?

MOK stands for Machine Owner Key. It is part of the Secure Boot process, which protects operating system components and drivers by only allowing signed code to load.

It is implemented by the UEFI firmware (Secure Boot), not by the operating system itself.

We need to create a pair of keys: the private key is used to sign the driver module so that it is allowed to load, and the public key is enrolled with MOK so that the firmware can verify that signature.


Solution

During the installation of the Nvidia driver, we are given the opportunity to set a password for MOK enrollment. If you want to reinstall the driver, you can use the following command to remove the currently installed nvidia packages.

sudo apt purge nvidia-*

Now you may want to search for the Nvidia driver versions you can install:

sudo apt search nvidia-driver*

If you find the version you want to use, key in:

sudo apt install nvidia-driver-<NVIDIA DRIVER VERSION>
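
For example (the version number below is purely illustrative; use one that actually appears in the search output above):

sudo apt install nvidia-driver-470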

For related apt command operations, please refer to the links at the bottom of the article.

In case you missed the MOK screen and it does not appear on the next reboot, you can execute the following command to restart the enrollment procedure:

sudo mokutil --import /var/lib/shim-signed/mok/MOK.der

You’ll be prompted for your password, and you’ll be taken to the MOK screen upon reboot.
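
Before going through the enrollment again, it can help to confirm that Secure Boot is actually enabled and whether the key is already enrolled (standard mokutil options; this check is my addition, not from the original article):

mokutil --sb-state
mokutil --test-key /var/lib/shim-signed/mok/MOK.der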

Operation steps in MOK screen

  1. Set a password at the Configure Secure Boot stage, and remember it
  2. If you reach the Perform MOK management screen, select Enroll MOK > Continue > Yes
  3. Enter the password at the Enroll the key(s)? screen
  4. Select OK to reboot
  5. After rebooting, run nvidia-smi to check that the driver is working (a slightly fuller check is sketched below)
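
A slightly fuller post-reboot check might look like this (generic commands, not from the original article):

dkms status               # the nvidia module should be listed as installed for the running kernel
lsmod | grep -i nvidia    # the kernel module should now be loaded
nvidia-smi                # should print the driver version and the GPU table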

Hope everyone can successfully load their drivers!


References

  • https://unix.stackexchange.com/questions/535434/what-exactly-is-mok-in-linux-for
  • https://gist.github.com/bitsurgeon/b0f4440984c9e60dcd8fe8bbc346c029
  • https://askubuntu.com/questions/1122855/mok-manager-nvidia-driver-issue-after-cuda-install

Read More

  • [Linux] The Difference Between «apt» And «apt-get»
  • [Solved] An Error Occurred while Installing the Nvidia Driver: «The Nouveau kernel driver is currently in use by your system. …»
  • [Solved] An NVIDIA kernel module ‘nvidia-drm’ appears to already be loaded in your kernel

I recently installed Ubuntu 18.04 and installed the NVIDIA drivers, but they don’t seem to be loading.

I had to make the following edit to my grub file to make things boot properly as per this little guide.

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nouveau.modeset=0 tpm_tis.interrupts=0 acpi_osi=Linux i915.preliminary_hw_support=1 idle=nomwait"
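
Note that a change like this to /etc/default/grub only takes effect after regenerating the GRUB configuration and rebooting; the usual Ubuntu commands for that (not shown in the original post) are:

$ sudo update-grub
$ sudo reboot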

This is what my command outputs currently give:

$ nvidia-smi 
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.


$ nvidia-settings 
ERROR: NVIDIA driver is not loaded
ERROR: Error querying enabled displays on GPU 0 (Missing Extension)


$ uname -r
4.15.0-38-generic


$ lshw -c display
WARNING: you should run this program as super-user.
  *-display UNCLAIMED       
       description: VGA compatible controller
       product: GP102 [GeForce GTX 1080 Ti]
       vendor: NVIDIA Corporation
       physical id: 0
       bus info: pci@0000:0a:00.0
       version: a1
       width: 64 bits
       clock: 33MHz
       capabilities: vga_controller cap_list
       configuration: latency=0
       resources: memory:f4000000-f4ffffff memory:60000000-6fffffff memory:70000000-71ffffff ioport:a000(size=128) memory:f5000000-f507ffff
  *-display UNCLAIMED
       description: VGA compatible controller
       product: GP102 [GeForce GTX 1080 Ti]
       vendor: NVIDIA Corporation
       physical id: 0
       bus info: pci@0000:09:00.0
       version: a1
       width: 64 bits
       clock: 33MHz
       capabilities: vga_controller cap_list
       configuration: latency=0
       resources: memory:f6000000-f6ffffff memory:80000000-8fffffff memory:90000000-91ffffff ioport:b000(size=128) memory:f7000000-f707ffff
  *-display UNCLAIMED
       description: VGA compatible controller
       product: GP102 [GeForce GTX 1080 Ti]
       vendor: NVIDIA Corporation
       physical id: 0
       bus info: pci@0000:06:00.0
       version: a1
       width: 64 bits
       clock: 33MHz
       capabilities: vga_controller cap_list
       configuration: latency=0
       resources: memory:f8000000-f8ffffff memory:a0000000-afffffff memory:b0000000-b1ffffff ioport:c000(size=128) memory:f9000000-f907ffff
  *-display UNCLAIMED
       description: VGA compatible controller
       product: GP102 [GeForce GTX 1080 Ti]
       vendor: NVIDIA Corporation
       physical id: 0
       bus info: pci@0000:05:00.0
       version: a1
       width: 64 bits
       clock: 33MHz
       capabilities: vga_controller bus_master cap_list
       configuration: latency=0
       resources: memory:fa000000-faffffff memory:c0000000-cfffffff memory:d0000000-d1ffffff ioport:d000(size=128) memory:c0000-dffff
WARNING: output may be incomplete or inaccurate, you should run this program as super-user.



$ lspci | grep -i nvidia
05:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
05:00.1 Audio device: NVIDIA Corporation GP102 HDMI Audio Controller (rev a1)
06:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
06:00.1 Audio device: NVIDIA Corporation GP102 HDMI Audio Controller (rev a1)
09:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
09:00.1 Audio device: NVIDIA Corporation GP102 HDMI Audio Controller (rev a1)
0a:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
0a:00.1 Audio device: NVIDIA Corporation GP102 HDMI Audio Controller (rev a1)


$ lsmod 
Module                  Size  Used by
snd_hda_codec_hdmi     49152  4
nls_iso8859_1          16384  1
input_leds             16384  0
intel_rapl             20480  0
eeepc_wmi              16384  0
asus_wmi               28672  1 eeepc_wmi
sparse_keymap          16384  1 asus_wmi
video                  45056  1 asus_wmi
mxm_wmi                16384  0
wmi_bmof               16384  0
x86_pkg_temp_thermal    16384  0
intel_powerclamp       16384  0
intel_wmi_thunderbolt    16384  0
coretemp               16384  0
snd_hda_codec_realtek   106496  1
snd_hda_codec_generic    73728  1 snd_hda_codec_realtek
kvm                   598016  0
irqbypass              16384  1 kvm
snd_seq_midi           16384  0
crct10dif_pclmul       16384  0
crc32_pclmul           16384  0
snd_seq_midi_event     16384  1 snd_seq_midi
snd_hda_intel          40960  7
ghash_clmulni_intel    16384  0
pcbc                   16384  0
snd_rawmidi            32768  1 snd_seq_midi
snd_hda_codec         126976  4 snd_hda_codec_generic,snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec_realtek
snd_hda_core           81920  5 snd_hda_codec_generic,snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_hda_codec_realtek
snd_hwdep              20480  1 snd_hda_codec
snd_pcm                98304  4 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_hda_core
snd_seq                65536  2 snd_seq_midi,snd_seq_midi_event
snd_seq_device         16384  3 snd_seq,snd_seq_midi,snd_rawmidi
aesni_intel           188416  0
snd_timer              32768  2 snd_seq,snd_pcm
aes_x86_64             20480  1 aesni_intel
crypto_simd            16384  1 aesni_intel
glue_helper            16384  1 aesni_intel
snd                    81920  25 snd_hda_codec_generic,snd_seq,snd_seq_device,snd_hda_codec_hdmi,snd_hwdep,snd_hda_intel,snd_hda_codec,snd_hda_codec_realtek,snd_timer,snd_pcm,snd_rawmidi
cryptd                 24576  3 crypto_simd,ghash_clmulni_intel,aesni_intel
mei_me                 40960  0
lpc_ich                24576  0
intel_cstate           20480  0
soundcore              16384  1 snd
intel_rapl_perf        16384  0
shpchp                 36864  0
mei                    90112  1 mei_me
joydev                 24576  0
wmi                    24576  4 intel_wmi_thunderbolt,asus_wmi,wmi_bmof,mxm_wmi
mac_hid                16384  0
sch_fq_codel           20480  6
parport_pc             36864  0
ppdev                  20480  0
lp                     20480  0
parport                49152  3 parport_pc,lp,ppdev
ip_tables              28672  0
x_tables               40960  1 ip_tables
autofs4                40960  2
hid_generic            16384  0
usbhid                 49152  0
hid                   118784  2 usbhid,hid_generic
drm_kms_helper        172032  0
syscopyarea            16384  1 drm_kms_helper
sysfillrect            16384  1 drm_kms_helper
igb                   212992  0
sysimgblt              16384  1 drm_kms_helper
fb_sys_fops            16384  1 drm_kms_helper
e1000e                249856  0
dca                    16384  1 igb
drm                   401408  1 drm_kms_helper
i2c_algo_bit           16384  1 igb
ahci                   36864  2
ptp                    20480  2 igb,e1000e
pps_core               20480  1 ptp
libahci                32768  1 ahci
ipmi_devintf           20480  0
ipmi_msghandler        53248  1 ipmi_devintf

I installed the drivers via sudo ubuntu-drivers autoinstall as per this guide.

EDIT: Here are some more command outputs:

$ dpkg --get-selections | grep nvidia-driver-
nvidia-driver-390                install

$ lsmod | grep nvidia
<blank>

$ ls /lib/modules/*/updates/dkms/nvidia.ko
/lib/modules/4.15.0-38-generic/updates/dkms/nvidia.ko
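
Since the .ko file exists under updates/dkms but lsmod shows nothing, a common next diagnostic step (my addition, not part of the original question) is to try loading the module by hand and then read the kernel log for the reason it was rejected, e.g. a Secure Boot signature failure:

$ sudo modprobe nvidia
$ dmesg | tail -n 20    # look for lines such as "module verification failed" or nvidia probe errors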

Problem description

I am trying to set up a CentOS 7 GPU (NVIDIA Tesla K80) instance on Google Cloud to execute CUDA work.

Unfortunately, I can’t seem to properly install/configure drivers.

Indeed, here is what happens when trying to interact with nvidia-smi (NVIDIA System Management Interface):

# nvidia-smi -pm 1
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.

The same operation with the more recent method, nvidia-persistenced:

# nvidia-persistenced
nvidia-persistenced failed to initialize. Check syslog for more details.

In addition, I get the following error in syslog (via the journalctl command):

Failed to query NVIDIA devices. Please ensure that the NVIDIA device files (/dev/nvidia*) exist, and that user 0 has read and write permissions for those files.

Indeed, no nvidia devices are present:

# ll /dev/nvidia*
ls: cannot access /dev/nvidia*: No such file or directory

However, here is proof that the GPU is correctly connected to the instance:

# lshw -numeric -C display
  *-display UNCLAIMED       
       description: 3D controller
       product: GK210GL [Tesla K80] [10DE:102D]
       vendor: NVIDIA Corporation [10DE]
       physical id: 4
       bus info: pci@0000:00:04.0
       version: a1
       width: 64 bits
       clock: 33MHz
       capabilities: msi pm cap_list
       configuration: latency=0
       resources: iomemory:40-3f iomemory:80-7f memory:fc000000-fcffffff memory:400000000-7ffffffff memory:800000000-801ffffff ioport:c000(size=128)
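
lshw reporting the controller as UNCLAIMED means no kernel driver is bound to it. A quick way to double-check which driver, if any, has claimed the device (generic lspci/lsmod usage, not from the original report):

# lspci -k -s 00:04.0
# lsmod | grep -Ei 'nouveau|nvidia'

If the lspci output has no "Kernel driver in use:" line and lsmod prints nothing, the NVIDIA kernel module was never built or loaded.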

Installation process I followed

Creation of the centos-7 instance, following this section of the Google Cloud docs:

gcloud compute instances create test-gpu-drivers \
    --machine-type n1-standard-2 \
    --boot-disk-size 250GB \
    --accelerator type=nvidia-tesla-k80,count=1 \
    --image-family centos-7 --image-project centos-cloud \
    --maintenance-policy TERMINATE

Then, the installation process I followed for the drivers and CUDA was inspired by the Google Cloud documentation, but using the latest versions instead:

gcloud compute ssh test-gpu-drivers
sudo su
yum -y update

# Reboot for kernel update to be taken into account
reboot

gcloud compute ssh test-gpu-drivers
sudo su

# Install nvidia drivers repository, found here: https://www.nvidia.com/Download/index.aspx?lang=en-us
curl -J -O http://us.download.nvidia.com/tesla/410.72/nvidia-diag-driver-local-repo-rhel7-410.72-1.0-1.x86_64.rpm
yum -y install ./nvidia-diag-driver-local-repo-rhel7-410.72-1.0-1.x86_64.rpm

# Install CUDA repository, found here: https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&target_distro=CentOS&target_version=7&target_type=rpmlocal
curl -J -O https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-repo-rhel7-10.0.130-1.x86_64.rpm
yum -y install ./cuda-repo-rhel7-10.0.130-1.x86_64.rpm

# Install CUDA & drivers & dependencies
yum clean all
yum -y install cuda

nvidia-smi -pm 1

reboot

gcloud compute ssh test-gpu-drivers
sudo su
nvidia-smi -pm 1

Full logs here.

(I also tried the exact GCE driver install script, without upgrading versions, but with no luck either.)
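
One thing worth checking on the CentOS side (my own suggestion, not derived from the logs above) is whether the driver kernel module was actually built for the running kernel; when the driver is built through DKMS, the build quietly fails if kernel-devel does not match uname -r:

# uname -r
# rpm -q kernel-devel-$(uname -r)    # must be installed and match the running kernel
# dkms status                        # if DKMS is used, the nvidia module should show as installed
# lsmod | grep -i nvidia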

Environment

  • Distribution release

    [root@test-gpu-drivers myuser]# cat /etc/*-release | head -n 1
    CentOS Linux release 7.6.1810 (Core) 
    
  • Kernel release

    [root@test-gpu-drivers myuser]# uname -r
    3.10.0-957.1.3.el7.x86_64
    

I can make it work on Ubuntu!

To analyze the problem, I decided to try doing the same thing on Ubuntu 18.04 (LTS). This time, I had no problem.

Instance creation:

gcloud compute instances create gpu-ubuntu-1804 \
    --machine-type n1-standard-2 \
    --boot-disk-size 250GB \
    --accelerator type=nvidia-tesla-k80,count=1 \
    --image-family ubuntu-1804-lts --image-project ubuntu-os-cloud \
    --maintenance-policy TERMINATE

Install process:

gcloud compute ssh gpu-ubuntu-1804
sudo su
apt update
apt -y upgrade
reboot

gcloud compute ssh gpu-ubuntu-1804
sudo su
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
apt -y install ./cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
rm cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
apt-get update
apt-get -y install cuda
nvidia-smi -pm 1

Full installation logs available here.

Test:

# nvidia-smi -pm 1
Enabled persistence mode for GPU 00000000:00:04.0.
All done.
# ll /dev/nvidia*
crw-rw-rw- 1 root root 241,   0 Dec  4 14:01 /dev/nvidia-uvm
crw-rw-rw- 1 root root 195,   0 Dec  4 14:01 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Dec  4 14:01 /dev/nvidiactl

One thing I noticed is that on Ubuntu, installing the nvidia-dkms package triggers additional setup (blacklisting Nouveau, rebuilding the initrd, building the DKMS modules), which I did not see on CentOS:

Setting up nvidia-dkms-410 (410.79-0ubuntu1) ...
update-initramfs: deferring update (trigger activated)

A modprobe blacklist file has been created at /etc/modprobe.d to prevent Nouveau
from loading. This can be reverted by deleting the following file:
/etc/modprobe.d/nvidia-graphics-drivers.conf

A new initrd image has also been created. To revert, please regenerate your
initrd by running the following command after deleting the modprobe.d file:
`/usr/sbin/initramfs -u`

*****************************************************************************
*** Reboot your computer and verify that the NVIDIA graphics driver can   ***
*** be loaded.                                                            ***
*****************************************************************************

Loading new nvidia-410.79 DKMS files...
Building for 4.15.0-1025-gcp
Building for architecture x86_64
Building initial module for 4.15.0-1025-gcp
Generating a 2048 bit RSA private key
.............................................................................................................+++
..........+++
writing new private key to '/var/lib/shim-signed/mok/MOK.priv'
-----
EFI variables are not supported on this system
/sys/firmware/efi/efivars not found, aborting.
Done.

nvidia:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.15.0-1025-gcp/updates/dkms/

nvidia-modeset.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.15.0-1025-gcp/updates/dkms/

nvidia-drm.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.15.0-1025-gcp/updates/dkms/

nvidia-uvm.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.15.0-1025-gcp/updates/dkms/

depmod...

DKMS: install completed.

Environment

  • Distribution release

    root@gpu-ubuntu-1804:/home/elouan_keryell-even# cat /etc/*-release
    DISTRIB_ID=Ubuntu
    DISTRIB_RELEASE=18.04
    DISTRIB_CODENAME=bionic
    DISTRIB_DESCRIPTION="Ubuntu 18.04.1 LTS"
    NAME="Ubuntu"
    VERSION="18.04.1 LTS (Bionic Beaver)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 18.04.1 LTS"
    VERSION_ID="18.04"
    HOME_URL="https://www.ubuntu.com/"
    SUPPORT_URL="https://help.ubuntu.com/"
    BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
    VERSION_CODENAME=bionic
    UBUNTU_CODENAME=bionic
    
  • Kernel release

    root@gpu-ubuntu-1804:/home/elouan_keryell-even# uname -r
    4.15.0-1025-gcp
    

Question

Does anyone understand what goes wrong with my installation of the NVIDIA drivers on CentOS 7?
