Error: couldn't open display (null) - glxgears

seth wrote:

seth wrote:

Please stop doing random things, you are turning this into a goose chase.

First, I want to apologize: I have been somewhat reckless with this issue because I’ve been doing things in a hurry, which I’m finding out is a good way to cause new problems. I really appreciate the time you’ve taken to help me solve this, so I will make sure not to make any changes or posts in a hurry from now on.

That being said, I had installed bbswitch between yesterday’s post and the day before. bumblebeed doesn’t give any errors now:

# systemctl status bumblebeed
* bumblebeed.service - Bumblebee C Daemon
   Loaded: loaded (/usr/lib/systemd/system/bumblebeed.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2018-01-28 02:26:19 UTC; 46s ago
 Main PID: 368 (bumblebeed)
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/bumblebeed.service
           `-368 /usr/bin/bumblebeed

Jan 28 02:26:19 morgan systemd[1]: Started Bumblebee C Daemon.
Jan 28 02:26:19 morgan bumblebeed[368]: [    4.469721] [INFO]/usr/bin/bumblebeed 3.2.1 started

seth wrote:

do you have an ~/.xinitrc ?

seth wrote:

you’re not trying to optirun stuff from a linux console (ie. w/o any X11 server running), are you?

These two comments are honestly what solved my problem. The root of my problem was that I didn’t know what a tilde (~) meant in the Unix file system. Previously I had inferred that ~/insertname was a symlink made in the root directory by an application, and that it led to whichever directory or file insertname was, in this case ~/.xinitrc pointing at /etc/X11/xinit/xinitrc. So, when I followed the optimus guide and it told me to put those two lines into my ~/.xinitrc, they also happened to be the only two lines in the file… How I managed to get away with thinking that for so long I don’t know, but I now understand that ~/ refers to my home directory.
I was also naïve about what xinit actually did, which is why I did not bat an eye at there being only two lines in my xinitrc. In fact, I don’t think I even had xinit installed until yesterday. I had not read the wiki page on it until today; my reasoning was that I should install drivers (Bumblebee being part of that process) before dealing with features related to X. So yes, I had been trying to run optirun without X since the beginning of this thread.
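(For anyone hitting the same confusion: ~ expands to your home directory, so ~/.xinitrc is the per-user script that startx/xinit runs to build your session. A minimal sketch, with the window manager chosen purely as an example:)

# ~/.xinitrc -- executed by startx/xinit for this user
# (minimal example; a fuller setup usually also sources the scripts under /etc/X11/xinit/xinitrc.d)
exec twm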

tl;dr
I was trying to run optirun without X running, and the only thing my ~/.xinitrc file contained was two commented lines, and before that, two uncommented lines!
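In other words, the fix was simply to start an X session first and only then run optirun from a terminal inside it. A rough sketch (glxgears itself comes from the mesa-demos package on Arch):

startx              # starts X using ~/.xinitrc
optirun glxgears    # run from a terminal inside that X session; Bumblebee powers up the NVIDIA card for it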

Thanks for the help guys, here is a picture of twm running, and it looks like Bumblebee is working correctly too. I have marked this thread as solved, cheers!
https://dl.dropbox.com/s/uuw0hw1sss7rs6 … blebee.jpg

Mod note: Converted image to url — V1del

Last edited by V1del (2018-01-28 08:55:34)


Ximik

Posts: 60

OpenSUSE 11.1 and glxgears

I installed OpenSUSE 11.1, added the nvidia repository, and installed the 185.18.36 drivers. Everything looks fine: resolution, colors. But it refuses to spin the gears! "glxgears: Error: couldn't open display '(null)'". Can anyone tell me what the problem is?


step_slim

Posts: 431
OS: SuSE 11.2; many OSes in VirtualBox

Re: OpenSUSE 11.1 and glxgears

Post by step_slim » 08.10.2009 23:09

Ximik wrote: ↑

08.10.2009 21:26

step_slim wrote: ↑

08.10.2009 10:45

What card is it?

ASUS 8600gt

That's odd, I don't think I've run into this kind of problem with that card before. How does 3D behave?



Atolstoy

Posts: 1654
Status: Tux in the rain
OS: Linux x86_64

Re: OpenSUSE 11.1 and glxgears

Post by Atolstoy » 08.10.2009 23:18

Ximik wrote: ↑

07.10.2009 22:54

I installed OpenSUSE 11.1, added the nvidia repository, and installed the 185.18.36 drivers. Everything looks fine: resolution, colors. But it refuses to spin the gears! "glxgears: Error: couldn't open display '(null)'". Can anyone tell me what the problem is?

Try running sax2 -m 0=nvidia one more time.


kobets

Posts: 1
OS: openSUSE 11.2

Re: OpenSUSE 11.1 and glxgears

Post by kobets » 01.02.2010 22:20

Code: Select all

xxxxxxxxxr@tolga-laptop:~> glxgears
Xlib:  extension "GLX" missing on display ":0.0".
glxgears: Error: couldn't get an RGB, Double-buffered visual.
Segmentation fault

What should I do in that case?


step_slim

Posts: 431
OS: SuSE 11.2; many OSes in VirtualBox

Re: OpenSUSE 11.1 and glxgears

Post by step_slim » 02.02.2010 11:37

kobets wrote: ↑

01.02.2010 22:20

Code: Select all

xxxxxxxxxr@tolga-laptop:~> glxgears
Xlib:  extension "GLX" missing on display ":0.0".
glxgears: Error: couldn't get an RGB, Double-buffered visual.
Segmentation fault

What should I do in that case?

You should have started by describing what you had already done: your system specs, how you installed the drivers, and so on.
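A reasonable first check for a missing GLX extension is to see which OpenGL stack the X server actually loaded. A sketch (the log path assumes a classic Xorg setup):

glxinfo | grep -E "OpenGL (vendor|renderer|version)"   # which GL implementation the client sees
grep -iE "glx|nvidia" /var/log/Xorg.0.log              # did the server load the glx module and the nvidia driver?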


I’m trying to run an application, but when I run it I get a

Could not open display `(null)'.

Error. Why is this? Specifically I was trying to run scratch (which I installed via aws):

root@ip-10-251-56-90:/usr/bin# ./scratch
Executing: /usr/lib/squeak/4.4.7-2357/squeakvm -encoding UTF-8 -vm-display-x11 -xshm -plugins /usr/lib/scratch/plugins/:/usr/lib/squeak/4.4.7-2357/ -vm-sound-ALSA /usr/share/scratch/Scratch.image
Could not open display `(null)'.


asked Mar 24, 2014 at 9:17

user261504


Errors like this mean that you are running a program that needs a graphical display and it can’t find one. GUI programs connect to the display defined by the $DISPLAY environment variable. The general format of the error is

Could not open display $DISPLAY

Since, in your case, the error says (null), $DISPLAY is not set. Depending on your situation, one of the following will apply (a quick diagnostic sketch follows the list):

  1. If you are logging in to a remote machine using something like ssh, you will need to export the $DISPLAY of your local machine and tell the remote computer to display GUI programs there. This can be done with the -X or -Y options of ssh:

    ssh -Y root@10.251.56.90
    

    As explained in man ssh:

     -Y  Enables trusted X11 forwarding.  Trusted X11 forwardings are not
         subjected to the X11 SECURITY extension controls.
     -X  Enables X11 forwarding.  This can also be specified on a per-host
         basis in a configuration file.
    
  2. If this is your local machine, you need to install a graphical environment. If one is already installed, assuming a default Ubuntu setup, you can start it with this command:

    sudo service lightdm start
    
  3. If you have a graphical environment running but for whatever reason, $DISPLAY is set to null, you can redefine it. The details will depend on your actual situation but in most cases, what you will need (assuming, again, you are on your local machine) is

    export DISPLAY=:0.0
    

    You can then run your GUI program normally.

  4. If you do have an X server running but have switched to a tty (for example by pressing Ctrl+Alt+F1), you might simply need to return to your graphical environment. This depends on which virtual console your GUI is running but in most cases on Ubuntu that will be 7, so you can get back to it using Alt+F7.

    If that does not bring you back to your desktop, just cycle through the ttys with Alt+Left Arrow or Alt+Right Arrow until you find the right one.

  5. Another common problem is that you have started an X session as your normal user and are now trying to connect to it as root or another user. To enable this, you need to specify that this user has the right to access your graphical desktop. For example:

    xhost +si:localuser:terdon
    

    That will allow the local user terdon to connect to the running X server. The command needs to be run by the owner of the X session. Alternatively, to allow anyone to connect:

    xhost +
    

    And to revoke permissions:

    xhost -
    

answered Mar 24, 2014 at 22:48

terdon


@WordBearerYI

I am running ‘export DISPLAY=:1’, matching my vncserver display, and then ‘vglrun glxgears -display :1’, but it shows: [VGL] ERROR: Could not open display :0. However, I can run glxgears directly without a problem.
/etc/opt/VirtualGL/vgl_xauth_key does not exist after I ran the configuration with YYYX. All other sanity checks can be executed without an error message.

I am trying to use TurboVNC with VirtualGL (2.6.2_amd64).
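For reference, the intended TurboVNC + VirtualGL flow looks roughly like this. A sketch only, assuming the GPU-attached X server is :0 and TurboVNC is installed in its default location:

/opt/TurboVNC/bin/vncserver :1                   # start the TurboVNC session (the 2D X server)
export DISPLAY=:1                                # applications draw into the VNC session
vglrun -d :0 /opt/VirtualGL/bin/glxspheres64     # VirtualGL redirects the 3D rendering to the GPU-attached X server :0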

@dcommander

Several things:

  1. You are able to run GLXgears directly because TurboVNC has a built-in software-only OpenGL implementation, but if you run an OpenGL application in TurboVNC without VirtualGL, that application will not use the GPU.

  2. Please use GLXspheres instead of GLXgears. It is a much more meaningful benchmark, and it also displays the OpenGL renderer string, which allows you to verify whether the GPU is being used.

  3. The vglserver_config script only supports specific Linux distributions, so I need to know which Linux distribution you are using in order to diagnose the problem.

  4. I can’t imagine any scenario in which the sanity checks mentioned at https://cdn.rawgit.com/VirtualGL/virtualgl/2.6.2/doc/index.html#hd006002001 work but VirtualGL still fails to access the 3D X server. The first line of the sanity check is xauth merge /etc/opt/VirtualGL/vgl_xauth_key, which would fail if /etc/opt/VirtualGL/vgl_xauth_key does not exist. I strongly suspect that you are mis-reporting something.

@WordBearerYI

Thank you for your answer!

2. /opt/VirtualGL/bin/glxspheres64
Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
GLX FB config ID of window: 0xa1 (8/8/8/0)
Visual ID of window: 0x1ea
Context is Indirect
OpenGL Renderer: Gallium 0.4 on llvmpipe (LLVM 3.9, 256 bits)
14.920510 frames/sec — 15.467197 Mpixels/sec
13.778510 frames/sec — 14.283355 Mpixels/sec
12.883871 frames/sec — 13.355936 Mpixels/sec
13.051807 frames/sec — 13.530025 Mpixels/sec
13.273729 frames/sec — 13.760078 Mpixels/sec
11.918804 frames/sec — 12.355509 Mpixels/sec
11.793930 frames/sec — 12.226060 Mpixels/sec
5.661016 frames/sec — 5.868435 Mpixels/sec
9.466578 frames/sec — 9.813433 Mpixels/sec
6.741568 frames/sec — 6.988579 Mpixels/sec
8.812639 frames/sec — 9.135534 Mpixels/sec
9.581499 frames/sec — 9.932565 Mpixels/sec
8.593932 frames/sec — 8.908814 Mpixels/sec
3.955810 frames/sec — 4.100751 Mpixels/sec
8.705185 frames/sec — 9.024143 Mpixels/sec
5.800145 frames/sec — 6.012662 Mpixels/sec
10.387560 frames/sec — 10.768160 Mpixels/sec
8.666726 frames/sec — 8.984275 Mpixels/sec
11.716695 frames/sec — 12.145994 Mpixels/sec
3.986369 frames/sec — 4.132429 Mpixels/sec
6.317764 frames/sec — 6.549247 Mpixels/sec
6.490367 frames/sec — 6.728174 Mpixels/sec
6.139027 frames/sec — 6.363961 Mpixels/sec
6.854301 frames/sec — 7.105443 Mpixels/sec
6.908684 frames/sec — 7.161818 Mpixels/sec
6.352117 frames/sec — 6.584858 Mpixels/sec
6.713118 frames/sec — 6.959087 Mpixels/sec
6.492184 frames/sec — 6.730058 Mpixels/sec
5.253095 frames/sec — 5.445568 Mpixels/sec
6.832524 frames/sec — 7.082868 Mpixels/sec
5.815673 frames/sec — 6.028759 Mpixels/sec
5.950129 frames/sec — 6.168141 Mpixels/sec
6.613416 frames/sec — 6.855732 Mpixels/sec
6.754384 frames/sec — 7.001864 Mpixels/sec
6.214130 frames/sec — 6.441815 Mpixels/sec

  1. I am using a Google Cloud instance; output of uname -a:
    Linux sbsy-vm 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u5 (2019-08-11) x86_64 GNU/Linux

  2. You are correct, xauth merge /etc/opt/VirtualGL/vgl_xauth_key fails because the file does not exist. The other sanity checks run without outputting error messages.

P.S. My task is based on . I need to build it, and it requires:
OpenGL (Desktop / ES / ES2)
-(lin) sudo apt install libgl1-mesa-dev
Glew
-(win) built automatically (assuming git is on your path)
-(deb) sudo apt install libglew-dev
-(mac) sudo port install glew
I have not yet installed mesa-dev, because I am not sure whether it will cause more problems.

@dcommander

You appear to be running Debian 9 («Stretch»), which should work with VirtualGL. (Debian 10 «Buster» has not been tested, to my knowledge. Refer to https://virtualgl.org/Documentation/OSSupport.) However, Google Cloud instances require additional configuration in order to make them work with VirtualGL. I provide that configuration assistance only as a paid service, but I can tell you that the major differences between a GCP VirtualGL configuration and an ordinary VirtualGL configuration are:

  1. The need to use a specific Google-supplied build of the nVidia drivers (https://cloud.google.com/compute/docs/gpus/add-gpus#installing_grid_drivers_for_virtual_workstations)
  2. The need to configure the X server as headless (https://virtualgl.org/Documentation/HeadlessNV)
  3. The need to install and enable a display manager (GDM, LightDM, etc.)

If the advice above is insufficient to make VirtualGL work with your GCP instance, then you will need to pay me to produce a configuration procedure for that specific instance. I do that work for a flat fee. Contact me through e-mail if you are interested in pursuing that: https://virtualgl.org/About/Contact.

@WordBearerYI

Thanks, running sudo nvidia-xconfig -a --allow-empty-initial-configuration --use-display-device=None --virtual=1920x1200 --busid {busid} and then re-running the vglserver_config script did the trick for me.
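For anyone following along, the overall headless sequence is roughly the following. A sketch only: the bus ID, resolution, and display manager are examples to adapt from the HeadlessNV guide:

nvidia-xconfig --query-gpu-info | grep BusID      # find the GPU's PCI bus ID
sudo nvidia-xconfig -a --allow-empty-initial-configuration \
     --use-display-device=None --virtual=1920x1200 --busid PCI:0:4:0
sudo /opt/VirtualGL/bin/vglserver_config          # re-run the VirtualGL server configuration
sudo systemctl restart gdm                        # restart the display manager so :0 comes back headless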

Even though this topic is well-worn, I haven't found a solution on Debian yet. I installed the free driver following the guide at http://wiki.debian.org/ru/AtiHowTo and got ~500 fps; on Lubuntu 10.10 glxgears gave ~1000 fps and I even played CS 1.6 a bit, but now everything lags terribly. How do I raise the FPS?


I don't have that card myself, but here's an excerpt I copied. The person is on Ubuntu with the same card, using the Mesa driver.

Quote:
3792 frames in 5.0 seconds = 758.294 FPS
3530 frames in 5.0 seconds = 703.979 FPS
2406 frames in 5.0 seconds = 480.835 FPS
1797 frames in 5.0 seconds = 359.340 FPS
3604 frames in 5.0 seconds = 720.696 FPS
3865 frames in 5.0 seconds = 772.991 FPS
3456 frames in 5.0 seconds = 691.067 FPS
3747 frames in 5.0 seconds = 749.375 FPS
3826 frames in 5.0 seconds = 765.146 FPS

Not much of a difference.

:pardon:



Hmm… I also have a radeon 9600 card.
How do I check the fps? The glxgears command returned: Error: couldn't open display (null)

:shock:

Granted, right now I'm connected to my home PC remotely over ssh, and the display may have gone to sleep, so to speak…
Also, I tried to look at /etc/X11/xorg.conf but couldn't find the file… brr… am I mixing something up, or is that file just not supposed to exist in Debian???


Quote:
Also, I tried to look at /etc/X11/xorg.conf but couldn't find the file… am I mixing something up, or is that file just not supposed to exist in Debian???

It doesn't exist anywhere unless you create it yourself for an old video card, and that's not just on Debian.


OK, that much is clear.
What about fps? How do I check how much the video card is putting out right now?


Quote from "StrangerX600": OK, that much is clear.
What about fps? How do I check how much the video card is putting out right now?

And what does this print?

glxinfo | grep rendering


Quote from "Udachnik":

Quote from "StrangerX600": OK, that much is clear.
What about fps? How do I check how much the video card is putting out right now?

glxgears


Udachnik

glxinfo | grep rendering

Error: unable to open display

:unknown:

kosst

glxgears

Error: couldn’t open display (null)


apt-get install mesa-utils?


Quote from "kosst": apt-get install mesa-utils?

No, in that case it would have said the command was not found.

StrangerX600, did you run it as a regular user?


serg:~$ sudo apt-get install mesa-utils
Reading package lists... Done
Building dependency tree
Reading state information... Done
mesa-utils is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.

Vasiliy

Yes, I ran it as a regular user


As root:

glxinfo | grep rendering

Error: unable to open display

glxgears

Error: couldn’t open display (null)


[    5.522105] [drm] radeon: 128M of VRAM memory ready
[    5.522108] [drm] radeon: 512M of GTT memory ready.
[    5.522115] [drm] GART: num cpu pages 131072, num gpu pages 131072
[    5.525287] [drm] radeon: 1 quad pipes, 1 Z pipes initialized.
[    5.525299] [drm] radeon: cp idle (0x10000C03)
[    5.539383] [drm] Loading R300 Microcode
[    5.539533] platform radeon_cp.0: firmware: requesting radeon/R300_cp.bin
[    5.547621] radeon_cp: Failed to load firmware "radeon/R300_cp.bin"
[    5.547674] [drm:r100_cp_init] *ERROR* Failed to load firmware!
[    5.547721] radeon 0000:01:00.0: failled initializing CP (-2).
[    5.547767] radeon 0000:01:00.0: Disabling GPU acceleration
[    5.547815] [drm] radeon: cp finalized
[    5.550546] [drm] Default TV standard: PAL
[    5.550550] [drm] 27.000000000 MHz TV ref clk
[    5.550554] [drm] DFP table revision: 4
[    5.550662] [drm] Default TV standard: PAL
[    5.550665] [drm] 27.000000000 MHz TV ref clk
[    5.550713] [drm] Radeon Display Connectors
[    5.550716] [drm] Connector 0:
[    5.550718] [drm]   VGA
[    5.550722] [drm]   DDC: 0x60 0x60 0x60 0x60 0x60 0x60 0x60 0x60
[    5.550724] [drm]   Encoders:
[    5.550726] [drm]     CRT1: INTERNAL_DAC1
[    5.550728] [drm] Connector 1:
[    5.550730] [drm]   DVI-I
[    5.550732] [drm]   HPD1
[    5.550735] [drm]   DDC: 0x64 0x64 0x64 0x64 0x64 0x64 0x64 0x64
[    5.550738] [drm]   Encoders:
[    5.550740] [drm]     CRT2: INTERNAL_DAC2
[    5.550742] [drm]     DFP1: INTERNAL_TMDS1
[    5.550743] [drm] Connector 2:
[    5.550745] [drm]   S-video
[    5.550747] [drm]   Encoders:
[    5.550749] [drm]     TV1: INTERNAL_DAC2
[    5.762358] [drm] fb mappable at 0xE8040000
[    5.762362] [drm] vram apper at 0xE8000000
[    5.762364] [drm] size 3145728
[    5.762367] [drm] fb depth is 24
[    5.762369] [drm]    pitch is 4096
[    5.830011] Console: switching to colour frame buffer device 128×48
[    5.844616] fb0: radeondrmfb frame buffer device
[    5.844619] registered panic notifier
I ran the command dmesg | less
and this is what scared me:
[    5.539533] platform radeon_cp.0: firmware: requesting radeon/R300_cp.bin
[    5.547621] radeon_cp: Failed to load firmware "radeon/R300_cp.bin"
[    5.547674] [drm:r100_cp_init] *ERROR* Failed to load firmware!
[    5.547721] radeon 0000:01:00.0: failled initializing CP (-2).
[    5.547767] radeon 0000:01:00.0: Disabling GPU acceleration

:shock:


Is the firmware-linux-nonfree package installed? It contains the firmware for all the Radeons. Without that package it's quite logical that 3D won't work. To install it, the non-free section has to be enabled.

Quote:
[ 5.547621] radeon_cp: Failed to load firmware "radeon/R300_cp.bin"
[ 5.547674] [drm:r100_cp_init] *ERROR* Failed to load firmware!

Right, the firmware is missing.
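A sketch of the fix (the sources.list line and release name are illustrative; adjust them to your system):

# add "contrib non-free" to the existing line in /etc/apt/sources.list, e.g.:
#   deb http://ftp.debian.org/debian/ squeeze main contrib non-free
sudo apt-get update
sudo apt-get install firmware-linux-nonfree   # ships radeon/R300_cp.bin among other firmware blobs
sudo reboot                                   # so the radeon module can load the firmware at boot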




I am sorry to report this well-known error message again.

Nothing works for me. I am running macOS Catalina, if that is important.

I installed xdotool with brew on my Mac and tried to run

xdotool getmouselocation

The error message that follows is

Error: Can't open display: (null)
Failed creating new xdo instance

I searched for a solution for a long time and found plenty of answers that said: run export DISPLAY=:0 and everything will be fine. But this didn’t work for me.

PS: For better understanding: what does DISPLAY mean exactly? Is it the monitor of my computer?


asked Dec 1, 2019 at 21:17

mrbela


According to an official notice by Apple:

X11 is no longer included with Mac, but X11 server and client libraries are available from the XQuartz project.

Why does X11 matter in this case?

xdotool — command-line X11 automation tool.

So, alongside setting export DISPLAY=:0, install XQuartz.
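A minimal sketch of that setup on macOS (the cask name uses current Homebrew syntax, and the DISPLAY value shown is only an example; XQuartz sets DISPLAY itself inside its own terminals):

    brew install --cask xquartz        # install the XQuartz X11 server
    open -a XQuartz                    # start it once so a display socket exists
    echo "$DISPLAY"                    # e.g. /private/tmp/com.apple.launchd.xxxx/org.xquartz:0
    xdotool getmouselocation           # now has a display to connect to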

What does DISPLAY mean exactly?

According to the X manual (concrete example values follow the excerpt):

From the user’s perspective, every X server has a display name of the form:

               hostname:displaynumber.screennumber

This information is used by the application to determine how it should
connect to the server and which screen it should use by default (on
displays with multiple monitors):

  1. hostname
    The hostname specifies the name of the machine to which the display is physically connected. If the hostname is not given, the
    most efficient way of communicating to a server on the same machine
    will be used.

  2. displaynumber
    The phrase «display» is usually used to refer to collection of monitors that share a common keyboard and pointer (mouse, tablet,
    etc.). Most workstations tend to only have one keyboard, and
    therefore, only one display. Larger, multi-user systems, however,
    frequently have several displays so that more than one person can be
    doing graphics work at once. To avoid confusion, each display on a
    machine is assigned a display number (beginning at 0) when the X
    server for that display is started. The display number must always be
    given in a display name.

  3. screennumber
    Some displays share a single keyboard and pointer among two or more monitors. Since each monitor has its own set of windows, each
    screen is assigned a screen number (beginning at 0) when the X server
    for that display is started. If the screen number is not given, screen
    0 will be used.
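Some concrete examples of display names (values are illustrative):

    DISPLAY=:0                 # first display on the local machine, screen 0 implied
    DISPLAY=:0.1               # second screen of that same display
    DISPLAY=localhost:10.0     # typical value on the remote side of an "ssh -X" session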

There is a simpler description, found here:

A display consists (simplified) of:

  • a keyboard
  • a mouse
  • a screen

i.e. when you connect over ssh you are using a different set of these three.


answered Dec 6, 2019 at 5:29

Devidas

mrl4n
l33t

Joined: 08 Apr 2009
Posts: 681

Posted: Sat Oct 01, 2016 4:48 pm    Post subject: gnome/gdm hangs on start after upgrade [solved]

Hi, today I emerged my system (about 30 upgrades), but now I can’t start or log in to GNOME.

I ran journalctl -rb and I see a lot of errors, but many of them don’t block login.

Here is the result.

And here is my emerge --info

Before making any changes I would like, if possible, some suggestions.

Thanks in advance

Last edited by mrl4n on Thu Oct 06, 2016 7:37 am; edited 1 time in total


russK
l33t

Joined: 27 Jun 2006
Posts: 656

Posted: Sat Oct 01, 2016 6:27 pm    Post subject:

mrl4n,

It may be a long shot, but with nvidia, if it gets out of sync with the running kernel it may not load.

Maybe this would help:

Code:
# emerge -1 nvidia-drivers

# modprobe -r nvidia

# systemctl restart gdm


mrl4n
l33t

Joined: 08 Apr 2009
Posts: 681

Posted: Sun Oct 02, 2016 7:18 am    Post subject:

Probably there is a problem between xorg (1.18) and the video drivers; when I try to run modprobe -r nvidia the system says

Code:
modprobe: FATAL: Module nvidia_modeset is in use.

modprobe: FATAL: Error running remove command for nvidia
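(The nvidia modules generally have to be removed in dependency order, and only while nothing such as a running gdm/X server is still using them. A rough sketch, with module names assuming a recent nvidia-drivers build:)

Code:
# systemctl stop gdm
# modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia
# modprobe nvidia
# systemctl start gdm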

I’m checking xorg.conf and kernel

Edit: I’ve emerged (again) nvidia-drivers, gdm and xorg; now at startup I get “Oh no, something has gone wrong”. I set automatic login for my user, and if I restart gdm I can log in to a desktop that is half GNOME and half plain X. Now I really don’t understand what’s happening.

Here an example


mrl4n
l33t

Joined: 08 Apr 2009
Posts: 681

Posted: Tue Oct 04, 2016 10:17 am    Post subject:

I’m going crazy…

I’ve checked the whole system, emerged (several times) xorg 1.18 (server and drivers), gdm 3.20 and nvidia-drivers 361.28, and in the log I see a problem with the glx module; the issue is probably with rendering.

If I launch

Code:
# glxgears

Error: couldn’t open display (null)

Please help me :cry: :cry: :cry:


mrl4n
l33t

Joined: 08 Apr 2009
Posts: 681

Posted: Thu Oct 06, 2016 7:35 am    Post subject:

A simple solution for a big and strange problem…

Code:
emerge --update --newuse --deep @world

then

Code:
emerge --depclean

One thing I don’t understand: at startup I see the message “FAILED to load kernel module”, but everything works fine.

How are systemd and portage working together now? 8O 8O


