Error: Requested operation is not valid: domain is not running

Description - Richard W.M. Jones - 2012-08-31 09:06:33 UTC

Description of problem:

When trying to shut down a transient domain which *is* running I get:

*stdin*:31: libguestfs: error: could not destroy libvirt domain: Requested operation is not valid: domain is not running [code=55 domain=10]

This used to work in libvirt-0.10.0-0rc0.2.fc18.x86_64 but
seems to have broken in Rawhide (libvirt-0.10.0-1.fc19.x86_64).

Version-Release number of selected component (if applicable):

libvirt-0.10.0-1.fc19.x86_64

How reproducible:

At least once.

Steps to Reproduce:
1. Build libguestfs in Rawhide.
 
Actual results:

http://kojipkgs.fedoraproject.org//work/tasks/8635/4438635/build.log


Comment 2 - Osier Yang - 2012-08-31 09:58:01 UTC

I couldn't reproduce the problem on top of libvirt git.


Comment 3 - Osier Yang - 2012-08-31 09:58:49 UTC

(In reply to comment #2)
> I couldn't reproduce the problem on top of libvirt git.

No libguestfs building surely, :-) Just trying to destroy a transient domain.


Comment 4 - Richard W.M. Jones - 2012-08-31 11:08:25 UTC

I get a similar but different error when running this
on my local machine:

libguestfs: recv_from_daemon: 40 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 01 1a | 00 00 00 01 | 00 12 34 04 | ...
libguestfs: error: could not destroy libvirt domain: End of file while reading data: Input/output error [code=38 domain=7]
libguestfs-test-tool: shutdown failed
libguestfs: closing guestfs handle 0x665f80 (state 0)


Comment 7 - Richard W.M. Jones - 2012-08-31 11:42:14 UTC

Looking at it closer, I think what's happening is that
qemu segfaults when libvirt sends it a signal to shut down.
(That's a bug in qemu obviously).  But then libvirt ought
to be able to distinguish this case -- we really care if
qemu segfaults, but it could indicate data integrity issues.

I will try and catch the qemu segfault if I can.


Comment 8 - Richard W.M. Jones - 2012-08-31 11:42:55 UTC

(In reply to comment #7)
> qemu segfaults, but it could indicate data integrity issues.

s/but/because/


Comment 9 - Osier Yang - 2012-08-31 12:13:55 UTC

(In reply to comment #5)
> Created attachment 608486 [details]
> libvirt.log
> 
> Actually when running locally, I get both errors.
> 
> Attached is the libvirt log requested.
> 
> (In reply to comment #3)
> > (In reply to comment #2)
> > > I couldn't reproduce the problem on top of libvirt git.
> > 
> > No libguestfs building surely, :-) Just trying to destroy a transient domain.
> 
> What did you do to try to reproduce this?  libguestfs is a big
> C program and it creates and destroys the transient guest
> entirely through the API:

I simply used virsh to destroy a transient domain.

> https://github.com/libguestfs/libguestfs/blob/
> 87cb1549761c9441b0fa7ee9b6a85b8eeb164c5c/src/launch-libvirt.c
> I'm pretty sure it's not libguestfs at fault here since
> (a) it works fine with other libvirt and (b) its use of the
> API is very simple.


Comment 10 - Richard W.M. Jones - 2012-08-31 12:42:53 UTC

So I've verified that what is happening is that
qemu is segfaulting when libvirtd sends it a
signal (new bug 853408).

But definitely libvirt could improve the error
message here.  It's a good thing that libvirt
indicates some sort of error, because we really
want to know when this fails, but it should say
something like 'qemu just segfaulted'.


Comment 11 - Daniel Berrangé - 2012-09-04 11:57:44 UTC

From the POV of the virDomainDestroy() command, whether QEMU segfaults or shuts down cleanly is academic, since this command makes no guarantees about how QEMU is stopped, and indeed will even send SIGKILL to QEMU which arguably has similar effect to SEGV. So having QEMU SEGV after sending it a SIGTERM should be considered 'Success' for this function. As such we should not be returning the "Operation is not valid" error code.


Comment 12 - Richard W.M. Jones - 2012-09-04 12:20:00 UTC

13:07 <@rwmjones> danpb: what should I be using if I care about whether qemu shuts down without segfaulting?
13:09 < danpb> oh, pass the GRACEFUL flag to virDomainDestroy
13:09 < danpb> that means we'll only ever ask qemu to do a clean shutdown, and never try to SIGKILL it
13:10 < danpb> if we pass that flag, then you are right that we should report the SEGV as an error condition for virDomainDestory


Comment 13 - Richard W.M. Jones - 2012-09-04 15:53:21 UTC

(In reply to comment #12)
> 13:07 <@rwmjones> danpb: what should I be using if I care about whether qemu
> shuts down without segfaulting?
> 13:09 < danpb> oh, pass the GRACEFUL flag to virDomainDestroy
> 13:09 < danpb> that means we'll only ever ask qemu to do a clean shutdown,
> and never try to SIGKILL it
> 13:10 < danpb> if we pass that flag, then you are right that we should
> report the SEGV as an error condition for virDomainDestory

I have fixed this in libguestfs 1.19.39.
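
For reference, the same graceful-only behaviour is available from the command line as well as the API: virsh destroy accepts a --graceful flag (mapping to VIR_DOMAIN_DESTROY_GRACEFUL), which never escalates to SIGKILL, so a QEMU crash during shutdown surfaces as an error. A minimal sketch, with a hypothetical guest name:

# virsh destroy --graceful guest1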

This post lists known KVM issues and explains how to troubleshoot and resolve them.

Error: Log files

There are various trace/log files generated by KVM. The files are as follows:

  • /var/log/libvirt/qemu — log file for every guest VM (domain_name.log)
  • $HOME/.virtinst/virt-install.log — virt-install tool log file
  • $HOME/.virt-manager/virt-manager.log — virt-manager log
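
When chasing a guest problem, it is usually enough to follow the guest's QEMU log while reproducing the issue. A minimal sketch (the domain name guest1 is a placeholder):

# tail -f /var/log/libvirt/qemu/guest1.log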

Error: Create KVM Guest VMs with ISO

There are several ways to create a guest on a KVM machine: the virt-install tool, the virt-manager tool, and the virsh tool. The following virt-install example creates a KVM guest running CentOS/RHEL 7, using the storage pool repository /kvm_repos/kvm_repo1/ and a MACVTap network bridge over the public NIC btbond1.

The basic command:

virt-install --name=RHEL7-1 \
  --ram=16384 \
  --vcpus=4 \
  --os-type=linux \
  --os-variant=rhel7 \
  --accelerate \
  --disk /kvm_repos/kvm_repo1/OL-1_boot.img,device=disk,size=40,sparse=yes,cache=none,format=qcow2,bus=virtio \
  --network type=direct,source=btbond1,model=virtio \
  --vnc \
  --noautoconsole \
  --cdrom=/iso/RHEL7-U3-Server-x86_64-dvd.iso

Parameter details:

  • --name: name of the guest instance.
  • --ram: memory to allocate for the guest instance, in megabytes.
  • --vcpus: number of vCPUs to configure for the guest.
  • --os-type: the type of OS being installed.
  • --os-variant: the OS variant being installed.
  • --accelerate: use the KVM kernel acceleration capabilities.
  • --disk: the path to the disk image, followed by comma-delimited options; device is the type of storage, and bus is the interface (ide, scsi, usb, virtio); virtio is the fastest.
  • --network: the network configuration; in this case we connect to a MACVTap bridge over btbond1, using the virtio drivers, which perform much better.
  • --vnc: configures the graphics device to use VNC, allowing you to use virt-viewer or virt-manager to see the desktop as if you were at the monitor of a physical machine.
  • --noautoconsole: tells the installer NOT to automatically open virt-viewer (if installed) to view the console and complete the installation; this is helpful when working on a remote system over SSH without a graphical environment.
  • --cdrom: the ISO image to boot from.
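
Once the installation starts, you can verify the guest from the host. A quick check, reusing the RHEL7-1 name from the example above:

# virsh list --all
# virsh vncdisplay RHEL7-1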

virt-manager, also known as Virtual Machine Manager, is a graphical tool for creating and managing guest virtual machines.

1. Launch virt-manager.
2. Create a new virtual machine: click the "Create a new virtual machine" button to open the new VM wizard.
3. Specify the installation type: "Local install media (ISO image or CDROM)".
4. Locate the ISO image, and configure the OS type and version.
5. Configure CPU and memory.
6. Configure the virtual storage.
7. Final configuration: verify the settings of the virtual machine and click "Begin Installation".

Error: (domain_definition):9: StartTag: invalid element name

The following error message appears when trying to modify a domain using virsh.

# virsh edit RHEL6.10
error: (domain_definition):9: StartTag: invalid element name
<bootmenu enable='yes'/>  <

This error message shows that the parser expects a new element name after the < symbol on line 9 of the guest's XML file. Open the XML file and locate the text on line 9; this snippet contains a stray extra < symbol. Remove it and save the changes.
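
For this and the following XML errors, it can help to validate the file outside of virsh; a sketch, assuming the definition lives in the usual location (path is an example):

# xmllint --noout /etc/libvirt/qemu/RHEL6.10.xml

Parse errors are reported with line numbers, which makes the offending tag easy to find.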

Error: (domain_definition):20: Unescaped '<' not allowed in attributes values

The following error message appears when trying to modify a domain using virsh.

virsh edit RHEL6.10
error: (domain_definition):20: Unescaped '<' not allowed in attributes values <timer name='rtc' tickpolicy='catchup'/>
----------^

The guest's XML file contains an unterminated attribute value; in this case, timer name='rtc' is missing its second quotation mark, so the parser treats the following < as part of the attribute value. Attribute values must be opened and closed with quotation marks.

Error: (domain_definition):22: Opening and ending tag mismatch: domain line 1

The following error message appears when trying to modify a domain using virsh.

virsh edit RHEL6.10
error: (domain_definition):22: Opening and ending tag mismatch: domain line 1 and clock</clock>
----------^

The message following the last colon, clock line 22 and domain, reveals that line 22 contains a mismatched closing tag: the parser found </clock> where it expected the <domain> element opened on line 1 to be closed. To identify the problematic tag, read the error message for the context of the file, correct the XML by adding or fixing the missing tag, and save the changes.

Error: (domain_definition):1: Specification mandate value for attribute type [domain ty pe='kvm']

The following error message appears when trying to modify a domain using virsh.

virsh edit RHEL7
error: (domain_definition):1: Specification mandate value for attribute ty<domain ty pe='kvm'>
-----------^

This class of XML error is caused by a simple typo. The error message highlights the mistake with a pointer; in this case, there is an extra white space within the word type. Remove the stray space, correct the XML, and save the changes.

Error: failed to connect to the hypervisor using URI or Failed to connect to the hypervisor or No connection driver available

Several errors can occur while connecting to the server using a URI. For example, when running a command, the following error (or similar) appears:

$ virsh -c [uri] list
error: no connection driver available for No connection for URI [uri]
error: failed to connect to the hypervisor

This can happen when libvirt was compiled from source without the relevant driver. Verify how libvirt was built, or check whether third-party compiled RPMs are installed.

Error: Cannot read CA certificate '/etc/pki/CA/cacert.pem': No such file or directory or Failed to connect to the hypervisor

The error:

$ virsh -c [uri] list
error: Cannot read CA certificate '/etc/pki/CA/cacert.pem': No such file or directory
error: failed to connect to the hypervisor

One possible cause: the specified URI is wrong, missing one '/' (for example, qemu://system instead of qemu:///system).

When specifying qemu://system or qemu://session as a connection URI, virsh attempts to connect to a host named system or session, respectively. This is because virsh treats the text after the second forward slash as the host.

Use three forward slashes to connect to the local host. For example, specifying qemu:///system instructs virsh to connect to the system instance of libvirtd on the local host. When a host name is specified, the QEMU transport defaults to TLS, so certificates are required; this is what produces the CA certificate error above.

Another possible cause: the URI is correct (for example, qemu[+tls]://server/system) but the certificates are not set up properly on your machine.
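
To illustrate the slash rule, compare the following invocations (host.example.com is a placeholder):

$ virsh -c qemu:///system list --all
$ virsh -c qemu+ssh://root@host.example.com/system list --all

The first connects locally with three slashes and no host name; the second goes over SSH, which avoids the TLS certificate requirement entirely.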

$ virsh -c qemu:///system list
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied
error: failed to connect to the hypervisor

To connect as a non-root user, configure the following options in /etc/libvirt/libvirtd.conf accordingly:

unix_sock_group = [group]
unix_sock_ro_perms = [perms]
unix_sock_rw_perms = [perms]
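
A typical non-root setup looks like the following; the group name varies by distribution, so treat these values as an example only:

unix_sock_group = "libvirt"
unix_sock_ro_perms = "0777"
unix_sock_rw_perms = "0770"

Add your user to that group and restart libvirtd for the change to take effect.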

Error: Unable to connect to server at 'host:16509': Connection refused

The error:

virsh -c qemu+tcp://HOSTNAME/system
error: failed to connect to the hypervisor
error: unable to connect to server at 'HOSTNAME:16509': Connection refused

The libvirt daemon is not listening on TCP ports even after changing the configuration in /etc/libvirt/libvirtd.conf; the listen_* settings only take effect when the daemon is started with --listen:

# cat /etc/libvirt/libvirtd.conf|grep -i listen_
#listen_tls = 0
#listen_tcp = 1
#listen_addr = "192.168.0.1"

Start the daemon with the --listen option. To do this, modify the /etc/sysconfig/libvirtd file and uncomment the following line:

#LIBVIRTD_ARGS="--listen"

Then, restart the libvirtd service with this command:

# /bin/systemctl restart libvirtd.service
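
After the restart, confirm that the daemon is actually bound to the TCP port (16509 is the libvirt default):

# ss -tlnp | grep 16509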

Error: libvirtd failed to start

The libvirt daemon does not start automatically at boot time, and starting it manually also fails; there is no 'more info' about the errors in /var/log/messages.

# systemctl start libvirtd.service

* Starting libvirtd ...
/usr/sbin/libvirtd: error: Unable to initialize network sockets. Check /var/log/messages or run without --daemon for more info.
* start-stop-daemon: failed to start '/usr/sbin/libvirtd' [ !! ]
* ERROR: libvirtd failed to start
# cat /etc/libvirt/libvirtd.conf | grep -i log
# Logging controls
# Logging level: 4 errors, 3 warnings, 2 information, 1 debug
# basically 1 will log everything possible
# WARNING: The "log_filters" setting is recommended instead.
#log_level = 3
# Logging outputs:
# An output is one of the places to save logging information
# level:syslog:name
#   use syslog for the output and use the given name as the ident
#   output to journald logging system
# e.g. to log all warnings and errors to syslog under the libvirtd ident:
#log_outputs="3:syslog:libvirtd"

Enable logging of warnings and errors by uncommenting the line below in /etc/libvirt/libvirtd.conf: open the file in a text editor, remove the hash (#) symbol from the beginning of the following line, and save the change:

log_outputs="3:syslog:libvirtd"

After diagnosing the problem, comment this line out again in /etc/libvirt/libvirtd.conf to avoid excessive logging on the server. Restart libvirtd to capture the additional errors and debug accordingly.
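
On systemd-based hosts, the same messages can also be read back without digging through syslog files, for example:

# journalctl -u libvirtd.service -b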

Error: Bridge networking interface does not show up

For example, suppose you want to use a custom bridge called br0. However, when you try to set up a network and make guests use this specific interface, it does not show up in the dropdown menu in the Virtual Machine Manager.

Manually edit the domain's configuration file. By default, KVM stores these files in one of two locations, /etc/kvm/vm or /etc/libvirt/qemu. Open the relevant file and manually change the bridged adapter details under the <interface> element, as shown below. Save and close the file, then start the virtual machine.

<interface type='bridge'>
  <source bridge='br0'/>
  <mac address='XXX'/>
</interface>
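
Before editing the domain XML, confirm that the bridge actually exists and is up on the host, since a missing bridge will also keep it out of the dropdown; a quick check (the second command assumes bridge-utils is installed):

# ip link show br0
# brctl show br0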

Error: Biosdevname & no network

Biosdevname is a utility that tries to assign BIOS-given names to devices, preserving commonality and simplifying the logic of hardware administration, especially with network devices. There are several ways you can work around the problem.

One: some versions of the utility can detect that they are being invoked inside a virtual environment and will exit without making any changes. Two: pass a kernel argument in the GRUB menu; biosdevname=0 disables the utility from running (see the sketch below for making this persistent).
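
To make the kernel-argument workaround survive reboots, add it to the GRUB defaults; a sketch for RHEL/Fedora-style systems (Debian/Ubuntu use update-grub instead, and paths may differ):

# vi /etc/default/grub      (append biosdevname=0 to GRUB_CMDLINE_LINUX)
# grub2-mkconfig -o /boot/grub2/grub.cfg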

The third option is to adjust the udev rules used to assign names to network cards. For example, the rule below is designed for a machine with a single network adapter, so you will need more dynamic logic for multiple cards; or, as the commented text says, you will need to create a separate line for each rule.

You can make a manual change to get the classic assignment:

# vi /etc/udev/rules.d/70-persistent-net.rules

Replace the NAME string "eth_biosname" with "ethX" or whatever you need:

# PCI device ...
SUBSYSTEM=="net", ACTION=="add", ATTR{type}=="1", KERNEL=="eth*", NAME="eth_biosname"
# PCI device ...
SUBSYSTEM=="net", ACTION=="add", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

You will get different rules depending on the virtual hardware you use in your virtual machines. For example, you could opt for Realtek, e1000 or virtio virtual hardware, resulting in other strings. Pay attention and make sure you match the solution to your specific environment.

Error: Domain already exists

The error:

virsh define machine.xml
error: Failed to define domain from machine.xml
error: operation failed: domain 'machine' already exists with uuid XXX

This happens if you try to define a new domain and it already exists, but you have trouble finding its configuration file or declaration: virsh list does not show it, and virt-manager does not show it anywhere either.

You will have to find the configuration file and delete it, then restart the libvirtd service.

# updatedb
# locate ol6.10
/etc/libvirt/qemu/ol6.10.xml
/root/ol6.10.xml
/var/lib/libvirt/images/ol6.10.qcow2
/var/lib/libvirt/qemu/domain-1-ol6.10
/var/lib/libvirt/qemu/channel/target/domain-1-ol6.10
/var/lib/libvirt/qemu/domain-1-ol6.10/master-key.aes
/var/lib/libvirt/qemu/domain-1-ol6.10/monitor.sock
/var/log/libvirt/qemu/ol6.10.log
# rm [full path to machine.xml]
# /etc/init.d/libvirtd restart
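
If libvirt still knows about the domain, letting it remove its own definition is cleaner than deleting files by hand; assuming the name from the error above:

# virsh undefine machine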

Error: Failed to create domain from machine.xml or Internal error Unable to find cgroup for machine

This problem may manifest after restarting libvirtd, as in the example, or in completely unrelated situations. A full error message may look like this:

virsh create machine.xml
error: Failed to create domain from machine.xml
error: internal error Unable to find cgroup for machine

The reason for this is usually related to systemd. In most cases it is a delicate race between cgroups and libvirtd: the libvirtd service comes up before the cgroups are ready, one of the cgroups has been deleted, or it never existed in the first place. You can resolve the problem by editing the libvirtd QEMU configuration, /etc/libvirt/qemu.conf.

Inside this file, edit the cgroup_controllers directive so that it does not list any cgroups; libvirtd will then run without them.

cgroup_controllers = [ ]

After this, restart libvirtd again. Alternatively, manually create the necessary cgroups and assign the libvirtd process to the relevant subsystems.
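
To see which cgroup controllers the kernel currently exposes before deciding on the workaround, a quick check:

# grep cgroup /proc/mounts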

Error: Virtual machine vanishes from VMM on halt/reboot

This could be a very simple issue, in fact. After you halt or reboot the virtual machine, its console closes and the machine vanishes from the Virtual Machine Manager. Your configurations are in place, but this is a major inconvenience, as you have to intervene in the machine management cycle.

You need to look for the on_reboot clause in the relevant XML file for your virtual machine and make sure the action is set to restart rather than destroy. There you go, that’s all there is to it.

<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
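
You can confirm the configured lifecycle actions without opening an editor; assuming a domain named RHEL7:

# virsh dumpxml RHEL7 | grep 'on_'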

Error: No reboot action; function is not supported

This is closely related to the previous tip. If you specify reboot instead of restart, you will learn that this is an invalid action that KVM cannot execute.

libvirtError: this function is not supported by the hypervisor: virDomainReboot

If you use restart, all will be well.

Error: Boot loader error after installation

Sometimes, you may see a GRUB error, most likely number 15, on first reboot following an installation from an external media source. This can happen if you leave the CD/DVD image attached to the virtual machine and selected as the first boot device in your XML file. The problem is similar to this VirtualBox bug, affecting some of the Linux distributions out there. For example:

root (hd0,1)
Filesystem type is ext2fs, partition type 0x83
kernel /boot/vmlinuz

Error 15: File not found

Press any key to continue...

You can resolve this by unmounting the ISO image and rebooting the guest; this time, it should work well. Note that the issue will not manifest if you set your hard disk as the first bootable device: when KVM cannot find a valid partition table on the disk (which is the case if you are only installing now), it automatically skips to the next available boot source, most likely PXE or CD/DVD.
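
Unmounting the ISO does not require editing the XML; virsh can eject the media directly. A sketch, where the domain and target device names are assumptions to adapt:

# virsh change-media RHEL7-1 hda --eject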

Error: Server hung or non-recoverable errors, how to collect troubleshooting data

You can inject an NMI (non-maskable interrupt) into a guest virtual machine. This is used when response time is critical, such as during non-recoverable hardware errors. In addition, virsh inject-nmi is useful for triggering a crashdump in Windows guests.

The following example sends an NMI to the RHEL7 virtual machine:

# virsh inject-nmi RHEL7

Error: Starting a virtual machine fails with "Failed to start domain RHEL7: Cannot access storage file '/var/lib/libvirt/images/test_vol2.qcow2': No such file or directory"

A disk attached to the virtual machine was deleted from the KVM server repository without first being detached from the VM, and the VM was then rebooted. Verify the VM's XML file and detach the stale disk; note that --live fails because the domain is not running:

# virsh detach-disk --domain ORACLE_LINUX7 --persistent --live --target vdb
error: Failed to detach disk
error: Requested operation is not valid: domain is not running

# virsh detach-disk --domain ORACLE_LINUX7 --persistent --target vdb
Disk detached successfully
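
To confirm the detach took effect, list the block devices libvirt still associates with the domain; vdb should no longer appear:

# virsh domblklist ORACLE_LINUX7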

Error: Blue screen at boot

Since Windows 10 1803, there is a problem when using "host-passthrough" as the CPU model: the machine cannot boot and either boot-loops or shows a blue screen. You can work around this with:

# echo 1 > /sys/module/kvm/parameters/ignore_msrs

To make this permanent, create a modprobe file, for example /etc/modprobe.d/kvm.conf, containing:

options kvm ignore_msrs=1

To prevent dmesg from filling up with "ignored rdmsr" messages, you can additionally add:

options kvm report_ignored_msrs=0
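
Putting the two options together, the modprobe file (name and path as assumed above) and a check of the live value; the parameter should read Y after the echo above, or after a reboot with the file in place:

# cat /etc/modprobe.d/kvm.conf
options kvm ignore_msrs=1
options kvm report_ignored_msrs=0
# cat /sys/module/kvm/parameters/ignore_msrs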

If you are going to install Windows from a CD/DVD (instead of an ISO file), make sure that the user you run virt-manager as has read access to the optical drive device on your system. Otherwise, virt-manager may not let you select the drive as an install media location.


Error: During the installation of Windows, the error “A disk read error occurred”

During the installation of Windows, the error “A disk read error occurred” shows up during boot time, not allowing you to complete the installation. The problem here is that for whatever reason, virt-manager by default creates disk images using the raw format, and the Windows installer does not like that format. The solution is to convert your disk image to qcow2 format.

To convert your existing image:

# cd /var/lib/libvirt/images/  # or whatever other location you keep your images at
# qemu-img convert -O qcow2 xp.img xp-qcow2.img

After converting the image to qcow2 format, attach the new image to the virtual machine, then start the installation process again and re-format the disk during setup.
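
To confirm the conversion before reattaching the disk, the format line of the following output should now report qcow2:

# qemu-img info xp-qcow2.img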

I'm not sure whether these steps are valid in practice, but I performed the following steps on the system and got an error.

DevStack branch=stable/rocky.

1. ./unstack.sh && ./clean.sh && ./stack.sh
2. source openrc admin admin
3. openstack flavor create --ram 21 --disk 0 --vcpus 1 custom
4. openstack server create --flavor custom --image cirros-0.3.5-x86_64-disk test
5. openstack server show test

+-------------------------------------+------------------------------------------------------------------+
| Field | Value |
+-------------------------------------+------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | wallacec-ubuntu |
| OS-EXT-SRV-ATTR:hypervisor_hostname | wallacec-ubuntu |
| OS-EXT-SRV-ATTR:instance_name | instance-00000004 |
| OS-EXT-STS:power_state | Paused |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2019-01-24T23:13:07.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | public=2001:db8::d, 192.168.1.228 |
| config_drive | |
| created | 2019-01-24T23:13:01Z |
| flavor | custom (ac9f385c-efaa-4b93-acec-8184beb53ca3) |
| hostId | d99cb6d42c024008ba7f954f95a59d73313aebf95098e30ccb7f10f0 |
| id | e7825018-5fd7-4377-a6c1-cf36c269d849 |
| image | cirros-0.3.5-x86_64-disk (3739ba2a-34ab-4bcd-8fd3-70a186131e54) |
| key_name | None |
| name | test |
| progress | 0 |
| project_id | 6a0880f1c0b946acb71d61af9a92900b |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2019-01-24T23:13:08Z |
| user_id | 7c9be80e945f4333ad34d11f64643f51 |
| volumes_attached | |
+-------------------------------------+------------------------------------------------------------------+

6. openstack server rebuild test
7. openstack server list

+--------------------------------------+------+--------+-----------------------------------+--------------------------+--------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+------+--------+-----------------------------------+--------------------------+--------+
| e7825018-5fd7-4377-a6c1-cf36c269d849 | test | ERROR | public=2001:db8::d, 192.168.1.228 | cirros-0.3.5-x86_64-disk | custom |
+--------------------------------------+------+--------+-----------------------------------+--------------------------+--------+

Logs:

Jan 24 21:15:03 wallacec-ubuntu nova-compute[24652]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
Jan 24 21:15:03 wallacec-ubuntu nova-compute[24652]: ERROR oslo_messaging.rpc.server     result = proxy_call(self._autowrap, f, *args, **kwargs)
Jan 24 21:15:03 wallacec-ubuntu nova-compute[24652]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call
Jan 24 21:15:03 wallacec-ubuntu nova-compute[24652]: ERROR oslo_messaging.rpc.server     rv = execute(f, *args, **kwargs)
Jan 24 21:15:03 wallacec-ubuntu nova-compute[24652]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
Jan 24 21:15:03 wallacec-ubuntu nova-compute[24652]: ERROR oslo_messaging.rpc.server     six.reraise(c, e, tb)
Jan 24 21:15:03 wallacec-ubuntu nova-compute[24652]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
Jan 24 21:15:03 wallacec-ubuntu nova-compute[24652]: ERROR oslo_messaging.rpc.server     rv = meth(*args, **kwargs)
Jan 24 21:15:03 wallacec-ubuntu nova-compute[24652]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/libvirt.py", line 2454, in shutdown
Jan 24 21:15:03 wallacec-ubuntu nova-compute[24652]: ERROR oslo_messaging.rpc.server     if ret == -1: raise libvirtError('virDomainShutdown() failed', dom=self)
Jan 24 21:15:03 wallacec-ubuntu nova-compute[24652]: ERROR oslo_messaging.rpc.server libvirtError: Requested operation is not valid: domain is not running

I git cloned the latest teuthology, ran ./bootstrap and ran the following command:

./virtualenv/bin/teuthology --lock --machine-type vps --os-type ubuntu --os-version precise ~/tests/test94.yaml

the output was:

./virtualenv/bin/teuthology --lock --machine-type vps --os-type ubuntu --os-version precise ~/tests/test94.yaml
2014-09-09 16:06:03,973.973 WARNING:teuthology.report:No job_id found; not reporting results
2014-09-09 16:06:03,976.976 INFO:teuthology.run:Tasks not found; will attempt to fetch
2014-09-09 16:06:03,976.976 INFO:teuthology.repo_utils:Fetching from upstream into /home/wusui/src/ceph-qa-suite_master
2014-09-09 16:06:04,808.808 INFO:teuthology.repo_utils:Resetting repo at /home/wusui/src/ceph-qa-suite_master to branch master
2014-09-09 16:06:04,823.823 INFO:teuthology.run_tasks:Running task internal.lock_machines...
2014-09-09 16:06:04,824.824 INFO:teuthology.task.internal:Locking machines...
2014-09-09 16:06:06,936.936 INFO:teuthology.provision:Downburst completed on ubuntu@vpm144.front.sepia.ceph.com: INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): ceph.com

2014-09-09 16:06:07,880.880 INFO:teuthology.provision:Downburst completed on ubuntu@vpm115.front.sepia.ceph.com: INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): ceph.com
downburst: Virtual machine with this name exists already: vpm115

2014-09-09 16:06:07,880.880 INFO:teuthology.provision:Guest files exist. Re-creating guest: ubuntu@vpm115.front.sepia.ceph.com
2014-09-09 16:06:08,443.443 ERROR:teuthology.provision:libvir: QEMU error : Requested operation is not valid: domain is not running
libvir: QEMU error : Requested operation is not valid: cannot undefine transient domain
Traceback (most recent call last):
  File "/home/wusui/src/downburst/virtualenv/bin/downburst", line 9, in <module>
    load_entry_point('downburst==0.0.1', 'console_scripts', 'downburst')()
  File "/home/wusui/src/downburst/downburst/cli.py", line 59, in main
    return args.func(args)
  File "/home/wusui/src/downburst/downburst/destroy.py", line 70, in destroy
    | libvirt.VIR_DOMAIN_UNDEFINE_SNAPSHOTS_METADATA,
  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1386, in undefineFlags
    if ret == -1: raise libvirtError ('virDomainUndefineFlags() failed', dom=self)
libvirt.libvirtError: Requested operation is not valid: cannot undefine transient domain

2014-09-09 16:06:09,216.216 INFO:teuthology.provision:Downburst completed on ubuntu@vpm115.front.sepia.ceph.com: INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): ceph.com
downburst: Virtual machine with this name exists already: vpm115

2014-09-09 16:06:09,216.216 INFO:teuthology.provision:Guest files exist. Re-creating guest: ubuntu@vpm115.front.sepia.ceph.com
2014-09-09 16:06:09,767.767 ERROR:teuthology.provision:libvir: QEMU error : Requested operation is not valid: domain is not running
libvir: QEMU error : Requested operation is not valid: cannot undefine transient domain
Traceback (most recent call last):
  File "/home/wusui/src/downburst/virtualenv/bin/downburst", line 9, in <module>
    load_entry_point('downburst==0.0.1', 'console_scripts', 'downburst')()
  File "/home/wusui/src/downburst/downburst/cli.py", line 59, in main
    return args.func(args)
  File "/home/wusui/src/downburst/downburst/destroy.py", line 70, in destroy
    | libvirt.VIR_DOMAIN_UNDEFINE_SNAPSHOTS_METADATA,
  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1386, in undefineFlags
    if ret == -1: raise libvirtError ('virDomainUndefineFlags() failed', dom=self)
libvirt.libvirtError: Requested operation is not valid: cannot undefine transient domain

2014-09-09 16:06:10,527.527 INFO:teuthology.provision:Downburst completed on ubuntu@vpm115.front.sepia.ceph.com: INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): ceph.com
downburst: Virtual machine with this name exists already: vpm115

2014-09-09 16:06:10,528.528 INFO:teuthology.provision:Guest files exist. Re-creating guest: ubuntu@vpm115.front.sepia.ceph.com
2014-09-09 16:06:11,206.206 ERROR:teuthology.provision:libvir: QEMU error : Requested operation is not valid: domain is not running
libvir: QEMU error : Requested operation is not valid: cannot undefine transient domain
Traceback (most recent call last):
  File "/home/wusui/src/downburst/virtualenv/bin/downburst", line 9, in <module>
    load_entry_point('downburst==0.0.1', 'console_scripts', 'downburst')()
  File "/home/wusui/src/downburst/downburst/cli.py", line 59, in main
    return args.func(args)
  File "/home/wusui/src/downburst/downburst/destroy.py", line 70, in destroy
    | libvirt.VIR_DOMAIN_UNDEFINE_SNAPSHOTS_METADATA,
  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1386, in undefineFlags
    if ret == -1: raise libvirtError ('virDomainUndefineFlags() failed', dom=self)
libvirt.libvirtError: Requested operation is not valid: cannot undefine transient domain

^C2014-09-09 16:06:11,598.598 INFO:teuthology.run:Summary data:
{owner: wusui@aardvark, success: true}

2014-09-09 16:06:11,598.598 WARNING:teuthology.report:No job_id found; not reporting results
2014-09-09 16:06:11,598.598 INFO:teuthology.run:pass

~/tests/test94.yaml is fairly simple:

roles:
- [mon.a, mds.a, osd.0, osd.1,]
- [mon.b, client.0, osd.2, osd.3,]
tasks:
- install:
   branch: dumpling
- ceph:
   fs: xfs
