Proxmox Task Error: VM is Locked (clone)

From time to time, you'll come across the need to kill a lock on your Proxmox server. Fear not, in today's guide we'll discuss the various lock errors you may face and how to unlock a Proxmox VM.

Proxmox Locked VM Errors

Error: VM is locked

The "VM is locked" error is the most common circumstance in which you may want to kill a VM lock. It has several variants, including:

  • Error: VM is locked (backup)
  • Error: VM is locked (snapshot)
  • Error: VM is locked (clone)

Error: VM is locked in Proxmox

As you can see, they all share the same "Error: VM is locked" root, with a suffix that indicates the task that initiated the lock: a backup, a snapshot, or a clone. This is useful for deciding whether you should clear the lock at all (for example, if the backup job is still running, you probably shouldn’t clear the lock; just let the backup complete).
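You can confirm which task set the lock by looking for the lock: line in the VM’s config (qm config <VMID> on a real node). A minimal sketch, using a hand-written sample config so it runs anywhere; the settings shown are made up:

```shell
# On a real node: qm config <VMID> | grep '^lock:'
# Here a hand-written sample config stands in, so the sketch runs anywhere.
cat <<'EOF' | grep '^lock:'
bootdisk: scsi0
cores: 2
lock: backup
memory: 2048
name: demo-vm
EOF
```

If this prints a line such as lock: backup, a task-level lock is set; no output means there is no config-level lock.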

can’t lock file ‘/var/lock/qemu-server/lock-<VMID>.conf’ – got timeout

This is another common error, often seen when you’re trying to shut down or stop a virtual machine, or when qm unlock itself fails (see below).

Proxmox Unlock VM Methods

There are two main ways of clearing a lock on a Proxmox VM: 1) using the qm unlock command and 2) manually deleting the lock.

qm unlock

qm unlock should be your first choice for unlocking a Proxmox VM.

First, find your VM ID (it’s the number next to your VM in the Proxmox GUI). If you’re not using the WebGUI, you can obtain a list of your VM IDs with:

cat /etc/pve/.vmlist
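The .vmlist file is JSON-like; the exact layout below is an assumed sample for illustration. One way to pull just the numeric IDs out of it (on a real node, replace the heredoc with cat /etc/pve/.vmlist):

```shell
# /etc/pve/.vmlist maps VMIDs to nodes; this sample layout is an assumption.
cat <<'EOF' | grep -oE '"[0-9]+"' | tr -d '"'
{
"version": 1,
"ids": {
"100": { "node": "pve", "type": "qemu", "version": 1 },
"106": { "node": "pve", "type": "qemu", "version": 3 }
}
}
EOF
```

Alternatively, qm list on the node prints the IDs alongside names and status.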

Unlock the VM/kill the lock with:

qm unlock <VMID>

Now, this may not always work, and the command may fail with:

trying to acquire lock...
can't lock file '/var/lock/qemu-server/lock-<VMID>.conf' - got timeout

In that case, let’s move on to plan B: manually deleting the lock.

Manually Deleting the Lock

If you received the error message "can’t lock file ‘/var/lock/qemu-server/lock-<VMID>.conf’ – got timeout", you can fix it by deleting the lock file at that location:

rm /var/lock/qemu-server/lock-<VMID>.conf

Obviously, this should be a last resort. It’s generally not a great practice to go around killing locks, but sometimes you have no choice.
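To make that last resort slightly safer, check that no vzdump process is still running before removing the file. A sketch, using a throwaway directory so it can be run anywhere; on a real node LOCKDIR would be /var/lock/qemu-server and the VMID here is hypothetical:

```shell
VMID=106                                    # hypothetical VM ID
LOCKDIR="${LOCKDIR:-/tmp/qemu-server-demo}" # real path: /var/lock/qemu-server
mkdir -p "$LOCKDIR"
touch "$LOCKDIR/lock-$VMID.conf"            # stand-in for the stale lock file

if pgrep -x vzdump >/dev/null; then
  # A backup is still running somewhere; better to let it finish.
  echo "vzdump is active; not removing the lock"
else
  rm -f "$LOCKDIR/lock-$VMID.conf"
  echo "removed $LOCKDIR/lock-$VMID.conf"
fi
```

On the real host, follow a successful removal with qm unlock 106 so the lock: entry in the VM config is cleared as well.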

I hope this guide helped you out. Let me know if you have any questions or feel I left something out in the comments/forums below!


TASK ERROR: start failed

Mellgood

New Member

Hi, after a failed clone and proxmox + host restart I got this error:
TASK ERROR: VM is locked (clone)

I tried:
qm unlock 106

but now i got another error:

kvm: -drive file=/var/lib/vz/images/106/vm-106-disk-1.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on: qcow2: Image is corrupt; cannot be opened read/write
TASK ERROR: start failed: command ‘/usr/bin/kvm -id 106 -chardev ‘socket,id=qmp,path=/var/run/qemu-server/106.qmp,server,nowait’ -mon ‘chardev=qmp,mode=control’ -pidfile /var/run/qemu-server/106.pid -daemonize -smbios ‘type=1,uuid=24d6521e-95c7-463e-97fe-e79e16051387’ -name TMP -smp ‘4,sockets=2,cores=2,maxcpus=4’ -nodefaults -boot ‘menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg’ -vga std -vnc unix:/var/run/qemu-server/106.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 10000 -k it -device ‘pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e’ -device ‘pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f’ -device ‘piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2’ -device ‘usb-tablet,id=tablet,bus=uhci.0,port=1’ -chardev ‘socket,path=/var/run/qemu-server/106.qga,server,nowait,id=qga0’ -device ‘virtio-serial,id=qga0,bus=pci.0,addr=0x8’ -device ‘virtserialport,chardev=qga0,name=org.qemu.guest_agent.0’ -device ‘virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3’ -iscsi ‘initiator-name=iqn.1993-08.org.debian:01:3681fcbb6821’ -drive ‘file=/var/lib/vz/template/iso/debian-9.4.0-amd64-netinst.iso,if=none,id=drive-ide2,media=cdrom,aio=threads’ -device ‘ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200’ -device ‘virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5’ -drive ‘file=/var/lib/vz/images/106/vm-106-disk-1.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on’ -device ‘scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100’ -netdev ‘type=tap,id=net0,ifname=tap106i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown’ -device ‘e1000,mac=86:CA:F0:6E:FB:EB,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300» failed: exit code 1

Is it possible to recover the corrupted image?
Thanks in advance.

TASK ERROR: VM is locked (backup)

jmjosebest

Active Member

Hi, after a full host restart I get this error when starting a KVM virtual machine.

TASK ERROR: VM is locked (backup)

/var/lib/vz/lock is empty.
How can I unlock?
Thanks

jmjosebest

Active Member

PC Rescue

New Member

RollMops

Active Member

fabian

Proxmox Staff Member

Best regards,
Fabian

Do you already have a Commercial Support Subscription? — If not, Buy now and read the documentation

RollMops

Active Member

fabian

Proxmox Staff Member

Check the system logs around the time of the failed backup, try manually backing up the VM a couple of times, and check for failure.

It might have been a one-time fluke.

Best regards,
Fabian

Do you already have a Commercial Support Subscription? — If not, Buy now and read the documentation
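Fabian’s suggestion to re-run the backup by hand can be done with vzdump directly. The VMID and storage name below are placeholders, and the call is guarded so the sketch is a no-op off the Proxmox node:

```shell
# Hypothetical VMID 100 and storage 'local'; watch the output for the failure.
if command -v vzdump >/dev/null; then
  vzdump 100 --mode snapshot --storage local
else
  echo "vzdump not found; run this on the Proxmox node"
fi
```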

RollMops

Active Member

I can provoke a system crash by manually starting a backup of the VE that I usually find in a locked state.
After 1 or 2 minutes, the process z_wr_iss takes 97% CPU and the server subsequently fails. It then automatically reboots, but the VE is still locked from the pending/failed backup process and does not automatically recover.
I attach a screenshot of the last state seen in "top".
Could this be prevented by adding some RAM to the box?

TASK ERROR: VM is locked (backup)

Active Member

Occasionally, usually every 2-3 days (I have nightly backups set up), my lxc containers freeze, ‘pct unlock’ won’t unlock them, and I have to reboot the nodes.

This appears to be a common problem with lxc (I wish I had stayed with openvz). Is there any solution to this? I’m backing up to LVM storage.

root@rovio:/media# pvdisplay ; lvdisplay ; mount
--- Physical volume ---
PV Name /dev/sdb3
VG Name pve
PV Size 447.00 GiB / not usable 3.84 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 114432
Free PE 4095
Allocated PE 110337
PV UUID mK0CnO-c5LV-kPJs-E6aw-Fupo-G0qg-OaPfHE

--- Physical volume ---
PV Name /dev/md0
VG Name store
PV Size 931.39 GiB / not usable 4.69 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 238434
Free PE 0
Allocated PE 238434
PV UUID fU89Zd-gihP-B5ta-Wzk5-7UsT-Uju0-C2AExy

--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID GGcRZ9-Q12y-FLsv-n0ds-ZEO9-9yaU-60A2i8
LV Write Access read/write
LV Creation host, time proxmox, 2016-01-11 06:53:53 -0500
LV Status available
# open 2
LV Size 31.00 GiB
Current LE 7936
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:1

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID mw530S-A1SO-BCIR-Taad-ORid-1nL7-wJaIEC
LV Write Access read/write
LV Creation host, time proxmox, 2016-01-11 06:53:53 -0500
LV Status available
# open 1
LV Size 96.00 GiB
Current LE 24576
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0

--- Logical volume ---
LV Path /dev/pve/data
LV Name data
VG Name pve
LV UUID I7NH4F-e3VY-7z0R-RuVK-kOos-1WB3-QOWH7C
LV Write Access read/write
LV Creation host, time proxmox, 2016-01-11 06:53:53 -0500
LV Status available
# open 1
LV Size 304.00 GiB
Current LE 77825
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:2

--- Logical volume ---
LV Path /dev/store/store
LV Name store
VG Name store
LV UUID ixRcPS-6wIf-k58J-ubbx-ksJx-ABdi-giLfKe
LV Write Access read/write
LV Creation host, time rovio, 2016-01-29 19:07:14 -0500
LV Status available
# open 1
LV Size 931.38 GiB
Current LE 238434
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:3

Can’t unlock vm

mmohamed2

New Member

i have errors when i try to backup

I tried

  • qm unlock 129

but i got

  • unable to open file ‘/etc/pve/nodes/proxmoxtiebosch/qemu-server/129.conf.tmp.22475’ — Input/output error

Also i can’t create folder

How can i fix it please ?
Thank you !

t.lamprecht

Proxmox Staff Member

First: you’re on a pretty outdated version of Proxmox VE; the current version of the old 5.x release is 5.4, and 6.1 is already available.

That said, it seems like the cluster file system hangs, this could be due to two things:

  • some bug or situation that made the pmxcfs lock up

  • a real hardware error, e.g., a faulty disk holding the pmxcfs database, or something similar

Please post some additional info:

Best regards,
Thomas

Do you already have a Commercial Support Subscription? — If not, Buy now and read the documentation

mmohamed2

New Member

Thank you for reply,

Can you guide me how to upgrade Proxmox VE please ?

# ps aux|grep pmxcfs
root 4361 0.1 0.1 1136680 59048 ? Ssl Apr08 4:14 /usr/bin/pmxcfs
root 7465 0.0 0.0 12784 852 pts/1 S+ 09:52 0:00 grep pmxcfs

# systemctl status pve-cluster
● pve-cluster.service — The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-04-08 12:18:11 CEST; 2 days ago
Main PID: 4361 (pmxcfs)
Tasks: 13 (limit: 4915)
Memory: 59.2M
CPU: 4min 15.268s
CGroup: /system.slice/pve-cluster.service
└─4361 /usr/bin/pmxcfs

Apr 09 01:32:46 proxmoxtiebosch pmxcfs[4361]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/proxmoxtiebosch/local: -1
Apr 09 01:32:46 proxmoxtiebosch pmxcfs[4361]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/proxmoxtiebosch/local-zfs: -1
Apr 09 01:35:48 proxmoxtiebosch pmxcfs[4361]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/proxmoxtiebosch/rpool2: -1
Apr 09 01:35:48 proxmoxtiebosch pmxcfs[4361]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/proxmoxtiebosch/local: -1
Apr 09 01:35:48 proxmoxtiebosch pmxcfs[4361]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/proxmoxtiebosch/local-zfs: -1
Apr 09 01:36:38 proxmoxtiebosch pmxcfs[4361]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/proxmoxtiebosch/local-zfs: -1
Apr 09 01:36:38 proxmoxtiebosch pmxcfs[4361]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/proxmoxtiebosch/local: -1
Apr 09 01:36:38 proxmoxtiebosch pmxcfs[4361]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/proxmoxtiebosch/rpool2: -1
Apr 09 01:46:26 proxmoxtiebosch pmxcfs[4361]: [database] crit: commit transaction failed: database or disk is full#010
Apr 09 01:46:26 proxmoxtiebosch pmxcfs[4361]: [database] crit: rollback transaction failed: cannot rollback — no transaction is active#010

Disk spaces

TASK ERROR: VM is locked (snapshot)

dison4linux

Active Member

Unable to take new snapshots/backups because there seems to be a snapshot in progress.
I’ve rebooted the hypervisor, no change.
The snapshots tab shows a ‘vzdump’ snapshot with a status of ‘prepare’

t.lamprecht

Proxmox Staff Member

Best regards,
Thomas

Do you already have a Commercial Support Subscription? — If not, Buy now and read the documentation

dison4linux

Active Member

I think that got me a little further, but no joy on a new snapshot yet.

t.lamprecht

Proxmox Staff Member

Using our pct snapshot command works?

Best regards,
Thomas

Do you already have a Commercial Support Subscription? — If not, Buy now and read the documentation

dison4linux

Active Member

It works to create a new one, but not to remove the troublesome one.
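For a snapshot stuck in ‘prepare’, pct delsnapshot with --force is worth trying; --force drops the snapshot from the config even if removing the underlying disk snapshot fails. CTID 100 stands in for the real container, and the guard makes the sketch safe to run anywhere:

```shell
# Hypothetical CTID 100; 'vzdump' is the stuck snapshot name from the post.
if command -v pct >/dev/null; then
  pct delsnapshot 100 vzdump --force
else
  echo "pct not found; run this on the Proxmox node"
fi
```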

dison4linux

Active Member

For what it’s worth the backup process seems unable to delete the snapshot as well:

dietmar

Proxmox Staff Member

Best regards,
Dietmar

Do you already have a Commercial Support Subscription? — If not, Buy now and read the documentation

VM is locked (backup) proxmox

cfsl1994

New Member

A few days ago, on the Proxmox server, a VM would not start and showed the message VM is locked (backup) when I tried to start it. The backups are scheduled for every Saturday at midnight (automatic). Searching the Internet, the solution I found to get the VM to start was the qm unlock <VMID> command.
Proxmox was installed on ZFS with two disks of 1 TB each. I have 4 virtual machines configured. I saw that backups of the first 2 VMs were made, but the third was not done because of the problem mentioned.
The question is: what can be the cause of this problem?
Some comment that sometimes the backup process itself fails. Why does it fail?

Sorry for the way I write, what happens is that I’m using a translator

I hope you can help me
regards

spirit

Famous Member

Can you send the failed backup log? What is the storage for the backup? (local, or remote (NFS/SMB)?)

Are you looking for a French Proxmox training center?

cfsl1994

New Member

Thanks for the reply.

The backup configuration is local, on the default local storage (pve). As for the logs of the backup process, I think they are stored under /var/lib/vz/dump/ (please correct me if that is not where the logs generated by the backup are kept). In total I have 4 virtual machines; only the first 2 were backed up correctly, and I think these are the logs.

This log is the first of them

root@pve:/var/lib/vz/dump# cat vzdump-qemu-202-2018_11_03-00_00_02.log
Nov 03 00:00:02 INFO: Starting Backup of VM 202 (qemu)
Nov 03 00:00:02 INFO: status = running
Nov 03 00:00:03 INFO: update VM 202: -lock backup
Nov 03 00:00:03 INFO: VM Name: AAAAAAAAAA
Nov 03 00:00:03 INFO: include disk ‘virtio0’ ‘local-zfs:vm-202-disk-1’
Nov 03 00:00:03 INFO: backup mode: snapshot
Nov 03 00:00:03 INFO: ionice priority: 7
Nov 03 00:00:03 INFO: creating archive ‘/var/lib/vz/dump/vzdump-qemu-202-2018_11_03-00_00_02.vma.lzo’
Nov 03 00:00:03 INFO: started backup task ‘a1521384-a182-41e8-940f-1da52df2af92’
Nov 03 00:00:06 INFO: status: 0% (2548301824/343597383680), sparse 0% (2503143424), duration 3, 849/15 MB/s
Nov 03 00:00:28 INFO: status: 1% (3458072576/343597383680), sparse 0% (2693566464), duration 25, 41/32 MB/s
Nov 03 00:00:44 INFO: status: 2% (6946881536/343597383680), sparse 1% (5957898240), duration 41, 218/14 MB/s
Nov 03 00:00:47 INFO: status: 3% (10435428352/343597383680), sparse 2% (9446350848), duration 44, 1162/0 MB/s
Nov 03 00:00:50 INFO: status: 4% (13905362944/343597383680), sparse 3% (12916154368), duration 47, 1156/0 MB/s
Nov 03 00:00:53 INFO: status: 5% (17407606784/343597383680), sparse 4% (16418394112), duration 50, 1167/0 MB/s
Nov 03 00:00:56 INFO: status: 6% (20917977088/343597383680), sparse 5% (19928666112), duration 53, 1170/0 MB/s
Nov 03 00:00:59 INFO: status: 7% (24346427392/343597383680), sparse 6% (23356977152), duration 56, 1142/0 MB/s
Nov 03 00:01:02 INFO: status: 8% (27716419584/343597383680), sparse 7% (26726965248), duration 59, 1123/0 MB/s
Nov 03 00:01:05 INFO: status: 9% (31097159680/343597383680), sparse 8% (30107697152), duration 62, 1126/0 MB/s
Nov 03 00:01:08 INFO: status: 10% (34463612928/343597383680), sparse 9% (33474146304), duration 65, 1122/0 MB/s
Nov 03 00:01:11 INFO: status: 11% (37834063872/343597383680), sparse 10% (36844498944), duration 68, 1123/0 MB/s
Nov 03 00:01:15 INFO: status: 12% (42363191296/343597383680), sparse 12% (41373618176), duration 72, 1132/0 MB/s
Nov 03 00:01:18 INFO: status: 13% (45740720128/343597383680), sparse 13% (44751138816), duration 75, 1125/0 MB/s
Nov 03 00:01:21 INFO: status: 14% (49146167296/343597383680), sparse 14% (48156491776), duration 78, 1135/0 MB/s
Nov 03 00:01:24 INFO: status: 15% (52505346048/343597383680), sparse 14% (51515662336), duration 81, 1119/0 MB/s
Nov 03 00:01:27 INFO: status: 16% (55718379520/343597383680), sparse 15% (54728691712), duration 84, 1071/0 MB/s
Nov 03 00:01:30 INFO: status: 17% (59068710912/343597383680), sparse 16% (58079014912), duration 87, 1116/0 MB/s
Nov 03 00:01:33 INFO: status: 18% (62438703104/343597383680), sparse 17% (61448871936), duration 90, 1123/0 MB/s
Nov 03 00:01:36 INFO: status: 19% (65801355264/343597383680), sparse 18% (64811220992), duration 93, 1120/0 MB/s
Nov 03 00:01:39 INFO: status: 20% (69159682048/343597383680), sparse 19% (68169515008), duration 96, 1119/0 MB/s
Nov 03 00:01:42 INFO: status: 21% (72527839232/343597383680), sparse 20% (71537655808), duration 99, 1122/0 MB/s
Nov 03 00:01:45 INFO: status: 22% (75935318016/343597383680), sparse 21% (74945126400), duration 102, 1135/0 MB/s
Nov 03 00:01:48 INFO: status: 23% (79327854592/343597383680), sparse 22% (78337658880), duration 105, 1130/0 MB/s
Nov 03 00:01:51 INFO: status: 24% (82779111424/343597383680), sparse 23% (81788907520), duration 108, 1150/0 MB/s
Nov 03 00:01:54 INFO: status: 25% (86157885440/343597383680), sparse 24% (85167677440), duration 111, 1126/0 MB/s
Nov 03 00:01:57 INFO: status: 26% (89569165312/343597383680), sparse 25% (88578859008), duration 114, 1137/0 MB/s
Nov 03 00:02:00 INFO: status: 27% (92989489152/343597383680), sparse 26% (91999174656), duration 117, 1140/0 MB/s
Nov 03 00:02:03 INFO: status: 28% (96392118272/343597383680), sparse 27% (95401799680), duration 120, 1134/0 MB/s
Nov 03 00:02:06 INFO: status: 29% (99787407360/343597383680), sparse 28% (98797076480), duration 123, 1131/0 MB/s
Nov 03 00:02:10 INFO: status: 30% (103624146944/343597383680), sparse 29% (102577848320), duration 127, 959/13 MB/s
Nov 03 00:02:14 INFO: status: 31% (107144151040/343597383680), sparse 30% (106010476544), duration 131, 880/21 MB/s
Nov 03 00:02:17 INFO: status: 32% (110565588992/343597383680), sparse 31% (109431906304), duration 134, 1140/0 MB/s
Nov 03 00:02:20 INFO: status: 33% (113909366784/343597383680), sparse 32% (112775680000), duration 137, 1114/0 MB/s
Nov 03 00:02:23 INFO: status: 34% (117280669696/343597383680), sparse 33% (116146974720), duration 140, 1123/0 MB/s
Nov 03 00:02:26 INFO: status: 35% (120689786880/343597383680), sparse 34% (119556083712), duration 143, 1136/0 MB/s
Nov 03 00:02:29 INFO: status: 36% (124012527616/343597383680), sparse 35% (122878820352), duration 146, 1107/0 MB/s
Nov 03 00:02:32 INFO: status: 37% (127352504320/343597383680), sparse 36% (126218756096), duration 149, 1113/0 MB/s
Nov 03 00:02:35 INFO: status: 38% (130735734784/343597383680), sparse 37% (129601982464), duration 152, 1127/0 MB/s
Nov 03 00:02:38 INFO: status: 39% (134013517824/343597383680), sparse 38% (132875325440), duration 155, 1092/1 MB/s
Nov 03 00:02:42 INFO: status: 40% (138516692992/343597383680), sparse 39% (137378480128), duration 159, 1125/0 MB/s
Nov 03 00:02:45 INFO: status: 41% (141923123200/343597383680), sparse 40% (140784906240), duration 162, 1135/0 MB/s
Nov 03 00:02:48 INFO: status: 42% (145292197888/343597383680), sparse 41% (144153972736), duration 165, 1123/0 MB/s
Nov 03 00:02:51 INFO: status: 43% (148656947200/343597383680), sparse 42% (147518689280), duration 168, 1121/0 MB/s
Nov 03 00:03:02 INFO: status: 44% (151293722624/343597383680), sparse 43% (149850406912), duration 179, 239/27 MB/s
Nov 03 00:03:06 INFO: status: 45% (155503493120/343597383680), sparse 44% (154060156928), duration 183, 1052/0 MB/s
Nov 03 00:03:09 INFO: status: 46% (158745493504/343597383680), sparse 45% (157302153216), duration 186, 1080/0 MB/s
Nov 03 00:03:12 INFO: status: 47% (162202714112/343597383680), sparse 46% (160759353344), duration 189, 1152/0 MB/s
Nov 03 00:03:15 INFO: status: 48% (165613535232/343597383680), sparse 47% (164170170368), duration 192, 1136/0 MB/s
Nov 03 00:03:18 INFO: status: 49% (169079275520/343597383680), sparse 48% (167635849216), duration 195, 1155/0 MB/s
Nov 03 00:03:21 INFO: status: 50% (172296437760/343597383680), sparse 49% (170852409344), duration 198, 1072/0 MB/s
Nov 03 00:03:29 INFO: status: 51% (175484108800/343597383680), sparse 50% (173906579456), duration 206, 398/16 MB/s
Nov 03 00:03:32 INFO: status: 52% (178822905856/343597383680), sparse 51% (177245368320), duration 209, 1112/0 MB/s
Nov 03 00:03:35 INFO: status: 53% (182317809664/343597383680), sparse 52% (180740268032), duration 212, 1164/0 MB/s
Nov 03 00:03:38 INFO: status: 54% (185796460544/343597383680), sparse 53% (184218910720), duration 215, 1159/0 MB/s
Nov 03 00:03:41 INFO: status: 55% (189229105152/343597383680), sparse 54% (187651551232), duration 218, 1144/0 MB/s
Nov 03 00:03:44 INFO: status: 56% (192468287488/343597383680), sparse 55% (190890725376), duration 221, 1079/0 MB/s
Nov 03 00:03:47 INFO: status: 57% (195876945920/343597383680), sparse 56% (194299375616), duration 224, 1136/0 MB/s
Nov 03 00:03:50 INFO: status: 58% (199297007616/343597383680), sparse 57% (197719433216), duration 227, 1140/0 MB/s
Nov 03 00:03:54 INFO: status: 59% (203789631488/343597383680), sparse 58% (202212048896), duration 231, 1123/0 MB/s
Nov 03 00:03:57 INFO: status: 60% (207197241344/343597383680), sparse 59% (205619650560), duration 234, 1135/0 MB/s
Nov 03 00:04:00 INFO: status: 61% (210630410240/343597383680), sparse 60% (209052815360), duration 237, 1144/0 MB/s
Nov 03 00:04:03 INFO: status: 62% (213959114752/343597383680), sparse 61% (212381511680), duration 240, 1109/0 MB/s
Nov 03 00:04:06 INFO: status: 63% (217292275712/343597383680), sparse 62% (215714316288), duration 243, 1111/0 MB/s
Nov 03 00:04:09 INFO: status: 64% (220670001152/343597383680), sparse 63% (219086229504), duration 246, 1125/1 MB/s
Nov 03 00:04:12 INFO: status: 65% (224078266368/343597383680), sparse 64% (222494486528), duration 249, 1136/0 MB/s
Nov 03 00:04:15 INFO: status: 66% (227500425216/343597383680), sparse 65% (225916641280), duration 252, 1140/0 MB/s
Nov 03 00:04:18 INFO: status: 67% (230876839936/343597383680), sparse 66% (229293047808), duration 255, 1125/0 MB/s
Nov 03 00:04:21 INFO: status: 68% (234243817472/343597383680), sparse 67% (232660021248), duration 258, 1122/0 MB/s
Nov 03 00:04:24 INFO: status: 69% (237556269056/343597383680), sparse 68% (235972464640), duration 261, 1104/0 MB/s
Nov 03 00:04:27 INFO: status: 70% (240932159488/343597383680), sparse 69% (239348346880), duration 264, 1125/0 MB/s

Nov 03 00:04:30 INFO: status: 71% (244309753856/343597383680), sparse 70% (242725937152), duration 267, 1125/0 MB/s

Nov 03 00:04:33 INFO: status: 72% (247524687872/343597383680), sparse 71% (245940862976), duration 270, 1071/0 MB/s

Nov 03 00:04:36 INFO: status: 73% (250887274496/343597383680), sparse 72% (249303445504), duration 273, 1120/0 MB/s

Nov 03 00:04:40 INFO: status: 74% (255349555200/343597383680), sparse 73% (253765718016), duration 277, 1115/0 MB/s

Nov 03 00:04:43 INFO: status: 75% (258735144960/343597383680), sparse 74% (257151299584), duration 280, 1128/0 MB/s

Nov 03 00:04:46 INFO: status: 76% (262110248960/343597383680), sparse 75% (260526399488), duration 283, 1125/0 MB/s

Nov 03 00:04:49 INFO: status: 77% (265411821568/343597383680), sparse 76% (263827963904), duration 286, 1100/0 MB/s

Nov 03 00:04:52 INFO: status: 78% (268765822976/343597383680), sparse 77% (267181961216), duration 289, 1118/0 MB/s

Nov 03 00:04:56 INFO: status: 79% (271974596608/343597383680), sparse 78% (270289256448), duration 293, 802/25 MB/s

Nov 03 00:05:00 INFO: status: 80% (275274203136/343597383680), sparse 79% (273524998144), duration 297, 824/15 MB/s

Nov 03 00:05:15 INFO: status: 81% (279368630272/343597383680), sparse 80% (277223661568), duration 312, 272/26 MB/s

Nov 03 00:05:18 INFO: status: 82% (282713980928/343597383680), sparse 81% (280569004032), duration 315, 1115/0 MB/s

Nov 03 00:05:21 INFO: status: 83% (286082924544/343597383680), sparse 82% (283937939456), duration 318, 1122/0 MB/s

Nov 03 00:05:24 INFO: status: 84% (289453375488/343597383680), sparse 83% (287308386304), duration 321, 1123/0 MB/s

Nov 03 00:05:27 INFO: status: 85% (292799578112/343597383680), sparse 84% (290654568448), duration 324, 1115/0 MB/s

Nov 03 00:05:30 INFO: status: 86% (296157904896/343597383680), sparse 85% (294012801024), duration 327, 1119/0 MB/s

Nov 03 00:05:33 INFO: status: 87% (299541987328/343597383680), sparse 86% (297396875264), duration 330, 1128/0 MB/s

Nov 03 00:05:36 INFO: status: 88% (302786609152/343597383680), sparse 87% (300641492992), duration 333, 1081/0 MB/s

Nov 03 00:05:39 INFO: status: 89% (306045583360/343597383680), sparse 88% (303900446720), duration 336, 1086/0 MB/s

Nov 03 00:05:42 INFO: status: 90% (309347090432/343597383680), sparse 89% (307201949696), duration 339, 1100/0 MB/s

Nov 03 00:05:46 INFO: status: 91% (313647824896/343597383680), sparse 90% (311502675968), duration 343, 1075/0 MB/s

Nov 03 00:05:49 INFO: status: 92% (316798337024/343597383680), sparse 91% (314653179904), duration 346, 1050/0 MB/s

Nov 03 00:05:52 INFO: status: 93% (320073433088/343597383680), sparse 92% (317928271872), duration 349, 1091/0 MB/s

Nov 03 00:05:55 INFO: status: 94% (323384836096/343597383680), sparse 93% (321239666688), duration 352, 1103/0 MB/s

Nov 03 00:05:58 INFO: status: 95% (326593740800/343597383680), sparse 94% (324448477184), duration 355, 1069/0 MB/s

Nov 03 00:06:01 INFO: status: 96% (329991389184/343597383680), sparse 95% (327846117376), duration 358, 1132/0 MB/s

Nov 03 00:06:04 INFO: status: 97% (333371867136/343597383680), sparse 96% (331226587136), duration 361, 1126/0 MB/s

Nov 03 00:06:08 INFO: status: 98% (337854660608/343597383680), sparse 97% (335709372416), duration 365, 1120/0 MB/s

Nov 03 00:06:11 INFO: status: 99% (341128183808/343597383680), sparse 98% (338982891520), duration 368, 1091/0 MB/s

Nov 03 00:06:13 INFO: status: 100% (343597383680/343597383680), sparse 99% (341452083200), duration 370, 1234/0 MB/s

Nov 03 00:06:13 INFO: transferred 343597 MB in 370 seconds (928 MB/s)

Nov 03 00:06:13 INFO: archive file size: 1.04GB

Nov 03 00:06:13 INFO: delete old backup ‘/var/lib/vz/dump/vzdump-qemu-202-2018_10_27-00_00_02.vma.lzo’

Nov 03 00:06:15 INFO: Finished Backup of VM 202 (00:06:13)

The virtual machines have IDs 202, 204, 205, and 206 respectively. When I went to verify the backup of VM 205, it had not completed, and I could not find a log for 205 or for 206. In the dump directory I saw that this directory and this file had been created:

drwxr-xr-x 2 root root    3 Nov  3 00:07 vzdump-qemu-205-2018_11_03-00_07_40.tmp/
-rw-r--r-- 1 root root 5.4G Nov  3 00:26 vzdump-qemu-205-2018_11_03-00_07_40.vma.dat

I do not know whether these generated files are normal or a result of the backup problem.
To be honest, I have only been getting acquainted with Proxmox for a few months.
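As I understand it, vzdump creates a `<archive>.tmp` working directory and a `.dat` file while a backup job is running, so finding them afterwards with no job active suggests the VM 205 backup was interrupted rather than completed. A small sketch for spotting such leftovers (the `stale_dump_artifacts` helper name is my own, and the default local dump directory is assumed):

```shell
# stale_dump_artifacts DIR -- list vzdump working files (*.tmp
# directories and *.dat files) left in a dump directory. If no backup
# is currently running, anything listed is a leftover from an
# interrupted job. Verify with "ps aux | grep vzdump" before deleting.
stale_dump_artifacts() {
    find "$1" -maxdepth 1 \( -name '*.tmp' -o -name '*.dat' \) 2>/dev/null
}

# Default local dump directory on a stock Proxmox node:
stale_dump_artifacts /var/lib/vz/dump
```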


Reproduced on v2.9.10.

I tried adjusting the resource create timeouts, which did not seem to have any effect. Here are several test runs made after changing the resource create timeouts; nothing at all was changed between runs:

  • Test 1: at the 5m50s mark, got a vm locked, could not obtain config error
  • Test 2: at the 5m mark, Error: file provisioner error because the file provisioner could not connect. "timeout — last error: dial tcp 192.168.1.78:22: connect: no route to host"
  • Test 3: Creation complete after 6m21s
  • Test 4: vm locked, could not obtain config
  • Test 5: the terraform destroy thought there was nothing to clean up, resulting in Error: 500 can't lock file '/var/lock/qemu-server/lock-135.conf' - got timeout on creation
  • Test 6: Creation complete after 6m17s
  • Test 7: Creation complete after 6m9s
  • Test 8: vm locked, could not obtain config
  • Test 9: the terraform destroy thought there was nothing to clean up, resulting in Error: file provisioner error because the file provisioner could not connect. "timeout — last error: dial tcp 192.168.1.78:22: connect: no route to host"
  • Test 10: Creation complete after 6m28s

So that gives some concrete data on the different types of errors that can come up and how often they appear. The issue with destroy incorrectly thinking there is nothing to destroy might be separate; if I can dig up more details on that, I'll open a new ticket if one does not already exist. I suspect all the other failures are related to this timeout problem.
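The "vm locked, could not obtain config" and "can't lock file ... got timeout" failures in these runs are the same situation described at the top of the article, and can be cleared by hand between test runs. A rough sketch combining `qm unlock` with the manual lock-file fallback (the `unlock_vm` helper is hypothetical; only force-remove the lock file after confirming no backup/clone/snapshot task is still running):

```shell
# unlock_vm VMID -- try "qm unlock" first; if that fails (e.g. with
# "got timeout" on the lock file itself), fall back to deleting the
# stale lock file. Hypothetical helper: make sure no backup, clone,
# or snapshot task is still running against the VM before forcing it.
unlock_vm() {
    vmid="$1"
    lockfile="/var/lock/qemu-server/lock-${vmid}.conf"
    if qm unlock "$vmid"; then
        return 0
    fi
    echo "qm unlock failed; removing stale lock file ${lockfile}" >&2
    rm -f "$lockfile"
}

# e.g. for the VM ID from the error above:
# unlock_vm 135
```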

People seem to be mentioning setting both pm_timeout and PM_TIMEOUT to work around this issue. In case anyone in the future is confused about which is the correct environment variable to use: it is PM_TIMEOUT. The documentation refers to it as pm_timeout (similar to how the pm_api_url value is set via the PM_API_URL environment variable).

Similar to what others are reporting, I found that setting PM_TIMEOUT=600 seems to make everything completely stable. I’ve redeployed several times in a row without any failures, so this seems like a solid workaround.
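For anyone wiring this up, the workaround is just an environment variable on the machine running Terraform. A minimal sketch, assuming 600 seconds is enough headroom (the clones here were finishing around the 6-minute mark); the `terraform` invocation is commented out so the snippet stands alone:

```shell
# pm_timeout is surfaced as the PM_TIMEOUT environment variable,
# following the same pm_api_url -> PM_API_URL naming convention.
export PM_TIMEOUT=600   # seconds the provider waits on Proxmox tasks

# Then run terraform as usual, e.g.:
# terraform init && terraform apply
```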

In conclusion, I very much look forward to the fix with proper Go-style waits!

Update: I just got a The plugin.(*GRPCProvider).ApplyResourceChange request was cancelled. error with PM_TIMEOUT set to 600 (it happened at 5m20s). So apparently either this workaround isn't bulletproof, or there is an additional issue causing trouble. After that error, the VM still existed in Proxmox, but Terraform thought there was nothing to destroy; the workaround for that is to just delete it manually in Proxmox.

Update 2: The VM is being cloned from server_1 (which is where the template is located) and deployed onto server_2 (for no particular reason). If I change the Terraform file to set target_node = "server_1", PM_TIMEOUT appears to be a more stable fix (maybe 100% stable?). In my case the backing store is a Ceph cluster that is available on both servers. I mention this because it will probably affect the fix (waiting for the VM to be on the right node, not just for the clone operation to complete).
