Backup had an early error deleting partial backup


An error appears when backing up a Proxmox VE host via UrBackup

I want to set up backups of my Proxmox server (a backup of the host itself).

My host is named pve.

In the settings of the pve client on the UrBackup server, under File backups, I specified:

When I try to run the first Full file backup, I see the following error in the UrBackup server log

The backup is not created.

Which direction should I dig in?

Toward permissions, of course.

Who is missing which permissions?

I installed the UrBackup client on the PVE host as root.
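If you want to double-check that, here is a quick look at which account the client actually runs under (a hedged sketch: it assumes the stock Linux client, whose daemon is usually named urbackupclientbackend, and the prefilebackup path from the script template discussed below):

    # Confirm the client daemon is running and see which user owns it
    systemctl status urbackupclientbackend
    ps -C urbackupclientbackend -o user=,cmd=

    # Dry-run a pre-backup script under the same account to surface
    # permission errors outside of a real backup
    sudo /usr/local/etc/urbackup/prefilebackup; echo "exit code: $?"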

Dumb question: why make a copy of Proxmox at all?

Why, in what sense? Why does anyone make backups at all?

Well, my Proxmox OS simply sits on a mirror, and that seems sufficient. If I really want to, I make a tar archive, or I take an LVM snapshot before an upgrade. In more than five years there have been no problems. Data is another matter; but on the host itself, very little data actually changes.
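For reference, the pre-upgrade LVM snapshot mentioned above might look like the sketch below. The pve/root volume names and the 5 GiB size are assumptions; the volume group also needs enough free extents to hold the snapshot:

    # Take a copy-on-write snapshot of the root LV before an upgrade
    lvcreate --snapshot --name root-preupgrade --size 5G /dev/pve/root

    # If the upgrade goes wrong, merge the snapshot back (applied on the
    # next volume activation, typically a reboot)
    lvconvert --merge /dev/pve/root-preupgrade

    # Once the upgrade is confirmed good, drop the snapshot instead
    lvremove /dev/pve/root-preupgrade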

As far as I remember, Proxmox uses FUSE to mount its own cluster file system. Look at the output of mount.

Well, my Proxmox OS simply sits on a mirror, and that seems sufficient.

It is generally held that a mirror, or any other RAID, does not remove the need for backups.

In more than five years there have been no problems.

While I was still thinking about the best way to mirror the Proxmox system disk, I poked around in the BIOS settings in parallel, and somehow all the partitions on the disks connected to the motherboard got wiped, both SATA and NVMe.

I still haven't figured out why that happened :(

But on the host itself, very little data actually changes.

On the one hand, yes; on the other, I spent quite a bit of time on the initial setup of the PVE host, installing various utilities, temperature sensors, and so on. And the configuration of the attached disks (storage) is also kept on the host.

And now, apparently, I will have to redo everything from scratch.

As far as I remember, Proxmox uses FUSE to mount its own cluster file system. Look at the output of mount.

I am a Linux newbie; I don't yet understand how mount can help me here :(
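What that advice means, roughly: Proxmox mounts its cluster configuration file system (pmxcfs) over FUSE at /etc/pve, and file backup agents often fail on FUSE mounts they cannot snapshot or read normally. A sketch of the check, with typical output shown as a comment:

    # List FUSE mounts; on a PVE host pmxcfs normally appears at /etc/pve
    mount | grep fuse
    #   /dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,...)

If that mount turns out to be the culprit, one common workaround is to exclude /etc/pve in the client's excluded-files setting on the UrBackup server and back up a periodic copy of the directory instead; verify the exact setting name against your UrBackup version.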

Source

Dmitrius7/UrBackup_client_pre_post_backup_scripts_template


Pre- and post-backup script templates for the UrBackup client. I tested them with client 2.4.10 on Debian 10. UrBackup documentation: https://www.urbackup.org/administration_manual.html#x1-330006.4

I use these scripts to stop services before a backup and start them again after the snapshot is created.

Each command is checked for errors;

If an error occurs while they run, the script aborts, the stopped services are started again, and the UrBackup backup does not start (Backup had an early error. Deleting partial backup); see the sketch below;

All events are included in the UrBackup log;

Errors are logged in more detail.

  • Put these scripts in /usr/local/etc/urbackup and make them executable (see the sketch below);
  • Edit prefilebackup, postfileindex, and postfilebackup (there are instructions inside);
  • Create a file backup and check the log (in the web interface).
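In practice, the install steps might look like this (a sketch; the file names are taken from the repository listing):

    # Install the scripts where the UrBackup client looks for them
    mkdir -p /usr/local/etc/urbackup
    cp prefilebackup postfileindex postfilebackup functions_prepost_scripts \
       /usr/local/etc/urbackup/
    chmod +x /usr/local/etc/urbackup/*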

prefilebackup — UrBackup calls it before a file backup (before snapshot/shadow copy creation);

postfileindex — UrBackup calls it after indexing (after snapshot/shadow copy creation);

functions_prepost_scripts — functions used by these scripts.
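A minimal sketch of the error-checking pattern these scripts implement, matching the log messages shown below. The run_cmd helper name is my own; the real functions_prepost_scripts may differ:

    #!/bin/bash
    # Hypothetical prefilebackup in the spirit of this template: run each
    # command, log it, and on failure restart the stopped services and
    # exit non-zero so UrBackup aborts with an early error.
    run_cmd() {
        echo "run: $*"
        "$@"
        local rc=$?
        if [ "$rc" -ne 0 ]; then
            echo "command failed: $*. Return code: $rc" >&2
            systemctl start fail2ban    # start the stopped services back up
            exit "$rc"
        fi
    }

    # Stop services that must be quiescent while the snapshot is taken
    run_cmd systemctl stop fail2ban

Everything the script echoes ends up in the backup log on the server, which is how the run: lines below get there.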

Log without errors:

Level Time Message
Info 29.04.2020 15:08 Starting unscheduled incremental file backup.
Info 29.04.2020 15:08 Starting prefilebackup script
Info 29.04.2020 15:08 run: systemctl stop fail2ban
Info 29.04.2020 15:08 prefilebackup succeeded
Info 29.04.2020 15:08 Snapshotting device /dev/mapper/vgroot-root via dattobd.
Info 29.04.2020 15:08 Using /dev/datto0.
Info 29.04.2020 15:08 Mounting /dev/mapper/wsnap-76696aae351ab3313541b94beed0148caf9b7a4e86d22e40.
Info 29.04.2020 15:08 Indexing of "rootfs" done. 8376 filesystem lookups 0 db lookups and 0 db updates
Info 29.04.2020 15:08 Snapshotting device /dev/sda3 via dattobd.
Info 29.04.2020 15:08 Using /dev/datto1.
Info 29.04.2020 15:08 Mounting /dev/mapper/wsnap-10fedf2eed26fda4f9a17801428e72855aa0efe35d3884fb.
Info 29.04.2020 15:08 Indexing of "boot" done. 6 filesystem lookups 0 db lookups and 0 db updates
Info 29.04.2020 15:08 Starting postfileindex script (normally starts after indexing and snapshot creation)
Info 29.04.2020 15:08 run: systemctl start fail2ban
Info 29.04.2020 15:08 postfileindex succeeded
Info 29.04.2020 15:08 UrBackupC2dattobd: Loading file list.
Info 29.04.2020 15:08 UrBackupC2dattobd: Calculating file tree differences.
Info 29.04.2020 15:08 UrBackupC2dattobd: Creating snapshot.
Info 29.04.2020 15:08 UrBackupC2dattobd: Deleting files in snapshot. (14)
Info 29.04.2020 15:08 UrBackupC2dattobd: Deleting files in hash snapshot. (14)
Info 29.04.2020 15:08 UrBackupC2dattobd: Calculating tree difference size.
Info 29.04.2020 15:08 UrBackupC2dattobd: Linking unchanged and loading new files.
Info 29.04.2020 15:08 Waiting for file transfers.
Info 29.04.2020 15:08 Waiting for file hashing and copying threads.
Info 29.04.2020 15:09 Writing new file list.
Info 29.04.2020 15:09 All metadata was present
Info 29.04.2020 15:09 Transferred 12.4891 MB - Average speed: 51.1805 MBit/s
Info 29.04.2020 15:09 Time taken for backing up client UrBackupC2dattobd: 1m 10s
Info 29.04.2020 15:09 Backup succeeded

Log with errors:

Level Time Message
Info 29.04.2020 17:23 Starting unscheduled incremental file backup.
Errors 29.04.2020 17:23 Starting prefilebackup script
Errors 29.04.2020 17:23 run: systemctl stop fail22ban
Errors 29.04.2020 17:23 Failed to stop fail22ban.service: Unit fail22ban.service not loaded.
Errors 29.04.2020 17:23 prefilebackup script on client returned error on command: systemctl stop fail22ban. Return code: 5
Errors 29.04.2020 17:23 UrBackup will not run postfileindex script and stopped services will not be started.
Errors 29.04.2020 17:23 Fix it. Run postfileindex script right now for start services back
Errors 29.04.2020 17:23 Starting postfileindex script (normally starts after indexing and snapshot creation)
Errors 29.04.2020 17:23 run: systemctl start fail2ban
Errors 29.04.2020 17:23 postfileindex succeeded
Errors 29.04.2020 17:23 prefilebackup script was interrupted!
Errors 29.04.2020 17:23 Constructing of filelist of «UrBackupC2dattobd» failed: error — prefilebackup script failed with error code 256
Errors 29.04.2020 17:23 Backup had an early error. Deleting partial backup.


Source


I would also like to suggest the following enhancements to keep Veeam the top virtualization backup product among those available on the market.

1. Multiple job settings for one particular VM. For instance, I should be able to configure a Full Backup job without selecting Incremental, which is currently a default setting that must be selected. This requirement should be removed to allow more flexibility with the backup files.
2. Separate Incremental jobs, Forward Incremental or Reversed Incremental, depending on the customer's requirements. This would guarantee that the files written to tape are either full backups or incremental backups. One example: when I back up a VM in Reversed Incremental mode, I am forced to back up only full backups, without incrementals, because the archive bit is reset every time the job runs, which forces the product backing up from disk to tape to always run in full mode. That is fine for a small number of VMs, but for VMs holding terabytes of data I have to say: sorry, Veeam, I can't back up this VM in Reversed Incremental mode, because it keeps changing the original VBK file on every incremental run. So the customer can't back up the whole VM and is forced to go back to the legacy backup application to back up their VMs.

Alternatively, Veeam should do something about how Reversed Incremental backup works. Reversed Incremental is good and in my opinion works better than Forward Incremental, especially for those who lack the disk space to back up to disk, and its retention policy works perfectly, unlike Forward Incremental backup. If Reversed Incremental worked without changing the main VBK file, so that the backup-to-tape product could keep track of when the archive bit was last reset, it would save us from running full backups from disk of very big VMs.

With Forward Incremental, if Veeam got rid of doubling the VBK files during the second week's full backup, backing up to disk and then to tape would remain a great choice with this mode.

3. Veeam should introduce backup-to-tape integration, so that VMs are processed directly to tape. I know it's a bit slower, but it's the safest option a customer can adopt.

In this scenario I have only a few options:

Sticking to Reversed Incremental backup with five days of retention: a full backup on Saturday and incrementals Sunday through Thursday, and on Friday I have to copy the whole Veeam backup to tape. This puts me at risk of keeping a week's worth of data only on disk; if the SAN storage failed and I had not made a tape backup in the last week, everything would be lost. Besides that, I have the option of going back to my legacy backup application to back up the VMs, but that leaves me with no free backup window: copying the Reversed Incremental jobs to tape on a weekly basis takes a long time to finish, and on top of that I have to run the legacy backup application to finish some of its jobs.

Veeam, please do something about the suggestions above, to let customers keep using Veeam Backup in a more flexible way.

Thanks,

Last night, after Time Machine performed a backup and began its post-backup thinning, it stalled on "Finishing backup…". The system log showed that it was attempting to delete a previously partially deleted backup:

———————
Starting standard backup



Starting post-backup thinning
Found partially deleted backup — trying again to delete: 2009-09-30-110803
———————

The backup it was trying to delete was the last one on its list (i.e., the oldest one on that TM volume), and when I opened it up, it did indeed appear to be a partially deleted folder. So I let TM run. However, it never finished "Finishing backup…," so after letting it run all night and all day, I simply told TM to stop. It did, and the system then added two more messages to the log, acknowledging my cancellation as well as TM's claimed success:

———————
Starting standard backup



Starting post-backup thinning
Found partially deleted backup — trying again to delete: 2009-09-30-110803
Backup deletion was canceled by user
Backup completed successfully.
———————

However, the next time TM ran, it began all over again:

———————
Starting standard backup



Starting post-backup thinning
Found partially deleted backup — trying again to delete: 2009-09-30-110803
———————

It's still sitting there in its "Finishing backup…" mode.

I was thinking about entering Time Machine, selecting that backup, and telling Time Machine to delete that backup—and only that backup. But I’m (a) not sure that will address the actual problem, and (b) wondering if deleting that particular backup (i.e., the oldest one on the list) is advisable.

Suggestions?

iMac (24″, 2.8 GHz) • 2GB RAM • 320 GB HDD • Mac OS X (10.5.8)

Posted on Nov 13, 2009 4:39 PM
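For anyone debugging a similar hang from Terminal, a hedged sketch follows; tmutil only exists on Mac OS X 10.7 and later, and the volume and machine names are placeholders:

    # On 10.5/10.6, backupd logs to system.log; see what it was last doing
    grep backupd /var/log/system.log | tail -n 20

    # On 10.7 and later, tmutil can delete a single stuck backup by path
    sudo tmutil delete "/Volumes/TM Backup/Backups.backupdb/MyMac/2009-09-30-110803"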

Maxwell’s Demon wrote:

. . .


http://www.macosxhints.com/article.php?story=20090515063602219

which provided a link to the "solution" for when partial backups remain and cause TM "error: 11" failures:

http://www.bytebot.net/blog/archives/2008/08/13/fixing-time-machine-backup-failed-with-error-11

Admittedly, mine wasn't an "error: 11" problem, but it sure seemed similar.

No. They're talking about deleting the ".inProgress" package, which is the remains of a cancelled or failed backup, and is always the most recent one listed. There's no reason to delete it manually, as TM is designed to recover all by itself.

And those "solutions" don't really apply, either. Error 11 usually occurs with one of the messages listed in #C3 of the Time Machine — Troubleshooting *User Tip* at the top of this forum.

That Tip is also the place to start if you ever have other troubles with TM. It's not been officially "blessed" by Apple, of course, but some of the TM engineers have seen it, and the advice was gathered from many solutions posted here by several of the TM "gurus" that lurk in the TM forums.

Posted on Nov 15, 2009 2:04 PM
