EXT4-fs error: deleted inode referenced


I checked my /var/log/messages log file; every 2 seconds another entry like this is added:

Mar 20 11:42:30 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844
Mar 20 11:42:32 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844
Mar 20 11:42:34 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844
Mar 20 11:42:36 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844
Mar 20 11:42:38 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844
Mar 20 11:42:40 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844
Mar 20 11:42:42 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844
Mar 20 11:42:44 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844

I didn’t perform any operations on the system, yet the error keeps being logged. I suspect the file system is corrupted.

What should I do?

asked Mar 20, 2015 at 6:18 by vipin kumar

I encountered this error before as well. A manual file system check fixes it, but you can consider some files lost already.

Syntax:

fsck -y

It is best to do this in single user mode.
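For context, fsck's exit status is a bit mask documented in fsck(8), so you can tell whether `-y` actually repaired anything. A minimal sketch of decoding it (the `rc=1` value is a hypothetical example status, not taken from this thread):

```shell
# fsck exit status bits, per fsck(8):
# 0 = no errors, 1 = errors corrected, 2 = system should be rebooted,
# 4 = errors left uncorrected.
rc=1   # hypothetical exit status from an earlier `fsck -y` run
if [ $((rc & 4)) -ne 0 ]; then
    msg="errors left uncorrected"
elif [ $((rc & 2)) -ne 0 ]; then
    msg="reboot required"
elif [ $((rc & 1)) -ne 0 ]; then
    msg="errors corrected"
else
    msg="no errors"
fi
echo "$msg"
```

A status of 1 after `fsck -y` is the normal "it found and fixed problems" outcome.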

answered Mar 20, 2015 at 6:32

wbruan

I am sharing how I resolved this issue.

I edited /etc/fstab and set the root file system's sixth field (fs_passno) to 1, so fsck runs on it at boot:

/dev/mapper/vg_vipin-lv_root /   ext4    defaults        0 1

Then I rebooted.

fsck ran during boot, and now everything is back to normal.
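What makes this work is the sixth fstab field, fs_passno: 0 means never check at boot, 1 is conventionally the root filesystem (checked first), and 2 is used for other filesystems. A small sketch pulling that field out of the entry above:

```shell
# fs_passno is the sixth whitespace-separated field of an fstab entry.
entry='/dev/mapper/vg_vipin-lv_root /   ext4    defaults        0 1'
set -- $entry    # intentionally unquoted: split the entry on whitespace
printf 'device=%s mountpoint=%s passno=%s\n' "$1" "$2" "$6"
```

With passno 0, the filesystem is never checked at boot, which is how corruption like this can survive reboots.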

answered Mar 20, 2015 at 7:37

vipin kumar

Red Hat Customer Portal

Issue

  • Errors similar to the following in system logging:
EXT4-fs error (device dm-6): ext4_lookup: deleted inode referenced: 1048628
EXT4-fs error (device dm-6): ext4_lookup: deleted inode referenced: 1048628
  • Directory warnings are being seen in the logs:
EXT4-fs warning (device dm-6): dx_probe: Unrecognised inode hash code 20
EXT4-fs warning (device dm-6): dx_probe: Corrupt dir inode 1048627, running e2fsck is recommended.
  • After performing maintenance on my cluster nodes which required stopping/starting services with ext4 file systems, I see errors in the logs:
Mar 18 04:16:41 node1 kernel: EXT4-fs error (device dm-8): ext4_lookup: deleted inode referenced: 204277
Mar 18 04:16:41 node1 kernel: EXT4-fs error (device dm-8): ext4_lookup: deleted inode referenced: 204280
Mar 18 04:16:41 node1 kernel: EXT4-fs error (device dm-8): ext4_lookup: deleted inode referenced: 204279
Mar 18 04:16:41 node1 kernel: EXT4-fs error (device dm-8): ext4_lookup: deleted inode referenced: 204283
Mar 18 04:16:41 node1 kernel: EXT4-fs error (device dm-8): ext4_mb_free_metadata: Double free of blocks 4327 (4327 1)
Mar 18 04:16:41 node1 kernel: JBD: Spotted dirty metadata buffer (dev = dm-8, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
Mar 18 04:16:45 node1 kernel: EXT4-fs error (device dm-8): mb_free_blocks: double-free of inode 0's block 563840(bit 6784 in group 17)
Mar 18 04:16:45 node1 kernel: EXT4-fs error (device dm-8): mb_free_blocks: double-free of inode 0's block 563841(bit 6785 in group 17)
Mar 18 04:16:45 node1 kernel: EXT4-fs error (device dm-8): mb_free_blocks: double-free of inode 0's block 563858(bit 6802 in group 17)

Environment

  • Red Hat Enterprise Linux (RHEL) 6
  • ext4 file system(s)

Subscriber exclusive content


Node : serviceguardnode2.setaoffice.com
Node Type : Intel/AMD x64(HTTPS)
Severity : minor
OM Server Time: 2016-12-22 18:22:32
Message : EXT4-fs: error (device dm-156): ext4_lookup: deleted inode referenced: 1091357
Msg Group : OS
Application : dmsg_mon
Object : EXT4
Event Type :
not_found

Instance Name :
not_found

Instruction : No

Checking which device is complaining. dm-156 is /dev/vgWPJ/lv_orawp0

root@serviceguardnode2:/dev/mapper # ls -l | grep 156
lrwxrwxrwx. 1 root root 9 Dec 14 22:15 vgWPJ-lv_orawp0 -> ../dm-156
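The mapping from a dm-N node back to its LV name can also be pulled out of that `ls -l` output with awk; a sketch using the line captured above (on a live system, pipe the real `ls -l /dev/mapper` through it instead):

```shell
# Find which /dev/mapper name points at dm-156, using the sample line from above.
ls_line='lrwxrwxrwx. 1 root root 9 Dec 14 22:15 vgWPJ-lv_orawp0 -> ../dm-156'
lv=$(printf '%s\n' "$ls_line" | awk '$NF == "../dm-156" { print $(NF-2) }')
echo "$lv"
```

On reasonably recent kernels, `cat /sys/block/dm-156/dm/name` should give the same answer directly.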

The filesystem is currently mounted

root@serviceguardnode2:/dev/mapper # mount | grep lv_orawp0
/dev/mapper/vgWPJ-lv_orawp0 on /oracle/WPJ type ext4 (rw,errors=remount-ro,data_err=abort,barrier=0)
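The `errors=remount-ro` option in that line is ext4's error-handling policy: on a filesystem error the kernel remounts the volume read-only instead of continuing or panicking (the persistent default can be set with `tune2fs -e`). A sketch that extracts the policy from a mount line, using the sample output above:

```shell
# Extract the errors= policy from a `mount` output line.
mnt_line='/dev/mapper/vgWPJ-lv_orawp0 on /oracle/WPJ type ext4 (rw,errors=remount-ro,data_err=abort,barrier=0)'
opts=${mnt_line##*(}   # strip everything up to the option list
opts=${opts%)}         # drop the closing parenthesis
policy=$(printf '%s\n' "$opts" | tr ',' '\n' | sed -n 's/^errors=//p')
echo "$policy"
```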

And the logical volume is open

root@serviceguardnode2:~ # lvs vgWPJ
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv_ora11264 vgWPJ -wi-ao---- 30.00g
lv_orawp0 vgWPJ -wi-ao---- 5.00g

This is a clustered environment and it is currently running on the other node

root@serviceguardnode2:/dev/mapper # cmviewcl | grep -i wpj
dbWPJ up running enabled serviceguardnode1

There is a Red Hat note covering this error: “ext4_lookup: deleted inode referenced” errors in /var/log/messages in RHEL 6.

In clustered environments, which is the case here, if the other node has the filesystem mounted, these errors will appear in /var/log/messages on this node.

root@serviceguardnode2:~ # cmviewcl -v -p dbWPJ

PACKAGE STATUS STATE AUTO_RUN NODE
dbWPJ up running enabled serviceguardnode1

Policy_Parameters:
POLICY_NAME CONFIGURED_VALUE
Failover configured_node
Failback manual

Script_Parameters:
ITEM STATUS MAX_RESTARTS RESTARTS NAME
Service up 5 0 dbWPJmon
Subnet up 10.106.10.0

Node_Switching_Parameters:
NODE_TYPE STATUS SWITCHING NAME
Primary up enabled serviceguardnode1 (current)
Alternate up enabled serviceguardnode2

Dependency_Parameters:
DEPENDENCY_NAME NODE_NAME SATISFIED
dbWP0_dep serviceguardnode2 no
dbWP0_dep serviceguardnode1 yes

Other_Attributes:
ATTRIBUTE_NAME ATTRIBUTE_VALUE
Style modular
Priority no_priority

Checking the filesystems. I need to unmount /oracle/WPJ, but first I must unmount everything beneath /oracle/WPJ; otherwise umount will report that /oracle/WPJ is busy.

root@serviceguardnode2:~ # df -hP | grep WPJ
/dev/mapper/vgSAP-lv_WPJ_sys 93M 1.6M 87M 2% /usr/sap/WPJ/SYS
/dev/mapper/vgWPJ-lv_orawp0 4.4G 162M 4.0G 4% /oracle/WPJ
/dev/mapper/vgWPJ-lv_ora11264 27G 4.7G 21G 19% /oracle/WPJ/11204
/dev/mapper/vgWPJlog2-lv_origlogb 2.0G 423M 1.4G 23% /oracle/WPJ/origlogB
/dev/mapper/vgWPJlog2-lv_mirrloga 2.0G 404M 1.5G 22% /oracle/WPJ/mirrlogA
/dev/mapper/vgWPJlog1-lv_origloga 2.0G 423M 1.4G 23% /oracle/WPJ/origlogA
/dev/mapper/vgWPJlog1-lv_mirrlogb 2.0G 404M 1.5G 22% /oracle/WPJ/mirrlogB
/dev/mapper/vgWPJdata-lv_sapdata4 75G 21G 55G 28% /oracle/WPJ/sapdata4
/dev/mapper/vgWPJdata-lv_sapdata3 75G 79M 75G 1% /oracle/WPJ/sapdata3
/dev/mapper/vgWPJdata-lv_sapdata2 75G 7.3G 68G 10% /oracle/WPJ/sapdata2
/dev/mapper/vgWPJdata-lv_sapdata1 75G 1.1G 74G 2% /oracle/WPJ/sapdata1
/dev/mapper/vgWPJoraarch-lv_oraarch 20G 234M 19G 2% /oracle/WPJ/oraarch
scsWPJ:/export/sapmnt/WPJ/profile 4.4G 4.0M 4.1G 1% /sapmnt/WPJ/profile
scsWPJ:/export/sapmnt/WPJ/exe 4.4G 2.5G 1.7G 61% /sapmnt/WPJ/exe

Unmounting /oracle/WPJ

root@serviceguardnode2:~ # umount /oracle/WPJ/11204
root@serviceguardnode2:~ # umount /oracle/WPJ/origlogB
root@serviceguardnode2:~ # umount /oracle/WPJ/mirrlogA
root@serviceguardnode2:~ # umount /oracle/WPJ/origlogA
root@serviceguardnode2:~ # umount /oracle/WPJ/mirrlogB
root@serviceguardnode2:~ # umount /oracle/WPJ/sapdata4
root@serviceguardnode2:~ # umount /oracle/WPJ/sapdata3
root@serviceguardnode2:~ # umount /oracle/WPJ/sapdata2
root@serviceguardnode2:~ # umount /oracle/WPJ/sapdata1
root@serviceguardnode2:~ # umount /oracle/WPJ/oraarch
root@serviceguardnode2:~ # umount /oracle/WPJ
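The order above matters: nested mount points must be unmounted before their parent, or `umount` reports that the parent is busy. Sorting candidate mount points by path depth produces that order automatically. A sketch over a short sample list (on the live system you would feed it the real mount points under /oracle/WPJ instead):

```shell
# Deepest paths first, so children unmount before /oracle/WPJ itself.
ordered=$(printf '%s\n' /oracle/WPJ /oracle/WPJ/sapdata1 /oracle/WPJ/11204 |
    awk -F/ '{ print NF, $0 }' |   # prefix each path with its component count
    sort -rn |                     # deepest first
    cut -d' ' -f2-)                # drop the count again
echo "$ordered"
```

Piping the result through `while read -r mp; do umount "$mp"; done` then replays the manual sequence.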

We are seeing a similar issue:

Container Linux: 1745.7.0
Kernel: 4.14.48-coreos-r2
Kubernetes: v1.11.0
Cloud env: AWS

Jul 23 03:34:31 ip-10-66-21-90 kernel: oom_reaper: reaped process 29050 (mongod), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
Jul 23 04:46:57 ip-10-66-21-90 kernel: ftdc invoked oom-killer: gfp_mask=0x14000c0(GFP_KERNEL), nodemask=(null),  order=0, oom_score_adj=998
Jul 23 04:46:57 ip-10-66-21-90 kernel: ftdc cpuset=37650b2ef9950343912e773ef083107c7b81ffdf67e628164d14005ec21f72bc mems_allowed=0
Jul 23 04:46:57 ip-10-66-21-90 kernel: CPU: 4 PID: 28956 Comm: ftdc Not tainted 4.14.48-coreos-r2 #1
Jul 23 04:46:57 ip-10-66-21-90 kernel: Hardware name: Xen HVM domU, BIOS 4.2.amazon 08/24/2006
Jul 23 04:46:57 ip-10-66-21-90 kernel: Call Trace:
Jul 23 04:46:57 ip-10-66-21-90 kernel:  dump_stack+0x5c/0x85
Jul 23 04:46:57 ip-10-66-21-90 kernel:  dump_header+0x94/0x229
Jul 23 04:46:57 ip-10-66-21-90 kernel:  oom_kill_process+0x213/0x410
Jul 23 04:46:57 ip-10-66-21-90 kernel:  out_of_memory+0x2ab/0x4c0
Jul 23 04:46:57 ip-10-66-21-90 kernel:  mem_cgroup_out_of_memory+0x49/0x80
Jul 23 04:46:57 ip-10-66-21-90 kernel:  mem_cgroup_oom_synchronize+0x2ed/0x330
Jul 23 04:46:57 ip-10-66-21-90 kernel:  ? mem_cgroup_css_online+0x30/0x30
Jul 23 04:46:57 ip-10-66-21-90 kernel:  pagefault_out_of_memory+0x32/0x77
Jul 23 04:46:57 ip-10-66-21-90 kernel:  __do_page_fault+0x4b3/0x4c0
Jul 23 04:46:57 ip-10-66-21-90 kernel:  ? page_fault+0x2f/0x50
Jul 23 04:46:57 ip-10-66-21-90 kernel:  page_fault+0x45/0x50
Jul 23 04:46:57 ip-10-66-21-90 kernel: RIP: 9000:0x49be8
Jul 23 04:46:57 ip-10-66-21-90 kernel: RSP: 1937d000:00007f7040aae690 EFLAGS: 00010008
Jul 23 04:46:57 ip-10-66-21-90 kernel: Task in /kubepods/burstable/podaac08eb8-8901-11e8-854f-0688fd98c98a/37650b2ef9950343912e773ef083107c7b81ffdf67e628164d14005ec21f72bc killed as a result of limit of /kubepods/burstable/podaac08eb8-8901-11e8-854f-0688fd98c98a/37650b2ef9950343912e773ef083107c7b81ffdf67e628164d14005ec21f72bc
Jul 23 04:46:57 ip-10-66-21-90 kernel: memory: usage 1024000kB, limit 1024000kB, failcnt 0
Jul 23 04:46:57 ip-10-66-21-90 kernel: memory+swap: usage 1024000kB, limit 1024000kB, failcnt 24580
Jul 23 04:46:57 ip-10-66-21-90 kernel: kmem: usage 6860kB, limit 9007199254740988kB, failcnt 0
Jul 23 04:46:57 ip-10-66-21-90 kernel: Memory cgroup stats for /kubepods/burstable/podaac08eb8-8901-11e8-854f-0688fd98c98a/37650b2ef9950343912e773ef083107c7b81ffdf67e628164d14005ec21f72bc: cache:0KB rss:1017140KB rss_huge:120832KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:1017140KB inactive_file:0KB active_file:0KB
Jul 23 04:46:57 ip-10-66-21-90 kernel: [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
Jul 23 04:46:57 ip-10-66-21-90 kernel: [28863]     0 28863   632640   253284     623       5        0           998 mongod
Jul 23 04:46:57 ip-10-66-21-90 kernel: Memory cgroup out of memory: Kill process 28863 (mongod) score 1989 or sacrifice child
Jul 23 04:46:57 ip-10-66-21-90 kernel: Killed process 28863 (mongod) total-vm:2530560kB, anon-rss:979668kB, file-rss:33468kB, shmem-rss:0kB
Jul 23 04:46:57 ip-10-66-21-90 kernel: oom_reaper: reaped process 28863 (mongod), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
Jul 23 05:52:55 ip-10-66-21-90 kernel: replication-46 invoked oom-killer: gfp_mask=0x14000c0(GFP_KERNEL), nodemask=(null),  order=0, oom_score_adj=998
Jul 23 05:52:55 ip-10-66-21-90 kernel: replication-46 cpuset=0e36987ee58480b66c889b5aa51c86362615dcfba864fd1147284109658ec321 mems_allowed=0
Jul 23 05:52:55 ip-10-66-21-90 kernel: CPU: 3 PID: 19253 Comm: replication-46 Not tainted 4.14.48-coreos-r2 #1
Jul 23 05:52:55 ip-10-66-21-90 kernel: Hardware name: Xen HVM domU, BIOS 4.2.amazon 08/24/2006
Jul 23 05:52:55 ip-10-66-21-90 kernel: Call Trace:
Jul 23 05:52:55 ip-10-66-21-90 kernel:  dump_stack+0x5c/0x85
Jul 23 05:52:55 ip-10-66-21-90 kernel:  dump_header+0x94/0x229
Jul 23 05:52:55 ip-10-66-21-90 kernel:  oom_kill_process+0x213/0x410
Jul 23 05:52:55 ip-10-66-21-90 kernel:  out_of_memory+0x2ab/0x4c0
Jul 23 05:52:55 ip-10-66-21-90 kernel:  mem_cgroup_out_of_memory+0x49/0x80
Jul 23 05:52:55 ip-10-66-21-90 kernel:  mem_cgroup_oom_synchronize+0x2ed/0x330
Jul 23 05:52:55 ip-10-66-21-90 kernel:  ? mem_cgroup_css_online+0x30/0x30
Jul 23 05:52:55 ip-10-66-21-90 kernel:  pagefault_out_of_memory+0x32/0x77
Jul 23 05:52:55 ip-10-66-21-90 kernel:  __do_page_fault+0x4b3/0x4c0
Jul 23 05:52:55 ip-10-66-21-90 kernel:  ? page_fault+0x2f/0x50
Jul 23 05:52:55 ip-10-66-21-90 kernel:  page_fault+0x45/0x50
Jul 23 05:52:55 ip-10-66-21-90 kernel: RIP: 2f41e80:0x5624022029a0
Jul 23 05:52:55 ip-10-66-21-90 kernel: RSP: 3e1c1000:00007f1e6dd030a0 EFLAGS: 5624021fa4a0
Jul 23 05:52:55 ip-10-66-21-90 kernel: Task in /kubepods/burstable/podaac08eb8-8901-11e8-854f-0688fd98c98a/0e36987ee58480b66c889b5aa51c86362615dcfba864fd1147284109658ec321 killed as a result of limit of /kubepods/burstable/podaac08eb8-8901-11e8-854f-0688fd98c98a/0e36987ee58480b66c889b5aa51c86362615dcfba864fd1147284109658ec321
Jul 23 05:52:55 ip-10-66-21-90 kernel: memory: usage 1024000kB, limit 1024000kB, failcnt 0
Jul 23 05:52:55 ip-10-66-21-90 kernel: memory+swap: usage 1024000kB, limit 1024000kB, failcnt 24700
Jul 23 05:52:55 ip-10-66-21-90 kernel: kmem: usage 6748kB, limit 9007199254740988kB, failcnt 0
Jul 23 05:52:55 ip-10-66-21-90 kernel: Memory cgroup stats for /kubepods/burstable/podaac08eb8-8901-11e8-854f-0688fd98c98a/0e36987ee58480b66c889b5aa51c86362615dcfba864fd1147284109658ec321: cache:12KB rss:1017240KB rss_huge:247808KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:1017232KB inactive_file:12KB active_file:0K
Jul 23 05:52:55 ip-10-66-21-90 kernel: [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
Jul 23 05:52:55 ip-10-66-21-90 kernel: [21851]     0 21851   585197   235279     581       5        0           998 mongod
Jul 23 05:52:55 ip-10-66-21-90 kernel: Memory cgroup out of memory: Kill process 21851 (mongod) score 1919 or sacrifice child
Jul 23 05:52:56 ip-10-66-21-90 kernel: Killed process 21851 (mongod) total-vm:2340788kB, anon-rss:908012kB, file-rss:33104kB, shmem-rss:0kB
Jul 23 05:52:56 ip-10-66-21-90 kernel: oom_reaper: reaped process 21851 (mongod), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
Jul 23 06:04:08 ip-10-66-21-90 kernel: EXT4-fs error (device xvda9): ext4_lookup:1585: inode #4188273: comm du: deleted inode referenced: 4195542
Jul 23 06:04:08 ip-10-66-21-90 kernel: Aborting journal on device xvda9-8.
Jul 23 06:04:08 ip-10-66-21-90 kernel: EXT4-fs (xvda9): Remounting filesystem read-only


I got the following error while booting up my Ubuntu 15.10 Desktop

mason

asked Mar 24, 2021


On some servers I use rsnapshot as a backup method. It’s fast, easy to manage and reliable. But a backup a few days ago returned the following error in the daily backup log:

echo 5190 > /var/run/rsnapshot.pid

/bin/rm -rf /backup/rsnapshot/daily.9/

/bin/rm: cannot remove `/backup/rsnapshot/daily.9/localhost/home/mail/web98p1/Maildir/cur/1359872706,S=12695,W=12855:2,S': Input/output error

/bin/rm: cannot remove `/backup/rsnapshot/daily.9/localhost/home/mail/web98p8/Maildir/cur/1360095843,S=4225,W=4321:2,S': Input/output error

/bin/rm: cannot remove `/backup/rsnapshot/daily.9/localhost/var/www/web136/files/g2data/cache/entity/3/2': Input/output error

----------------------------------------------------------------------------

rsnapshot encountered an error! The program was invoked with these options:

/usr/bin/rsnapshot daily

[Flashback] This particular server suffered a disk defect in the past days, and the RAID recovery had trouble resynchronizing. I removed the defective disk from the RAID array and asked the data-center staff to replace it. And here comes the problem: they made a big mistake. They ran a short SMART test, which reported no errors, decided the disk was good, and rebooted the server. As the server booted, the kernel recognized two disks, each with a RAID configuration on it. But guess what? mdraid told the kernel that these disks are not in the same RAID array (because I had previously removed the defective disk, remember?).

So by accident, the kernel took the defective disk, created a weird new RAID array (called md127), and mounted the filesystems on it, while the good disk was just mounted as a read-only block device. The server ran like this for about a day until I realized what had happened. I then hoped I could at least resynchronize the defective disk with the good one, so the newer data would be preserved. But due to too many I/O errors from the bad disk (because the disk is bad, told ya!!) the RAID recovery failed at around 20%. So some data was resynchronized, some was not, and some probably even contains invalid data, as the disk was dying. After a lot of nightly effort, and thanks to the help of a very good friend, I could restore the server to a more or less stable state. But one question remained open: what did the failed RAID recovery do to the good disk? [/Flashback]

Back to present. My guess is that the input/output errors on that filesystem originate from the failed raid rebuild where some corrupt data might have come from the defect disk. I tried to manually remove the daily.9 folder to see what would happen:

rm -r daily.9

rm: cannot remove `daily.9/localhost/home/mail/web98p1/Maildir/cur/1359872706,S=12695,W=12855:2,S': Input/output error

rm: cannot remove `daily.9/localhost/home/mail/web98p8/Maildir/cur/1360095843,S=4225,W=4321:2,S': Input/output error

rm: cannot remove `daily.9/localhost/var/www/web136/files/g2data/cache/entity/3/2': Input/output error

Same issue. Let’s see if dmesg tells us something useful:

dmesg

[116836.420063] EXT4-fs error (device dm-5): ext4_lookup: deleted inode referenced: 24274320

[116842.656399] EXT4-fs error (device dm-5): ext4_lookup: deleted inode referenced: 24400033

[116842.574064] EXT4-fs error (device dm-5): ext4_lookup: deleted inode referenced: 24273584

The «fs error» tells it all: there are some issues on that file system. Next step: a file system check.

As this is the backup file system, separately mounted, I can unmount it and make the file system check while the server continues to do its job.

umount /backup

fsck.ext4 /dev/mapper/vg0-backup

e2fsck 1.41.12 (17-May-2010)

/dev/mapper/vg0-backup contains a file system with errors, check forced.

Pass 1: Checking inodes, blocks, and sizes

Pass 2: Checking directory structure

Entry '1360095843,S=4225,W=4321:2,S' in /rsnapshot/daily.9/localhost/home/mail/web98p8/Maildir/cur (8674221) has deleted/unused inode 24274320.  Clear? yes

Entry '2' in /rsnapshot/daily.9/localhost/var/www/web136/files/g2data/cache/entity/3 (24400027) has deleted/unused inode 24400033.  Clear? yes

Entry '1359872706,S=12695,W=12855:2,S' in /rsnapshot/daily.9/localhost/home/mail/web98p1/Maildir/cur (8671256) has deleted/unused inode 24273584.  Clear? yes

Heyyy.. these entries look familiar. Don’t they? So these files were referenced in an inode which does not exist anymore. To my luck this is an older backup and I can permit myself to clear the inodes from the file system. Well… what choice would I have anyway. The fsck continues:

Pass 3: Checking directory connectivity

Pass 4: Checking reference counts

Inode 939825 ref count is 10, should be 9.  Fix? yes

Inode 939826 ref count is 10, should be 9.  Fix? yes

Inode 939827 ref count is 10, should be 9.  Fix? yes

Inode 939843 ref count is 10, should be 9.  Fix? yes

On Pass 4 there came a lot of Inode ref count fix suggestions. Inode ref counts are nothing more than a count of hard links to a particular file. rsnapshot users know that rsnapshot works with hard links to save disk space with daily backups.
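That relationship is easy to see with a scratch file: the sketch below creates one extra hard link, the way rsnapshot's daily.N trees share unchanged files, and reads the link count back with GNU `stat` (the file names are made up for the demo):

```shell
# An inode's reference count is the number of directory entries (hard links)
# pointing at it.
tmp=$(mktemp -d)
echo data > "$tmp/file.daily0"
ln "$tmp/file.daily0" "$tmp/file.daily1"   # second hard link to the same inode
links=$(stat -c %h "$tmp/file.daily0")     # GNU coreutils: print the link count
echo "$links"                              # 2
rm -r "$tmp"
```

Each `rm -rf daily.9` drops one link per file, which is exactly the count fsck is repairing here.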

Pass 5: Checking group summary information

Block bitmap differences:  -(97030728--97030731) -(97033838--97033839) -97529271
Fix? yes

Free blocks count wrong for group #2961 (3626, counted=3632).
Fix? yes

Free blocks count wrong for group #2976 (8930, counted=8931).
Fix? yes

Free blocks count wrong (50341877, counted=50341884).
Fix? yes

Inode bitmap differences:  -24273584 -24274320 -24400033
Fix? yes

Free inodes count wrong for group #2963 (2017, counted=2019).
Fix? yes

Free inodes count wrong for group #2978 (3021, counted=3022).
Fix? yes

Directories count wrong for group #2978 (4051, counted=4050).
Fix? yes

Free inodes count wrong (26902779, counted=26902782).
Fix? yes



/dev/mapper/vg0-backup: ***** FILE SYSTEM WAS MODIFIED *****

/dev/mapper/vg0-backup: 5865218/32768000 files (0.1% non-contiguous), 80730116/131072000 blocks

Pass 5 fixed the count of free inodes and free blocks on the file system. Must also be an artifact from the failed raid recovery. That was the end of the fsck.

Another check to see if everything is alright now:

fsck.ext4 /dev/mapper/vg0-backup

e2fsck 1.41.12 (17-May-2010)

/dev/mapper/vg0-backup: clean, 5865218/32768000 files, 80730116/131072000 blocks

So the file system should now be clean. Falsely referenced inodes were deleted and rsnapshot should do its job correctly now. Let me check the rsnapshot logs:

echo 9385 > /var/run/rsnapshot.pid

/bin/rm -rf /backup/rsnapshot/daily.9/

mv /backup/rsnapshot/daily.8/ /backup/rsnapshot/daily.9/

[…]

Yep, rsnapshot continues its fine job once more. Over and out.

Add a comment

Show form to leave a comment

Comments (newest first)

Jake wrote on Nov 27th, 2015:

Thank you for this well written walk through!

