My system reported an I/O error when I tried to run ls on a mounted hard disk.
I am using:
hadoop@hbase1:/hddata$ uname -a
Linux hbase1 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14 16:19:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
hadoop@hbase1:/hddata$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/hbase2--vg-root 468028968 2715496 441532304 1% /
udev 6081916 4 6081912 1% /dev
tmpfs 2436652 336 2436316 1% /run
none 5120 0 5120 0% /run/lock
none 6091620 0 6091620 0% /run/shm
/dev/sda1 1922727280 867279740 957771940 48% /hddata
/dev/sdb1 233191 27854 192896 13% /boot
10.18.103.101:/data/marketdata 1883265024 1644255232 143344640 92% /srv/data/marketdatah
The last several lines of dmesg:
hadoop@hbase1:/hddata$ dmesg | tail
[316263.280056] EXT4-fs (sda1): previous I/O error to superblock detected
[316263.281326] sd 0:0:0:0: [sda] Unhandled error code
[316263.281329] sd 0:0:0:0: [sda]
[316263.281330] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[316263.281332] sd 0:0:0:0: [sda] CDB:
[316263.281334] Write(10): 2a 00 00 00 00 3f 00 00 08 00
[316263.281342] end_request: I/O error, dev sda, sector 63
[316263.282584] Buffer I/O error on device sda1, logical block 0
[316263.283799] lost page write due to I/O error on sda1
[316263.283842] EXT4-fs error (device sda1): ext4_find_entry:1270: inode #2: comm bash: reading directory lblock 0
asked Sep 16, 2013 at 2:56
Buffer I/O error on device sda1, logical block 0
This sort of thing is indicative of an impending hardware failure. I’d do what you can to back up everything you need to somewhere else and replace the drive before it fails irreparably.
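If ordinary copying already throws I/O errors, an imaging tool that tolerates bad sectors is usually the safer first step. A minimal sketch with GNU ddrescue (the device and destination paths are assumptions; adjust them to your system):
sudo apt-get install gddrescue      # Debian/Ubuntu package name; the binary is called ddrescue
sudo umount /dev/sda1               # image the partition unmounted, if possible
sudo ddrescue -n /dev/sda1 /mnt/backup/sda1.img /mnt/backup/sda1.map    # first pass, skip hard-to-read areas
sudo ddrescue -r3 /dev/sda1 /mnt/backup/sda1.img /mnt/backup/sda1.map   # retry only the remaining bad areas, up to 3 times
The map file records what has already been recovered, so the second pass revisits only the sectors that failed.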
answered Sep 16, 2013 at 3:19
Contents
- "ls: reading directory ‘.’: Input/output error" only for 1 folder
- 1 Answer
- "Input/output error" when accessing a directory
- 9 Answers
- Can be fixed on OS X with Paragon NTFS
- UPDATE
- ls: reading directory .: Input/output error #367
- Comments
- Error in brand new External Hard Drive: "ls: reading directory ‘.’: Input/output error"
- 1 Answer
- ls: reading directory .: Input/output error on top level directory #197
- Comments
"ls: reading directory ‘.’: Input/output error" only for 1 folder
I am using Ubuntu 21.04, Gnome 3.38.5.
Just one folder gives me this error:
The rest of the hard disk works fine. It happened while the external hard disk was moved during a transfer, and obviously the transfer stopped. I am also getting a
Now I can't see any files inside it, and even though I was copying (not moving), the files are nowhere.
I have checked these resources; none of them gives me a solution:
Edit: added extra info
1 Answer
I got the following error trying to list (ls) a folder/directory on an external hard drive using Ubuntu 22.04 (jammy):
I powered down the laptop during a file transfer to the external hard drive folder/directory. To fix it:
- Connect the external hard drive to a computer with Windows 10 or higher (I used Windows 11).
- Go to This PC > select the external hard drive > right-click and select Properties > select Tools and run Check (chkdsk); a command-line equivalent is sketched after this list.
- The utility will scan and repair the external hard drive.
- This will result in a hidden protected folder/directory named found.000.
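If you prefer the command line, the same check can be run from an elevated Command Prompt. A minimal sketch, assuming the external drive shows up as E::
chkdsk E: /f      # fix filesystem errors on the external drive
chkdsk E: /f /r   # optionally also locate bad sectors and recover readable data (much slower)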
To view the hidden folder/directory, follow the steps below:
- Press Windows Key.
- Type File Explorer Options then Enter.
- Select View Tab.
- Uncheck Hide protected operating system files (Recommended).
- A prompt will ask whether you want to apply the change; choose Yes to continue.
- Then click Apply.
To access a found.000 folder that is locked/protected:
- Select the folder/directory and Go to folder properties
- Security
- Remove all permissions
- Click Enable Inheritance
- Click Apply
Now open the found.000 folder/directory; there should be a folder named dir0000.chk with your folder/directory contents.
"Input/output error" when accessing a directory
I want to list and remove the contents of a directory on a removable hard drive, but I get an "Input/output error":
I was wondering what the problem is.
How can I recover or remove the directory pic and all of its content?
My OS is Ubuntu 12.04, and the removable hard drive has an NTFS filesystem. Other directories on the removable hard drive, neither containing nor inside pic, work fine.
Last part of output of dmesg after I tried to list the content of the directory:
9 Answers
Input/Output errors during filesystem access attempts generally mean hardware issues.
Type dmesg and check the last few lines of output. If the disk or the connection to it is failing, it'll be noted there.
EDIT: Are you mounting it via ntfs or ntfs-3g? As I recall, the legacy ntfs driver had no stable write support and was largely abandoned once it turned out ntfs-3g was significantly more stable and secure.
As Sadhur states, this is probably caused by disk hardware issues, and the dmesg output is the right place to check this.
You can issue a surface scan of your disk from Linux: /sbin/badblocks /dev/sda.
Check the manual page for more thorough tests and basic fixes (block relocation). This is all filesystem-agnostic, so it is safe even with an NTFS filesystem, as it operates at the 'disk surface' level.
I personally set this up to run on a monthly basis from cron. Of course, you need to check that you receive the cron mails in your mailbox (which is often not the case by default). These mails end up in /var/mail/$USER or similar.
I created /etc/cron.d/badblocks:
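The contents of that cron file did not survive in this copy; a minimal sketch of what such an entry might look like (device name, schedule and mail recipient are assumptions):
# /etc/cron.d/badblocks - read-only surface scan on the 1st of every month at 03:00
MAILTO=root
0 3 1 * * root /sbin/badblocks /dev/sda
Because badblocks prints only the blocks it could not read, cron sends a mail only when something is actually wrong.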
Your filesystem is damaged. For NTFS volumes you should run chkdsk under Windows, but it's nearly impossible to recover; sometimes you might need to format the disk.
A solution that works for me is to downgrade the ntfs-3g version from the 2014 release to the 2012 release. This should solve your ntfs partition access problem. In the long run this is not a solution because eventually you will need to run the latest release.
Nobody mentioned what to do if Linux tools are not working and only a Mac, but not Windows, is available.
Can be fixed on OS X with Paragon NTFS
In my case gparted said to go find a Windows PC which was nowhere to be found. But a Mac was around, for which this great piece of software is available. Installed the trial version, performed verify, then repair — and voilà!
I just wanted to add my solution to this thread for the benefit of others. I did some work on my system when my power supply failed, and I must have reconnected the SATA cables in the wrong order, because when I switched them over everything worked again. I have no idea why the boot disk needed to be on a specific SATA port; anyway, this might be the answer for someone else.
I just wanted to share my experience: on FreeBSD 10.3, I mounted my external hard drive with
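The mount command itself did not survive in this copy; on FreeBSD an NTFS volume is normally mounted through the fusefs-ntfs port, roughly like this (the device node is a guess):
kldload fuse                  # load the FUSE kernel module (called fusefs on newer FreeBSD releases)
ntfs-3g /dev/da0s1 /mnt/usb   # mount the NTFS partition read-write with ntfs-3g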
Inside the hard drive, I did a mkdir to create a directory and then moved some files into it with the mv command. Finally I ran the following command:
Then I mounted the hard drive on a Linux machine with kernel 4.4.0-78-generic. Now when I list the contents of the hard drive, the directory created on FreeBSD, named Jeff, is shown like below:
Also, when trying to remove the Jeff directory, I receive the following error message:
I couldn't get rid of the Jeff directory on the Linux machine, so I went back to the FreeBSD machine and re-mounted the hard drive there. But the ls, cd and rm commands on FreeBSD generate the same Input/output error. It looks like there is a bug in the FreeBSD ntfs-3g package.
UPDATE
I moved all my data from the external hard drive to a Linux machine; of course, the corrupt file Jeff couldn't be moved due to the I/O error. Then I reformatted the external hard drive with both zeroing of the volume and bad sector checking, like this:
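The reformat command is missing from this copy; on Linux, a full (non-quick) NTFS format does both of those things, along these lines (device node assumed):
sudo mkfs.ntfs /dev/sdb1   # without -f/--fast, mkntfs zeroes the whole volume and checks it for bad sectors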
And then I moved all the data back to the external volume. This way I lost the corrupt file named Jeff; however, my external hard drive is now clean of any I/O errors.
ls: reading directory .: Input/output error #367
Action
I am trying to mount a drive on a Debian GNU/Linux 8 Google Compute Engine instance over an SSH terminal.
Problem
When the drive is mounted, running an ls command to list files returns
ls: reading directory .: Input/output error
Desired action
Mount the drive and list, read and write files in the mounted Google Cloud Storage bucket.
The sequence that I am trying to execute:
- gcloud auth activate-service-account --key-file
- gcsfuse -o allow_other --dir-mode 777 --file-mode 777 --key-file=/home/achadosepedidos/deep-equator-150913-848d545c0879.json achados-e-pedidos-bucket achados-e-pedidos-bucket
The output is:
Using mount point: /mnt/achados-e-pedidos-bucket
Opening GCS connection.
Opening bucket.
Mounting file system.
File system has been successfully mounted.
Then this error appears:
ls: reading directory .: Input/output error
I issued the gcsfuse command again with debug flags:
- gcsfuse -o allow_other --dir-mode 777 --file-mode 777 --foreground --debug_gcs --debug_http --debug_fuse --debug_invariants --key-file=/home/achadosepedidos/deep-equator-150913-848d545c0879.json achados-e-pedidos-bucket achados-e-pedidos-bucket
Using mount point: /mnt/achados-e-pedidos-bucket
Opening GCS connection.
Opening bucket.
gcs: Req 0x0: ListObjects() (69.453719ms): Get https://www.googleapis.com/storage/v1/b/achados-e-pedidos-bucket/o?maxResults=1&projection=full: DumpResponse: stream error: stream ID 1; PROTOCOL_ERROR
Mounting file system.
fuse_debug: Op 0x00000001 connection.go:395] OK ()
File system has been successfully mounted.
fuse_debug: Op 0x00000002 connection.go:395] OK ()
fuse_debug: Op 0x00000003 connection.go:395] StatObject(«pedidos») (9.509517ms): Get https://www.googleapis.com/storage/v1/b/achados-e-pedidos-bucket/o/pedidos?projection=full: stream error: stream ID 3; PROTOCOL_ERROR
gcs: Req 0x2: -> StatObject(«pedidos/») (9.447651ms): Get https://www.googleapis.com/storage/v1/b/achados-e-pedidos-bucket/o/pedidos%2F?projection=full: DumpResponse: stream error: stream ID 5; PROTOCOL_ERROR
fuse_debug: Op 0x00000003 connection.go:476] -> Error: "LookUpChild: statObjectMayNotExist: StatObject: Get https://www.googleapis.com/storage/v1/b/achados-e-pedidos-bucket/o/pedidos?projection=full: stream error: stream ID 3; PROTOCOL_ERROR"
fuse: 2019/12/12 00:47:28.017193 *fuseops.LookUpInodeOp error: LookUpChild: statObjectMayNotExist: StatObject: Get https://www.googleapis.com/storage/v1/b/achados-e-pedidos-bucket/o/pedidos?projection=full: stream error: stream ID 3; PROTOCOL_ERROR
fuse_debug: Op 0x00000004 connection.go:395]
Error in brand new External Hard Drive: "ls: reading directory ‘.’: Input/output error"
This morning I was downloading some files directly to an EXTERNAL hard drive (a 1 TB Seagate hard drive, bought a week ago) and I had to shut down the PC without unmounting the disk, while it was still downloading. Now I'm trying to list all the files within the folder where I was downloading, and I get this error when I run ls on it:
Apparently the folder seems to be empty, but it should have 200 GB of files (around 160,000 files). If I access it on Windows I can only see around 30,000 files, but I can't see anything on Ubuntu, where I was performing the download.
I have some other folders on the same disk and none of them gives me the same error.
I tried fsck and it doesn't seem to report any error:
Also, I know that there is a file called links.txt within this folder. I can access it by doing gedit links.txt and I can even copy it, but I can't see it on Windows or Ubuntu. So it seems that my files are there but there is some problem reading them. I think I could manually copy them if I could remember all of their names.
I installed the GNOME Tweaks tool, some GNOME extensions and some themes today. I don't know if this can be related to the failure. I'm running Ubuntu 17.10 and Windows 10 on separate disks.
Thank you so much in advance
1 Answer 1
This is old enough that the original poster has probably moved on, but maybe this will help someone. I had a very similar problem that I was able to 'fix'. It was on the internal hard drive of a fairly new machine, so I really didn't think it was hardware related. It's a Windows 10 PC with the Ubuntu Linux app. I had one folder (about 40 files, 3 GB) that became unreadable under Linux, with the same error: ls: reading directory ‘.’: Input/output error. I couldn't ls, mv, rm, rename, tar, etc. that one directory, but all else seemed fine. I wasn't sure how it got that way, but turning the machine off while things were open may have been it (I swap a large monitor between a couple of machines and don't always recall what state things are in for the monitor-less one). And, like the OP, I even had a file Notes.txt that I knew had been there and could no longer list, but I could open it in vi!
The original post did help me (thanks) because I hadn't yet thought of looking at the same place from the Windows File Explorer. When I did, everything I expected was there and looked okay, except I noticed one file not intentionally created by me: a stray lock file.
I had been massaging/extracting some highway lat/longs from a shapefile/database set with my own code. So I just deleted that stray lock file with Windows, and then the Linux side started working again just fine. I know this may be tougher to do with thousands of files, and it sounds like the OP may have had some file loss too, but perhaps there was some weird orphan file or file fragment like this that caused the problem.
ls: reading directory .: Input/output error on top level directory #197
root@myserver:/srv/s3# ls
ls: reading directory .: Input/output error
This is only for the first top-level directory. I can cd into the next directory down and ls there.
I have seen other bugs that are similar to this, but not exactly the same issue as mine. This is a stock Ubuntu 14.04 machine with s3fs version 1.78 with OpenSSL.
Any ideas what can be wrong with this top level dir?
@kabads Can you run s3fs with the -d -d -f -o f2 -o curldbg flags and report any errors?
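Put together, such an invocation would look roughly like this (bucket name and mount point are placeholders):
s3fs mybucket /srv/s3 -d -d -f -o f2 -o curldbg   # run in the foreground with the debug flags requested above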
I have redacted my bucket name:
* Connection #0 to host mybucket.s3-eu-west-1.amazonaws.com left intact
MultiRead(3481): failed a request(403: http://mybucket.s3-eu-west-1.amazonaws.com/%7EVersionArchive/)
multi_head_retry_callback(2170): Over retry count(3) limit(/~VersionArchive/).
readdir_multi_head(2246): error occuered in multi request(errno=-5).
s3fs_readdir(2301): readdir_multi_head returns error(-5).
unique: 24, error: -5 (Input/output error), outsize: 16
unique: 25, opcode: RELEASEDIR (29), nodeid: 1, insize: 64, pid: 0
unique: 25, success, outsize: 16
I've noticed that when there are only folders in the directory, ls works fine. If there are regular files inside the directory, I too get the error. I can touch a file and receive the error, but see that the file was uploaded to the S3 bucket on Amazon's side.
I'm sorry that this issue has been left open for a long period of time.
This problem occurs when s3fs times out while getting the object head information (multi-part head request).
So please use the latest code, which fixes the multi-part request problem, and try setting the "retries" parameter for s3fs.
And if you get the same problem, please run s3fs with the dbglevel (and curldbg) options.
These options help us solve this issue.
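A mount command with those options set might look like this (bucket name and mount point are placeholders):
s3fs mybucket /mnt/mybucket -f -o retries=5 -o dbglevel=info -o curldbg   # more retries, verbose s3fs and curl logging in the foreground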
Thanks in advance for your help.
I am having the exact same problem and I am using the latest code I believe:
s3fs --version
Amazon Simple Storage Service File System V1.79 with OpenSSL
fstab:
s3fs#my-bucket /mnt/my-folder fuse -d,iam_role=my-role,allow_other,curldbg,retries=5,endpoint=us-west-2,url=https://s3-us-west-2.amazonaws.com
And I am getting the following:
ls: reading directory .: Input/output error
The instance has the IAM role, and I am able to use "aws s3 --region us-west-2 ls s3://my-bucket/".
The strange thing is that in /var/log/messages I get the list of folders with the message:
s3fs: failed a request(403: https://s3-bucket-url/file-visible)..
s3fs: Over retry count(5) limit(/document.tgz).
s3fs: error occuered in multi request(errno=-5)
s3fs: readdir_multi_head returns error(-5).
So it is actually able to show the filename in /var/log/messages.
The strange thing is that I have another recently set-up instance that works with the same role; the failing instance was working some time ago. I am wondering if there is any service running that I need to restart.
I have unmounted and remounted, but the problem persists. I have also been using fusermount -u to unmount, with no difference.
First, I want to check whether s3fs failed to mount.
(Did the df command fail?)
From your results, s3fs seems to be unable to access the bucket because of S3 access control.
We need to know the reason for this failure; if you can, please set the dbglevel/curldbg options and get the detailed debug log.
We kept this issue open for a long time.
We have released new version 1.86, which fixes some problems (bugs).
Please use the latest version.
I will close this, but if the problem persists, please reopen or post a new issue.
Please see this question:
How to resolve vi: out of memory error in vmware esxi 6.5
Last night I had a problem restoring my vmdk file; now I have access to my hard drive, and I added this disk to my VPS, but I cannot read the information on it. Anyway, I can send you additional details of my problem.
Please see the commands below. I used some of these methods to try to resolve my problem, but so far I have not found a way to do it.
~#nano /etc/fstab
UUID=###951671### /DATA ufs defaults 1 2
mkdir /DATA
mount /DATA
~:/DATA# ls
ls: reading directory ‘.’: Input/output error
~:/DATA# mount -o rw,remount /dev/sdb1
mount: /DATA: mount point not mounted or bad option.
~# umount /DATA
~# e2fsck /DATA
OUTPUT
e2fsck 1.44.5 (15-Dec-2018)
e2fsck: Is a directory while trying to open /DATA
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
~# mount /DATA
mount: /DATA: WARNING: device write-protected, mounted read-only.
AND
umount /dev/sdb1
2. Here I have a problem:
fsck /dev/sdb1
mount /dev/sdb1
~# e2fsck /dev/sdb1
OUTPUT
e2fsck 1.44.5 (15-Dec-2018)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks…
e2fsck: Bad magic number in super-block while trying to open /dev/sdb1
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
/dev/sdb1 contains a ufs file system
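Since the partition really is UFS, e2fsck is not the matching tool for it; on Linux, UFS usually has to be mounted with an explicit ufstype option and, for most variants, read-only. A sketch (the ufs2 type is an assumption; it depends on which OS created the filesystem):
umount /DATA
mount -t ufs -o ro,ufstype=ufs2 /dev/sdb1 /DATA   # read-only; other ufstype values include 44bsd, sun, old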
Additional commands to give you more details:
~# ls -lat | grep DATA
drwxr-xr-x 5 root root 1024 May 26 11:37 DATA
~# df -h | grep sd
OUTPUT
/dev/sda1 276G 8.7G 254G 4% /
**/dev/sdb1 197G 102G 80G 57% /DATA**
~# lsblk -f | grep sd
OUTPUT
sda
├─sda1 ext4 ###-c0fb-42ce-9c78-### 253.2G 3% /
├─sda2
└─sda5 swap ###-27b4-485b-98b3-### [SWAP]
sdb
└─sdb1 ufs ###951671### 79.3G 52% /DATA
In any case, I would like to access this disk /dev/sdb1 in the /DATA folder.
How can I resolve this problem?
#2
List your NFS exports by posting the output of this command in the TrueNAS shell: "cat /etc/exports". That I/O error can mean you're trying to mount the wrong path. Using NFSv3 only in the TrueNAS NFS server settings is still OK; otherwise, choosing NFSv4 with NFSv3-style security is easier to use than NFSv4 alone.
The dataset you want to share must be created with a "generic" share type. You need to sort out the correct syntax for NFS mounts in Ubuntu; your last example muddles CIFS and NFS syntax.
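For reference, a plain NFS mount from the Ubuntu side looks roughly like this (the server address and dataset path are made-up examples):
sudo apt install nfs-common                                      # provides mount.nfs on Ubuntu
sudo mount -t nfs 192.168.1.50:/mnt/pool/dataset /mnt/dataset    # NFS uses server:/export, not //server/share or username= options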
#6
So it was effectively a wrong path causing the I/O error. Perhaps use an underscore instead of a space character. With the space in the path name, the server thinks the client is asking to mount an NFS export of /mnt/M2, which clearly does not exist.
I used the Generic share type on the datasets. On a side note, Windows was able to open up the pool itself and I can see all datasets as folders. Is that not possible on Unix?
Not with NFSv4. The "-alldirs" flag doesn't do what you probably think it does, as each dataset is a separate filesystem (see: https://www.freebsd.org/cgi/man.cgi?exports(5)). So if you had four datasets in a pool, each would have to have an NFS share reflected in the contents of the /etc/exports file, and each would require a separate mount in Ubuntu, as sketched below.
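Sketching the Ubuntu side of that (server name, pool and dataset names are made up), each dataset gets its own mount:
sudo mount -t nfs truenas.local:/mnt/tank/dataset1 /mnt/dataset1
sudo mount -t nfs truenas.local:/mnt/tank/dataset2 /mnt/dataset2
# ...one mount per exported dataset; mounting the pool path alone will not descend into the child datasets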