I have the following error with one of our web applications —
Query3 failed: Error writing file '/tmp/MY1fnqpm' (Errcode: 28) ... INSERT MailList... (removed the rest of the query for security reasons)
Any ideas — is this some hard disk space issue on my server?
user1981275
asked Sep 14, 2011 at 11:39
Use the perror command:
$ perror 28
OS error code 28: No space left on device
Unless error codes are different on your system, your file system is full.
answered Sep 14, 2011 at 11:42
Arnaud Le Blanc
We experienced a similar issue; the problem was that MySQL used the /tmp directory for its temporary files (the default configuration), and /tmp was located on its own partition, which had too little space for large MySQL queries.
For more details, take a look at this answer:
https://stackoverflow.com/a/3716778/994302
answered Jan 18, 2013 at 16:50
XSeryoga
I had the same problem, but disk space was okay (only 40% full).
The problem was inodes: I had too many small files and my inodes were exhausted.
You can check inode status with df -i
answered Jul 16, 2015 at 17:26
Turshija
The error means that you don't have enough space to create the temporary files MySQL needs.
The first thing you can try is to expand the size of your /tmp/ partition. If you are using LVM, check the lvextend command.
If you are not able to increase the size of the /tmp/ partition, you can change the MySQL configuration instead: edit the my.cnf file (typically /etc/mysql/my.cnf) and look for this line:
tmpdir = /tmp/
Change it to whatever you want (for example /var/tmp/). Just be sure the new directory has enough space, and give the mysql user write permission on it.
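If you move tmpdir, the new directory must exist and be writable by the mysql user before MySQL is restarted. A minimal sketch (the path /var/tmp/mysqltmp is just an example):

```shell
# Create an example tmpdir for MySQL; the path is illustrative only.
mkdir -p /var/tmp/mysqltmp
# In production you would typically run:
#   chown mysql:mysql /var/tmp/mysqltmp && chmod 750 /var/tmp/mysqltmp
# Here we make it world-writable with the sticky bit, like /tmp itself:
chmod 1777 /var/tmp/mysqltmp
```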
Hope this helps!
answered Jul 10, 2014 at 20:27
SlayerX
Run the following command:
du -sh /var/log/mysql
Perhaps the MySQL binary logs have filled the disk. If so, remove the old logs and restart the server. Also add this to my.cnf:
expire_logs_days = 3
answered Mar 29, 2013 at 18:08
Alex
This error occurs when you don't have enough space in the partition. MySQL usually uses /tmp on Linux servers. Some queries trigger it because the lookup either returns a lot of data, or sifts through a lot of data and creates big temporary files.
Edit your /etc/mysql/my.cnf
tmpdir = /your/new/dir
e.g.
tmpdir = /var/tmp
The new directory should have more space than /tmp, which is usually on its own small partition.
answered Feb 28, 2018 at 5:09
I had this same error and the problem was simply not enough space on my virtual machine. I deleted some unnecessary files and it started working again.
My disk space allocation looked something like this:
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 37G 37G 127M 100% /
...
answered Dec 27, 2018 at 5:41
For those like me who don't have a full disk or full inodes and who are using MySQL 8: take a look at your /var/lib/mysql directory and check whether there are some large binlog.XXXXXX files.
That was the case for me; it prevented a query from running to completion, because the query was generating a large binlog file.
I just had to turn off binlog generation by adding this parameter in /etc/mysql/mysql.conf.d/mysqld.cnf:
disable_log_bin
Then restart the mysql service:
service mysql restart
After that I didn't have the "No space left on device" error anymore!
answered Oct 6, 2022 at 12:27
You can also try using this line if the other doesn’t work:
du -sh /var/lib/mysql/database_Name
You may also want to check with your host and see how big they allow your databases to be.
answered Feb 4, 2015 at 23:46
For XAMPP users: in my experience, the problem was caused by a file named '0' located in the 'mysql' folder. It was huge (mine grew to about 256 GB). Removing it fixed the problem.
answered May 11, 2017 at 9:47
Sandro Rosa
Today I ran into the same problem. My solution:
1) Check inodes: df -i
I saw:
root@vm22433:/etc/mysql# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 124696 304 124392 1% /dev
tmpfs 127514 452 127062 1% /run
/dev/vda1 1969920 1969920 0 100% /
tmpfs 127514 1 127513 1% /dev/shm
tmpfs 127514 3 127511 1% /run/lock
tmpfs 127514 15 127499 1% /sys/fs/cgroup
tmpfs 127514 12 127502 1% /run/user/1002
2) I looked at which folders use the maximum number of inodes:
for i in /*; do echo $i; find $i |wc -l; done
Soon I found the /home/tomnolane/tmp folder, which contained a huge number of files.
3) I removed the /home/tomnolane/tmp folder.
PROFIT.
4) checked:
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 124696 304 124392 1% /dev
tmpfs 127514 454 127060 1% /run
/dev/vda1 1969920 450857 1519063 23% /
tmpfs 127514 1 127513 1% /dev/shm
tmpfs 127514 3 127511 1% /run/lock
tmpfs 127514 15 127499 1% /sys/fs/cgroup
tmpfs 127514 12 127502 1% /run/user/1002
It's OK.
5) Restarted the mysql service, and everything works!
answered Dec 1, 2018 at 22:05
I had the same problem, and after a little research I found that the snap directory was occupying most of the space. So I executed the following command to get rid of it:
sudo apt autoremove --purge snapd
After that, I ran the following command to get rid of the useless /dev/loop mounts:
sudo apt purge snapd ubuntu-core-launcher squashfs-tools
Then I ran the following command to restart mysql:
sudo service mysql restart
It worked for me!
answered Jan 22, 2022 at 22:15
No space left on device (28) is a common error in Linux servers.
As a Server Administration Service provider, we see this error very often in VPSs, Dedicated Servers, AWS cloud instances and more.
It could happen seemingly for no reason at all (like refreshing a website), or when updating data (like backup sync, database changes, etc.).
What is the error “No space left on device (28)”?
For Linux to be able to create a file, it needs two things:
- Enough space to write that file.
- A unique identification number called “inode” (much like a social security number).
Most server owners look at and free up the disk space to resolve this error.
But what many don't know is the secret "inode limit".
What is “inode limit”, you ask? OK, lean in.
Linux identifies each file with a unique “inode number”, much like a Social Security number. It assigns 1 inode number for 1 file.
But each server has only a limited set of inode numbers for each disk. When it uses up all the unique inode numbers in a disk, it can’t create a new file.
Quite confusingly, it shows the error "No space left on device (28)", and people go hunting for space usage issues.
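Since errno 28 can come from either limit, it helps to check space and inodes side by side (a sketch; replace / with the partition your service writes to):

```shell
# Byte usage: a full "Use%" column here means out of space.
df -h /
# Inode usage: a full "IUse%" column means out of inode numbers,
# even if df -h still shows plenty of free space.
df -i /
```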
Now that you know what to look for, let’s look at how to fix this issue.
In reality, there’s no single best way to resolve this error.
Our Dedicated Server Administrators maintain Linux servers of web hosts, app developers and other online service providers.
We’ve seen variations of this error in MySQL servers, PHP sites, Magento stores and even plain Linux servers.
Going over our support notes, we can say that an ideal solution that’s right for one server might cause unintended issues in another server.
So, before we go over the solutions, carefully study all options and choose the best for you.
Broadly, here are the ways in which we resolve “No space left on device (28)“:
- Deleting excess files.
- Increasing the space availability.
- Fixing service configuration.
Now let’s look at the details of each case.
Fixing Magento error session_start(): failed: No space left on device
Our Dedicated Server Administrators support several high traffic online shops that use Magento.
In a couple of these sites, we have seen the error “session_start(): failed: No space left on device” when the site traffic suddenly goes up during marketing campaigns, seasonal sales, etc.
That happens when Magento creates a lot of session and cache files to store visitor data and to speed up the site.
In all these cases, we've seen the inode count maxed out while there was still disk space.
Here, deleting the session or cache files might look like a good idea, but those files will come back in a few minutes.
So, as a permanent solution, we do these:
- Setup cache expiry and auto-cleaning – These servers use PHP cache and Magento cache, which create a lot of files during high-traffic hours. So, we configured cache expiry, and set up a script to clear all cache files older than 24 hours.
- Configure log rotation – Some logs grew to more than 50 GB, which posed a threat to the disk space. So, we configured the logrotate service to limit log size to 500 MB, and to delete all logs older than 1 week.
- Auto-clear session files – The session files were set to be auto-deleted with a custom script so that inode numbers are not used up.
- Audit disk usage periodically – On top of all this, our admins audit these servers every couple of weeks to make sure the systems are working properly, and if needed make corrections.
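The auto-clean step above can be sketched as a small helper run from cron; the Magento paths shown are placeholders, not taken from any particular store:

```shell
# clean_old_files: delete regular files older than 24 hours under the
# given directories, freeing both disk space and inodes.
clean_old_files() {
    find "$@" -type f -mtime +0 -delete 2>/dev/null || true
}

# Hypothetical Magento cache/session locations; adjust to your install.
clean_old_files /var/www/magento/var/cache /var/www/magento/var/session
```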
If you need help fixing this error in your Magento server, click here to talk to our Magento experts. We are online 24/7.
Resolving WordPress "Warning: session_start(): open(/tmp/mysite/tempfile) failed: No space left on device (28)"
WordPress is another popular web platform where we’ve seen this error.
When updating content or logging into the admin panel, a user sees the error:
Warning: session_start(): open(/tmp/mysite/tempfile) failed: No space left on device (28)
The most common reasons for this are:
- User’s space/inode quota exhaustion.
- Inode exhaustion in session directory.
- Space/Inode exhaustion in the server.
Fixing quota issues
If the site is hosted in a shared server or a VPS, there’ll be account default limits to storage space and inode counts.
In many cases, we’ve found large files (such as backup, videos, DB dumps, etc.) in the user’s home directory itself. But there are other locations that are not so obvious:
- Trash or Spam mail folders
- Catch-all mail accounts
- Web app log files (eg. WordPress error log)
- Old log files (eg. access_log.bak)
- Old uncompressed backups (from a previous site restore for eg.)
- Un-used web applications
The first step in such situations is to look at these locations, and remove all files that are not necessary. That’ll resolve the error in the website, but that is not enough.
An important part of our support philosophy is to implement fixes such that an issue won’t come back.
So, our server admins then dig in and figure out why the space exceeded in the first place.
Some common reasons and the permanent resolutions we implement are:
- Abandoned mailboxes – Unused mail accounts can easily become a spam dump, and eat up all the disk space. We set up disk usage alert mails to the site owner so that once any mail account exceeds the normal size, the account owner can investigate and delete unwanted mails or accounts.
- Unmaintained IMAP accounts – We've seen IMAP accounts with over 10 GB of mails in Trash. To avoid such issues, we set up fixed email quotas and alert mails so that users and site owners can take action before the quota runs out.
- Old applications and files – In some websites we’ve seen old applications that were once used or installed to test features. These apps not only waste disk space, but are also a security threat. In the sites we maintain for our customers, we prevent such issues through periodic website/server audits where we detect and remove unused files.
Fixing inode exhaustion in session directory
WordPress plugins store session files in the home directory by default.
But some servers might have this set to an external directory like /tmp or /var depending on the application used to setup PHP and WordPress.
Since session files aren’t too large or numerous, it’ll usually be something else that’s taking up space in those directories.
For instance, in one server, the session directory was set to /var/apps/users/path/to/tmpsession. The space in the /var directory was exhausted by discarded backup dumps.
In that particular server, the solution was to delete old backups, but it can be something else in other servers.
So, we follow a top-down approach by looking at the inode count for each main folder, and then drill down to the directory that consumes the most inodes.
An example of a command used to find inode usage.
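One such command, as a sketch (it counts the files under each top-level directory, since every file consumes an inode):

```shell
# For each top-level directory, count entries on the same filesystem
# and print the biggest inode consumers first.
for d in /*/; do
    printf '%s\t%s\n' "$(find "$d" -xdev 2>/dev/null | wc -l)" "$d"
done | sort -nr | head
```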
Fixing Space or Inode exhaustion in a server
This is a variation of the issue mentioned above.
Unoptimized backup processes or cache systems or log services can leave a lot of files lying around.
That’ll quickly consume the available space and inode quota.
So,
- We first sort the directories in a server based on their space and inode usage.
- Then we figure out which ones are not really needed, and delete the excess files.
- And last, we tweak the services that created those files to clear out the files after a fixed limit.
If your WordPress site or web application is currently showing this error, we can help you fix it. Click here to talk to our experts. We are online 24/7.
How to fix rsync: mkstemp “/path/to/file” no space left on device (28)
Many backup software use rsync tool to transfer backup files between servers.
We've seen many backup processes fail with the error rsync: mkstemp "/path/to/file" no space left on device (28).
Among the many reasons for this error, these are the most common:
- Double space issue – In its default setting, Rsync needs double the space to update a file. So, if rsync is trying to update a 20 GB file, it needs another 20 GB free space to hold the temporary file while the new version is being transferred from the source server. In some servers, this fails due to reaching space or quota limits.
- Quota exhaustion – In VPS servers there could be account level limits on number of inodes and space available. During transfer of large directories, this quota could get exhausted.
- Inode / Space limit on drive – Backup folders or database directories are sometimes mounted on a separate directory with limited space. So, when moving large archives, the space or inode limits can get exhausted.
To solve these issues, first we figure out exactly which limit is getting hit, and if there’s a way to implement an alternate solution.
For instance, if we find that rsync is trying to update a large file many GBs in size (eg. a database dump or backup archive), we use an option called --inplace in the rsync command.
This will skip creating the temporary file, and will just update those parts of the file that were changed. In this way we avoid the need to upgrade disk quota.
Fixing space or inode overage
By far, the most common cause for inode or space exhaustion is all the good space being taken up by files that the server owner doesn’t need.
It could be old uncompressed backups, cache files, spam files and more.
So, the first step in our troubleshooting is always to sort all folders based on their space and inode usage.
Once we know the top folders that contributed to the disk overage, we can then drill down to weed out those that are not needed.
Here’s an example of a command that lists folders by space usage.
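One such command, as a sketch (assumes GNU du; -x stays on one filesystem, -d1 limits the listing to top-level directories):

```shell
# List top-level directories by disk usage, largest first.
du -x -d1 -h / 2>/dev/null | sort -hr | head
```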
The second, and more important step in this resolution is to find out which process caused the junk file build-up and then reconfigure the service to automatically clean out old files.
Are you facing an rsync error right now? We can help you fix it in a few minutes. Click here to talk to our experts. We are online 24/7.
Fixing MySQL Errcode: 28 – No space left on device
MySQL servers sometimes run into this error when executing complex queries. An example is:
ERROR 3 (HY000) at line 1: Error writing file '/tmp/MY4Ei1vB' (Errcode: 28 - No space left on device)
When executing complex queries that merge several tables, MySQL builds temporary tables in the /tmp drive.
If the space or inodes are exhausted for any reason in these folders, MySQL will exit with this error.
The temporary drive can quickly fill up with cache files, session files or other temporary files that were never cleaned up.
To resolve and prevent this, we set up /tmp cleaning programs that keep junk files from piling up, and run them every time usage rises above 80%.
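A minimal sketch of such a cleaning program (the 80% threshold and one-day age are examples; df --output assumes GNU coreutils):

```shell
# If /tmp usage is above 80%, delete temp files untouched for over a day.
TMP_DIR=/tmp
usage=$(df --output=pcent "$TMP_DIR" | tail -1 | tr -dc '0-9')
if [ "$usage" -gt 80 ]; then
    find "$TMP_DIR" -xdev -type f -mtime +0 -delete 2>/dev/null || true
fi
```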
If you need help in resolving this error in your MySQL server, we can help you. Click here to talk to our experts. We are online 24/7.
How Bobcares helps prevent disk errors
All of what we have said here today is more "reactive" than "proactive".
We described how we would RECOVER a server once a service fails with this error.
Here at Bobcares, our aim is to PREVENT such errors from happening in the first place.
How do we do it? By being proactive.
We maintain production servers of web hosts, app developers, SaaS providers and other online businesses.
An important part in this maintenance service is Periodic Server Audits.
During audits, expert server administrators manually check every part of the server and fix everything that can affect the uptime of services.
Some of the checks to prevent disk errors are:
- Inode and Space usage – We check the available disk space and inodes in a server, and compare it with previous audit results. If we see an abnormal increase in the usage, we investigate in detail and regulate the service that’s causing the file growth.
- Temp folder auto-clearance – We set up scripts that keep the temp folders clean, and during the audits we make sure the program is working fine.
- Spam and catchall folders – We look for high-volume mail directories, and delete accounts that are no longer maintained. This has many times helped us prevent quota overage.
- Unused accounts deletion – Old user accounts that were either canceled or migrated out sometimes remain back in the server. We find and delete them.
- Old backups deletion – Sometimes people can leave uncompressed backups lying around. We detect such unused archives and delete them.
- Log file maintenance – We setup log rotation services that clears our old log files as it reaches a certain size limit. During our periodic audit we make sure these services are working well.
- Old packages deletion – Linux can leave old application installation packages even after an application is setup. We clear out old unused installation archives.
- Old application deletion – We scan for application folders that are no longer accessed, and delete unused apps and databases.
Of course, there are many more ways in which the disk space could be used up.
So, we customize the service for each of our customers, and make sure no part is left unchecked.
Over time this has helped us achieve close to 100% uptime for our customer servers.
Conclusion
No space left on device (28)
is a common error in Linux servers that is caused either due to lack of disk space or due to exhaustion of inodes. Today we’ve seen the various reasons for this error to occur while using Magento, WordPress, MySQL and Rsync.
Hi,
I’ve been running a script, inserting a large amount of data to several MySQL tables. Apparently, InnoDB crashed at some point (when ibdata1 reached 1.6 Gb). I had to restart the computer and when I tried to restart MariaDB service, I got the following errors:
Jun 11 08:58:02 honolulu.bi.technion.ac.il mysqld_safe[7970]: 150611 8:58:02 InnoDB: Operating system error number 5 in a file operation.
Jun 11 08:58:02 honolulu.bi.technion.ac.il mysqld_safe[7970]: InnoDB: Error number 5 means 'Input/output error'.
Jun 11 08:58:02 honolulu.bi.technion.ac.il mysqld_safe[7970]: InnoDB: Some operating system error numbers are described at
Jun 11 08:58:02 honolulu.bi.technion.ac.il mysqld_safe[7970]: InnoDB: http://dev.mysql.com/doc/refman/5.5/en/operating-system-error-codes.html
Jun 11 08:58:02 honolulu.bi.technion.ac.il mysqld_safe[7970]: InnoDB: File operation call: 'read'.
Jun 11 08:58:02 honolulu.bi.technion.ac.il mysqld_safe[7970]: InnoDB: Cannot continue operation.'
Jun 11 08:58:02 honolulu.bi.technion.ac.il systemd[1]: mariadb.service: main process exited, code=exited, status=1/FAILURE
Jun 11 08:58:02 honolulu.bi.technion.ac.il systemd[1]: mariadb.service: control process exited, code=exited status=1
Jun 11 08:58:02 honolulu.bi.technion.ac.il systemd[1]: Failed to start MariaDB database server.
Jun 11 08:58:02 honolulu.bi.technion.ac.il systemd[1]: Unit mariadb.service entered failed state.
I checked /var/log/mariadb/mariadb.log to find out what might have caused the crash:
140911 10:40:38 InnoDB: Error: Write to file ./ibdata1 failed at offset 0 220200960.
InnoDB: 1048576 bytes should have been written, only 0 were written.
InnoDB: Operating system error number 28.
InnoDB: Check that your OS and file system support files of this size.
InnoDB: Check also that the disk is not full or a disk quota exceeded.
InnoDB: Error number 28 means 'No space left on device'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/operating-system-error-codes.html
140911 10:40:38 InnoDB: Assertion failure in thread 139942049851136 in file os0file.c line 4382
InnoDB: Failing assertion: ret
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately a
150505 07:59:38 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
150505 07:59:38 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.SFCcCu' --pid-file='/var/lib/mysql/honolulu.bi.technion.ac.il-recover.pid'
150505 07:59:41 mysqld_safe WSREP: Recovered position 00000000-0000-0000-0000-000000000000:-1
150505 7:59:41 [Note] WSREP: wsrep_start_position var submitted: '00000000-0000-0000-0000-000000000000:-1'
150505 7:59:41 InnoDB: The InnoDB memory heap is disabled
150505 7:59:41 InnoDB: Mutexes and rw_locks use GCC atomic builtins
150505 7:59:41 InnoDB: Compressed tables use zlib 1.2.8
150505 7:59:41 InnoDB: Using Linux native AIO
150505 7:59:41 InnoDB: Initializing buffer pool, size = 128.0M
150505 7:59:41 InnoDB: Completed initialization of buffer pool
150505 7:59:41 InnoDB: highest supported file format is Barracuda.
150505 7:59:41 InnoDB: Waiting for the background threads to start
150505 7:59:42 Percona XtraDB (http://www.percona.com) 5.5.37-MariaDB-35.0 started; log sequence number 458740332
150505 7:59:42 [Note] Plugin 'FEEDBACK' is disabled.
150505 7:59:42 [Note] Server socket created on IP: '0.0.0.0'.
150505 7:59:42 [ERROR] mysqld: Table './mysql/user' is marked as crashed and should be repaired
150505 7:59:42 [Warning] Checking table: './mysql/user'
150505 7:59:42 [ERROR] mysql.user: 1 client is using or hasn't closed the table properly
150505 7:59:42 [ERROR] mysqld: Table './mysql/db' is marked as crashed and should be repaired
150505 7:59:42 [Warning] Checking table: './mysql/db'
150505 7:59:42 [ERROR] mysql.db: 1 client is using or hasn't closed the table properly
150505 7:59:42 [Note] Event Scheduler: Loaded 0 events
150505 7:59:42 [Note] WSREP: Read nil XID from storage engines, skipping position init
150505 7:59:42 [Note] WSREP: wsrep_load(): loading provider library 'none'
150505 7:59:42 [Note] [Debug] WSREP: dummy_init
150505 7:59:42 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.5.38-MariaDB-wsrep' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server, wsrep_25.10.r3997
150608 04:43:00 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
150608 04:43:00 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.7Yb3bw' --pid-file='/var/lib/mysql/honolulu.bi.technion.ac.il-recover.pid'
150608 04:43:13 mysqld_safe WSREP: Failed to recover position: '150608 4:43:01 InnoDB: The InnoDB memory heap is disabled
150608 4:43:01 InnoDB: Mutexes and rw_locks use GCC atomic builtins
150608 4:43:01 InnoDB: Compressed tables use zlib 1.2.8
150608 4:43:01 InnoDB: Using Linux native AIO
150608 4:43:01 InnoDB: Initializing buffer pool, size = 128.0M
150608 4:43:01 InnoDB: Completed initialization of buffer pool
150608 4:43:01 InnoDB: highest supported file format is Barracuda.
InnoDB: Log scan progressed past the checkpoint lsn 4676403586
150608 4:43:01 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Error: tried to read 1048576 bytes at offset 0 1048576.
InnoDB: Was only able to read 655360.
150608 4:43:13 InnoDB: Operating system error number 5 in a file operation.
InnoDB: Error number 5 means 'Input/output error'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/operating-system-error-codes.html
InnoDB: File operation call: 'read'.
InnoDB: Cannot continue operation.'
I have to say that there shouldn’t be any disk space problem:
df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda5                819G   27G  750G   4% /
devtmpfs                 7.9G     0  7.9G   0% /dev
tmpfs                    7.9G     0  7.9G   0% /dev/shm
tmpfs                    7.9G  8.9M  7.9G   1% /run
tmpfs                    7.9G     0  7.9G   0% /sys/fs/cgroup
tmpfs                    7.9G   12K  7.9G   1% /tmp
/dev/sda3                1.9T  123M  1.8T   1% /home
/dev/sda2                477M  112M  336M  25% /boot
The MariaDB is currently installed in default configuration.
I really need help on this issue…
1. What might have caused the problem? Do I have to configure some InnoDB parameters? Maybe the ibdata file size is a problem?
2. How to recover my data and restart the MariaDB service? (I tried all kind of tips I found in other forums, but I couldn’t backup the ibdata1 file or start InnoDB in recovery mode).
Answer
Note these lines in the log:
InnoDB: Operating system error number 28.
InnoDB: Check that your OS and file system support files of this size.
InnoDB: Check also that the disk is not full or a disk quota exceeded.
InnoDB: Error number 28 means 'No space left on device'.
Maybe it is, indeed, a disk quota. Or maybe the disk was full, but by the time you ran your df -h some space had been freed. Either way, error 28 can pretty much mean only one thing: the disk was full.