Summary: This blog discusses the MySQL error ‘Incorrect Key File for Table’ in detail. It describes the possible causes behind the error and solutions to fix them. To fix the error caused by corrupted tables, try using Stellar Repair for MySQL software. Use the software’s demo version to repair the database and its corrupt tables, and preview the recoverable tables to check data integrity.
Contents
- Why Does MySQL Return ‘Incorrect Key File for Table’ Error When Running a Query?
- Possible Causes Behind MySQL Error ‘Incorrect Key File for Table’ and Their Solutions
- Conclusion
You may encounter the MySQL error ‘Incorrect Key File for Table’ when running a large MySQL query. For instance, some users have reported receiving the error while executing a query to fetch thousands of records from a table using more than one JOIN.
Why Does MySQL Return ‘Incorrect Key File for Table’ Error When Running a Query?
The complete MySQL error message “Incorrect key file for table ‘FILEPATH.MYI’; try to repair it” suggests that you can try to repair the key file. Looking at the FILEPATH in the message can help you find more details about the error:
- If the FILEPATH looks similar to ‘/tmp/#sql_xxxx.MYI’, it means that your query is returning a large result set. The /tmp folder, which is used for storing a temporary table holding the large result set, is too small for the query. Essentially, there is not sufficient space in your /tmp folder for the temporary table.
- If your FILEPATH contains the database table’s name, it’s likely that your table is corrupt.
Possible Causes Behind MySQL Error ‘Incorrect Key File for Table’ and Their Solutions
You may be getting the error because of a disk space issue or corruption in a MySQL table. Read on to learn how you can resolve these two issues.
Cause 1 – Insufficient Disk Space in the Temporary Folder (/tmp)
Note: You won’t get the error when executing individual queries, but you may encounter it when running large queries simultaneously on the same MySQL Server.
The primary cause of the MySQL ‘Incorrect key file’ error is that your query is fetching more data than your /tmp folder (i.e., the folder used for storing temporary tables) can hold.
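To gauge how often this is happening, you can compare MySQL’s standard temp-table status counters; a high ratio of disk-based temp tables to total temp tables suggests queries are regularly spilling to disk:
-- Compare disk-based temp tables against total temp tables created
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';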
Solution – Change the Location of the ‘/tmp’ Folder
First, determine if the disk partition is full by looking at your /tmp folder size. Run the df command with the ‘-h’ option to check disk usage of the file system in human-readable form:
df -h
If the partition is full, then change the location of the /tmp folder to a disk with ample free space. Change the file location in the MySQL configuration file (.cnf).
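To confirm where MySQL currently writes its temporary tables, you can query the server variable first. Note that tmpdir is not dynamic, so changing it means editing the [mysqld] section of the configuration file and restarting mysqld:
-- Show the directory MySQL uses for temporary tables
SHOW VARIABLES LIKE 'tmpdir';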
Sometimes, the /tmp folder is big enough, but it might still get full. In that case, you will need to clean up some space in the folder manually.
Cause 2 – Corrupted Keys or Table
Table corruption is another reason that may lead to the ‘Incorrect Key File for Table’ error.
Solution – Re-create or Repair the Corrupted Table
To resolve the issue, drop and re-create the corrupt table, and then re-insert the data.
For example, in the following queries, the DELETE statement removes the affected rows, and the OPTIMIZE TABLE command is run afterward to free disk space.
mysql> DELETE FROM tbl_name WHERE id < 200000;
mysql> OPTIMIZE TABLE tbl_name;
If you cannot re-insert the data, try repairing the table. For MyISAM tables, you can use the REPAIR TABLE command to fix the corruption:
REPAIR TABLE tbl_name USE_FRM;
This command works only on MyISAM tables. For repairing tables that use the InnoDB storage engine, check out the best ways to repair InnoDB table corruption in MySQL. However, running a repair command for each of the tables in the query can be a time-consuming process.
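Also, before running repairs, it can help to first confirm which tables are actually flagged as corrupt. A minimal check, with tbl_name as a placeholder (CHECK TABLE works for both MyISAM and InnoDB):
-- 'OK' in the Msg_text column means no corruption was detected
CHECK TABLE tbl_name;
-- For MyISAM, a plain REPAIR (without USE_FRM) is the safer first attempt
REPAIR TABLE tbl_name;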
What Else Can You Do to Repair Corrupt Tables?
Use Stellar Repair for MySQL software to repair corrupt tables. The software repairs a MySQL database and all its objects in just a few clicks, recovering all data from the corrupted database without data loss.
Conclusion
When you encounter the MySQL error ‘Incorrect Key File for Table’, the first thing you should do is check the /tmp folder to ensure that it has enough disk space to store temporary tables. If not, change the folder location to a disk with ample free space. If there is no issue with disk space, chances are that your database table is corrupt and needs to be repaired. If you fail to repair the table manually, MySQL database repair software can help.
About The Author
Charanjeet
Charanjeet is a Technical Content Writer at Stellar® who specializes in writing about databases, e-mail recovery, and e-mail migration solutions. She loves researching and developing content that helps database administrators, organizations, and novices fix problems related to MS SQL and MySQL databases and Microsoft Exchange.
Sometimes, you may experience “ERROR 1034: Incorrect key file” while running the ALTER TABLE or CREATE INDEX command:
mysql> alter table ontime add key(FlightDate);
ERROR 1034 (HY000): Incorrect key file for table 'ontime'; try to repair it
As the error message mentions key file, it is reasonable to assume we’re dealing with the MyISAM storage engine (the legacy storage engine which used to have such a thing), but no, we can clearly see this table is InnoDB!
When the error message in MySQL is confusing or otherwise unhelpful, it is a good idea to check the MySQL error log:
2019-02-24T02:02:26.100600Z 9 [Warning] [MY-012637] [InnoDB] 1048576 bytes should have been written. Only 696320 bytes written. Retrying for the remaining bytes.
2019-02-24T02:02:26.100884Z 9 [Warning] [MY-012638] [InnoDB] Retry attempts for writing partial data failed.
2019-02-24T02:02:26.100894Z 9 [ERROR] [MY-012639] [InnoDB] Write to file (merge) failed at offset 625999872, 1048576 bytes should have been written, only 696320 were written. Operating system error number 28. Check that your OS and file system support files of this size. Check also that the disk is not full or a disk quota exceeded.
2019-02-24T02:02:26.100907Z 9 [ERROR] [MY-012640] [InnoDB] Error number 28 means 'No space left on device'
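If you are not sure where your server writes its error log, you can ask it directly:
-- Shows 'stderr' or the path of the error log file
SHOW VARIABLES LIKE 'log_error';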
The most important part of this message is “Error number 28 means ‘No space left on device’” – so, we’re simply running out of disk space. You may wonder, though, which file is being written to and where it is located. “Write to file (merge) failed” is your (albeit not fully helpful) indication; “merge” here refers to the temporary file used to perform a merge sort when building indexes through sort (AKA InnoDB Fast Index Creation).
This file is created in the directory set by the innodb_tmpdir server variable; if that variable is not set, the location falls back to the tmpdir variable or the OS default, such as /tmp on Linux. In many cases, tmpdir points to a filesystem with little free space, which makes this error occur quite frequently.
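Since innodb_tmpdir is a dynamic variable (available from MySQL 5.7.11), you can point these merge files at a roomier filesystem without a restart; the path below is only a placeholder, and the directory must already exist:
-- Redirect online index-build merge files to a larger filesystem
SET GLOBAL innodb_tmpdir = '/data/mysql-tmp';
SHOW GLOBAL VARIABLES LIKE 'innodb_tmpdir';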
The amount of disk space required can be significant, sometimes exceeding the total size of the final table. When adding indexes on CHAR/VARCHAR columns, especially with multibyte character sets (utf8, utf8mb3, utf8mb4), the space allocated for each index entry is roughly the number of bytes per character in the charset multiplied by the maximum length of the string. So adding an index on a utf8 VARCHAR(100) column will require roughly 400 bytes for every row in the table.
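A back-of-the-envelope estimate before running the ALTER can save a failed attempt. This sketch uses the rough 400-bytes-per-entry figure from above with placeholder schema and table names (note that TABLE_ROWS is itself only an estimate for InnoDB):
-- Estimate the merge-file footprint of the index build, in megabytes
SELECT TABLE_ROWS,
       ROUND(TABLE_ROWS * 400 / 1024 / 1024) AS est_merge_mb
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'ontime';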
Summary:
Are you getting the “ERROR 1034: Incorrect key file” message for an InnoDB table? Check your error log and the tmpdir server variable!
What does it mean if MySQL returns the ‘Incorrect key file for table’ error for one of my queries? The answer is complicated and depends on which storage engine is returning the error. We have debugged two cases which we describe here.
File system out of space
When running the random query generator, one of the queries failed.
2013-09-27T05:16:39 Query: SELECT * FROM (mysql . general_log AS table1 INNER JOIN INFORMATION_SCHEMA . INNODB_BUFFER_PAGE AS table2 ON ( table2 . SPACE = table1 . user_host ) ) ORDER BY table1 . thread_id LIMIT 168 failed: 126 Incorrect key file for table '/data/mysql7/performance_schema_vardir/tmp/#sql_6b8_17.MYI'; try to repair it
Since this query requires a sort, MySQL creates a hidden temporary table called ‘#sql_6b8_17.MYI’ to hold the intermediate results. While the query was executing, some operation performed on the MyISAM table returned an error. What could it be?
MySQL maps the HA_ERR_CRASHED error received from the storage engine to the ER_NOT_KEYFILE error that the MySQL client sees. We need to track down where MyISAM returns the HA_ERR_CRASHED error. Unfortunately, the MyISAM storage engine has lots of places where it returns HA_ERR_CRASHED, so the cause can be almost anything.
In our case, we found that the MyISAM mi_write function eventually got an ENOSPC (no space) error when trying to write data to one of its files: the file system had run out of space. In this case, MyISAM returns the HA_ERR_CRASHED error, the MySQL client gets the ER_NOT_KEYFILE error, and the random query generator notes the failed query. Simple, right?
Race in secondary index query
TokuDB returns the HA_ERR_CRASHED error when it cannot read a row from the primary index using a primary key that it read from a secondary index. What does that mean?
Each key defined for a TokuDB table is stored in a fractal tree dictionary. The fractal tree for the primary key is stored in the ‘main’ fractal tree dictionary. If the table does not define a primary key, then TokuDB manufactures a hidden primary key and uses that instead. This hidden primary key is never seen outside of the TokuDB storage engine. Each secondary key is stored in its own ‘key’ fractal tree dictionary, where the dictionary key is composed of the columns of the secondary key with the columns of the primary key appended to it.
Given this mapping, non-covering queries read the secondary index first and then read the primary index using the primary key retrieved from the secondary index; the example after the table definition below makes this concrete.
Let's suppose that we have a simple table.
mysql> show create table t;
| t | CREATE TABLE `t` (
`x` int(11) DEFAULT NULL,
`y` int(11) DEFAULT NULL,
KEY `x` (`x`)
) ENGINE=TokuDB DEFAULT CHARSET=latin1 ROW_FORMAT=TOKUDB_ZLIB |
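To make the covering distinction concrete with the table above (this is an illustrative sketch, not part of the original test): a query touching only the indexed column can be answered from the secondary index alone, while one that also needs y forces the extra primary-dictionary lookup described earlier:
-- Covering: x comes straight from the secondary index, no second lookup
SELECT x FROM t FORCE INDEX (x) WHERE x > 0;
-- Non-covering: y must be fetched from the primary dictionary using the
-- primary key stored in the secondary index entry
SELECT x, y FROM t FORCE INDEX (x) WHERE x > 0;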
We insert a single row into the table
mysql 1> insert into t values (1,2);
Query OK, 1 row affected (0.00 sec)
On another MySQL client, we will run a query and force it to use the secondary key on ‘x’. Since it is not a covering key for the query, TokuDB will read a row from the secondary index and use the retrieved primary key to read a row from the hidden primary dictionary. We are going to put a delay between these two events and see what happens.
We use the read uncommitted isolation level.
mysql 2> set session transaction isolation level read uncommitted;
Query OK, 0 rows affected (0.00 sec)
We set a debug variable that forces a delay between reading a row from the index on ‘x’ and then using the retrieved primary key to read the full row from the hidden primary index.
mysql 2> set tokudb_read_full_row_delay=10000000;
Now, we start the query. It should stall the thread in the read_full_row method for the delay time.
mysql 2> select * from t force index (x) where x>0;
blocked sleeping
On the other MySQL client, we delete the row.
mysql 1> delete from t where x=1;
Query OK, 1 row affected (0.00 sec)
Eventually, the query client resumes after the sleep completes and can no longer find the primary key in the primary fractal tree, so it returns the HA_ERR_CRASHED error.
mysql 2> resumes after the sleep completes
ERROR 1034 (HY000): Incorrect key file for table 't'; try to repair it
This problem does not occur for any other transaction isolation levels as far as we know. This is because TokuDB uses snapshot reads when reading MVCC leaf entries for all transaction isolation levels other than read uncommitted. For read uncommitted transaction isolation level, TokuDB just returns the latest MVCC entries from the leaf entry, which may be inconsistent with the rest of the table.
Maybe, TokuDB should silently eat this error for read uncommitted queries. What do you think?
Inconsistent TokuDB Table
Suppose that MySQL is doing an index lookup on a TokuDB key that is not the primary key. It may have to use the primary key stored with the secondary key to retrieve the values of any columns that are not covered by the secondary key. If a secondary key exists in the secondary index but the primary key associated with it does NOT exist in the primary index, then TokuDB returns the HA_ERR_CRASHED error number to MySQL. This error gets translated to the ER_NOT_KEYFILE error number, which produces the ‘incorrect key file for table’ error message.
So, if a secondary index is inconsistent with the primary index, then this error can occur.
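If you suspect this kind of drift between a secondary index and the primary index, CHECK TABLE is the generic first diagnostic, though how thoroughly a given storage engine cross-checks its indexes varies:
-- Ask the engine to validate the table and its indexes
CHECK TABLE t;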
ASPECT #1: BAD FILE
Something tells me you somehow ran out of space in /tmpfs.
Whenever MySQL creates a temp table:
- It is created as a MyISAM table
- No .frm is created
- The initial .MYD is 0 bytes
- The initial .MYI is 1024 bytes (1K)
- There are never any indexes in the .MYI
I find it strange that MySQL could not create a 1K .MYI.
Since ext3 has a default block size of 1K, while ext4 has 4K, perhaps the 4K block size may have allowed for more data. This, in itself, is not the real problem.
ASPECT #2: YOUR QUERY
If you have tens of thousands of tables, the query you are running looks quite demanding. In fact, the query says «Get me all databases that have 1 or more tables in them.» If you have a large number of tables, you could be looking at a rather large temp table in order to collect table_schema values before doing the DISTINCT.
Sometimes it is easy to forget:
- SELECT DISTINCT executes a GROUP BY internally (made concrete in the example below)
- This is true in the older versions of MySQL
There are two Stack Overflow posts that mention this:
- Jun 06, 2012 : How to select distinct rows without using group by statement (My Post)
- Feb 24, 2009 : What’s faster, SELECT DISTINCT or GROUP BY in MySQL?
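For illustration (a hypothetical query against information_schema), the two forms below do equivalent work in those older versions, each materializing a temp table for the aggregation:
-- DISTINCT form: executed as an implicit GROUP BY
SELECT DISTINCT table_schema FROM information_schema.tables;
-- Equivalent explicit GROUP BY form
SELECT table_schema FROM information_schema.tables GROUP BY table_schema;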
ANALYSIS
My guess is a combination of anomalies:
ANOMALY #1 : You have a large number of tables
ANOMALY #2 : The temp table being built ran out of space, leaving a damaged .MYI
ANOMALY #3 : The query does not use an index.
Look at the table:
mysql> show create table information_schema.tables\G
*************************** 1. row ***************************
Table: TABLES
Create Table: CREATE TEMPORARY TABLE `TABLES` (
`TABLE_CATALOG` varchar(512) NOT NULL DEFAULT '',
`TABLE_SCHEMA` varchar(64) NOT NULL DEFAULT '',
`TABLE_NAME` varchar(64) NOT NULL DEFAULT '',
`TABLE_TYPE` varchar(64) NOT NULL DEFAULT '',
`ENGINE` varchar(64) DEFAULT NULL,
`VERSION` bigint(21) unsigned DEFAULT NULL,
`ROW_FORMAT` varchar(10) DEFAULT NULL,
`TABLE_ROWS` bigint(21) unsigned DEFAULT NULL,
`AVG_ROW_LENGTH` bigint(21) unsigned DEFAULT NULL,
`DATA_LENGTH` bigint(21) unsigned DEFAULT NULL,
`MAX_DATA_LENGTH` bigint(21) unsigned DEFAULT NULL,
`INDEX_LENGTH` bigint(21) unsigned DEFAULT NULL,
`DATA_FREE` bigint(21) unsigned DEFAULT NULL,
`AUTO_INCREMENT` bigint(21) unsigned DEFAULT NULL,
`CREATE_TIME` datetime DEFAULT NULL,
`UPDATE_TIME` datetime DEFAULT NULL,
`CHECK_TIME` datetime DEFAULT NULL,
`TABLE_COLLATION` varchar(32) DEFAULT NULL,
`CHECKSUM` bigint(21) unsigned DEFAULT NULL,
`CREATE_OPTIONS` varchar(255) DEFAULT NULL,
`TABLE_COMMENT` varchar(2048) NOT NULL DEFAULT ''
) ENGINE=MEMORY DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
mysql>
A full table scan is required, resulting in the need for a temp table with an in-memory sort.
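You can verify the full scan and temp-table behavior with EXPLAIN; the exact plan output varies by version, and the DISTINCT query here stands in for the original:
-- Look for 'Using temporary' in the Extra column (exact notes vary for information_schema)
EXPLAIN SELECT DISTINCT table_schema FROM information_schema.tables;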
SUGGESTIONS
SUGGESTION #1 : Use ext4. You already said the query works fine in ext4.
SUGGESTION #2 : Increase tmpfs to 16G
SUGGESTION #3 : Query the information_schema so that it does not use as much memory
DROP TABLE IF EXISTS mysql.dbtbcount;
CREATE TABLE mysql.dbtbcount
(
table_schema VARCHAR(64) NOT NULL,
table_count INT NOT NULL DEFAULT 0,
PRIMARY KEY (table_schema)
) ENGINE=MEMORY;
INSERT INTO mysql.dbtbcount (table_schema)
SELECT schema_name FROM information_schema.schemata
WHERE schema_name NOT IN ('information_schema','mysql');
INSERT INTO mysql.dbtbcount (table_schema)
SELECT table_schema FROM information_schema.tables
ON DUPLICATE KEY UPDATE table_count = table_count + 1;
SELECT * FROM mysql.dbtbcount;
SELECT COUNT(1) DistinctDBs FROM mysql.dbtbcount WHERE table_count > 0;
This counts all tables in all databases:
- When any row has a zero table_count, that database is empty.
- Count all rows that have a nonzero table_count to get the DISTINCT count.
- When done, run DROP TABLE mysql.dbtbcount;
Give it a Try !!!
UPDATE 2013-08-19 17:40 EDT
Your problem should have been posted as a separate question. Nevertheless, let’s see what we can surmise from this.
You should not be using /tmp as a backup location. Why?
- The OS needs to write temp files there, and at OS reboot all files in it vanish. A transient folder should not be the place for backups.
- mysqld has /tmp defined as the default folder for its temp files.
You need to reorganize the script to do parallel dumps.
SUGGESTION
You should collect the databases and parallel-mysqldump 25 databases at a time, starting from the biggest databases and working down to the smallest. It gets faster with each successive set of 25.
BACKUP_FOLDER=/any/path/but/tmp
MYSQL_HOST=localhost
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-h${MYSQL_HOST} -u${MYSQL_USER} -p${MYSQL_PASS}"
SQLSTMT="SELECT db FROM (SELECT table_schema db,SUM(data_length) datasize"
SQLSTMT-"${SQLSTMT} FROM information_schema.tables"
SQLSTMT-"${SQLSTMT} WHERE table_schema NOT ('mysql','log')"
SQLSTMT-"${SQLSTMT} AND RIGHT(table_schema,7)<>'_schema'"
SQLSTMT-"${SQLSTMT} GROUP BY table_schema) A"
SQLSTMT-"${SQLSTMT} ORDER BY datasize DESC"
mysql ${MYSQL_CONN} -ANe"${SQLSTMT}" > /tmp/ListOfDatabases.txt
COMMIT_COUNT=0
COMMIT_LIMIT=25
MYSQLDUMP_OPTIONS="--hex-blob --routines --triggers"
for DB in `cat /tmp/ListOfDatabases.txt`
do
BACKUP_FILE=${BACKUP_FOLDER}/${DB}.sql.gz
mysqldump ${MYSQL_CONN} ${MYSQLDUMP_OPTIONS} ${DB} | gzip > ${BACKUP_FILE} &
(( COMMIT_COUNT++ ))
if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
then
COMMIT_COUNT=0
wait
fi
done
if [ ${COMMIT_COUNT} -gt 0 ]
then
wait
fi
UPDATE 2013-08-20 10:03 EDT
If you don’t care about the size of the database, here is an alternate method that will start the mysqldump processes faster:
BACKUP_FOLDER=/any/path/but/tmp
MYSQL_HOST=localhost
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-h${MYSQL_HOST} -u${MYSQL_USER} -p${MYSQL_PASS}"
SQLSTMT-"${SQLSTMT} SELECT schema_name FROM information_schema.schemata"
SQLSTMT-"${SQLSTMT} WHERE schema_name NOT ('mysql','log')"
SQLSTMT-"${SQLSTMT} AND RIGHT(schema_name ,7)<>'_schema'"
COMMIT_COUNT=0
COMMIT_LIMIT=25
MYSQLDUMP_OPTIONS="--hex-blob --routines --triggers"
for DB in `cat /tmp/ListOfDatabases.txt`
do
BACKUP_FILE=${BACKUP_FOLDER}/${DB}.sql.gz
mysqldump ${MYSQL_CONN} ${MYSQLDUMP_OPTIONS} ${DB} | gzip > ${BACKUP_FILE} &
(( COMMIT_COUNT++ ))
if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
then
COMMIT_COUNT=0
wait
fi
done
if [ ${COMMIT_COUNT} -gt 0 ]
then
wait
fi
I wrote about doing parallel database and table dumps before: How can I optimize a mysqldump of a large database?
If you cannot get mydumper to throttle based on database size, this is the way to go.