InnoDB error: the age of the last checkpoint is

I really need some MySQL expertise. I am a newbie to MySQL, and I have seen my database server crash several times in the past week. I am using MySQL 5.1.36 on Ubuntu. This is a dedicated MySQL server with Dual

At first glance, I would say that your innodb_log_file_size is way too small. It should be bigger to do two things:

  • Accommodate any big BLOB or TEXT fields
  • Hold bigger transactions

Here is what you should do for now to see if it helps:

STEP 01) Change the following in /etc/my.cnf

[mysqld]
innodb_log_buffer_size          = 32M
innodb_buffer_pool_size         = 3G
innodb_log_file_size            = 768M

STEP 02) service mysql stop

STEP 03) rm -f /var/lib/mysql/ib_logfile*

STEP 04) service mysql start

This will rebuild the following files:

  • /var/lib/mysql/ib_logfile0
  • /var/lib/mysql/ib_logfile1

Give it a Try !!!

UPDATE 2013-07-03 12:37 EDT

I have updated my other posts on this topic but missed this one.

ButtleButkus just commented at 2013-07-03 07:18:56 EDT

Wouldn’t it be advisable to copy the ib_logfile* to another location for backup before deleting them?

Since there could be unfinished transactional data inside, here is what should be done

STEP 01) Change the following in /etc/my.cnf

[mysqld]
innodb_log_buffer_size          = 32M
innodb_buffer_pool_size         = 3G
innodb_log_file_size            = 768M

STEP 02) mysql -uroot -p -e"SET GLOBAL innodb_fast_shutdown = 0;"

STEP 03) service mysql stop

STEP 04) rm -f /var/lib/mysql/ib_logfile*

STEP 05) service mysql start

I added SET GLOBAL innodb_fast_shutdown = 0;. What does that do? It forces InnoDB to completely purge transactional changes from all of InnoDB's moving parts, including the transaction logs (ib_logfile0, ib_logfile1). Thus, there is no need to back up the old ib_logfile0 and ib_logfile1. If deleting them makes you nervous, then replace Step 04 with

mv /var/lib/mysql/ib_logfile* ..

to put the old logs in /var/lib. If the recreation of the logs is successful and mysqld starts up, then you can delete the old logs.

I have been using this feature for a year now. I have updated my other posts to reflect this…

  • May 20, 2013 : Innodb one file per tablespace
  • May 05, 2013 : Issue after moving the ib_logfile1 and ib_logfile0 files
  • Jan 10, 2013 : Finding and fixing InnoDB index corruption
  • Dec 17, 2012 : MySQL Start/Stop
  • Feb 16, 2011 : How to safely change MySQL innodb variable ‘innodb_log_file_size’? (Last Update)
  • Feb 04, 2011 : MySQL InnoDB — innodb_file_per_table cons? (Last Update)
  • Oct 29, 2010 : Howto: Clean a mysql InnoDB storage engine? (Added as Step 03 on Jun 04, 2013)

If there are other older posts of mine where I do not mention innodb_fast_shutdown, let me know so I can update it. Thanks again, ButtleButkus.

I recently had to restore a production DB to my local environment, and I saw the MySQL log filling up with a bunch of errors like this:

InnoDB: largest such row.
150817 23:22:56  InnoDB: ERROR: the age of the last checkpoint is 9440934,
InnoDB: which exceeds the log group capacity 9433498.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
150817 23:23:25  InnoDB: ERROR: the age of the last checkpoint is 9439873,
InnoDB: which exceeds the log group capacity 9433498.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
150817 23:23:51  InnoDB: ERROR: the age of the last checkpoint is 9442549,
InnoDB: which exceeds the log group capacity 9433498.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.

Well, what does all this mean?
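As a rough sketch of where the two numbers come from: the reported "log group capacity" sits at about 90% of the combined redo log size (an empirical observation, not a documented formula), and the error fires when the checkpoint age crosses it.

```python
# The "log group capacity" in the error is close to 90% of
# innodb_log_file_size * innodb_log_files_in_group. With the old 5M
# default and 2 files, that lands within a few KB of the 9433498
# reported in the log excerpt above.
innodb_log_file_size = 5 * 1024 * 1024     # 5M, the old default
innodb_log_files_in_group = 2              # default

combined = innodb_log_file_size * innodb_log_files_in_group
print(combined)                # 10485760
print(int(combined * 0.9))     # 9437184, within a few KB of 9433498
```

When the checkpoint age (redo written but not yet flushed to the data files) reaches that capacity, InnoDB stalls writes until it can catch up, which is why the fix is bigger log files.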

So, as it seems, the InnoDB log file settings need to be updated. I found a listing of the steps to take on MySQL’s website. Here are those steps fleshed out a little.

1) Make sure your innodb_fast_shutdown setting is not 2. To fix this, set it to 1 by running the following query:

SET GLOBAL innodb_fast_shutdown = 1;

2) Shut down mysql and look for errors in the log to make sure nothing went wrong.

service mysqld stop
cat /var/log/mysql.log

3) Copy the old log files to a new place in case something goes wrong.

mv /var/lib/mysql/ib_logfile* ..

4) Next edit your /etc/my.cnf file to increase your InnoDB log size:

innodb_log_file_size = 128M

You may see people suggesting really large values for the log file size; I saw a value of 768M in one StackOverflow answer. If that seems like a suspiciously large, random value to you (especially considering the default is 5M), then you're on to something. That raises the question: what should the value be? Here is a helpful article on how to properly size your log files.
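The sizing method such articles describe can be sketched as: sample InnoDB's log sequence number (the "Log sequence number" line in SHOW ENGINE INNODB STATUS) twice, a minute apart, and size the logs to hold roughly an hour of redo writes. The two LSN values below are made-up sample readings, not measurements from this server.

```python
# Hypothetical LSN samples taken 60 seconds apart; on a real server
# these come from two runs of SHOW ENGINE INNODB STATUS.
lsn_t0 = 3836410803        # "Log sequence number" at time t0
lsn_t1 = 3838334638        # the same counter 60 seconds later
interval_s = 60

bytes_per_hour = (lsn_t1 - lsn_t0) * 3600 // interval_s
per_file_mb = bytes_per_hour / 2 / 1024 / 1024   # split across the 2 default log files

print(round(per_file_mb))  # 55, so something like 64M per file would cover this write rate
```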

5) Lastly, watch your /var/log/mysql.log file and start MySQL back up.

service mysqld start

One helpful tip: if MySQL gives an error on startup, you probably skipped step 3, like I did. It turns out this is an important step, and your MySQL server may not even start without it. It's easy to fix, though: either go back and do step 3, or remove the edits you made in step 4 and restart mysqld. You should be all set.


We ran an ALTER TABLE today that took down the DB. We failed over to the slave, and in the post-mortem we discovered this in the MySQL error.log:

InnoDB: ERROR: the age of the last checkpoint is 90608129,
InnoDB: which exceeds the log group capacity 90593280.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.

This error rings true because we were working on a very large table that contains BLOB data types.

The best answer we found online said

To solve it, you need to stop MySQL cleanly (very important), delete the existing InnoDB log files (probably ib_logfile* in your MySQL data directory, unless you've moved them), then adjust the innodb_log_file_size to suit your needs, and then start MySQL again. This article from the MySQL performance blog might be instructive.

and in the comments

Yes, the database server will effectively hang for any updates to InnoDB tables when the log fills up. It can cripple a site.

which is, I guess, what happened, based on our current (default) innodb_log_file_size of 48MB?

SHOW GLOBAL VARIABLES LIKE '%innodb_log%';
+-----------------------------+----------+
| Variable_name               | Value    |
+-----------------------------+----------+
| innodb_log_buffer_size      | 8388608  |
| innodb_log_compressed_pages | ON       |
| innodb_log_file_size        | 50331648 |
| innodb_log_files_in_group   | 2        |
| innodb_log_group_home_dir   | ./       |
+-----------------------------+----------+
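Cross-checking the error log against these values (a rough check; the ~90% factor is the same empirical checkpoint threshold seen with the old 5M defaults) confirms it was indeed the 2 × 48M logs that filled up:

```python
# The "log group capacity 90593280" in the error log should be about
# 90% of the combined log size configured above.
innodb_log_file_size = 50331648        # 48M, from SHOW GLOBAL VARIABLES
innodb_log_files_in_group = 2

combined = innodb_log_file_size * innodb_log_files_in_group
print(combined)                                # 100663296 bytes (96M) in total
print(abs(combined * 0.9 - 90593280) < 10000)  # True: within a few KB
```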

So, this leads me to two pointed questions and one open-ended one:

  1. How do we determine the largest row so we can set our innodb_log_file_size to be bigger than that?
  2. What is the consequence of the action in step 1? I’d read about long recovery times with bigger logs.
  3. Is there anything else I should worry about regarding migrations, considering that we have a large table (650k rows, 6169.8GB) with unconstrained, variable-length BLOB fields?

We’re running MySQL 5.6, and here’s our my.cnf:

[mysqld]

#defaults
basedir                   = /opt/mysql/server-5.6
datadir                   = /var/lib/mysql
port                      = 3306
socket                    = /var/run/mysqld/mysqld.sock
tmpdir                    = /tmp
bind-address              = 0.0.0.0

#logs
log_error                 = /var/log/mysql/error.log
expire_logs_days          = 4
slow_query_log            = on
long_query_time           = 1


innodb_buffer_pool_size   = 11G

#http://stackoverflow.com/a/10866836/182484
collation-server          = utf8_bin
init-connect              ='SET NAMES utf8'
init_connect              ='SET collation_connection = utf8_bin'
character-set-server      = utf8
max_allowed_packet        = 64M
skip-character-set-client-handshake

#cache
query_cache_size          = 268435456
query_cache_type          = 1
query_cache_limit         = 1048576

As a follow-up to the suggestions listed below, I began investigating the file size of the table in question. I ran a script that wrote the combined byte size of the three BLOB fields to a table called pen_sizes. Here’s the result of getting the largest byte sizes:

select pen_size as bytes,
  pen_size / 1024 / 1024 as mb,
  pen_id from pen_sizes
  group by pen_id
  order by bytes desc
  limit 40

+---------+------------+--------+
| bytes   | mb         | pen_id |
+---------+------------+--------+
| 3542620 | 3.37850571 |  84816 |
| 3379107 | 3.22256756 |  74796 |
| 3019237 | 2.87936878 | 569726 |
| 3019237 | 2.87936878 | 576506 |
| 3019237 | 2.87936878 | 576507 |
| 2703177 | 2.57795048 | 346965 |
| 2703177 | 2.57795048 | 346964 |
| 2703177 | 2.57795048 |  93706 |
| 2064807 | 1.96915340 | 154627 |
| 2048592 | 1.95368958 | 237514 |
| 2000695 | 1.90801144 |  46798 |
| 1843034 | 1.75765419 | 231988 |
| 1843024 | 1.75764465 | 230423 |
| 1820514 | 1.73617744 |  76745 |
| 1795494 | 1.71231651 | 650208 |
| 1785353 | 1.70264530 |  74912 |
| 1754059 | 1.67280102 | 444932 |
| 1752609 | 1.67141819 |  76607 |
| 1711492 | 1.63220596 | 224574 |
| 1632405 | 1.55678272 |  76188 |
| 1500157 | 1.43066120 |  77256 |
| 1494572 | 1.42533493 | 137184 |
| 1478692 | 1.41019058 | 238547 |
| 1456973 | 1.38947773 | 181379 |
| 1433240 | 1.36684418 |  77631 |
| 1421452 | 1.35560226 | 102930 |
| 1383872 | 1.31976318 |  77627 |
| 1359317 | 1.29634571 | 454109 |
| 1355701 | 1.29289722 | 631811 |
| 1343621 | 1.28137684 |  75256 |
| 1343621 | 1.28137684 |  75257 |
| 1334071 | 1.27226925 |  77626 |
| 1327063 | 1.26558590 | 129731 |
| 1320627 | 1.25944805 | 636914 |
| 1231918 | 1.17484856 | 117269 |
| 1223975 | 1.16727352 |  75103 |
| 1220233 | 1.16370487 | 326462 |
| 1220233 | 1.16370487 | 326463 |
| 1203432 | 1.14768219 | 183967 |
| 1200373 | 1.14476490 | 420360 |
+---------+------------+--------+

This makes me believe that the average row size is closer to 1 MB than the roughly 10 MB implied by the table size above. Maybe the table size I listed earlier includes the indexes, too?
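As a sanity check on the "10 times the largest row" advice from the error message, the measured maximum of 3,542,620 bytes can be compared against the configured logs (assuming the 2 × 48M defaults shown in SHOW GLOBAL VARIABLES earlier):

```python
# The error message's guidance: combined log size should be at least
# 10x the largest BLOB/TEXT row. Numbers taken from the query output
# and SHOW GLOBAL VARIABLES above.
largest_row_bytes = 3542620            # biggest combined BLOB size found
combined_logs = 2 * 50331648           # 2 x 48M default log files

print(combined_logs >= 10 * largest_row_bytes)   # True: the 10x rule is already met
```

So the hang during the ALTER TABLE was more likely caused by the total volume of redo generated outpacing the 96M of log space than by any single oversized row.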

I ran

SELECT table_name AS "Tables", 
round(((data_length + index_length) / 1024 / 1024), 2) "Size in MB" 
FROM information_schema.TABLES 
WHERE table_schema = 'codepen'

+-------------------+------------+
| Tables            | Size in MB |
+-------------------+------------+
...snip
| pens              |    6287.89 |
...snip
