Error 1206 (HY000): The total number of locks exceeds the lock table size

I’m running a report in MySQL. One of the queries involves inserting a large number of rows into a temp table. When I try to run it, I get this error:

Error code 1206: The number of locks exceeds the lock table size.

The queries in question are:

create temporary table SkusBought(
customerNum int(11),
sku int(11),
typedesc char(25),
key `customerNum` (customerNum)
)ENGINE=InnoDB DEFAULT CHARSET=latin1;
insert into skusBought
select t1.* from
    (select customer, sku, typedesc from transactiondatatransit
    where (cat = 150 or cat = 151)
    AND daysfrom07jan1 > 731
group by customer, sku
union
select customer, sku, typedesc from transactiondatadelaware
    where (cat = 150 or cat = 151)
    AND daysfrom07jan1 > 731
group by customer, sku
union
select customer, sku, typedesc from transactiondataprestige
    where (cat = 150 or cat = 151)
    AND daysfrom07jan1 > 731
group by customer, sku) t1
join
(select customernum from topThreetransit group by customernum) t2
on t1.customer = t2.customernum;

I’ve read that changing the configuration file to increase the buffer pool size will help, but that does nothing. What would be the way to fix this, either as a temporary workaround or a permanent fix?

EDIT: changed part of the query. Shouldn’t affect it, but I did a find-replace all and didn’t realize it screwed that up. Doesn’t affect the question.

EDIT 2: Added typedesc to t1. I changed it in the query but not here.

asked Aug 1, 2011 at 15:59 by maxman92

This issue can be resolved by setting a higher value for the MySQL variable innodb_buffer_pool_size. The default value of innodb_buffer_pool_size is 8,388,608 bytes (8 MB).

To change the value of innodb_buffer_pool_size, follow the steps below:

  1. Locate the my.cnf file on the server. On Linux servers it is usually /etc/my.cnf
  2. Add the line innodb_buffer_pool_size=64M under the [mysqld] section of this file
  3. Restart the MySQL server

To restart the MySQL server, use either of the following commands:

  1. service mysqld restart
  2. /etc/init.d/mysqld restart
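
After the restart you can confirm the value from a MySQL client; a minimal check, assuming the innodb_buffer_pool_size=64M setting above:

-- The value is reported in bytes: 67108864 = 64 * 1024 * 1024
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';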

Reference: The total number of locks exceeds the lock table size

answered Jun 6, 2012 at 10:43 by PHP Bugs

I found another way to solve it: use a table lock. Of course, this may not be appropriate for your application if you need to update the table at the same time.

See:
Try using LOCK TABLES to lock the entire table, instead of the default action of InnoDB's MVCC row-level locking. If I'm not mistaken, the "lock table" refers to the InnoDB internal structure that stores row and version identifiers for the MVCC implementation, with a bit identifying that the row is being modified in a statement; with a table of 60 million rows, it probably exceeds the memory allocated to it. The LOCK TABLES command should alleviate this problem by setting a table-level lock instead of row-level locks:

SET @@AUTOCOMMIT=0;
LOCK TABLES avgvol WRITE, volume READ;
INSERT INTO avgvol(date,vol)
SELECT date,avg(vol) FROM volume
GROUP BY date;
UNLOCK TABLES;

Jay Pipes,
Community Relations Manager, North America, MySQL Inc.
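
Adapted to the tables in the question, the same pattern might look roughly like this (a sketch only, using the question's table names; the session's own temporary table should not need to be listed in the lock list):

SET @@AUTOCOMMIT=0;
-- Lock every base table the INSERT ... SELECT reads from
LOCK TABLES transactiondatatransit READ,
            transactiondatadelaware READ,
            transactiondataprestige READ,
            topThreetransit READ;
-- run the INSERT INTO SkusBought ... SELECT from the question here
UNLOCK TABLES;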

answered Aug 15, 2011 at 14:41 by Michael Goltsman

From the MySQL documentation (which, as I see, you have already read):

1206 (ER_LOCK_TABLE_FULL)

The total number of locks exceeds the lock table size. To avoid this error, increase the value of innodb_buffer_pool_size. Within an individual application, a workaround may be to break a large operation into smaller pieces. For example, if the error occurs for a large INSERT, perform several smaller INSERT operations.

If increasing innodb_buffer_pool_size doesn't help, then follow the advice in the bolded part and split your INSERT into three. Skip the UNIONs and run three separate INSERTs, each with a JOIN to the topThreetransit table.
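
A minimal sketch of the first of those three statements, keeping the filters from the question (repeat it for transactiondatadelaware and transactiondataprestige; the column list assumes the question's CREATE TABLE):

-- GROUP BY mirrors the question's query (typedesc comes from an arbitrary row per group)
insert into SkusBought (customerNum, sku, typedesc)
select t1.customer, t1.sku, t1.typedesc
from transactiondatatransit t1
join (select customernum from topThreetransit group by customernum) t2
  on t1.customer = t2.customernum
where (t1.cat = 150 or t1.cat = 151)
  and t1.daysfrom07jan1 > 731
group by t1.customer, t1.sku;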

answered Aug 3, 2011 at 15:12 by MicSim

First, you can run the SQL command show global variables like 'innodb_buffer%'; to check the current buffer pool size.

The solution is to find your my.cnf file and add:

[mysqld]
innodb_buffer_pool_size=1G # depends on your data and machine

DO NOT forget to add the [mysqld] section header, otherwise it won't work.

In my case (Ubuntu 16.04), my.cnf is located under /etc/mysql/.

answered Nov 30, 2018 at 11:45

Bowen Xu's user avatar

Bowen XuBowen Xu

3,7161 gold badge21 silver badges25 bronze badges

I am running MySQL on Windows with MySQL Workbench.
Go to Server > Server Status.
At the top it shows the configuration file path (C:\ProgramData\MySQL\...\my.ini).

Then open the my.ini file, press Ctrl+F and find buffer_pool_size.
Set the value higher; I would recommend 64 MB (the default is 8 MB).

Restart the server by going to Instance > Startup/Shutdown > Stop Server (and then later Start Server again).

In my case I could not delete entries from my table.

answered May 8, 2018 at 13:42 by dejoma

Fixing Error code 1206: The number of locks exceeds the lock table size.

In my case, I work with MySQL Workbench (5.6.17) running on Windows with WampServer 2.5.

For Windows/WampServer you have to edit the my.ini file (not the my.cnf file)

To locate this file, go to the menu Server > Server Status (in MySQL Workbench) and look under Server Directories > Base Directory.

MySQL Server - Server Status

In the my.ini file there are sections for different settings; look for the [mysqld] section (create it if it does not exist) and add the setting innodb_buffer_pool_size=4G:

[mysqld]
innodb_buffer_pool_size=4G

The right buffer pool size depends on your specific machine; in most cases 2G or 4G will fix the problem.
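
If you want a data point for choosing the size, one illustrative option (not from the original answer) is to check how much InnoDB data and index space the server currently holds:

-- Total InnoDB data + index size in GB, a rough upper bound for a useful buffer pool
SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS innodb_gb
FROM information_schema.tables
WHERE engine = 'InnoDB';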

Remember to restart the server so it picks up the new configuration; this corrected the problem for me.

Hope it helps!

answered Jul 2, 2020 at 16:47 by Efrain Plaza

I was getting the same issue in MySQL (Error code 1206: The number of locks exceeds the lock table size) while running an SQL script.

This is a MySQL configuration issue, so I made some changes in my.ini and the issue was resolved on my system.

We need to make some changes in my.ini, which is available at the following path: C:\ProgramData\MySQL\MySQL Server 5.7\my.ini.
Please update the following fields in the my.ini config file:

key_buffer_size=64M
read_buffer_size=64M
read_rnd_buffer_size=128M
innodb_log_buffer_size=10M
innodb_buffer_pool_size=256M
query_cache_type=2
max_allowed_packet=16M

After all the above changes, restart the MySQL service (from the Windows Services panel).

answered Aug 25, 2020 at 5:09 by DhanRaj Wanjare

If you have properly structured your tables so that each contains relatively unique values, then the less intensive way to do this would be to do 3 separate insert-into statements, 1 for each table, with the join-filter in place for each insert —

INSERT INTO SkusBought...

SELECT t1.customer, t1.SKU, t1.TypeDesc
FROM transactiondatatransit AS T1
LEFT OUTER JOIN topThreetransit AS T2
ON t1.customer = t2.customernum
WHERE T2.customernum IS NOT NULL

Repeat this for the other two tables — copy/paste is a fine method, simply change the FROM table name.
If you are trying to prevent duplicate entries in your SkusBought table, you can add the following join in each section, prior to the WHERE clause.

LEFT OUTER JOIN SkusBought AS T3
ON  t1.customer = t3.customer
AND t1.sku = t3.sku

and then add this as the last line of the WHERE clause:

AND t3.customer IS NULL

Your initial code uses a number of sub-queries, and the UNION statement can be expensive: it first creates its own temporary table to populate the data from the three separate sources before inserting into the table you want, along with running another sub-query to filter the results.
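
Putting those pieces together, the first of the three statements might look like this sketch (repeat per source table; the column list assumes the question's CREATE TABLE). Note that if SkusBought is a TEMPORARY table, as in the question, MySQL may reject referencing it twice in one statement ("Can't reopen table"); in that case drop the t3 join or make the table non-temporary:

INSERT INTO SkusBought (customerNum, sku, typedesc)
SELECT t1.customer, t1.SKU, t1.TypeDesc
FROM transactiondatatransit AS t1
LEFT OUTER JOIN topThreetransit AS t2
    ON t1.customer = t2.customernum
LEFT OUTER JOIN SkusBought AS t3
    ON t1.customer = t3.customer
    AND t1.sku = t3.sku
WHERE t2.customernum IS NOT NULL
  AND t3.customer IS NULL;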

answered Sep 9, 2011 at 18:34 by Forrest Pugh

On Windows: if you have MySQL Workbench, go to Server Status and find the location of the running server's configuration file. In my case it was:

C:\ProgramData\MySQL\MySQL Server 5.7

Open the my.ini file and find buffer_pool_size. Set the value higher; the default value is 8M.
This is how I fixed this problem.

answered Jul 6, 2017 at 8:30 by Ch HaXam

It is worth saying that the figure used for this setting is in BYTES — found that out the hard way!

answered Jan 26, 2018 at 17:42 by Antony

This answer below does not directly answer the OP’s question. However,
I’m adding this answer here because this page is the first result when
you Google «The total number of locks exceeds the lock table size».


If the query you are running is parsing an entire table that spans millions of rows, you can try a while loop instead of changing limits in the configuration.

The while loop will break it into pieces. Below is an example looping over an indexed DATETIME column.

# Drop
DROP TABLE IF EXISTS
new_table;

# Create (we will add keys later)
CREATE TABLE
new_table
(
    num INT(11),
    row_id VARCHAR(255),
    row_value VARCHAR(255),
    row_date DATETIME
);

# Change the delimiter
DELIMITER //

# Create procedure
CREATE PROCEDURE do_repeat(IN current_loop_date DATETIME)
BEGIN

    # Loops WEEK by WEEK until NOW(). Change WEEK to something shorter, like DAY, if you still get the lock errors.
    WHILE current_loop_date <= NOW() DO

        # Do something
        INSERT INTO
            new_table
            (
                num,
                row_id,
                row_value,
                row_date
            )
        SELECT
            # Do something interesting here
            num,
            row_id,
            row_value,
            row_date
        FROM
            old_table
        WHERE
            row_date >= current_loop_date AND
            row_date < current_loop_date + INTERVAL 1 WEEK;

        # Increment
        SET current_loop_date = current_loop_date + INTERVAL 1 WEEK;

    END WHILE;

END//

# Run
CALL do_repeat('2017-01-01');

# Cleanup
DROP PROCEDURE IF EXISTS do_repeat//

# Change the delimiter back
DELIMITER ;

# Add keys
ALTER TABLE
    new_table
MODIFY COLUMN
    num int(11) NOT NULL,
ADD PRIMARY KEY
    (num),
ADD KEY
    row_id (row_id) USING BTREE,
ADD KEY
    row_date (row_date) USING BTREE;

You can also adapt it to loop over the num column if your table doesn't use a date.
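
A rough sketch of that numeric variant, assuming num is an indexed integer column in old_table and a chunk size of 100,000 picked purely for illustration:

# Change the delimiter
DELIMITER //

# Hypothetical numeric variant of the procedure above
CREATE PROCEDURE do_repeat_num(IN current_num INT, IN max_num INT)
BEGIN

    # Loops in chunks of 100,000 ids; shrink the step if you still get lock errors
    WHILE current_num <= max_num DO

        INSERT INTO new_table (num, row_id, row_value, row_date)
        SELECT num, row_id, row_value, row_date
        FROM old_table
        WHERE num >= current_num AND
              num < current_num + 100000;

        # Increment
        SET current_num = current_num + 100000;

    END WHILE;

END//

# Change the delimiter back
DELIMITER ;

# Run, using the highest id in the source table as the upper bound
SET @max_num = (SELECT MAX(num) FROM old_table);
CALL do_repeat_num(1, @max_num);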

Hope this helps someone!

answered Apr 25, 2019 at 8:09 by Joseph Shih

Good Day,

I have had the same error when trying to remove millions of rows from a MySQL table.

My resolution had nothing to do with changing the MySQL configuration file; I simply reduced the number of rows targeted per transaction by specifying a maximum id. Instead of targeting all the rows in one transaction, I would suggest targeting portions per transaction. It might take more transactions to get the job done, but at least you will get somewhere without having to fiddle around with MySQL configuration.

Example:

delete from myTable where groupKey in ('any1', 'any2') and id < 400000;

and not

delete from myTable where groupKey in ('any1', 'any2');

The query might be refined further, for example with ORDER BY and LIMIT clauses to cap the number of rows removed per statement, as in the sketch below.
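
A rough sketch of that batching pattern, assuming id is an indexed column and a chunk size of 100,000 chosen only for illustration; repeat the statement until it reports 0 rows affected:

DELETE FROM myTable
WHERE groupKey IN ('any1', 'any2')
ORDER BY id
LIMIT 100000;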

answered Jan 20, 2022 at 8:47 by Mini-Man

I was facing a similar problem when trying to insert a few million rows into a database using Python. The solution is to group the inserts into smaller chunks to reduce memory usage, use the executemany function to insert each chunk, and commit after each chunk instead of executing one commit at the end.

def insert(query, items, conn):
    # items is a pandas DataFrame; insert it in GROUPS chunks, committing after each chunk
    GROUPS = 10
    total = items.shape[0]
    group = total // GROUPS + 1  # round up so the last partial chunk is not dropped
    items = list(items.itertuples(name=None, index=None))
    cursor = conn.cursor()
    for i in range(GROUPS):
        cursor.executemany(query, items[group * i : group * (i + 1)])
        conn.commit()
        print('#', end='')
    print()

There’s also a neat progress bar in the implementation above.

answered Mar 25, 2022 at 11:02 by Mehul Todi

MySQL: solving the "The total number of locks exceeds the lock table size" error
The error appeared when modifying a field: the first, small modification succeeded, but a second batch UPDATE touching about one million rows failed with this error.

Quoting the following explanation from the Internet:

If you’re running an operation on a large number of rows within a table that uses the InnoDB storage engine, you might see this error: ERROR 1206 (HY000): The total number of locks exceeds the lock table size MySQL is trying to tell you that it doesn’t have enough room to store all of the row locks that it would need to execute your query. The only way to fix it for sure is to adjust innodb_buffer_pool_size and restart MySQL. By default, this is set to only 8MB, which is too small for anyone who is using InnoDB to do anything. If you need a temporary workaround, reduce the amount of rows you’re manipulating in one query. For example, if you need to delete a million rows from a table, try to delete the records in chunks of 50,000 or 100,000 rows. If you’re inserting many rows, try to insert portions of the data at a single time.

It turns out that this problem occurs when an InnoDB table runs UPDATE, INSERT, or DELETE operations on large batches of data, so you need to adjust the value of the global innodb_buffer_pool_size variable and restart the MySQL service to solve it. Check the current storage engine of the database; the table was created with ENGINE=InnoDB, and the default innodb_buffer_pool_size is 8M.

#View the MySQL storage engine
mysql> show variables like '%storage_engine%';
+----------------------------------+--------+
| Variable_name                    | Value  |
+----------------------------------+--------+
| default_storage_engine           | InnoDB |
| default_tmp_storage_engine       | InnoDB |
| disabled_storage_engines         |        |
| internal_tmp_disk_storage_engine | InnoDB |
+----------------------------------+--------+
4 rows in set, 1 warning (0.00 sec)

# Check the size of the MySQL buffer pool
#The default buffer pool size is 8388608 = 8 * 1024 * 1024 = 8 MB; you need to change it to a larger size.
mysql> show variables like "%_buffer_pool_size%";
+-------------------------+---------+
| Variable_name           | Value   |
+-------------------------+---------+
| innodb_buffer_pool_size | 8388608 |
+-------------------------+---------+
1 row in set, 1 warning (0.00 sec)

#Modify innodb_buffer_pool_size
mysql> SET GLOBAL innodb_buffer_pool_size=2147483648;
#Change it to 2 GB.
#Note that setting this variable dynamically works from MySQL 5.7.5 onwards; earlier versions have to be modified in my.cnf and restarted.

MySQL: The total number of locks exceeds the lock table size (Major Hayden, 29 January 2010)

This problem has cropped up for me a few times, but I’ve always forgotten to make a post about it. If you’re working with a large InnoDB table and you’re updating, inserting, or deleting a large volume of rows, you may stumble upon this error:

ERROR 1206 (HY000): The total number of locks exceeds the lock table size

InnoDB stores its lock tables in the main buffer pool. This means that the number of locks you can have at the same time is limited by the innodb_buffer_pool_size variable that was set when MySQL was started. By default, MySQL leaves this at 8MB, which is pretty useless if you’re doing anything with InnoDB on your server.

Luckily, the fix for this issue is very easy: adjust innodb_buffer_pool_size to a more reasonable value. However, that fix does require a restart of the MySQL daemon. There’s simply no way to adjust this variable on the fly (with the current stable MySQL versions as of this post’s writing).

Before you adjust the variable, make sure that your server can handle the additional memory usage. The innodb_buffer_pool_size variable is a server wide variable, not a per-thread variable, so it’s shared between all of the connections to the MySQL server (like the query cache). If you set it to something like 1GB, MySQL won’t use all of that up front. As MySQL finds more things to put in the buffer, the memory usage will gradually increase until it reaches 1GB. At that point, the oldest and least used data begins to get pruned when new data needs to be present.

So, you need a workaround without a MySQL restart?

If you’re in a pinch, and you need a workaround, break up your statements into chunks. If you need to delete a million rows, try deleting 5-10% of those rows per transaction. This may allow you to sneak under the lock table size limitations and clear out some data without restarting MySQL.

To learn more about InnoDB’s parameters, visit the MySQL documentation.

MySQL error "ERROR 1206 (HY000): The total number of locks exceeds the lock table size" solution

Problem background

In MySQL 5.6, as the amount of data in a table using the InnoDB engine grows (for example, millions of records in a single table), running large batch UPDATE statements with the default engine parameters, which are too small, produces an error. The typical error is:

ERROR 1206 (HY000): The total number of locks exceeds the lock table size

For example, executing a statement like delete from table_xxx where col_1 like '%http://www.youku.com/%' in a single table with more than 2 million records, where many rows match the fuzzy condition, makes the InnoDB engine throw the error above because too many rows need to be locked. Looking this up (for example, here) shows that this type of error is caused by inappropriate default InnoDB configuration parameters. Obviously, the solution is to modify the configuration and restart mysqld.

The following shows how to reproduce the problem and the steps to solve it in an MHA replication environment:

1.1 Environment description

  #MHAEnvironment
 192.168.2.132 mydb1   #Master                CENTOS7
 192.168.2.133 mydb2   #Slave                 CENTOS7
 192.168.2.131 mydb3   #MHAManager            CENTOS7

1.2 Build a test table and simulate an error

#Build table script
USE test;
CREATE TABLE `UC_USER` (
 `ID` BIGINT (20),
 `USER_NAME` VARCHAR (400),
 `USER_PWD` VARCHAR (800),
 `BIRTHDAY` DATETIME ,
 `NAME` VARCHAR (800),
 `USER_ICON` VARCHAR (2000),
 `SEX` CHAR (4),
 `NICKNAME` VARCHAR (800),
 `STAT` VARCHAR (40),
 `USER_MALL` BIGINT (20),
 `LAST_LOGIN_DATE` DATETIME ,
 `LAST_LOGIN_IP` VARCHAR (400),
 `SRC_OPEN_USER_ID` BIGINT (20),
 `EMAIL` VARCHAR (800),
 `MOBILE` VARCHAR (200),
 `IS_DEL` CHAR (4),
 `IS_EMAIL_CONFIRMED` VARCHAR (4),
 `IS_PHONE_CONFIRMED` VARCHAR (4),
 `CREATER` BIGINT (20),
 `CREATE_DATE` DATETIME ,
 `UPDATE_DATE` DATETIME ,
 `PWD_INTENSITY` VARCHAR (4),
 `MOBILE_TGC` VARCHAR (256),
 `MAC` VARCHAR (256),
 `SOURCE` VARCHAR (4),
 `ACTIVATE` VARCHAR (4),
 `ACTIVATE_TYPE` VARCHAR (4),
 `IS_LIFE` VARCHAR (4)
) ENGINE=INNODB;


 #Insert data via a stored procedure, committing in batches
DELIMITER $$     
USE `test`$$     
DROP PROCEDURE IF EXISTS `pro_test_data`$$     
CREATE PROCEDURE `pro_test_data`( pos_begin INT,pos_end INT)
BEGIN
 DECLARE i INT;
 SET i=pos_begin;
 SET AUTOCOMMIT=0;
 WHILE  i>=pos_begin && i<= pos_end DO      
       INSERT INTO test.`UC_USER` (`ID`, `USER_NAME`, `USER_PWD`, `BIRTHDAY`, `NAME`, `USER_ICON`, `SEX`, `NICKNAME`, `STAT`, `USER_MALL`, `LAST_LOGIN_DATE`, `LAST_LOGIN_IP`, `SRC_OPEN_USER_ID`, `EMAIL`, `MOBILE`, `IS_DEL`, `IS_EMAIL_CONFIRMED`, `IS_PHONE_CONFIRMED`, `CREATER`, `CREATE_DATE`, `UPDATE_DATE`, `PWD_INTENSITY`, `MOBILE_TGC`, `MAC`, `SOURCE`, `ACTIVATE`, `ACTIVATE_TYPE`, `IS_LIFE`)
       VALUES (i, 'admin', '1ba613b3676a4a06d6204b407856f374', NOW(), 'supertube', 'group1/M00/03/BC/wKi0d1QkFaWAHhEwAAAoJ58qOcg271.jpg', '1', 'admin2014', '01', '1', NOW(), '192.168.121.103', NULL, '', '10099990001', '0', '1', '0', NULL, NULL, NULL, '1', 'E5F10CAA4EBB44C4B23726CBBD3AC413', '1-3', '0', '2', '2', '1');
       SET i = i + 1; # Commit a batch every 300,000 rows
   IF MOD(i,300000)<=0 THEN
INSERT INTO test.uc_log(id,msg)VALUES(i,'begin to commmit a group insert sql data.');
COMMIT;
   END IF;
 END WHILE;
END$$     
DELIMITER ;


 #Log table to record each batch commit
CREATE TABLE `uc_log` (
 `msg` varchar(1000) DEFAULT NULL COMMENT 'batch commit message',
 `id` int(11) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;





 #Insert 10,000,000 rows of data
mysql> call test.pro_test_data(0,10000000);
Query OK, 1 row affected (1 hour 37 min 34.57 sec)

mysql> select count(1) from test.UC_USER;
+-----------+
| count(1)  |
+-----------+
|  10000000 |
+-----------+
1 row in set (3 min 0.14 sec)



 #Add primary key
alter  table  test.UC_USER  add primary key(id);



 #Statement that reproduces the error
update test.UC_USER a,
(select id, MOBILE from test.UC_USER
  where id % 3 = 0) b
  set a.MOBILE = b.MOBILE
  where a.id = b.id;


1.3 Causes and solutions

1.3.1 Cause

The innodb_buffer_pool_size of both the master and the slave was deliberately set to 8M here.

1.3.2 Modify the parameter on the slave, then restart the slave
$ vi /MySQL/my3306/my.cnf
innodb_buffer_pool_size=128m

$ mysqladmin shutdown -uroot -proot123
Warning: Using a password on the command line interface can be insecure.
170830 23:52:56 mysqld_safe mysqld from pid file /MySQL/my3306/run/mysqld.pid ended

$ mysqld_safe --defaults-file=/MySQL/my3306/my.cnf --user=mysql &
[1] 60117
170830 23:53:38 mysqld_safe Logging to '/MySQL/my3306/log/error.log'.
170830 23:53:39 mysqld_safe Starting mysqld daemon with databases from /MySQL/my3306/data


mysql> show variables like '%buffer%'
    -> ;
+-------------------------------------+----------------+
| Variable_name                       | Value          |
+-------------------------------------+----------------+
| bulk_insert_buffer_size             | 8388608        |
| innodb_buffer_pool_dump_at_shutdown | OFF            |
| innodb_buffer_pool_dump_now         | OFF            |
| innodb_buffer_pool_filename         | ib_buffer_pool |
| innodb_buffer_pool_instances        | 8              |
| innodb_buffer_pool_load_abort       | OFF            |
| innodb_buffer_pool_load_at_startup  | OFF            |
| innodb_buffer_pool_load_now         | OFF            |
| innodb_buffer_pool_size             | 134217728      |
| innodb_change_buffer_max_size       | 25             |
| innodb_change_buffering             | inserts        |
| innodb_log_buffer_size              | 67108864       |
| innodb_sort_buffer_size             | 1048576        |
| join_buffer_size                    | 262144         |
| key_buffer_size                     | 8388608        |
| myisam_sort_buffer_size             | 8388608        |
| net_buffer_length                   | 16384          |
| preload_buffer_size                 | 32768          |
| read_buffer_size                    | 131072         |
| read_rnd_buffer_size                | 262144         |
| sort_buffer_size                    | 262144         |
| sql_buffer_result                   | OFF            |
+-------------------------------------+----------------+
1.3.3 master: close event_scheduler (ie mydb1)
mysql> set global event_scheduler=off;

1.3.4 manager: close the management process (ie mydb3)
$ /usr/local/bin/masterha_stop --conf=/u01/mha/etc/app.cnf
MHA Manager is not running on app(2:NOT_RUNNING).
1.3.5 manager: Check the configuration file
Check whether /u01/mha/etc/app.cnf has been modified or damaged. If it is damaged, restore the correct configuration file:
cp /u01/mha/etc/app.cnf.bak /u01/mha/etc/app.cnf
1.3.6 Start switching:
/usr/local/bin/masterha_master_switch --master_state=alive --conf=/u01/mha/etc/app.cnf


1.3.7 Modify my.cnf on the new slave, mydb1, and restart
$ vi /MySQL/my3306/my.cnf
innodb_buffer_pool_size=128m

$ mysqladmin shutdown -uroot -proot123

$ mysqld_safe --defaults-file=/MySQL/my3306/my.cnf --user=mysql &
1.3.8 new master(old slave) mydb2
mysql> show master status\G
*************************** 1. row ***************************
         File: binlog.000014
     Position: 120
 Binlog_Do_DB: 
 Binlog_Ignore_DB: 
 Executed_Gtid_Set: 
 1 row in set (0.00 sec)

1.3.9 new slave(old master) mydb1

    CHANGE MASTER TO
    MASTER_HOST='192.168.2.133',
    MASTER_PORT=3306,
    MASTER_LOG_FILE='binlog.000014',
    MASTER_LOG_POS=120,
    MASTER_USER='rep',
    MASTER_PASSWORD='rep123';

    mysql> start slave;
    mysql> show slave status\G
1.3.10 Start the management node to view the cluster status
$ /usr/local/bin/masterha_manager --conf=/u01/mha/etc/app.cnf &
 #Or
$ /usr/local/bin/masterha_manager --conf=/u01/mha/etc/app.cnf --remove_dead_master_conf --ignore_last_failover
$ /usr/local/bin/masterha_check_repl --conf=/u01/mha/etc/app.cnf


1.3.11 Re-running the error statement now succeeds

