SQLSTATE HY000 general error 2013: Lost connection to MySQL server during query

MySQL Error 2013 (HY000) appears when the connection between the MySQL client and the database server times out. The fix usually involves adjusting the MySQL configuration.

Is your database restore stuck with MySQL Error 2013 (hy000)?

Often queries or modifications in large databases result in MySQL errors due to server timeout limits.

At Bobcares, we often get requests to fix MySQL errors, as a part of our Server Management Services.

Today, let’s see how our Support Engineers fix MySQL Error 2013 (hy000) for our customers.

Why does MySQL Error 2013 (HY000) happen?

While dealing with MySQL, we may encounter some errors. Today, we are going to discuss one such error.

This MySQL 2013 error occurs during a restore of databases via mysqldump, in MySQL replication, etc.

This error appears when the connection between MySQL client and database server times out.

In general, this happens in databases with large tables. As a result, it takes too much time for the query to return data and the connection drops with an error.

Other reasons for the error include a large number of aborted connections, insufficient server memory, server restrictions, etc.

How do we fix MySQL Error 2013 (hy000)?

The fix for MySQL Error 2013 (hy000) depends a lot on the triggering reason. Let’s now see how our MySQL Engineers help customers solve it.

1. Changing MySQL limits

Recently, one of our customers approached us saying that he was getting an error like the one shown below while trying to connect to the MySQL server.

MySQL Error 2013 (hy000)

Our Engineers checked in detail and found that the connect_timeout value was set to only a few seconds, so we increased it to 10 in the MySQL configuration file. For that, we followed the steps below:

Firstly, we opened the MySQL configuration file at /etc/mysql/my.cnf

Then, we searched for connect_timeout and set it as:

connect_timeout=10

Then we tried connecting to the MySQL server again and were successful.

Additionally, the variable max_allowed_packet in the MySQL configuration file also needs a proper value. When restoring dump files that are gigabytes in size, we increase this value.
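For illustration, here is a minimal sketch of how both variables might look in /etc/mysql/my.cnf. The values are examples only, not the exact numbers from the case above:

[mysqld]
# wait longer for the initial handshake before dropping the connection
connect_timeout = 10
# accept large packets so multi-gigabyte dump files can be restored
max_allowed_packet = 512M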

2. Disabling access restrictions

Similarly, this error also appears when the host has access restrictions. In such cases, we fix it by adding the client's IP to /etc/hosts.allow or allowing it in the server firewall.
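As a sketch only (the IP address below is a placeholder, not the customer's real address), the hosts.allow entry and an equivalent firewall rule could look like this:

# /etc/hosts.allow – permit this client to reach the MySQL daemon
mysqld : 203.0.113.25

# iptables rule allowing the same client to reach MySQL's default port
iptables -I INPUT -p tcp -s 203.0.113.25 --dport 3306 -j ACCEPT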

Also, the error can happen because the server itself is unavailable. Recently, in a similar instance, the problem was not related to the MySQL server or MySQL settings. We dug deep and found that high network traffic was causing the problem.

When we checked, we found a suspicious process running as the Apache user. We killed it, and this fixed the error.
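A generic sketch of how such a runaway process might be located and stopped (the process ID shown is hypothetical):

# list processes owned by the Apache user and inspect anything unexpected
ps aux | grep -i apache
# stop the offending process by its PID
kill -9 12345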

3. Increasing Server Memory

Last but not least, MySQL memory allocation can also be a key factor in this error. In such cases, the server logs will contain entries indicating insufficient memory.

Therefore, our Dedicated Engineers reduce the innodb_buffer_pool_size. This lowers MySQL's memory allocation on the server and fixes the error.
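A hedged sketch of that change in my.cnf (1G is an arbitrary example; the right size depends on how much RAM the server actually has):

[mysqld]
# shrink the InnoDB buffer pool so mysqld fits within available memory
innodb_buffer_pool_size = 1G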

[Need assistance with MySQL errors – We can help you fix it]

Conclusion

In short, we discussed the causes of MySQL Error 2013 (HY000) in detail and saw how our Support Engineers fix this error for our customers.


If you spend time running lots of MySQL queries, you might come across the Error Code: 2013. Lost connection to MySQL server during query. This article offers some suggestions on how to avoid or fix the problem.

Why this happens

This error appears when the connection between your MySQL client and database server times out. Essentially, it took too long for the query to return data so the connection gets dropped.

Most of my work involves content migrations. These projects usually involve running complex MySQL queries that take a long time to complete. I’ve found the WordPress wp_postmeta table especially troublesome because a site with tens of thousands of posts can easily have several hundred thousand postmeta entries. Joins of large datasets from these types of tables can be especially intensive.

Avoid the problem by refining your queries

In many cases, you can avoid the problem entirely by refining your SQL queries. For example, instead of joining all the contents of two very large tables, try filtering out the records you don’t need. Where possible, try reducing the number of joins in a single query. This should have the added benefit of making your query easier to read. For my purposes, I’ve found that denormalizing content into working tables can improve the read performance. This avoids time-outs.
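As a hedged illustration of the filtering idea using the WordPress tables mentioned above (the meta_key value is just an example):

-- instead of joining every row of both large tables,
-- restrict each side to the rows you actually need first
SELECT p.ID, p.post_title, m.meta_value
FROM wp_posts AS p
JOIN wp_postmeta AS m
  ON m.post_id = p.ID
 AND m.meta_key = '_thumbnail_id'
WHERE p.post_type = 'post'
  AND p.post_status = 'publish';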

Re-writing the queries isn't always an option, so you can try the following server-side and client-side workarounds.

Server-side solution

If you’re an administrator for your MySQL server, try changing some values. The MySQL documentation suggests increasing the net_read_timeout or connect_timeout values on the server.
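For example, an administrator could raise those values at runtime. The 600 seconds below is an arbitrary illustrative number, and the change only lasts until the server restarts unless it is also written to the configuration file:

SET GLOBAL net_read_timeout = 600;
SET GLOBAL connect_timeout = 600;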

Client-side solution

You can increase your MySQL client’s timeout values if you don’t have administrator access to the MySQL server.

MySQL Workbench

You can edit the SQL Editor preferences in MySQL Workbench:

  1. In the application menu, select Edit > Preferences > SQL Editor.
  2. Look for the MySQL Session section and increase the DBMS connection read time out value.
  3. Save the settings, quit MySQL Workbench, and reopen the connection.

Navicat

How to edit Navicat preferences:

  1. Control-click on a connection item and select Connection Properties > Edit Connection.
  2. Select the Advanced tab and increase the Socket Timeout value.

Command line

On the command line, use the connect_timeout variable.
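A minimal sketch of what that looks like (host, user, and database are placeholders):

mysql --connect-timeout=60 -u myuser -p -h db.example.com mydatabase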

Python script

If you’re running a query from a Python script, you can raise the timeout through the connection object:
con.query('SET GLOBAL connect_timeout=6000')

The error described in the title of this article is quite common, and several attempts at solving it can be found in various articles on the web. However, in the case covered here, Error 2013 "Lost connection to MySQL server during query" occurs under a rather specific condition. The error appears with the following message:


root@hostname ~# mysql -uroot -p -h 127.0.0.1 -P 4406
Enter password:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
root@hostname ~#

The above error message comes from connecting to a MySQL database server. It is a standard MySQL database server running on the machine, but the actual connection is an indirect one: it goes through a proxy process that forwards the container's port 4406. The following is the process actually listening on that port:


root@hostname ~# netstat -tulpn | grep 4406
tcp6       0      0 :::4406      :::*      LISTEN      31814/docker-proxy
root@hostname ~#

There are already plenty of articles discussing this error, for example threads on Stack Overflow and several other posts. The general issue is usually similar: something is wrong in the running configuration of the ordinary MySQL database server.


Steps for solving the issue

There are several steps for solving the issue above, and they fall into two parts. The first part identifies the root cause of the problem; the second part is the actual fix that addresses that root cause. So, the following section, which is the first part, focuses on searching for the cause of the problem.

Searching for the cause of the problem

In the context of this article, the following are the steps for resolving the error:

  1. Check whether the MySQL database server process is actually running. Do this with whatever command is available on the system for checking a running service; in this article it is 'systemctl status mysql'. The following is an example of executing that command:

root@hostname ~# systemctl status mysql
mysql.service - MySQL Community Server
   Loaded: loaded (/lib/systemd/system/mysql.service; bad; vendor preset: enabled)
   Active: active (running) since Mon 2019-09-16 13:16:12; 40s ago
  Process: 14867 ExecStart=/usr/sbin/mysqld --daemonize --pid-file=/run/mysqld/mysqld.pid (code=exited, status=0/SUCCESS)
  Process: 14804 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
 Main PID: 14869 (mysqld)
    Tasks: 31 (limit: 4915)
   CGroup: /system.slice/mysql.service
           └─14869 /usr/sbin/mysqld --daemonize --pid-file=/run/mysqld/mysqld.pid
root@hostname ~#

  2. Before connecting to the MySQL database server through the alternate port that the container's proxy process is forwarding, simply test the normal connection. In other words, connect using the normal port on which the machine listens for incoming MySQL connections, which is usually port 3306. Do it as follows:

root@hostname ~# mysql -uroot
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
root@hostname ~#

The above error message is where the actual root problem lies. Check for the file that the MySQL daemon process should use as its socket, as follows:


root@hostname ~# cd /var/run/mysqld/
root@hostname ~# ls
mysqld.pid  mysql.sock  mysql.sock.lock
root@hostname ~#

As the above output shows, the expected socket file does not exist. That is why the connection to the MySQL database server keeps failing, even though the connection is attempted through the default port 3306.

  3. Try restarting the service and hope that it solves the problem:

root@hostname ~# systemctl stop mysql
root@hostname ~# systemctl start mysql
root@hostname ~# mysql -uroot
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
root@hostname ~#

  4. Unfortunately, the step above also ends in failure. To continue troubleshooting, check the MySQL database configuration file. After reviewing it, the configuration does not seem to apply at all, and after spending hours changing the configuration files, nothing changes.

Because of what happens above, the next step is to check which configuration file is actually used by the running MySQL database server, so that any change made there has the intended effect.
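As a side note that is not part of the original walkthrough, mysqld itself can list the configuration files it reads by default, which can be a quicker first check than tracing the process:

mysqld --verbose --help 2>/dev/null | grep -A 1 "Default options"

This prints the files in the order they are read, typically /etc/my.cnf, /etc/mysql/my.cnf, and ~/.my.cnf.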


Checking the MySQL Database Server configuration used by the running MySQL Database Server

In the previous part, the need arose to find the actual configuration file used by the running MySQL database server. It is essential to make sure that the configuration file being edited is the right one, so that every change has the intended effect on solving the error. The following are the steps for finding it:

  1. First, check the list of service unit files relating to the running service. In the previous part, the running service is the MySQL one. Execute the following command to list the available unit files:

systemctl list-unit-files | grep mysql

The output of the above command, for example, is the following:


user@hostname:~$ systemctl list-unit-files | grep mysql
mysql.service                              bad
mysql@.service                             bad
user@hostname:~$

  2. Then inspect the contents of the service unit by executing the following command. Pick the right service; in this context it is 'mysql.service':

user@hostname:~$ systemctl cat mysql.service
# /lib/systemd/system/mysql.service
# MySQL systemd service file

[Unit]
Description=MySQL Community Server
After=network.target

[Install]
WantedBy=multi-user.target

[Service]
Type=forking
User=mysql
Group=mysql
PIDFile=/run/mysqld/mysqld.pid
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld --daemonize --pid-file=/run/mysqld/mysqld.pid
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
LimitNOFILE=5000
user@hostname:~$

  3. According to the output above, the file responsible for starting the service is '/usr/share/mysql/mysql-systemd-start'. The following is the relevant part of that file's contents:

if [ ! -r /etc/mysql/my.cnf ]; then
  echo "MySQL configuration not found at /etc/mysql/my.cnf. Please create one."
  exit 1
fi
...

  4. After checking the contents of '/etc/mysql/my.cnf', it is apparently not the right file. To be more precise, there are other ways to find the configuration file actually used by the running MySQL database server. Based on that approach, simply run the following commands to get the right one, starting with the process ID of the running MySQL database server process:

root@hostname ~# netstat -tulpn | grep 3306
tcp6       0      0 :::3306      :::*      LISTEN      21192/mysqld
root@hostname ~# ps aux | grep 21192
mysql    21192  0.2  0.1 3031128 22664 ?      Sl   Sep16   1:39 /usr/sbin/mysqld --daemonize --pid-file=/run/mysqld/mysqld.pid
root     25442  0.0  0.0   23960  1068 pts/20 S+   01:41   0:00 grep 21192
root@hostname ~#

After getting the right running process, run strace on that binary:

root@hostname ~# cd /usr/sbin/
root@hostname ~# strace ./mysqld

The following is the relevant part of the output:

stat("/etc/my.cnf", 0x7fff2e917880) = -1 ENOENT (No such file or directory)
stat("/etc/mysql/my.cnf", {st_mode=S_IFREG|0644, st_size=839, ...}) = 0
openat(AT_FDCWD, "/etc/mysql/my.cnf", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=839, ...}) = 0
brk(0x35f6000) = 0x35f6000
read(3, "#\n# The MySQL database server co"..., 4096) = 839
openat(AT_FDCWD, "/etc/mysql/conf.d/", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 4
fstat(4, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
getdents(4, /* 4 entries */, 32768) = 120
getdents(4, /* 0 entries */, 32768) = 0
close(4) = 0
stat("/etc/mysql/conf.d/mysql.cnf", {st_mode=S_IFREG|0644, st_size=629, ...}) = 0
openat(AT_FDCWD, "/etc/mysql/conf.d/mysql.cnf", O_RDONLY) = 4
fstat(4, {st_mode=S_IFREG|0644, st_size=629, ...}) = 0
read(4, "[mysqld]\n\n# Connection and Thre"..., 4096) = 629
read(4, "", 4096) = 0
close(4) = 0
stat("/etc/mysql/conf.d/mysqldump.cnf", {st_mode=S_IFREG|0644, st_size=55, ...}) = 0
openat(AT_FDCWD, "/etc/mysql/conf.d/mysqldump.cnf", O_RDONLY) = 4
fstat(4, {st_mode=S_IFREG|0644, st_size=55, ...}) = 0
read(4, "[mysqldump]\nquick\nquote-names\nma"..., 4096) = 55
read(4, "", 4096) = 0
close(4) = 0
read(3, "", 4096) = 0
close(3) = 0
stat("/root/.my.cnf", 0x7fff2e917880) = -1 ENOENT (No such file or directory)

The right file is finally found: '/etc/mysql/conf.d/mysql.cnf'. After inspecting the contents of that file, it turns out to be empty. This is the root cause: an earlier upgrade or reinstallation of the MySQL database server left the configuration in a broken state. The solution is simply to fill that empty configuration file with the right settings from a reference configuration. Restart the MySQL server again, and the error above will be resolved.
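As a hedged sketch only (these are typical Debian/Ubuntu defaults and assumptions, not the exact reference configuration mentioned above), a refilled /etc/mysql/conf.d/mysql.cnf might contain something like:

[mysqld]
# listening port and the socket file the client expects
port            = 3306
socket          = /var/run/mysqld/mysqld.sock
bind-address    = 127.0.0.1
datadir         = /var/lib/mysql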

When you run MySQL queries, sometimes you may encounter an error saying you lost connection to the MySQL server as follows:

Error Code: 2013. Lost connection to MySQL server during query

The error above commonly happens when you run a long or complex MySQL query that runs for more than a few seconds.

To fix the error, you may need to change the timeout-related global settings in your MySQL database server.

Increase the connection timeout from the command line using the --connect-timeout option

If you’re accessing MySQL from the command line, then you can increase the number of seconds MySQL will wait for a connection response using the --connect-timeout option.

By default, MySQL will wait for 10 seconds before responding with a connection timeout error.

You can increase the number to 120 seconds to wait for two minutes:

mysql -uroot -proot --connect-timeout 120

You can adjust the number 120 above to the number of seconds you’d like to wait for a connection response.

Once you’re inside the mysql console, try running your query again to see if it’s completed successfully.

Using the --connect-timeout option changes the timeout seconds temporarily. It only works for the current MySQL session you’re running, so you need to use the option each time you want the connection timeout to be longer.

If you want to make a permanent change to the connection timeout variable, then you need to adjust the settings from either your MySQL database server or the GUI tool you used to access your database server.

Let’s see how to change the timeout global variables in your MySQL database server first.

Adjust the timeout global variables in your MySQL database server

MySQL database stores timeout-related global variables that you can access using the following query:

SHOW VARIABLES LIKE "%timeout";

Here’s the result from my local database. The connect_timeout and net_read_timeout variables are the ones you need to change to let MySQL run longer queries:

+-----------------------------------+----------+
| Variable_name                     | Value    |
+-----------------------------------+----------+
| connect_timeout                   | 10       |
| delayed_insert_timeout            | 300      |
| have_statement_timeout            | YES      |
| innodb_flush_log_at_timeout       | 1        |
| innodb_lock_wait_timeout          | 50       |
| innodb_rollback_on_timeout        | OFF      |
| interactive_timeout               | 28800    |
| lock_wait_timeout                 | 31536000 |
| mysqlx_connect_timeout            | 30       |
| mysqlx_idle_worker_thread_timeout | 60       |
| mysqlx_interactive_timeout        | 28800    |
| mysqlx_port_open_timeout          | 0        |
| mysqlx_read_timeout               | 30       |
| mysqlx_wait_timeout               | 28800    |
| mysqlx_write_timeout              | 60       |
| net_read_timeout                  | 30       |
| net_write_timeout                 | 60       |
| replica_net_timeout               | 60       |
| rpl_stop_replica_timeout          | 31536000 |
| rpl_stop_slave_timeout            | 31536000 |
| slave_net_timeout                 | 60       |
| wait_timeout                      | 28800    |
+-----------------------------------+----------+

To change the variable values, you can use the SET GLOBAL query as shown below:

SET GLOBAL connect_timeout = 600; 

The above query should adjust the connect_timeout variable value to 600 seconds. You can adjust the numbers as you see fit.
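To confirm the change took effect, you can read the variable back:

SHOW GLOBAL VARIABLES LIKE 'connect_timeout';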

Adjust the timeout variables in your MySQL configuration files

Alternatively, if you’re using a MySQL configuration file to control the settings of your connections, then you can edit the my.cnf file (Mac) or my.ini file (Windows) used by your MySQL connection.

Open that configuration file using the text editor of your choice and try to find the following variables under the [mysqld] section:

[mysqld]
connect_timeout = 10
net_read_timeout = 30
wait_timeout = 28800
interactive_timeout = 28800

The wait_timeout and interactive_timeout variables shouldn’t cause any problem because they usually have 28800 seconds (or 8 hours) as their default value.

To prevent the timeout error, you need to increase the connect_timeout and net_read_timeout variable values. I’d suggest setting them to at least 600 seconds (10 minutes).
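A sketch of what the adjusted section could look like, using the 600-second suggestion above (tune the numbers to your longest-running query):

[mysqld]
connect_timeout     = 600
net_read_timeout    = 600
wait_timeout        = 28800
interactive_timeout = 28800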

If you’re using GUI MySQL tools like MySQL Workbench, Sequel Ace, or PHPMyAdmin, then you can also find timeout-related variables that are configured by these tools in their settings or preferences menu.

For example, in MySQL Workbench for Windows, you can find the timeout-related settings in Edit > Preferences > SQL Editor.

If you’re using a Mac, the menu is MySQLWorkbench > Preferences > SQL Editor.

If you’re using Sequel Ace like me, then you can find the connection timeout option in the Preferences > Network menu.


For other GUI tools, you need to find the option yourself. You can try searching the term [tool name] connection timeout settings in Google to find the option.

And those are the four solutions you can try to fix the MySQL connection lost during query issue.

I hope this tutorial has been helpful for you 🙏
