Read timeout error (Page 1) — Issues and Errors — RemoteXY community

1 — kartikpkr (New member, Registered: 2020-06-08, Posts: 3) — 2020-06-08 16:27:30

Topic: Read timeout error

I face this issue while connecting an HC-05 module to my phone.

— Start connection
— Connecting to bluetooth device
— Bluetooth device connected
— Receiving GUI configuration
— Read timeout error
— Receiving GUI configuration, try 2
— Read timeout error
— Receiving GUI configuration, try 3
— Read timeout error
— Receiving GUI configuration, try 4
— Read timeout error
— Disconnect

After that, a window pops up saying «Board Not Reply».

2 — Reply by remotexy (Administrator, Registered: 2016-10-27, Posts: 817) — 2020-06-09 19:21:19

Re: Read timeout error

If you receive this specific error, work through the checklist below, moving to the next item only after verifying the previous one:

- your Arduino is not powered on;
- the required sketch has not been loaded onto the Arduino;
- the RemoteXY library has not been updated and an old version is in use;
- power is not supplied to the Bluetooth module, or the power contacts are reversed;
- the RX and TX contacts of the Bluetooth module (or one of them) are not connected to the controller, or there is a bad contact;
- the RX and TX contacts of the Bluetooth module are not connected correctly; they may be swapped, so check the wiring scheme (step 4);
- incorrect configuration or connection settings were chosen before generating the source code;
- the data baud rate of the HC-05 (06) module does not match the one selected in the RemoteXY configuration (default 9600);
- the Bluetooth module is defective.

3 — Reply by kartikpkr (New member, Registered: 2020-06-08, Posts: 3) — 2020-06-10 06:35:21

Re: Read timeout error

remotexy wrote: (the troubleshooting checklist above)

All of the above points except the last one (the Bluetooth module may be defective) were checked, and the problem still persists.

To check the Bluetooth module, I connected it to a power source and shorted the RX and TX pins on the module. Then I paired the module with my phone and, using a terminal app, sent a single character; I received the same character back, which is what should happen when the module sends and receives data properly.

I am just a beginner and new to this field, so the problem can be something really basic.

Thanks for the reply


I have a Tomcat based web application. I am intermittently getting the following exception,

Caused by: java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:150)
    at java.net.SocketInputStream.read(SocketInputStream.java:121)
    at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:532)
    at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:501)
    at org.apache.coyote.http11.InternalInputBuffer$InputStreamInputBuffer.doRead(InternalInputBuffer.java:563)
    at org.apache.coyote.http11.filters.IdentityInputFilter.doRead(IdentityInputFilter.java:124)
    at org.apache.coyote.http11.AbstractInputBuffer.doRead(AbstractInputBuffer.java:346)
    at org.apache.coyote.Request.doRead(Request.java:422)
    at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:290)
    at org.apache.tomcat.util.buf.ByteChunk.substract(ByteChunk.java:431)
    at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:315)
    at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:200)
    at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)

Unfortunately I don’t have access to the client, so I am just trying to confirm the various reasons this can happen:

  1. Server is trying to read data from the request, but it’s taking longer than the timeout value for the data to arrive from the client. The timeout here would typically be the Tomcat connector’s connectionTimeout attribute.

  2. Client has a read timeout set, and server is taking longer than that to respond.

  3. One of the threads I went through said this can happen with high concurrency and when keepalive is enabled.

For #1, the initial value I had set was 20 seconds; I have bumped this up to 60 seconds and will test to see if there are any changes.

Meanwhile, if any of you can provide your expert opinion on this, that’ll be really helpful, as would any other reason you can think of that might cause this issue.

asked Jun 13, 2013 at 4:30 by Victor

Server is trying to read data from the request, but its taking longer than the timeout value for the data to arrive from the client. Timeout here would typically be tomcat connector -> connectionTimeout attribute.

Correct.

Client has a read timeout set, and server is taking longer than that to respond.

No. That would cause a timeout at the client.

One of the threads I went through said this can happen with high concurrency and if keepalive is enabled.

That is obviously guesswork, and completely incorrect. It happens if and only if no data arrives within the timeout. Period. Load and keepalive and concurrency have nothing to do with it whatsoever.

It just means the client isn’t sending. You don’t need to worry about it. Browser clients come and go in all sorts of strange ways.

answered Jun 13, 2013 at 5:45 by user207421

Here are the basic instructions:

  1. Locate the «server.xml» file in the «conf» folder beneath Tomcat’s base directory (i.e. %CATALINA_HOME%/conf/server.xml).
  2. Open the file in an editor and search for <Connector.
  3. Locate the relevant connector that is timing out — this will typically be the HTTP connector, i.e. the one with protocol="HTTP/1.1".
  4. If a connectionTimeout value is set on the connector, it may need to be increased — e.g. from 20000 milliseconds (= 20 seconds) to 120000 milliseconds (= 2 minutes). If no connectionTimeout property value is set on the connector, the default is 60 seconds — if this is insufficient, the property may need to be added.
  5. Restart Tomcat
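As a sketch, the resulting connector entry might look like the following; the port and other attributes are placeholders and should match whatever your existing server.xml already defines:

```xml
<!-- conf/server.xml: HTTP connector with connectionTimeout raised to 2 minutes -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="120000"
           redirectPort="8443" />
```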

answered Sep 19, 2018 at 13:10 by Steve Chambers

Connection.Response resp = Jsoup.connect(url) //
                .timeout(20000) //
                .method(Connection.Method.GET) //
                .execute();

Actually, the error occurs when your internet connection is slow, so try increasing the timeout value; then the code should work, as it did for me.

answered Mar 5, 2019 at 15:09 by Rishabh Gupta

I had the same problem while trying to read data from the request body. In my case it occurred randomly, and only for mobile-based client devices, so I increased connectionUploadTimeout to 1 minute as suggested by this link.

answered Dec 4, 2020 at 17:23 by Anand Kumar

I have the same issue. The java.net.SocketTimeoutException: Read timed out error happens on Tomcat under macOS 11.1, but it works perfectly on macOS 10.13: same Tomcat folder, same WAR file. I have tried setting higher timeout values, but nothing I do works.
If I run the same Spring Boot code in a regular Java application (outside Tomcat 9.0.41; I tried other versions too), then it also works.

macOS 11.1 appears to be interfering with Tomcat.

As another test, if I copy the WAR file to an AWS EC2 instance, it works fine there too.

I have spent several days trying to figure this out but cannot resolve it.

Suggestions very welcome! :)

answered Jan 31, 2021 at 13:38 by Morkus

This happened to my application. I was using a single object that was called from multiple functions, and those calls were not thread-safe.

Something like this :

class A {
    Object b; // single shared instance, used from multiple threads

    void function1() {
        b.doSomething();
    }

    void function2() {
        b.doSomething();
    }
}

As these calls were not thread-safe, I was getting these errors:

redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketException: Socket is closed

and

redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out

This is how I fixed it :

class A {
    void function1() {
        Object b = new Object(); // local instance per call, nothing shared
        b.doSomething();
    }

    void function2() {
        Object b = new Object();
        b.doSomething();
    }
}

Hope it helps

answered Jun 22, 2022 at 10:27 by Aditya

It means your server’s response timed out. It can be caused by server configuration or by network conditions.

answered Mar 20, 2021 at 15:32 by jawad zahoor

I am using 11.2 and received timeouts.

I resolved by using the version of jsoup below.

    <dependency>
        <groupId>org.jsoup</groupId>
        <artifactId>jsoup</artifactId>
        <version>1.7.2</version>
        <scope>compile</scope>
    </dependency>

answered Feb 7, 2018 at 14:40 by Shahid Hussain Abbasi

Problem

Error: «java.net.SocketTimeoutException: Read timed out» when trying to complete various tasks in Deployment Manager.

These tasks include trying to view job histories, alter job schedules, and delete files or versions of files from the repository.

Resolving The Problem

The following steps can often resolve these errors:

— Verify there are no general network problems.

— Within the browser-based Deployment Manager (accessed via web browser at /security/login under your context root) there is a setting that controls the system timeout value.

The setting is called «Protocol Timeout». Directions for changing this setting can be found in the Administrator’s Guide appropriate for your version. Increasing this value may allow enough time for the requested operation to complete before the error is generated.

Please also keep in mind that in many cases the requested task will still complete despite the error. If you attempt to delete a file and this error occurs, the server may still finish processing the delete operation after the client interface receives the error.


Nothing can be as annoying as getting timeout error when accessing an application.

One such error is the FTP “Read timed out” error. You are waiting for a successful FTP connection, but in the end it throws a timeout error.

Quite frustrating, right? You know that the connection timed out, but where?

At Bobcares, we help website owners resolve these errors as part of our Dedicated Support Services for web hosts.

Today, we’ll discuss the top 4 reasons for ftp read timed out error and how we fix them.

What is FTP ‘Read Timed out’ error?

The FTP Read Timed out error means that the client or server could not read data from the source and has given up waiting for the requested information.

Users see the complete error message like this:

Failed to upload file
Establishing FTP connection failed: Read timed out
Read timed out

A read timeout error explains little about the cause; it just indicates that an error has occurred. So our Hosting Engineers analyze the FTP logs (/var/log/messages) to identify the origin of the issue.

FTP ‘Read Timed out’ error – Causes and Solutions

Let’s now see the main reasons for FTP read timed out error and how our Server Support Engineers fix them.

1) Firewall blocking Passive ports in server

The standard FTP ports are 20 and 21, and these ports should be opened in the server for proper functioning of FTP.

In addition to that, the FTP server should accept connections to Passive FTP ports which vary from server to server.

But the problem is that many servers accept connections only on the standard ports. If the server is not specifically configured to accept connections on passive FTP ports, the incoming data connections fail.

Consequently, users see FTP Read Timed out error.

Solution

Firstly, our Hosting Engineers confirm that the connectivity to the standard FTP ports 20 and 21 works well using the below command.

telnet hostname 20
telnet hostname 21

Secondly, we verify that the Passive port range is specified in the FTP configuration file.

For example, in a ProFTPd server, we un-comment the following directive in the configuration file /etc/proftpd.conf to specify the Passive port range.

PassivePorts 49152 65535

Finally, we open this passive port range in the server firewall.

For example, in a Linux server, we open the passive port range in firewall using the below command.

iptables -A INPUT -p tcp --match multiport --dports xxxxx:yyyyy -j ACCEPT

Here, xxxxx is the starting port, and yyyyy is the ending port in the Passive port range.


2) FTP client set to use Active mode

Another common reason for this error is that users enable Active mode in FTP client to transfer the files.

FTP transfers can happen in passive or active mode. However, active mode requires users to configure their PCs to accept incoming connections from the server.

In active mode, the FTP client doesn’t make the data connection to the server. Instead, it tells the server which port it is listening on, and the server connects back to that port.

But this incoming connection looks like a cyber attack to the client-side firewall, which therefore blocks such non-standard connections.

We’ve seen cases in which users have accidentally set their FTP mode to Active, resulting in FTP Read Timed out errors.

Solution

The solution here differs based on the FTP client software in use.

So, our Hosting Engineers first get the FTP client details from the customer. Then we help users navigate their FTP client settings and enable Passive mode.

We always recommend that users enable Passive mode as the default option in their FTP clients.
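As an illustration of forcing passive mode in a scriptable client, here is a sketch using Python's ftplib; the host and credentials are placeholders, not details from this article:

```python
from ftplib import FTP

ftp = FTP()          # no connection is made yet
ftp.set_pasv(True)   # passive mode (ftplib's default, made explicit here)

# Connection steps, commented out because they require a live server:
# ftp.connect("ftp.example.com", 21)
# ftp.login("user", "password")
# ftp.retrlines("LIST")

print("passive mode enabled:", ftp.passiveserver)
```

With passive mode on, the client opens the data connection itself instead of waiting for the server to connect back through its firewall.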


3) Increase connection timeout limit

Usually, the FTP Read Timed out error can occur when users try to upload a relatively large file. This problem is related to the internal timeout settings of the FTP client.

In other words, when users upload a large file, the upload may fail if it does not complete within the predefined connection timeout limit.

Solution

In such cases, our Support Engineers help users to increase the Timeout values in their FTP clients accordingly.

Alternatively, in some cases we completely disable the timeout by setting its value to 0.

4) Intermediate firewall or routers block Passive ports

Most users are unaware of firewalls between their PC and the server: a network administrator or ISP may have set up a third-party firewall, an intermediate firewall, or a firewall on a router.

We’ve seen routers, proxies, etc. block connections through passive ports.

Solution

To resolve this, we ask users to bypass their gateways or routers and establish a direct connection. This helps us determine whether the block exists at an intermediate device.

Once we’ve confirmed that the problem is with an intermediate device, users can work with their network administrator or ISP to configure the intermediate firewall to allow connections to the passive ports.

Conclusion

In short, the FTP Read Timed out error can happen due to server firewall settings, FTP client settings, and more. Today we’ve discussed the top 4 reasons for this error and how our Dedicated Support Engineers fix them.


Net::OpenTimeout’s good-for-nothing brother.

Net::ReadTimeout is raised when a chunk of data cannot be read within a specified amount of time.

# read_timeout_1.rb
require 'net/http'

# create a new http connection object, the connection isn't made yet
c = Net::HTTP.new("www.example.com")

# set the read timeout to 1ms
# i.e. if we can't read the response or a chunk within 1ms this will cause a
# Net::ReadTimeout error when the request is made
c.read_timeout = 0.001

# make a GET request after opening a connection
response = c.request_get("/index.html")

# print the response status code
puts "RESPONSE STATUS CODE: #{response.code}"

$ ruby read_timeout_1.rb
# => ../net/protocol.rb:176:in `rbuf_fill': Net::ReadTimeout (Net::ReadTimeout)

When we execute this code, Net::ReadTimeout is raised because it takes more than 1 ms for the server to send a response after the connection is set up. It is important to set the read_timeout to a sensible number of seconds/milliseconds based on your requirements. The default value is 60 seconds.

» In the context of HTTP requests

A successful HTTP request has 3 main steps:

  1. Open a TCP connection to the endpoint.
  2. Send a request over the connection.
  3. Read the response written to the connection.

Net::ReadTimeout is raised when step 3 doesn’t succeed within the given time.

This error is not the same as Net::OpenTimeout, which is raised when a connection cannot be set up within a given amount of time (i.e. when step 1 does not succeed within a given time).

If you (or the library you use) don’t define a read_timeout, your code can be stuck for a long time trying to read a response from a slow server when something is wrong with the network or the endpoint. Setting these timeouts is very important for building predictable systems.

When you run into Net::ReadTimeout, you should handle it by retrying the request a few times, or by giving up and showing a helpful error to the user:

# read_timeout_2.rb
require 'net/http'

def get(host, path, retries = 3)
  # create a new http connection object, the connection isn't made yet
  c = Net::HTTP.new(host)

  # set the read timeout to 1ms to force a Net::ReadTimeout in this example
  c.read_timeout = 0.001

  # make a GET request after opening a connection
  response = c.request_get(path)

  # print the response status code
  puts "RESPONSE STATUS CODE: #{response.code}"

rescue Net::ReadTimeout => e
  puts "TRY #{retries}\nERROR: timed out while reading the response: #{e}"
  raise if retries <= 1
  get(host, path, retries - 1)
end

get("www.example.com", "/index.html")
# => prints an error for each try, then raises Net::ReadTimeout

» Too many Net::ReadTimeout errors

Ideally you shouldn’t be getting too many read timeouts if read_timeout is set to a sensible value. If you get too many of these errors, it could indicate that:

  1. You have set a low read_timeout, which can be fixed by increasing the value.
  2. The target endpoint response times have a lot of deviation, in which case you must either send the traffic in a controlled way (by throttling it), or if the target endpoint is in your control, improve its response time.

» Usage in common gems

Most HTTP clients provide a way to configure read_timeout; here are a few of the common ones:

» 1. HTTParty

HTTParty allows you to set a read timeout using the code below:

# use a read_timeout of 100ms
HTTParty.get('http://www.example.com', { read_timeout: 0.1 })

# use it in a custom client
class SaneHTTPClient
  include HTTParty
  read_timeout 1
end

SaneHTTPClient.get("www.example.com")

» 2. Faraday

Faraday allows you to pass the read timeout using the timeout option in the :request options hash; it raises Faraday::TimeoutError on a read timeout.

# set the read timeout to 1ms
conn = Faraday.new(url: "http://www.example.com", request: { timeout: 0.001 })
conn.get("/index.html")
# => Faraday::TimeoutError: Net::ReadTimeout

» 3. REST Client

REST Client offers read_timeout as an option to the RestClient::Request.execute method, which raises RestClient::Exceptions::ReadTimeout on a read timeout.

# set the read timeout to 1ms
RestClient::Request.execute(method: :get, url: 'http://example.com/',
                            read_timeout: 0.001)
# => RestClient::Exceptions::ReadTimeout: Timed out reading data from server

» Pedantic Note

read_timeout usually specifies the time to read a response while making HTTP requests. However, for chunked responses, this timeout applies to reading a single chunk of the response. So, if an HTTP server streams data by sending a chunk every second and the whole response takes 10 minutes, a read_timeout of 2 seconds will not error out, because a chunk arrives every second. Don’t be surprised if a read_timeout of 2 seconds doesn’t raise even after 10 minutes when consuming chunked responses.

» More resources

The Ultimate Guide to Ruby Timeouts is a great article that documents how to add timeouts when using popular gems.

The purpose of this article is to familiarize merchant services industry players with the mechanisms implemented in payment gateway software for the cases when transaction processing becomes impossible due to temporary loss of connection with the “back end” (for instance, a bank) or due to some other errors.

Problems with transaction processing might be caused by one of the two conceptually different reasons: timeouts and errors.

The first potential reason is a so-called timeout. Essentially, timeout means that there is a connection problem.

There are two types of timeouts sometimes referred to as ‘socket timeout (or connection timeout)’ and ‘read timeout’.

The key differences are as follows.

Connection timeout

Connection timeout usually occurs within 5 seconds. It indicates that connection with the back end is impossible and the server to which the data needs to be transferred cannot be reached. Connection issues can be caused by DNS problems, server failure, firewall rules blocking a specific port, or other reasons. In such cases, a backup (or secondary) URL can be used (if available). If no connection can be established through either URL of a given service, further processing of transactions is impossible. It is important to note that no information is communicated to the host server when a connection timeout occurs.

Read timeout

Read timeout usually occurs within 40 to 60 seconds. A read timeout occurs when the socket is open and the connection to the host server is established, the request is sent, but the response from the server is not received in time and cannot be read. Since the authorization request has already been sent, the transaction may have been processed (and approved), but some error may have occurred on the way back from the host server. As there is no clear response, there is a risk of charging the customer a second time (if the transaction is reattempted).

There are two approaches that integrators use to deal with situations like the one described above. These approaches are generally referred to as ‘authorization and capture’ (explicit or implicit capture) and ‘timeout reversal’.

Authorization-and-capture

The basic premise of authorization-and-capture approach is that any authorization processed has to be subsequently captured (confirmed) by an additional request. If timeout occurs, the submitting system does not send out the confirmation (since it never got the response), and, consequently, the host system reverses the authorization (because it has not been confirmed).

The capture operation can be executed in one of two ways: explicitly or implicitly. In case of explicit capture, a separate ‘capture’ request is sent to the host server to confirm a previously successful authorization. In case of implicit capture, the reference number of the previously successful authorization that needs to be captured is included as part of the subsequent authorization, or as part of the final settlement call.

In other words, in case of explicit capture the authorization request is submitted as one message and capture is sent as another separate message, while in case of implicit capture, authorization request is submitted as one message, while capture message (reference number of the successful authorization) is included in the authorization request of the next transaction. The final (end-of-the-day) message includes reference number of the last transaction which needs to be captured (confirmed).

Implicit capture is not recommended when time-initiated host capture is used.

Timeout reversal

Another approach is to explicitly reverse the transaction with the host if a timeout has occurred. When a read timeout occurs, one or more attempts are made to reverse (or ‘void’) the authorization that timed out. When a transaction is submitted and a timeout occurs, it is assumed that the problem is temporary. In some cases up to three reversal attempts are made, each 40 seconds apart. It is assumed that within the next 160 seconds (40 seconds of initial waiting plus 3 reversal intervals) the connection problem will be resolved. If the authorization was successful, the subsequent reversal will void it (so that the customer is not accidentally charged).
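The timeout-reversal flow can be sketched as follows; this is a hypothetical illustration, and the send/reverse callables stand in for real gateway calls rather than any actual payment API:

```python
import time

def authorize_with_reversal(request, send, reverse, attempts=3, wait=40):
    try:
        return send(request)          # normal case: a clear response is received
    except TimeoutError:
        # The authorization may still have been approved upstream, so try
        # to void it rather than re-submit and risk a double charge.
        for attempt in range(attempts):
            try:
                reverse(request)      # void the (possibly approved) authorization
                return None           # reversed safely; caller may retry later
            except TimeoutError:
                time.sleep(wait)      # wait between reversal attempts
        raise                         # still unreachable: flag for manual review
```

If all reversal attempts also time out, the original timeout is re-raised so the transaction can be reconciled manually.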

Conclusions

When a company integrates with a payment system (especially if it plans to submit large transaction volumes in real time), it needs to consider and test the system’s processing strategy, keeping in mind the issues related to read and connection timeouts.


Problem

I’m storing 570-600 GB in 24 hours (calculated from string log size only), and my td-agent has two responsibilities:

  • storing string logs as files
  • storing string logs as Elasticsearch documents (the indices have mappings)

The file log content is the same as the Elasticsearch documents (a 1:1 match), and I have only 1 Elasticsearch node (master, data, and coordination all in one; this can be scaled out later).

Average CPU usage is 10-20%.
8 cores, 64 GB RAM.

Sometimes, when I push documents into Elasticsearch via fluentd, a ‘read timeout reached’ error occurs.

After a while, a ‘retry succeeded’ log appears and the input succeeds.

But the error and warning above occur every time.

During this time, the buffer is never flushed; it just accumulates as buffer files.

After a while, too many buffer files have been created to feed into Elasticsearch, so the buffer files keep increasing without bound:

buffer files create speed >>>>> buffer files flush speed

It seems to be more than just an input-output throughput problem.
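The imbalance can be illustrated with a toy model (not fluentd code; the rates are made-up numbers): when chunks arrive faster than they can be flushed, the backlog grows linearly and never drains.

```python
from collections import deque

buffer = deque()
produced_per_sec = 5   # chunks arriving from forwarders each second
flushed_per_sec = 2    # chunks Elasticsearch accepts each second

for second in range(10):
    # produce new chunks
    buffer.extend(f"chunk-{second}-{i}" for i in range(produced_per_sec))
    # flush what the destination can accept
    for _ in range(min(flushed_per_sec, len(buffer))):
        buffer.popleft()

print("chunks backlogged after 10s:", len(buffer))  # -> chunks backlogged after 10s: 30
```

A net +3 chunks per second accumulates to a 30-chunk backlog in 10 seconds, which mirrors the ever-growing buffer directory described above.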

The following is from td-agent.log:

2019-08-06 08:55:04 +0900 [warn]: #0 failed to flush the buffer. retry_time=0 next_retry_seconds=2019-08-06 08:55:05 +0900 chunk="58f676f133358450d1be0ce8155101fa" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>"127.0.0.1", :port=>9200, :scheme=>"http"}): read timeout reached"
  2019-08-06 08:55:04 +0900 [warn]: #0 suppressed same stacktrace
2019-08-06 08:55:05 +0900 [warn]: #0 failed to flush the buffer. retry_time=1 next_retry_seconds=2019-08-06 08:55:06 +0900 chunk="58f676f13352848f9efc996bbafcb211" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>"127.0.0.1", :port=>9200, :scheme=>"http"}): read timeout reached"
  2019-08-06 08:55:05 +0900 [warn]: #0 suppressed same stacktrace
2019-08-06 08:55:06 +0900 [warn]: #0 failed to flush the buffer. retry_time=0 next_retry_seconds=2019-08-06 08:55:07 +0900 chunk="58f650dc7d3fb60fde13ad3f6a8a39d6" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>"127.0.0.1", :port=>9200, :scheme=>"http"}): read timeout reached"
  2019-08-06 08:55:06 +0900 [warn]: #0 suppressed same stacktrace
2019-08-06 08:55:08 +0900 [warn]: #0 failed to flush the buffer. retry_time=1 next_retry_seconds=2019-08-06 08:55:09 +0900 chunk="58f650daeb7216733a89c42a421490cf" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>"127.0.0.1", :port=>9200, :scheme=>"http"}): read timeout reached"
  2019-08-06 08:55:08 +0900 [warn]: #0 suppressed same stacktrace
2019-08-06 08:55:16 +0900 [warn]: #0 failed to flush the buffer. retry_time=2 next_retry_seconds=2019-08-06 08:55:18 +0900 chunk="58f676f13352848f9efc996bbafcb211" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>"127.0.0.1", :port=>9200, :scheme=>"http"}): read timeout reached"
  2019-08-06 08:55:16 +0900 [warn]: #0 suppressed same stacktrace
2019-08-06 08:55:16 +0900 [warn]: #0 failed to flush the buffer. retry_time=3 next_retry_seconds=2019-08-06 08:55:20 +0900 chunk="58f676f133358450d1be0ce8155101fa" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>"127.0.0.1", :port=>9200, :scheme=>"http"}): read timeout reached"
  2019-08-06 08:55:16 +0900 [warn]: #0 suppressed same stacktrace
2019-08-06 08:55:20 +0900 [warn]: #0 retry succeeded. chunk_id="58f650dc7d3fb60fde13ad3f6a8a39d6"

Between 08:55:04 and 08:55:16, the error ‘could not push logs to Elasticsearch cluster ({:host=>»127.0.0.1″, :port=>9200, :scheme=>»http»}): read timeout reached’ occurred, so during this time the buffer was not flushed.

During this period, the document count of the destination index did not increase (I checked this via Kibana).

This state was maintained until 08:55:20, and buffer files were continually created.

At 08:55:20, a ‘retry succeeded’ log occurred.

At that point I checked the buffer files directory, and the buffer files had been flushed into the destination index (I checked this via Kibana).

But the buffer files flushed only a little, and after a while ‘read timeout reached’ occurred again.

Steps to replicate

  • td-agent.conf
<match log.**>
        @type copy
        <store>
                @type forest
                subtype elasticsearch

                <template>
                         index_name ${tag}_%Y.%m.%d

                         <buffer tag, time>
                                @type file
                                path /log/${tag}.buffer

                               timekey_use_utc false
                               timekey_zone +0900

                               chunk_limit_size 51m # forward server's max chunk size is '50m', and forwarding every 1 second
                               flush_mode interval
                               flush_interval 1s

                               flush_thread_count 8

                               timekey 1m # chunks per hours ("3600" also available)
                        </buffer>

                        host 127.0.0.1
                        port 9200

#                       reconnect_on_error true  # default : false
#                       reload_on_failure true   # default : false
#                       reload_connections false # default : true
#                       request_timeout 10s # default : 5s

                        <secondary>
                                @type file
                                path /log/failed/${tag}.buffer
                        </secondary>
                </template>

        </store>
</match>

<source>
        @type forward
        port 24224
</source>

Expected Behavior or What you need to ask

How can I handle this error? Any ideas?

I tried request_timeout and

reconnect_on_error true
reload_on_failure true
reload_connections false

but these options haven’t helped so far.

request_timeout was never given a value of more than 10s. I will try a bigger value (for example, 20s), but I am worried about this, so I will try it later.

chunk_limit_size is 51m now, because the log-sending server’s forward chunk limit size is 50m.
(The forwarding server flushes to the receiving server with flush_interval 1s, and the receiving server flushes to Elasticsearch with flush_interval 1s.)

I set the receiving server’s chunk_limit_size to 4m before, but if the forwarding server’s chunk size exceeded 4m, my receiving server’s td-agent produced the following error:

2019-08-06 05:33:27 +0900 [warn]: #0 chunk bytes limit exceeds for an emitted event stream: 5665793bytes

(5665793 bytes = 5.4 MB.)

Using Fluentd and ES plugin versions

Fluentd : 1.3.3 (td-agent 3)
ES : 6.7

  • OS version
    CentOS 7.6

  • Bare Metal or within Docker or Kubernetes or others?
    neither Docker nor Kubernetes

  • Fluentd v0.12 or v0.14/v1.0

    • paste result of fluentd --version or td-agent --version
      td-agent 1.3.3
  • ES plugin 3.x.y/2.x.y or 1.x.y

    • paste boot log of fluentd or td-agent
    • paste result of fluent-gem list, td-agent-gem list or your Gemfile.lock
      elasticsearch (6.1.0)
      elasticsearch-api (6.1.0)
      elasticsearch-transport (6.1.0)
      fluent-plugin-elasticsearch (3.0.1)
  • ES version (optional)
    ES 6.7

  • ES template(s) (optional)
