at=error code=H20 desc="App boot timeout"

A description of the custom error information written to logs when your app experiences an error.

Last updated November 07, 2022

Table of Contents

  • H10 — App crashed
  • H11 — Backlog too deep
  • H12 — Request timeout
  • H13 — Connection closed without response
  • H14 — No web dynos running
  • H15 — Idle connection
  • H16 — (No Longer in Use)
  • H17 — Poorly formatted HTTP response
  • H18 — Server Request Interrupted
  • H19 — Backend connection timeout
  • H20 — App boot timeout
  • H21 — Backend connection refused
  • H22 — Connection limit reached
  • H23 — Endpoint misconfigured
  • H24 — Forced close
  • H25 — HTTP Restriction
  • H26 — Request Error
  • H27 — Client Request Interrupted
  • H28 — Client Connection Idle
  • H31 — Misdirected Request
  • H80 — Maintenance mode
  • H81 — Blank app
  • H82 — You’ve used up your dyno hour pool
  • H83 — Planned Service Degradation
  • H99 — Platform error
  • R10 — Boot timeout
  • R12 — Exit timeout
  • R13 — Attach error
  • R14 — Memory quota exceeded
  • R15 — Memory quota vastly exceeded
  • R16 — Detached
  • R17 — Checksum error
  • R99 — Platform error
  • L10 — Drain buffer overflow
  • L11 — Tail buffer overflow
  • L12 — Local buffer overflow
  • L13 — Local delivery error
  • L14 — Certificate validation error
  • L15 — Tail buffer temporarily unavailable

Whenever your app experiences an error, Heroku will return a standard error page with the HTTP status code 503. To help you debug the underlying error, however, the platform will also add custom error information to your logs. Each type of error gets its own error code, with all HTTP errors starting with the letter H and all runtime errors starting with R. Logging errors start with L.

H10 — App crashed

A crashed web dyno or a boot timeout on the web dyno will present this error.

2010-10-06T21:51:04-07:00 heroku[web.1]: State changed from down to starting
2010-10-06T21:51:07-07:00 app[web.1]: Starting process with command: `bundle exec rails server -p 22020`
2010-10-06T21:51:09-07:00 app[web.1]: >> Using rails adapter
2010-10-06T21:51:09-07:00 app[web.1]: Missing the Rails 2.3.5 gem. Please `gem install -v=2.3.5 rails`, update your RAILS_GEM_VERSION setting in config/environment.rb for the Rails version you do have installed, or comment out RAILS_GEM_VERSION to use the latest version installed.
2010-10-06T21:51:10-07:00 heroku[web.1]: Process exited
2010-10-06T21:51:12-07:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno= connect= service= status=503 bytes=

H11 — Backlog too deep

When HTTP requests arrive faster than your application can process them, they can form a large backlog on a number of routers. When the backlog on a particular router passes a threshold, the router determines that your application isn’t keeping up with its incoming request volume. You’ll see an H11 error for each incoming request as long as the backlog is over this size. The exact value of this threshold may change depending on various factors, such as the number of dynos in your app, response time for individual requests, and your app’s normal request volume.

2010-10-06T21:51:07-07:00 heroku[router]: at=error code=H11 desc="Backlog too deep" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno= connect= service= status=503 bytes=

The solution is to increase your app’s throughput by adding more dynos, tuning your database (for example, adding an index), or making the code itself faster. As always, increasing performance is highly application-specific and requires profiling.

H12 — Request timeout

For more information on request timeouts (including recommendations for resolving them), take a look at our article on the topic.

An HTTP request took longer than 30 seconds to complete. In the example below, a Rails app takes 37 seconds to render the page; the HTTP router returns a 503 prior to Rails completing its request cycle, but the Rails process continues and the completion message shows after the router message.

2010-10-06T21:51:07-07:00 app[web.2]: Processing PostController#list (for 75.36.147.245 at 2010-10-06 21:51:07) [GET]
2010-10-06T21:51:08-07:00 app[web.2]: Rendering template within layouts/application
2010-10-06T21:51:19-07:00 app[web.2]: Rendering post/list
2010-10-06T21:51:37-07:00 heroku[router]: at=error code=H12 desc="Request timeout" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno=web.1 connect=6ms service=30001ms status=503 bytes=0
2010-10-06T21:51:42-07:00 app[web.2]: Completed in 37000ms (View: 27, DB: 21) | 200 OK [http://myapp.heroku.com/]

This 30-second limit is measured by the router, and includes all time spent in the dyno, including the kernel’s incoming connection queue and the app itself.
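To make the failure mode concrete, here is a minimal, hypothetical Node.js handler (not taken from the Heroku docs) that takes longer than the router's 30-second window; the router returns a 503 and logs H12 while the dyno keeps working:

// Hypothetical server.js: a handler slower than the router's 30-second limit.
const http = require('http');

const server = http.createServer((req, res) => {
  // Simulate slow work (e.g. an unindexed database query) taking ~35 seconds.
  setTimeout(() => {
    // By now the router has already returned a 503 and logged H12;
    // the dyno still finishes and writes a response nobody will receive.
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('done, but too late\n');
  }, 35000);
});

server.listen(process.env.PORT || 3000);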

See Request Timeout for more, as well as a language-specific article on this error:

  • H12 — Request Timeout in Ruby (MRI)

H13 — Connection closed without response

This error is thrown when a process in your web dyno accepts a connection but then closes the socket without writing anything to it.

2010-10-06T21:51:37-07:00 heroku[router]: at=error code=H13 desc="Connection closed without response" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno=web.1 connect=3030ms service=9767ms status=503 bytes=0

One example where this might happen is when a Unicorn web server is configured with a timeout shorter than 30s and a request has not been processed by a worker before the timeout happens. In this case, Unicorn closes the connection before any data is written, resulting in an H13.

An example of an H13 can be found here.
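As a further illustration (hypothetical, not the example linked above), a Node.js process that accepts the connection and then closes the socket without writing anything produces the same H13:

// Hypothetical server.js that triggers H13: the socket is accepted, then
// destroyed without a single byte of the response being written.
const http = require('http');

const server = http.createServer((req, res) => {
  // Close the underlying socket immediately instead of responding.
  req.socket.destroy();
});

server.listen(process.env.PORT || 3000);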

H14 — No web dynos running

This is most likely the result of scaling your web dynos down to 0 dynos. To fix it, scale your web dynos to 1 or more dynos:

$ heroku ps:scale web=1

Use the heroku ps command to determine the state of your web dynos.

2010-10-06T21:51:37-07:00 heroku[router]: at=error code=H14 desc="No web processes running" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno= connect= service= status=503 bytes=

H15 — Idle connection

The dyno did not send a full response and was terminated due to 55 seconds of inactivity. For example, the response indicated a Content-Length of 50 bytes which were not sent in time.

2010-10-06T21:51:37-07:00 heroku[router]: at=error code=H15 desc="Idle connection" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno=web.1 connect=1ms service=55449ms status=503 bytes=18
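For illustration, a hypothetical Node.js handler that declares a Content-Length and then stops writing would produce this error; after 55 seconds of inactivity the router terminates the connection:

// Hypothetical server.js that triggers H15: the response promises 50 bytes,
// but the dyno stops writing, so the router times the connection out after
// 55 seconds of inactivity.
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Length': 50 });
  res.write('only 18 bytes sent'); // 18 bytes written, then silence
  // res.end() is never called, so the remaining 32 bytes never arrive.
});

server.listen(process.env.PORT || 3000);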

H16 — (No Longer in Use)

Heroku no longer emits H16 errors.

H17 — Poorly formatted HTTP response

Our HTTP routing stack no longer accepts responses that are missing a reason phrase in the status line. ‘HTTP/1.1 200 OK’ will work with the new router, but ‘HTTP/1.1 200’ will not.

This error message is logged when a router detects a malformed HTTP response coming from a dyno.

2010-10-06T21:51:37-07:00 heroku[router]: at=error code=H17 desc="Poorly formatted HTTP response" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno=web.1 connect=1ms service=1ms status=503 bytes=0

H18 — Server Request Interrupted

An H18 signifies that the socket connected and some data was sent. The error occurs when the socket is destroyed before a complete response is sent, or when the server responds with data before reading the entire body of the incoming request.

2010-10-06T21:51:37-07:00 heroku[router]: sock=backend at=error code=H18 desc="Server Request Interrupted" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno=web.1 connect=1ms service=1ms status=503 bytes=0

An example of an H18 can be found here.

H19 — Backend connection timeout

A router received a connection timeout error after 5 seconds of attempting to open a socket to a web dyno. This is usually a symptom of your app being overwhelmed and failing to accept new connections in a timely manner. For Common Runtime apps, if you have multiple dynos, the router will retry multiple dynos before logging H19 and serving a standard error page. Private Space routers can’t reroute requests to another web dyno.

If your app has a single web dyno, it is possible to see H19 errors if the runtime instance running your web dyno fails and is replaced. Once the failure is detected and the instance is terminated your web dyno will be restarted somewhere else, but in the meantime, H19s may be served as the router fails to establish a connection to your dyno. This can be mitigated by running more than one web dyno.

2010-10-06T21:51:07-07:00 heroku[router]: at=error code=H19 desc="Backend connection timeout" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno=web.1 connect=5001ms service= status=503 bytes=

H20 — App boot timeout

The router will enqueue requests for 75 seconds while waiting for starting processes to reach an “up” state. If no web dynos have reached an “up” state after 75 seconds, the router logs H20 and serves a standard error page.

2010-10-06T21:51:07-07:00 heroku[router]: at=error code=H20 desc="App boot timeout" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno= connect= service= status=503 bytes=

The Ruby on Rails asset pipeline can sometimes fail to run during git push, and will instead attempt to run when your app’s dynos boot. Since the Rails asset pipeline is a slow process, this can cause H20 boot timeout errors.

This error differs from R10 in that the H20 75-second timeout includes platform tasks such as internal state propagation, requests between internal components, slug download, unpacking, and container preparation. The R10 60-second timeout applies solely to application startup tasks.

If your application requires more time to boot, you may use the boot timeout tool to increase the limit. However, in general, slow boot times will make it harder to deploy your application and will make recovery from dyno failures slower, so this should be considered a temporary solution.

H21 — Backend connection refused

A router received a connection refused error when attempting to open a socket to your web process. This is usually a symptom of your app being overwhelmed and failing to accept new connections.

For Common Runtime apps, the router will retry multiple dynos before logging H21 and serving a standard error page. Private Spaces apps are not capable of sending the requests to multiple dynos.

2010-10-06T21:51:07-07:00 heroku[router]: at=error code=H21 desc="Backend connection refused" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno=web.1 connect=1ms service= status=503 bytes=

H22 — Connection limit reached

A routing node has detected an elevated number of HTTP client connections attempting to reach your app. Reaching this threshold most likely means your app is under heavy load and is not responding quickly enough to keep up. The exact value of this threshold may change depending on various factors, such as the number of dynos in your app, response time for individual requests, and your app’s normal request volume.

2010-10-06T21:51:07-07:00 heroku[router]: at=error code=H22 desc="Connection limit reached" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno= connect= service= status=503 bytes=

H23 — Endpoint misconfigured

A routing node has detected a websocket handshake, specifically the ‘Sec-Websocket-Version’ header in the request, that came from an endpoint (upstream proxy) that does not support websockets.

2010-10-06T21:51:07-07:00 heroku[router]: at=error code=H23 desc="Endpoint misconfigured" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno= connect= service= status=503 bytes=

H24 — Forced close

The routing node serving this request was either shut down for maintenance or terminated before the request completed.

2010-10-06T21:51:07-07:00 heroku[router]: at=error code=H24 desc="Forced close" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno=web.1 connect=1ms service=80000ms status= bytes=18

H25 — HTTP Restriction

This error is logged when a routing node detects and blocks a valid HTTP response that is judged risky or too large to be safely parsed. The error comes in four types.

Currently, this functionality is experimental, and is only made available to a subset of applications on the platform.

Invalid content length

The response has multiple content lengths declared within the same response, with varying lengths.

2014-03-20T14:22:00.203382+00:00 heroku[router]: at=error code=H25 desc="HTTP restriction: invalid content length" method=GET path="/" host=myapp.herokuapp.com request_id=3f336f1a-9be3-4791-afe3-596a1f2a481f fwd="17.17.17.17" dyno=web.1 connect=0 service=1 status=502 bytes=537

Oversized cookies

The cookie in the response will be too large to be used again in a request to the Heroku router or SSL endpoints.

2014-03-20T14:18:57.403882+00:00 heroku[router]: at=error code=H25 desc="HTTP restriction: oversized cookie" method=GET path="/" host=myapp.herokuapp.com request_id=90cfbbd2-0397-4bab-828f-193050a076c4 fwd="17.17.17.17" dyno=web.1 connect=0 service=2 status=502 bytes=537

Oversized header

A single header line is deemed too long (over 512kb) and the response is discarded on purpose.

2014-03-20T14:12:28.555073+00:00 heroku[router]: at=error code=H25 desc="HTTP restriction: oversized header" method=GET path="/" host=myapp.herokuapp.com request_id=ab66646e-84eb-47b8-b3bb-2031ecc1bc2c fwd="17.17.17.17" dyno=web.1 connect=0 service=397 status=502 bytes=542

Oversized status line

The status line is judged too long (8kb) and the response is discarded on purpose.

2014-03-20T13:54:44.423083+00:00 heroku[router]: at=error code=H25 desc="HTTP restriction: oversized status line" method=GET path="/" host=myapp.herokuapp.com request_id=208588ac-1a66-44c1-b665-fe60c596241b fwd="17.17.17.17" dyno=web.1 connect=0 service=3 status=502 bytes=537

H26 — Request Error

This error is logged when a request has been identified as belonging to a specific Heroku application, but cannot be delivered entirely to a dyno due to HTTP protocol errors in the request. Multiple possible causes can be identified in the log message.

Unsupported expect header value

The request has an expect header, and its value is not 100-Continue, the only expect value handled by the router. A request with an unsupported expect value is terminated with the status code 417 Expectation Failed.

2014-05-14T17:17:37.456997+00:00 heroku[router]: at=error code=H26 desc="Request Error" cause="unsupported expect header value" method=GET path="/" host=myapp.herokuapp.com request_id=3f336f1a-9be3-4791-afe3-596a1f2a481f fwd="17.17.17.17" dyno= connect= service= status=417 bytes=

Bad header

The request has an HTTP header with a value that is either impossible to parse or not handled by the router, such as connection: ,.

2014-05-14T17:17:37.456997+00:00 heroku[router]: at=error code=H26 desc="Request Error" cause="bad header" method=GET path="/" host=myapp.herokuapp.com request_id=3f336f1a-9be3-4791-afe3-596a1f2a481f fwd="17.17.17.17" dyno= connect= service= status=400 bytes=

Bad chunk

The request has a chunked transfer-encoding, but with a chunk that was invalid or couldn’t be parsed correctly. A request with a bad chunk will be interrupted during transfer to the dyno.

2014-05-14T17:17:37.456997+00:00 heroku[router]: at=error code=H26 desc="Request Error" cause="bad chunk" method=GET path="/" host=myapp.herokuapp.com request_id=3f336f1a-9be3-4791-afe3-596a1f2a481f fwd="17.17.17.17" dyno=web.1 connect=1 service=0 status=400 bytes=537

H27 — Client Request Interrupted

The client socket was closed either in the middle of the request or before a response could be returned. For example, the client closed their browser session before the request was able to complete.

2010-10-06T21:51:37-07:00 heroku[router]: sock=client at=warning code=H27 desc="Client Request Interrupted" method=POST path="/submit/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno=web.1 connect=1ms service=0ms status=499 bytes=0

H28 — Client Connection Idle

The client did not send a full request and was terminated due to 55 seconds of inactivity. For example, the client indicated a Content-Length of 50 bytes which were not sent in time.

2010-10-06T21:51:37-07:00 heroku[router]: at=warning code=H28 desc="Client Connection Idle" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno=web.1 connect=1ms service=55449ms status=499 bytes=18

H31 — Misdirected Request

The client sent a request to the wrong endpoint. This could be because the client used stale DNS information or is accessing the app through a CDN that has stale DNS information. Verify that DNS is correctly configured for your app. If a CDN is configured for the app, consider contacting your CDN provider.

If you and your app users can successfully access the app in a browser (or however the app is used), this may not be cause for concern. The errors may be caused by clients (typically web-crawlers) with cached DNS entries trying to access a now-invalid endpoint or IP address for your app.

You can check which user agent is responsible in the app’s log error message, as shown in the example below:

error code=H31 desc="Misdirected Request" method=GET path="/" host=[host.com] request_id=[guid] fwd="[IP]" dyno= connect= service= status=421 bytes= protocol=http agent="<agent>"

H80 — Maintenance mode

This is not an error, but we give it a code for the sake of completeness. Note the log formatting is the same but without the word “Error”.

2010-10-06T21:51:07-07:00 heroku[router]: at=info code=H80 desc="Maintenance mode" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno= connect= service= status=503 bytes=

H81 — Blank app

No code has been pushed to this application. To get rid of this message, you need to deploy at least once. This is not an error, but we give it a code for the sake of completeness.

2010-10-06T21:51:07-07:00 heroku[router]: at=info code=H81 desc="Blank app" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno= connect= service= status=503 bytes=

H82 — You’ve used up your dyno hour pool

This error indicates that an account has exhausted its monthly dyno hour quota for Free or Eco dynos and its apps running these dynos are sleeping. You can view your app’s Free or Eco dyno usage in the Heroku dashboard.

2015-10-06T21:51:07-07:00 heroku[router]: at=info code=H82 desc="You've used up your dyno hour pool" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno= connect= service= status=503 bytes=

H83 — Planned Service Degradation

This indicates that your app is temporarily unavailable as Heroku makes necessary changes to support the retirement of a feature that has reached end of life. You will likely encounter an error screen when attempting to access your application and see the error below in your logs. Please reference the Heroku Changelog and the Heroku Status page for more details and the timeline of the planned service outage.

2021-10-10T21:51:07-07:00 heroku[router]: at=info code=H83 desc="Service Degradation" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno= connect= service= status=503 bytes=

H99 — Platform error

H99 and R99 are the only error codes that represent errors in the Heroku platform.

This indicates an internal error in the Heroku platform. Unlike all of the other errors, which require action from you to correct, this one does not require any action. Try again in a minute, or check the status site.

2010-10-06T21:51:07-07:00 heroku[router]: at=error code=H99 desc="Platform error" method=GET path="/" host=myapp.herokuapp.com fwd=17.17.17.17 dyno= connect= service= status=503 bytes=

R10 — Boot timeout

A web process took longer than 60 seconds to bind to its assigned $PORT. When this happens, the dyno’s process is killed and the dyno is considered crashed. Crashed dynos are restarted according to the dyno manager’s restart policy.

2011-05-03T17:31:38+00:00 heroku[web.1]: State changed from created to starting
2011-05-03T17:31:40+00:00 heroku[web.1]: Starting process with command: `bundle exec rails server -p 22020 -e production`
2011-05-03T17:32:40+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2011-05-03T17:32:40+00:00 heroku[web.1]: Stopping process with SIGKILL
2011-05-03T17:32:40+00:00 heroku[web.1]: Process exited
2011-05-03T17:32:41+00:00 heroku[web.1]: State changed from starting to crashed

This error is often caused by a process being unable to reach an external resource, such as a database, or by the application doing too much work during startup, such as parsing and evaluating numerous large code dependencies.

Common solutions are to access external resources asynchronously, so they don’t block startup, and to reduce the amount of application code or its dependencies.
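As a concrete, hypothetical illustration for Node.js apps (not taken from the Heroku docs): a web process must bind to the port Heroku assigns via the PORT environment variable. Binding to a hard-coded port means the dyno never binds to $PORT and is killed after 60 seconds with an R10:

// Hypothetical server.js: bind to the port Heroku assigns via $PORT.
// Binding to a hard-coded port (e.g. 3000) would trigger an R10 instead.
const http = require('http');

const port = process.env.PORT || 3000; // fall back to 3000 for local runs

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('hello\n');
}).listen(port, () => {
  console.log(`listening on ${port}`);
});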

If your application requires more time to boot, you may use the boot timeout tool to increase the limit. However, in general, slow boot times will make it harder to deploy your application and will make recovery from dyno failures slower, so this should be considered a temporary solution.

One exception is for apps using the Java buildpack, Gradle buildpack, heroku-deploy toolbelt plugin, or Heroku Maven plugin, which will be allowed 90 seconds to bind to their assigned port.

R12 — Exit timeout

A process failed to exit within 30 seconds of being sent a SIGTERM indicating that it should stop. The process is sent SIGKILL to force an exit.

2011-05-03T17:40:10+00:00 app[worker.1]: Working
2011-05-03T17:40:11+00:00 heroku[worker.1]: Stopping process with SIGTERM
2011-05-03T17:40:11+00:00 app[worker.1]: Ignoring SIGTERM
2011-05-03T17:40:14+00:00 app[worker.1]: Working
2011-05-03T17:40:18+00:00 app[worker.1]: Working
2011-05-03T17:40:21+00:00 heroku[worker.1]: Error R12 (Exit timeout) -> Process failed to exit within 30 seconds of SIGTERM
2011-05-03T17:40:21+00:00 heroku[worker.1]: Stopping process with SIGKILL
2011-05-03T17:40:21+00:00 heroku[worker.1]: Process exited
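A minimal, hypothetical Node.js worker that avoids R12 by handling SIGTERM and exiting promptly (doOneJob is a stand-in for the app’s real unit of work):

// Hypothetical worker.js: finish the current job and exit when Heroku
// sends SIGTERM, instead of ignoring it and being SIGKILLed 30s later (R12).
let shuttingDown = false;

process.on('SIGTERM', () => {
  console.log('SIGTERM received, shutting down after the current job');
  shuttingDown = true;
});

function doOneJob() {
  // Stand-in for real work; resolves quickly so shutdown stays under 30s.
  return new Promise((resolve) => setTimeout(resolve, 1000));
}

async function workLoop() {
  while (!shuttingDown) {
    await doOneJob();
  }
  process.exit(0);
}

workLoop();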

R13 — Attach error

A dyno started with heroku run failed to attach to the invoking client.

2011-06-29T02:13:29+00:00 app[run.3]: Awaiting client
2011-06-29T02:13:30+00:00 heroku[run.3]: State changed from starting to up
2011-06-29T02:13:59+00:00 app[run.3]: Error R13 (Attach error) -> Failed to attach to process
2011-06-29T02:13:59+00:00 heroku[run.3]: Process exited

R14 — Memory quota exceeded

A dyno requires memory in excess of its quota. If this error occurs, the dyno will page to swap space to continue running, which may cause degraded process performance. The R14 error is calculated from the total of swap, RSS, and cache memory.

2011-05-03T17:40:10+00:00 app[worker.1]: Working
2011-05-03T17:40:10+00:00 heroku[worker.1]: Process running mem=1028MB(103.3%)
2011-05-03T17:40:11+00:00 heroku[worker.1]: Error R14 (Memory quota exceeded)
2011-05-03T17:41:52+00:00 app[worker.1]: Working

If you are getting a large number of R14 errors, your application performance is likely severely degraded. Resolving R14 memory errors is language-specific:

  • R14 — Memory Quota Exceeded in Ruby (MRI)
  • Troubleshooting Memory Issues in Java Applications
  • Troubleshooting Node.js Memory Use

R15 — Memory quota vastly exceeded

A dyno requires vastly more memory than its quota and is consuming excessive swap space. If this error occurs, the dyno will be forcibly killed with SIGKILL (which cannot be caught or handled) by the platform. The R15 error is calculated from the total of swap and RSS memory; cache is not included.

2011-05-03T17:40:10+00:00 app[worker.1]: Working
2011-05-03T17:40:10+00:00 heroku[worker.1]: Process running mem=1029MB(201.0%)
2011-05-03T17:40:11+00:00 heroku[worker.1]: Error R15 (Memory quota vastly exceeded)
2011-05-03T17:40:11+00:00 heroku[worker.1]: Stopping process with SIGKILL
2011-05-03T17:40:12+00:00 heroku[worker.1]: Process exited

In Private Spaces, dynos exceeding their memory quota do not use swap space and thus do not emit R14 errors.

Private Space dynos vastly exceeding their memory quota generally will emit R15 errors but occasionally the platform may shut down the dyno before the R15 is sent, causing the error to be dropped. If an R15 is emitted it will only be visible in the app log stream but not in the dashboard Application Metrics interface. Other non-R15 types of errors from Private Space dynos are correctly surfaced in the Application Metrics interface.

For Private Space dynos vastly exceeding their memory quota the platform kills dyno processes consuming large amounts of memory, but may not kill the dyno itself.

R16 — Detached

An attached dyno is continuing to run after being sent SIGHUP when its external connection was closed. This is usually a mistake, though some apps might want to do this intentionally.

2011-05-03T17:32:03+00:00 heroku[run.1]: Awaiting client
2011-05-03T17:32:03+00:00 heroku[run.1]: Starting process with command `bash`
2011-05-03T17:40:11+00:00 heroku[run.1]: Client connection closed. Sending SIGHUP to all processes
2011-05-03T17:40:16+00:00 heroku[run.1]: Client connection closed. Sending SIGHUP to all processes
2011-05-03T17:40:21+00:00 heroku[run.1]: Client connection closed. Sending SIGHUP to all processes
2011-05-03T17:40:26+00:00 heroku[run.1]: Error R16 (Detached) -> An attached process is not responding to SIGHUP after its external connection was closed.
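A hypothetical sketch of a script run with heroku run that exits cleanly when the client connection closes, instead of lingering and triggering R16:

// Hypothetical script run via `heroku run node script.js`: exit when the
// client connection closes (SIGHUP), instead of lingering and logging R16.
process.on('SIGHUP', () => {
  console.log('client connection closed, exiting');
  process.exit(0);
});

// Stand-in for long-running interactive work.
setInterval(() => console.log('still working'), 5000);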

R17 — Checksum error

This indicates an error with runtime slug checksum verification. If the checksum does not match, or there is another problem with the checksum when launching a dyno, an R17 error will occur and the dyno will fail to launch. Check the log stream for details about the error.

2016-08-16T12:39:56.439438+00:00 heroku[web.1]: State changed from provisioning to starting
2016-08-16T12:39:57.110759+00:00 heroku[web.1]: Error R17 (Checksum error) -> Checksum does not match expected value. Expected: SHA256:ed5718e83475c780145609cbb2e4f77ec8076f6f59ebc8a916fb790fbdb1ae64 Actual: SHA256:9ca15af16e06625dfd123ebc3472afb0c5091645512b31ac3dd355f0d8cc42c1
2016-08-16T12:39:57.212053+00:00 heroku[web.1]: State changed from starting to crashed

If this error occurs, try deploying a new release with a correct checksum or rolling back to an older release. Ensure the checksum is formatted and calculated correctly with the SHA256 algorithm. The checksum must start with SHA256: followed by the calculated SHA256 value for the compressed slug. If you did not manually calculate the checksum and the error continues to occur, please contact Heroku support.

R99 — Platform error

R99 and H99 are the only error codes that represent errors in the Heroku platform.

This indicates an internal error in the Heroku platform. Unlike all of the other errors, which require action from you to correct, this one does not require any action. Try again in a minute, or check the status site.

L10 — Drain buffer overflow

2013-04-17T19:04:46+00:00 d.1234-drain-identifier-567 heroku logplex - - Error L10 (output buffer overflow): 500 messages dropped since 2013-04-17T19:04:46+00:00.

The number of log messages being generated has temporarily exceeded the rate at which they can be received by a drain consumer (such as a log management add-on) and Logplex, Heroku’s logging system, has discarded some messages in order to handle the rate difference.

A common cause of L10 error messages is the exhaustion of capacity in a log consumer. If a log management add-on or similar system can only accept so many messages per time period, your application may experience L10s after crossing that threshold.

Another common cause of L10 error messages is a sudden burst of log messages from a dyno. As each line of dyno output (e.g. a line of a stack trace) is a single log message, and Logplex limits the total number of un-transmitted log messages it will keep in memory to 1024 messages, a burst of lines from a dyno can overflow buffers in Logplex. In order to allow the log stream to catch up, Logplex will discard messages where necessary, keeping newer messages in favor of older ones.

You may need to investigate reducing the volume of log lines output by your application (e.g. condense multiple log lines into a smaller, single-line entry). You can also use the heroku logs -t command to get a live feed of logs and find out where your problem might be. A single dyno stuck in a loop that generates log messages can force an L10 error, as can a problematic code path that causes all dynos to generate a multi-line stack trace for some code paths.
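One way to condense a multi-line stack trace into a single log message, sketched here as a hypothetical Node.js helper (not part of any Heroku tooling):

// Hypothetical helper: emit an error as one single-line log message instead
// of a multi-line stack trace, reducing the burst of messages Logplex sees.
function logErrorOneLine(err) {
  const flattened = (err.stack || String(err)).replace(/\n\s*/g, ' | ');
  console.error(JSON.stringify({ level: 'error', msg: flattened }));
}

// Usage:
try {
  JSON.parse('not json');
} catch (err) {
  logErrorOneLine(err);
}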

L11 — Tail buffer overflow

A heroku logs --tail session cannot keep up with the volume of logs generated by the application or log channel, and Logplex has discarded some log lines in order to catch up. To avoid this error you will need to run the command on a faster internet connection (increase the rate at which you can receive logs) or modify your application to reduce the logging volume (decrease the rate at which logs are generated).

2011-05-03T17:40:10+00:00 heroku[logplex]: L11 (Tail buffer overflow) -> This tail session dropped 1101 messages since 2011-05-03T17:35:00+00:00

L12 — Local buffer overflow

The application is producing logs faster than the local delivery process (log-shuttle) can deliver them to Logplex, and some log lines have been discarded in order to keep up. If this error is sustained, you will need to reduce the logging volume of your application.

2013-11-04T21:31:32.125756+00:00 app[log-shuttle]: Error L12: 222 messages dropped since 2013-11-04T21:31:32.125756+00:00.

L13 — Local delivery error

The local log delivery process (log-shuttle) was unable to deliver some logs to Logplex and has discarded them. This can happen during transient network errors or during Logplex service degradation. If this error is sustained, please contact support.

2013-11-04T21:31:32.125756+00:00 app[log-shuttle]: Error L13: 111 messages lost since 2013-11-04T21:31:32.125756+00:00.

L14 — Certificate validation error

The application is configured with a TLS syslog drain that doesn’t have a valid TLS certificate.

You should check that:

  1. You’re not using a self-signed certificate.
  2. The certificate is up to date.
  3. The certificate is signed by a known and trusted CA.
  4. The CN hostname embedded in the certificate matches the hostname being connected to.

2015-09-04T23:28:48+00:00 heroku[logplex]: Error L14 (certificate validation): error="bad certificate" uri="syslog+tls://logs.example.com:6514/"

L15 — Tail buffer temporarily unavailable

The tail buffer that stores the last 1500 lines of your logs is temporarily unavailable. Run heroku logs again. If you still encounter the error, run heroku logs -t to stream your logs (which does not use the tail buffer).

I tried to run my Node.js app (using socket.io). I deployed it with no errors, but when I ran it, I got this:

2013-09-07T01:13:09.697674+00:00 heroku[api]: Release v2 created by vietminhle98@gmail.com
2013-09-07T12:15:57.148172+00:00 heroku[router]: at=info code= desc="Blank app" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=502 bytes=
2013-09-07T12:15:57.672631+00:00 heroku[router]: at=info code= desc="Blank app" method=GET path=/favicon.ico host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=502 bytes=
2013-09-08T01:14:00+00:00 heroku[slug-compiler]: Slug compilation started
2013-09-08T01:14:28.327510+00:00 heroku[api]: Scale to web=1 by vietminhle98@gmail.com
2013-09-08T01:14:28.351861+00:00 heroku[api]: Add PATH config by vietminhle98@gmail.com
2013-09-08T01:14:28.382733+00:00 heroku[api]: Release v3 created by vietminhle98@gmail.com
2013-09-08T01:14:28.426015+00:00 heroku[api]: Deploy 6be3e0c by vietminhle98@gmail.com
2013-09-08T01:14:28.439849+00:00 heroku[api]: Release v4 created by vietminhle98@gmail.com
2013-09-08T01:14:28+00:00 heroku[slug-compiler]: Slug compilation finished
2013-09-08T01:14:32.477748+00:00 heroku[web.1]: Starting process with command `node server.js`
2013-09-08T01:14:34.751076+00:00 app[web.1]: LISTENING
2013-09-08T01:14:35.314545+00:00 app[web.1]: info: socket.io started
2013-09-08T01:15:34.427477+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2013-09-08T01:15:34.427784+00:00 heroku[web.1]: Stopping process with SIGKILL
2013-09-08T01:15:35.676986+00:00 heroku[web.1]: State changed from starting to crashed
2013-09-08T01:15:35.676986+00:00 heroku[web.1]: State changed from crashed to starting
2013-09-08T01:15:35.663597+00:00 heroku[web.1]: Process exited with status 137
2013-09-08T01:15:37.774955+00:00 heroku[web.1]: Starting process with command `node server.js`
2013-09-08T01:15:38.653254+00:00 app[web.1]: LISTENING
2013-09-08T01:15:38.755820+00:00 app[web.1]: info: socket.io started
2013-09-08T01:15:59.434156+00:00 heroku[router]: at=error code=H20 desc="App boot timeout" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:16:27.246666+00:00 heroku[router]: at=error code=H20 desc="App boot timeout" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:16:38.781151+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2013-09-08T01:16:38.781376+00:00 heroku[web.1]: Stopping process with SIGKILL
2013-09-08T01:16:39.983421+00:00 heroku[web.1]: Process exited with status 137
2013-09-08T01:16:40.011325+00:00 heroku[web.1]: State changed from starting to crashed
2013-09-08T01:16:41.541487+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:18:44.971726+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:18:47.279357+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/favicon.ico host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:20:09.452479+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/favicon.ico host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:20:08.839678+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:26:06.279799+00:00 heroku[web.1]: State changed from crashed to starting
2013-09-08T01:26:08.503114+00:00 heroku[web.1]: Starting process with command `node server.js`
2013-09-08T01:26:09.522676+00:00 app[web.1]: LISTENING
2013-09-08T01:26:09.656509+00:00 app[web.1]: info: socket.io started
2013-09-08T01:27:09.615998+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2013-09-08T01:27:09.616415+00:00 heroku[web.1]: Stopping process with SIGKILL
2013-09-08T01:27:10.895911+00:00 heroku[web.1]: Process exited with status 137
2013-09-08T01:27:10.909789+00:00 heroku[web.1]: State changed from starting to crashed
2013-09-08T01:27:13.347426+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:27:14.033545+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/favicon.ico host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:28:02.549578+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/favicon.ico host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:28:06.335482+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/index.html host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:28:07.041450+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/favicon.ico host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:34:23.479191+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/index.html host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:34:24.942938+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/favicon.ico host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:34:29.678584+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/favicon.ico host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:34:29.090141+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:36:38.073605+00:00 heroku[web.1]: State changed from crashed to starting
2013-09-08T01:36:41.051197+00:00 heroku[web.1]: Starting process with command `node server.js`
2013-09-08T01:36:42.526835+00:00 app[web.1]: LISTENING
2013-09-08T01:36:42.927958+00:00 app[web.1]: info: socket.io started
2013-09-08T01:37:02.906047+00:00 heroku[api]: Scale to web=1 by vietminhle98@gmail.com
2013-09-08T01:37:43.000152+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2013-09-08T01:37:43.000399+00:00 heroku[web.1]: Stopping process with SIGKILL
2013-09-08T01:37:44.751965+00:00 heroku[web.1]: Process exited with status 137
2013-09-08T01:37:44.766078+00:00 heroku[web.1]: State changed from starting to crashed
2013-09-08T01:45:39.155699+00:00 heroku[api]: Starting process with command `rake db:migrate` by vietminhle98@gmail.com
2013-09-08T01:45:41.607954+00:00 heroku[run.2140]: Awaiting client
2013-09-08T01:45:42.584829+00:00 heroku[run.2140]: Starting process with command `rake db:migrate`
2013-09-08T01:45:43.920789+00:00 heroku[run.2140]: Process exited with status 1
2013-09-08T01:45:43.935727+00:00 heroku[run.2140]: State changed from starting to complete
2013-09-08T01:46:55.968060+00:00 heroku[web.1]: State changed from crashed to starting
2013-09-08T01:46:58.347733+00:00 heroku[web.1]: Starting process with command `node server.js`
2013-09-08T01:47:00.425372+00:00 app[web.1]: LISTENING
2013-09-08T01:47:00.582876+00:00 app[web.1]: info: socket.io started
2013-09-08T01:47:59.952712+00:00 heroku[web.1]: Stopping process with SIGKILL
2013-09-08T01:47:59.952459+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2013-09-08T01:48:01.523065+00:00 heroku[web.1]: Process exited with status 137
2013-09-08T01:49:02.687355+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:49:03.561165+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/favicon.ico host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:48:01.532500+00:00 heroku[web.1]: State changed from starting to crashed
2013-09-08T01:50:30.232096+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/favicon.ico host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:50:29.121508+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:50:49.246283+00:00 heroku[web.1]: State changed from crashed to starting
2013-09-08T01:50:51.161871+00:00 heroku[web.1]: Starting process with command `node server.js`
2013-09-08T01:50:51.827825+00:00 app[web.1]: LISTENING
2013-09-08T01:50:51.945835+00:00 app[web.1]: info: socket.io started
2013-09-08T01:51:51.972737+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2013-09-08T01:51:51.972975+00:00 heroku[web.1]: Stopping process with SIGKILL
2013-09-08T01:51:53.141508+00:00 heroku[web.1]: Process exited with status 137
2013-09-08T01:51:53.152332+00:00 heroku[web.1]: State changed from starting to crashed
2013-09-08T01:51:54.484616+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:51:55.899914+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/favicon.ico host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:52:19.791497+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:52:20.616151+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/favicon.ico host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:52:39.086637+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:52:39.688008+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/favicon.ico host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:51:54.508738+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:51:55.281138+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:54:03.778339+00:00 heroku[api]: Starting process with command `bash` by vietminhle98@gmail.com
2013-09-08T01:54:07.822098+00:00 heroku[run.9425]: Awaiting client
2013-09-08T01:54:07.856807+00:00 heroku[run.9425]: Starting process with command `bash`
2013-09-08T01:54:08.924087+00:00 heroku[run.9425]: State changed from starting to up
2013-09-08T01:54:24.533998+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:54:25.693218+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/favicon.ico host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:54:28.098195+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:54:28.648493+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path=/favicon.ico host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=
2013-09-08T01:56:27.247187+00:00 heroku[run.9425]: Process exited with status 1
2013-09-08T01:56:27.257609+00:00 heroku[run.9425]: State changed from up to complete

Those are the complete logs. I scanned through them and found 3 errors:
Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch

at=error code=H20 desc="App boot timeout" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=

at=error code=H10 desc="App crashed" method=GET path=/ host=fnboard.herokuapp.com fwd="122.109.112.187" dyno= connect= service= status=503 bytes=

I don't know why these three errors happen at the same time, and I don't know what's wrong with my app. Please help.

My package.json:

{
  "name": "fnBoard",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "socket.io": "0.9.x"
  },
  "engines": {
    "node": "0.10.x",
    "npm": "1.3.x"
  }
}

Procfile:

web: node server.js

.gitignore:

node_modules
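server.js is not shown above, so this is only a guess, but the repeated "failed to bind to $PORT" R10 errors suggest the server may be listening on a fixed port. A minimal socket.io 0.9.x server that binds to the port Heroku assigns would look roughly like this (hypothetical sketch, not the poster's actual code):

// Hypothetical server.js sketch: bind to the port Heroku assigns via $PORT.
// With a hard-coded port, the process never binds to $PORT and the R10/H20
// errors above follow.
var http = require('http');
var port = process.env.PORT || 5000; // Heroku supplies PORT; 5000 is a local fallback

var server = http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('ok\n');
});

// socket.io 0.9.x attaches to an existing HTTP server via .listen(server)
var io = require('socket.io').listen(server);

io.sockets.on('connection', function (socket) {
  socket.emit('info', 'socket.io started');
});

server.listen(port, function () {
  console.log('LISTENING on ' + port);
});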

Categories

Last updated November 07, 2022

Table of Contents

Whenever your app experiences an error, Heroku will return a standard error page with the HTTP status code 503. To help you debug the underlying error, however, the platform will also add custom error information to your logs. Each type of error gets its own error code, with all HTTP errors starting with the letter H and all runtime errors starting with R. Logging errors start with L.

H10 — App crashed

A crashed web dyno or a boot timeout on the web dyno will present this error.

H11 — Backlog too deep

When HTTP requests arrive faster than your application can process them, they can form a large backlog on a number of routers. When the backlog on a particular router passes a threshold, the router determines that your application isn’t keeping up with its incoming request volume. You’ll see an H11 error for each incoming request as long as the backlog is over this size. The exact value of this threshold may change depending on various factors, such as the number of dynos in your app, response time for individual requests, and your app’s normal request volume.

The solution is to increase your app’s throughput by adding more dynos, tuning your database (for example, adding an index), or making the code itself faster. As always, increasing performance is highly application-specific and requires profiling.

H12 — Request timeout

For more information on request timeouts (including recommendations for resolving them), take a look at our article on the topic.

An HTTP request took longer than 30 seconds to complete. In the example below, a Rails app takes 37 seconds to render the page; the HTTP router returns a 503 prior to Rails completing its request cycle, but the Rails process continues and the completion message shows after the router message.

This 30-second limit is measured by the router, and includes all time spent in the dyno, including the kernel’s incoming connection queue and the app itself.

See Request Timeout for more, as well as a language-specific article on this error:

H13 — Connection closed without response

This error is thrown when a process in your web dyno accepts a connection but then closes the socket without writing anything to it.

One example where this might happen is when a Unicorn web server is configured with a timeout shorter than 30s and a request has not been processed by a worker before the timeout happens. In this case, Unicorn closes the connection before any data is written, resulting in an H13.

H14 — No web dynos running

This is most likely the result of scaling your web dynos down to 0 dynos. To fix it, scale your web dynos to 1 or more dynos:

Use the heroku ps command to determine the state of your web dynos.

H15 — Idle connection

The dyno did not send a full response and was terminated due to 55 seconds of inactivity. For example, the response indicated a Content-Length of 50 bytes which were not sent in time.

H16 — (No Longer in Use)

Heroku no longer emits H16 errors

H17 — Poorly formatted HTTP response

Our HTTP routing stack has no longer accepts responses that are missing a reason phrase in the status line. ‘HTTP/1.1 200 OK’ will work with the new router, but ‘HTTP/1.1 200’ will not.

This error message is logged when a router detects a malformed HTTP response coming from a dyno.

H18 — Server Request Interrupted

An H18 signifies that the socket connected, and some data was sent; The error occurs in cases where the socket was destroyed before sending a complete response, or if the server responds with data before reading the entire body of the incoming request.

H19 — Backend connection timeout

A router received a connection timeout error after 5 seconds of attempting to open a socket to a web dyno. This is usually a symptom of your app being overwhelmed and failing to accept new connections in a timely manner. For Common Runtime apps, if you have multiple dynos, the router will retry multiple dynos before logging H19 and serving a standard error page. Private Space routers can’t reroute requests to another web dyno.

If your app has a single web dyno, it is possible to see H19 errors if the runtime instance running your web dyno fails and is replaced. Once the failure is detected and the instance is terminated your web dyno will be restarted somewhere else, but in the meantime, H19s may be served as the router fails to establish a connection to your dyno. This can be mitigated by running more than one web dyno.

H20 — App boot timeout

The router will enqueue requests for 75 seconds while waiting for starting processes to reach an “up” state. If after 75 seconds, no web dynos have reached an “up” state, the router logs H20 and serves a standard error page.

The Ruby on Rails asset pipeline can sometimes fail to run during git push, and will instead attempt to run when your app’s dynos boot. Since the Rails asset pipeline is a slow process, this can cause H20 boot timeout errors.

This error differs from R10 in that the H20 75-second timeout includes platform tasks such as internal state propagation, requests between internal components, slug download, unpacking, container preparation, etc… The R10 60-second timeout applies solely to application startup tasks.

If your application requires more time to boot, you may use the boot timeout tool to increase the limit. However, in general, slow boot times will make it harder to deploy your application and will make recovery from dyno failures slower, so this should be considered a temporary solution.

H21 — Backend connection refused

A router received a connection refused error when attempting to open a socket to your web process. This is usually a symptom of your app being overwhelmed and failing to accept new connections.

For Common Runtime apps, the router will retry multiple dynos before logging H21 and serving a standard error page. Private Spaces apps are not capable of sending the requests to multiple dynos.

H22 — Connection limit reached

A routing node has detected an elevated number of HTTP client connections attempting to reach your app. Reaching this threshold most likely means your app is under heavy load and is not responding quickly enough to keep up. The exact value of this threshold may change depending on various factors, such as the number of dynos in your app, response time for individual requests, and your app’s normal request volume.

H23 — Endpoint misconfigured

A routing node has detected a websocket handshake, specifically the ‘Sec-Websocket-Version’ header in the request, that came from an endpoint (upstream proxy) that does not support websockets.

H24 — Forced close

The routing node serving this request was either shutdown for maintenance or terminated before the request completed.

H25 — HTTP Restriction

This error is logged when a routing node detects and blocks a valid HTTP response that is judged risky or too large to be safely parsed. The error comes in four types.

Currently, this functionality is experimental, and is only made available to a subset of applications on the platform.

Invalid content length

The response has multiple content lengths declared within the same response, with varying lengths.

Oversized cookies

The cookie in the response will be too large to be used again in a request to the Heroku router or SSL endpoints.

Oversized header

A single header line is deemed too long (over 512kb) and the response is discarded on purpose.

Oversized status line

The status line is judged too long (8kb) and the response is discarded on purpose.

H26 — Request Error

This error is logged when a request has been identified as belonging to a specific Heroku application, but cannot be delivered entirely to a dyno due to HTTP protocol errors in the request. Multiple possible causes can be identified in the log message.

Unsupported expect header value

The request has an expect header, and its value is not 100-Continue , the only expect value handled by the router. A request with an unsupported expect value is terminated with the status code 417 Expectation Failed .

Bad header

The request has an HTTP header with a value that is either impossible to parse, or not handled by the router, such as connection: , .

Bad chunk

The request has a chunked transfer-encoding, but with a chunk that was invalid or couldn’t be parsed correctly. A request with this status code will be interrupted during transfer to the dyno.

H27 — Client Request Interrupted

The client socket was closed either in the middle of the request or before a response could be returned. For example, the client closed their browser session before the request was able to complete.

H28 — Client Connection Idle

The client did not send a full request and was terminated due to 55 seconds of inactivity. For example, the client indicated a Content-Length of 50 bytes which were not sent in time.

2010-10-06T21:51:37-07:00 heroku[router]: at=warning code=H28 desc=»Client Connection Idle» method=GET path=»/» host=myapp.herokuapp.com fwd=17.17.17.17 dyno=web.1 connect=1ms service=55449ms status=499 bytes=18

H31 — Misdirected Request

The client sent a request to the wrong endpoint. This could be because the client used stale DNS information or is accessing the app through a CDN that has stale DNS information. Verify that DNS is correctly configured for your app. If a CDN is configured for the app, consider contacting your CDN provider.

If you and your app users can successfully access the app in a browser (or however the app is used), this may not be cause for concern. The errors may be caused by clients (typically web-crawlers) with cached DNS entries trying to access a now-invalid endpoint or IP address for your app.

You can verify the validity of user agent through the app log error message as shown in the example below:

H80 — Maintenance mode

This is not an error, but we give it a code for the sake of completeness. Note the log formatting is the same but without the word “Error”.

H81 — Blank app

No code has been pushed to this application. To get rid of this message you need to do one deploy. This is not an error, but we give it a code for the sake of completeness.

H82 — You’ve used up your dyno hour pool

This error indicates that an account has exhausted its monthly dyno hour quota for Free or Eco dynos and its apps running these dynos are sleeping. You can view your app’s Free or Eco dyno usage in the Heroku dashboard.

H83 — Planned Service Degradation

This indicates that your app is temporarily unavailable as Heroku makes necessary changes to support the retirement of a feature that has reached end of life. You will likely encounter an error screen when attempting to access your application and see the error below in your logs. Please reference the Heroku Changelog and the Heroku Status page for more details and the timeline of the planned service outage.

H99 — Platform error

H99 and R99 are the only error codes that represent errors in the Heroku platform.

This indicates an internal error in the Heroku platform. Unlike all of the other errors which will require action from you to correct, this one does not require action from you. Try again in a minute, or check the status site.

R10 — Boot timeout

A web process took longer than 60 seconds to bind to its assigned $PORT . When this happens, the dyno’s process is killed and the dyno is considered crashed. Crashed dynos are restarted according to the dyno manager’s restart policy.

This error is often caused by a process being unable to reach an external resource, such as a database, or the application doing too much work, such as parsing and evaluating numerous, large code dependencies, during startup.

Common solutions are to access external resources asynchronously, so they don’t block startup, and to reduce the amount of application code or its dependencies.

If your application requires more time to boot, you may use the boot timeout tool to increase the limit. However, in general, slow boot times will make it harder to deploy your application and will make recovery from dyno failures slower, so this should be considered a temporary solution.

One exception is for apps using the Java buildpack, Gradle buildpack, heroku-deploy toolbelt plugin, or Heroku Maven plugin, which will be allowed 90 seconds to bind to their assigned port.

R12 — Exit timeout

A process failed to exit within 30 seconds of being sent a SIGTERM indicating that it should stop. The process is sent SIGKILL to force an exit.

R13 — Attach error

A dyno started with heroku run failed to attach to the invoking client.

R14 — Memory quota exceeded

A dyno requires memory in excess of its quota. If this error occurs, the dyno will page to swap space to continue running, which may cause degraded process performance. The R14 error is calculated from the total of memory swap, RSS, and cache.

If you are getting a large number of R14 errors, your application performance is likely severely degraded. Resolving R14 memory errors is language-specific.
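
Whatever the language, a useful first step is confirming where and when memory grows. As a rough Node illustration (process.memoryUsage is a standard API; the one-minute interval is an arbitrary choice), you could periodically log memory figures and correlate them with traffic in your log stream:

// Log memory usage once a minute so growth can be correlated with
// traffic patterns in the Heroku log stream.
setInterval(() => {
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  const mb = (n) => Math.round(n / 1024 / 1024);
  console.log(`memory rss=${mb(rss)}MB heapUsed=${mb(heapUsed)}MB heapTotal=${mb(heapTotal)}MB`);
}, 60 * 1000).unref();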

R15 — Memory quota vastly exceeded

A dyno requires vastly more memory than its quota and is consuming excessive swap space. If this error occurs, the dyno will be forcibly killed with SIGKILL (which cannot be caught or handled) by the platform. The R15 error is calculated from the total of memory swap and RSS; cache is not included.

In Private Spaces, dynos exceeding their memory quota do not use swap space and thus do not emit R14 errors.

Private Space dynos vastly exceeding their memory quota generally will emit R15 errors but occasionally the platform may shut down the dyno before the R15 is sent, causing the error to be dropped. If an R15 is emitted it will only be visible in the app log stream but not in the dashboard Application Metrics interface. Other non-R15 types of errors from Private Space dynos are correctly surfaced in the Application Metrics interface.

For Private Space dynos vastly exceeding their memory quota the platform kills dyno processes consuming large amounts of memory, but may not kill the dyno itself.

R16 — Detached

An attached dyno is continuing to run after being sent SIGHUP when its external connection was closed. This is usually a mistake, though some apps might want to do this intentionally.

R17 — Checksum error

This indicates an error with runtime slug checksum verification. If the checksum does not match, or there is another problem with the checksum when launching a dyno, an R17 error will occur and the dyno will fail to launch. Check the log stream for details about the error.

If this error occurs, try deploying a new release with a correct checksum or rolling back to an older release. Ensure the checksum is formatted and calculated correctly with the SHA256 algorithm. The checksum must start with SHA256: followed by the calculated SHA256 value for the compressed slug. If you did not manually calculate the checksum and the error continues to occur, please contact Heroku support.
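
For example, if you compute slug checksums yourself, the expected value is simply the SHA-256 digest of the compressed slug archive with an SHA256: prefix. A small Node sketch, where slug.tgz is a placeholder path:

const crypto = require('crypto');
const fs = require('fs');

// Path to the compressed slug archive; placeholder for illustration.
const slugPath = 'slug.tgz';

const hash = crypto.createHash('sha256');
fs.createReadStream(slugPath)
  .on('data', (chunk) => hash.update(chunk))
  .on('end', () => {
    // Heroku expects the checksum in the form "SHA256:<hex digest>".
    console.log(`SHA256:${hash.digest('hex')}`);
  });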

R99 — Platform error

R99 and H99 are the only error codes that represent errors in the Heroku platform.

This indicates an internal error in the Heroku platform. Unlike all of the other errors which will require action from you to correct, this one does not require action from you. Try again in a minute, or check the status site.

L10 — Drain buffer overflow

The number of log messages being generated has temporarily exceeded the rate at which they can be received by a drain consumer (such as a log management add-on) and Logplex, Heroku’s logging system, has discarded some messages in order to handle the rate difference.

A common cause of L10 error messages is the exhaustion of capacity in a log consumer. If a log management add-on or similar system can only accept so many messages per time period, your application may experience L10s after crossing that threshold.

Another common cause of L10 error messages is a sudden burst of log messages from a dyno. As each line of dyno output (e.g. a line of a stack trace) is a single log message, and Logplex limits the total number of un-transmitted log messages it will keep in memory to 1024 messages, a burst of lines from a dyno can overflow buffers in Logplex. In order to allow the log stream to catch up, Logplex will discard messages where necessary, keeping newer messages in favor of older ones.

You may need to investigate reducing the volume of log lines output by your application (e.g. condense multiple log lines into a smaller, single-line entry). You can also use the heroku logs -t command to get a live feed of logs and find out where your problem might be. A single dyno stuck in a loop that generates log messages can force an L10 error, as can a problematic code path that causes all dynos to generate a multi-line stack trace for some code paths.
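
One way to condense output, shown here only as an illustration (logEvent is a made-up helper, not a Heroku or Logplex API), is to emit a single structured line per event instead of multi-line blocks:

// Emit a single JSON line per event instead of multi-line output,
// so a burst of events produces far fewer log messages.
function logEvent(event, fields = {}) {
  console.log(JSON.stringify({ at: new Date().toISOString(), event, ...fields }));
}

// Example: one line instead of a multi-line stack trace dump.
try {
  throw new Error('boom');
} catch (err) {
  logEvent('request_failed', { error: err.message, path: '/checkout' });
}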

L11 — Tail buffer overflow

A heroku logs --tail session cannot keep up with the volume of logs generated by the application or log channel, and Logplex has discarded some log lines necessary to catch up. To avoid this error, you will need to run the command on a faster internet connection (increase the rate at which you can receive logs) or modify your application to reduce the logging volume (decrease the rate at which logs are generated).

L12 — Local buffer overflow

The application is producing logs faster than the local delivery process (log-shuttle) can deliver them to Logplex, and some log lines have been discarded in order to keep up. If this error is sustained, you will need to reduce the logging volume of your application.

L13 — Local delivery error

The local log delivery process (log-shuttle) was unable to deliver some logs to Logplex and has discarded them. This can happen during transient network errors or during Logplex service degradation. If this error is sustained, please contact support.

L14 — Certificate validation error

The application is configured with a TLS syslog drain that doesn’t have a valid TLS certificate.

You should check the following (a quick verification sketch appears after the list):

  1. You’re not using a self-signed certificate.
  2. The certificate is up to date.
  3. The certificate is signed by a known and trusted CA.
  4. The CN hostname embedded in the certificate matches the hostname being connected to.
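
As a quick way to exercise these checks from code, the sketch below opens a TLS connection to the drain endpoint with verification enabled; it fails for self-signed, expired, untrusted, or hostname-mismatched certificates. The host and port are placeholders for your own drain:

const tls = require('tls');

// Placeholder drain endpoint; substitute your syslog drain's host and port.
const host = 'logs.example.com';
const port = 6514;

const socket = tls.connect({ host, port, servername: host, rejectUnauthorized: true }, () => {
  const cert = socket.getPeerCertificate();
  console.log('certificate ok, valid until', cert.valid_to);
  socket.end();
});

socket.on('error', (err) => {
  // Fires for self-signed, expired, untrusted, or hostname-mismatched certificates.
  console.error('certificate validation failed:', err.message);
});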

L15 — Tail buffer temporarily unavailable

The tail buffer that stores the last 1500 lines of your logs is temporarily unavailable. Run heroku logs again. If you still encounter the error, run heroku logs -t to stream your logs (which does not use the tail buffer).


Hi @freiksenet, thanks for replying! :)

I have checked your guide to deploying on Heroku. I believe I followed it to the letter. I ran

$ heroku buildpacks:set heroku/nodejs
$ heroku buildpacks:add https://github.com/heroku/heroku-buildpack-static.git

in the command line. The project has an app.json file that looks like this:

{
  "buildpacks": [
    {
      "url": "heroku/nodejs"
    },
    {
      "url": "https://github.com/heroku/heroku-buildpack-static"
    }
  ]
}

It also has a scripts section in package.json that looks like this:

  "scripts": {
    "build": "gatsby build",
    "develop": "gatsby develop",
    "start": "npm run develop",
    "format": "prettier --write "src/**/*.js"",
    "test": "echo "Write tests! -> https://gatsby.app/unit-testing"",
    "heroku-postbuild": "gatsby build"
  },

And lastly, it has a static.json that looks like this:

{
  "root": "public/",
  "headers": {
    "/**.js": {
      "Cache-Control": "public, max-age=0, must-revalidate"
    }
  }
}

I believe I had previously checked all the boxes here, and yet it’s still not working.

Production application outage: “H20 — App boot timeout”

Our production application is experiencing a serious outage, with the H20 — App boot timeout error logged.

The evidence strongly points to a problem with the Heroku platform. We need urgent support.

5 answers

Just encountered this. I’m not sure what caused it, but “heroku ps:restart” seemed to fix it.

Created Feb 07.

After several attempts, “heroku ps:restart” solved this issue with a few minutes of downtime. It didn’t help when I tried “heroku restart” alone.

Created Feb 17.

Listening on 127.0.0.1 leads to the code=H20 desc="App boot timeout" problem for us. Changing the listening address to 0.0.0.0 solves the problem.

Also, don’t use your own port; instead use the environment variable PORT, which Heroku passes into your app’s environment. Otherwise, you’ll also get this problem.

Here is our Node code:

const http = require('http');

// The server instance; shown here with a trivial handler for completeness.
const server = http.createServer((req, res) => res.end('ok'));

const { PORT = 3000, LOCAL_ADDRESS = '0.0.0.0' } = process.env;
server.listen(PORT, LOCAL_ADDRESS, () => {
  const address = server.address();
  console.log('server listening at', address);
});

So, try logging your listening address and port and check them first.

Answered Apr 07.

My solution was to CHANGE THE PORT; that’s the general idea.

In my case, I was using Node.js:

const express = require('express');

const app = express();
// process.env.PORT takes precedence; fall back to 3000 for local development.
app.set('port', process.env.PORT || 3000);
app.listen(app.get('port'), () => console.log(`Node server listening on port ${app.get('port')}!`));

Answered May 06 ’20, 21:05.

I just had this problem, and neither heroku restart nor heroku ps:restart helped.

I temporarily upgraded the dyno (forcing a new dyno to take over). That solved the problem.

Created Sep 10.

The H20 error usually means that your application is taking too long to boot after being idled.

The router will enqueue requests for 75 seconds while waiting for starting processes to reach an "up" state. If, after 75 seconds, no web dynos have reached an "up" state, the router logs H20 and serves a standard error page.

You should check whether assets were precompiled, or whether there are other things that could slow down the application’s boot.

Debugging request timeouts:

One cause of request timeouts is an infinite loop in the code. Test locally and see whether you can replicate the problem and fix the bug.

Another possibility is that you are trying to perform a long-running task inside your web process, such as:

  • Sending an email
  • Accessing a remote API
  • Web scraping / crawling
  • Rendering an image or PDF
  • Heavy computation
  • Heavy database usage (slow or numerous queries)

If so, you should move this heavy lifting into a background job that can run asynchronously from your web request.
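
As a sketch of that pattern, here is one possible shape using the Bull queue library backed by Redis; the queue name, job payload, handler names, and REDIS_URL variable are illustrative assumptions rather than anything Heroku prescribes:

const Queue = require('bull');

// A Redis-backed queue; REDIS_URL is assumed to point at a Redis instance.
const emailQueue = new Queue('emails', process.env.REDIS_URL);

// In the web process: enqueue the slow work and respond immediately.
async function handleSignup(req, res) {
  await emailQueue.add({ to: req.body.email, template: 'welcome' });
  res.status(202).end('signup received');
}

// In a separate worker process: perform the slow work off the request path.
emailQueue.process(async (job) => {
  console.log('sending email to', job.data.to); // replace with a real mailer
});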

Another class of timeouts occurs when an external service used by your application is unavailable or overloaded. In that case your web app is very likely to time out unless you move the work to the background. In cases where you must handle these requests during your web request, you should always plan for the failure case.
