Error: "upstream sent too big header while reading response header from upstream"

I am getting these kinds of errors:

2014/05/24 11:49:06 [error] 8376#0: *54031 upstream sent too big header while reading response header from upstream, client: 107.21.193.210, server: aamjanata.com, request: «GET /the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https://aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-ch
ronicles-sponsored-by-gujarat-government/,%20ht

It is always the same: a URL repeated over and over, separated by commas. I can't figure out what is causing this. Does anyone have an idea?

Update: Another error:

http request count is zero while sending response to client

Here is the config. There are other, irrelevant parts; this is the section that was added/edited:

fastcgi_cache_path /var/nginx-cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
proxy_buffer_size   128k;
proxy_buffers   4 256k;
proxy_busy_buffers_size   256k;
    # Upstream to abstract backend connection(s) for PHP.
    upstream php {
            #this should match value of "listen" directive in php-fpm pool
            server unix:/var/run/php5-fpm.sock;
    }

And then in the server block:
set $skip_cache 0;

    # POST requests and urls with a query string should always go to PHP
    if ($request_method = POST) {
            set $skip_cache 1;
    }
    if ($query_string != "") {
            set $skip_cache 1;
    }

    # Don't cache uris containing the following segments
    if ($request_uri ~* "/wp-admin/|/xmlrpc\.php|wp-.*\.php|/feed/|index\.php|sitemap(_index)?\.xml") {
            set $skip_cache 1;
    }

    # Don't use the cache for logged in users or recent commenters
    if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
            set $skip_cache 1;
    }

    location / {
            # This is cool because no php is touched for static content.
            # include the "?$args" part so non-default permalinks don't break when using query string
            try_files $uri $uri/ /index.php?$args;
    }


    location ~ \.php$ {
            try_files $uri /index.php;
            include fastcgi_params;
            fastcgi_pass php;
            fastcgi_read_timeout 3000;

            fastcgi_cache_bypass $skip_cache;
            fastcgi_no_cache $skip_cache;

            fastcgi_cache WORDPRESS;
            fastcgi_cache_valid  60m;
    }

    location ~ /purge(/.*) {
        fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
    }


asked May 24, 2014 at 11:59

Vidyut

Add the following to your conf file

fastcgi_buffers 16 16k; 
fastcgi_buffer_size 32k;
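For context, these directives belong at the http level, or in the server/location block that contains your fastcgi_pass. A minimal sketch of the placement (the surrounding block and the reload step are assumptions; adjust to your layout):

```nginx
http {
    # Buffer for the first chunk of the upstream response,
    # which is where the response headers must fit.
    fastcgi_buffer_size 32k;

    # Buffers for the rest of the response:
    # 16 buffers x 16k = 256k per connection.
    fastcgi_buffers 16 16k;

    # ... the rest of your http-level config ...
}
```

After editing, validate the syntax with nginx -t and reload nginx.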

answered May 24, 2014 at 13:46

Neo


If nginx is running as a proxy / reverse proxy

that is, for users of ngx_http_proxy_module

In addition to fastcgi, the proxy module also buffers the upstream response (including its headers) in temporary buffers.

So you may also need to increase the proxy_buffer_size and the proxy_buffers, or disable buffering entirely (please read the nginx documentation).

Example of proxy buffering configuration

http {
  proxy_buffer_size   128k;
  proxy_buffers   4 256k;
  proxy_busy_buffers_size   256k;
}

Example of disabling your proxy buffer (recommended for long polling servers)

http {
  proxy_buffering off;
}

For more information: Nginx proxy module documentation


answered Dec 18, 2014 at 16:26

amd


Plesk instructions

I combined the top two answers here

In Plesk 12, I had nginx running as a reverse proxy (which I think is the default). So the current top answer doesn’t work as nginx is also being run as a proxy.

I went to Subscriptions | [subscription domain] | Websites & Domains (tab) | [Virtual Host domain] | Web Server Settings.

Then at the bottom of that page you can set the Additional nginx directives which I set to be a combination of the top two answers here:

fastcgi_buffers         16  16k;
fastcgi_buffer_size         32k;
proxy_buffer_size          128k;
proxy_buffers            4 256k;
proxy_busy_buffers_size    256k;

answered Mar 29, 2017 at 12:32

icc97


"upstream sent too big header while reading response header from upstream" is nginx's generic way of saying "I don't like what I'm seeing". Possible causes:

  1. Your upstream server thread crashed
  2. The upstream server sent an invalid header back
  3. The Notice/Warnings sent back from STDERR overflowed their buffer and both it and STDOUT were closed

3: Look at the error logs above the message: is it streaming with logged lines preceding the message, like "PHP message: PHP Notice: Undefined index: ..."?
Example snippet from a loop in my log file:

2015/11/23 10:30:02 [error] 32451#0: *580927 FastCGI sent in stderr: "PHP message: PHP Notice:  Undefined index: Firstname in /srv/www/classes/data_convert.php on line 1090
PHP message: PHP Notice:  Undefined index: Lastname in /srv/www/classes/data_convert.php on line 1090
... // 20 lines of same
PHP message: PHP Notice:  Undefined index: Firstname in /srv/www/classes/data_convert.php on line 1090
PHP message: PHP Notice:  Undefined index: Lastname in /srv/www/classes/data_convert.php on line 1090
PHP message: PHP Notice:  Undef
2015/11/23 10:30:02 [error] 32451#0: *580927 FastCGI sent in stderr: "ta_convert.php on line 1090
PHP message: PHP Notice:  Undefined index: Firstname

You can see in the third line from the bottom that the buffer limit was hit, the line broke off, and the next thread wrote over it. Nginx then closed the connection and returned a 502 to the client.
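If this stderr flood is your case, it helps to measure which notices dominate before fixing the code. A small sketch in POSIX shell (the log path in the usage comment is an assumption; point it at your actual error_log):

```shell
# Tally PHP notices relayed over the FastCGI stderr channel.
# Strips the "on line N" suffix so identical notices group together,
# then prints counts, most frequent first.
summarize_php_notices() {
  grep -o 'PHP Notice:[^"]*' | sed 's/ on line [0-9]*$//' | sort | uniq -c | sort -rn
}

# usage: summarize_php_notices < /var/log/nginx/error.log
```

The top entries tell you which undefined index (or similar) to fix first.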

2: Log all the headers sent per request, review them, and make sure they conform to standards (nginx does not permit an expiry date older than 24 hours to delete/expire a cookie; an invalid Content-Length can be sent because error messages were buffered before the content was counted...). A getallheaders call can usually help out in abstracted code situations (see PHP's getallheaders).

examples include:

<?php
//expire cookie
setcookie ( 'bookmark', '', strtotime('2012-01-01 00:00:00') );
// nginx will refuse this header response, too far past to accept
....
?>

and this:

<?php
header('Content-type: image/jpg');
?>

<?php   //a space was injected into the output above this line
header('Content-length: ' . filesize('image.jpg') );
echo file_get_contents('image.jpg');
// error! the response is now 1-byte longer than header!!
?>

1: Verify, or add script logging, to ensure your thread is reaching the correct endpoint and not exiting before completion.

answered Nov 23, 2015 at 18:29

ppostma1


I have a Django application deployed to Elastic Beanstalk, and I am using Python 3.8 running on 64bit Amazon Linux 2. The following method worked for me (note: the folder structure might be DIFFERENT if you're using earlier Linux versions; for more, see the official documentation).

Make the .platform folder and its sub-directory as shown below:

|-- .ebextensions              # Don't put nginx config here
|   |-- django.config
|-- .platform                  # Make ".platform" folder and its subfolders
|   |-- nginx
|   |   |-- conf.d
|   |   |   |-- proxy.conf

Note that the proxy.conf file should be placed inside the .platform folder, NOT the .ebextensions or .elasticbeanstalk folder. The extension should be .conf, NOT .config.

Inside the proxy.conf file, copy & paste these lines directly:

client_max_body_size 50M;
large_client_header_buffers 4 32k;
fastcgi_buffers 16 32k;
fastcgi_buffer_size 32k;
proxy_buffer_size   128k;
proxy_buffers   4 256k;
proxy_busy_buffers_size   256k;

There is no need to issue a command to restart nginx (on Amazon Linux 2).

Deploy the source code to Elastic Beanstalk again.

answered Jun 5, 2021 at 4:54

camole


Add:

fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
proxy_buffer_size   128k;
proxy_buffers   4 256k;
proxy_busy_buffers_size   256k;

to the server {} block in nginx.conf.

Works for me.

answered Dec 22, 2020 at 15:20

Jorge Cuerdo


We ended up realising that the one server experiencing this had a broken php-fpm config, so PHP errors/warnings/notices that would normally be logged to disk were being sent over the FastCGI socket instead. There appears to be a parsing bug when part of the header gets split across buffer chunks.

So setting php_admin_value[error_log] to something actually writeable and restarting php-fpm was enough to fix the problem.
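For reference, the relevant pool settings look roughly like this (a sketch; the pool file name and the log path are assumptions, e.g. /etc/php5/fpm/pool.d/www.conf or /etc/php-fpm.d/www.conf):

```ini
; in the php-fpm pool config (e.g. www.conf)
; make sure errors go to a file the pool user can actually write
php_admin_flag[log_errors] = on
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
```

Restart php-fpm after changing it, as described above.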

We could reproduce the problem with a smaller script:

<?php
for ($i = 0; $i<$_GET['iterations']; $i++)
    error_log(str_pad("a", $_GET['size'], "a"));
echo "got here\n";

Raising the buffers made the 502s harder to hit, but not impossible. E.g. with the default buffer sizes:

bash-4.1# for it in {30..200..3}; do for size in {100..250..3}; do echo "size=$size iterations=$it $(curl -sv "http://localhost/debug.php?size=$size&iterations=$it" 2>&1 | egrep '^< HTTP')"; done; done | grep 502 | head
size=121 iterations=30 < HTTP/1.1 502 Bad Gateway
size=109 iterations=33 < HTTP/1.1 502 Bad Gateway
size=232 iterations=33 < HTTP/1.1 502 Bad Gateway
size=241 iterations=48 < HTTP/1.1 502 Bad Gateway
size=145 iterations=51 < HTTP/1.1 502 Bad Gateway
size=226 iterations=51 < HTTP/1.1 502 Bad Gateway
size=190 iterations=60 < HTTP/1.1 502 Bad Gateway
size=115 iterations=63 < HTTP/1.1 502 Bad Gateway
size=109 iterations=66 < HTTP/1.1 502 Bad Gateway
size=163 iterations=69 < HTTP/1.1 502 Bad Gateway
[... there would be more here, but I piped through head ...]

With fastcgi_buffers 16 16k; fastcgi_buffer_size 32k; set:

bash-4.1# for it in {30..200..3}; do for size in {100..250..3}; do echo "size=$size iterations=$it $(curl -sv "http://localhost/debug.php?size=$size&iterations=$it" 2>&1 | egrep '^< HTTP')"; done; done | grep 502 | head
size=223 iterations=69 < HTTP/1.1 502 Bad Gateway
size=184 iterations=165 < HTTP/1.1 502 Bad Gateway
size=151 iterations=198 < HTTP/1.1 502 Bad Gateway

So I believe the correct answer is: fix your fpm config so it logs errors to disk.

answered Aug 8, 2017 at 23:20

lyte


I faced the same problem when running a Symfony app with php-fpm and nginx in Docker containers.

After some research I found it was caused by php-fpm's stderr being written into nginx's logs: PHP warnings (which Symfony generates intensively in debug mode) were duplicated in docker logs php-fpm:

[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: ""
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: "NOTICE: PHP message: [debug] Notified event "kernel.response" to listener "SymfonyComponentHttpKernelEventListenerDisallowRobotsIndexingListener::onResponse"."
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: ""
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: "NOTICE: PHP message: [debug] Notified event "kernel.response" to listener "SymfonyComponentHttpKernelEventListenerStreamedResponseListener::onKernelResponse"."
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: ""
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: "NOTICE: PHP message: [debug] Notified event "kernel.finish_request" to listener "SymfonyComponentHttpKernelEventListenerLocaleListener::onKernelFinishRequest"."
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: ""
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: "NOTICE: PHP message: [debug] Notified event "kernel.finish_request" to listener "SymfonyComponentHttpKernelEventListenerRouterListener::onKernelFinishRequest"."
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: ""
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: "NOTICE: PHP message: [debug] Notified event "kernel.finish_request" to listener "SymfonyComponentHttpKernelEventListenerLocaleAwareListener::onKernelFinishRequest"."
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: ""
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: "NOTICE: PHP message: [debug] Notified event "kernel.terminate" to listener "SymfonyComponentHttpKernelEventListenerProfilerListener::onKernelTerminate"."
[09-Jul-2021 12:25:46] WARNING: [pool www] child 38 said into stderr: ""

and docker logs nginx:

2021/07/09 12:25:46 [error] 30#30: *2 FastCGI sent in stderr: "ller" to listener "OblgazAPIAPICommonInfrastructureEventSubscriberLegalAuthenticationChecker::checkAuthentication".

PHP message: [debug] Notified event "kernel.controller_arguments" to listener "SymfonyComponentHttpKernelEventListenerErrorListener::onControllerArguments".

PHP message: [debug] Notified event "kernel.response" to listener "SymfonyComponentHttpKernelEventListenerResponseListener::onKernelResponse".

PHP message: [debug] Notified event "kernel.response" to listener "SymfonyComponentHttpKernelDataCollectorRequestDataCollector::onKernelResponse".

PHP message: [debug] Notified event "kernel.response" to listener "SymfonyComponentHttpKernelEventListenerProfilerListener::onKernelResponse".

PHP message: [debug] Notified event "kernel.response" to listener "SymfonyBundleWebProfilerBundleEventListenerWebDebugToolbarListener::onKernelResponse".

PHP message: [debug] Notified event "kernel.response" to listener "SymfonyComponentHttpKernelEventListenerDisallowRobotsIndexingListener::onResponse".

PHP message: [debug] Notified event "kernel.response" to listener "SymfonyComponentHttpKernelEventListenerStreamedResponseListener::onKernelResponse".

PHP message: [debug] Notified event "kernel.finish_request" to listener "SymfonyComponentHttpKernelEventListenerLocaleListener::onKernelFinishRequest".

PHP message: [debug] Notified event "kernel.finish_request" to listener "SymfonyComponentHttpKernelEventListenerRouterListener::onKernelFinishRequest".

PHP message: [debug] Notified event "kernel.finish_request" to listener "SymfonyComponentHttpKernelEventListenerLocaleAwareListener::onKernelFinishRequest".

PHP message: [debug] Notified event "kernel.exception" to listener "SymfonyComponentHttpKernelEventListenerErrorListener::logKernelException".

PHP message: [debug] Notified event "kernel.exception" to listener "SymfonyComponentHttpKernelEventListenerProfilerListener::onKernelException".

and then nginx logs ended with

2021/07/09 12:25:46 [error] 30#30: *2 upstream sent too big header while reading response header from upstream ...

and I got 502 error.

Increasing fastcgi_buffer_size in the nginx config helped, but that seems more like suppressing the problem than treating it.

A better solution is to stop php-fpm from sending logs over FastCGI. This can be done by setting fastcgi.logging=0 in php.ini (by default it is 1); see the PHP docs.
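For reference, the php.ini change is a single line (0 disables relaying, 1 is the default):

```ini
; php.ini
; stop the FastCGI SAPI from sending PHP log output over the FastCGI connection
fastcgi.logging = 0
```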

After changing it to 0, the problem went away and the nginx logs (docker logs nginx) look much cleaner:

172.18.0.1 - - [09/Jul/2021:12:36:02 +0300] "GET /my/symfony/app HTTP/1.1" 401 73 "-" "PostmanRuntime/7.26.8"
172.18.0.1 - - [09/Jul/2021:12:36:04 +0300] "GET /my/symfony/app HTTP/1.1" 401 73 "-" "PostmanRuntime/7.26.8"

and all php-fpm logs are still in their place in php-fpm log.

answered Jul 9, 2021 at 10:06

Ilia Yatsenko

Increasing the numbers worked for me:

fastcgi_busy_buffers_size 512k;
fastcgi_buffer_size 512k;
fastcgi_buffers 16 512k;

answered Nov 17, 2021 at 9:16

Atila Pehlivan


If you're using the Symfony framework: before messing with the Nginx config, try disabling ChromePHP first.

1 — Open app/config/config_dev.yml

2 — Comment these lines:

#chromephp:
    #type:   chromephp
    #level:  info

ChromePHP packs the debug info, JSON-encoded, into the X-ChromePhp-Data header, which is too big for the default config of nginx with fastcgi.

Source: https://github.com/symfony/symfony/issues/8413#issuecomment-20412848

answered Sep 21, 2017 at 18:08

Lucas Bustamante


This is still the top SO question on Google when searching for this error, so let's bump it.

When getting this error and not wanting to deep-dive into the NGINX settings immediately, you might want to check your outputs to the debug console.
In my case I was outputting loads of text to the FirePHP / Chromelogger console, and since this is all sent as a header, it was causing the overflow.

It might not be needed to change the webserver settings if this error is caused by just sending insane amounts of log messages.
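To find out which header is the oversized one, you can sort a captured header dump by line length. A small sketch (the curl target in the usage comment is a placeholder):

```shell
# Print response headers sorted by length, longest first, so an
# oversized debug header (e.g. X-ChromePhp-Data) stands out.
biggest_headers() {
  awk '{ print length($0), $1 }' | sort -rn | head
}

# usage: curl -sD - -o /dev/null https://example.com/page | biggest_headers
```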

answered Dec 19, 2019 at 9:24

DavidKunz

I am not sure the issue is related to which headers PHP is sending. Make sure that buffering is enabled. The simple way is to create a proxy.conf file:

proxy_redirect          off;
proxy_set_header        Host            $host;
proxy_set_header        X-Real-IP       $remote_addr;
proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size    100m;
client_body_buffer_size 128k;
proxy_connect_timeout   90;
proxy_send_timeout      90;
proxy_read_timeout      90;
proxy_buffering         on;
proxy_buffer_size       128k;
proxy_buffers           4 256k;
proxy_busy_buffers_size 256k;

And a fastcgi.conf file:

fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
fastcgi_param  QUERY_STRING       $query_string;
fastcgi_param  REQUEST_METHOD     $request_method;
fastcgi_param  CONTENT_TYPE       $content_type;
fastcgi_param  CONTENT_LENGTH     $content_length;
fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
fastcgi_param  REQUEST_URI        $request_uri;
fastcgi_param  DOCUMENT_URI       $document_uri;
fastcgi_param  DOCUMENT_ROOT      $document_root;
fastcgi_param  SERVER_PROTOCOL    $server_protocol;
fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
fastcgi_param  SERVER_SOFTWARE    nginx/$nginx_version;
fastcgi_param  REMOTE_ADDR        $remote_addr;
fastcgi_param  REMOTE_PORT        $remote_port;
fastcgi_param  SERVER_ADDR        $server_addr;
fastcgi_param  SERVER_PORT        $server_port;
fastcgi_param  SERVER_NAME        $server_name;
fastcgi_buffers 128 4096k;
fastcgi_buffer_size 4096k;
fastcgi_index  index.php;
fastcgi_param  REDIRECT_STATUS    200;

Next you need to call them in your default config server this way:

http {
  include    /etc/nginx/mime.types;
  include    /etc/nginx/proxy.conf;
  include    /etc/nginx/fastcgi.conf;
  index    index.html index.htm index.php;
  log_format   main '$remote_addr - $remote_user [$time_local]  $status '
    '"$request" $body_bytes_sent "$http_referer" '
    '"$http_user_agent" "$http_x_forwarded_for"';
  #access_log   /logs/access.log  main;
  sendfile     on;
  tcp_nopush   on;
 # ........
}

answered Apr 23, 2020 at 15:05

macherif

In our case, we got this nginx error because our backend generated a redirect response with a very long URL:

HTTP/1.1 301 Moved Permanently 
Location: https://www.example.com/?manyParamsHere...

Out of curiosity, we saved that big URL to a file; its size was 4.4 KB.
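You can measure this yourself on any suspect endpoint by saving the raw response headers and counting the bytes, then comparing against your fastcgi_buffer_size (by default one memory page, 4k or 8k). A sketch; the curl target in the usage comment is a placeholder:

```shell
# Byte size of the header block in a raw HTTP response dump
# (headers end at the first blank line).
header_size() {
  awk '{ print } /^\r?$/ { exit }' "$1" | wc -c
}

# usage: curl -sD /tmp/headers.txt -o /dev/null https://example.com/redirecting-url
#        header_size /tmp/headers.txt
```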

Adding two lines to the config file /etc/nginx/conf.d/some_site.conf helped us to fix this error:

server {
    # ...
    location ~ ^/index\.php(/|$) {
        fastcgi_pass php-upstream;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

        # Add these lines:
        fastcgi_buffer_size 32k;
        fastcgi_buffers 4 32k;
    }
}

Read more about these params in the official nginx documentation.

answered Mar 18, 2021 at 14:34

yesnik


For Symfony projects, try adding this line to your .env file:

SHELL_VERBOSITY=0

https://symfony.com/doc/current/console/verbosity.html

I will just leave this here: I spent a crazy amount of time debugging this in my project, only this particular solution worked 100% for me (for the right reasons), and I had never found this answer related to this topic. Maybe someone will find it helpful.

answered Sep 24, 2021 at 21:28

Michał Moskal

I had this error and I found 3 ways to fix that:

  • Set SHELL_VERBOSITY=0 (or another value < 3) in .env: https://stackoverflow.com/a/69321273/13653732 Note that this disables your PHP logs, which can be useful for development and debugging.
  • Set fastcgi.logging=0 in php.ini. It has the same result as above.
  • Update Symfony from 5.2 to 5.3. I think the old version has a problem with this.

Nginx perceived all of Symfony's PHP logs as errors, even though PHP was working correctly.

I had Nginx 1.17, PHP 8.0.2, PHP-FPM, Symfony 5.2, Xdebug, Docker.

I tried newer versions (Nginx 1.21, PHP 8.0.14) without any result. The problem did not occur with Apache.

I also changed the Nginx configuration, again without any result.

answered Jan 6, 2022 at 14:30

Oleg Dmitrochenko


I am using Symfony; it has a nice exception page (when the ErrorHandler is used), and that page puts the message of your exception into a header of the generated response.

vendor/symfony/error-handler/ErrorRenderer/SerializerErrorRenderer.php

the header is called: X-Debug-Exception

So be careful: if you construct a VERY large exception message, neither nginx nor Chrome (limit 256k) nor curl (~128kb) can display your page, which makes it really hard to debug what is outputting those big headers.

My suggestion would be not to blindly copy-paste increased buffer sizes into your nginx config; they treat the symptom, not the cause.

answered May 5, 2022 at 4:46

pscheit

How to fix the error "upstream sent too big header while reading response header from upstream"?

If you find the text of this error in your nginx logs, this article shows how to fix it.

The error is caused by an insufficient buffer size for passing headers. First, you need to work out what the "upstream" is in your case, i.e. where the buffer needs to be increased.

Depending on how you use nginx, you may need to adjust the buffer sizes for one of the following components: proxy, fastcgi or uwsgi. Or possibly for several of them at once.

By default the buffer size is 4/8 kb, which is usually enough. But when complex forms are submitted, this limit can be exhausted; in that case, experimentally pick a buffer size for your situation (increasing it until the error goes away).

If the problem is related to fastcgi (for example, you run PHP behind nginx), add or change the following parameters in the http section:

http {
  fastcgi_buffers 32 32k;
  fastcgi_buffer_size 64k;
  ...
}

The same applies to proxy and uwsgi:

http {
  proxy_buffers 32 32k;
  proxy_buffer_size 64k;
  uwsgi_buffers 32 32k;
  uwsgi_buffer_size 64k;
}

It may happen that after these edits (and an nginx restart) the error stops appearing in nginx's own logs, but the underlying problem, showing up as a 5xx HTTP error, does not go away. That means nginx is proxying some other piece of software, and you most likely need to solve the same problem there.

The same problem, but caused by Varnish

In my case that software was Varnish, so I will also explain how to fix this for it.

We need to adjust the buffer values set by three parameters: http_req_hdr_len, http_resp_hdr_len and http_max_hdr.

First, you can check the current values using the varnishadm command-line utility (sample output):

[varnish]# varnishadm param.show http_max_hdr
http_max_hdr
        Value is: 512 [header lines]
        Default is: 64
        Minimum is: 32
        Maximum is: 65535
        Maximum number of HTTP header lines we allow in
        {req|resp|bereq|beresp}.http (obj.http is autosized to the
        exact number of headers).
        Cheap, ~20 bytes, in terms of workspace memory.
        Note that the first line occupies five header lines.

In my example the value has already been adjusted (set to 512 instead of the default 64). You need to add the parameter values to DAEMON_OPTS in /etc/varnish/varnish.params, for example:

DAEMON_OPTS="-p http_req_hdr_len=32k -p http_resp_hdr_len=32k -p http_max_hdr=512"



This entry was published on 18.01.2021 at 18:24 in the «Настраиваем сервер» (server setup) section.


In this post I describe a problem I had running IdentityServer 4 behind an Nginx reverse proxy. In my case, I was running Nginx as an ingress controller for a Kubernetes cluster, but the issue is actually not specific to Kubernetes, or IdentityServer — it’s an Nginx configuration issue.

Initially, the Nginx ingress controller appeared to be configured correctly. I could view the IdentityServer home page, and could click login, but when I was redirected to the authorize endpoint (as part of the standard IdentityServer flow), I would get a 502 Bad Gateway error and a blank page.

Looking through the logs, IdentityServer showed no errors — as far as it was concerned there were no problems with the authorize request. However, looking through the Nginx logs revealed this gem (formatted slightly for legibility):

2018/02/05 04:55:21 [error] 193#193: 
    *25 upstream sent too big header while reading response header from upstream, 
client: 
    192.168.1.121, 
server: 
    example.com, 
request: 
  "GET /idsrv/connect/authorize/callback?state=14379610753351226&nonce=9227284121831921&client_id=test.client&redirect_uri=https%3A%2F%2Fexample.com%2Fclient%2F%23%2Fcallback%3F&response_type=id_token%20token&scope=profile%20openid%20email&acr_values=tenant%3Atenant1 HTTP/1.1",
upstream:
  "http://10.32.0.9:80/idsrv/connect/authorize/callback?state=14379610753351226&nonce=9227284121831921&client_id=test.client&redirect_uri=https%3A%2F%2Fexample.com%2F.client%2F%23%

Apparently, this is a common problem with Nginx, and is essentially exactly what the error says. Nginx sometimes chokes on responses with large headers, because its buffer size is smaller than that of some other web servers. When it gets a response with large headers, as was the case for my IdentityServer OpenID Connect callback, it falls over and sends a 502 response.

The solution is to simply increase Nginx’s buffer size. If you’re running Nginx on bare metal you could do this by increasing the buffer size in the config file, something like:

proxy_buffers         8 16k;  # Buffer pool = 8 buffers of 16k
proxy_buffer_size     16k;    # 16k of buffers from pool used for headers

However, in this case, I was working with Nginx as an ingress controller to a Kubernetes cluster. The question was, how do you configure Nginx when it’s running in a container?

How to configure the Nginx ingress controller

Luckily, the Nginx ingress controller is designed for exactly this situation. It uses a ConfigMap of values that are mapped to internal Nginx configuration values. By changing the ConfigMap, you can configure the underlying Nginx Pod.

The Nginx ingress controller only supports changing a subset of options via the ConfigMap approach, but luckily proxy-buffer-size is one such option! There are two things you need to do to customise the ingress:

  1. Deploy the ConfigMap containing your customisations
  2. Point the Nginx ingress controller Deployment to your ConfigMap

I'm just going to show the template changes in this post, assuming you have a cluster created using kubeadm and kubectl.

Creating the ConfigMap

The ConfigMap is one of the simplest resources in Kubernetes; it's essentially just a collection of key-value pairs. The following manifest creates a ConfigMap called nginx-configuration and sets proxy-buffer-size to "16k", to solve the 502 errors I was seeing previously.

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: kube-system
  labels:
    k8s-app: nginx-ingress-controller
data:
  proxy-buffer-size: "16k"

If you save this to a file nginx-configuration.yaml, you can apply it to your cluster using:

kubectl apply -f nginx-configuration.yaml

However, you can’t just apply the ConfigMap and have the ingress controller pick it up automatically — you have to update your Nginx Deployment so it knows which ConfigMap to use.

Configuring the Nginx ingress controller to use your ConfigMap

In order for the ingress controller to use your ConfigMap, you must pass the ConfigMap name (nginx-configuration) as an argument in your deployment. For example:

args:
  - /nginx-ingress-controller
  - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
  - --configmap=$(POD_NAMESPACE)/nginx-configuration

Without this argument, the ingress controller will ignore your ConfigMap. The complete deployment manifest will look something like the following (adapted from the Nginx ingress controller repo)

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx 
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true' 
    spec:
      initContainers:
      - command:
        - sh
        - -c
        - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        image: alpine:3.6
        imagePullPolicy: IfNotPresent
        name: sysctl
        securityContext:
          privileged: true
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
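Applying the updated Deployment and waiting for the new pod to come up uses the usual kubectl commands (the manifest file name here is an assumption):

```shell
# Apply the Deployment and wait until the rollout completes.
kubectl apply -f nginx-ingress-deployment.yaml
kubectl -n kube-system rollout status deployment/nginx-ingress-controller
```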

Summary

While deploying an application to a local Kubernetes cluster, the Nginx ingress controller was returning 502 errors for some requests. This was due to the response headers being too large for Nginx to handle. Increasing the proxy-buffer-size configuration parameter solved the problem. To achieve this with the ingress controller, you must provide a ConfigMap and point your ingress controller to it by passing an additional arg in your Deployment.

While working on a client website based on WooCommerce, I happened to see the checkout page failing with the error message “502 Bad Gateway”. I suspected NGINX was the cause, and that turned out to be true. The NGINX error log read: ‘upstream sent too big header while reading response header from upstream, request: "GET /checkout/ HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm/php-fpm.sock"‘. In this tutorial, I’ll explain what the error means and how to fix it.

What does the error “Upstream sent too big header while reading response header from upstream” mean?

The error suggests that the upstream is sending a response header bigger than the receiving end can handle. But what made the header too big for the server? The page producing the error was the checkout page, with 10 items added to the cart; the cookies and page content were correspondingly large, and that could have resulted in a bigger header size. So how do you find out what the response headers include? That’s simple.

  1. Launch the chrome browser, right click and select Inspect
  2. Click Network tab
  3. Reload the page
  4. Select any of the HTTP requests in the left panel and view its HTTP headers in the right panel.
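If you prefer the command line, curl can dump response headers too. Here is a minimal sketch: the URL is a placeholder, and the printf stands in for a real capture so the measurement step is self-contained.

```shell
# Dump only the response headers of a page (URL is a placeholder):
#   curl -s -D headers.txt -o /dev/null https://example.com/checkout/
# Fake a tiny capture so the size measurement below is runnable as-is.
printf 'HTTP/1.1 200 OK\r\nSet-Cookie: session=abc\r\n\r\n' > headers.txt
wc -c < headers.txt   # total size of the header block in bytes
```

Comparing that byte count against your server’s header buffer limit tells you whether the headers alone can overflow it.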

That’s fine, now you know how to view the HTTP headers. But why did the server fail with the error “Upstream sent too big header while reading response header from upstream”? Well, the answer is that each web server has a maximum header size, and the headers sent exceeded the limit set in the web server. Below are the default maximum header size limits on various web servers.

  • Apache web server – 8K
  • NGINX – 4K to 8K
  • IIS (varies on each version) – 8K to 16K
  • Tomcat (varies on each version) : 8K to 48K.

As the web server I am using is NGINX, the default header size limit is 4K to 8K. By default NGINX uses the system page size, which is 4K on most systems. You can find yours using the command below:

# getconf PAGESIZE
4096

Here’s a snippet that explains NGINX FastCGI response buffer sizes.

By default when Nginx starts receiving a response from a FastCGI backend (such as PHP-FPM) it will buffer the response in memory before delivering it to the client. Any response larger than the set buffer size is saved to a temporary file on disk.

The two parameters that are related to FastCGI response buffering are:

fastcgi_buffers 
fastcgi_buffer_size

fastcgi_buffers – controls the number and memory size of buffer segments used for the payload of the FastCGI response.

fastcgi_buffer_size – controls the size of the buffer used to hold the first part of the FastCGI response, which includes the HTTP response headers.

According to the NGINX documentation, you usually don’t need to adjust the default values of these FastCGI response parameters: NGINX by default uses the system page size of 4KB, which should fit most HTTP response headers. However, that did not seem to be the case for me. The same documentation says that some frameworks push large amounts of cookie data via the Set-Cookie HTTP header, and that can blow out the buffer, resulting in an HTTP 500 error. In such cases, you may need to increase the buffer size to 8k/16k/32k to accommodate larger upstream headers.

How to find the average and maximum FastCGI response sizes received by the web server?

You can find this by analysing the NGINX access log. To do that, run the command below, providing your access_log file as input:

$ awk '($9 ~ /200/) { i++; sum+=$10; max=$10>max?$10:max; } END { printf("Maximum: %d\nAverage: %d\n", max, i?sum/i:0); }' access.log

Sample output on my web server:

Maximum: 3501304
Average: 21065

Note: only HTTP 200 OK responses were considered.
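To see exactly what the one-liner does, here is a self-contained run against a tiny made-up access log (combined log format, where field 9 is the status code and field 10 is the bytes sent):

```shell
# Build a three-line sample log; the 404 line should be skipped by the filter.
printf '%s\n' \
  '1.2.3.4 - - [24/May/2014:11:49:06 +0000] "GET / HTTP/1.1" 200 1000' \
  '1.2.3.4 - - [24/May/2014:11:49:07 +0000] "GET /a HTTP/1.1" 200 3000' \
  '1.2.3.4 - - [24/May/2014:11:49:08 +0000] "GET /b HTTP/1.1" 404 500' > sample.log

# Maximum and average bytes sent across the 200 responses.
awk '($9 ~ /200/) { i++; sum+=$10; max=$10>max?$10:max } END { printf("Maximum: %d\nAverage: %d\n", max, i?sum/i:0) }' sample.log
# prints:
#   Maximum: 3000
#   Average: 2000
```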

From the above output, it’s clear that the average response size is just over 21K. So we need to set the buffer size somewhat larger than the average response, which can be 32K. To do that, open the nginx.conf file and add the lines below inside the PHP location block – location ~ \.php$ { }

fastcgi_buffers 32 32k;
fastcgi_buffer_size 32k;

Note: you might need to set a smaller (or larger) buffer value. I set 32K because the average size was over 21K.
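One thing worth keeping in mind when choosing these values: the FastCGI buffers are allocated per active connection, so a quick back-of-the-envelope check of the worst case is useful. Assuming the 32 x 32k setting above:

```shell
# fastcgi_buffers 32 32k => up to 32 buffers of 32 KB each, per connection.
echo "$((32 * 32)) KB"   # worst-case in-memory buffering per connection
```

With, say, 100 concurrent PHP requests that is up to roughly 100 MB of buffer memory, so avoid raising these values further than your traffic actually needs.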

Learn more about FastCGI response buffers here.
