Directives
Syntax: absolute_redirect on | off;
Default: absolute_redirect on;
Context: http, server, location

This directive appeared in version 1.11.8.

If disabled, redirects issued by nginx will be relative.

See also the server_name_in_redirect and port_in_redirect directives.
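A rough sketch of how this is typically used (the server name is a placeholder):

server {
    listen      80;
    server_name example.com;

    # With absolute redirects disabled, a redirect to a directory is sent
    # as "Location: /dir/" rather than "Location: http://example.com/dir/".
    absolute_redirect off;
}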
Syntax: aio on | off | threads[=pool];
Default: aio off;
Context: http, server, location

This directive appeared in version 0.8.11.

Enables or disables the use of asynchronous file I/O (AIO) on FreeBSD and Linux:

location /video/ {
    aio            on;
    output_buffers 1 64k;
}

On FreeBSD, AIO can be used starting from FreeBSD 4.3.
Prior to FreeBSD 11.0, AIO can either be linked statically into a kernel:

options VFS_AIO

or loaded dynamically as a kernel loadable module:

kldload aio

On Linux, AIO can be used starting from kernel version 2.6.22.
Also, it is necessary to additionally enable directio, or otherwise reading will be blocking:

location /video/ {
    aio            on;
    directio       512;
    output_buffers 1 128k;
}

On Linux, directio can only be used for reading blocks that are aligned on 512-byte boundaries (or 4K for XFS).
The file's unaligned end is read in blocking mode.
The same holds true for byte-range requests and for FLV requests not from the beginning of a file: reading of unaligned data at the beginning and end of the response will be blocking.

When both AIO and sendfile are enabled on Linux, AIO is used for files that are larger than or equal to the size specified in the directio directive, while sendfile is used for files of smaller sizes or when directio is disabled:

location /video/ {
    sendfile on;
    aio      on;
    directio 8m;
}

Finally, files can be read and sent using multi-threading (1.7.11), without blocking a worker process:

location /video/ {
    sendfile on;
    aio      threads;
}

Read and send file operations are offloaded to threads of the specified pool.
If the pool name is omitted, the pool with the name "default" is used.
The pool name can also be set with variables:

aio threads=pool$disk;
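For a pool other than "default" to exist, it has to be declared with the thread_pool directive in the main context; a minimal sketch, assuming two illustrative pools:

# main context
thread_pool default threads=32 max_queue=65536;
thread_pool disk1   threads=16;

server {
    location /video/ {
        sendfile on;
        aio      threads=disk1;   # offload reads for this location to "disk1"
    }
}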
By default, multi-threading is disabled; it should be enabled with the --with-threads configuration parameter.
Currently, multi-threading is compatible only with the epoll, kqueue, and eventport methods.
Multi-threaded sending of files is only supported on Linux.

See also the sendfile directive.
Syntax: aio_write on | off;
Default: aio_write off;
Context: http, server, location

This directive appeared in version 1.9.13.

If aio is enabled, specifies whether it is used for writing files.
Currently, this only works when using aio threads and is limited to writing temporary files with data received from proxied servers.
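A minimal sketch combining the two directives for a proxied location ("backend" is a placeholder upstream):

location /proxied/ {
    aio        threads;
    aio_write  on;               # temporary files with proxied data are written via the pool
    proxy_pass http://backend;
}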
Syntax: alias path;
Default: —
Context: location

Defines a replacement for the specified location.
For example, with the following configuration

location /i/ {
    alias /data/w3/images/;
}

on request of "/i/top.gif", the file /data/w3/images/top.gif will be sent.

The path value can contain variables, except $document_root and $realpath_root.

If alias is used inside a location defined with a regular expression, then such regular expression should contain captures, and alias should refer to these captures (0.7.40), for example:

location ~ ^/users/(.+\.(?:gif|jpe?g|png))$ {
    alias /data/w3/images/$1;
}

When location matches the last part of the directive's value:

location /images/ {
    alias /data/w3/images/;
}

it is better to use the root directive instead:

location /images/ {
    root /data/w3;
}
Syntax: auth_delay time;
Default: auth_delay 0s;
Context: http, server, location

This directive appeared in version 1.17.10.

Delays processing of unauthorized requests with the 401 response code to prevent timing attacks when access is limited by a password, by the result of a subrequest, or by JWT.
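A minimal sketch with basic authentication (the realm and file path are illustrative):

location /admin/ {
    auth_basic           "closed site";
    auth_basic_user_file conf/htpasswd;
    auth_delay           1s;   # delay every 401 response by one second
}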
Syntax: chunked_transfer_encoding on | off;
Default: chunked_transfer_encoding on;
Context: http, server, location

Allows disabling chunked transfer encoding in HTTP/1.1.
It may come in handy when using software failing to support chunked encoding despite the standard's requirement.
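For instance, a sketch that turns it off only where such a client is served (the location is illustrative):

location /legacy/ {
    chunked_transfer_encoding off;   # respond with "Content-Length" instead of chunks
}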
Syntax: client_body_buffer_size size;
Default: client_body_buffer_size 8k|16k;
Context: http, server, location

Sets the buffer size for reading the client request body.
In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file.
By default, the buffer size is equal to two memory pages.
This is 8K on x86, other 32-bit platforms, and x86-64.
It is usually 16K on other 64-bit platforms.
Syntax: client_body_in_file_only on | clean | off;
Default: client_body_in_file_only off;
Context: http, server, location

Determines whether nginx should save the entire client request body into a file.
This directive can be used during debugging, or when using the $request_body_file variable, or the $r->request_body_file method of the ngx_http_perl_module module.

When set to the value on, temporary files are not removed after request processing.

The value clean will cause the temporary files left after request processing to be removed.
Syntax: client_body_in_single_buffer on | off;
Default: client_body_in_single_buffer off;
Context: http, server, location

Determines whether nginx should save the entire client request body in a single buffer.
The directive is recommended when using the $request_body variable, to save the number of copy operations involved.
Syntax: client_body_temp_path path [level1 [level2 [level3]]];
Default: client_body_temp_path client_body_temp;
Context: http, server, location

Defines a directory for storing temporary files holding client request bodies.
Up to a three-level subdirectory hierarchy can be used under the specified directory.
For example, in the following configuration

client_body_temp_path /spool/nginx/client_temp 1 2;

a path to a temporary file might look like this:

/spool/nginx/client_temp/7/45/00000123457
Syntax: client_body_timeout time;
Default: client_body_timeout 60s;
Context: http, server, location

Defines a timeout for reading the client request body.
The timeout is set not for the whole transmission of the request body, but only between two successive read operations.
If a client does not transmit anything within this time, the request is terminated with the 408 (Request Time-out) error.
Syntax: client_header_buffer_size size;
Default: client_header_buffer_size 1k;
Context: http, server

Sets the buffer size for reading the client request header.
For most requests, a buffer of 1K bytes is enough.
However, if a request includes long cookies, or comes from a WAP client, it may not fit into 1K.
If a request line or a request header field does not fit into this buffer, then larger buffers, configured by the large_client_header_buffers directive, are allocated.

If the directive is specified on the server level, the value from the default server can be used.
Details are provided in the "Virtual server selection" section.
Syntax: client_header_timeout time;
Default: client_header_timeout 60s;
Context: http, server

Defines a timeout for reading the client request header.
If a client does not transmit the entire header within this time, the request is terminated with the 408 (Request Time-out) error.
Syntax: client_max_body_size size;
Default: client_max_body_size 1m;
Context: http, server, location

Sets the maximum allowed size of the client request body.
If the size in a request exceeds the configured value, the 413 (Request Entity Too Large) error is returned to the client.
Please be aware that browsers cannot correctly display this error.
Setting size to 0 disables checking of the client request body size.
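A typical sketch for an upload endpoint (the location and limit are illustrative):

location /upload/ {
    client_max_body_size 100m;   # allow request bodies up to 100 megabytes here
}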
Syntax: connection_pool_size size;
Default: connection_pool_size 256|512;
Context: http, server

Allows accurate tuning of per-connection memory allocations.
This directive has minimal impact on performance and should not generally be used.
By default, the size is equal to 256 bytes on 32-bit platforms and 512 bytes on 64-bit platforms.

Prior to version 1.9.8, the default value was 256 on all platforms.
Syntax: default_type mime-type;
Default: default_type text/plain;
Context: http, server, location

Defines the default MIME type of a response.
Mapping of file name extensions to MIME types can be set with the types directive.
Syntax: directio size | off;
Default: directio off;
Context: http, server, location

This directive appeared in version 0.7.7.

Enables the use of the O_DIRECT flag (FreeBSD, Linux), the F_NOCACHE flag (macOS), or the directio() function (Solaris), when reading files that are larger than or equal to the specified size.
The directive automatically disables (0.7.15) the use of sendfile for a given request.
It can be useful for serving large files:

directio 4m;

or when using aio on Linux.
Syntax: directio_alignment size;
Default: directio_alignment 512;
Context: http, server, location

This directive appeared in version 0.8.11.

Sets the alignment for directio.
In most cases, a 512-byte alignment is enough; however, when using XFS under Linux, it needs to be increased to 4K.
Syntax: disable_symlinks off;
        disable_symlinks on | if_not_owner [from=part];
Default: disable_symlinks off;
Context: http, server, location

This directive appeared in version 1.1.15.

Determines how symbolic links should be treated when opening files:

- off — symbolic links in the pathname are allowed and not checked. This is the default behavior.
- on — if any component of the pathname is a symbolic link, access to the file is denied.
- if_not_owner — access to the file is denied if any component of the pathname is a symbolic link, and the link and the object that the link points to have different owners.
- from=part — when checking symbolic links (parameters on and if_not_owner), all components of the pathname are normally checked. Checking of symbolic links in the initial part of the pathname may be avoided by specifying additionally the from=part parameter. In this case, symbolic links are checked only starting from the pathname component that follows the specified initial part. If the value is not an initial part of the pathname checked, the whole pathname is checked as if this parameter was not specified at all. If the value matches the whole file name, symbolic links are not checked. The parameter value can contain variables.

Example:

disable_symlinks on from=$document_root;

This directive is only available on systems that have the openat() and fstatat() interfaces.
Such systems include modern versions of FreeBSD, Linux, and Solaris.

Parameters on and if_not_owner add a processing overhead.

On systems that do not support opening of directories only for search, to use these parameters it is required that worker processes have read permissions for all directories being checked.

The ngx_http_autoindex_module, ngx_http_random_index_module, and ngx_http_dav_module modules currently ignore this directive.
Syntax: error_page code ... [=[response]] uri;
Default: —
Context: http, server, location, if in location

Defines the URI that will be shown for the specified errors.
The uri value can contain variables.

Example:

error_page 404             /404.html;
error_page 500 502 503 504 /50x.html;

This causes an internal redirect to the specified uri, with the client request method changed to "GET" (for all methods other than "GET" and "HEAD").

Furthermore, it is possible to change the response code to another using the "=response" syntax, for example:

error_page 404 =200 /empty.gif;

If an error response is processed by a proxied server or a FastCGI/uwsgi/SCGI/gRPC server, and that server may return different response codes (e.g., 200, 302, 401, or 404), it is possible to respond with the code it returns:

error_page 404 = /404.php;

If an internal redirect should not change the URI and method, it is possible to pass error processing into a named location:

location / {
    error_page 404 = @fallback;
}

location @fallback {
    proxy_pass http://backend;
}

If uri processing leads to an error, the status code of the last occurred error is returned to the client.

It is also possible to use URL redirects for error processing:

error_page 403      http://example.com/forbidden.html;
error_page 404 =301 http://example.com/notfound.html;

In this case, by default, the response code 302 is returned to the client.
It can only be changed to one of the redirect status codes (301, 302, 303, 307, and 308).

The code 307 was not treated as a redirect until versions 1.1.16 and 1.0.13.
The code 308 was not treated as a redirect until version 1.13.0.

These directives are inherited from the previous configuration level if and only if there are no error_page directives defined on the current level.
Syntax: etag on | off;
Default: etag on;
Context: http, server, location

This directive appeared in version 1.3.3.

Enables or disables automatic generation of the "ETag" response header field for static resources.
Syntax: http { ... }
Default: —
Context: main

Provides the configuration file context in which the HTTP server directives are specified.
Syntax: if_modified_since off | exact | before;
Default: if_modified_since exact;
Context: http, server, location

This directive appeared in version 0.7.24.

Specifies how to compare the modification time of a response with the time in the "If-Modified-Since" request header field:

- off — the response is always considered modified (0.7.34);
- exact — exact match;
- before — the modification time of the response is less than or equal to the time in the "If-Modified-Since" request header field.
Syntax: ignore_invalid_headers on | off;
Default: ignore_invalid_headers on;
Context: http, server

If enabled, nginx ignores header fields with invalid names.
Valid names are composed of English letters, digits, hyphens, and possibly underscores (the latter is controlled by the underscores_in_headers directive).

If the directive is specified on the server level, the value from the default server can be used.
Details are provided in the "Virtual server selection" section.
Syntax: internal;
Default: —
Context: location

Specifies that a given location can only be used for internal requests.
For external requests, the client error 404 (Not Found) is returned.
Internal requests are the following:

- requests redirected by the error_page, index, random_index, and try_files directives;
- requests redirected by the "X-Accel-Redirect" response header field from an upstream server;
- subrequests formed by the "include virtual" command of the ngx_http_ssi_module module, by the directives of the ngx_http_addition_module module, and by the auth_request and mirror directives;
- requests changed by the rewrite directive.

Example:

error_page 404 /404.html;

location = /404.html {
    internal;
}

To prevent the processing cycles that can occur with incorrect configurations, the number of internal redirects is limited to ten.
If this limit is reached, the 500 (Internal Server Error) error is returned.
In such cases, the "rewrite or internal redirection cycle" message can be seen in the error log.
Syntax: keepalive_disable none | browser ...;
Default: keepalive_disable msie6;
Context: http, server, location

Disables keep-alive connections with misbehaving browsers.
The browser parameters specify which browsers will be affected.
The value msie6 disables keep-alive connections with old versions of MSIE, once a POST request is received.
The value safari disables keep-alive connections with Safari and Safari-like browsers on macOS and macOS-like operating systems.
The value none enables keep-alive connections with all browsers.

Prior to version 1.1.18, the value safari matched all Safari and Safari-like browsers on all operating systems, and keep-alive connections with them were disabled by default.
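For example, to disable keep-alive both for old MSIE and for Safari-like browsers:

keepalive_disable msie6 safari;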
Syntax: keepalive_requests number;
Default: keepalive_requests 1000;
Context: http, server, location

This directive appeared in version 0.8.0.

Sets the maximum number of requests that can be served through one keep-alive connection.
After the maximum number of requests is made, the connection is closed.

Closing connections periodically is necessary to free per-connection memory allocations.
Therefore, using too high a maximum number of requests could result in excessive memory usage and is not recommended.

Prior to version 1.19.10, the default value was 100.
Syntax: keepalive_time time;
Default: keepalive_time 1h;
Context: http, server, location

This directive appeared in version 1.19.10.

Limits the maximum time during which requests can be processed through one keep-alive connection.
After this time is reached, the connection is closed following the subsequent request processing.
Syntax: keepalive_timeout timeout [header_timeout];
Default: keepalive_timeout 75s;
Context: http, server, location

The first parameter sets a timeout during which a keep-alive client connection will stay open on the server side.
The zero value disables keep-alive client connections.
The optional second parameter sets a value in the "Keep-Alive: timeout=time" response header field.
The two parameters may differ.

The "Keep-Alive: timeout=time" header field is recognized by Mozilla and Konqueror.
MSIE closes keep-alive connections by itself in about 60 seconds.
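For example, a sketch that keeps connections open for 75 seconds on the server side while advertising 60 seconds to clients:

keepalive_timeout 75s 60s;   # sends "Keep-Alive: timeout=60" in responses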
Syntax: large_client_header_buffers number size;
Default: large_client_header_buffers 4 8k;
Context: http, server

Sets the maximum number and size of buffers used for reading a large client request header.
A request line cannot exceed the size of one buffer, or the 414 (Request-URI Too Large) error is returned to the client.
A request header field cannot exceed the size of one buffer as well, or the 400 (Bad Request) error is returned to the client.
Buffers are allocated only on demand.
By default, the buffer size is equal to 8K bytes.
If after the end of request processing a connection is transitioned into the keep-alive state, these buffers are released.

If the directive is specified on the server level, the value from the default server can be used.
Details are provided in the "Virtual server selection" section.
Syntax: limit_except method ... { ... }
Default: —
Context: location

Limits the HTTP methods allowed inside a location.
The method parameter can be one of the following: GET, HEAD, POST, PUT, DELETE, MKCOL, COPY, MOVE, OPTIONS, PROPFIND, PROPPATCH, LOCK, UNLOCK, or PATCH.
Allowing the GET method makes the HEAD method also allowed.
Access to other methods can be limited using the directives of the ngx_http_access_module, ngx_http_auth_basic_module, and ngx_http_auth_jwt_module (1.13.10) modules:

limit_except GET {
    allow 192.168.1.0/32;
    deny  all;
}

Please note that this will limit access to all methods except GET and HEAD.
Syntax: limit_rate rate;
Default: limit_rate 0;
Context: http, server, location, if in location

Limits the rate of response transmission to a client.
The rate is specified in bytes per second.
The zero value disables rate limiting.
The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit.

The parameter value can contain variables (1.17.0).
It may be useful in cases where the rate should be limited depending on a certain condition:

map $slow $rate {
    1 4k;
    2 8k;
}

limit_rate $rate;

The rate limit can also be set in the $limit_rate variable; however, since 1.17.0, this method is not recommended:

server {
    if ($slow) {
        set $limit_rate 4k;
    }
    ...
}

In addition, the rate limit can be set in the "X-Accel-Limit-Rate" header field of a proxied server response.
This capability can be disabled using the proxy_ignore_headers, fastcgi_ignore_headers, uwsgi_ignore_headers, and scgi_ignore_headers directives.
Syntax: limit_rate_after size;
Default: limit_rate_after 0;
Context: http, server, location, if in location

This directive appeared in version 0.8.0.

Sets the initial amount of data after which the further transmission of a response to a client will be rate limited.
The parameter value can contain variables (1.17.0).

Example:

location /flv/ {
    flv;
    limit_rate_after 500k;
    limit_rate       50k;
}
Syntax: lingering_close off | on | always;
Default: lingering_close on;
Context: http, server, location

This directive appeared in versions 1.1.0 and 1.0.6.

Controls how nginx closes client connections.

The default value "on" instructs nginx to wait for and process additional data from a client before fully closing a connection, but only if heuristics suggest that a client may be sending more data.

The value "always" will cause nginx to unconditionally wait for and process additional client data.

The value "off" tells nginx to never wait for more data and close the connection immediately.
This behavior breaks the protocol and should not be used under normal circumstances.

To control closing HTTP/2 connections, the directive must be specified on the server level (1.19.1).
Syntax: lingering_time time;
Default: lingering_time 30s;
Context: http, server, location

When lingering_close is in effect, this directive specifies the maximum time during which nginx will process (read and ignore) additional data coming from a client.
After that, the connection will be closed, even if there will be more data.
Syntax: lingering_timeout time;
Default: lingering_timeout 5s;
Context: http, server, location

When lingering_close is in effect, this directive specifies the maximum waiting time for more client data to arrive.
If data are not received during this time, the connection is closed.
Otherwise, the data are read and ignored, and nginx starts waiting for more data again.
The "wait-read-ignore" cycle is repeated, but no longer than specified by the lingering_time directive.
Syntax: listen address[:port] [parameters];
        listen port [parameters];
        listen unix:path [parameters];
Default: listen *:80 | *:8000;
Context: server

Sets the address and port for IP, or the path for a UNIX-domain socket, on which the server will accept requests.
Both address and port, or only address, or only port can be specified.
An address may also be a hostname, for example:

listen 127.0.0.1:8000;
listen 127.0.0.1;
listen 8000;
listen *:8000;
listen localhost:8000;

IPv6 addresses (0.7.36) are specified in square brackets:

listen [::]:8000;
listen [::1];

UNIX-domain sockets (0.8.21) are specified with the "unix:" prefix:

listen unix:/var/run/nginx.sock;

If only address is given, the port 80 is used.

If the directive is not present, then either *:80 is used if nginx runs with superuser privileges, or *:8000 otherwise.

If the directive has the default_server parameter, the server in which this directive is specified will be the default server for the given address:port pair.
If there are no directives with the default_server parameter, then the default server will be the first server in which the address:port pair is specified.

In versions prior to 0.8.21, this parameter was named simply default.

The ssl parameter (0.7.14) specifies that all connections accepted on this port should work in SSL mode.
This allows for a more compact configuration for a server working in both HTTP and HTTPS modes.

The http2 parameter (1.9.5) configures the port to accept HTTP/2 connections.
Normally, for this to work the ssl parameter should be specified as well, but nginx can also be configured to accept HTTP/2 connections without SSL.

The spdy parameter (1.3.15-1.9.4) allows accepting SPDY connections on this port.
Normally, for this to work the ssl parameter should be specified as well, but nginx can also be configured to accept SPDY connections without SSL.

The proxy_protocol parameter (1.5.12) specifies that all connections accepted on this port should use the PROXY protocol.

The PROXY protocol version 2 is supported since version 1.13.11.

The listen directive can have several additional parameters specific to socket-related system calls.
These parameters can be specified in any listen directive, but only once for a given address:port pair.

In versions prior to 0.8.21, they could only be specified in the listen directive together with the default parameter.

- setfib=number — this parameter (0.8.44) sets the associated routing table, FIB (the SO_SETFIB option), for the listening socket. This currently works only on FreeBSD.
- fastopen=number — enables "TCP Fast Open" for the listening socket (1.5.8) and limits the maximum length of the queue of connections that have not yet completed the three-way handshake. Do not enable this feature unless the server can adequately handle receiving the same SYN packet with data more than once.
- backlog=number — sets the backlog parameter in the listen() call that limits the maximum length of the queue of pending connections. By default, backlog is set to -1 on FreeBSD, DragonFly BSD, and macOS, and to 511 on other platforms.
- rcvbuf=size — sets the receive buffer size (the SO_RCVBUF option) for the listening socket.
- sndbuf=size — sets the send buffer size (the SO_SNDBUF option) for the listening socket.
- accept_filter=filter — sets the name of the accept filter (the SO_ACCEPTFILTER option) for the listening socket that filters incoming connections before passing them to accept(). This works only on FreeBSD and NetBSD 5.0+. Two filters can be used: dataready and httpready.
- deferred — instructs to use a deferred accept() (the TCP_DEFER_ACCEPT socket option) on Linux.
- bind — instructs to make a separate bind() call for a given address:port pair. This is needed because if there are several listen directives with the same port but different addresses, and one of the listen directives listens on all addresses for the given port (*:port), nginx will bind() only to *:port. It should be noted that in this case the getsockname() system call is made to determine the address that accepted the connection. If the setfib, fastopen, backlog, rcvbuf, sndbuf, accept_filter, deferred, ipv6only, reuseport, or so_keepalive parameters are used, then a separate bind() call is always made for the given address:port pair.
- ipv6only=on|off — this parameter (0.7.42) determines (via the IPV6_V6ONLY socket option) whether an IPv6 socket listening on the wildcard address [::] will accept only IPv6 connections, or both IPv6 and IPv4 connections. This parameter is turned on by default. It can only be set once on start. Prior to version 1.3.4, if this parameter was omitted, the operating system's settings were in effect for the socket.
- reuseport — this parameter (1.9.1) instructs to create an individual listening socket for each worker process (using the SO_REUSEPORT socket option on Linux 3.9+ and DragonFly BSD, or SO_REUSEPORT_LB on FreeBSD 12+), allowing the kernel to distribute incoming connections between worker processes. This currently works only on Linux 3.9+, DragonFly BSD, and FreeBSD 12+ (1.15.1). Inappropriate use of this parameter may be unsafe.
- so_keepalive=on|off|[keepidle]:[keepintvl]:[keepcnt] — this parameter (1.1.11) configures the "TCP keepalive" behavior for the listening socket. If this parameter is omitted, the operating system's settings will be in effect for the socket. If it is set to the value "on", the SO_KEEPALIVE option is turned on for the socket. If it is set to the value "off", the SO_KEEPALIVE option is turned off for the socket. Some operating systems support setting TCP keepalive parameters on a per-socket basis using the TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT socket options. On such systems (currently Linux 2.4+, NetBSD 5+, and FreeBSD 9.0-STABLE), they can be configured using the keepidle, keepintvl, and keepcnt parameters. One or two parameters may be omitted, in which case the standard system settings for the corresponding socket option will be in effect. For example, so_keepalive=30m::10 will set the idle timeout (TCP_KEEPIDLE) to 30 minutes, leave the probe interval (TCP_KEEPINTVL) at its system default, and set the probes count (TCP_KEEPCNT) to 10 probes.

Example:

listen 127.0.0.1 default_server accept_filter=dataready backlog=1024;
Syntax: location [ = | ~ | ~* | ^~ ] uri { ... }
        location @name { ... }
Default: —
Context: server, location

Sets the configuration depending on a request URI.
The matching is performed against a normalized URI, after decoding the text encoded in the "%XX" form, resolving references to relative path components "." and "..", and possible compression of two or more adjacent slashes into a single slash.

A location can either be defined by a prefix string, or by a regular expression.
Regular expressions are specified with the preceding "~*" modifier (for case-insensitive matching), or the "~" modifier (for case-sensitive matching).
To find a location matching a given request, nginx first checks locations defined using the prefix strings (prefix locations).
Among them, the location with the longest matching prefix is selected and remembered.
Then regular expressions are checked, in the order of their appearance in the configuration file.
The search of regular expressions terminates on the first match, and the corresponding configuration is used.
If no match with a regular expression is found, then the configuration of the prefix location remembered earlier is used.

location blocks can be nested, with some exceptions mentioned below.

For case-insensitive operating systems such as macOS and Cygwin, matching with prefix strings ignores case (0.7.7).
However, comparison is limited to one-byte locales.

Regular expressions can contain captures (0.7.40) that can later be used in other directives.

If the longest matching prefix location has the "^~" modifier, then regular expressions are not checked.

Also, using the "=" modifier it is possible to define an exact match of URI and location.
If an exact match is found, the search terminates right away.
For example, if a "/" request happens frequently, defining "location = /" will speed up the processing of these requests, as the search terminates right after the first comparison.
Such a location obviously cannot contain nested locations.

In versions from 0.7.1 to 0.8.41, if a request matched the prefix location without the "=" and "^~" modifiers, the search also terminated and regular expressions were not checked.

Let's illustrate the above with an example:

location = / {
    [ configuration A ]
}

location / {
    [ configuration B ]
}

location /documents/ {
    [ configuration C ]
}

location ^~ /images/ {
    [ configuration D ]
}

location ~* \.(gif|jpg|jpeg)$ {
    [ configuration E ]
}

The "/" request will match configuration A, the "/index.html" request will match configuration B, the "/documents/document.html" request will match configuration C, the "/images/1.gif" request will match configuration D, and the "/documents/1.jpg" request will match configuration E.

The "@" prefix defines a named location.
Such a location is not used during regular request processing; instead, it is intended only for redirecting requests into it.
Named locations cannot be nested, and cannot contain nested locations.

If a location is defined by a prefix string that ends with a slash, and requests are processed by one of proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass, or grpc_pass, then special processing is performed.
In response to a request with a URI equal to this string, but without the trailing slash, a permanent redirect with the code 301 will be returned to the requested URI with the slash appended.
If this is not desired, an exact match of the URI and location could be defined like this:

location /user/ {
    proxy_pass http://user.example.com;
}

location = /user {
    proxy_pass http://login.example.com;
}
Syntax: log_not_found on | off;
Default: log_not_found on;
Context: http, server, location

Enables or disables logging of errors about files not found into error_log.
Syntax: log_subrequest on | off;
Default: log_subrequest off;
Context: http, server, location

Enables or disables logging of subrequests into access_log.
Syntax: max_ranges number;
Default: —
Context: http, server, location

This directive appeared in version 1.1.2.

Limits the maximum allowed number of ranges in byte-range requests.
Requests that exceed the limit are processed as if there were no byte ranges specified.
By default, the number of ranges is not limited.
The zero value disables the byte-range support completely.
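For example, to honor at most a single byte range per request:

max_ranges 1;   # multi-range requests are served as if no ranges were specified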
Syntax: merge_slashes on | off;
Default: merge_slashes on;
Context: http, server

Enables or disables compression of two or more adjacent slashes ("/") in a URI into a single slash.

Note that compression is essential for the correct matching of prefix strings and regular expressions.
Without it, the "//scripts/one.php" request would not match

location /scripts/ {
    ...
}

and might be processed as a static file.
So it gets converted to "/scripts/one.php".

Turning the compression off can become necessary if a URI contains base64-encoded names, since base64 uses the "/" character internally.
However, for security considerations, it is better to avoid turning the compression off.

If the directive is specified on the server level, the value from the default server can be used.
Details are provided in the "Virtual server selection" section.
Syntax: msie_padding on | off;
Default: msie_padding on;
Context: http, server, location

Enables or disables adding comments to responses for MSIE clients with a status greater than 400, in order to increase the response size to 512 bytes.
Syntax: msie_refresh on | off;
Default: msie_refresh off;
Context: http, server, location

Enables or disables issuing refreshes instead of redirects for MSIE clients.
Syntax: open_file_cache off;
        open_file_cache max=N [inactive=time];
Default: open_file_cache off;
Context: http, server, location

Configures a cache that can store:

- open file descriptors, their sizes, and modification times;
- information on the existence of directories;
- file lookup errors, such as "file not found", "no read permission", and so on. Caching of errors should be enabled separately by the open_file_cache_errors directive.

The directive has the following parameters:

- max — sets the maximum number of elements in the cache; on cache overflow the least recently used (LRU) elements are removed;
- inactive — defines a time after which an element is removed from the cache if it has not been accessed during this time; by default, it is 60 seconds;
- off — disables the cache.

Example:

open_file_cache          max=1000 inactive=20s;
open_file_cache_valid    30s;
open_file_cache_min_uses 2;
open_file_cache_errors   on;
Syntax: open_file_cache_errors on | off;
Default: open_file_cache_errors off;
Context: http, server, location

Enables or disables caching of file lookup errors by open_file_cache.
Syntax: open_file_cache_min_uses number;
Default: open_file_cache_min_uses 1;
Context: http, server, location

Sets the minimum number of file accesses during the period configured by the inactive parameter of the open_file_cache directive, required for a file descriptor to remain open in the cache.
Syntax: open_file_cache_valid time;
Default: open_file_cache_valid 60s;
Context: http, server, location

Sets the time after which open_file_cache elements should be validated.
Syntax: output_buffers number size;
Default: output_buffers 2 32k;
Context: http, server, location

Sets the number and size of the buffers used for reading a response from a disk.

Prior to version 1.9.5, the default value was 1 32k.
Syntax: port_in_redirect on | off;
Default: port_in_redirect on;
Context: http, server, location

Enables or disables specifying the port in absolute redirects issued by nginx.

The use of the primary server name in redirects is controlled by the server_name_in_redirect directive.
Syntax: postpone_output size;
Default: postpone_output 1460;
Context: http, server, location

If possible, the transmission of data to the client will be postponed until nginx has at least the specified number of bytes to send.
The zero value disables postponing data transmission.
Syntax: read_ahead size;
Default: read_ahead 0;
Context: http, server, location

Sets the amount of pre-reading for the kernel when working with files.

On Linux, the posix_fadvise(0, 0, 0, POSIX_FADV_SEQUENTIAL) system call is used, so the size parameter is ignored there.

On FreeBSD, the fcntl(O_READAHEAD, size) system call, which appeared in FreeBSD 9.0-CURRENT, is used.
FreeBSD 7 has to be patched.
Syntax: recursive_error_pages on | off;
Default: recursive_error_pages off;
Context: http, server, location

Enables or disables doing several redirects using the error_page directive.
The number of such redirects is limited.
Syntax: request_pool_size size;
Default: request_pool_size 4k;
Context: http, server

Allows accurate tuning of per-request memory allocations.
This directive has minimal impact on performance and should not generally be used.
Syntax: reset_timedout_connection on | off;
Default: reset_timedout_connection off;
Context: http, server, location

Enables or disables resetting timed out connections, as well as connections closed with the non-standard code 444 (1.15.2).
The reset is performed as follows.
Before closing a socket, the SO_LINGER option is set on it with a timeout value of 0.
When the socket is then closed, a TCP RST is sent to the client, and all memory occupied by this socket is released.
This helps avoid keeping an already closed socket with filled buffers in the FIN_WAIT1 state for a long time.

It should be noted that timed out keep-alive connections are closed normally.
Syntax: resolver address ... [valid=time] [ipv4=on|off] [ipv6=on|off] [status_zone=zone];
Default: —
Context: http, server, location

Configures name servers used to resolve names of upstream servers into addresses, for example:

resolver 127.0.0.1 [::1]:5353;

An address can be specified as a domain name or IP address, with an optional port (1.3.1, 1.2.2).
If the port is not specified, the port 53 is used.
Name servers are queried in a round-robin fashion.

Before version 1.1.7, only a single name server could be configured.
Specifying name servers using IPv6 addresses is supported starting from versions 1.3.1 and 1.2.2.

By default, nginx will look up both IPv4 and IPv6 addresses while resolving.
If looking up of IPv4 or IPv6 addresses is not desired, the ipv4=off (1.23.1) or ipv6=off parameter can be specified.

Resolving of names into IPv6 addresses is supported starting from version 1.5.8.

By default, nginx caches answers using the TTL value of a response.
The optional valid parameter allows overriding it:

resolver 127.0.0.1 [::1]:5353 valid=30s;

Before version 1.1.9, tuning of the caching time was not possible, and nginx always cached answers for a duration of 5 minutes.

To prevent DNS spoofing, it is recommended to configure DNS servers in a properly secured trusted local network.

The optional status_zone parameter (1.17.1) enables the collection of DNS server request and response statistics in the specified zone.
The parameter is available as part of the commercial subscription.
Syntax: resolver_timeout time;
Default: resolver_timeout 30s;
Context: http, server, location

Sets a timeout for name resolution, for example:

resolver_timeout 5s;
Syntax: root path;
Default: root html;
Context: http, server, location, if in location

Sets the root directory for requests.
For example, with the following configuration

location /i/ {
    root /data/w3;
}

the /data/w3/i/top.gif file will be sent in response to the "/i/top.gif" request.

The path value can contain variables, except $document_root and $realpath_root.

A path to the file is constructed by merely adding a URI to the value of the root directive.
If a URI has to be modified, the alias directive should be used.
Syntax: satisfy all | any;
Default: satisfy all;
Context: http, server, location

Allows access if all (all) or at least one (any) of the ngx_http_access_module, ngx_http_auth_basic_module, ngx_http_auth_request_module, or ngx_http_auth_jwt_module modules allow access.

Example:

location / {
    satisfy any;

    allow 192.168.1.0/32;
    deny  all;

    auth_basic           "closed site";
    auth_basic_user_file conf/htpasswd;
}
Syntax: send_lowat size;
Default: send_lowat 0;
Context: http, server, location

If the directive is set to a non-zero value, nginx will try to minimize the number of send operations on client sockets by using either the NOTE_LOWAT flag of the kqueue method or the SO_SNDLOWAT socket option.
In both cases the specified size is used.

This directive is ignored on Linux, Solaris, and Windows.
Syntax: send_timeout time;
Default: send_timeout 60s;
Context: http, server, location

Sets a timeout for transmitting a response to the client.
The timeout is set not for the transmission of the whole response, but only between two successive write operations.
If the client does not receive anything within this time, the connection is closed.
Syntax: sendfile on | off;
Default: sendfile off;
Context: http, server, location, if in location

Enables or disables the use of sendfile().

Starting from nginx 0.8.12 and FreeBSD 5.2.1, aio can be used to pre-load data for sendfile():

location /video/ {
    sendfile   on;
    tcp_nopush on;
    aio        on;
}

In this configuration, sendfile() is called with the SF_NODISKIO flag, which causes it not to block on disk I/O, but, instead, to report back that the data are not in memory.
nginx then initiates an asynchronous data load by reading one byte.
On the first read, the FreeBSD kernel loads the first 128K bytes of a file into memory, although subsequent reads will only load data in 16K chunks.
This can be changed using the read_ahead directive.

Before version 1.7.11, pre-loading could be enabled with aio sendfile;.
Syntax: sendfile_max_chunk size;
Default: sendfile_max_chunk 2m;
Context: http, server, location

Limits the amount of data that can be transferred in a single sendfile() call.
Without the limit, one fast connection may seize the worker process entirely.

Prior to version 1.21.4, by default there was no limit.
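A sketch for a download location (the limit shown is illustrative):

location /download/ {
    sendfile           on;
    sendfile_max_chunk 512k;   # cap a single sendfile() call at 512 kilobytes
}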
Syntax: server { ... }
Default: —
Context: http

Sets the configuration for a virtual server.
There is no clear separation between IP-based (based on the IP address) and name-based (based on the "Host" request header field) virtual servers.
Instead, the listen directives describe all addresses and ports that should accept connections for the server, and the server_name directive lists all server names.
Example configurations are provided in the document "How nginx processes a request".
Syntax: server_name name ...;
Default: server_name "";
Context: server

Sets the names of a virtual server, for example:

server {
    server_name example.com www.example.com;
}

The first name becomes the primary server name.

Server names can include an asterisk ("*") replacing the first or last part of a name:

server {
    server_name example.com *.example.com www.example.*;
}

Such names are called wildcard names.

The first two of the names mentioned above can be combined into one:

server {
    server_name .example.com;
}

It is also possible to use regular expressions in server names, preceding the name with a tilde ("~"):

server {
    server_name www.example.com ~^www\d+\.example\.com$;
}

Regular expressions can contain captures (0.7.40) that can later be used in other directives:

server {
    server_name ~^(www\.)?(.+)$;

    location / {
        root /sites/$2;
    }
}

server {
    server_name _;

    location / {
        root /sites/default;
    }
}

Named captures in regular expressions create variables (0.8.25) that can later be used in other directives:

server {
    server_name ~^(www\.)?(?<domain>.+)$;

    location / {
        root /sites/$domain;
    }
}

server {
    server_name _;

    location / {
        root /sites/default;
    }
}

If the directive's parameter is set to "$hostname" (0.9.4), the machine's hostname is inserted.

It is also possible to specify an empty server name (0.7.11):

server {
    server_name www.example.com "";
}

It allows this server to process requests without the "Host" header field, instead of the default server for the given address:port pair.
This is the default setting.

Before 0.8.48, the machine's hostname was used by default.

During the search for a virtual server by name, if the name matches more than one of the specified variants (e.g., both a wildcard name and a regular expression match), the first matching variant will be chosen, in the following order of precedence:

- the exact name
- the longest wildcard name starting with an asterisk, e.g. "*.example.com"
- the longest wildcard name ending with an asterisk, e.g. "mail.*"
- the first matching regular expression (in order of appearance in the configuration file)

A detailed description of server names is provided in a separate document.
Syntax: server_name_in_redirect on | off;
Default: server_name_in_redirect off;
Context: http, server, location

Enables or disables the use of the primary server name, specified by the server_name directive, in absolute redirects issued by nginx.
When the use of the primary server name is disabled, the name from the "Host" request header field is used.
If this field is not present, the IP address of the server is used.

The use of a port in redirects is controlled by the port_in_redirect directive.
Syntax: server_names_hash_bucket_size size;
Default: server_names_hash_bucket_size 32|64|128;
Context: http

Sets the bucket size for the server names hash tables.
The default value depends on the size of the processor's cache line.
The details of setting up hash tables are provided in a separate document.
Syntax: server_names_hash_max_size size;
Default: server_names_hash_max_size 512;
Context: http

Sets the maximum size of the server names hash tables.
The details of setting up hash tables are provided in a separate document.
Syntax: server_tokens on | off | build | string;
Default: server_tokens on;
Context: http, server, location

Enables or disables emitting the nginx version on error pages and in the "Server" response header field.

If the build parameter (1.11.10) is specified, the build name will also be emitted along with the nginx version.

Additionally, as part of the commercial subscription, starting from version 1.9.13 the signature on error pages and the "Server" response header field value can be set explicitly using a string with variables.
An empty string disables the emission of the "Server" field.
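For example, to keep the "Server" field while hiding the version number:

server_tokens off;   # responds with "Server: nginx" instead of "Server: nginx/x.y.z"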
Syntax: subrequest_output_buffer_size size;
Default: subrequest_output_buffer_size 4k|8k;
Context: http, server, location

This directive appeared in version 1.13.10.

Sets the size of the buffer used for storing the response body of a subrequest.
By default, the buffer size is equal to one memory page.
Depending on the platform, this is either 4K or 8K; however, it can be made smaller.

The directive is applicable only for subrequests with response bodies saved into memory.
For example, such subrequests are created by SSI.
Syntax: tcp_nodelay on | off;
Default: tcp_nodelay on;
Context: http, server, location

Enables or disables the use of the TCP_NODELAY option.
The option is enabled when a connection is transitioned into the keep-alive state.
Additionally, it is enabled on SSL connections, for unbuffered proxying, and for WebSocket proxying.
Syntax: tcp_nopush on | off;
Default: tcp_nopush off;
Context: http, server, location

Enables or disables the use of the TCP_NOPUSH socket option on FreeBSD or the TCP_CORK socket option on Linux.
The option is enabled only when sendfile is used.
Enabling the option allows:

- sending the response header and the beginning of a file in one packet, on Linux and FreeBSD 4.*;
- sending a file in full packets.
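A common sketch that pairs it with sendfile for static content (the location is illustrative):

location /static/ {
    sendfile   on;
    tcp_nopush on;   # coalesce the response header and file data into full packets
}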
Syntax: try_files file ... uri;
        try_files file ... =code;
Default: —
Context: server, location

Checks the existence of files in the specified order and uses the first found file for request processing; the processing is performed in the current location's context.
The path to a file is constructed from the file parameter according to the root and alias directives.
It is possible to check a directory's existence by specifying a slash at the end of a name, e.g. "$uri/".
If none of the files were found, an internal redirect to the uri specified by the last parameter is made.
For example:

location /images/ {
    try_files $uri /images/default.gif;
}

location = /images/default.gif {
    expires 30s;
}

The last parameter can also point to a named location, as shown in the examples below.
Starting from version 0.7.51, the last parameter can also be a code:

location / {
    try_files $uri $uri/index.html $uri.html =404;
}

Example of use when proxying Mongrel:

location / {
    try_files /system/maintenance.html
              $uri $uri/index.html $uri.html
              @mongrel;
}

location @mongrel {
    proxy_pass http://mongrel;
}

Example of use with Drupal/FastCGI:

location / {
    try_files $uri $uri/ @drupal;
}

location ~ \.php$ {
    try_files $uri @drupal;

    fastcgi_pass ...;

    fastcgi_param SCRIPT_FILENAME /path/to$fastcgi_script_name;
    fastcgi_param SCRIPT_NAME     $fastcgi_script_name;
    fastcgi_param QUERY_STRING    $args;

    ... other fastcgi_param's
}

location @drupal {
    fastcgi_pass ...;

    fastcgi_param SCRIPT_FILENAME /path/to/index.php;
    fastcgi_param SCRIPT_NAME     /index.php;
    fastcgi_param QUERY_STRING    q=$uri&$args;

    ... other fastcgi_param's
}

In the following example, the try_files directive

location / {
    try_files $uri $uri/ @drupal;
}

is equivalent to

location / {
    error_page 404 = @drupal;
    log_not_found off;
}

And here,

location ~ \.php$ {
    try_files $uri @drupal;

    fastcgi_pass ...;

    fastcgi_param SCRIPT_FILENAME /path/to$fastcgi_script_name;

    ...
}

try_files checks the existence of the PHP file before passing the request to the FastCGI server.

Example of use with WordPress and Joomla:

location / {
    try_files $uri $uri/ @wordpress;
}

location ~ \.php$ {
    try_files $uri @wordpress;

    fastcgi_pass ...;

    fastcgi_param SCRIPT_FILENAME /path/to$fastcgi_script_name;
    ... other fastcgi_param's
}

location @wordpress {
    fastcgi_pass ...;

    fastcgi_param SCRIPT_FILENAME /path/to/index.php;
    ... other fastcgi_param's
}
Syntax: types { ... }
Default: types {
             text/html  html;
             image/gif  gif;
             image/jpeg jpg;
         }
Context: http, server, location

Maps file name extensions to MIME types of responses.
Extensions are case-insensitive.
Several extensions can be mapped to one type, for example:

types {
    application/octet-stream bin exe dll;
    application/octet-stream deb;
    application/octet-stream dmg;
}

A sufficiently full mapping table is distributed with nginx in the conf/mime.types file.

To make a particular location emit the "application/octet-stream" MIME type for all responses, the following configuration can be used:

location /download/ {
    types        { }
    default_type application/octet-stream;
}
Syntax: types_hash_bucket_size size;
Default: types_hash_bucket_size 64;
Context: http, server, location

Sets the bucket size for the types hash tables.
The details of setting up hash tables are provided in a separate document.

Prior to version 1.5.13, the default value depended on the size of the processor's cache line.
Syntax: types_hash_max_size size;
Default: types_hash_max_size 1024;
Context: http, server, location

Sets the maximum size of the types hash tables.
The details of setting up hash tables are provided in a separate document.
Syntax: underscores_in_headers on | off;
Default: underscores_in_headers off;
Context: http, server

Enables or disables the use of underscores in client request header fields.
When the use of underscores is disabled, request header fields whose names contain underscores are marked as invalid and become subject to the ignore_invalid_headers directive.

If the directive is specified on the server level, the value from the default server can be used.
Details are provided in the "Virtual server selection" section.
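A sketch for accepting header names such as "X_Custom_Header" from legacy clients:

server {
    underscores_in_headers on;   # header names containing "_" are no longer marked invalid
    ...
}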
Syntax: variables_hash_bucket_size size;
Default: variables_hash_bucket_size 64;
Context: http

Sets the bucket size for the variables hash table.
The details of setting up hash tables are provided in a separate document.
Syntax: variables_hash_max_size size;
Default: variables_hash_max_size 1024;
Context: http

Sets the maximum size of the variables hash table.
The details of setting up hash tables are provided in a separate document.

Prior to version 1.5.13, the default value was 512.
Embedded Variables

The ngx_http_core_module module supports embedded variables with names matching the names of the Apache web server variables.
First of all, these are variables representing client request header fields, such as $http_user_agent, $http_cookie, and so on.
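As a sketch of how such variables are typically consumed, here is an access log format built from several of them (the format name and log path are arbitrary; log_format and access_log belong to the ngx_http_log_module module):

log_format timing '$remote_addr "$request" status=$status '
                  'bytes=$bytes_sent rt=$request_time ua="$http_user_agent"';

access_log /var/log/nginx/timing.log timing;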
Кроме того, есть и другие переменные:
- $arg_name - argument name in the request line
- $args - arguments in the request line
- $binary_remote_addr - client address in binary form; the value's length is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses
- $body_bytes_sent - number of bytes sent to a client, not counting the response header; this variable is compatible with the "%B" parameter of the Apache module mod_log_config
- $bytes_sent - number of bytes sent to a client (1.3.8, 1.2.5)
- $connection - connection serial number (1.3.8, 1.2.5)
- $connection_requests - current number of requests made through the connection (1.3.8, 1.2.5)
- $connection_time - connection time in seconds with a milliseconds resolution (1.19.10)
- $content_length - "Content-Length" request header field
- $content_type - "Content-Type" request header field
- $cookie_name - the cookie named name
- $document_root - value of the root or alias directive for the current request
- $document_uri - same as $uri
- $host - in this order of precedence: host name from the request line, or host name from the "Host" request header field, or the server name matching the request
- $hostname - host name
- $http_name - arbitrary request header field; the last part of the variable name corresponds to the field name converted to lower case with dashes replaced by underscores
- $https - "on" if the connection operates in SSL mode, or an empty string otherwise
- $is_args - "?" if the request line has arguments, or an empty string otherwise
- $limit_rate - setting this variable enables response rate limiting; see limit_rate
- $msec - current time in seconds with a milliseconds resolution (1.3.9, 1.2.6)
- $nginx_version - nginx version
- $pid - PID of the worker process
- $pipe - "p" if the request was pipelined, "." otherwise (1.3.12, 1.2.7)
- $proxy_protocol_addr - client address from the PROXY protocol header (1.5.12); the PROXY protocol must be previously enabled by setting the proxy_protocol parameter in the listen directive
- $proxy_protocol_port - client port from the PROXY protocol header (1.11.0); the PROXY protocol must be previously enabled by setting the proxy_protocol parameter in the listen directive
- $proxy_protocol_server_addr - server address from the PROXY protocol header (1.17.6); the PROXY protocol must be previously enabled by setting the proxy_protocol parameter in the listen directive
- $proxy_protocol_server_port - server port from the PROXY protocol header (1.17.6); the PROXY protocol must be previously enabled by setting the proxy_protocol parameter in the listen directive
- $proxy_protocol_tlv_name - TLV from the PROXY protocol header (1.23.2). The name can be a TLV type name or its numeric value. In the latter case, the value is hexadecimal and must be prefixed with 0x: $proxy_protocol_tlv_alpn, $proxy_protocol_tlv_0x01. SSL TLVs can also be accessed either by TLV type name or by its numeric value; both must be prefixed with ssl_: $proxy_protocol_tlv_ssl_version, $proxy_protocol_tlv_ssl_0x21.
  The following TLV type names are supported:
  - alpn (0x01) - higher-level protocol used over the connection
  - authority (0x02) - host name value passed by the client
  - unique_id (0x05) - unique connection identifier
  - netns (0x30) - name of the namespace
  - ssl (0x20) - binary SSL TLV structure
  The following SSL TLV type names are supported:
  - ssl_version (0x21) - SSL version used in the client connection
  - ssl_cn (0x22) - Common Name of the certificate
  - ssl_cipher (0x23) - name of the cipher used
  - ssl_sig_alg (0x24) - algorithm used to sign the certificate
  - ssl_key_alg (0x25) - public-key algorithm
  The following special SSL TLV type name is also supported:
  - ssl_verify - client certificate verification result: 0 if the client presented a certificate and it was successfully verified, non-zero otherwise
  The PROXY protocol must be previously enabled by setting the proxy_protocol parameter in the listen directive.
- $query_string - same as $args
- $realpath_root - absolute pathname corresponding to the value of the root or alias directive for the current request, with all symbolic links resolved to real paths
- $remote_addr - client address
- $remote_port - client port
- $remote_user - user name supplied with Basic authentication
- $request - full original request line
- $request_body - request body; the variable's value is made available in locations processed by the proxy_pass, fastcgi_pass, uwsgi_pass, and scgi_pass directives, when the request body was read to a memory buffer
- $request_body_file - name of the temporary file in which the request body is stored. At the end of processing, the file needs to be removed. To always write the request body to a file, client_body_in_file_only needs to be enabled. When the name of a temporary file is passed in a proxied request or in a request to a FastCGI/uwsgi/SCGI server, passing the request body itself should be disabled by the proxy_pass_request_body off, fastcgi_pass_request_body off, uwsgi_pass_request_body off, or scgi_pass_request_body off directives, respectively.
- $request_completion - "OK" if the request has completed, or an empty string otherwise
- $request_filename - file path for the current request, based on the root or alias directives and the request URI
- $request_id - unique request identifier generated from 16 random bytes, in hexadecimal (1.11.0)
- $request_length - request length, including the request line, header, and request body (1.3.12, 1.2.7)
- $request_method - request method, usually "GET" or "POST"
- $request_time - request processing time in seconds with a milliseconds resolution (1.3.9, 1.2.6); time elapsed since the first bytes were read from the client
- $request_uri - full original request URI (with arguments)
- $scheme - request scheme, "http" or "https"
- $sent_http_name - arbitrary response header field; the last part of the variable name corresponds to the field name converted to lower case with dashes replaced by underscores
- $sent_trailer_name - arbitrary field sent at the end of the response (1.13.2); the last part of the variable name corresponds to the field name converted to lower case with dashes replaced by underscores
- $server_addr - address of the server which accepted the request. Computing the value of this variable usually requires one system call. To avoid the system call, the listen directives must specify addresses and use the bind parameter.
- $server_name - name of the server which accepted the request
- $server_port - port of the server which accepted the request
- $server_protocol - request protocol, usually "HTTP/1.0", "HTTP/1.1", or "HTTP/2.0"
- $status - response status (1.3.2, 1.2.2)
- $time_iso8601 - local time in the ISO 8601 standard format (1.3.12, 1.2.7)
- $time_local - local time in the Common Log Format (1.3.12, 1.2.7)
- $tcpinfo_rtt, $tcpinfo_rttvar, $tcpinfo_snd_cwnd, $tcpinfo_rcv_space - information about the client TCP connection; available on systems that support the TCP_INFO socket option
- $uri - current URI in the request, normalized. The value of $uri may change during request processing, e.g. when doing internal redirects, or when using index files.
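As a quick, hedged illustration of how these embedded variables are typically consumed (the domain, log path, and header name below are invented for this sketch), variables can be combined freely in log formats and header values:

log_format timed '$remote_addr "$request" $status '
                 '$body_bytes_sent $request_time';

server {
    listen 80;
    server_name example.com;

    # Log with the custom format defined above:
    access_log /var/log/nginx/timed.log timed;

    location / {
        # $arg_user holds the "user" query-string argument, if present;
        # nginx omits the header when the value is an empty string:
        add_header X-Requested-By $arg_user;
        root /var/www/html;
    }
}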
Configure NGINX and NGINX Plus as a web server, with support for virtual server multi-tenancy, URI and response rewriting, variables, and error handling.
This article explains how to configure NGINX Open Source and NGINX Plus as a web server, and includes the following sections:
- Setting Up Virtual Servers
- Configuring Locations
- Location Priority
- Using Variables
- Returning Specific Status Codes
- Rewriting URIs in Requests
- Rewriting HTTP Responses
- Handling Errors
For additional information on how to tune NGINX Plus and NGINX Open Source, watch our free webinar on-demand Installing and Tuning NGINX.
Note: The information in this article applies to both NGINX Open Source and NGINX Plus. For ease of reading, the remainder of the article refers to NGINX Plus only.
At a high level, configuring NGINX Plus as a web server is a matter of defining which URLs it handles and how it processes HTTP requests for resources at those URLs. At a lower level, the configuration defines a set of virtual servers that control the processing of requests for particular domains or IP addresses. For more information about configuration files, see Creating NGINX Plus Configuration Files.
Each virtual server for HTTP traffic defines special configuration instances called locations that control processing of specific sets of URIs. Each location defines its own scenario of what happens to requests that are mapped to this location. NGINX Plus provides full control over this process. Each location can proxy the request or return a file. In addition, the URI can be modified, so that the request is redirected to another location or virtual server. Also, a specific error code can be returned and you can configure a specific page to correspond to each error code.
Setting Up Virtual Servers
The NGINX Plus configuration file must include at least one server directive to define a virtual server. When NGINX Plus processes a request, it first selects the virtual server that will serve the request.
A virtual server is defined by a server
directive in the http
context, for example:
http {
server {
# Server configuration
}
}
It is possible to add multiple server
directives into the http
context to define multiple virtual servers.
The server
configuration block usually includes a listen directive to specify the IP address and port (or Unix domain socket and path) on which the server listens for requests. Both IPv4 and IPv6 addresses are accepted; enclose IPv6 addresses in square brackets.
The example below shows configuration of a server that listens on IP address 127.0.0.1 and port 8080:
server {
listen 127.0.0.1:8080;
# Additional server configuration
}
If a port is omitted, the standard port is used. Likewise, if an address is omitted, the server listens on all addresses. If the listen directive is not included at all, nginx listens on the “standard” port 80/tcp when running with superuser privileges, and on the “default” port 8000/tcp otherwise.
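As a sketch of the accepted forms (the addresses and ports here are only illustrative), the following listen variants are all valid:

listen 127.0.0.1:8080;            # address and port
listen 127.0.0.1;                 # address only; the standard port is used
listen 8080;                      # port only; listens on all addresses
listen [::1]:8080;                # IPv6 addresses go in square brackets
listen unix:/var/run/nginx.sock;  # UNIX-domain socket and path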
If there are several servers that match the IP address and port of the request, NGINX Plus tests the request’s Host
header field against the server_name directives in the server
blocks. The parameter to server_name
can be a full (exact) name, a wildcard, or a regular expression. A wildcard is a character string that includes the asterisk (*
) at its beginning, end, or both; the asterisk matches any sequence of characters. NGINX Plus uses the Perl syntax for regular expressions; precede them with the tilde (~
). This example illustrates an exact name.
server {
listen 80;
server_name example.org www.example.org;
#...
}
If several names match the Host
header, NGINX Plus selects one by searching for names in the following order and using the first match it finds:
- Exact name
- Longest wildcard starting with an asterisk, such as
*.example.org
- Longest wildcard ending with an asterisk, such as
mail.*
- First matching regular expression (in order of appearance in the configuration file)
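For example, with these hypothetical server blocks, a request carrying Host: mail.example.org is handled by the *.example.org block: the exact name does not match, and wildcards starting with an asterisk are checked before wildcards ending with one and before regular expressions:

server {
    listen      80;
    server_name example.org;                # exact name
}
server {
    listen      80;
    server_name *.example.org;              # wildcard starting with an asterisk
}
server {
    listen      80;
    server_name mail.*;                     # wildcard ending with an asterisk
}
server {
    listen      80;
    server_name ~^www\d+\.example\.org$;    # regular expression, checked last
}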
If the Host
header field does not match a server name, NGINX Plus routes the request to the default server for the port on which the request arrived. The default server is the first one listed in the nginx.conf file, unless you include the default_server
parameter to the listen
directive to explicitly designate a server as the default.
server {
listen 80 default_server;
#...
}
Configuring Locations
NGINX Plus can send traffic to different proxies or serve different files based on the request URIs. These blocks are defined using the location directive placed within a server
directive.
For example, you can define three location
blocks to instruct the virtual server to send some requests to one proxied server, send other requests to a different proxied server, and serve the rest of the requests by delivering files from the local file system.
NGINX Plus tests request URIs against the parameters of all location
directives and applies the directives defined in the matching location. Inside each location
block, it is usually possible (with a few exceptions) to place even more location
directives to further refine the processing for specific groups of requests.
Note: In this guide, the word location refers to a single location context.
There are two types of parameter to the location
directive: prefix strings (pathnames) and regular expressions. For a request URI to match a prefix string, it must start with the prefix string.
The following sample location with a pathname parameter matches request URIs that begin with /some/path/, such as /some/path/document.html. (It does not match /my-site/some/path because /some/path does not occur at the start of that URI.)
location /some/path/ {
#...
}
A regular expression is preceded with the tilde (~
) for case-sensitive matching, or the tilde-asterisk (~*
) for case-insensitive matching. The following example matches URIs that include the string .html or .htm in any position.
location ~ \.html? {
#...
}
NGINX Location Priority
To find the location that best matches a URI, NGINX Plus first compares the URI to the locations with a prefix string. It then searches the locations with a regular expression.
Higher priority is given to regular expressions, unless the ^~
modifier is used. Among the prefix strings NGINX Plus selects the most specific one (that is, the longest and most complete string). The exact logic for selecting a location to process a request is given below:
- Test the URI against all prefix strings.
- The = (equals sign) modifier defines an exact match of the URI and a prefix string. If the exact match is found, the search stops.
- If the ^~ (caret-tilde) modifier prepends the longest matching prefix string, the regular expressions are not checked.
- Store the longest matching prefix string.
- Test the URI against regular expressions.
- Stop processing when the first matching regular expression is found and use the corresponding location.
- If no regular expression matches, use the location corresponding to the stored prefix string.
A typical use case for the =
modifier is requests for / (forward slash). If requests for / are frequent, specifying = /
as the parameter to the location
directive speeds up processing, because the search for matches stops after the first comparison.
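To make the selection order concrete, here is a small, hypothetical configuration; a request for /images/logo.png stops at the ^~ prefix (the regular expression is never checked), while /documents/report.pdf matches the /documents/ prefix first but is ultimately handled by the regular expression location:

location = / {
    # Exact match for "/" only; the search stops immediately.
}
location /documents/ {
    # Longest prefix for /documents/report.pdf; stored, but then
    # overridden by the matching regular expression below.
}
location ^~ /images/ {
    # Longest prefix for /images/logo.png; ^~ skips the regex check.
}
location ~* \.(gif|jpg|png|pdf)$ {
    # Checked only when no ^~ prefix won.
}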
A location
context can contain directives that define how to resolve a request – either serve a static file or pass the request to a proxied server. In the following example, requests that match the first location
context are served files from the /data directory and the requests that match the second are passed to the proxied server that hosts content for the www.example.com domain.
server {
location /images/ {
root /data;
}
location / {
proxy_pass http://www.example.com;
}
}
The root directive specifies the file system path in which to search for the static files to serve. The request URI associated with the location is appended to the path to obtain the full name of the static file to serve. In the example above, in response to a request for /images/example.png, NGINX Plus delivers the file /data/images/example.png.
The proxy_pass directive passes the request to the proxied server accessed with the configured URL. The response from the proxied server is then passed back to the client. In the example above, all requests with URIs that do not start with /images/ are passed to the proxied server.
Using Variables
You can use variables in the configuration file to have NGINX Plus process requests differently depending on defined circumstances. Variables are named values that are calculated at runtime and are used as parameters to directives. A variable is denoted by the $
(dollar) sign at the beginning of its name. Variables define information based upon NGINX’s state, such as the properties of the request being currently processed.
There are a number of predefined variables, such as the core HTTP variables, and you can define custom variables using the set, map, and geo directives. Most variables are computed at runtime and contain information related to a specific request. For example, $remote_addr
contains the client IP address and $uri
holds the current URI value.
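As a small, hedged sketch of custom variables (the variable names and addresses below are invented), set assigns a value inline, while map derives one variable from another in the http context:

map $request_method $is_write {
    default 0;
    POST    1;
    PUT     1;
    DELETE  1;
}

server {
    location / {
        set $backend "http://127.0.0.1:8080";
        # Expose the computed value in a response header:
        add_header X-Write-Request $is_write;
        proxy_pass $backend;
    }
}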
Returning Specific Status Codes
Some website URIs require immediate return of a response with a specific error or redirect code, for example when a page has been moved temporarily or permanently. The easiest way to do this is to use the return directive. For example:
location /wrong/url {
return 404;
}
The first parameter of return
is a response code. The optional second parameter can be the URL of a redirect (for codes 301
, 302
, 303
, and 307
) or the text to return in the response body. For example:
location /permanently/moved/url {
return 301 http://www.example.com/moved/here;
}
The return
directive can be included in both the location
and server
contexts.
Rewriting URIs in Requests
A request URI can be modified multiple times during request processing through the use of the rewrite directive, which has one optional and two required parameters. The first (required) parameter is the regular expression that the request URI must match. The second parameter is the URI to substitute for the matching URI. The optional third parameter is a flag that can halt processing of further rewrite
directives or send a redirect (code 301
or 302
). For example:
location /users/ {
rewrite ^/users/(.*)$ /show?user=$1 break;
}
As this example shows, the second parameter can reference text captured by the regular expression in the first parameter: here a request for /users/john is rewritten to /show?user=john, with $1 holding the captured user name.
You can include multiple rewrite
directives in both the server
and location
contexts. NGINX Plus executes the directives one-by-one in the order they occur. The rewrite
directives in a server
context are executed once when that context is selected.
After NGINX processes a set of rewriting instructions, it selects a location
context according to the new URI. If the selected location contains rewrite
directives, they are executed in turn. If the URI matches any of those, a search for the new location starts after all defined rewrite
directives are processed.
The following example shows rewrite
directives in combination with a return
directive.
server {
#...
rewrite ^(/download/.*)/media/(\w+)\.?.*$ $1/mp3/$2.mp3 last;
rewrite ^(/download/.*)/audio/(\w+)\.?.*$ $1/mp3/$2.ra last;
return 403;
#...
}
This example configuration distinguishes between two sets of URIs. URIs such as /download/some/media/file are changed to /download/some/mp3/file.mp3. Because of the last
flag, the subsequent directives (the second rewrite
and the return
directive) are skipped but NGINX Plus continues processing the request, which now has a different URI. Similarly, URIs such as /download/some/audio/file are replaced with /download/some/mp3/file.ra. If a URI doesn’t match either rewrite
directive, NGINX Plus returns the 403
error code to the client.
There are two parameters that interrupt processing of rewrite directives:
- last – Stops execution of the rewrite directives in the current server or location context, but NGINX Plus searches for locations that match the rewritten URI, and any rewrite directives in the new location are applied (meaning the URI can be changed again).
- break – Like the break directive, stops processing of rewrite directives in the current context and cancels the search for locations that match the new URI. The rewrite directives in the new location are not executed.
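A minimal sketch of the difference (the locations and URIs are made up): with last, the rewritten URI is matched against locations again; with break, rewriting stops and the request is served in the current location:

location /first/ {
    # "last": restart the location search with the new URI /second/...
    rewrite ^/first/(.*)$ /second/$1 last;
}

location /second/ {
    # "break": keep the rewritten URI and stay in this location,
    # so the file is served from /data/files/...
    rewrite ^/second/(.*)$ /files/$1 break;
    root /data;
}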
Rewriting HTTP Responses
Sometimes you need to rewrite or change the content in an HTTP response, substituting one string for another. You can use the sub_filter directive to define the rewrite to apply. The directive supports variables and chains of substitutions, making more complex changes possible.
For example, you can change absolute links that refer to a server other than the proxy:
location / {
sub_filter /blog/ /blog-staging/;
sub_filter_once off;
}
Another example changes the scheme from http://
to https://
and replaces the localhost
address with the hostname from the request header field. The sub_filter_once directive tells NGINX to apply sub_filter directives consecutively within a location:
location / {
sub_filter 'href="http://127.0.0.1:8080/' 'href="https://$host/';
sub_filter 'img src="http://127.0.0.1:8080/' 'img src="https://$host/';
sub_filter_once on;
}
Note that the part of the response already modified with the sub_filter
is not replaced again if another sub_filter
match occurs.
Handling Errors
With the error_page directive, you can configure NGINX Plus to return a custom page along with an error code, substitute a different error code in the response, or redirect the browser to a different URI. In the following example, the error_page
directive specifies the page (/404.html) to return with the 404
error code.
error_page 404 /404.html;
Note that this directive does not mean that the error is returned immediately (the return
directive does that), but simply specifies how to treat errors when they occur. The error code can come from a proxied server or occur during processing by NGINX Plus (for example, the 404
results when NGINX Plus can’t find the file requested by the client).
In the following example, when NGINX Plus cannot find a page, it substitutes code 301
for code 404
, and redirects the client to http://example.com/new/path.html. This configuration is useful when clients are still trying to access a page at its old URI. The 301
code informs the browser that the page has moved permanently, and it needs to replace the old address with the new one automatically upon return.
location /old/path.html {
error_page 404 =301 http://example.com/new/path.html;
}
The following configuration is an example of passing a request to the back end when a file is not found. Because there is no status code specified after the equals sign in the error_page
directive, the response to the client has the status code returned by the proxied server (not necessarily 404
).
server {
...
location /images/ {
# Set the root directory to search for the file
root /data/www;
# Disable logging of errors related to file existence
open_file_cache_errors off;
# Make an internal redirect if the file is not found
error_page 404 = /fetch$uri;
}
location /fetch/ {
proxy_pass http://backend/;
}
}
The error_page
directive instructs NGINX Plus to make an internal redirect when a file is not found. The $uri
variable in the final parameter to the error_page
directive holds the URI of the current request, which gets passed in the redirect.
For example, if /images/some/file is not found, it is replaced with /fetch/images/some/file and a new search for a location starts. As a result, the request ends up in the second location
context and is proxied to http://backend/.
The open_file_cache_errors directive prevents writing an error message if a file is not found. This is not necessary here since missing files are correctly handled.
Every time NGINX encounters an error as it attempts to process a client’s request, it returns an error. Each error includes an HTTP response code and a short description. The error usually is displayed to a user via a simple default HTML page.
Fortunately, you can configure NGINX to display custom error pages to your site's or web application's users. This can be achieved using NGINX's error_page directive, which defines the URI that will be shown for a specified error. You can also, optionally, use it to modify the HTTP status code in the response headers sent to a client.
In this guide, we will show how to configure NGINX to use custom error pages.
Create a Single Custom Page for All NGINX Errors
You can configure NGINX to use a single custom error page for all errors that it returns to a client. Start by creating your error page. Here is an example, a simple HTML page that displays the message:
“Sorry, the page can't be loaded! Contact the site's administrator or support for assistance.” to a client.
Sample HTML code for the custom Nginx page:
<!DOCTYPE html>
<html>
<head>
<style type="text/css">
    * { -webkit-box-sizing: border-box; box-sizing: border-box; }
    body { padding: 0; margin: 0; }
    #notfound { position: relative; height: 100vh; }
    #notfound .notfound {
        position: absolute; left: 50%; top: 50%;
        -webkit-transform: translate(-50%, -50%);
        -ms-transform: translate(-50%, -50%);
        transform: translate(-50%, -50%);
    }
    .notfound { max-width: 520px; width: 100%; line-height: 1.4; text-align: center; }
    .notfound .notfound-error { position: relative; height: 200px; margin: 0px auto 20px; z-index: -1; }
    .notfound .notfound-error h1 {
        font-family: 'Montserrat', sans-serif;
        font-size: 200px; font-weight: 300; margin: 0px; color: #211b19;
        position: absolute; left: 50%; top: 50%;
        -webkit-transform: translate(-50%, -50%);
        -ms-transform: translate(-50%, -50%);
        transform: translate(-50%, -50%);
    }
    @media only screen and (max-width: 767px) {
        .notfound .notfound-error h1 { font-size: 148px; }
    }
    @media only screen and (max-width: 480px) {
        .notfound .notfound-error { height: 148px; margin: 0px auto 10px; }
        .notfound .notfound-error h1 { font-size: 120px; font-weight: 200; }
        .notfound .notfound-error h2 { font-size: 30px; }
        .notfound a { padding: 7px 15px; font-size: 24px; }
        .h2 { font-size: 148px; }
    }
</style>
</head>
<body>
    <div id="notfound">
        <div class="notfound">
            <h1>Sorry the page can't be loaded!</h1>
            <div class="notfound-error">
                <p>Contact the site's administrator or support for assistance.</p>
            </div>
        </div>
    </div>
</body>
</html>
Save the file with an appropriate name for example error-page.html and close it.
Next, move the file to your document root directory (/var/www/html/). If the directory doesn’t exist, you can create it using the mkdir command, as shown:
$ sudo mkdir -p /var/www/html/
$ sudo cp error-page.html /var/www/html/
Then configure NGINX to use the custom error page using the error_page directive. Create a configuration file called custom-error-page.conf under /etc/nginx/snippets/ as shown.
$ sudo mkdir /etc/nginx/snippets/
$ sudo vim /etc/nginx/snippets/custom-error-page.conf
Add the following lines to it:
error_page 404 403 500 503 /error-page.html;

location = /error-page.html {
    root /var/www/html;
    internal;
}
This configuration causes an internal redirect to the URI /error-page.html every time NGINX encounters any of the specified HTTP errors: 404, 403, 500, and 503. The location context tells NGINX where to find your error page.
Save the file and close it.
Now include the file in the http context so that all server blocks use the error page, in the /etc/nginx/nginx.conf file:
$ sudo vim /etc/nginx/nginx.conf
The include directive tells NGINX to include the configuration from the specified .conf file:
include snippets/custom-error-page.conf;
Alternatively, you can include the file for a specific server block (commonly known as vhost), for example, /etc/nginx/conf.d/mywebsite.conf. Add the above include directive in the server {}
context.
Save your NGINX configuration file and reload the service as follows:
$ sudo systemctl reload nginx.service
And test from a browser if the setup is working fine.
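You can also verify it from the command line with curl; the address below is a placeholder for your own server:

$ curl -i http://server_domain_or_IP/nonexistent
# Expect the configured error status plus the error-page.html body.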
Create Different Custom Pages for Each NGINX Error
You can also set up different custom error pages for each HTTP error in NGINX. We discovered a good collection of custom nginx error pages created by Denys Vitali on Github.
To set up the repository on your server, run the following commands:
$ sudo git clone https://github.com/denysvitali/nginx-error-pages /srv/http/default
$ sudo mkdir /etc/nginx/snippets/
$ sudo ln -s /srv/http/default/snippets/error_pages.conf /etc/nginx/snippets/error_pages.conf
$ sudo ln -s /srv/http/default/snippets/error_pages_content.conf /etc/nginx/snippets/error_pages_content.conf
Next, add the following configuration in either your http context or each server block/vhost:
include snippets/error_pages.conf;
Save your NGINX configuration file and reload the service as follows:
$ sudo systemctl reload nginx.service
Also, test from a browser if the configuration is working as intended. In this example, we tested the 404 error page.
That’s all we had for you in this guide. NGINX’s error_page directive allows you to redirect users to a defined page or resource or URL when an error occurs. It also optionally allows for modification of the HTTP status code in the response to a client. For more information, read the nginx error page documentation.
Currently every invalid page returns 500 (Internal Server Error) because I probably messed up my server block configuration.
I decided to shut down my website a while ago and created a simple one-page, thank-you homepage. However, old links and external sites are still trying to access other parts of the site, which no longer exist.
How do I force redirect all non-homepage requests (any invalid URL) to the homepage?
I tried with the following block, but it didn’t work:
location / {
try_files $uri $uri/ $document_uri/index.html;
}
My current configuration is (I don't even serve PHP files right now, i.e. the homepage is simple HTML):
server {
server_name www.example.com example.com;
access_log /srv/www/example.com/logs/access.log;
error_log /srv/www/example.com/logs/error.log;
root /srv/www/example.com/public_html;
index index.php index.html;
location / {
try_files $uri $uri/ $document_uri/index.html;
}
# Disable favicon.ico logging
location = /favicon.ico {
log_not_found off;
access_log off;
}
# Allow robots and disable logging
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# Enable permalink structures
if (!-e $request_filename) {
rewrite . /index.php last;
}
# Handle php requests
location ~ \.php$ {
try_files $uri =404;
include /etc/nginx/fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_send_timeout 900;
fastcgi_read_timeout 900;
fastcgi_connect_timeout 900;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
# Disable static content logging and set cache time to max
location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
access_log off;
log_not_found off;
expires max;
}
# Deny access to htaccess and htpasswd files
location ~ /\.ht {
deny all;
}
# Deny access to hidden files (beginning with a period)
location ~ /\. {
access_log off; log_not_found off; deny all;
}
}
asked Oct 21, 2013 at 6:12 by JJJ
Setting the error page to the home page like this
error_page 404 /index.html;
has a small problem: the status code of the home page will be "404 Not Found". If you want to load the home page with a "200 OK" status code, you should do it like this:
error_page 404 =200 /index.html;
This will convert the "404 Not Found" error code to a "200 OK" code and load the home page.
The second method which @jvperrin mentioned is good too:
try_files $uri $uri/ /index.html;
but you need to keep one thing in mind: since it's the location /, any asset that doesn't match another location and is not found will also load the index.html, for example missing images, CSS, and JS files. In your case, though, I can see you already have another location matching the assets' extensions, so you shouldn't face this problem.
answered Oct 21, 2013 at 8:25 by Mohammad AbuShady
To get a true redirect you could do this:
in server block define the error-page you want to redirect like this:
# define error page
error_page 404 = @myownredirect;
error_page 500 = @myownredirect;
Then you define that location:
# error page location redirect 302
location @myownredirect {
return 302 /;
}
In this case errors 404 and 500 generate an HTTP 302 (temporary redirect) to / (which could of course be any URL).
If you use FastCGI for PHP or similar, those blocks must have the following added to send the errors "upstream" to the server block:
fastcgi_intercept_errors on;
answered Mar 21, 2017 at 10:02
This solution is for an nginx-hosted site:
Edit your virtual hosting file
sudo nano /etc/nginx/sites-available/vishnuku.com
Add this snippet in the server block
# define error page
error_page 404 = @notfound;
# error page location redirect 301
location @notfound {
return 301 /;
}
In your PHP block, set fastcgi_intercept_errors to on:
location ~ \.php$ {
include /etc/nginx/fastcgi_params;
# intercept errors for 404 redirect
fastcgi_intercept_errors on;
fastcgi_pass unix:/run/php/php7.2-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
Final code will look like this
server {
listen 80;
server_name vishnuku.com;
root /var/www/nginx/vishnuku.com;
index index.php index.html;
access_log /var/log/nginx/vishnuku.com.log;
error_log /var/log/nginx/vishnuku.com.error.log;
location / {
try_files $uri $uri/ /index.php?$args /;
}
# define error page
error_page 404 = @notfound;
# error page location redirect 301
location @notfound {
return 301 /;
}
location ~ \.php$ {
include /etc/nginx/fastcgi_params;
fastcgi_intercept_errors on;
fastcgi_pass unix:/run/php/php7.2-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
location ~ /\.ht {
deny all;
}
location = /nginx.conf {
deny all;
}
}
answered May 13, 2019 at 9:54 by Vishnu
Try adding the following line after your index
definition:
error_page 404 /index.html;
If that doesn’t work, try changing your try_files
call to the following instead:
try_files $uri $uri/ /index.html;
Hopefully one of those works for you, I haven’t tested either yet.
answered Oct 21, 2013 at 6:16 by jvperrin
You must use
fastcgi_intercept_errors on;
along with a custom redirect location, like
error_page 404 =200 /index.html;
or, as above:
location @myownredirect {
return 302 /;
}
answered May 12, 2017 at 9:35 by Anto
Try this:
error_page 404 $scheme://$host/index.html;
answered May 25, 2022 at 22:43
There are a lot of different ways to redirect all 404s to the home page, which helps you with SEO.
Make sure you use
fastcgi_intercept_errors on;
then add the following to your config:
error_page 404 =301 http://yourdomain.com/;
error_page 403 /error403.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /var/www/html;
}
answered Sep 4, 2019 at 19:26
Came here when I was looking to implement something similar, where I'm using Nginx as a reverse proxy to a self-hosted MinIO bucket, and none of the above answers cover this. So, if you're using proxy_pass and are intercepting errors, you can also add this logic to redirect to the index.html page.
Handling URLs such as:
- http://localhost/ — 200 OK
- http://localhost/index.html — 200 OK
- http://localhost/non-existant/ — 404 Not Found
server {
listen 80;
server_name localhost;
location / {
rewrite ^/$ /static/index.html break;
proxy_intercept_errors on;
proxy_pass http://something-to-proxy/static/;
}
error_page 404 /index.html;
}
answered Apr 16, 2021 at 22:08 by Reece
Another correct way to redirect from an error page to the home page. The setup example applies to the nginx server configuration file:
...
http {
...
server{
...
#Catch 40x errors:
error_page 400 401 402 403 404 = @RedirectToHome;
#Catch 50x errors:
error_page 500 501 502 503 504 = @RedirectToHome;
#We are now redirecting to the homepage of the site
location @RedirectToHome {
return 301 http://www.example.com;
#return 301 https://www.example.com;
#return 301 /;
}
...
}
}
You should avoid setting return 301 /;
if port forwarding to the server is performed somewhere, because this nginx redirect will then point at the port on which the server itself listens for incoming connections. Therefore, it is better to set the correct hostname (site name) in this configuration.
answered Jan 10, 2022 at 22:19
Nginx is a high-performance web server that can serve content flexibly and reliably. When styling your site's pages, you will probably want to create a custom look for every element, including the error pages that appear when content is unavailable. In this guide, we will show how to set up such pages on Nginx.
Requirements
- A virtual server with a sudo user (we use an Ubuntu 22.04 server set up according to this guide).
- The Nginx web server installed in advance (you will find installation instructions here).
Creating a Custom Error Page
The custom error pages we use here are intended for demonstration purposes. If you have your own pages, use them instead.
Place the custom error pages in the /usr/share/nginx/html directory, Nginx's default document root. There we will create a page for the 404 error called custom_404.html and a page for generic 500-level errors called custom_50x.html.
Note: You can use the following lines as-is if you are practicing with our pages. Otherwise, remember to substitute your own data.
First, create an HTML file for your custom 404 page using nano or another text editor:
sudo nano /usr/share/nginx/html/custom_404.html
Paste in the code that defines the custom page:
<h1 style='color:red'>Error 404: Not found :-(</h1>
<p>I have no idea where that file is, sorry. Are you sure you typed in the correct URL?</p>
Save and close the file.
Now create the HTML file for the 500-level page:
sudo nano /usr/share/nginx/html/custom_50x.html
Paste the following into the file:
<h1>Oops! Something went wrong...</h1>
<p>We seem to be having some technical difficulties. Hang tight.</p>
Save and close the file.
At this point, you have two custom error pages that will be displayed on the site when client requests result in various errors.
Configuring Nginx to Use the Custom Pages
Now it's time to tell Nginx that it should use these pages whenever the corresponding errors occur. Open the server block file in the /etc/nginx/sites-enabled directory that you want to configure. Here we use the standard file, named default. If you are configuring your own pages, please make sure you use the correct file:
sudo nano /etc/nginx/sites-enabled/default
Now you need to point Nginx to the appropriate pages.
Configuring the Custom 404 Page
Use the error_page directive so that when a 404 error occurs (the requested file is not found), the custom page you created is served. Create a location block for your file, in which you set its correct location in the file system and specify that the file is only accessible through internal Nginx redirects (not requested directly by clients):
server {
    listen 80 default_server;
    . . .
    error_page 404 /custom_404.html;

    location = /custom_404.html {
        root /usr/share/nginx/html;
        internal;
    }
}
Usually, setting root in the new location block is not necessary, because it matches the root in the server block. However, by being explicit here, the error pages are served even if you move your regular web content and its associated root elsewhere.
Configuring the 50x Error Page
Next, add new directives so that when Nginx encounters 500-level errors (server-related problems), it serves the other custom page you created. Here we follow the same formula used in the previous section. This time we configure several 500-level errors so that they all use the custom_50x.html page.
At the bottom, we also add a dummy FastCGI pass so that you can test your 500-level error page. It will produce an error, because the backend does not actually exist; this way you can confirm that 500-level errors serve your custom page.
Edit the /etc/nginx/sites-enabled/default file as follows:
server {
    listen 80 default_server;
    . . .
    error_page 404 /custom_404.html;

    location = /custom_404.html {
        root /usr/share/nginx/html;
        internal;
    }

    error_page 500 502 503 504 /custom_50x.html;

    location = /custom_50x.html {
        root /usr/share/nginx/html;
        internal;
    }

    location /testing {
        fastcgi_pass unix:/does/not/exist;
    }
}
Save and close the file when you are done.
Restarting Nginx and Testing
To check the syntax of your files, type:
sudo nginx -t
If the command finds any errors, fix them before continuing. If there are no errors, restart Nginx:
sudo systemctl restart nginx
Now, if you visit your server's domain or IP address and request a non-existent file, you should see the 404 page you configured:
http://server_domain_or_IP/thiswillerror
Go to your FastCGI location and you will get a 502 Bad Gateway error, which is a 50x-level error:
http://server_domain_or_IP/testing
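The same checks can be scripted with curl, if you prefer the command line (server_domain_or_IP is a placeholder, as above):

$ curl -i http://server_domain_or_IP/thiswillerror   # expect 404 + custom_404.html
$ curl -i http://server_domain_or_IP/testing         # expect 502 + custom_50x.html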
Go back to the configuration file and remove the dummy FastCGI location.
Conclusion
Your web server can now serve custom error pages. This is a simple way to personalize your site and provide a better user experience even when errors occur. One way to improve these pages is to put extra information or helpful links for users on them. If you do, make sure the links are reachable even when the corresponding errors occur.
Base Rules
Go back to the Table of Contents or What’s next? section.
📌 These are the basic set of rules to keep NGINX in good condition.
- ≡ Base Rules (16)
- Organising Nginx configuration
- Format, prettify and indent your Nginx code
- Use reload option to change configurations on the fly
- Separate listen directives for 80 and 443 ports
- Define the listen directives with address:port pair
- Prevent processing requests with undefined server names
- Never use a hostname in a listen or upstream directives
- Set the HTTP headers with add_header and proxy_*_header directives properly
- Use only one SSL config for the listen directive
- Use geo/map modules instead of allow/deny
- Map all the things…
- Set global root directory for unmatched locations
- Use return directive for URL redirection (301, 302)
- Configure log rotation policy
- Use simple custom error pages
- Don’t duplicate index directive, use it only in the http block
- Debugging
- Performance
- Hardening
- Reverse Proxy
- Load Balancing
- Others
🔰 Organising Nginx configuration
Rationale
When your NGINX configuration grows, the need for organising your configuration will also grow. Well organised code is:
- easier to understand
- easier to maintain
- easier to work with
Use the include directive to move and split common server settings into multiple files and to attach your specific code to the global config or contexts. This helps in organizing code into logical components. Inclusions are processed recursively, that is, an include file can further have include statements.
Work out your own directory structure (from the top-level directory to the lowest) and apply it when working with NGINX. Think about it carefully and figure out what’s going to be best for you and the easiest to maintain.
I always try to keep multiple directories in the root of the configuration tree. These directories store all configuration files which are attached to the main file (e.g. nginx.conf) and, if necessary, to the files which contain server directives.
I prefer the following structure:
- html — for default static files, e.g. a global 5xx error page
- master — for main configuration, e.g. acls, listen directives, and domains
  - _acls — for access control lists, e.g. geo or map modules
  - _basic — for rate limiting rules, redirect maps, or proxy params
  - _listen — for all listen directives; also stores SSL configuration
  - _server — for domains configuration; also stores all backends definitions
- modules — for modules which are dynamically loaded into NGINX
- snippets — for NGINX aliases, configuration templates
# In https.conf for example:
listen 10.240.20.2:443 ssl;

ssl_certificate /etc/nginx/master/_server/example.com/certs/nginx_example.com_bundle.crt;
ssl_certificate_key /etc/nginx/master/_server/example.com/certs/example.com.key;

...

# Include 'https.conf' to the server section:
server {

  include /etc/nginx/master/_listen/10.240.20.2/https.conf;

  # And other external files:
  include /etc/nginx/master/_static/errors.conf;
  include /etc/nginx/master/_server/_helpers/global.conf;

  server_name example.com www.example.com;

  ...
External resources
- How I Manage Nginx Config
- Organize your data and code
- How to keep your R projects organized
🔰 Format, prettify and indent your Nginx code
Rationale
Working with unreadable configuration files is terrible. If the syntax is not clear and readable, it makes your eyes sore and gives you headaches.
When your code is formatted, it is significantly easier to maintain, debug, optimise, and can be read and understood in a short amount of time. You should eliminate code style violations from your NGINX configuration files.
Spaces, tabs, and new line characters are not part of the NGINX configuration. They are not interpreted by the NGINX engine, but they help to make the configuration more readable.
Choose your formatter style and set up a common config for it. Some rules are universal, but in my view, the most important thing is to keep a consistent NGINX code style throughout your code base:
- use whitespaces and blank lines to arrange and separate code blocks
- tabs vs spaces — more important to be consistent throughout your code than to use any specific type
- tabs are consistent, customizable and allow mistakes to be more noticeable (unless you are a 4 space kind of guy)
- a space is always one column, use it if you want your beautiful work to appear right for everyone
- use comments to explain why things are done not what is done
- use meaningful naming conventions
- simple is better than complex but complex is better than complicated
Some would say that NGINX's files are written in their own language, so we should not overdo it with the above rules. I think it is worth sticking to general (programming) rules anyway, to make life easier for you and other NGINX administrators.
Example
Not recommended code style:
http {
  include nginx/proxy.conf;
  include /etc/nginx/fastcgi.conf;
  index index.html index.htm index.php;
  default_type application/octet-stream;
  log_format main '$remote_addr - $remote_user [$time_local] $status '
  '"$request" $body_bytes_sent "$http_referer" '
  '"$http_user_agent" "$http_x_forwarded_for"';
  access_log logs/access.log main;
  sendfile on;
  tcp_nopush on;
  server_names_hash_bucket_size 128; # this seems to be required for some vhosts
  ...
Recommended code style:
http {

  # Attach global rules:
  include /etc/nginx/proxy.conf;
  include /etc/nginx/fastcgi.conf;

  index index.html index.htm index.php;

  default_type application/octet-stream;

  # Standard log format:
  log_format main '$remote_addr - $remote_user [$time_local] $status '
                  '"$request" $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';

  access_log /var/log/nginx/access.log main;

  sendfile on;
  tcp_nopush on;

  # This seems to be required for some vhosts:
  server_names_hash_bucket_size 128;

  ...
External resources
- Programming style
- Toward Developing Good Programming Style
- Death to the Space Infidels!
- Tabs versus Spaces: An Eternal Holy War
- nginx-config-formatter
- Format and beautify nginx config files
🔰 Use reload
option to change configurations on the fly
Rationale
Use the reload option to achieve a graceful reload of the configuration without stopping the server or dropping any packets. This function of the master process allows rolling back the changes and continuing to work with a stable, previously working configuration.
This ability of NGINX is very critical in high-uptime and dynamic environments for keeping the load balancer or standalone server online.
The master process checks the syntax validity of the new configuration and tries to apply all changes. If this procedure succeeds, the master process creates new worker processes and sends shutdown messages to the old ones. Old workers stop accepting new connections after receiving the shutdown signal, but current requests are still processed. After that, the old workers exit.
When you restart the NGINX service you might encounter a situation in which NGINX stops and won't start back again because of a syntax error. The reload method is safer than restarting because, before the old process is terminated, the new configuration file is parsed and the whole procedure is aborted if there are any problems with it.
To stop processes while waiting for the worker processes to finish serving current requests, use the nginx -s quit command. It's better than nginx -s stop, which performs a fast shutdown.
From NGINX documentation:
In order for NGINX to re-read the configuration file, a
HUP
signal should be sent to the master process. The master process first checks the syntax validity, then tries to apply new configuration, that is, to open log files and new listen sockets. If this fails, it rolls back changes and continues to work with old configuration. If this succeeds, it starts new worker processes, and sends messages to old worker processes requesting them to shut down gracefully. Old worker processes close listen sockets and continue to service old clients. After all clients are serviced, old worker processes are shut down.
Example
# 1)
systemctl reload nginx

# 2)
service nginx reload

# 3)
/etc/init.d/nginx reload

# 4)
/usr/sbin/nginx -s reload

# 5)
kill -HUP $(cat /var/run/nginx.pid)
# or
kill -HUP $(pgrep -f "nginx: master")

# 6)
/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload
External resources
- Changing Configuration
- Commands (from this handbook)
🔰 Separate listen
directives for 80 and 443 ports
Rationale
If you serve HTTP and HTTPS with the exact same config (a single server that handles both HTTP and HTTPS requests), NGINX is intelligent enough to ignore the SSL directives if loaded over port 80.
I don't like duplicating rules, but separate listen directives certainly help you maintain and modify your configuration. I always split the configuration if I want to redirect from HTTP to HTTPS (or www to non-www, and vice versa). For me, the right way is to define a separate server context in any such cases.
It's also useful if you pin multiple domains to one IP address. This allows you to attach one listen directive (e.g. if you keep it in a configuration file) to multiple domain configurations.
It may also be necessary to hardcode the domains if you’re using HTTPS, because you have to know upfront which certificates you’ll be providing.
You should also use the return directive to redirect from HTTP to HTTPS (to hardcode everything, and not use regular expressions at all).
Example
# For HTTP:
server {

  listen 10.240.20.2:80;

  # If you need redirect to HTTPS:
  return 301 https://example.com$request_uri;

  ...

}

# For HTTPS:
server {

  listen 10.240.20.2:443 ssl;

  ...

}
External resources
- Understanding the Nginx Configuration File Structure and Configuration Contexts
- Configuring HTTPS servers
- Force all connections over TLS — Hardening — P1 (from this handbook)
🔰 Define the listen
directives with address:port
pair
Rationale
NGINX translates all incomplete listen directives by substituting missing values with their default values.
What's more, NGINX will only evaluate the server_name directive when it needs to distinguish between server blocks that match to the same level in the listen directive.
Set the IP address and port number to prevent soft mistakes which may be difficult to debug. In addition, no IP means binding to all IPs on your system; this can cause a lot of problems, and it's bad practice because it is recommended to configure only the minimum network access for services.
Example
# Client side:
$ curl -Iks http://api.random.com

# Server side:
server {

  # This block will be processed:
  listen 192.168.252.10;  # --> 192.168.252.10:80

  ...

}

server {

  listen 80;  # --> *:80 --> 0.0.0.0:80
  server_name api.random.com;

  ...

}
External resources
- Nginx HTTP Core Module — Listen
- Understanding different values for nginx ‘listen’ directive
🔰 Prevent processing requests with undefined server names
Rationale
It protects against configuration errors, e.g. traffic forwarded to incorrect backends or bypassing filters like ACLs or WAFs. The problem is easily solved by creating a default dummy vhost (with the default_server directive) that catches all requests with unrecognized Host headers.
As we know, the Host header tells the server which virtual host to use (if set up). You can even have the same virtual host using several aliases (domains and wildcard domains). This header can also be modified, so for security and cleanliness reasons it's good practice to deny requests without a host, or with hosts not configured in any vhost. Accordingly, NGINX should prevent processing requests with undefined server names (also on an IP address).
If none of the listen directives have the default_server parameter, then the first server with the address:port pair will be the default server for this pair (meaning that NGINX always has a default server).
If someone makes a request using an IP address instead of a server name, the
Host
request header field will contain the IP address and the request can be handled using the IP address as the server name.
The server name _ is not required in modern versions of NGINX (so you can put anything there). In fact, the default_server does not need a server_name statement, because it matches anything that the other server blocks do not explicitly match.
If a server with a matching listen and server_name cannot be found, NGINX will use the default server. If your configurations are spread across multiple files, their evaluation order will be ambiguous, so you need to mark the default server explicitly.
NGINX uses the Host header for server_name matching, but it does not use TLS SNI. This means that NGINX must be able to accept the SSL connection, which boils down to having a certificate/key. The cert/key can be any, e.g. self-signed.
There is a simple procedure for all non-defined server names:
- one server block, with…
- a complete listen directive, with…
- the default_server parameter, with…
- only one server_name definition (but not required), and…
- preventively, I add it at the beginning of the configuration (attach it to the nginx.conf file)
Another good point is return 444; (most commonly used to deny malicious or malformed requests) for the default server name, because this will close the connection (killing it without sending any headers, so returning nothing) and log it internally, for any domain that isn't defined in NGINX. In addition, I would implement a rate limiting rule.
Example
# Place it at the beginning of the configuration file to prevent mistakes:
server {

  # For the ssl option remember about SSL parameters (private key, certs, cipher suites, etc.);
  # add default_server to your listen directive in the server that you want to act as the default:
  listen 10.240.20.2:443 default_server ssl;

  # We catch:
  #   - invalid domain names
  #   - requests without the "Host" header
  #   - and all others (also due to the above setting; like "--" or "!@#")
  #   - default_server in server_name directive is not required
  #     I add this for a better understanding and I think it's an unwritten standard
  #     ...but you should know that it's irrelevant, really, you can put in everything there.
  server_name _ "" default_server;

  limit_req zone=per_ip_5r_s;

  ...

  # Close (hang up) connection without response:
  return 444;

  # We can also serve:
  # location / {
  #
  #   # static file (error page):
  #   root /etc/nginx/error-pages/404;
  #   # or redirect:
  #   return 301 https://badssl.com;
  #
  # }

  # Remember to log all actions (set up access and error log):
  access_log /var/log/nginx/default-access.log main;
  error_log /var/log/nginx/default-error.log warn;

}

server {

  listen 10.240.20.2:443 ssl;
  server_name example.com;

  ...

}

server {

  listen 10.240.20.2:443 ssl;
  server_name domain.org;

  ...

}
External resources
- Server names
- How processes a request
- nginx: how to specify a default server
🔰 Never use a hostname in a listen
or upstream
directives
Rationale
Generally, using hostnames in listen or upstream directives is bad practice. In the worst case NGINX won't be able to bind to the desired TCP socket, which will prevent NGINX from starting at all.
The best and safest way is to know the IP address that needs to be bound and use that address instead of the hostname. This also prevents NGINX from needing to look up the address and removes dependencies on external and internal resolvers.
Using the $hostname variable (the machine's hostname) in the server_name directive is also an example of bad practice (it's similar to using a hostname label).
I believe it is also necessary to set the IP address and port number pair to prevent soft mistakes which may be difficult to debug.
Example
Not recommended configuration:
upstream bk_01 {

  server x-9s-web01-prod.int:8080;

}

server {

  listen rev-proxy-prod:80;

  ...

  location / {

    # It's OK, bk_01 is the internal name:
    proxy_pass http://bk_01;

    ...

  }

  location /api {

    proxy_pass http://x-9s-web01-prod-api.int:80;

    ...

  }

  ...

}
Recommended configuration:
upstream bk_01 {

  server 192.168.252.200:8080;

}

server {

  listen 10.10.100.20:80;

  ...

  location / {

    # It's OK, bk_01 is the internal name:
    proxy_pass http://bk_01;

    ...

  }

  location /api {

    proxy_pass http://192.168.253.10:80;

    ...

  }

  ...

}
External resources
- Using a Hostname to Resolve Addresses
- Define the listen directives with address:port pair — Base Rules — P1 (from this handbook)
🔰 Set the HTTP headers with add_header
and proxy_*_header
directives properly
Rationale
The add_header directive works in the if, location, server, and http scopes. The proxy_*_header directives work in the location, server, and http scopes. These directives are inherited from the previous level if and only if there are no add_header or proxy_*_header directives defined on the current level.
If you use them in multiple contexts, only the lowest occurrences are used. So, if you specify them in both the server and location contexts (even if you hide a different header by setting the same directive and value), only the ones in the location block are used. To prevent this situation, you should define a common config snippet and include it in each individual location where you want these headers to be sent. This is the most predictable solution.
In my opinion, another interesting solution is to use an include file with your global headers and add it to the http context (however, then you duplicate the rules unnecessarily). Next, you should also set up another include file with your server/domain-specific configuration (but always with your global headers! You have to repeat them in the lowest contexts) and add it to the server/location contexts. However, it is a little more complicated and does not guarantee consistency in any way.
There are additional solutions to this, such as using an alternative module (headers-more-nginx-module) to define specific headers in server or location blocks. It does not affect the above directives.
That is great explanation of the problem:
Therefore, let's say you have an http block and have specified the add_header directive within that block. Then, within the http block you have 2 server blocks: one for HTTP and one for HTTPS.
Let's say we don't include an add_header directive within the HTTP server block; however, we do include an additional add_header within the HTTPS server block. In this scenario, the add_header directive defined in the http block will only be inherited by the HTTP server block, as it does not have any add_header directive defined on the current level. On the other hand, the HTTPS server block will not inherit the add_header directive defined in the http block.
Example
Not recommended configuration:
```
http {

  # In this context:
  #   set:
  #     - 'FooX barX' (add_header)
  #     - 'Host $host' (proxy_set_header)
  #     - 'X-Real-IP $remote_addr' (proxy_set_header)
  #     - 'X-Forwarded-For $proxy_add_x_forwarded_for' (proxy_set_header)
  #     - 'X-Powered-By' (proxy_hide_header)
  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_hide_header X-Powered-By;

  add_header FooX barX;

  ...

  server {

    server_name example.com;

    # In this context:
    #   set:
    #     - 'FooY barY' (add_header)
    #     - 'Host $host' (proxy_set_header)
    #     - 'X-Real-IP $remote_addr' (proxy_set_header)
    #     - 'X-Forwarded-For $proxy_add_x_forwarded_for' (proxy_set_header)
    #     - 'X-Powered-By' (proxy_hide_header)
    #   not set:
    #     - 'FooX barX' (add_header)
    add_header FooY barY;

    ...

    location / {

      # In this context:
      #   set:
      #     - 'Foo bar' (add_header)
      #     - 'Host $host' (proxy_set_header)
      #     - 'X-Real-IP $remote_addr' (proxy_set_header)
      #     - 'X-Forwarded-For $proxy_add_x_forwarded_for' (proxy_set_header)
      #     - 'X-Powered-By' (proxy_hide_header)
      #     - headers from ngx_headers_global.conf
      #   not set:
      #     - 'FooX barX' (add_header)
      #     - 'FooY barY' (add_header)
      include /etc/nginx/ngx_headers_global.conf;
      add_header Foo bar;

      ...

    }

    location /api {

      # In this context:
      #   set:
      #     - 'FooY barY' (add_header)
      #     - 'Host $host' (proxy_set_header)
      #     - 'X-Real-IP $remote_addr' (proxy_set_header)
      #     - 'X-Forwarded-For $proxy_add_x_forwarded_for' (proxy_set_header)
      #     - 'X-Powered-By' (proxy_hide_header)
      #   not set:
      #     - 'FooX barX' (add_header)
      ...

    }

  }

  server {

    server_name a.example.com;

    # In this context:
    #   set:
    #     - 'FooY barY' (add_header)
    #     - 'Host $host' (proxy_set_header)
    #     - 'X-Real-IP $remote_addr' (proxy_set_header)
    #     - 'X-Powered-By' (proxy_hide_header)
    #   not set:
    #     - 'FooX barX' (add_header)
    #     - 'X-Forwarded-For $proxy_add_x_forwarded_for' (proxy_set_header)
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_hide_header X-Powered-By;

    add_header FooY barY;

    ...

    location / {

      # In this context:
      #   set:
      #     - 'FooY barY' (add_header)
      #     - 'X-Powered-By' (proxy_hide_header)
      #     - 'Accept-Encoding ""' (proxy_set_header)
      #   not set:
      #     - 'FooX barX' (add_header)
      #     - 'Host $host' (proxy_set_header)
      #     - 'X-Real-IP $remote_addr' (proxy_set_header)
      #     - 'X-Forwarded-For $proxy_add_x_forwarded_for' (proxy_set_header)
      proxy_set_header Accept-Encoding "";

      ...

    }

  }

}
```
Recommended configuration:
```
# Store these directives in a file, e.g. proxy_headers.conf:
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_hide_header X-Powered-By;
```

```
http {

  server {

    server_name example.com;

    ...

    location / {

      include /etc/nginx/proxy_headers.conf;
      include /etc/nginx/ngx_headers_global.conf;
      add_header Foo bar;

      ...

    }

    location /api {

      include /etc/nginx/proxy_headers.conf;
      include /etc/nginx/ngx_headers_global.conf;
      add_header Foo bar;

      more_set_headers 'FooY: barY';

      ...

    }

  }

  server {

    server_name a.example.com;

    ...

    location / {

      include /etc/nginx/proxy_headers.conf;
      include /etc/nginx/ngx_headers_global.conf;
      add_header Foo bar;
      add_header FooX barX;

      ...

    }

  }

  server {

    server_name b.example.com;

    ...

    location / {

      include /etc/nginx/proxy_headers.conf;
      include /etc/nginx/ngx_headers_global.conf;
      add_header Foo bar;

      ...

    }

  }

}
```
External resources
- Module ngx_http_headers_module — add_header
- Managing request headers
- Nginx add_header configuration pitfall
- Be very careful with your add_header in Nginx! You might make your site insecure
🔰 Use only one SSL config for the listen
directive
Rationale
For me, this rule makes it easier to debug and maintain. It also prevents multiple TLS configurations on the same listening address.

You should use one SSL config for sharing a single IP address between several HTTPS configurations (e.g. protocols, ciphers, curves), to prevent mistakes and configuration mismatches.

Using a common TLS configuration (stored in one file and added with the include directive) for all server contexts prevents strange behaviours. I can think of no better cure for possible configuration clutter.

Remember that, regardless of the SSL parameters, you are able to use multiple SSL certificates on the same listen directive (IP address). Some of the TLS parameters may also differ.

Also remember about the configuration for the default server. It's important because if none of the listen directives has the default_server parameter, the first server in your configuration will be the default server. Therefore you should use only one SSL setup for several server names on the same IP address.
Example
```
# Store the SSL parameters in a file, e.g. https.conf:
ssl_protocols TLSv1.2;
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305";
ssl_prefer_server_ciphers off;
ssl_ecdh_curve secp521r1:secp384r1;

...

# Include this file in the server context (attach domain-a.com to the specific listen directive):
server {

  listen 192.168.252.10:443 default_server ssl http2;

  include /etc/nginx/https.conf;

  server_name domain-a.com;

  ssl_certificate domain-a.com.crt;
  ssl_certificate_key domain-a.com.key;

  ...

}

# Include this file in the server context (attach domain-b.com to the specific listen directive):
server {

  listen 192.168.252.10:443 ssl;

  include /etc/nginx/https.conf;

  server_name domain-b.com;

  ssl_certificate domain-b.com.crt;
  ssl_certificate_key domain-b.com.key;

  ...

}
```
External resources
- Nginx one ip and multiple ssl certificates
- Configuring HTTPS servers
🔰 Use geo/map
modules instead of allow/deny
Rationale
Use the map or geo module (one of them) to prevent users from abusing your servers. They allow you to create variables with values depending on the client IP address.

Since variables are evaluated only when used, the mere existence of even a large number of declared (e.g. geo) variables does not cause any extra cost for request processing.

These directives provide a perfect way to block unwanted visitors, e.g. together with ngx_http_geoip_module. For example, the geo module is great for conditionally allowing or denying IPs.

The geo module (watch out: don't mistake this module for GeoIP) builds an in-memory radix tree when loading configs. This is the same data structure as used in routing, and lookups are really fast.

I use both modules for large lists, but these directives may require the use of several if conditions. For me, the allow/deny directives are the better (plainer) solution for simple lists.
Example
```
# Map module:
map $remote_addr $globals_internal_map_acl {

  # Status code:
  #   - 0 = false
  #   - 1 = true
  default 0;

  ### INTERNAL ###
  10.255.10.0/24 1;
  10.255.20.0/24 1;
  10.255.30.0/24 1;
  192.168.0.0/16 1;

}

# Geo module:
geo $globals_internal_geo_acl {

  # Status code:
  #   - 0 = false
  #   - 1 = true
  default 0;

  ### INTERNAL ###
  10.255.10.0/24 1;
  10.255.20.0/24 1;
  10.255.30.0/24 1;
  192.168.0.0/16 1;

}
```
Take a look also at the example below (allow/deny vs geo/map + if statement):
```
# allow/deny:
location /internal {

  include acls/internal.conf;
  allow 192.168.240.0/24;
  deny all;

  ...

}

# vs geo/map:
location /internal {

  if ($globals_internal_map_acl) {
    set $pass 1;
  }

  if ($pass = 1) {
    proxy_pass http://localhost:80;
  }

  if ($pass != 1) {
    return 403;
  }

  ...

}
```
External resources
- Nginx Basic Configuration (Geo Ban)
- What is the best way to redirect 57,000 URLs on nginx?
- How Radix trees made blocking IPs 5000 times faster
- Compressing Radix Trees Without (Too Many) Tears
- Blocking/allowing IP addresses (from this handbook)
- allow and deny (from this handbook)
- ngx_http_geoip_module (from this handbook)
🔰 Map all the things…
Rationale
Manage a large number of redirects with maps and use them to customise your key-value pairs. If you are ever faced with using an if during a request, check whether you can use a map instead.

The map directive maps strings, so it is possible to represent e.g. 192.168.144.0/24 as a regular expression and continue to use the map directive.
The map module provides a more elegant solution for cleanly parsing a big list of regexes, e.g. User-Agents or Referrers.

You can also use the include directive for your maps, so your config files look tidy and the maps can be reused in many places in your configuration.
Example
```
# Define in an external file (e.g. maps/http_user_agent.conf):
map $http_user_agent $device_redirect {

  default "desktop";

  ~(?i)ip(hone|od)            "mobile";
  ~(?i)android.*(mobile|mini) "mobile";
  ~Mobile.+Firefox            "mobile";
  ~^HTC                       "mobile";
  ~Fennec                     "mobile";
  ~IEMobile                   "mobile";
  ~BB10                       "mobile";
  ~SymbianOS.*AppleWebKit     "mobile";
  ~Opera\sMobi                "mobile";

}

# Include it in the server context:
include maps/http_user_agent.conf;

# And turn it on in a specific context (e.g. location):
if ($device_redirect = "mobile") {

  return 301 https://m.example.com$request_uri;

}
```
External resources
- Module ngx_http_map_module
- Cool Nginx feature of the week
🔰 Set global root directory for unmatched locations
Rationale
Set a global root inside the server directive. It specifies the root directory for undefined locations.

If you define root in a location block, it will only be available in that location. This almost always leads to duplication of either root directives or file paths, neither of which is good.

If you define it in the server block, it is always inherited by the location blocks, so it will always be available in the $document_root variable, thus avoiding the duplication of file paths.
From the official documentation:

If you add a root to every location block then a location block that isn't matched will have no root. Therefore, it is important that a root directive occur prior to your location blocks, which can then override this directive if they need to.
Example
```
server {

  server_name example.com;

  # It's important:
  root /var/www/example.com/public;

  location / {

    ...

  }

  location /api {

    ...

  }

  location /static {

    root /var/www/example.com/static;

    ...

  }

}
```
External resources
- Nginx Pitfalls: Root inside location block
🔰 Use return
directive for URL redirection (301, 302)
Rationale
It’s a simple rule. You should use server blocks and
return
statements as they’re way faster than evaluating RegEx.
It is simpler and faster because NGINX stops processing the request (and doesn’t have to process a regular expressions). More than that, you can specify a code in the 3xx series.
If you have a scenario where you need to validate the URL with a regex or need to capture elements in the original URL (that are obviously not in a corresponding NGINX variable), then you should use
rewrite
.
Example
```
server {

  listen 192.168.252.10:80;

  ...

  server_name www.example.com;

  return 301 https://example.com$request_uri;

  # Other examples:
  #   return 301 https://$host$request_uri;
  #   return 301 $scheme://$host$request_uri;

}

server {

  ...

  server_name example.com;

  return 301 $scheme://www.example.com$request_uri;

}
```
External resources
- Creating NGINX Rewrite Rules
- How to do an Nginx redirect
- rewrite vs return (from this handbook)
- Adding and removing the www prefix (from this handbook)
- Avoid checking server_name with the if directive — Performance — P2 (from this handbook)
- Use return directive instead of rewrite for redirects — Performance — P2 (from this handbook)
🔰 Configure log rotation policy
Rationale
Log files give you feedback about the activity and performance of the server, as well as any problems that may be occurring. They record details about requests and NGINX internals. Unfortunately, logs use more and more disk space over time.

You should define a process which periodically archives the current log file and starts a new one: it renames and optionally compresses the current log files, deletes old log files, and forces the logging system to begin using new log files.

I think the best tool for this is logrotate. I use it everywhere I want to manage logs automatically, and for a good night's sleep also. It is a simple program for rotating logs, and it uses crontab to work. It's scheduled work, not a daemon, so there is no need to reload its configuration.
Example
- for manual rotation:
```
# Check manually (all log files):
logrotate -dv /etc/logrotate.conf

# Check manually with forced rotation (specific log file):
logrotate -dv --force /etc/logrotate.d/nginx
```
- for automated rotation:
```
# GNU/Linux distributions:
cat > /etc/logrotate.d/nginx << __EOF__
/var/log/nginx/*.log {
  daily
  missingok
  rotate 14
  compress
  delaycompress
  notifempty
  create 0640 nginx nginx
  sharedscripts
  prerotate
    if [ -d /etc/logrotate.d/httpd-prerotate ]; then
      run-parts /etc/logrotate.d/httpd-prerotate;
    fi
  endscript
  postrotate
    # test ! -f /var/run/nginx.pid || kill -USR1 `cat /var/run/nginx.pid`
    invoke-rc.d nginx reload >/dev/null 2>&1
  endscript
}

/var/log/nginx/localhost/*.log {
  daily
  missingok
  rotate 14
  compress
  delaycompress
  notifempty
  create 0640 nginx nginx
  sharedscripts
  prerotate
    if [ -d /etc/logrotate.d/httpd-prerotate ]; then
      run-parts /etc/logrotate.d/httpd-prerotate;
    fi
  endscript
  postrotate
    # test ! -f /var/run/nginx.pid || kill -USR1 `cat /var/run/nginx.pid`
    invoke-rc.d nginx reload >/dev/null 2>&1
  endscript
}

/var/log/nginx/domains/example.com/*.log {
  daily
  missingok
  rotate 14
  compress
  delaycompress
  notifempty
  create 0640 nginx nginx
  sharedscripts
  prerotate
    if [ -d /etc/logrotate.d/httpd-prerotate ]; then
      run-parts /etc/logrotate.d/httpd-prerotate;
    fi
  endscript
  postrotate
    # test ! -f /var/run/nginx.pid || kill -USR1 `cat /var/run/nginx.pid`
    invoke-rc.d nginx reload >/dev/null 2>&1
  endscript
}
__EOF__
```
```
# BSD systems:
cat > /usr/local/etc/logrotate.d/nginx << __EOF__
/var/log/nginx/*.log {
  daily
  rotate 14
  missingok
  sharedscripts
  compress
  postrotate
    kill -HUP `cat /var/run/nginx.pid`
  endscript
  dateext
}

/var/log/nginx/*/*.log {
  daily
  rotate 14
  missingok
  sharedscripts
  compress
  postrotate
    kill -HUP `cat /var/run/nginx.pid`
  endscript
  dateext
}
__EOF__
```
External resources
- Understanding logrotate utility
- Rotating Linux Log Files — Part 2: Logrotate
- Managing Logs with Logrotate
- nginx and Logrotate
- nginx log rotation
🔰 Use simple custom error pages
Rationale
The default error pages in NGINX are simple, but they reveal version information and return the «nginx» string, which leads to an information leakage vulnerability.

Information about the technologies used and the software versions is extremely valuable. These details allow the identification and exploitation of known software weaknesses published in publicly available vulnerability databases.
The best option is to generate pages for each HTTP code, or to use the SSI and map modules to create dynamic error pages.

You can set up a custom error page for every location block in your nginx.conf, or a global error page for the site as a whole. You can also append several standard error codes together to have a single page for several types of errors.
Be careful with the syntax! You should drop the = out of the error_page directive: with error_page 404 = /404.html;, the 404.html page is returned with a status code of 200 (the = has relayed that code to this page), so you should set error_page 404 /404.html; and you'll get the original error code returned.
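A minimal sketch of the difference described above:

```
# Returns /404.html with a 200 status code (the "=" resets the response code):
error_page 404 = /404.html;

# Returns /404.html and preserves the original 404 status code:
error_page 404 /404.html;
```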
You should also remember about HTTP request smuggling attacks (see more):

- error_page 401 https://example.org/; — this handler is vulnerable, allowing an attacker to smuggle a request and potentially gain access to resources/information
- error_page 404 /404.html; and error_page 404 @404; — these are not vulnerable
To generate custom error pages you can use HTTP Static Error Pages Generator.
Example
Create error page templates:
```
cat >> /usr/share/nginx/html/404.html << __EOF__
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
</body>
</html>
__EOF__

# Just an example, I know it is stupid...
cat >> /usr/share/nginx/html/50x.html << __EOF__
<html>
<head><title>server error</title></head>
<body>
<center><h1>server error</h1></center>
</body>
</html>
__EOF__
```
Set them on the NGINX side:
```
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;

location = /404.html {

  root /usr/share/nginx/html;
  internal;

}

location = /50x.html {

  root /usr/share/nginx/html;
  internal;

}
```
External resources
- error_page from ngx_http_core_module
- src/http/ngx_http_special_response.c
- HTTP Status Codes
- One NGINX error page to rule them all
- NGINX — Custom Error Pages. A Decent Title Not Found
- Dynamic error pages with SSI (from this handbook)
🔰 Don’t duplicate index
directive, use it only in the http block
Rationale
Use the index directive once. It only needs to occur in your http context and it will be inherited below.

I think we should be careful about duplicating the same rules. But, of course, duplication of rules is sometimes okay, or at least not necessarily a great evil.
Example
Not recommended configuration:
```
http {

  ...

  index index.php index.htm index.html;

  server {

    server_name www.example.com;

    location / {

      index index.php index.html index.$geo.html;

      ...

    }

  }

  server {

    server_name www.example.com;

    location / {

      index index.php index.htm index.html;

      ...

    }

    location /data {

      index index.php;

      ...

    }

    ...

  }

}
```
Recommended configuration:
```
http {

  ...

  index index.php index.htm index.html index.$geo.html;

  server {

    server_name www.example.com;

    location / {

      ...

    }

  }

  server {

    server_name www.example.com;

    location / {

      ...

    }

    location /data {

      ...

    }

    ...

  }

}
```
External resources
- Pitfalls and Common Mistakes — Multiple Index Directives
Debugging
Go back to the Table of Contents or What’s next? section.
📌 NGINX has many methods for troubleshooting issues. In this chapter I will present a few ways to deal with them.
- Base Rules
- ≡ Debugging (5)
- Use custom log formats
- Use debug mode to track down unexpected behaviour
- Improve debugging by disabling the daemon, master process, and all workers except one
- Use core dumps to figure out why NGINX keeps crashing
- Use mirror module to copy requests to another backend
- Performance
- Hardening
- Reverse Proxy
- Load Balancing
- Others
🔰 Use custom log formats
Rationale
Anything you can access as a variable in the NGINX config you can log, including non-standard HTTP headers, etc., so it's a simple way to create your own log format for specific situations.

This is extremely helpful for debugging specific location directives.

I also use custom log formats to analyse users' traffic profiles (e.g. SSL/TLS version, ciphers, and many more).
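For example, a minimal sketch of such a format (the format name and log path are made up; the variables come from ngx_http_ssl_module):

```
# Log the TLS parameters negotiated by each client:
log_format tls-params '$remote_addr [$time_local] "$request" $status '
                      '$ssl_protocol $ssl_cipher $ssl_session_reused';

server {

  access_log /var/log/nginx/tls-params.log tls-params;

  ...

}
```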
Example
```
# Default main log format from the nginx repository:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';

# Extended main log format:
log_format main-level-0 '$remote_addr - $remote_user [$time_local] '
                        '"$request_method $scheme://$host$request_uri '
                        '$server_protocol" $status $body_bytes_sent '
                        '"$http_referer" "$http_user_agent" '
                        '$request_time';

# Debug log formats:
#   - level 0
#   - based on main-level-0 without "$http_referer" "$http_user_agent"
log_format debug-level-0 '$remote_addr - $remote_user [$time_local] '
                         '"$request_method $scheme://$host$request_uri '
                         '$server_protocol" $status $body_bytes_sent '
                         '$request_id $pid $msec $request_time '
                         '$upstream_connect_time $upstream_header_time '
                         '$upstream_response_time "$request_filename" '
                         '$request_completion';
```
External resources
- Module ngx_http_log_module
- Nginx: Custom access log format and error levels
- nginx: Log complete request/response with all headers?
- Custom log formats (from this handbook)
🔰 Use debug mode to track down unexpected behaviour
Rationale
It will probably return more details than you want, but that can sometimes be a lifesaver (however, the log file grows rapidly on very high-traffic sites).
Generally, the error_log directive is specified in the main context, but you can specify it inside a particular server or location block; the global settings will then be overridden, and such an error_log directive will set its own path to the log file and its own level of logging.

It is possible to enable the debugging log for a particular IP address or a range of IP addresses (see the examples).

The alternative method of storing the debug log is to keep it in memory (in a cyclic memory buffer). The memory buffer at the debug level does not have a significant impact on performance, even under high load.
If you want logging from ngx_http_rewrite_module (at the notice level), you should enable rewrite_log on; in the http, server, or location context.
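A minimal sketch of that setup (the log path is illustrative):

```
server {

  # rewrite_log emits its messages at the "notice" level:
  error_log /var/log/nginx/rewrite.log notice;
  rewrite_log on;

  ...

}
```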
Words of caution:

- never leave debug logging to a file enabled in production
- don't forget to revert the debug level for error_log on very high-traffic sites
- absolutely use a log rotation policy
A while ago, I found this interesting comment:

notice is much better than debug as the error level for debugging rewrites, because it will skip a lot of low-level irrelevant debug info (e.g. SSL or gzip details; 50+ lines per request).
Example
-
Debugging log to a file:
```
# Turn on in a specific context, e.g.:
#   - global   - for global logging
#   - http     - for http and all locations logging
#   - location - for a specific location
error_log /var/log/nginx/error-debug.log debug;
```
-
Debugging log to memory:
```
error_log memory:32m debug;
```
You can read more about that in the Show debug log in memory chapter.
-
Debugging log for selected client connections:
```
events {

  # Other connections will use the logging level set by the error_log directive.
  debug_connection 192.168.252.15/32;
  debug_connection 10.10.10.0/24;

}
```
-
Debugging log for each server:
```
error_log /var/log/nginx/debug.log debug;

...

http {

  server {

    # To enable debugging:
    error_log /var/log/nginx/example.com/example.com-debug.log debug;

    # To disable debugging:
    error_log /var/log/nginx/example.com/example.com-debug.log;

    ...

  }

}
```
External resources
- Debugging NGINX
- A debugging log
- A little note to all nginx admins there — debug log
- Error log severity levels (from this handbook)
🔰 Improve debugging by disabling the daemon, master process, and all workers except one
Rationale
These directives with the following values are mainly used during development and debugging, e.g. while testing a bug or feature.

For example, daemon off; and master_process off; let me test configurations rapidly.

For normal production, the NGINX server will start in the background (daemon on;). In this way NGINX and other services are running and talking to each other. One server runs many services.
In a development or debugging environment (you should never run NGINX in production with this), using master_process off;, I usually run NGINX in the foreground without the master process and press ^C (SIGINT) to terminate it simply.

worker_processes 1; is also very useful because it reduces the number of worker processes and the data they generate, which makes debugging much more comfortable.
Example
```
# Update the configuration file (in the global context):
daemon off;
master_process off;
worker_processes 1;

# Test the configuration with these settings:
/usr/sbin/nginx -t -g 'daemon off; master_process off; worker_processes 1;'

# Or run NGINX from the shell (one-liner):
/usr/sbin/nginx -g 'daemon off; master_process off; worker_processes 1;'
```
External resources
- Core functionality
🔰 Use core dumps to figure out why NGINX keeps crashing
Rationale
A core dump is basically a snapshot of the memory at the moment the program crashed.

NGINX is a very stable daemon, but sometimes a running NGINX process can terminate unexpectedly. You should always enable core dumps when your NGINX instance receives an unexpected error or crashes.

There are two important directives that should be enabled if you want memory dumps to be saved; however, in order to properly handle memory dumps, there are a few more things to do. For full information about it, see the Dump a process's memory (from this handbook) chapter.
Also keep in mind other debugging and troubleshooting tools, such as eBPF, ftrace, perf trace, or strace, on the worker process for tracing syscalls like read/readv/write/writev/close/shutdown (note: strace pauses the target process for each syscall so that the debugger can read its state, and does this twice, when the syscall begins and when it ends, so it can bring down your production environment).
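For illustration (not from the handbook), a sketch of tracing such syscalls on a single worker process; <worker_pid> is a placeholder:

```
# Attach to a running worker and trace file/socket syscalls with timestamps;
# remember that strace visibly slows the traced process down:
strace -tt -f -e trace=read,readv,write,writev,close,shutdown -p <worker_pid>
```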
Example
```
worker_rlimit_core 500m;
worker_rlimit_nofile 65535;
working_directory /var/dump/nginx;
```
External resources
- Debugging — Core dump
- Debugging (from this handbook)
- Dump a process’s memory (from this handbook)
- Debugging socket leaks (from this handbook)
- Debugging Symbols (from this handbook)
🔰 Use mirror module to copy requests to another backend
Rationale
Traffic mirroring is very useful for:

- analyzing and debugging the original request
- pre-production tests (handling real production traffic)
- logging requests for security analysis and content inspection
- traffic troubleshooting (diagnosing errors)
- copying real traffic to a test environment without considerable changes to the production system
Mirroring itself doesn't affect the original requests (only requests are mirrored; responses from the mirror backend are not analyzed). What's more, errors in the mirror backend don't affect the main backend.
If you use mirroring, keep in mind:
Delayed processing of the next request is a known side-effect of how mirroring is implemented in NGINX, and this is unlikely to change. The point was to make sure this was actually the case.

Usually a mirror subrequest does not affect the main request. However, there are two issues with mirroring:

- the next request on the same connection will not be processed until all mirror subrequests finish; try disabling keepalive for the primary location and see if it helps
- if you use sendfile and tcp_nopush, it's possible that the response is not pushed properly because of a mirror subrequest, which may result in a delay; turn off sendfile and see if it helps
Example
```
location / {

  log_subrequest on;

  mirror /backend-mirror;
  mirror_request_body on;

  proxy_pass http://bk_web01;

  # Indicates whether the header fields of the original request
  # and the original request body are passed to the proxied server:
  proxy_pass_request_headers on;
  proxy_pass_request_body on;

  # Uncomment if you have problems with latency:
  # keepalive_timeout 0;

}

location = /backend-mirror {

  internal;

  proxy_pass http://bk_web01_debug$request_uri;

  # Pass the headers that will be sent to the mirrored backend:
  proxy_set_header M-Server-Port $server_port;
  proxy_set_header M-Server-Addr $server_addr;
  proxy_set_header M-Host $host; # or $http_host for <host:port>
  proxy_set_header M-Real-IP $remote_addr;
  proxy_set_header M-Request-ID $request_id;
  proxy_set_header M-Original-URI $request_uri;

}
```
External resources
- Module ngx_http_mirror_module
- nginx mirroring tips and tricks
Performance
Go back to the Table of Contents or What’s next? section.
📌 NGINX is insanely fast, but you can adjust a few things to make sure it's as fast as possible for your use case.
- Base Rules
- Debugging
- ≡ Performance (13)
- Adjust worker processes
- Use HTTP/2
- Maintaining SSL sessions
- Enable OCSP Stapling
- Use exact names in a server_name directive if possible
- Avoid checking server_name with the if directive
- Use $request_uri to avoid using regular expressions
- Use try_files directive to ensure a file exists
- Use return directive instead of rewrite for redirects
- Enable PCRE JIT to speed up processing of regular expressions
- Activate the cache for connections to upstream servers
- Make an exact location match to speed up the selection process
- Use limit_conn to improve limiting the download speed
- Hardening
- Reverse Proxy
- Load Balancing
- Others
🔰 Adjust worker processes
Rationale
The worker_processes directive is the sturdy spine of NGINX. This directive is responsible for letting our virtual server know how many workers to spawn once it has become bound to the proper IP and port(s), and its value is helpful in CPU-intensive work.

The safest setting is to use the number of cores by passing auto. You can adjust this value for maximum throughput under high concurrency. The value should be changed to an optimal one depending on the number of available cores, disks, the network subsystem, server load, and so on.
How many worker processes do you need? Do some load testing. Hit the app hard and see what happens with only one worker. Then add some more and hit it again. At some point you'll truly saturate the server resources. That's when you know you have the right balance.

In my opinion, for high-load proxy servers (also standalone servers), an interesting value is ALL_CORES - 1 (or more), because if you're running NGINX with other critical services on the same server, you're just going to thrash the CPUs with all the context switching required to manage all of those processes.

Rule of thumb: if much time is spent blocked on I/O, worker processes should be increased further.

Increasing the number of worker processes is a great way to overcome a single CPU core bottleneck, but it may open a whole new set of problems.
Official NGINX documentation say:
When one is in doubt, setting it to the number of available CPU cores would be a good start (the value «auto» will try to autodetect it). […] running one worker process per CPU core — makes the most efficient use of hardware resources.
Example
```
# The safest and recommended way:
worker_processes auto;

# Alternative, e.g. for VCPU = 4:
#   expr $(nproc --all) - 1
#   grep "processor" /proc/cpuinfo | wc -l
worker_processes 3;
```
External resources
- Nginx Core Module — worker_processes
- Processes (from this handbook)
🔰 Use HTTP/2
Rationale
HTTP/2 will make our applications faster, simpler, and more robust. The primary goals of HTTP/2 are to reduce latency by enabling full request and response multiplexing, minimise protocol overhead via efficient compression of HTTP header fields, and add support for request prioritisation and server push. HTTP/2 also has an extremely large blacklist of old and insecure ciphers.
The http2 directive configures the port to accept HTTP/2 connections. This doesn't mean it accepts only HTTP/2 connections. HTTP/2 is backwards-compatible with HTTP/1.1, so it would be possible to ignore it completely and everything will continue to work as before, because a client that does not support HTTP/2 will never ask the server for an HTTP/2 communication upgrade: the communication between them will be plain HTTP/1.1.
HTTP/2 multiplexes many requests within a single TCP connection. Typically, a single TCP connection is established to a server when HTTP/2 is in use.
You should also enable the ssl parameter (although NGINX can also be configured to accept HTTP/2 connections without SSL); it is required because browsers do not support HTTP/2 without encryption (the h2 specification allows using HTTP/2 over an insecure http:// scheme, but browsers have not implemented this, and most do not plan to). Note that accepting HTTP/2 connections over TLS requires «Application-Layer Protocol Negotiation» (ALPN) TLS extension support.
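A quick way to check whether ALPN negotiation to h2 works (assuming an OpenSSL client with ALPN support, i.e. 1.0.2 or later):

```
# "ALPN protocol: h2" in the output means HTTP/2 was negotiated:
echo | openssl s_client -alpn h2 -connect example.com:443 -servername example.com 2>/dev/null | grep "ALPN"
```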
Obviously, there is no pleasure without pain. HTTP/2 is more secure than HTTP/1.1; however, serious vulnerabilities have been detected in the HTTP/2 protocol. For more information please see HTTP/2 can shut you down!, On the recent HTTP/2 DoS attacks, and HTTP/2: In-depth analysis of the top four flaws of the next generation web protocol [pdf].

Let's not forget about backwards compatibility with HTTP/1.1, also when it comes to security. Many of the vulnerabilities of HTTP/1.1 may be present in HTTP/2.
To test your server with RFC 7540 [IETF] (HTTP/2) and RFC 7541 [IETF] (HPACK) use h2spec tool.
Example
```
server {

  listen 10.240.20.2:443 ssl http2;

  ...
```
External resources
- RFC 7540 — HTTP/2 [IETF]
- RFC 7540 — HTTP/2: Security Considerations [IETF]
- Introduction to HTTP/2
- What is HTTP/2 — The Ultimate Guide
- The HTTP/2 Protocol: Its Pros & Cons and How to Start Using It
- HTTP/2 Compatibility with old Browsers and Servers
- HTTP 2 protocol – it is faster, but is it also safer?
- HTTP/2 Denial of Service Advisory
- HTTP/2, Brute! Then fall, server. Admin! Ops! The server is dead
- HTTP Headers (from this handbook)
🔰 Maintaining SSL sessions
Rationale
The default, «built-in» session cache is not optimal, as it can be used by only one worker process and can cause memory fragmentation.
Enabling session caching with the ssl_session_cache directive helps to reduce NGINX server CPU load. It also improves performance from the clients' perspective, because it eliminates the need for a new (and time-consuming) SSL handshake each time a request is made. What's more, it is much better to use a shared cache.

When using ssl_session_cache, the performance of keep-alive connections over SSL can be increased enormously. A value of 10m is a good starting point (1 MB of shared cache can hold approximately 4,000 sessions). With shared, a cache is shared between all worker processes (and a cache with the same name can be used in several virtual servers).

For TLSv1.2, RFC 5246 — Resuming Sessions recommends that sessions are not kept alive for more than 24 hours (it is the maximum time). Generally, TLS sessions cannot be resumed unless both the client and server agree, and either party should force a full handshake if it suspects that the session may have been compromised, or that certificates may have expired or been revoked. A while ago I came across ssl_session_timeout set to a shorter time (e.g. 15 minutes) to prevent abuse by advertisers like Google and Facebook; I guess it makes sense.
On the other hand, RFC 5077 — Ticket Lifetime says:
The ticket lifetime may be longer than the 24-hour lifetime recommended in RFC4346. TLS clients may be given a hint of the lifetime of the ticket. Since the lifetime of a ticket may be unspecified, a client has its own local policy that determines when it discards tickets.
Most servers do not purge sessions or ticket keys, thus increasing the risk that a server compromise would leak data from previous (and future) connections.
Vincent Bernat wrote a great tool for testing session resumption with and without tickets.

Ivan Ristić (founder of Hardenize) says:
Session resumption either creates a large server-side cache that can be broken into or, with tickets, kills forward secrecy. So you have to balance performance (you don’t want your users to use full handshakes on every connection) and security (you don’t want to compromise it too much). Different projects dictate different settings. […] One reason not to use a very large cache (just because you can) is that popular implementations don’t actually delete any records from there; even the expired sessions are still in the cache and can be recovered. The only way to really delete is to overwrite them with a new session. […] These days I’d probably reduce the maximum session duration to 4 hours, down from 24 hours currently in my book. But that’s largely based on a gut feeling that 4 hours is enough for you to reap the performance benefits, and using a shorter lifetime is always better.
Ilya Grigorik (web performance engineer at Google) says about SSL buffers:
1400 bytes (actually, it should probably be even a bit lower) is the recommended setting for interactive traffic where you want to avoid any unnecessary delays due to packet loss/jitter of fragments of the TLS record. However, packing each TLS record into dedicated packet does add some framing overhead and you probably want larger record sizes if you’re streaming larger (and less latency sensitive) data. 4K is an in between value that’s «reasonable» but not great for either case. For smaller records, we should also reserve space for various TCP options (timestamps, SACKs. up to 40 bytes), and account for TLS record overhead (another 20-60 bytes on average, depending on the negotiated ciphersuite). All in all: 1500 — 40 (IP) — 20 (TCP) — 40 (TCP options) — TLS overhead (60-100) ~= 1300 bytes. If you inspect records emitted by Google servers, you’ll see that they carry ~1300 bytes of application data due to the math above.
The other recommendation (it seems to me that the authors are Leif Hedstrom, Thomas Jackson, and Brian Geffon) is to use the values below:

- smaller TLS record size: MTU/MSS (1500) minus the IP (40 bytes) and TCP (20 bytes) overheads: 1500 - 40 - 20 = 1440 bytes
- larger TLS record size: the maximum TLS record size, which is 16383 (2^14 - 1) bytes
Example
```
ssl_session_cache shared:NGX_SSL_CACHE:10m;
ssl_session_timeout 4h;
ssl_session_tickets off;
ssl_buffer_size 1400;
```
External resources
- SSL Session (cache)
- Speeding up TLS: enabling session reuse
- SSL Session Caching (in nginx)
- ssl_session_cache in Nginx and the ab benchmark
- Improving OpenSSL Performance
🔰 Enable OCSP Stapling
Rationale
Unlike plain OCSP, with the OCSP Stapling mechanism the user's browser does not contact the issuer of the certificate; instead, the application server does this at regular intervals.

The OCSP Stapling extension is configured for better performance (it is designed to reduce the cost of an OCSP validation; it improves browser communication performance with the application server and allows retrieving information about the validity of the certificate at the time of accessing the application), and user privacy is still maintained. OCSP Stapling is an optimization, and nothing breaks if it doesn't work.

The use of OCSP without the OCSP Stapling extension is associated with an increased risk of losing user privacy, as well as an increased risk of a negative impact on the availability of applications, due to the inability to verify the validity of the certificate.

OCSP Stapling delivers the OCSP response in the TLS Certificate Status Request (RFC 6066 — Certificate Status Request) extension («stapling»). In this case, the server sends the OCSP response as part of the TLS extension, so the client does not have to check it at the OCSP URL (this saves revocation-checking time for the client).
NGINX provides several options to keep in mind. For example, it generates the list from the file of certificates pointed to by ssl_trusted_certificate (the list of these certificates will not be sent to clients). You need to send this list or switch off ssl_verify_client. This step is optional when the full certificate chain (only the intermediate certs, without the Root CA, and it also must not include the site certificate) was already provided with the ssl_certificate statement. In case just the certificate is being used (not the parts of your CA), then ssl_trusted_certificate is needed.

I found on the web that both types of chains (Root CA + intermediate certs, or only intermediate certs) will work as the ssl_trusted_certificate for the purpose of OCSP verification. The root is not recommended and not needed in ssl_certificate. If you use Let's Encrypt, you don't need to add the Root CA (to ssl_trusted_certificate), because the OCSP response is signed by the intermediate certificate itself. I think that the safest way is to include all corresponding Root and Intermediate CA certificates in ssl_trusted_certificate.
I always use the most stable DNS resolvers like Google's 8.8.8.8, Quad9's 9.9.9.9, CloudFlare's 1.1.1.1, or OpenDNS's 208.67.222.222 (of course, you can resolve domains internally and externally with Bind9 or whatever else). If a resolver line isn't added, or your NGINX does not have external access, the resolver defaults to the server's DNS default.

You should know that a too-short resolver timeout (the default is 30 seconds) can be another reason for OCSP Stapling to fail (temporarily). If the NGINX resolver_timeout directive is set to very low values (< 5 seconds), log messages like this can appear: "[...] ssl_stapling" ignored, host not found in OCSP responder [...].
Also bear in mind that NGINX lazy-loads OCSP responses. So, the first request will not have a stapled response, but subsequent requests will. This is because NGINX does not prefetch OCSP responses at server startup (or after a reload).
Important information from the NGINX documentation:

For the OCSP stapling to work, the certificate of the server certificate issuer should be known. If the ssl_certificate file does not contain intermediate certificates, the certificate of the server certificate issuer should be present in the ssl_trusted_certificate file.

To prevent DNS spoofing (resolver), it is recommended configuring DNS servers in a properly secured trusted local network.
Example
```
# Turn on OCSP Stapling:
ssl_stapling on;

# Enable the server to check OCSP:
ssl_stapling_verify on;

# Point to a trusted CA (the company that signed our CSR) certificate chain
# (intermediate certificates in order from top to bottom) file, but only
# if NGINX cannot find the top-level certificates from ssl_certificate:
ssl_trusted_certificate /etc/nginx/ssl/inter-CA-chain.pem;

# For resolution of the OCSP responder hostname, set resolvers and their cache time:
resolver 1.1.1.1 8.8.8.8 valid=300s;
resolver_timeout 5s;
```
To test OCSP Stapling:
```
openssl s_client -connect example.com:443 -servername example.com -tlsextdebug -status

echo | openssl s_client -connect example.com:443 -servername example.com -status 2> /dev/null | grep -A 17 'OCSP response:'
```
External resources
- RFC 2560 — X.509 Internet Public Key Infrastructure Online Certificate Status Protocol — OCSP
- OCSP Stapling on nginx
- OCSP Stapling: Performance
- OCSP Stapling; SSL with added speed and privacy
- High-reliability OCSP stapling and why it matters
- OCSP Stapling: How CloudFlare Just Made SSL 30% Faster
- Is the web ready for OCSP Must-Staple?
- The case for «OCSP Must-Staple»
- Page Load Optimization: OCSP Stapling
- ImperialViolet — No, don’t enable revocation checking
- The Problem with OCSP Stapling and Must Staple and why Certificate Revocation is still broken
- Damn it, nginx! stapling is busted
- Priming the OCSP cache in Nginx
- How to make OCSP stapling on nginx work
- HAProxy OCSP stapling
- DNS Resolvers Performance compared: CloudFlare x Google x Quad9 x OpenDNS
- OCSP Validation with OpenSSL
🔰 Use exact names in a server_name
directive if possible
Rationale
Exact names, wildcard names starting with an asterisk, and wildcard names ending with an asterisk are stored in three hash tables bound to the listen ports.
The exact names hash table is searched first. So if the most frequently requested names of a server are example.com and www.example.com, it is more efficient to define them explicitly.

If the exact name is not found, the hash table with wildcard names starting with an asterisk is searched. If the name is not found there, the hash table with wildcard names ending with an asterisk is searched. Searching a wildcard names hash table is slower than searching the exact names hash table, because names are searched by domain parts.
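Incidentally, if you define a very large number of exact names, the hash tables may need tuning in the http context (the values below are illustrative, not recommendations):

```
http {

  server_names_hash_max_size 1024;
  server_names_hash_bucket_size 64;

  ...

}
```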
When searching for a virtual server by name, if the name matches more than one of the specified variants, e.g. both a wildcard name and a regular expression match, the first matching variant will be chosen, in the following order of precedence:

- the exact name
- the longest wildcard name starting with an asterisk, e.g. *.example.org
- the longest wildcard name ending with an asterisk, e.g. mail.*
- the first matching regular expression (in order of appearance in the configuration file)
Regular expressions are tested sequentially and therefore are the slowest method and are non-scalable. For these reasons, it is better to use exact names where possible.
From the official documentation:

A wildcard name may contain an asterisk only on the name's start or end, and only on a dot border. The names www.*.example.org and w*.example.org are invalid. […] A special wildcard name in the form .example.org can be used to match both the exact name example.org and the wildcard name *.example.org.
The name *.example.org matches not only www.example.org but www.sub.example.org as well.
To use a regular expression, the server name must start with the tilde character. […] otherwise it will be treated as an exact name, or, if the expression contains an asterisk, as a wildcard name (and most likely as an invalid one). Do not forget to set ^ and $ anchors. They are not required syntactically, but logically. Also note that domain name dots should be escaped with a backslash. A regular expression containing the characters { and } should be quoted.
Example
Not recommended configuration:
```
server {

  listen 192.168.252.10:80;

  # From the official documentation: "Searching wildcard names hash table is slower than searching
  # exact names hash table because names are searched by domain parts. Note that the special
  # wildcard form '.example.org' is stored in a wildcard names hash table and not in an exact
  # names hash table.":
  server_name .example.org;

  ...

}
```
Recommended configuration:
```
# It is more efficient to define them explicitly:
server {

  listen 192.168.252.10:80;

  # .example.org is equivalent to example.org + *.example.org:
  server_name example.org www.example.org *.example.org;

  ...

}
```
External resources
- Server names
- Server Naming Conventions and Best Practices
- Server/Device Naming [pdf]
- Handle incoming connections (from this handbook)
🔰 Avoid checking server_name with the if directive
Rationale
When NGINX receives a request, no matter which subdomain is being requested, be it www.example.com or just plain example.com, this if directive is always evaluated, since you're asking NGINX to check the Host header for every request. It can be extremely inefficient.

Instead, use two server directives like in the example below. This approach decreases NGINX's processing requirements.
The problem is not just the $server_name check. Keep in mind also other variables, e.g. $scheme. In some cases (but not always), it is better to add an additional block than to use the if directive.
On the other hand, the official documentation says:

Directive if has problems when used in location context; in some cases it doesn't do what you expect but something completely different instead. In some cases it even segfaults. It's generally a good idea to avoid it if possible.
Example
Not recommended configuration:
```
server {

  server_name example.com www.example.com;

  if ($host = www.example.com) {

    return 301 https://example.com$request_uri;

  }

  server_name example.com;

  ...

}
```
Recommended configuration:
```
server {

  listen 192.168.252.10:80;

  server_name www.example.com;

  return 301 $scheme://example.com$request_uri;

  # If you force your web traffic to use HTTPS:
  # return 301 https://example.com$request_uri;

  ...

}

server {

  listen 192.168.252.10:80;

  server_name example.com;

  ...

}
```
External resources
- If Is Evil
- if, break, and set (from this handbook)
🔰 Use $request_uri
to avoid using regular expressions
Rationale
With the built-in $request_uri we can effectively avoid doing any capturing or matching at all. By default, the regex is costly and will slow down performance.

This rule addresses passing the URL unchanged to a new host; a plain return is more efficient, as it just passes through the existing URI.

The value of $request_uri is always the original URI (the full original request URI with arguments), as received from the client, and is not subject to any normalisations, in contrast to the $uri variable.
Use $request_uri in a map directive if you need to match the URI and its query string.
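A minimal sketch of that pattern (the variable name and URLs are made up): map an exact URI-plus-query-string to a target and redirect only when a match was found:

```
map $request_uri $redirect_uri {

  default "";

  # Exact match, including the query string:
  "/old-page?utm=newsletter" "/new-page";

}

server {

  ...

  if ($redirect_uri) {

    return 301 $redirect_uri;

  }

}
```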
Careless use of $request_uri can lead to many strange behaviours. For example, using $request_uri in the wrong place can cause URL-encoded characters to become doubly encoded. So most of the time you would use $uri, because it is normalised.
I think the best explanation comes from the official documentation:
Don’t feel bad here, it’s easy to get confused with regular expressions. In fact, it’s so easy to do that we should make an effort to keep them neat and clean.
Example
Not recommended configuration:
```
# 1)
rewrite ^/(.*)$ https://example.com/$1 permanent;

# 2)
rewrite ^ https://example.com$request_uri permanent;
```
Recommended configuration:
```
return 301 https://example.com$request_uri;
```
External resources
- Pitfalls and Common Mistakes — Taxing Rewrites
- Module ngx_http_proxy_module — proxy_pass
- uri vs request_uri (from this handbook)
🔰 Use try_files
directive to ensure a file exists
Rationale
try_files is definitely a very useful thing. You can use the try_files directive to check that a file exists, in a specified order.

You should use try_files instead of the if directive. It's definitely a better way than using if for this action, because the if directive is extremely inefficient: it is evaluated every time for every request.
The advantage of using try_files is that the behavior switches immediately with one command. I think the code is also more readable.
try_files allows you:

- to check if the file exists from a predefined list
- to check if the file exists from a specified directory
- to use an internal redirect if none of the files are found
Example
Not recommended configuration:
```
server {

  ...

  root /var/www/example.com;

  location /images {

    if (-f $request_filename) {

      expires 30d;
      break;

    }

    ...

  }

}
```
Recommended configuration:
```
server {

  ...

  root /var/www/example.com;

  location /images {

    try_files $uri =404;

    ...

  }

}
```
External resources
- Creating NGINX Rewrite Rules
- Pitfalls and Common Mistakes
- Serving Static Content
- Serve files with nginx conditionally
- try_files directive (from this handbook)
🔰 Use return
directive instead of rewrite
for redirects
Rationale
For me, the ability to rewrite URLs in NGINX is an extremely powerful and important feature. Technically, you can use both options, but in my opinion you should use server blocks and return statements, as they are way simpler and faster than evaluating RegEx, e.g. via location blocks.

With rewrite, NGINX has to process the expression and start a search. The return directive stops processing (it directly stops execution) and returns the specified code to the client. This is preferred in any context.
If you have a scenario where you need to validate the URL with a regex or need to capture elements in the original URL (that are obviously not in a corresponding NGINX variable), then you should use
rewrite
.
Example
Not recommended configuration:
```
server {

  ...

  location / {

    try_files $uri $uri/ =404;

    rewrite ^/(.*)$ https://example.com/$1 permanent;

  }

  ...

}
```
Recommended configuration:
```
server {

  ...

  location / {

    try_files $uri $uri/ =404;

    return 301 https://example.com$request_uri;

  }

  ...

}
```
External resources
- NGINX — rewrite vs redirect
- If Is Evil
- rewrite vs return (from this handbook)
- Use return directive for URL redirection (301, 302) — Base Rules — P2 (from this handbook)
🔰 Enable PCRE JIT to speed up processing of regular expressions
Rationale
This enables the use of JIT for regular expressions, to speed up their processing. Checking rules can be time-consuming, especially with complex regular expression (regex) conditions.
By compiling NGINX with the PCRE library, you can perform complex manipulations with your
location
blocks and use the powerfulrewrite
directives.
The PCRE JIT rule-matching engine can speed up the processing of regular expressions significantly. NGINX with pcre_jit is magnitudes faster than without it. This option can improve performance; however, in some cases pcre_jit may have a negative effect. So, before enabling it, I recommend you read this great document: PCRE Performance Project.

If you try to use pcre_jit on; without JIT available, or if NGINX was compiled with JIT available but the currently loaded PCRE library does not support JIT, NGINX will warn you during configuration parsing.

The --with-pcre-jit option is only needed when you compile the PCRE library using the NGINX configure script (./configure --with-pcre=). When using a system PCRE library, whether or not JIT is supported depends on how the library was compiled.
If you don't pass --with-pcre-jit, the NGINX configure script is smart enough to detect and enable it automatically (see here). So, if your PCRE library is recent enough, a simple ./configure with no switches will compile NGINX with pcre_jit enabled.
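One way to check how your binary was built (this only shows the configure flag; with a system PCRE library, JIT support depends on how that library was compiled):

```
nginx -V 2>&1 | grep -o -- '--with-pcre-jit'
```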
From the NGINX documentation:

The JIT is available in PCRE libraries starting from version 8.20 built with the --enable-jit configuration parameter. When the PCRE library is built with nginx (--with-pcre=), the JIT support is enabled via the --with-pcre-jit configuration parameter.
Example
```
# In the global context:
pcre_jit on;
```
External resources
- Core functionality — pcre jit
- Performance comparison of regular expression engines
- Building OpenResty with PCRE JIT
🔰 Activate the cache for connections to upstream servers
Rationale
The idea behind keepalive is to address the latency of establishing TCP connections over high-latency networks. This connection cache is useful in situations where NGINX has to constantly maintain a certain number of open connections to an upstream server.

Keep-Alive connections can have a major impact on performance by reducing the CPU and network overhead needed to open and close connections. Enabling HTTP keepalive to upstream servers in NGINX reduces latency, improves performance, and reduces the possibility that NGINX runs out of ephemeral ports.
This can greatly reduce the number of new TCP connections, as NGINX can now reuse its existing connections (keepalive) per upstream.

If your upstream server supports Keep-Alive in its config, NGINX will reuse existing TCP connections without creating new ones. This can greatly reduce the number of sockets in the TIME_WAIT state on busy servers (less work for the OS to establish new connections, fewer packets on the network).
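To see whether it helps, you can roughly count TIME_WAIT sockets on the proxy host before and after enabling keepalive (a sketch; requires iproute2):

```
ss -tan state time-wait | wc -l
```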
Keep-Alive connections are only supported as of HTTP/1.1.
Example
```
# Upstream context:
upstream backend {

  # Sets the maximum number of idle keepalive connections to upstream servers
  # that are preserved in the cache of each worker process:
  keepalive 16;

}

# Server/location contexts:
server {

  ...

  location / {

    # By default NGINX only talks HTTP/1.0 to the upstream;
    # keepalive is only enabled in HTTP/1.1:
    proxy_http_version 1.1;

    # Remove the Connection header if the client sends it;
    # it could be "close" to close a keepalive connection:
    proxy_set_header Connection "";

    ...

  }

}
```
External resources
- NGINX keeps sending requests to offline upstream
- HTTP Keep-Alive connections (from this handbook)
🔰 Make an exact location match to speed up the selection process
Rationale
Exact location matches are often used to speed up the selection process by immediately ending the execution of the algorithm.

Regexes, when present, take precedence over simple URI matching and can add significant computational overhead depending on their complexity.

Using the = modifier, it is possible to define an exact match of URI and location. It is very fast to process and saves a significant amount of CPU cycles.

If an exact match is found, the search terminates. For example, if a / request happens frequently, defining location = / will speed up the processing of these requests, as the search terminates right after the first comparison. Such a location obviously cannot contain nested locations.
Example
```
# Matches the query / only and stops searching:
location = / {

  ...

}

# Matches the query /v9 only and stops searching:
location = /v9 {

  ...

}

...

# Matches any query, due to the fact that all queries begin with /,
# but regular expressions and any longer conventional blocks will be matched first:
location / {

  ...

}
```
External resources
- Untangling the nginx location block matching algorithm
🔰 Use limit_conn
to improve limiting the download speed
Rationale
NGINX provides two directives for limiting the download speed:

- limit_rate_after — sets the amount of data transferred before the limit_rate directive takes effect
- limit_rate — allows you to limit the transfer rate of individual client connections (past exceeding limit_rate_after)

This solution limits the NGINX download speed per connection, so if one user opens, e.g., multiple video files, they will be able to download X multiplied by the number of connections they opened to the video files.
To prevent this situation, use the limit_conn_zone and limit_conn directives.
Example
```
# Create a limit connection zone:
limit_conn_zone $binary_remote_addr zone=conn_for_remote_addr:1m;

# Add rules for limiting the download speed:
limit_rate_after 1m;  # run at maximum speed for the first 1 megabyte
limit_rate 250k;      # and set the rate limit after 1 megabyte

# Enable the queue:
location /videos {

  # Max amount of data by one client: 10 megabytes (limit_rate_after * 10):
  limit_conn conn_for_remote_addr 10;

  ...
```
External resources
- How to Limit Nginx download Speed
Hardening
Go back to the Table of Contents or What’s next? section.
📌 In this chapter I will talk about some of the NGINX hardening approaches and security standards.
- Base Rules
- Debugging
- Performance
- ≡ Hardening (31)
- Always keep NGINX up-to-date
- Run as an unprivileged user
- Disable unnecessary modules
- Protect sensitive resources
- Take care about your ACL rules
- Hide Nginx version number
- Hide Nginx server signature
- Hide upstream proxy headers
- Remove support for legacy and risky HTTP request headers
- Use only the latest supported OpenSSL version
- Force all connections over TLS
- Use min. 2048-bit for RSA and 256-bit for ECC
- Keep only TLS 1.3 and TLS 1.2
- Use only strong ciphers
- Use more secure ECDH Curve
- Use strong Key Exchange with Perfect Forward Secrecy
- Prevent Replay Attacks on Zero Round-Trip Time
- Defend against the BEAST attack
- Mitigation of CRIME/BREACH attacks
- Enable HTTP Strict Transport Security
- Reduce XSS risks (Content-Security-Policy)
- Control the behaviour of the Referer header (Referrer-Policy)
- Provide clickjacking protection (X-Frame-Options)
- Prevent some categories of XSS attacks (X-XSS-Protection)
- Prevent Sniff Mimetype middleware (X-Content-Type-Options)
- Deny the use of browser features (Feature-Policy)
- Reject unsafe HTTP methods
- Prevent caching of sensitive data
- Limit concurrent connections
- Control Buffer Overflow attacks
- Mitigating Slow HTTP DoS attacks (Closing Slow Connections)
- Reverse Proxy
- Load Balancing
- Others
🔰 Always keep NGINX up-to-date
Rationale
NGINX is very secure and stable, but vulnerabilities in the main binary itself do pop up from time to time. That is the main reason to keep NGINX up-to-date as diligently as you can.
When planning the NGINX update/upgrade process, the best way is simply to install the newly released version. But for me, the most common way to handle NGINX updates is to wait a few weeks after the stable release (and to read community reports of any issues identified after the release of the new NGINX version).
Most modern GNU/Linux distros will not push the latest version of NGINX into their default package lists, so you may want to consider installing it from source.
Before updating/upgrading NGINX, remember to:

- do it in a testing environment first
- make a backup of your current configuration before updating
Example
```
# RedHat/CentOS
yum install <pkgname>

# Debian/Ubuntu
apt-get install <pkgname>

# FreeBSD/OpenBSD
pkg -f install <pkgname>
```
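Before running the commands above on production, a minimal pre-upgrade checklist might look like this (a sketch; paths are illustrative):

```
# Back up the current configuration:
cp -R /etc/nginx /etc/nginx.backup-$(date +%F)

# Record the old version and validate the config:
nginx -v
nginx -t

# After installing the new package, validate again and reload gracefully:
nginx -t && systemctl reload nginx
```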
External resources
- Installing from prebuilt packages (from this handbook)
- Installing from source (from this handbook)
🔰 Run as an unprivileged user
Rationale
It is an important general principle that programs should have the minimal amount of privileges necessary to do their job. That way, if a program is compromised, the damage it can do is limited.
There is no real difference in security just from changing the process owner name. On the other hand, in security, the principle of least privilege states that an entity should be given no more permission than necessary to accomplish its goals within a given system. This way, only the master process runs as root.
NGINX meets these requirements and it is the default behaviour, but remember to check it.
From Secure Programming HOWTO — 7.4. Minimize Privileges article:
The most extreme example is to simply not write a secure program at all — if this can be done, it usually should be. For example, don't make your program `setuid` or `setgid` if you can; just make it an ordinary program, and require the administrator to log in as such before running it.
Example
```
# Edit/check nginx.conf:
user nginx;  # or 'www' for example; if group is omitted,
             # a group whose name equals that of user is used

# Check/set owner and group for the root directory:
chown -R root:root /etc/nginx

# Set owner and group for the app directory:
chown -R nginx:nginx /var/www/example.com
```
External resources
- Why does nginx starts process as root?
- How and why Linux daemons drop privileges
- POS36-C. Observe correct revocation order while relinquishing privileges
🔰 Disable unnecessary modules
Rationale
It is recommended to disable any modules which are not required, as this will minimise the risk of potential attacks by limiting the operations allowed by the web server. I also recommend only compiling and running signed and tested modules on your production environments.
Disable unneeded modules in order to reduce the memory utilized and improve performance. Modules that are not needed just make loading times longer.
The best way to leave out unused modules is to use the `configure` options during installation. If a module is statically linked into the binary, you will have to re-compile NGINX to remove it.
Use only high-quality modules, and remember:
Unfortunately, many third‑party modules use blocking calls, and users (and sometimes even the developers of the modules) aren’t aware of the drawbacks. Blocking operations can ruin NGINX performance and must be avoided at all costs.
Example
```
# 1a) Check which modules can be turned on or off while compiling:
./configure --help | less

# 1b) Turn a module off during installation:
./configure --without-http_autoindex_module

# 2) Comment out modules in the configuration file, e.g. modules.conf:
# load_module /usr/share/nginx/modules/ndk_http_module.so;
# load_module /usr/share/nginx/modules/ngx_http_auth_pam_module.so;
# load_module /usr/share/nginx/modules/ngx_http_cache_purge_module.so;
# load_module /usr/share/nginx/modules/ngx_http_dav_ext_module.so;
load_module /usr/share/nginx/modules/ngx_http_echo_module.so;
# load_module /usr/share/nginx/modules/ngx_http_fancyindex_module.so;
load_module /usr/share/nginx/modules/ngx_http_geoip_module.so;
load_module /usr/share/nginx/modules/ngx_http_headers_more_filter_module.so;
# load_module /usr/share/nginx/modules/ngx_http_image_filter_module.so;
# load_module /usr/share/nginx/modules/ngx_http_lua_module.so;
load_module /usr/share/nginx/modules/ngx_http_perl_module.so;
# load_module /usr/share/nginx/modules/ngx_mail_module.so;
# load_module /usr/share/nginx/modules/ngx_nchan_module.so;
# load_module /usr/share/nginx/modules/ngx_stream_module.so;
```
External resources
- NGINX 3rd Party Modules
- nginx-modules
- Emiller’s Guide To Nginx Module Development
🔰 Protect sensitive resources
Rationale
Hidden directories and files should never be web accessible — sometimes critical data are published during application deploys. If you use a version control system, you should definitely deny access (giving less information to attackers) to critical hidden directories/files such as `.git` or `.svn` to prevent exposing the source code of your application.

Sensitive resources contain items that abusers can use to fully recreate the source code used by the site and look for bugs, vulnerabilities, and exposed passwords.
As for the denying method:
In my opinion, a return 403 according to the RFC 2616 — 403 Forbidden [IETF] suggests (or even a 404, for purposes of no information disclosure) is less error prone if you know the resource should under no circumstances be accessed via http, even if «authorized» in a general context.
Note also:
If you use locations with regular expressions, NGINX applies them in the order of their appearance in the configuration file. You can also use the `^~` modifier, which makes the prefix location block take precedence over any regular expression location block at the same level.
NGINX processes requests in phases. The `return` directive comes from the rewrite module, and `deny` comes from the access module. The rewrite module is processed in the `NGX_HTTP_REWRITE_PHASE` phase (for `return` in a `location` context), while the access module is processed in the `NGX_HTTP_ACCESS_PHASE` phase. Since the rewrite phase (where `return` belongs) happens before the access phase (where `deny` works), `return` stops request processing and returns 301 in the rewrite phase.
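To make the phase ordering concrete, here is a minimal sketch (the `/secret/` location is hypothetical): when both directives appear in one location, the rewrite-phase `return` is what actually generates the response.

```
location /secret/ {

  # Access phase (NGX_HTTP_ACCESS_PHASE) - evaluated later:
  deny all;

  # Rewrite phase (NGX_HTTP_REWRITE_PHASE) - runs first,
  # so clients get the redirect, never the 403:
  return 301 /;

}
```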
`deny all` will have the same consequence but leaves more room for slip-ups. The issue is illustrated in this answer, which suggests not using the `satisfy` + `allow` + `deny` combination at the `server { ... }` level because of inheritance.
On the other hand, according to the NGINX documentation: the `ngx_http_access_module` module allows limiting access to certain client addresses. More specifically, you can't restrict access using another module (`return` is more often used when you want to return other codes, not to block access).
Example
Not recommended configuration:
```
if ($request_uri ~ "/\.git") {

  return 403;

}
```
Recommended configuration:
```
# 1) Catch only file names (without file extensions):
# Example: /foo/bar/.git but not /foo/bar/file.git
location ~ /\.git {

  return 403;

}

# 2) Catch file names and file extensions:
# Example: /foo/bar/.git and /foo/bar/file.git
location ~* ^.*(\.(?:git|svn|htaccess))$ {

  deny all;

}
```
Most recommended configuration:
```
# Catch all dot directories/files except .well-known (without file extensions):
# Example: /foo/bar/.git but not /foo/bar/file.git
location ~ /\.(?!well-known/) {

  deny all;
  access_log /var/log/nginx/hidden-files-access.log main;
  error_log /var/log/nginx/hidden-files-error.log warn;

}
```
Look also at files with the following extensions:
```
# Think also about the following rules (I haven't tested them but they look interesting).
# They come from:
# - https://github.com/h5bp/server-configs-nginx/blob/master/h5bp/location/security_file_access.conf
location ~* (?:#.*#|\.(?:bak|conf|dist|fla|in[ci]|log|orig|psd|sh|sql|sw[op])|~)$ {

  deny all;

}

# - https://github.com/getgrav/grav/issues/1625
location ~ /(LICENSE\.txt|composer\.lock|composer\.json|nginx\.conf|web\.config|htaccess\.txt|\.htaccess) {

  deny all;

}

# Deny running scripts inside core system directories:
# - https://github.com/getgrav/grav/issues/1625
location ~* /(system|vendor)/.*\.(txt|xml|md|html|yaml|yml|php|pl|py|cgi|twig|sh|bat)$ {

  return 418;

}

# Deny running scripts inside user directory:
# - https://github.com/getgrav/grav/issues/1625
location ~* /user/.*\.(txt|md|yaml|yml|php|pl|py|cgi|twig|sh|bat)$ {

  return 418;

}
```
Based on the above (tested, I use this):
```
# Catch file names and file extensions:
# Example: /foo/bar/.git and /foo/bar/file.git
location ~* ^.*(\.(?:git|svn|hg|bak|bckp|save|old|orig|original|test|conf|cfg|dist|in[ci]|log|sql|mdb|sw[op]|htaccess|php#|php~|php_bak|aspx?|tpl|sh|bash|bin|exe|dll|jsp|out|cache|))$ {

  # Use also rate limiting; in the server context:
  #   limit_req_zone $binary_remote_addr zone=per_ip_5r_s:5m rate=5r/s;
  limit_req zone=per_ip_5r_s;

  deny all;
  access_log /var/log/nginx/restricted-files-access.log main;
  error_log /var/log/nginx/restricted-files-error.log warn;

}
```
External resources
- Hidden directories and files as a source of sensitive information about web application
- 1% of CMS-Powered Sites Expose Their Database Passwords
- RFC 5785 — Defining Well-Known Uniform Resource Identifiers (URIs) [IETF]
🔰 Take care about your ACL rules
Rationale
When planning for access control, consider several access options. NGINX provides the `ngx_http_access_module`, `ngx_http_geo_module`, `ngx_http_map_module` and `ngx_http_auth_basic_module` modules for allow and deny permissions. Each of them can secure sensitive files and directories.
You should always test your rules:
- check all used directives and their occurrence/priorities at all levels of request processing
- send testing requests to validate allowing or denying users access to web resources, also from external/blacklisted IPs (see the sketch after this list)
- send testing requests to check and verify HTTP response codes for all protected resources (see: response codes decision diagram)
- less is more, you should minimize any user’s access to the critical resources
- add only really required IP addresses and check their owner in the whois database
- regularly audit your access control rules to ensure they are current
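A quick sketch of such a test, assuming a hypothetical protected path `/admin/`; the expected response is 403 (or 404 if you prefer not to disclose the resource):

```
# Check the HTTP response code of a protected resource from an external host:
curl -o /dev/null -s -w "%{http_code}\n" https://example.com/admin/
```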
If you use the `*ACCESS_PHASE` (e.g. `allow/deny` directives), remember that NGINX processes requests in phases, and the `rewrite` phase (where `return` belongs) goes before the `access` phase (where `deny` works). See the Allow and deny chapter to learn more. This is important because it may break your security layers.
However, it is not recommended to use `if` statements; the use of regular expressions may be a bit more flexible (for more information see this).
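For illustration, a minimal allow/deny sketch (the addresses and the `/admin/` path are placeholders; the most specific rules go first, followed by a catch-all `deny`):

```
location /admin/ {

  allow 192.168.1.0/24;  # office network
  allow 10.0.0.2;        # bastion host
  deny  all;             # everyone else receives 403

}
```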
Example
- Restricting access with basic authentication
- Restricting access with client certificate
- Restricting access by geographical location
- Blocking/allowing IP addresses
- Limiting referrer spam
- Limiting the rate of requests with burst mode
- Limiting the rate of requests with burst mode and nodelay
- Limiting the rate of requests per IP with geo and map
- Limiting the number of connections
External resources
- Fastly — About ACLs
- Restrict allowed HTTP methods in Nginx
- Allow and deny (from this handbook)
- Protect sensitive resources — Hardening — P1 (from this handbook)
🔰 Hide Nginx version number
Rationale
Disclosing the version of NGINX running can be undesirable, particularly in environments sensitive to information disclosure. NGINX shows the version number by default in error pages and in the headers of HTTP responses.
This information can be used as a starting point for attackers who know of vulnerabilities associated with specific versions, and it might help them gain a greater understanding of the systems in use and potentially develop further attacks targeted at the specific version of NGINX. For example, Shodan provides a widely used database of this info (although for mass attacks it is often more efficient to simply try an exploit against random servers than to ask each one for its version first).
Hiding your version information will not stop an attack from happening, but it will make you less of a target if attackers are looking for a specific version of hardware or software. I treat the data broadcast by the HTTP server as personal information.
Security by obscurity doesn't mean you're safe, but it does slow people down sometimes, and that's exactly what's needed with zero-day vulnerabilities.
Look also at the most excellent comment about this (by specializt):
Disregarding important security factors like «no version numbers» and probably even «no server vendor name» entirely is just … a beginners mistake. Of course security through obscurity does nothing for your security itself but it sure as hell will at least protect against the most mundane, simplistic attack vectors — security through obscurity is a necessary step, it may be the first one and should never be the last security measure - skipping it completely is a very bad mistake, even the most secure webservers can be cracked if a version-specific attack vector is known.
Example
```
# This disables emitting the NGINX version on error pages
# and in the "Server" response header field:
server_tokens off;
```
External resources
- Remove Version from Server Header Banner in nginx
- Reduce or remove server headers
- Fingerprint Web Server (OTG-INFO-002)
🔰 Hide Nginx server signature
Rationale
The `Server` response-header field contains information about the software used by the origin server to handle the request. This string is used by places like Alexa and Netcraft to collect statistics about how many web servers, and of what type, are live on the Internet.
One of the easiest first steps to undertake is to prevent the web server from showing the software and technologies it uses via the `Server` header. Certainly, there are several reasons why you would want to change the server header: it could be security, it could be redundant systems, load balancers, etc. An attacker collects all available information about the application and its environment, and information about the technologies used and the software versions is extremely valuable.
And in my opinion, there is no real reason or need to show this much information about your server. It is easy to look up particular vulnerabilities once you know the version number. However, it’s not information you need to give out, so I am generally in favour of removing it, where this can be accomplished with minimal effort.
You should compile NGINX from sources with `ngx_headers_more` to use the `more_set_headers` directive, or use the nginx-remove-server-header.patch.
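For reference, a sketch of building NGINX with the module (the source path is illustrative):

```
# Compile the module in statically:
./configure --add-module=/usr/local/src/headers-more-nginx-module

# Or build it as a dynamic module (NGINX 1.9.11+) and load it in nginx.conf:
./configure --add-dynamic-module=/usr/local/src/headers-more-nginx-module
# load_module modules/ngx_http_headers_more_filter_module.so;
```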
Maybe it’s a very restrictive approach but the guidelines from RFC 2616 — Personal Information are always very helpful to me:
History shows that errors in this area often create serious security and/or privacy problems and generate highly adverse publicity for the implementor’s company. […] Like any generic data transfer protocol, HTTP cannot regulate the content of the data that is transferred, nor is there any a priori method of determining the sensitivity of any particular piece of information within the context of any given request. Therefore, applications SHOULD supply as much control over this information as possible to the provider of that information. Four header fields are worth special mention in this context:
`Server`, `Via`, `Referer` and `From`.
The Official Apache Documentation (yep, it's not a joke; in my opinion it's an interesting point of view) says:
Setting ServerTokens to less than minimal is not recommended because it makes it more difficult to debug interoperational problems. Also note that disabling the Server: header does nothing at all to make your server more secure. The idea of «security through obscurity» is a myth and leads to a false sense of safety.
Example
Recommended configuration:
```
http {

  more_set_headers "Server: Unknown";  # or whatever else, e.g. 'WOULDN'T YOU LIKE TO KNOW!'

  ...
```
Most recommended configuration:
```
http {

  more_clear_headers 'Server';

  ...
```
You can also use Lua module:
```
http {

  header_filter_by_lua_block {

    ngx.header["Server"] = nil

  }

  ...
```
External resources
- Shhh… don’t let your response headers talk too loudly
- How to change (hide) the Nginx Server Signature?
- Configuring Your Web Server to Not Disclose Its Identity
🔰 Hide upstream proxy headers
Rationale
Securing a server goes far beyond not showing what's running, but I think «less is more» applies here as well.
When NGINX is used to proxy requests to an upstream server (such as a PHP-FPM instance), it can be beneficial to hide certain headers sent in the upstream response (e.g. the version of PHP running).
You should use `proxy_hide_header` (or the Lua module) to hide/remove headers coming from upstream servers so they are not returned by your NGINX reverse proxy (and, consequently, not passed on to the client).
Example
```
# Hide some standard response headers:
proxy_hide_header X-Powered-By;
proxy_hide_header X-AspNetMvc-Version;
proxy_hide_header X-AspNet-Version;
proxy_hide_header X-Drupal-Cache;

# Hide some Amazon S3 specific response headers:
proxy_hide_header X-Amz-Id-2;
proxy_hide_header X-Amz-Request-Id;

# Hide other risky response headers:
proxy_hide_header X-Runtime;
```
External resources
- Remove insecure http headers
- CRLF Injection and HTTP Response Splitting Vulnerability
- HTTP Response Splitting
- HTTP response header injection
- X-Runtime header related attacks
- Set the HTTP headers with add_header and proxy_*_header directives properly — Base Rules — P1 (from this handbook)
🔰 Remove support for legacy and risky HTTP request headers
Rationale
In my opinion, support for these headers is not a vulnerability in itself, but more like a misconfiguration which in some circumstances could lead to a vulnerability.
It is good practice to remove support for risky HTTP request headers entirely (or to strip/normalize their values). None of them should ever reach your application or pass through a proxy server with their contents unexamined.
The ability to use the `X-Original-URL` or `X-Rewrite-URL` headers can have serious consequences. These headers allow a user to request one URL but have your app (e.g. one that uses PHP/Symfony) process a different one, which can bypass restrictions on higher-level caches and web servers, for example a deny rule (`deny all; return 403;`) set on the proxy for a location such as `/admin`.
If one or more of your backends uses the contents of the `X-Host`, `X-Forwarded-Host`, `X-Forwarded-Server`, `X-Rewrite-Url` or `X-Original-Url` HTTP request headers to decide which of your users (or which security domain) receives an HTTP response, you may be impacted by this class of vulnerability. If you pass these headers to your backend, an attacker could potentially cause a response with arbitrary content to be stored in a victim's cache.
Look at the following explanation taken from PortSwigger Research — Practical Web Cache Poisoning:
This revealed the headers `X-Original-URL` and `X-Rewrite-URL` which override the request's path. I first noticed them affecting targets running Drupal, and digging through Drupal's code revealed that the support for this header comes from the popular PHP framework Symfony, which in turn took the code from Zend. The end result is that a huge number of PHP applications unwittingly support these headers. Before we try using these headers for cache poisoning, I should point out they're also great for bypassing WAFs and security rules […]
Example
```
# Remove risky request headers (the safest method):
proxy_set_header X-Original-URL "";
proxy_set_header X-Rewrite-URL "";
proxy_set_header X-Forwarded-Server "";
proxy_set_header X-Forwarded-Host "";
proxy_set_header X-Host "";

# Or consider setting the vulnerable headers to a known-safe value:
proxy_set_header X-Original-URL $request_uri;
proxy_set_header X-Rewrite-URL $original_uri;
proxy_set_header X-Forwarded-Host $host;
```
External resources
- CVE-2018-14773: Remove support for legacy and risky HTTP request headers
- Local File Inclusion Vulnerability in Concrete5 version 5.7.3.1
- PortSwigger Research — Practical Web Cache Poisoning
- Passing headers to the backend (from this handbook)
- Set the HTTP headers with add_header and proxy_*_header directives properly — Base Rules — P1 (from this handbook)
🔰 Use only the latest supported OpenSSL version
Rationale
Before you start, see the Release Strategy Policies and Changelog on the OpenSSL website. Criteria for choosing an OpenSSL version can vary, and it all depends on your use case.
The latest versions of the major OpenSSL branches are (this may change):

- the next version of OpenSSL will be 3.0.0
- version 1.1.1 will be supported until 2023-09-11 (LTS)
  - last minor version: 1.1.1d (September 10, 2019)
- version 1.1.0 will be supported until 2019-09-11
  - last minor version: 1.1.0k (May 28, 2019)
- version 1.0.2 will be supported until 2019-12-31 (LTS)
  - last minor version: 1.0.2s (May 28, 2019)
- any other versions are no longer supported
In my opinion, the only safe approach is to rely on an up-to-date, still supported and production-ready version of OpenSSL. What's more, I recommend sticking to the latest versions (e.g. 1.1.1 or 1.1.1d at this moment). So, make sure your OpenSSL library is updated to the latest available version, and encourage your clients to also use updated OpenSSL and software working with it.
You should know one thing before you start using OpenSSL 1.1.1: it has a different API than the current 1.0.2, so it's not just a simple flick of a switch. NGINX started supporting TLS 1.3 with the release of version 1.13.0, so by the time the OpenSSL devs released OpenSSL 1.1.1, NGINX already had support for the brand-new protocol version.
If your distribution's repositories do not ship the newest OpenSSL, you can go through the compilation process yourself (see the OpenSSL sub-section).
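A minimal sketch of that compilation process (the version and prefix are illustrative; always verify the tarball signature first):

```
wget https://www.openssl.org/source/openssl-1.1.1d.tar.gz
tar xzf openssl-1.1.1d.tar.gz && cd openssl-1.1.1d
./config --prefix=/usr/local/openssl shared
make && make test && make install
```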
I also recommend tracking the official OpenSSL Vulnerabilities newsletter if you want to know about security bugs and issues fixed in OpenSSL.
External resources
- OpenSSL Official Website
- OpenSSL Official Blog
- OpenSSL Official Newslog
🔰 Force all connections over TLS
Rationale
TLS provides two main services. For one, it validates for the user the identity of the server they are connecting to. It also protects the transmission of sensitive information from the user to the server.
In my opinion, you should always use HTTPS instead of HTTP (use HTTP only for redirection to HTTPS) to protect your website, even if it doesn't handle sensitive communications and doesn't have any mixed content. The application can have many sensitive places that should be protected.
Always put login pages, registration forms, all subsequent authenticated pages, contact forms, and payment details forms in HTTPS to prevent sniffing and injection (an attacker can inject code into an unencrypted HTTP transmission, which always increases the risk of altered content, even if someone only reads non-critical content; see the Man-in-the-browser attack). They must be accessed only over TLS to ensure your traffic is secure.
If a page is available over TLS, it must be composed completely of content which is transmitted over TLS. Requesting subresources over the insecure HTTP protocol weakens the security of the entire page and of the HTTPS protocol. Modern browsers should block or report all active mixed content delivered via HTTP on such pages by default.
Also remember to implement the HTTP Strict Transport Security (HSTS) and ensure proper configuration of TLS (protocol version, cipher suites, right certificate chain, and other).
We currently have the first free and open CA — Let's Encrypt — so generating and implementing certificates has never been easier. It was created to provide free and easy-to-use TLS and SSL certificates.
Example
- force all traffic to use TLS:

```
server {

  listen 10.240.20.2:80;

  server_name example.com;

  return 301 https://$host$request_uri;

}

server {

  listen 10.240.20.2:443 ssl;

  server_name example.com;

  ...

}
```
- force the login page to use TLS:

```
server {

  listen 10.240.20.2:80;

  server_name example.com;

  ...

  location ^~ /login {

    return 301 https://example.com$request_uri;

  }

}
```
External resources
- Does My Site Need HTTPS?
- HTTP vs HTTPS Test
- Let’s Encrypt Documentation
- Should we force user to HTTPS on website?
- Force a user to HTTPS
- The Security Impact of HTTPS Interception [pdf]
- HTTPS with self-signed certificate vs HTTP (from this handbook)
- Enable HTTP Strict Transport Security — Hardening — P1 (from this handbook)
🔰 Use min. 2048-bit for RSA and 256-bit for ECC
Rationale
SSL certificates most commonly use `RSA` keys, and the recommended size of these keys keeps increasing to maintain sufficient cryptographic strength. An alternative to `RSA` is `ECC`. `ECC` (and `ECDSA`) is probably better for most purposes, but not for everything. Both key types share the same important property of being asymmetric algorithms (one key for encrypting and one key for decrypting). NGINX supports dual certificates, so you can deploy the leaner, meaner `ECC` certificates but still let visitors browse your site with standard certificates.
The truth is (if we talk about `RSA`) that the industry/community is split on this topic. I am in the «use 2048, because 4096 gives us almost nothing, while costing us quite a lot» camp myself.
Advisories recommend 2048-bit `RSA` (or 256-bit `ECC`) keys at the moment. Security experts project that 2048 bits will be sufficient for commercial use until around the year 2030 (as per NIST). The US National Security Agency (NSA) requires all Top Secret files and documents to be encrypted with 384-bit `ECC` keys (comparable to a 7680-bit `RSA` key). Also, for security reasons, the latest CA/Browser Forum — Baseline Requirements [pdf] and NIST advise using a 2048-bit RSA key for subscriber certificates/keys. Furthermore, current recommendations (NIST SP 800-57-2 [pdf]) are 2048 or 3072 bits, depending on interoperability requirements. On the other hand, the latest version of FIPS-186-5 (Draft) [pdf] specifies the use of a modulus whose bit length is an even integer greater than or equal to 2048 bits (the old FIPS-186-4 says the U.S. Federal Government generates (and uses) digital signatures with 1024, 2048, or 3072-bit key lengths).
Next, OpenSSL uses a 2048-bit key by default. Recommendations of the European Payments Council (EPC342-08 v8.0 [pdf]) say you should avoid using 1024-bit RSA keys and 160-bit ECC keys for new applications, unless for short-term, low-value protection (e.g. ephemeral authentication for single devices). The EPC also recommends using at least 2048-bit RSA or 224-bit ECC for medium-term (e.g. 10 year) protection. They classify `SHA-1`, `RSA` moduli of 1024 bits and `ECC` keys of 160 bits as suitable for legacy use (but I no longer believe `SHA-1` is suitable even for legacy use).
Generally, there is no compelling reason to choose 4096-bit keys for `RSA` over 2048-bit ones, provided you use sane expiration intervals (e.g. not greater than 6-12 months for a 2048-bit key and certificate) to give an attacker less time to crack the key and to reduce the chance of someone exploiting any vulnerability that may occur if your key is compromised; longer keys are not necessary for the certificate's security per se for now. The security levels for RSA are based on the strongest known attacks against RSA compared to the amount of processing that would be needed to break symmetric encryption algorithms. For me, we should be more concerned about our private keys getting stolen in a server compromise, and about the moment when technological progress makes our keys vulnerable to attacks.
A 256-bit `ECC` key can be stronger than a 2048-bit classical key. If you use `ECDSA`, the recommended key size changes according to usage; see NIST 800-57-3 — Application-Specific Key Management Guidance (page 12, table 2-1) [pdf]. While it is true that a longer key provides better security, doubling the length of the `RSA` key from 2048 to 4096 increases the bits of security by only 18, a mere 16% (while the time to sign a message increases by 7x, and the time to verify a signature increases by more than 3x in some cases). Moreover, besides requiring more storage, longer keys also translate into increased CPU usage.
`ECC` is better than `RSA` in terms of key length, but the main issue is implementation: I think `RSA` is easier to implement than `ECC`. `ECDSA` keys (which contain `ECC` public keys) are recommended over `RSA` because they offer the same level of security with smaller keys, contrasted with non-`ECC` cryptography. `ECC` keys are better than `RSA & DSA` keys in that the `ECC` algorithm is harder to break (less vulnerable). In my opinion, `ECC` is particularly suitable for environments with lots of constraints (limited storage or data processing resources), e.g. cellular phones, PDAs, and embedded systems in general. Of course, `RSA` keys are very fast, provide very simple encryption and verification, and are easier to implement than `ECC`.
Longer `RSA` keys take more time to generate and require more CPU and power when used for encrypting and decrypting; the SSL handshake at the start of each connection will also be slower. This likewise has a small impact on the client side (e.g. browsers). When using `curve25519`, `ECC` is considered more secure; it is fast and immune to a variety of side-channel attacks by design. `RSA` is no less secure in practical terms, though, and is also considered unbreakable by modern technology.
The real advantage of using a 4096-bit key nowadays is future-proofing. If you want to get A+ with 100%s on SSL Labs (for Key Exchange), you should definitely use 4096-bit private keys. That's the main (and, for me, the only) reason why you would use them.
Use OpenSSL's `speed` command to benchmark the two types and compare results, e.g. `openssl speed rsa2048 rsa4096`, `openssl speed rsa` or `openssl speed ecdsa`. Remember, however, that OpenSSL speed tests show the difference in raw primitive speed, while in real life most CPU time is spent on the asymmetric algorithms during the SSL handshake. On the other hand, modern processors are capable of executing at least 1k RSA 1024-bit signs per second on a single core, so this isn't usually an issue.
The «SSL/TLS Deployment Best Practices» book say:
The cryptographic handshake, which is used to establish secure connections, is an operation whose cost is highly influenced by private key size. Using a key that is too short is insecure, but using a key that is too long will result in «too much» security and slow operation. For most web sites, using RSA keys stronger than 2048 bits and ECDSA keys stronger than 256 bits is a waste of CPU power and might impair user experience. Similarly, there is little benefit to increasing the strength of the ephemeral key exchange beyond 2048 bits for DHE and 256 bits for ECDHE.
Konstantin Ryabitsev (Reddit):
Generally speaking, if we ever find ourselves in a world where 2048-bit keys are no longer good enough, it won’t be because of improvements in brute-force capabilities of current computers, but because RSA will be made obsolete as a technology due to revolutionary computing advances. If that ever happens, 3072 or 4096 bits won’t make much of a difference anyway. This is why anything above 2048 bits is generally regarded as a sort of feel-good hedging theatre.
My recommendation:

Use a 256-bit key for `ECDSA`, or a 2048-bit key (instead of 4096-bit) for `RSA`, at this moment.
Example
```
### Example (RSA):
( _fd="example.com.key" ; _len="2048" ; openssl genrsa -out ${_fd} ${_len} )

# Let's Encrypt:
certbot certonly -d example.com -d www.example.com --rsa-key-size 2048

### Example (ECC):
# _curve: prime256v1, secp521r1, secp384r1
( _fd="example.com.key" ; _fd_csr="example.com.csr" ; _curve="prime256v1" ;
  openssl ecparam -out ${_fd} -name ${_curve} -genkey ;
  openssl req -new -key ${_fd} -out ${_fd_csr} -sha256 )

# Let's Encrypt (from above):
certbot --csr ${_fd_csr} -[other-args]
```
For `x25519`:

```
( _fd="private.key" ; _curve="x25519" ;
  openssl genpkey -algorithm ${_curve} -out ${_fd} )
```
➡️ ssllabs score: 100%
```
( _fd="example.com.key" ; _len="2048" ; openssl genrsa -out ${_fd} ${_len} )

# Let's Encrypt:
certbot certonly -d example.com -d www.example.com
```
➡️ ssllabs score: 90%
External resources
- Key Management Guidelines by NIST [NIST]
- Recommendation for Transitioning the Use of Cryptographic Algorithms and Key Lengths [NIST]
- NIST SP 800-52 Rev. 2 [NIST]
- NIST SP 800-57 Part 1 Rev. 3 [NIST]
- FIPS PUB 186-4 — Digital Signature Standard (DSS) [NIST, pdf]
- Cryptographic Key Length Recommendations
- Key Lengths — Contribution to The Handbook of Information Security [pdf]
- NIST — Key Management [NIST]
- CA/Browser Forum Baseline Requirements
- Mozilla Guidelines — Key Management
- So you’re making an RSA key for an HTTPS certificate. What key size do you use?
- RSA Key Sizes: 2048 or 4096 bits?
- Create a self-signed ECC certificate
- ECDSA: Elliptic Curve Signatures
- Elliptic Curve Cryptography Explained
- You should be using ECC for your SSL/TLS certificates
- Comparing ECC vs RSA
- Comparison And Evaluation Of Digital Signature Schemes Employed In Ndn Network [pdf]
- HTTPS Performance, 2048-bit vs 4096-bit
- RSA and ECDSA hybrid Nginx setup with LetsEncrypt certificates
- Why ninety-day lifetimes for certificates?
- SSL Certificate Validity Will Be Limited to One Year by Apple’s Safari Browser
- Certificate lifetime capped to 1 year from Sep 2020
- Why some cryptographic keys are much smaller than others
- Bit security level
- RSA key lengths
🔰 Keep only TLS 1.3 and TLS 1.2
Rationale
It is recommended to enable TLS 1.2/1.3 and to fully disable SSLv2, SSLv3, TLS 1.0 and TLS 1.1, which have protocol weaknesses and use older cipher suites (they do not provide any modern cipher modes) that we really shouldn't be using anymore. TLS 1.2 is currently the most used version of TLS and brings several security improvements over TLS 1.1. The vast majority of sites do support TLSv1.2, but there are still some out there that don't (what's more, not all clients are compatible with every version of TLS). The TLS 1.3 protocol is the latest and most robust TLS protocol version and should be used where possible (and where backward compatibility is not required). The biggest benefit of dropping TLS 1.0 and 1.1 is that modern AEAD ciphers are only supported by TLS 1.2 and above.
TLS 1.0 and TLS 1.1 should not be used (see Deprecating TLSv1.0 and TLSv1.1 [IETF]); they were superseded by TLS 1.2, which has now itself been superseded by TLS 1.3 (which, per NIST, must be supported by January 1, 2024). They are also actively being deprecated in accordance with guidance from government agencies (e.g. NIST Special Publication (SP) 800-52 Revision 2 [pdf]) and industry consortia such as the Payment Card Industry Association (PCI-TLS — Migrating from SSL and Early TLS (Information Supplement) [pdf]). For example, in March 2020, Firefox will disable support for TLS 1.0 and TLS 1.1.
Sticking with TLS 1.0 is a very bad idea and pretty unsafe: it can be POODLEd, BEASTed and otherwise padding-oracled as well. Lots of other CVE weaknesses (see TLS Security 6: Examples of TLS Vulnerabilities and Attacks) still apply and cannot be fixed unless you switch TLS 1.0 off. Sticking with TLS 1.1 is only a bad compromise, though it is halfway free from the TLS 1.0 problems. On the other hand, sometimes their use is still required in practice (to support older clients). There are many other security risks caused by sticking to TLS 1.0 or 1.1, so I strongly recommend everyone updates their clients, services and devices to support at least TLS 1.2.
Removing old SSL/TLS versions is often the only way to prevent downgrade attacks. Google has proposed an extension to SSL/TLS named `TLS_FALLBACK_SCSV` (it should be supported by your OpenSSL library) that seeks to prevent forced SSL/TLS downgrades (the extension was adopted as RFC 7507 in April 2015). Upgrading alone is not sufficient; you must disable SSLv2 and SSLv3 anyway, and if your server does not allow SSLv3 (or v2) connections, the SCSV is not needed for them (as those downgraded connections would not work). Technically, `TLS_FALLBACK_SCSV` is still useful with SSL disabled, because it helps avoid the connection being downgraded to TLS<1.2. To test this extension, read this great tutorial.
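To test it yourself, you can simulate a downgraded client with `openssl s_client` (the hostname is a placeholder); a server honouring the SCSV should abort the handshake with an «inappropriate fallback» alert:

```
openssl s_client -connect example.com:443 -fallback_scsv -no_tls1_2
```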
TLS 1.2 and TLS 1.3 are both without known security issues (TLSv1.2 only once certain conditions have been fulfilled, e.g. disabling `CBC` ciphers). Only these versions provide modern cryptographic algorithms and add TLS extensions and cipher suites. TLS 1.2 improves cipher suites to reduce reliance on block ciphers that have been exploited by attacks like BEAST and the aforementioned POODLE. TLS 1.3 is a new TLS version that will power a faster and more secure web for the next few years. What's more, TLS 1.3 comes without a ton of stuff that was removed: renegotiation, compression, and many legacy algorithms: `DSA`, `RC4`, `SHA1`, `MD5`, and `CBC` ciphers. Additionally, as already mentioned, the TLS 1.0 and TLS 1.1 protocols will be removed from browsers at the beginning of 2020.
TLS 1.2 does require careful configuration to ensure obsolete cipher suites with identified vulnerabilities are not used in conjunction with it. TLS 1.3 removes the need to make these decisions and doesn't require any particular configuration, as all of its ciphers are secure; by default, OpenSSL only enables `GCM` and `ChaCha20/Poly1305` for TLSv1.3, without enabling `CCM`. TLS 1.3 also improves on the security, privacy and performance issues of TLS 1.2.
Before enabling a specific protocol version, you should check which ciphers are supported by the protocol. So, if you turn on TLS 1.2, remember to configure the correct (and strong) ciphers to handle it. Otherwise, connections will not work without supported ciphers (no TLS handshake will succeed).
I think the best way to deploy a secure configuration is: enable TLS 1.2 (as a minimum version; it is safe enough) without any `CBC` ciphers (`ChaCha20+Poly1305` or `AES/GCM` should be preferred over `CBC`, cf. BEAST; however, for me, using `CBC` ciphers is not a vulnerability in and of itself: Zombie POODLE, etc. are the vulnerabilities) and/or TLS 1.3, which is safer because of its handshake improvements and the exclusion of everything that became obsolete since TLS 1.2 came up. Making TLS 1.2 your «minimum protocol level» is the solid choice and an industry best practice (all industry standards, like PCI-DSS, HIPAA and NIST, strongly suggest the use of TLS 1.2 rather than TLS 1.1/1.0).
TLS 1.2 is probably insufficient for legacy client support. The NIST guidelines are not applicable to all use cases, and you should always analyze your user base before deciding which protocols to support or drop (for example, by adding variables responsible for TLS versions and ciphers to the log format). It’s important to remember that not every client supports the latest and greatest that TLS has to offer.
If you tell NGINX to use TLS 1.3, it will use TLS 1.3 only where it is available. NGINX has supported TLS 1.3 since version 1.13.0 (released in April 2017), when built against OpenSSL 1.1.1 or later.
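A quick way to check what your binary was built against (the exact output format varies between builds):

```
nginx -V 2>&1 | grep -i openssl
```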
For TLS 1.3, think about using `ssl_early_data` to allow TLS 1.3 0-RTT handshakes.
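A minimal sketch (the directive requires NGINX 1.15.4+ built against OpenSSL 1.1.1; passing the `Early-Data` header lets a backend reject replay-sensitive early requests with 425 Too Early — see also the «Prevent Replay Attacks on Zero Round-Trip Time» practice below):

```
ssl_early_data on;

# Inform the backend whether the request arrived in 0-RTT data:
proxy_set_header Early-Data $ssl_early_data;
```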
My recommendation:
Use only TLSv1.3 and TLSv1.2.
Example
TLS 1.3 + 1.2:

```
ssl_protocols TLSv1.3 TLSv1.2;
```

TLS 1.2:

```
ssl_protocols TLSv1.2;
```

➡️ ssllabs score: 100%

TLS 1.3 + 1.2 + 1.1:

```
ssl_protocols TLSv1.3 TLSv1.2 TLSv1.1;
```

TLS 1.2 + 1.1:

```
ssl_protocols TLSv1.2 TLSv1.1;
```

➡️ ssllabs score: 95%
External resources
- The Transport Layer Security (TLS) Protocol Version 1.2 [IETF]
- The Transport Layer Security (TLS) Protocol Version 1.3 [IETF]
- Transport Layer Security Protocol: Documentation & Implementations
- TLS1.2 — Every byte explained and reproduced
- TLS1.3 — Every byte explained and reproduced
- TLS1.3 — OpenSSLWiki
- TLS v1.2 handshake overview
- An Overview of TLS 1.3 — Faster and More Secure
- A Detailed Look at RFC 8446 (a.k.a. TLS 1.3)
- Differences between TLS 1.2 and TLS 1.3
- TLS 1.3 in a nutshell
- TLS 1.3 is here to stay
- TLS 1.3: Everything you need to know
- TLS 1.3: better for individuals — harder for enterprises
- How to enable TLS 1.3 on Nginx
- How to deploy modern TLS in 2019?
- Deploying TLS 1.3: the great, the good and the bad
- Why TLS 1.3 isn’t in browsers yet
- Downgrade Attack on TLS 1.3 and Vulnerabilities in Major TLS Libraries
- How does TLS 1.3 protect against downgrade attacks?
- Phase two of our TLS 1.0 and 1.1 deprecation plan
- Deprecating TLSv1.0 and TLSv1.1 (IETF) [IETF]
- Deprecating TLS 1.0 and 1.1 — Enhancing Security for Everyone
- End of Life for TLS 1.0/1.1
- Legacy TLS is on the way out: Start deprecating TLSv1.0 and TLSv1.1 now
- TLS/SSL Explained – Examples of a TLS Vulnerability and Attack, Final Part
- A Challenging but Feasible Blockwise-Adaptive Chosen-Plaintext Attack on SSL
- TLS/SSL hardening and compatibility Report 2011 [pdf]
- This POODLE bites: exploiting the SSL 3.0 fallback
- New Tricks For Defeating SSL In Practice [pdf]
- Are You Ready for 30 June 2018? Saying Goodbye to SSL/early TLS
- What Happens After 30 June 2018? New Guidance on Use of SSL/Early TLS
- Mozilla Security Blog — Removing Old Versions of TLS
- Google — Modernizing Transport Security
- These truly are the end times for TLS 1.0, 1.1
- Who’s quit TLS 1.0?
- Recommended Cloudflare SSL configurations for PCI compliance
- Cloudflare SSL cipher, browser, and protocol support
- SSL and TLS Deployment Best Practices
- What level of SSL or TLS is required for HIPAA compliance?
- AEAD Ciphers — shadowsocks
- Building a faster and more secure web with TCP Fast Open, TLS False Start, and TLS 1.3
- SSL Labs Grade Change for TLS 1.0 and TLS 1.1 Protocols
- ImperialViolet — TLS 1.3 and Proxies
- How Netflix brings safer and faster streaming experiences to the living room on crowded networks using TLS 1.3
- TLS versions (from this handbook)
- Defend against the BEAST attack — Hardening — P1 (from this handbook)
🔰 Use only strong ciphers
Rationale
This parameter changes more often than others; the recommended configuration for today may be out of date tomorrow. In my opinion, having a well-considered and up-to-date list of highly secure cipher suites is important for high-security SSL/TLS communication. In case of doubt, you should follow Mozilla Security/Server Side TLS (it's a really great source; all Mozilla websites and deployments should follow the recommendations from this document).
To check the ciphers supported by OpenSSL on your server, use `openssl ciphers -s -v`, `openssl ciphers -s -v ECDHE` or `openssl ciphers -s -v DHE`.
Without careful cipher suite selection (TLS 1.3 does it for you!), you risk negotiating a weak cipher suite (less secure and not keeping up with the latest vulnerabilities; see this) that may be compromised. If another party doesn't support a cipher suite that's up to your standards, and you highly value security on that connection, you shouldn't allow your system to operate with lower-quality cipher suites.
For more security, use only strong and non-vulnerable cipher suites. Place `ECDHE+AESGCM` (according to the Alexa Top 1 Million Security Analysis, over 92.8% of websites using encryption prefer `ECDHE`-based ciphers) and `DHE` suites at the top of your list (also, if you are concerned about performance, prioritize `ECDHE-ECDSA` and `ECDHE-RSA` over `DHE`; Chrome is going to prioritize `ECDHE`-based ciphers over `DHE`-based ciphers). `DHE` is generally slow, and in TLS 1.2 and below it is vulnerable to weak groups (less than 2048-bit at this moment); what's more, those protocol versions do not specify any restrictions on the groups to use. These issues don't impact `ECDHE`, which is why it's generally preferred today.
The order is important because `ECDHE` suites are faster; you want to use them whenever clients support them. Ephemeral `DHE/ECDHE` are recommended and support Perfect Forward Secrecy (a method that does not have the vulnerability to the type of replay attack that other solutions could introduce if a highly secure cipher suite is not supported). `ECDHE-ECDSA` is about the same as `RSA` in performance, but much more secure. `ECDHE` with `RSA` is slower, but still much more secure than `RSA` alone.
For backward compatibility with older software components, think about less restrictive ciphers. Not only do you have to enable at least one special `AES128` cipher for HTTP/2 support according to RFC 7540 — TLS 1.2 Cipher Suites [IETF], you also have to allow `prime256` elliptic curves, which reduces the score for key exchange by another 10% even if a secure server-preferred order is set.
Servers either use the client's most preferred cipher suite or their own; most servers use their own preference. Disabling `DHE` removes forward secrecy for older clients that support `DHE` but not `ECDHE`, but results in substantially faster handshake times. I think that, so long as you only control one side of the conversation, it would be ridiculous to restrict your system to supporting only one cipher suite (it would cut off too many clients and too much traffic). On the other hand, look at what David Benjamin (from Chrome networking) said about it: servers should also disable `DHE` ciphers. Even if `ECDHE` is preferred, merely supporting a weak group leaves `DHE`-capable clients vulnerable.
Also, modern cipher suites (e.g. from the Mozilla recommendations) suffer from compatibility troubles, mainly because they drop `SHA-1` (see what Google said about it in 2014: Gradually sunsetting SHA-1). But be careful if you want to use ciphers with `HMAC-SHA-1`: `SHA-1` has been proven vulnerable to collision attacks as of 2017 (see this). While this does not affect its usage as a `MAC`, safer alternatives such as `SHA-256` or `SHA-3` should be considered. There's a perfectly good explanation why.
If you want to get A+ with 100%s on SSL Labs (for Cipher Strength), you should definitely disable `128-bit` cipher suites (that's the main reason why you would not use them) and `CBC` cipher suites, which have had many weaknesses.
In my opinion, `128-bit` symmetric encryption is not less secure in practice. Moreover, those ciphers are about 30% faster and still secure. For example, TLS 1.3 uses `TLS_AES_128_GCM_SHA256 (0x1301)` (mandatory for TLS-compliant applications).
You should disable `CHACHA20_POLY1305` (e.g. `TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256` and `TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256`) to comply with HIPAA and NIST SP 800-38D [pdf] guidelines (even though Mozilla and Cloudflare use them and the IETF also recommends these cipher suites), and `CBC` cipher suites to comply with PCI DSS, HIPAA, and NIST guidelines. However, getting rid of `CHACHA20_POLY1305` is strange to me, and I have not found a rational explanation for why we should do it. `ChaCha20` is simpler than `AES` and is currently quite a lot faster as an encryption algorithm if no `AES` hardware acceleration is available (in practice `AES` is often implemented in hardware, which gives it an advantage). What's more, speed and security are probably the reason for Google to already support `ChaCha20 + Poly1305/AES` in Chrome.
Mozilla recommends leaving the default ciphers for TLSv1.3 and not explicitly enabling them in the configuration (TLSv1.3 doesn't require any particular changes). One of the changes we need to know about is that the TLS 1.3 cipher suites are fixed unless an application explicitly defines them. Thus, all of your TLSv1.3 connections will use `AES-256-GCM`, `ChaCha20`, then `AES-128-GCM`, in that order. I also recommend relying on OpenSSL's defaults, because for TLS 1.3 the cipher suites are fixed, so setting them will have no effect (you will automatically use those three ciphers).
By default, OpenSSL 1.1.1* with TLSv1.3 disables the `TLS_AES_128_CCM_SHA256` and `TLS_AES_128_CCM_8_SHA256` ciphers. In my opinion, `ChaCha20+Poly1305` and `AES/GCM` are very efficient in most cases. On modern processors, the common `AES-GCM` cipher and mode are sped up by dedicated hardware, making that algorithm's implementation faster than anything else by a wide margin. On older or cheaper processors that lack that feature, though, the `ChaCha20` cipher runs faster than `AES-GCM`, as was the `ChaCha20` designers' intention.
For TLS 1.2, you should consider disabling weak ciphers without forward secrecy, like ciphers using the `CBC` algorithm. The `CBC` mode is vulnerable to plain-text attacks with TLS 1.0, SSL 3.0 and lower. A real fix is implemented with TLS 1.2, in which the `GCM` mode was introduced and which is not vulnerable to the BEAST attack. Using `CBC` ciphers also reduces the final grade because they don't use ephemeral keys. In my opinion, you should use ciphers with `AEAD` encryption (TLS 1.3 supports only these suites) because they don't have any known weaknesses.
There are vulnerabilities like Zombie POODLE, GOLDENDOODLE, 0-Length OpenSSL and Sleeping POODLE which were published for websites that use `CBC` (Cipher Block Chaining) block cipher modes. These vulnerabilities are applicable only if the server uses TLS 1.0, TLS 1.1 or TLS 1.2 with `CBC` cipher modes. Look at the Zombie POODLE, GOLDENDOODLE, & How TLSv1.3 Can Save Us All [pdf] presentation from Black Hat Asia 2019. TLS 1.0 and TLS 1.1 may also be affected by vulnerabilities such as FREAK, POODLE, BEAST, and CRIME.
And yet, interestingly, Craig Young, a computer security researcher for Tripwire's Vulnerability and Exposure Research Team, found vulnerabilities in SSL 3.0's successor, TLS 1.2, that allow for attacks akin to POODLE due to TLS 1.2's continued support for a long-outdated cryptographic method: cipher block-chaining (`CBC`). The flaws allow man-in-the-middle (MitM) attacks on a user's encrypted Web sessions.
I recommend disabling TLS cipher modes that use `RSA` encryption for key exchange (all ciphers that start with `TLS_RSA_WITH_*`), because they are really vulnerable to the ROBOT attack. Instead, you should add support for cipher suites that use `ECDHE` or `DHE` (to be compliant with NIST SP 800-56B [pdf]) for key transport. If your server is configured to support ciphers known as static key ciphers, you should know that these ciphers don't support «Forward Secrecy». In the new specification for HTTP/2, these ciphers have been blacklisted. Not all servers that support `RSA` key exchange are vulnerable, but it is recommended to disable `RSA` key exchange ciphers as they do not support forward secrecy. On the other hand, `TLS_ECDHE_RSA` ciphers may be OK, because `RSA` is not doing the key transport in this case. TLS 1.3 doesn't use `RSA` key exchanges at all, because they're not forward secret.
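To see which static-RSA key-exchange suites your OpenSSL build still knows about (and would enable if your cipher string allowed them), you can list them with the `kRSA` keyword:

```
openssl ciphers -v 'kRSA'
```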
You should also absolutely disable weak ciphers, regardless of the TLS version you use, such as those with `DSS`, `DSA`, `DES/3DES`, `RC4`, `MD5`, `SHA1`, `null` or anon in the name.
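One way to express those exclusions in an OpenSSL cipher string is shown below; it is only a sketch (combine the negations with your own positive list rather than copying it verbatim):

```
ssl_ciphers "ECDHE+AESGCM:ECDHE+CHACHA20:!aNULL:!eNULL:!MD5:!SHA1:!DSS:!3DES:!RC4";
```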
We have a nice online tool for testing compatibility cipher suites with user agents: CryptCheck. I think it will be very helpful for you.
If in doubt, use one of the recommended Mozilla kits (see below), check also Supported cipher suites and User agent compatibility.
Look at this great explanation about weak ciphers by Keith Shaw:
Weak does not mean insecure. […] A cipher usually gets marked as weak because there is some fundamental design flaw that makes it difficult to implement securely.
Finally, some interesting statistics from Logjam: the latest TLS vulnerability explained:

94% of the TLS connections to CloudFlare customer sites use `ECDHE` (more precisely, 90% of them being `ECDHE-RSA-AES` of some sort and 10% `ECDHE-RSA-CHACHA20-POLY1305`) and provide Forward Secrecy. The rest use static `RSA` (5.5% with `AES`, 0.6% with `3DES`).
My recommendation:

Use only TLSv1.3 and TLSv1.2 with the cipher suites below (remember about min. `2048-bit` DH params for `DHE` with TLSv1.2):

```
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256";
```
Example
Cipher suites for TLSv1.3:
```
# - this is only an example, because for TLS 1.3 the cipher suites are fixed, so setting them will have no effect
# - if you have no explicit cipher suite configuration then you will automatically use those three and will be able to negotiate TLSv1.3
# - I recommend not setting ciphers for TLSv1.3 in NGINX
ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384";
```
Cipher suites for TLSv1.2:
```
# Without DHE, only ECDHE:
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384";
```
➡️ ssllabs score: 100%
Cipher suites for TLSv1.3:
```
# - this is only an example, because for TLS 1.3 the cipher suites are fixed, so setting them will have no effect
# - if you have no explicit cipher suite configuration then you will automatically use those three and will be able to negotiate TLSv1.3
# - I recommend not setting ciphers for TLSv1.3 in NGINX
ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256";
```
Cipher suites for TLSv1.2:
```
# 1) With DHE (remember about min. 2048-bit DH params for DHE!):
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256";

# 2) Without DHE, only ECDHE (DH params are not required):
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256";

# 3) With DHE (remember about min. 2048-bit DH params for DHE!):
ssl_ciphers "EECDH+CHACHA20:EDH+AESGCM:AES256+EECDH:AES256+EDH";
```
➡️ ssllabs score: 90%
This will also give a baseline for comparison with Mozilla SSL Configuration Generator:
- Modern profile, OpenSSL 1.1.1 (and variants) for TLSv1.3
```
# However, Mozilla does not enable them in the configuration:
# - for TLS 1.3 the cipher suites are fixed unless an application explicitly defines them
# ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256";
```
- Modern profile, OpenSSL 1.1.1 (and variants) for TLSv1.2 + TLSv1.3
```
# However, Mozilla does not enable them in the configuration:
# - for TLS 1.3 the cipher suites are fixed unless an application explicitly defines them
# ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256";

ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";
```
- Intermediate profile, OpenSSL 1.1.0b + 1.1.1 (and variants) for TLSv1.2
```
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";
```
There are also recommended ciphers for HIPAA and TLS v1.2+:
```
ssl_ciphers "TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:TLS13-AES-128-CCM-8-SHA256:TLS13-AES-128-CCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-CCM:DHE-RSA-AES128-CCM:DHE-RSA-AES256-CCM8:DHE-RSA-AES128-CCM8:DH-RSA-AES256-GCM-SHA384:DH-RSA-AES128-GCM-SHA256:ECDH-RSA-AES256-GCM-SHA384:ECDH-RSA-AES128-GCM-SHA256";
```
Scan results for each cipher suite (TLSv1.2 offered)
My recommendation
- Cipher suites:
```
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256";
```
- DH: 2048-bit
- SSL Labs scores:
- Certificate: 100%
- Protocol Support: 100%
- Key Exchange: 90%
- Cipher Strength: 90%
- SSLLabs suites in server-preferred order:
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030) ECDH x25519 (eq. 3072 bits RSA) FS 256
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcca8) ECDH x25519 (eq. 3072 bits RSA) FS 256
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f) ECDH x25519 (eq. 3072 bits RSA) FS 128
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (0x9f) DH 2048 bits FS 256
TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xccaa) DH 2048 bits FS 256
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (0x9e) DH 2048 bits FS 128
- SSLLabs ‘Handshake Simulation’ errors:
IE 11 / Win Phone 8.1 R Server sent fatal alert: handshake_failure
Safari 6 / iOS 6.0.1 Server sent fatal alert: handshake_failure
Safari 7 / iOS 7.1 R Server sent fatal alert: handshake_failure
Safari 7 / OS X 10.9 R Server sent fatal alert: handshake_failure
Safari 8 / iOS 8.4 R Server sent fatal alert: handshake_failure
Safari 8 / OS X 10.10 R Server sent fatal alert: handshake_failure
- testssl.sh:

```
SSLv2
SSLv3
TLS 1
TLS 1.1
TLS 1.2
 xc030   ECDHE-RSA-AES256-GCM-SHA384   ECDH 521   AESGCM     256   TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
 x9f     DHE-RSA-AES256-GCM-SHA384     DH 2048    AESGCM     256   TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
 xcca8   ECDHE-RSA-CHACHA20-POLY1305   ECDH 253   ChaCha20   256   TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
 xccaa   DHE-RSA-CHACHA20-POLY1305     DH 2048    ChaCha20   256   TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256
 xc02f   ECDHE-RSA-AES128-GCM-SHA256   ECDH 521   AESGCM     128   TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
 x9e     DHE-RSA-AES128-GCM-SHA256     DH 2048    AESGCM     128   TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
```
SSLLabs 100%
- Cipher suites:
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384";
- DH: not used
- SSL Labs scores:
- Certificate: 100%
- Protocol Support: 100%
- Key Exchange: 90%
- Cipher Strength: 100%
- SSLLabs suites in server-preferred order:
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030) ECDH x25519 (eq. 3072 bits RSA) FS 256
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcca8) ECDH x25519 (eq. 3072 bits RSA) FS 256
- SSLLabs ‘Handshake Simulation’ errors:
Android 5.0.0 Server sent fatal alert: handshake_failure
Android 6.0 Server sent fatal alert: handshake_failure
Firefox 31.3.0 ESR / Win 7 Server sent fatal alert: handshake_failure
IE 11 / Win 7 R Server sent fatal alert: handshake_failure
IE 11 / Win 8.1 R Server sent fatal alert: handshake_failure
IE 11 / Win Phone 8.1 R Server sent fatal alert: handshake_failure
IE 11 / Win Phone 8.1 Update R Server sent fatal alert: handshake_failure
Safari 6 / iOS 6.0.1 Server sent fatal alert: handshake_failure
Safari 7 / iOS 7.1 R Server sent fatal alert: handshake_failure
Safari 7 / OS X 10.9 R Server sent fatal alert: handshake_failure
Safari 8 / iOS 8.4 R Server sent fatal alert: handshake_failure
Safari 8 / OS X 10.10 R Server sent fatal alert: handshake_failure
- testssl.sh:

```
SSLv2
SSLv3
TLS 1
TLS 1.1
TLS 1.2
 xc030   ECDHE-RSA-AES256-GCM-SHA384   ECDH 521   AESGCM     256   TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
 xcca8   ECDHE-RSA-CHACHA20-POLY1305   ECDH 253   ChaCha20   256   TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
```
SSLLabs 90% (1)
- Cipher suites:
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256";
- DH: 2048-bit
- SSL Labs scores:
- Certificate: 100%
- Protocol Support: 100%
- Key Exchange: 90%
- Cipher Strength: 90%
- SSLLabs suites in server-preferred order:
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030) ECDH x25519 (eq. 3072 bits RSA) FS 256
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (0x9f) DH 2048 bits FS 256
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcca8) ECDH x25519 (eq. 3072 bits RSA) FS 256
TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xccaa) DH 2048 bits FS 256
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f) ECDH x25519 (eq. 3072 bits RSA) FS 128
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (0x9e) DH 2048 bits FS 128
- SSLLabs ‘Handshake Simulation’ errors:
IE 11 / Win Phone 8.1 R Server sent fatal alert: handshake_failure
Safari 6 / iOS 6.0.1 Server sent fatal alert: handshake_failure
Safari 7 / iOS 7.1 R Server sent fatal alert: handshake_failure
Safari 7 / OS X 10.9 R Server sent fatal alert: handshake_failure
Safari 8 / iOS 8.4 R Server sent fatal alert: handshake_failure
Safari 8 / OS X 10.10 R Server sent fatal alert: handshake_failure
- testssl.sh:

```
SSLv2
SSLv3
TLS 1
TLS 1.1
TLS 1.2
 xc030   ECDHE-RSA-AES256-GCM-SHA384   ECDH 521   AESGCM     256   TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
 x9f     DHE-RSA-AES256-GCM-SHA384     DH 2048    AESGCM     256   TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
 xcca8   ECDHE-RSA-CHACHA20-POLY1305   ECDH 253   ChaCha20   256   TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
 xccaa   DHE-RSA-CHACHA20-POLY1305     DH 2048    ChaCha20   256   TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256
 xc02f   ECDHE-RSA-AES128-GCM-SHA256   ECDH 521   AESGCM     128   TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
 x9e     DHE-RSA-AES128-GCM-SHA256     DH 2048    AESGCM     128   TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
```
SSLLabs 90% (2)
- Cipher suites:
ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256";
- DH: not used
- SSL Labs scores:
- Certificate: 100%
- Protocol Support: 100%
- Key Exchange: 90%
- Cipher Strength: 90%
- SSLLabs suites in server-preferred order:
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030) ECDH x25519 (eq. 3072 bits RSA) FS 256
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcca8) ECDH x25519 (eq. 3072 bits RSA) FS 256
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f) ECDH x25519 (eq. 3072 bits RSA) FS 128
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (0xc028) ECDH x25519 (eq. 3072 bits RSA) FS WEAK 256
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (0xc027) ECDH x25519 (eq. 3072 bits RSA) FS WEAK 128
- SSLLabs ‘Handshake Simulation’ errors:
- testssl.sh:
› SSLv2
› SSLv3
› TLS 1
› TLS 1.1
› TLS 1.2
› xc030 ECDHE-RSA-AES256-GCM-SHA384 ECDH 521 AESGCM 256 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
› xc028 ECDHE-RSA-AES256-SHA384 ECDH 521 AES 256 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
› xcca8 ECDHE-RSA-CHACHA20-POLY1305 ECDH 253 ChaCha20 256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
› xc02f ECDHE-RSA-AES128-GCM-SHA256 ECDH 521 AESGCM 128 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
› xc027 ECDHE-RSA-AES128-SHA256 ECDH 521 AES 128 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
SSLLabs 90% (3)
- Cipher suites:
ssl_ciphers "EECDH+CHACHA20:EDH+AESGCM:AES256+EECDH:AES256+EDH";
- DH: 2048-bit
- SSL Labs scores:
- Certificate: 100%
- Protocol Support: 100%
- Key Exchange: 90%
- Cipher Strength: 90%
- SSLLabs suites in server-preferred order:
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcca8) ECDH x25519 (eq. 3072 bits RSA) FS 256
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (0x9f) DH 2048 bits FS 256
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (0x9e) DH 2048 bits FS 128
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030) ECDH x25519 (eq. 3072 bits RSA) FS 256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (0xc028) ECDH x25519 (eq. 3072 bits RSA) FS WEAK 256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014) ECDH x25519 (eq. 3072 bits RSA) FS WEAK 256
TLS_DHE_RSA_WITH_AES_256_CCM_8 (0xc0a3) DH 2048 bits FS 256
TLS_DHE_RSA_WITH_AES_256_CCM (0xc09f) DH 2048 bits FS 256
TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 (0x6b) DH 2048 bits FS WEAK 256
TLS_DHE_RSA_WITH_AES_256_CBC_SHA (0x39) DH 2048 bits FS WEAK 256
- SSLLabs ‘Handshake Simulation’ errors: none
- testssl.sh:

```
SSLv2
SSLv3
TLS 1
TLS 1.1
TLS 1.2
 xc030   ECDHE-RSA-AES256-GCM-SHA384   ECDH 521   AESGCM     256   TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
 xc028   ECDHE-RSA-AES256-SHA384       ECDH 521   AES        256   TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
 xc014   ECDHE-RSA-AES256-SHA          ECDH 521   AES        256   TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
 x9f     DHE-RSA-AES256-GCM-SHA384     DH 2048    AESGCM     256   TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
 xcca8   ECDHE-RSA-CHACHA20-POLY1305   ECDH 253   ChaCha20   256   TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
 xc0a3   DHE-RSA-AES256-CCM8           DH 2048    AESCCM8    256   TLS_DHE_RSA_WITH_AES_256_CCM_8
 xc09f   DHE-RSA-AES256-CCM            DH 2048    AESCCM     256   TLS_DHE_RSA_WITH_AES_256_CCM
 x6b     DHE-RSA-AES256-SHA256         DH 2048    AES        256   TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
 x39     DHE-RSA-AES256-SHA            DH 2048    AES        256   TLS_DHE_RSA_WITH_AES_256_CBC_SHA
 x9e     DHE-RSA-AES128-GCM-SHA256     DH 2048    AESGCM     128   TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
```
Mozilla modern profile
- Cipher suites:
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";
- DH: 2048-bit
- SSL Labs scores:
- Certificate: 100%
- Protocol Support: 100%
- Key Exchange: 90%
- Cipher Strength: 90%
- SSLLabs suites in server-preferred order:
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f) ECDH x25519 (eq. 3072 bits RSA) FS 128
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030) ECDH x25519 (eq. 3072 bits RSA) FS 256
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcca8) ECDH x25519 (eq. 3072 bits RSA) FS 256
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (0x9e) DH 2048 bits FS 128
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (0x9f) DH 2048 bits FS 256
- SSLLabs ‘Handshake Simulation’ errors:
IE 11 / Win Phone 8.1 R Server sent fatal alert: handshake_failure
Safari 6 / iOS 6.0.1 Server sent fatal alert: handshake_failure
Safari 7 / iOS 7.1 R Server sent fatal alert: handshake_failure
Safari 7 / OS X 10.9 R Server sent fatal alert: handshake_failure
Safari 8 / iOS 8.4 R Server sent fatal alert: handshake_failure
Safari 8 / OS X 10.10 R Server sent fatal alert: handshake_failure
- testssl.sh:

```
SSLv2
SSLv3
TLS 1
TLS 1.1
TLS 1.2
 xc030   ECDHE-RSA-AES256-GCM-SHA384   ECDH 521   AESGCM     256   TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
 x9f     DHE-RSA-AES256-GCM-SHA384     DH 2048    AESGCM     256   TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
 xcca8   ECDHE-RSA-CHACHA20-POLY1305   ECDH 253   ChaCha20   256   TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
 xc02f   ECDHE-RSA-AES128-GCM-SHA256   ECDH 521   AESGCM     128   TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
 x9e     DHE-RSA-AES128-GCM-SHA256     DH 2048    AESGCM     128   TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
```
Scan results for each cipher suite (TLSv1.3 offered)
Mozilla modern profile (My recommendation)
- Cipher suites: not set
- DH: 2048-bit
- SSL Labs scores:
- Certificate: 100%
- Protocol Support: 100%
- Key Exchange: 90%
- Cipher Strength: 90%
- SSLLabs suites in server-preferred order:
TLS_AES_256_GCM_SHA384 (0x1302) ECDH x25519 (eq. 3072 bits RSA) FS 256
TLS_CHACHA20_POLY1305_SHA256 (0x1303) ECDH x25519 (eq. 3072 bits RSA) FS 256
TLS_AES_128_GCM_SHA256 (0x1301) ECDH x25519 (eq. 3072 bits RSA) FS 128
- SSLLabs ‘Handshake Simulation’ errors:
Chrome 69 / Win 7 R Server sent fatal alert: protocol_version
Firefox 62 / Win 7 R Server sent fatal alert: protocol_version
OpenSSL 1.1.0k R Server sent fatal alert: protocol_version
- testssl.sh:

```
SSLv2
SSLv3
TLS 1
TLS 1.1
TLS 1.2
TLS 1.3
 x1302   TLS_AES_256_GCM_SHA384         ECDH 253   AESGCM     256   TLS_AES_256_GCM_SHA384
 x1303   TLS_CHACHA20_POLY1305_SHA256   ECDH 253   ChaCha20   256   TLS_CHACHA20_POLY1305_SHA256
 x1301   TLS_AES_128_GCM_SHA256         ECDH 253   AESGCM     128   TLS_AES_128_GCM_SHA256
```
External resources
- RFC 7525 — TLS Recommendations [IETF]
- TLS Cipher Suites [IANA]
- SEC 1: Elliptic Curve Cryptography [pdf]
- TLS Cipher Suite Search
- Elliptic Curve Cryptography: a gentle introduction
- SSL/TLS: How to choose your cipher suite
- HTTP/2 and ECDSA Cipher Suites
- TLS 1.3 (with AEAD) and TLS 1.2 cipher suites demystified: how to pick your ciphers wisely
- Which SSL/TLS Protocol Versions and Cipher Suites Should I Use?
- Recommendations for a cipher string by OWASP
- Recommendations for TLS/SSL Cipher Hardening by Acunetix
- Mozilla’s Modern compatibility suite
- Cloudflare SSL cipher, browser, and protocol support
- TLS & Perfect Forward Secrecy
- Why use Ephemeral Diffie-Hellman
- Cipher Suite Breakdown
- Zombie POODLE and GOLDENDOODLE Vulnerabilities
- SSL Labs Grading Update: Forward Secrecy, Authenticated Encryption and ROBOT
- Logjam: the latest TLS vulnerability explained
- The CBC Padding Oracle Problem
- Goodbye TLS_RSA
- ImperialViolet — TLS Symmetric Crypto
- IETF drops RSA key transport from TLS 1.3
- Why TLS 1.3 is a Huge Improvement
- Overview of TLS v1.3 — What’s new, what’s removed and what’s changed? [pdf]
- OpenSSL IANA Mapping
- Testing for Weak SSL/TLS Ciphers, Insufficient Transport Layer Protection (OTG-CRYPST-001)
- Bypassing Web-Application Firewalls by abusing SSL/TLS
- What level of SSL or TLS is required for HIPAA compliance?
- Cryptographic Right Answers
- ImperialViolet — ChaCha20 and Poly1305 for TLS
- Do the ChaCha: better mobile performance with cryptography
- AES Is Great … But We Need A Fall-back: Meet ChaCha and Poly1305
- There’s never magic, but plenty of butterfly effects
- Cipher suites (from this handbook)
🔰 Use more secure ECDH Curve
Rationale
Also keep this in mind: secure implementations of the standard curves are theoretically possible, but very hard.
In my opinion, your main source of knowledge should be the SafeCurves web site, which reports security assessments of various specific curves.
For an SSL server certificate, an «elliptic curve» certificate will be used only with digital signatures (the `ECDSA` algorithm). NGINX provides a directive to specify a curve for `ECDHE` ciphers (`ssl_ecdh_curve`).
`x25519` is a more secure option (it also meets the SafeCurves requirements) but slightly less compatible. I think that to maximise interoperability with existing browsers and servers, you should stick to the `P-256` (`prime256v1`) and `P-384` (`secp384r1`) curves. Of course, there are tons of different opinions about them.
NSA Suite B says that the NSA uses curves `P-256` and `P-384` (in OpenSSL, they are designated as, respectively, `prime256v1` and `secp384r1`). There is nothing wrong with `P-521`, except that it is, in practice, useless. Arguably, `P-384` is also useless, because the more efficient `P-256` curve already provides security that cannot be broken through accumulation of computing power.
Bernstein and Lange believe that the NIST curves are not optimal and that there are better (more secure) curves that work just as fast, e.g. `x25519`.
The SafeCurves project says: `NIST P-224`, `NIST P-256` and `NIST P-384` are UNSAFE.
Of the curves described here, only `x25519` meets all SafeCurves requirements.
I think you can use `P-256` to minimise trouble. If you feel that your manhood is threatened by using a 256-bit curve where a 384-bit curve is available, then use `P-384`, but be aware that it will increase your computational and network costs.
If you use TLS 1.3, you should enable `prime256v1`. Without it, SSL Labs reports the `TLS_AES_128_GCM_SHA256 (0x1301)` signature as weak.
If you do not set `ssl_ecdh_curve`, NGINX will use its built-in defaults, e.g. Chrome will prefer `x25519`. This is not recommended, because you cannot control the default settings (which seem to be `P-256`) from NGINX.
Explicitly setting `ssl_ecdh_curve X25519:prime256v1:secp521r1:secp384r1;` decreases the Key Exchange score in SSL Labs.
Definitely do not use the `secp112r1`, `secp112r2`, `secp128r1`, `secp128r2`, `secp160k1`, `secp160r1`, `secp160r2` or `secp192k1` curves. They are too small for security applications according to NIST recommendations.
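Before putting curve names into `ssl_ecdh_curve`, you can check which named curves your local OpenSSL build actually supports (a quick sanity check, not from the original text):

```
# List all EC curves known to the local OpenSSL:
openssl ecparam -list_curves
```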
My recommendation:
Use only TLSv1.3 and TLSv1.2 and only strong ciphers with the following curves:
ssl_ecdh_curve X25519:secp521r1:secp384r1:prime256v1;
Example
Curves for TLS 1.2:
ssl_ecdh_curve secp521r1:secp384r1:prime256v1;
➡️ ssllabs score: 100%
```
# Alternative (this one doesn’t affect compatibility, by the way; it’s just a question of the preferred order).
# This setup downgrades the Key Exchange score but is recommended for TLS 1.2 + TLS 1.3:
ssl_ecdh_curve X25519:secp521r1:secp384r1:prime256v1;
```
External resources
- Elliptic Curves for Security [IETF]
- Standards for Efficient Cryptography Group
- SafeCurves: choosing safe curves for elliptic-curve cryptography
- A note on high-security general-purpose elliptic curves
- P-521 is pretty nice prime
- Safe ECC curves for HTTPS are coming sooner than you think
- Cryptographic Key Length Recommendations
- Testing for Weak SSL/TLS Ciphers, Insufficient Transport Layer Protection (OTG-CRYPST-001)
- Elliptic Curve performance: NIST vs Brainpool
- Which elliptic curve should I use?
- Elliptic Curve Cryptography for those who are afraid of maths [pdf]
- Security dangers of the NIST curves [pdf]
- How to design an elliptic-curve signature system
- Win10 Crypto Vulnerability: Cheating in Elliptic Curve Billiards 2
🔰 Use strong Key Exchange with Perfect Forward Secrecy
Rationale
These parameters determine how the OpenSSL library performs the Diffie-Hellman (DH) key exchange (DH requires some set-up parameters to begin with, which are generated with `openssl dhparam ...` and set via the `ssl_dhparam` directive). From a mathematical point of view, they include a field prime `p` and a generator `g`. A larger `p` will make it more difficult to find the common secret key `K`, protecting against passive attacks.
To use signature-based authentication, you need some kind of DH exchange (fixed or ephemeral/temporary) to exchange the session key. If you use `DHE` ciphers but do not specify these parameters, NGINX will use default Ephemeral Diffie-Hellman parameters to define how the DH key exchange is performed. In older versions, NGINX used a weak key (by default: 1024-bit) that gets lower scores.
You should always use Elliptic Curve Diffie-Hellman Ephemeral (`ECDHE`) and, if you want to retain support for older clients, also `DHE`. Due to increasing concern about pervasive surveillance, key exchanges that provide Forward Secrecy are recommended; see for example RFC 7525 — Forward Secrecy [IETF].
Make sure your OpenSSL library is updated to the latest available version and encourage your clients to also use updated software. Updated browsers discard low and vulnerable DH parameters (below 768/1024 bits).
For greater compatibility, but still with security in the key exchange, you should prefer the latter E (ephemeral) over the former E (elliptic curve). The recommended configuration is: `ECDHE` > `DHE` (with unique keys at least 2048 bits long) > `ECDH`. With this, if the initial handshake fails, another handshake will be initiated using `DHE`.
`DHE` is slower than `ECDHE`. If you are concerned about performance, prioritize `ECDHE-ECDSA` or `ECDHE-RSA` over `DHE`. OWASP estimates that a TLS handshake with `DHE` burdens the CPU by a factor of 2.4 compared to `ECDHE`.
Diffie-Hellman requires some set-up parameters to begin with. The parameters from `ssl_dhparam` (which are generated with `openssl dhparam ...`) define how OpenSSL performs the Diffie-Hellman (DH) key exchange. They include a field prime `p` and a generator `g`.
The point of being able to customize these parameters is to allow everyone to use their own. Most importantly, finding such prime numbers is really computationally intensive, so you cannot afford to do it for every connection; instead they are pre-calculated (set up on the HTTP server side). In the case of NGINX, we set them using the `ssl_dhparam` directive. However, using custom parameters will make the server non-compliant with FIPS requirements: the publication approves the use of specific safe-prime groups of domain parameters for the finite field DH and MQV schemes, in addition to the previously approved domain parameter sets. See also approved TLS groups for FFC key agreement (table 26, page 133) [NIST, pdf].
You can use custom parameters to avoid being affected by the Logjam attack (both the client and the server need to be vulnerable for the attack to succeed, because the server must agree to sign small `DHE_EXPORT` parameters and the client must accept them as valid `DHE` parameters).
Modern clients prefer `ECDHE` over the other variants, and if your NGINX honours this preference, the handshake will not use the DH params at all, since it will do an `ECDHE` key exchange rather than a `DHE` one. Thus, if no plain `DH/DHE` ciphers are configured on your server, but only elliptic-curve DH (e.g. `ECDHE`), you don’t need to set your own `ssl_dhparam` directive. Enabling `DHE` requires us to take care of our DH primes (a.k.a. `dhparams`) and to trust in `DHE` — in newer versions, NGINX does this for us.
Elliptic curve Diffie-Hellman is a modified Diffie-Hellman exchange which uses elliptic curve cryptography instead of the traditional RSA-style large primes. So, while I’m not sure what parameters it may need (if any), I don’t think it needs the kind you’re generating (`ECDH` is based on curves, not primes, so the traditional DH params won’t do you any good).
Cipher suites using `DHE` key exchange in OpenSSL require `tmp_DH` parameters, which the `ssl_dhparam` directive provides. The same is true for `DH_anon` key exchange, but in practice nobody uses those. The OpenSSL wiki page for Diffie-Hellman Parameters says: to use perfect forward secrecy cipher suites, you must set up Diffie-Hellman parameters (on the server side). Look also at `SSL_CTX_set_tmp_dh_callback`.
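Once `ssl_dhparam` is in place, you can verify that a `DHE` suite can actually be negotiated by forcing one from the client side (a sketch; example.com is a placeholder, and `-tls1_2` matters because TLS 1.3 suites ignore `-cipher`):

```
# The handshake fails if the server offers no DHE suite:
openssl s_client -connect example.com:443 -tls1_2 -cipher 'DHE' < /dev/null
```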
If you use `ECDH/ECDHE` key exchange, please see the Use more secure ECDH Curve rule.
In older versions of OpenSSL, if no key size was specified, the default key size was 512/1024 bits — vulnerable and breakable. For the best security configuration, use your own DH group (min. 2048-bit) or use the known-safe pre-defined DH groups (recommended): `ffdhe2048`, `ffdhe3072` or `ffdhe4096`, recommended by the IETF in RFC 7919 — Supported Groups Registry [IETF] and compliant with NIST and FIPS. They are audited and may be more resistant to attacks than randomly generated ones. Examples of pre-defined groups:
- ffdhe2048
- ffdhe4096
A 2048-bit modulus is generally expected to be safe and is already very far into the «cannot break it» zone. However, years ago people expected 1024-bit to be safe, so if you are after long-term resistance, you would go up to 4096-bit (for both RSA keys and DH parameters). It’s also important if you want to get 100% on the Key Exchange category of the SSL Labs test.
TLS clients should also reject static Diffie-Hellman — it’s described in this draft.
You should remember that a 4096-bit modulus will make DH computations slower and won’t actually improve security.
There is a good explanation of the recommended DH parameter size:

Current recommendations from various bodies (including NIST) call for a 2048-bit modulus for DH. Known DH-breaking algorithms would have a cost so ludicrously high that they could not be run to completion with known Earth-based technology. See this site for pointers on that subject.
You don’t want to overdo the size because the computational usage cost rises relatively sharply with prime size (somewhere between quadratic and cubic, depending on some implementation details), but a 2048-bit DH ought to be fine (a basic low-end PC can do several hundreds of 2048-bit DH per second).
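To confirm the size of parameters you have already deployed, you can inspect the file directly (a quick check; the path is an example):

```
# Look for "DH Parameters: (2048 bit)" in the first line of output:
openssl dhparam -in /etc/nginx/ssl/dhparam_2048.pem -text -noout | head -n 1
```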
Look also at this answer by Matt Palmer:

Indeed, characterising 2048-bit DH parameters as «weak as hell» is quite misleading. There are no known feasible cryptographic attacks against arbitrary strong 2048-bit DH groups. To protect against future disclosure of a session key due to breaking DH, sure, you want your DH parameters to be as long as is practical, but since 1024-bit DH is only just getting feasible, 2048 bits should be OK for most purposes for a while yet.
Take a look at this interesting answer from the Guide to Deploying Diffie-Hellman for TLS:

2. Deploy (Ephemeral) Elliptic-Curve Diffie-Hellman (ECDHE). Elliptic-Curve Diffie-Hellman (ECDH) key exchange avoids all known feasible cryptanalytic attacks, and modern web browsers now prefer ECDHE over the original, finite field, Diffie-Hellman. The discrete log algorithms we used to attack standard Diffie-Hellman groups do not gain as strong of an advantage from precomputation, and individual servers do not need to generate unique elliptic curves.
My recommendation:
If you use only TLS 1.3, `ssl_dhparam` is not required (not used). Likewise, if you use only `ECDHE/ECDH`, `ssl_dhparam` is not required (not used). If you use `DHE/DH`, `ssl_dhparam` with DH parameters is required (min. 2048-bit). By default, no parameters are set, and therefore `DHE` ciphers will not be used.
Example
To set DH params:
```
# curl https://ssl-config.mozilla.org/ffdhe2048.txt --output ffdhe2048.pem
ssl_dhparam ffdhe2048.pem;
```
To generate DH params:
```
# To generate DH parameters:
openssl dhparam -out /etc/nginx/ssl/dhparam_4096.pem 4096

# To produce "DSA-like" DH parameters:
openssl dhparam -dsaparam -out /etc/nginx/ssl/dhparam_4096.pem 4096

# Use the pre-defined DH groups:
curl https://ssl-config.mozilla.org/ffdhe4096.txt > /etc/nginx/ssl/ffdhe4096.pem

# NGINX configuration only for DH/DHE:
ssl_dhparam /etc/nginx/ssl/dhparam_4096.pem;
```
➡️ ssllabs score: 100%
```
# To generate DH parameters:
openssl dhparam -out /etc/nginx/ssl/dhparam_2048.pem 2048

# To produce "DSA-like" DH parameters:
openssl dhparam -dsaparam -out /etc/nginx/ssl/dhparam_2048.pem 2048

# Use the pre-defined DH groups:
curl https://ssl-config.mozilla.org/ffdhe2048.txt > /etc/nginx/ssl/ffdhe2048.pem

# NGINX configuration only for DH/DHE:
ssl_dhparam /etc/nginx/ssl/dhparam_2048.pem;
```
➡️ ssllabs score: 90%
External resources
- Guide to Deploying Diffie-Hellman for TLS
- Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice [pdf]
- Weak Diffie-Hellman and the Logjam Attack
- Logjam: the latest TLS vulnerability explained
- Pre-defined DHE groups
- Why is Mozilla recommending predefined DHE groups?
- Instructs OpenSSL to produce «DSA-like» DH parameters
- OpenSSL generate different types of self signed certificate
- Public Diffie-Hellman Parameter Service/Tool
- Vincent Bernat’s SSL/TLS & Perfect Forward Secrecy
- What’s the purpose of DH Parameters?
- RSA and ECDSA performance
- SSL/TLS: How to choose your cipher suite
- Diffie-Hellman and its TLS/SSL usage
- Google Plans to Deprecate DHE Cipher Suites
- Downgrade Attacks
- Diffie-Hellman key exchange (from this handbook)
🔰 Prevent Replay Attacks on Zero Round-Trip Time
Rationale
This rule is only important for TLS 1.3. By default, enabling TLS 1.3 will not enable 0-RTT support. In any case, you should be fully aware of all the potential exposure factors and the risks related to using this option.
0-RTT handshakes are part of the replacement for TLS Session Resumption and were inspired by the QUIC protocol.
0-RTT creates a significant security risk. With 0-RTT, a threat actor can intercept an encrypted client message and resend it to the server, tricking the server into improperly extending trust to the threat actor and thus potentially granting the threat actor access to sensitive data.
On the other hand, enabling 0-RTT (Zero Round Trip Time Resumption) yields a significant improvement in efficiency and connection setup times. TLS 1.3 has a faster handshake that completes in 1-RTT. Additionally, it has a particular session resumption mode where, under certain conditions, it is possible to send data to the server in the first flight (0-RTT).
For example, Cloudflare only supports 0-RTT for GET requests with no query parameters, in an attempt to limit the attack surface. Moreover, to help the origin identify connection resumption attempts, they relay this information by adding an extra header to 0-RTT requests. This header uniquely identifies the request, so if one gets repeated, the origin will know it’s a replay attack (the application needs to track the values received and reject duplicates on non-idempotent endpoints).
To protect against such attacks at the application layer, the `$ssl_early_data` variable should be used. You’ll also need to ensure that the `Early-Data` header is passed to your application. `$ssl_early_data` returns 1 if TLS 1.3 early data is used and the handshake is not complete.
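RFC 8470 defines the 425 (Too Early) status code for exactly this case. A minimal sketch of rejecting early data outright at the edge for a state-changing endpoint (the location and upstream names here are hypothetical, not from the original text):

```
location /checkout {

  # Refuse requests sent as TLS 1.3 early data; compliant clients
  # will retry after the handshake completes (RFC 8470, 425 Too Early):
  if ($ssl_early_data) {
    return 425;
  }

  proxy_pass http://backend;

}
```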
However, as part of the upgrade, you should disable 0-RTT until you can audit your application for this class of vulnerability.
In order to send early-data, client and server must support PSK exchange mode [IETF] (session cookies).
In addition, I would like to recommend this great discussion about TLS 1.3 and 0-RTT.
If you are unsure to enable 0-RTT, look what Cloudflare say about it:
Generally speaking, 0-RTT is safe for most web sites and applications. If your web application does strange things and you’re concerned about its replay safety, consider not using 0-RTT until you can be certain that there are no negative effects. […] TLS 1.3 is a big step forward for web performance and security. By combining TLS 1.3 with 0-RTT, the performance gains are even more dramatic.
Example
Test 0-RTT with OpenSSL:
```
# 1)
_host="example.com"

cat > req.in << __EOF__
HEAD / HTTP/1.1
Host: $_host
Connection: close
__EOF__

# or:
# echo -e "GET / HTTP/1.1\r\nHost: $_host\r\nConnection: close\r\n\r\n" > req.in

openssl s_client -connect ${_host}:443 -tls1_3 -sess_out session.pem -ign_eof < req.in
openssl s_client -connect ${_host}:443 -tls1_3 -sess_in session.pem -early_data req.in

# 2)
python -m sslyze --early_data "$_host"
```
Enable 0-RTT with the `$ssl_early_data` variable:

```
server {

  ...

  ssl_protocols TLSv1.2 TLSv1.3;
  # To enable 0-RTT (TLS 1.3):
  ssl_early_data on;

  location / {

    proxy_pass http://backend_x20;
    # It protects against such attacks at the application layer:
    proxy_set_header Early-Data $ssl_early_data;

  }

  ...

}
```
External resources
- Security Review of TLS1.3 0-RTT
- Introducing Zero Round Trip Time Resumption (0-RTT)
- What Application Developers Need To Know About TLS Early Data (0RTT)
- Zero round trip time resumption (0-RTT)
- Session Resumption Protocols and Efficient Forward Security for TLS 1.3 0-RTT
- Replay Attacks on Zero Round-Trip Time: The Case of the TLS 1.3 Handshake Candidates [pdf]
- 0-RTT and Anti-Replay [IETF]
- Using Early Data in HTTP (2017) [IETF]
- Using Early Data in HTTP (2018) [IETF]
- 0-RTT Handshakes
🔰 Defend against the BEAST attack
Rationale
Generally, the BEAST attack relies on a weakness in the way `CBC` mode is used in SSL/TLS (TLSv1.0 and earlier).
More specifically, to successfully perform the BEAST attack, some conditions need to be met:

- a vulnerable version of SSL using a block cipher (`CBC` in particular) must be used
- JavaScript or Java applet injection — it should be in the same origin as the web site
- data sniffing of the network connection must be possible
To prevent BEAST attacks, you should enable server-side protection, which causes the server ciphers to be preferred over the client ciphers, and completely exclude TLS 1.0 from your protocol stack.
When `ssl_prefer_server_ciphers` is set to on, the web server owner can control which ciphers are available.
The reason this control was preferred is the old and insecure ciphers that were available in SSL, TLS v1.0 and TLS v1.1: when the server supports old TLS versions and `ssl_prefer_server_ciphers` is off, an adversary can interfere with the handshake and force the connection to use weak ciphers, thereby allowing decryption of the connection.
The preferred setting in modern setups is `ssl_prefer_server_ciphers off`, because then the client can choose its preferred encryption method based on its hardware capabilities. As such, we let the client choose the most performant cipher suite for its hardware configuration.
Example
```
# In TLSv1.0 and TLSv1.1:
ssl_prefer_server_ciphers on;

# In TLSv1.2 and TLSv1.3:
ssl_prefer_server_ciphers off;
```
External resources
- Here Come The ⊕ Ninjas [pdf]
- An Illustrated Guide to the BEAST Attack
- How the BEAST Attack Works
- Is BEAST still a threat?
- SSL/TLS attacks: Part 1 – BEAST Attack
- Beat the BEAST with TLS 1.1/1.2 and More [not found]
- Duong and Rizzo’s paper on the BEAST attack) [pdf]
- ImperialViolet — Real World Crypto 2013
- Use only strong ciphers — Hardening — P1 (from this handbook)
🔰 Mitigation of CRIME/BREACH attacks
Rationale
Disable HTTP compression, or compress only content that contains zero sensitive data. Furthermore, you shouldn’t use HTTP compression on private responses when using TLS.
By default, the `gzip` compression modules are installed in NGINX but not enabled.
You should probably never use TLS compression. Some user agents (at least Chrome) will disable it anyway. Disabling SSL/TLS compression stops the attack very effectively (libraries like OpenSSL can be built with compression disabled using the `no-comp` configuration option). A deployment of HTTP/2 over TLS 1.2 must disable TLS compression (please see RFC 7540 — Use of TLS Features [IETF]).
CRIME exploits SSL/TLS compression which is disabled since NGINX 1.3.2. BREACH exploits only HTTP compression.
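If you want to double-check that TLS-level compression is really off on a given endpoint, `openssl s_client` reports it in its handshake summary (a quick check, not from the original text; example.com is a placeholder):

```
# The "Compression:" line should report NONE:
openssl s_client -connect example.com:443 < /dev/null 2> /dev/null | grep -i "compression"
```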
Some attacks are possible (e.g. the real BREACH attack is complicated and only applies if specific information is returned in HTTP responses) because of gzip (HTTP compression, not TLS compression) being enabled on SSL requests.
Compression is not the only requirement for the attack, so using it does not mean that the attack will succeed. Generally, you should consider whether an accidental performance drop on HTTPS sites is better than HTTPS sites being accidentally vulnerable.
In most cases, the best action is moving to TLS 1.3 or disabling gzip for SSL (on TLS versions older than 1.3), but some resources explain that this is not a decent solution. Mitigation belongs mostly at the application level; however, a common mitigation is to add data of random length to any responses containing sensitive data (this is the default behaviour of TLSv1.3 — 5.4. Record Padding [IETF]). For more information, look at nginx-length-hiding-filter-module. This filter module appends a randomly generated HTML comment to the end of the response body to hide the correct response length and make it difficult for attackers to guess a secure token.
I would prioritise security over performance, but compression can (I think) be okay for publicly available static content like CSS or JS, and for HTML content with zero sensitive info (like an «About Us» page).
Example
```
# Disable dynamic HTTP compression:
gzip off;

# Enable dynamic HTTP compression for a specific location context:
location ^~ /assets/ {

  gzip on;

  ...

}
```
External resources
- Is HTTP compression safe?
- HTTP compression continues to put encrypted communications at risk
- SSL/TLS attacks: Part 2 – CRIME Attack
- How BREACH works (as I understand it)
- Defending against the BREACH Attack
- To avoid BREACH, can we use gzip on non-token responses?
- Brotli compression algorithm and BREACH attack
- Don’t Worry About BREACH
- The current state of the BREACH attack
- Module ngx_http_gzip_static_module
- Offline Compression with Nginx
- ImperialViolet — Real World Crypto 2013
🔰 Enable HTTP Strict Transport Security
Rationale
Generally HSTS is a way for websites to tell browsers that the connection should only ever be encrypted. This prevents MITM attacks, downgrade attacks, sending plain text cookies and session ids. The correct implementation of HSTS is an additional security mechanism in accordance with the principle of multilayer security (defense in depth).
This header is great for performance because it instructs the browser to do the HTTP to HTTPS redirection client-side, without ever touching the network.
The header indicates for how long a browser should unconditionally refuse to take part in unsecured HTTP connection for a specific domain.
When a browser knows that a domain has enabled HSTS, it does two things:
- always uses an `https://` connection, even when clicking on an `http://` link or after typing a domain into the location bar without specifying a protocol
- removes the ability for users to click through warnings about invalid certificates
The HSTS header needs to be set inside the HTTP block with the `ssl` listen statement, or you risk sending `Strict-Transport-Security` headers over HTTP sites you may also have configured on the server. Additionally, you should use `return 301` in the HTTP server block to redirect to HTTPS.
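For completeness, a minimal sketch of such a redirect block (example.com is a placeholder, not from the original text):

```
server {

  listen 80;
  server_name example.com;

  # Redirect all plain-HTTP requests to HTTPS; do not send HSTS here:
  return 301 https://example.com$request_uri;

}
```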
Ideally, you should always use `includeSubDomains` with HSTS. This will provide robust security for the main hostname as well as all subdomains. The issue here is that (without `includeSubDomains`) a man-in-the-middle attacker can create arbitrary subdomains and use them to inject cookies into your application. In some cases, even leakage might occur. The drawback of `includeSubDomains`, of course, is that you will have to deploy all subdomains over SSL.
There are a few simple best practices for HSTS (from The Importance of a Proper HTTP Strict Transport Security Implementation on Your Web Server):
- The strongest protection is to ensure that all requested resources use only TLS with a well-formed HSTS header. Qualys recommends providing an HSTS header on all HTTPS resources in the target domain
- It is advisable to set the `max-age` directive’s value to be greater than `10368000` seconds (120 days) and ideally to `31536000` (one year). Websites should aim to ramp up the `max-age` value to ensure heightened security for a long duration for the current domain and/or subdomains
- RFC 6797 — The Need for includeSubDomains [IETF] advocates that a web application must aim to add the `includeSubDomains` directive in the policy definition whenever possible. The directive’s presence ensures the HSTS policy is applied to the domain of the issuing host and all of its subdomains, e.g. `example.com` and `www.example.com`
- The application should never send an HSTS header over a plaintext HTTP header, as doing so makes the connection vulnerable to SSL stripping attacks
- It is not recommended to provide an HSTS policy via the `http-equiv` attribute of a meta tag. According to RFC 6797 [IETF], user agents don’t heed the `http-equiv="Strict-Transport-Security"` attribute on `<meta>` elements in the received content
To meet the HSTS preload list standard, a root domain needs to return a `Strict-Transport-Security` header that includes both the `includeSubDomains` and `preload` directives and has a minimum `max-age` of one year. Your site must also serve a valid SSL certificate on the root domain and all subdomains, as well as redirect all HTTP requests to HTTPS on the same host.
You had better be pretty sure that your website is indeed all-HTTPS before you turn this on, because HSTS adds complexity to your rollback strategy. Google recommends enabling HSTS this way:
- Roll out your HTTPS pages without HSTS first
- Start sending HSTS headers with a short `max-age`. Monitor your traffic both from users and other clients, and also dependents’ performance, such as ads
- Slowly increase the HSTS `max-age`
- If HSTS doesn’t affect your users and search engines negatively, you can, if you wish, ask for your site to be added to the HSTS preload list used by most major browsers (see the sketch of the ramp-up below)
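A sketch of that ramp-up (the intermediate values are illustrative, not from the original text):

```
# Stage 1 — a short max-age (5 minutes) while you verify nothing breaks:
add_header Strict-Transport-Security "max-age=300" always;

# Stage 2 — a week, then a month, while monitoring traffic:
add_header Strict-Transport-Security "max-age=604800" always;

# Stage 3 — the final long-lived policy:
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains" always;
```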
My recommendation:
Set the `max-age` to a big value like `31536000` (12 months) or `63072000` (24 months), with the `includeSubdomains` parameter.
Example
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains" always;
➡️ ssllabs score: A+
External resources
- OWASP Secure Headers Project — HSTS
- Strict-Transport-Security
- Security HTTP Headers — Strict-Transport-Security
- HTTP Strict Transport Security
- HTTP Strict Transport Security Cheat Sheet
- HSTS Cheat Sheet
- HSTS Preload and Subdomains
- Check HSTS preload status and eligibility
- HTTP Strict Transport Security (HSTS) and NGINX
- Is HSTS as a proper substitute for HTTP-to-HTTPS redirects?
- How to configure HSTS on www and other subdomains
- HSTS: Is includeSubDomains on main domain sufficient?
- The HSTS preload list eligibility
- HSTS Deployment Recommendations
- How does HSTS handle mixed content?
- Broadening HSTS to secure more of the Web
- The Road To HSTS
- Set the HTTP headers with add_header and proxy_*_header directives properly — Base Rules — P1 (from this handbook)
🔰 Reduce XSS risks (Content-Security-Policy)
Rationale
CSP reduces the risk and impact of a wide range of attacks in modern browsers, including cross-site scripting and other cross-site injections (Cross-Site Scripting vulnerabilities allow an attacker to inject code into the displayed page elements, which the browser then executes when rendering the page — in particular, unauthorized scripts executed in the victim’s browser). It is a good defence-in-depth measure that makes exploitation of an accidental lapse less likely.
The inclusion of CSP policies significantly impedes successful XSS attacks, UI Redressing (Clickjacking), malicious use of frames or CSS injections.
Whitelisting known-good resource origins, refusing to execute potentially dangerous inline scripts, and banning the use of eval are all effective mechanisms for mitigating cross-site scripting attacks.
The default policy that a header starts from is: block everything. By modifying the CSP value, the administrator/programmer loosens restrictions for specific groups of resources (e.g. separately for scripts, images, etc.).
You should approach this very individually and never set CSP sample values found on the Internet or anywhere else. Blindly deploying «standard/recommended» versions of the CSP header will break most web apps. Be aware that an incorrectly configured Content Security Policy could expose an application to client-side threats including Cross-Site Scripting, Cross-Frame Scripting and Cross-Site Request Forgery.
Before enabling this header, you should discuss the CSP parameters with developers and application architects. They are probably going to have to update the web application to remove any inline scripts and styles, and make some additional modifications there (implementation of validation mechanisms for user-supplied content, use of allow-lists of characters that users can enter into individual fields of the application, or encoding of user data passed by the application to the browser).
Strict policies will significantly increase security, and higher code quality will reduce the overall number of errors. CSP can never replace secure code — the new restrictions help reduce the effects of attacks (such as XSS), but they are not mechanisms to prevent them!
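Before enforcing a policy, it can be deployed in report-only mode, so that violations are reported by browsers without anything being blocked (a sketch, not from the original text; the report endpoint is a placeholder):

```
# Violations are reported to the given endpoint but nothing is blocked yet:
add_header Content-Security-Policy-Report-Only "default-src 'self'; report-uri /csp-violations;" always;
```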
You should always validate your CSP before implementing it:
- CSP Evaluator
- Content Security Policy (CSP) Validator
To generate a policy (remember, however, that these types of tools may become outdated or have errors):
- https://report-uri.com/home/generate
Example
```
# This policy allows images, scripts, AJAX, and CSS from the same origin, and does not allow any other resources to load:
add_header Content-Security-Policy "default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';" always;
```
External resources
- OWASP Secure Headers Project — Content-Security-Policy
- Content Security Policy (CSP) Quick Reference Guide
- Content Security Policy Cheat Sheet – OWASP
- Content Security Policy – OWASP
- Content Security Policy — An Introduction — Scott Helme
- CSP Cheat Sheet — Scott Helme
- Security HTTP Headers — Content-Security-Policy
- CSP Evaluator
- Content Security Policy (CSP) Validator
- Can I Use CSP
- CSP Is Dead, Long Live CSP!
- Set the HTTP headers with add_header and proxy_*_header directives properly — Base Rules — P1 (from this handbook)
🔰 Control the behaviour of the Referer header (Referrer-Policy)
Rationale
Referral policy deals with what information (related to the url) the browser ships to a server to retrieve an external resource.
Basically, this is a privacy enhancement: it hides from the owner of a clicked link’s domain the fact that the user came from your website.
I think the most secure value is `no-referrer`, which specifies that no referrer information is to be sent along with requests made from a particular request client to any origin. The header will be omitted entirely.
The use of `no-referrer` has its advantages, because it allows you to hide the HTTP `Referer` header, which increases the online privacy and security of your users. On the other hand, it mainly affects analytics (in theory, it should not have any SEO impact), because `no-referrer` hides exactly that kind of information.
Mozilla has a good table explaining how each of referrer policy options works. It comes from Mozilla’s reference documentation about Referer Policy.
Example
```
# This policy does not send information about the referring site after clicking the link:
add_header Referrer-Policy "no-referrer";
```
External resources
- OWASP Secure Headers Project — Referrer-Policy
- A new security header: Referrer Policy
- Security HTTP Headers — Referrer-Policy
- What you need to know about Referrer Policy
- Set the HTTP headers with add_header and proxy_*_header directives properly — Base Rules — P1 (from this handbook)
🔰 Provide clickjacking protection (X-Frame-Options)
Rationale
Helps to protect your visitors against clickjacking attacks by declaring a policy whether your application may be embedded on other (external) pages using frames.
It is recommended that you use the `X-Frame-Options` header on pages which should not be allowed to be rendered inside a frame.
This header allows 3 parameters, but in my opinion you should consider only two: the `deny` parameter, to disallow embedding the resource in general, or the `sameorigin` parameter, to allow embedding the resource on the same host/origin.
It has a lower priority than CSP but in my opinion it is worth using as a fallback.
Example
```
# Only pages from the same domain can "frame" this URL:
add_header X-Frame-Options "SAMEORIGIN" always;
```
External resources
- OWASP Secure Headers Project — X-Frame-Options
- HTTP Header Field X-Frame-Options [IETF]
- Clickjacking Defense Cheat Sheet
- Security HTTP Headers — X-Frame-Options
- X-Frame-Options — Scott Helme
- Set the HTTP headers with add_header and proxy_*_header directives properly — Base Rules — P1 (from this handbook)
🔰 Prevent some categories of XSS attacks (X-XSS-Protection)
Rationale
Enable the cross-site scripting (XSS) filter built into modern web browsers.
It’s usually enabled by default anyway, so the role of this header is to re-enable the filter for this particular website if it was disabled by the user.
I think you can set this header without consulting its value with web application architects, but all well-written apps have to emit the header `X-XSS-Protection: 0` and just forget about this feature. If you want the extra security that better user agents can provide, use a strict `Content-Security-Policy` header instead. There is an exact answer by Mikko Rantalainen.
Example
add_header X-XSS-Protection "1; mode=block" always;
External resources
- OWASP Secure Headers Project — X-XSS-Protection
- XSS (Cross Site Scripting) Prevention Cheat Sheet
- DOM based XSS Prevention Cheat Sheet
- X-XSS-Protection HTTP Header
- Security HTTP Headers — X-XSS-Protection
- Set the HTTP headers with add_header and proxy_*_header directives properly — Base Rules — P1 (from this handbook)
🔰 Prevent Sniff Mimetype middleware (X-Content-Type-Options)
Rationale
It prevents the browser from doing MIME-type sniffing.
Setting this header will prevent the browser from interpreting files as something other than what is declared by the content type in the HTTP headers.
Example
```
# Disallow content sniffing:
add_header X-Content-Type-Options "nosniff" always;
```
External resources
- OWASP Secure Headers Project — X-Content-Type-Options
- X-Content-Type-Options HTTP Header
- Security HTTP Headers — X-Content-Type-Options
- X-Content-Type-Options — Scott Helme
- Set the HTTP headers with add_header and proxy_*_header directives properly — Base Rules — P1 (from this handbook)
🔰 Deny the use of browser features (Feature-Policy)
Rationale
This header protects your site from third parties using APIs that have security and privacy implications, and also from your own team adding outdated APIs or poorly optimised images.
Example
add_header Feature-Policy "geolocation 'none'; midi 'none'; notifications 'none'; push 'none'; sync-xhr 'none'; microphone 'none'; camera 'none'; magnetometer 'none'; gyroscope 'none'; speaker 'none'; vibrate 'none'; fullscreen 'none'; payment 'none'; usb 'none';";
External resources
- Feature Policy Explainer
- Policy Controlled Features
- Security HTTP Headers — Feature-Policy
- Feature policy playground
- Set the HTTP headers with add_header and proxy_*_header directives properly — Base Rules — P1 (from this handbook)
🔰 Reject unsafe HTTP methods
Rationale
An ordinary web server supports the `GET`, `HEAD` and `POST` methods to retrieve static and dynamic content. Other methods (e.g. `OPTIONS`, `TRACE`) should not be supported on public web servers, as they increase the attack surface.
Some of these methods are typically dangerous to expose, and some are just extraneous in a production environment, which could be considered extra attack surface. Still, it is worth shutting those off too, since you probably won’t need them.
Some APIs (e.g. RESTful APIs) also use other methods. In addition to the following protection, application architects should also verify incoming requests.
Support for the `TRACE` method can allow a Cross-Site Tracing attack, which can facilitate capturing the session ID of another application user. In addition, this method can be used to try to identify additional information about the environment in which the application operates (e.g. the existence of cache servers on the path to the application).
Support for the `OPTIONS` method is not a direct threat, but it is a source of additional information for the attacker that can facilitate an effective attack.
Support for the `HEAD` method is also risky (really!) — it is not considered dangerous, but it can be used to attack a web application by mimicking the `GET` request. Secondly, usage of `HEAD` can speed up the attack process by limiting the volume of data sent from the server. If authorization mechanisms are based on `GET` and `POST`, the `HEAD` method may allow bypassing these protections.
Keep in mind, though, that `HEAD` requests are commonly used by proxies or CDNs to efficiently determine whether a page has changed without downloading the entire body (this is useful for retrieving meta-information written in response headers). What’s more, if you disabled it, you’d just increase your throughput cost.
It is not recommended to use `if` statements to block unsafe HTTP methods; instead, you can use the `limit_except` directive (it should be faster than regexp evaluation), but remember that it has limited use: inside `location` only. I think the use of regular expressions is a bit more flexible in this case.
Before choosing how to configure either method, note this incredible explanation of the difference between the 401, 403 and 405 HTTP response codes (with an example that combines the 401, 403 and 405 responses and should clarify their precedence in a typical configuration). There is a brief description of the differences:
- 0: A request comes in…
- 1: `405 Method Not Allowed` — the server does not allow that method on that URI
- 2: `401 Unauthorized` — the user is not authenticated
- 3: `403 Forbidden` — the accessing client is not authorized to make that request
In my opinion, if an HTTP resource is not able to handle a request with the given HTTP method, it should send an `Allow` header listing the allowed HTTP methods. For this, you may use `add_header`, but remember the potential problems.
Example
Recommended configuration:
```
# If we are in the server context, it’s good to use a construction like this:
add_header Allow "GET, HEAD, POST" always;

if ($request_method !~ ^(GET|HEAD|POST)$) {

  # You can also use 'add_header' inside the 'if' context:
  # add_header Allow "GET, HEAD, POST" always;
  return 405;

}
```
Alternative configuration (only inside the `location` context):

```
# Note: allowing the GET method makes the HEAD method also allowed.
location /api {

  limit_except GET POST {

    allow 192.168.1.0/24;
    deny all;    # always returns a 403 error code

    # or:
    # auth_basic "Restricted access";
    # auth_basic_user_file /etc/nginx/htpasswd;

    ...

  }

}
```
But never do anything like this (it is highly not recommended!) with mixed `allow/deny` and `return` directives:

```
location /api {

  limit_except GET POST {

    allow 192.168.1.0/24;
    # This is only an example (return is not allowed inside limit_except);
    # if it worked, all clients (also from 192.168.1.0/24) might get a 405 error code:
    return 405;

    ...

  }

}
```
External resources
- Hypertext Transfer Protocol (HTTP) Method Registry [IANA]
- Vulnerability name: Unsafe HTTP methods
- Cross Site Tracing
- Cross-Site Tracing (XST): The misunderstood vulnerability
- Penetration Testing Of A Web Application Using Dangerous HTTP Methods [pdf]
- Blocking/allowing IP addresses (from this handbook)
- allow and deny (from this handbook)
- Set the HTTP headers with add_header and proxy_*_header directives properly — Base Rules — P1 (from this handbook)
🔰 Prevent caching of sensitive data
Rationale
This policy should be implemented by the application architect, however, I know from experience that this does not always happen.
Don’t cache or persist sensitive data. As browsers have different default behaviour for caching HTTPS content, pages containing sensitive information should include a `Cache-Control` header to ensure that the contents are not cached.
One option is to add anti-caching headers to the relevant HTTP/1.1 and HTTP/2 responses, e.g. `Cache-Control: no-cache, no-store` and `Expires: 0`.
To cover various browser implementations, the full set of headers to prevent content from being cached should be:

1. `Cache-Control: no-cache, no-store, private, must-revalidate, max-age=0, no-transform`
2. `Pragma: no-cache`
3. `Expires: 0`
Example
```
location /api {

  expires 0;
  add_header Cache-Control "no-cache, no-store";

}
```
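A sketch covering the full set of anti-caching headers listed above (the `/api` location comes from the example; adjust to your own paths):

```
location /api {

  # Sends "Expires" in the past and "Cache-Control: no-cache":
  expires 0;
  add_header Cache-Control "no-cache, no-store, private, must-revalidate, max-age=0, no-transform" always;
  # For older HTTP/1.0-era clients and proxies:
  add_header Pragma "no-cache" always;

}
```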
External resources
- RFC 2616 — HTTP/1.1: Standards Track [IETF]
- RFC 7234 — HTTP/1.1: Caching [IETF]
- HTTP Cache Headers — A Complete Guide
- Caching best practices & max-age gotchas
- Increasing Application Performance with HTTP Cache Headers
- HTTP Caching
- Set the HTTP headers with add_header and proxy_*_header directives properly — Base Rules — P1 (from this handbook)
🔰 Limit concurrent connections
Rationale
NGINX provides basic, simple-to-enable protection against denial-of-service (DoS) attacks. By default, there are no limits on the number of active connections a single user can have.
In my opinion, it is also good to cut off redundant/unnecessary connections globally (in the `http` context), but be careful and monitor your access and error logs. You can also set this in a per-`location` match context, i.e. set it for search pages, online user displays, member lists, etc.
You should limit the number of connections that can be opened by a single client IP address, again to a value appropriate for real users. Most often, excess traffic is generated by bots meant to overwhelm the server, so its rate is much higher than a human user can generate. Connection limiting must be active for a maximum session limit to be enforced.
However, note that while NGINX is a key element of Cloudflare-style protection, it’s not enough to set NGINX up on your webserver and hope it will protect you.
IP connection limits will help to a certain degree, though a large enough layer-7 DDoS attack can still overwhelm your server. For me, the first line of defense should be a hardware firewall (though it is not enough on its own) or DDoS mitigation devices with a stateless protection mechanism that can handle millions of connection attempts (providing deep inspection to filter out bad traffic and let good traffic through) without requiring connection-table entries or exhausting other system resources.
In particular, it is a good idea to enable mitigation on the network provider’s side and to route the traffic through a layer-7 DDoS mitigation filter provided by an external company before it reaches you. I think this is the best solution.
Example
```
http {

  limit_conn_zone $binary_remote_addr zone=slimit:10m;

  # Set globally:
  limit_conn slimit 10;

  ...

  server {

    # Or in the server context:
    limit_conn slimit 10;

    ...

  }

}
```
External resources
- Module ngx_http_limit_conn_module
- Mitigating DDoS Attacks with NGINX and NGINX Plus
- What is a DDoS Attack?
- Nginx-Lua-Anti-DDoS
- Extend NGINX with Lua — DDOS Mitigation using Cookie validation
- Blocking/allowing IP addresses (from this handbook)
- allow and deny (from this handbook)
- ngx_http_geoip_module (from this handbook)
- Control Buffer Overflow attacks — Hardening — P2 (from this handbook)
- Mitigating Slow HTTP DoS attacks (Closing Slow Connections) — Hardening — P2 (from this handbook)
- Use limit_conn to improve limiting the download speed — Performance — P3 (from this handbook)
🔰 Control Buffer Overflow attacks
Rationale
Buffer overflow attacks are made possible by writing data to a buffer, exceeding that buffer’s boundary and overwriting memory fragments of a process. To prevent this in NGINX, we can set buffer size limitations for all clients.
Large `POST` requests can effectively lead to a DoS attack if the entire server memory is consumed. Allowing large files to be uploaded to the server can make it easier for an attacker to exhaust system resources and successfully perform a denial of service.
Appropriate values depend on your server memory and how much traffic you have. Long ago I found an interesting formula:

`MAX_MEMORY = client_body_buffer_size x CONCURRENT_TRAFFIC - OS_RAM - FS_CACHE`
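As a rough, purely illustrative calculation (hypothetical numbers, not from the original text): with `client_body_buffer_size 16k` and 10,000 concurrent requests all buffering bodies at once, the buffers alone could claim about 16 KB × 10,000 ≈ 160 MB — on top of what the OS and filesystem cache already need, which is why the formula subtracts `OS_RAM` and `FS_CACHE` from what you can afford.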
I think the key is to monitor everything (memory/CPU/traffic) and adjust the settings according to your usage: start small, of course, then increase until everything works well.
In my opinion, using a smaller `client_body_buffer_size` (a little bigger than 10k but not much more) is definitely better, since a bigger buffer could ease DoS attack vectors by allocating more memory.
Tip: If a request body is larger than `client_body_buffer_size`, it’s written to disk and not available in memory, hence no `$request_body`. Additionally, setting `client_body_buffer_size` too high may affect the log file size (if you log `$request_body`).
Example
client_body_buffer_size 16k;        # default: 8k (32-bit) | 16k (64-bit)
client_header_buffer_size 1k;       # default: 1k
client_max_body_size 100k;          # default: 1m
large_client_header_buffers 2 1k;   # default: 4 8k
External resources
- Module ngx_http_core_module
- SCG WS nginx
🔰 Mitigating Slow HTTP DoS attacks (Closing Slow Connections)
Rationale
You can close connections that are writing data too infrequently, which can represent an attempt to keep connections open as long as possible (thus reducing the server’s ability to accept new connections).
In my opinion, 2-3 seconds for `keepalive_timeout` are often enough for most folks to parse HTML/CSS and retrieve needed images, icons, or frames; connections are cheap in NGINX, so increasing this is generally safe. However, setting it too high wastes resources (mainly memory), since the connection remains open even if there is no traffic, potentially affecting performance significantly. I think it should be as close to your average response time as possible.
I would also suggest that if you set `send_timeout` small, your web server will close connections quickly, which leaves more connections available overall for connecting hosts.
These parameters are most likely only relevant on a high-traffic web server. Both support the same goal of fewer connections and more efficient handling of requests: either putting all requests into one connection (keepalive) or closing connections quickly to handle more requests (send timeout).
Example
client_body_timeout 10s;    # default: 60s
client_header_timeout 10s;  # default: 60s
keepalive_timeout 5s 5s;    # default: 75s
send_timeout 10s;           # default: 60s
External resources
- Module ngx_http_core_module
- Module ngx_http_limit_conn_module
- Mitigating DDoS Attacks with NGINX and NGINX Plus
- What is a DDoS Attack?
- SCG WS nginx
- How to Protect Against Slow HTTP Attacks
- Effectively Using and Detecting The Slowloris HTTP DoS Tool
- Limit concurrent connections — Hardening — P1 (from this handbook)
Reverse Proxy
Go back to the Table of Contents or What’s next? section.
📌 One of the frequent uses of NGINX is setting it up as a proxy server that can offload much of the infrastructure concerns of a high-volume distributed web application.
- Base Rules
- Debugging
- Performance
- Hardening
- ≡ Reverse Proxy (8)
- Use pass directive compatible with backend protocol
- Be careful with trailing slashes in proxy_pass directive
- Set and pass Host header only with $host variable
- Set properly values of the X-Forwarded-For header
- Don’t use X-Forwarded-Proto with $scheme behind reverse proxy
- Always pass Host, X-Real-IP, and X-Forwarded headers to the backend
- Use custom headers without X- prefix
- Always use $request_uri instead of $uri in proxy_pass
- Load Balancing
- Others
🔰 Use pass directive compatible with backend protocol
Rationale
All `proxy_*` directives relate to backends that speak the corresponding backend protocol.
You should always use `proxy_pass` only for HTTP servers working on the backend layer (also set the `http://` protocol before referencing the HTTP backend) and the other `*_pass` directives only for non-HTTP backend servers (like uWSGI or FastCGI).
Directives such as `uwsgi_pass`, `fastcgi_pass`, or `scgi_pass` are designed specifically for non-HTTP apps, and you should use them instead of `proxy_pass` for backends that do not talk HTTP.
For example: `uwsgi_pass` uses the uwsgi protocol, while `proxy_pass` uses normal HTTP to talk with the uWSGI server. The uWSGI docs claim that the uwsgi protocol is better and faster and can benefit from all of uWSGI’s special features: you can tell uWSGI what type of data you are sending and which uWSGI plugin should be invoked to generate the response. With plain HTTP (`proxy_pass`) you won’t get that.
Example
Not recommended configuration:
server {

  location /app/ {

    # For this, you should use the uwsgi_pass directive.
    # backend layer: uWSGI Python app
    proxy_pass 192.168.154.102:4000;

  }

  ...

}
Recommended configuration:
server {

  location /app/ {

    # backend layer: OpenResty as a front for app
    proxy_pass http://192.168.154.102:80;

  }

  location /app/v3 {

    # backend layer: uWSGI Python app
    uwsgi_pass 192.168.154.102:8080;

  }

  location /app/v4 {

    # backend layer: php-fpm app
    fastcgi_pass 192.168.154.102:8081;

  }

  ...

}
External resources
- Passing a Request to a Proxied Server
- Reverse proxy (from this handbook)
🔰 Be careful with trailing slashes in proxy_pass directive
Rationale
NGINX replaces the matched part literally, and you could end up with some strange URL.
If `proxy_pass` is used without a URI (i.e. without a path after `server:port`), NGINX will put the URI from the original request exactly as it was, with all double slashes, `../`, and so on.
A URI in `proxy_pass` acts like the alias directive: NGINX will replace the part that matches the location prefix with the URI in the `proxy_pass` directive (which I intentionally made the same as the location prefix), so the URI will be the same as requested but normalized (without double slashes and all that stuff).
Example
location = /a {

  proxy_pass http://127.0.0.1:8080/a;

  ...

}

location ^~ /a/ {

  proxy_pass http://127.0.0.1:8080/a/;

  ...

}
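To illustrate the literal replacement described above, here is a small sketch (the /name/ and /app/ prefixes and the backend address are made up for the example):

# with a URI part: a request for /name/foo is proxied to http://127.0.0.1:8080/remote/foo
location /name/ {

  proxy_pass http://127.0.0.1:8080/remote/;

}

# without a URI part: a request for /app/foo is proxied to http://127.0.0.1:8080/app/foo, unchanged
location /app/ {

  proxy_pass http://127.0.0.1:8080;

}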
External resources
- Module ngx_http_proxy_module — proxy_pass
- Trailing slashes (from this handbook)
🔰 Set and pass Host header only with $host variable
Rationale
You should almost always use `$host` as the incoming host variable, because it’s the only one guaranteed to have something sensible regardless of how the user agent behaves, unless you specifically need the semantics of one of the other variables.
It’s always a good idea to modify the `Host` header to make sure that virtual host resolution on the downstream server works as it should.
`$host` is simply `$http_host` with some processing (stripping the port number and lowercasing) and a default value (of the `server_name`), so there’s no less «exposure» to the `Host` header sent by the client than when using `$http_host`. There’s no danger in this though.
The variable `$host` is the host name from the request line or the HTTP header. The variable `$server_name` is the name of the server block we are in right now.
The difference is explained in the NGINX documentation:

- `$host` contains, in this order of precedence: host name from the request line, or host name from the `Host` request header field, or the server name matching a request
- `$http_host` contains the content of the HTTP `Host` header field, if it was present in the request (always equals the `HTTP_HOST` request header)
- `$server_name` contains the `server_name` of the virtual host which processed the request, as it was defined in the NGINX configuration. If a server contains multiple server names, only the first one will be present in this variable
`$http_host`, moreover, is better than `$host:$server_port` because it uses the port as present in the URL, unlike `$server_port` which uses the port that NGINX listens on.
On the other hand, using `$host` has its own vulnerability: you must properly handle the situation when the `Host` field is absent, by defining default server blocks to catch those requests. The key point though is that changing `proxy_set_header Host $host` would not change this behavior at all, because the `$host` value is equal to the `$http_host` value whenever the `Host` request field is present.
In NGINX we achieve this by using catch-all virtual hosts. These are the vhosts the web server falls back to when an unrecognized/undefined `Host` header appears in the client request. It’s also a good idea to specify exact (not wildcard) values of `server_name`.
Of course, the most important line of defense is the proper implementation of parsing mechanisms on the application side, e.g. using a list of allowed values for the `Host` header. Your web app should fully comply with RFC 7230 to avoid problems caused by inconsistent interpretation of the host to associate with an HTTP transaction. Per that RFC, the correct approach is to treat multiple `Host` headers and whitespace around field names as errors.
Example
proxy_set_header Host $host;
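And a minimal sketch of the catch-all default server described above (the non-standard 444 status, which makes NGINX close the connection without sending a response, is one common choice, not the only one):

server {

  listen 80 default_server;

  # catch requests whose Host header matches no other server block
  server_name _;

  # 444 closes the connection without sending any response
  return 444;

}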
External resources
- RFC 2616 — The Resource Identified by a Request [IETF]
- RFC 2616 — Host [IETF]
- Module ngx_http_proxy_module — proxy_set_header
- What is the difference between Nginx variables $host, $http_host, and $server_name?
- HTTP_HOST and SERVER_NAME Security Issues
- Reasons to use ‘$http_host’ instead of ‘$host’ with ‘proxy_set_header Host’ in template?
- Tip: keep the Host header via nginx proxy_pass
- What is a Host Header Attack?
- Practical HTTP Host header attacks
- Host of Troubles Vulnerabilities
- $10k host header
🔰 Set properly values of the X-Forwarded-For header
Rationale
`X-Forwarded-For` (XFF) is a custom HTTP header that carries along the original IP address of a client (it identifies the client IP address of an original request that was served through a proxy or load balancer) so the app at the other end knows what it is. Otherwise it would only see the proxy IP address, and that makes some apps angry.
The `X-Forwarded-For` value depends on the proxy server, which should actually pass the IP address of the client connecting to it. Where a connection passes through a chain of proxy servers, `X-Forwarded-For` can give a comma-separated list of IP addresses, the first being the furthest downstream (that is, the user). Because of this, servers behind proxy servers need to know which of them are trustworthy.
In most cases, proxies and load balancers automatically include an `X-Forwarded-For` header, for debugging, statistics, and generating location-dependent content based on the original request.
The usefulness of XFF depends on the proxy server truthfully reporting the original host’s IP address; for this reason, effective use of XFF requires knowledge of which proxies are trustworthy, for instance by looking them up in a whitelist of servers whose maintainers can be trusted.
The proxy used can set this header to anything it wants, and therefore you can’t trust its value. Most proxies do set the correct value though. This header is mostly used by caching proxies, and in those cases you’re in control of the proxy and can thus verify that it gives you the correct information. In all other cases its value should be considered untrustworthy.
Some systems also use `X-Forwarded-For` to enforce access control. A good number of applications rely on knowing the actual IP address of a client to help prevent fraud and enable access.
The value of the `X-Forwarded-For` header field can be set on the client’s side — this is also termed `X-Forwarded-For` spoofing. However, when the web request is made via a proxy server, the proxy server modifies the `X-Forwarded-For` field by appending the IP address of the client (user). This results in two comma-separated IP addresses in the `X-Forwarded-For` field.
A reverse proxy is not source-IP-address transparent. This is a pain when you need the client source IP address to be correct in the logs of the backend servers. I think the best solution to this problem is to configure the load balancer to add/modify an `X-Forwarded-For` header with the source IP of the client and forward it to the backend in the correct form.
Unfortunately, on the proxy side we are not able to fully solve this problem (all solutions can be spoofed), so it is important that this header is correctly interpreted by the application servers. Doing so ensures that the apps or downstream services have accurate information on which to base their decisions, including those regarding access and authorization.
In light of the latest httpoxy vulnerabilities, there is really a need for a full example of how to use `HTTP_X_FORWARDED_FOR` properly. In short, the load balancer sets the ‘most recent’ part of the header. In my opinion, for security reasons the trusted proxy servers should be specified manually by the administrator.
There is also an interesting idea of what to do in this situation:
To prevent this we must distrust that header by default and follow the IP address breadcrumbs backwards from our server. First we need to make sure the `REMOTE_ADDR` is someone we trust to have appended a proper value to the end of `X-Forwarded-For`. If so, then we need to make sure we trust the `X-Forwarded-For` IP to have appended the proper IP before it, and so on and so forth, until finally we get to an IP we don’t trust, and at that point we have to assume that’s the IP of our user. — it comes from Proxies & IP Spoofing by Xiao Yu.
Example
# The whole purpose of its existence is the appending behavior:
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

# The above is equivalent to this:
proxy_set_header X-Forwarded-For $http_x_forwarded_for,$remote_addr;

# The following is also equivalent to the above, but here we use the http_realip_module:
proxy_set_header X-Forwarded-For "$http_x_forwarded_for, $realip_remote_addr";
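Following the «trust the breadcrumbs» idea from the quote above, on the NGINX side the ngx_http_realip_module can walk the header back through known proxies; a minimal sketch, where the 192.168.0.0/16 range is an assumed list of trusted proxies:

# addresses we trust to have appended honest X-Forwarded-For entries:
set_real_ip_from 192.168.0.0/16;

# take the client address from the X-Forwarded-For header:
real_ip_header X-Forwarded-For;

# walk the header from right to left, skipping trusted addresses:
real_ip_recursive on;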
External resources
- Prevent X-Forwarded-For Spoofing or Manipulation
- Bypass IP blocks with the X-Forwarded-For header
- Forwarded header
🔰 Don’t use X-Forwarded-Proto with $scheme behind reverse proxy
Rationale
`X-Forwarded-Proto` can be set by the reverse proxy to tell the app whether it is HTTPS or HTTP (or even an invalid name).
The scheme variable (i.e. http or https) is evaluated only on demand (used only for the current request).
Setting the `$scheme` variable causes distortions when more than one proxy is involved along the way. For example: if the client goes to `https://example.com`, the first proxy stores the scheme value as HTTPS. If the communication between that proxy and the next-level proxy takes place over HTTP, then the backend sees the scheme as HTTP. So if you set `$scheme` for `X-Forwarded-Proto` on the next-level proxy, the app sees a different value than the one the client came with.
To resolve this problem you can define a helper variable with a map, as shown in the sketch after the example below.
Example
# 1) client <-> proxy <-> backend
proxy_set_header X-Forwarded-Proto $scheme;

# 2) client <-> proxy <-> proxy <-> backend
# proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
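The `$proxy_x_forwarded_proto` variable used above is not built into NGINX; a minimal sketch of the `map` (placed in the `http` context) commonly used to define it — it reuses the value received from the previous proxy and falls back to `$scheme` for direct client connections:

map $http_x_forwarded_proto $proxy_x_forwarded_proto {

  # keep whatever the first proxy already set:
  default $http_x_forwarded_proto;

  # no header present, request came straight from the client:
  ''      $scheme;

}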
External resources
- Reverse Proxy — Passing headers (from this handbook)
🔰 Always pass Host, X-Real-IP, and X-Forwarded headers to the backend
Rationale
When using NGINX as a reverse proxy you may want to pass through some information about the remote client to your backend web server. I think it’s good practice because it gives you more control over forwarded headers.
It’s very important for servers behind a proxy, because it allows them to identify the client correctly. Proxies are the «eyes» of such servers; they should not give them a distorted perception of reality. If not all requests pass through the proxy, requests received directly from clients may contain, e.g., inaccurate IP addresses in headers.
`X-Forwarded` headers are also important for statistics and filtering. Another example is access control rules in your app: without these headers the filtering mechanism may not work properly.
If you use a front-end service like Apache or whatever else as the front-end to your APIs, you will need these headers to understand what IP or hostname was used to connect to the API.
Forwarding these headers is also important if you use the HTTPS protocol (it has become a standard nowadays).
However, I would not rely on either the presence of all `X-Forwarded` headers or the validity of their data.
Example
location / {

  proxy_pass http://bk_upstream_01;

  # The following headers should also be passed to the backend:

  # - Host - host name from the request line, or host name from the Host request header field, or the server name matching a request
  # proxy_set_header Host $host:$server_port;
  # proxy_set_header Host $http_host;
  proxy_set_header Host $host;

  # - X-Real-IP - forwards the real visitor remote IP address to the proxied server
  proxy_set_header X-Real-IP $remote_addr;

  # X-Forwarded headers stack:

  # - X-Forwarded-For - marks the origin IP of the client connecting to the server through a proxy
  # proxy_set_header X-Forwarded-For $remote_addr;
  # proxy_set_header X-Forwarded-For $http_x_forwarded_for,$remote_addr;
  # proxy_set_header X-Forwarded-For "$http_x_forwarded_for, $realip_remote_addr";
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

  # - X-Forwarded-Host - marks the origin host of the client connecting to the server through a proxy
  # proxy_set_header X-Forwarded-Host $host:443;
  proxy_set_header X-Forwarded-Host $host:$server_port;

  # - X-Forwarded-Server - the hostname of the proxy server
  proxy_set_header X-Forwarded-Server $host;

  # - X-Forwarded-Port - defines the original port requested by the client
  # proxy_set_header X-Forwarded-Port 443;
  proxy_set_header X-Forwarded-Port $server_port;

  # - X-Forwarded-Proto - marks the protocol of the client connecting to the server through a proxy
  # proxy_set_header X-Forwarded-Proto https;
  # proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
  proxy_set_header X-Forwarded-Proto $scheme;

}
External resources
- Forwarding Visitor’s Real-IP + Nginx Proxy/Fastcgi backend correctly
- Using the Forwarded header
- Reverse Proxy — Passing headers (from this handbook)
- Set properly values of the X-Forwarded-For header — Reverse Proxy — P1 (from this handbook)
- Don’t use X-Forwarded-Proto with $scheme behind reverse proxy — Reverse Proxy — P1 (from this handbook)
🔰 Use custom headers without X- prefix
Rationale
The use of custom headers with the `X-` prefix is not forbidden but discouraged. In other words, you can keep using `X-` prefixed headers, but you should not document them as if they were a public standard.
An IETF draft was posted to deprecate the recommendation of using the `X-` prefix for non-standard headers. The reason is that when non-standard headers prefixed with `X-` become standard, removing the `X-` prefix breaks backwards compatibility, forcing application protocols to support both names (e.g. `x-gzip` and `gzip` are now equivalent). So the official recommendation is to just name them sensibly without the `X-` prefix.
The Internet Engineering Task Force released RFC 6648 [IETF], which recommends official deprecation of the `X-` prefix:
- 3. Recommendations for Creators of New Parameters […] SHOULD NOT prefix their parameter names with «X-» or similar constructs.
- 4. Recommendations for Protocol Designers […] SHOULD NOT prohibit parameters with an «X-» prefix or similar constructs from being registered. […] MUST NOT stipulate that a parameter with an «X-» prefix or similar constructs needs to be understood as unstandardized. […] MUST NOT stipulate that a parameter without an «X-» prefix or similar constructs needs to be understood as standardized.
The `X-` in front of a header name has customarily denoted it as experimental/non-standard/vendor-specific. Once it becomes a standard part of HTTP, it loses the prefix.
If it’s possible for your new custom header to be standardized someday, use an unused and meaningful header name.
Example
Not recommended configuration:
add_header X-Backend-Server $hostname;
Recommended configuration:
add_header Backend-Server $hostname;
External resources
- Use of the «X-» Prefix in Application Protocols [IETF]
- Why we need to deprecate x prefix for HTTP headers?
- Custom HTTP headers : naming conventions
- Set the HTTP headers with add_header and proxy_*_header directives properly — Base Rules — P1 (from this handbook)
🔰 Always use $request_uri instead of $uri in proxy_pass
Rationale
Naturally, there are exceptions to what I’m going to say here. I think the best rule for passing an unchanged URI to the backend layer is to use `proxy_pass http://<backend>` without any arguments.
If you add something at the end of the `proxy_pass` directive, you should always pass the unchanged URI to the backend layer unless you know what you’re doing. For example, accidentally using `$uri` in `proxy_pass` opens you up to HTTP header injection vulnerabilities, because URL-encoded characters are decoded (this sometimes matters and is not equivalent to `$request_uri`). What’s more, the value of `$uri` may change during request processing, e.g. when doing internal redirects or when using index files.
`$request_uri` equals the original request URI as received from the client, including the arguments. In this case (passing a variable like `$request_uri`), if a URI is specified in the directive, it is passed to the server as-is, replacing the original request URI.
Note also that using `proxy_pass` with variables implies various other side effects, notably the use of a resolver for dynamic name resolution, and is generally less efficient than using names in the configuration.
See also what the NGINX documentation says about it:
If proxy_pass is specified without a URI, the request URI is passed to the server in the same form as sent by a client when the original request is processed […]
Example
Not recommended configuration:
location /foo { proxy_pass http://django_app_server$uri; }
Recommended configuration:
location /foo { proxy_pass http://django_app_server$request_uri; }
Most recommended configuration:
location /foo { proxy_pass http://django_app_server; }
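As a footnote to the resolver side effect mentioned above, a minimal sketch of `proxy_pass` with a variable (the 10.0.0.2 resolver address and the backend hostname are assumptions for illustration):

resolver 10.0.0.2 valid=30s;

location /foo {

  # a variable in proxy_pass forces run-time DNS resolution via the resolver above
  set $backend "django_app_server.internal";
  proxy_pass http://$backend$request_uri;

}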
Load Balancing
Go back to the Table of Contents or What’s next? section.
📌 Load balancing is a useful mechanism to distribute incoming traffic around several capable servers. We may improve on some rules for NGINX working as a load balancer.
- Base Rules
- Debugging
- Performance
- Hardening
- Reverse Proxy
- ≡ Load Balancing (2)
- Tweak passive health checks
- Don’t disable backends by comments, use down parameter
- Others
🔰 Tweak passive health checks
Rationale
Monitoring for health is important for all types of load balancing, mainly for business continuity. Passive checks watch for failed or timed-out connections as they pass through NGINX as requested by a client.
This functionality is enabled by default, but the parameters mentioned here allow you to tweak its behaviour. The default values are `max_fails=1` and `fail_timeout=10s`.
.
Example
upstream backend {

  server bk01_node:80 max_fails=3 fail_timeout=5s;
  server bk02_node:80 max_fails=3 fail_timeout=5s;

}
External resources
- Module ngx_http_upstream_module
🔰 Don’t disable backends by comments, use down parameter
Rationale
Sometimes we need to turn off backends, e.g. at maintenance time. I think the good solution is to mark the server as permanently unavailable with the `down` parameter, even if the downtime is short.
It’s also important if you use the IP hash load-balancing technique. If one of the servers needs to be temporarily removed, it should be marked with this parameter in order to preserve the current hashing of client IP addresses.
Comments are good for permanently disabling servers or for leaving information for historical purposes.
NGINX also provides a `backup` parameter which marks the server as a backup server. It will be passed requests when the primary servers are unavailable. I use this option rarely for the above purposes, and only if I am sure that the backends will work during maintenance.
You can also use `ngx_dynamic_upstream` for operating upstreams dynamically via an HTTP API.
Example
upstream backend {

  server bk01_node:80 max_fails=3 fail_timeout=5s down;
  server bk02_node:80 max_fails=3 fail_timeout=5s;

}
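And a small sketch of the `backup` parameter mentioned above (the bk03_node name is illustrative):

upstream backend {

  server bk01_node:80 max_fails=3 fail_timeout=5s;
  server bk02_node:80 max_fails=3 fail_timeout=5s;

  # receives traffic only when both primary servers are unavailable:
  server bk03_node:80 backup;

}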
External resources
- Module ngx_http_upstream_module
- Module ngx_dynamic_upstream
Others
Go back to the Table of Contents or What’s next? section.
📌 These rules aren’t strictly related to NGINX, but in my opinion they’re also a very important aspect of security.
- Base Rules
- Debugging
- Performance
- Hardening
- Reverse Proxy
- Load Balancing
- ≡ Others (4)
- Set the certificate chain correctly
- Enable DNS CAA Policy
- Define security policies with security.txt
- Use tcpdump to diagnose and troubleshoot the HTTP issues
🔰 Set the certificate chain correctly
Rationale
A chain of trust is a linked path of verification and validation from an end-entity digital certificate to a root certificate authority (CA) that acts as a trust anchor.
Your browser (and possibly your OS) ships with a list of trusted CAs. These pre-installed certificates serve as trust anchors to derive all further trust from. When visiting an HTTPS website, your browser verifies that the trust chain presented by the server during the TLS handshake ends at one of the locally trusted root certificates.
Validation of the certificate chain is a critical part within any certificate-based authentication process. If a system does not follow the chain of trust of a certificate to a root server, the certificate loses all usefulness as a metric of trust.
The server always sends a chain but should never present certificate chains containing a trust anchor which is the root CA certificate, because the root is useless for validation purposes. As per the TLS standard, the chain may or may not include the root certificate itself; the client does not need that root since it already has it. And, indeed, if the client does not already have the root, then receiving it from the server would not help since a root can be trusted only by virtue of being already there.
What’s more, the presence of a trust anchor in the certification path can have a negative impact on performance when establishing connections using SSL/TLS, because the root is «downloaded» on each handshake (omitting it saves about 1 kB of data bandwidth per connection and reduces server-side memory consumption for TLS session parameters).
For best practices, remove the self-signed root from the server. The certificate bundle should only include the certificate’s public key and the public key of any intermediate certificate authorities. Browsers will only trust certificates that resolve to roots already in their trust store; they will ignore a root certificate sent in the certificate bundle (otherwise, anyone could send any root).
With the chain broken, there is no verification that the server hosting the data is the correct (expected) server — there is no way to be sure the server is what it says it is (you lose the ability to validate the security of the connection or to trust it). The connection is still secure (still encrypted), but the main concern is to fix that certificate chain. You should solve the incomplete-chain issue manually by concatenating all certificates from your certificate up to (but excluding) the trusted root certificate, in this order, to prevent such issues.
Example of incomplete chain: incomplete-chain.badssl.com.
From the «SSL Labs: SSL and TLS Deployment Best Practices — 2.1 Use Complete Certificate Chains»:
An invalid certificate chain effectively renders the server certificate invalid and results in browser warnings. In practice, this problem is sometimes difficult to diagnose because some browsers can reconstruct incomplete chains and some can’t. All browsers tend to cache and reuse intermediate certificates.
Example
On the OpenSSL side:
$ ls
root_ca.crt inter_ca.crt example.com.crt

# Build manually. The server certificate comes first in the chain file, then the intermediates:
$ cat example.com.crt inter_ca.crt > certs/example.com/example.com-chain.crt
To build a valid SSL certificate chain you may use the mkchain tool. It can also help you fix an incomplete chain and download all missing CA certificates. You can also download all certificates from a remote server to get your certificate chain right.
# If you have all certificates:
$ ls /data/certs/example.com
root.crt inter01.crt inter02.crt certificate.crt
$ mkchain -i /data/certs/example.com -o /data/certs/example.com-chain.crt

# If you have only the server certificate (downloads all missing CA certificates automatically):
$ ls /data/certs/example.com
certificate.crt
$ mkchain -i certificate.crt -o /data/certs/example.com-chain.crt

# If you want to download the certificate chain from an existing domain:
$ mkchain -i https://incomplete-chain.badssl.com/ -o /data/certs/example.com-chain.crt
On the NGINX side:
server {

  listen 192.168.10.2:443 ssl http2;

  ssl_certificate certs/example.com/example.com-chain.crt;
  ssl_certificate_key certs/example.com/example.com.key;

  ...
External resources
- What is the SSL Certificate Chain?
- What is a chain of trust?
- The Difference Between Root Certificates and Intermediate Certificates
- Get your certificate chain right
- Verify certificate chain with OpenSSL
- Chain of Trust (from this handbook)
🔰 Enable DNS CAA Policy
Rationale
A DNS CAA policy helps you control which Certificate Authorities are allowed to issue certificates for your domain, because if no CAA record is present, any CA is allowed to issue a certificate for the domain.
The purpose of the CAA record is to allow domain owners to declare which certificate authorities are allowed to issue a certificate for a domain. They also provide a means of indicating notification rules in case someone requests a certificate from an unauthorized certificate authority.
If a CAA record is present, only the CAs listed in the record(s) are allowed to issue certificates for that hostname.
Example
Generic configuration (Google Cloud DNS, Route 53, OVH, and other hosted services) for Let’s Encrypt:
example.com. CAA 0 issue "letsencrypt.org"
Standard Zone File (BIND, PowerDNS and Knot DNS) for Let’s Encrypt:
example.com. IN CAA 0 issue "letsencrypt.org"
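The notification rules mentioned above can be expressed with the `iodef` tag, and `issuewild` restricts wildcard issuance; a hedged example, where the contact address is illustrative:

example.com. IN CAA 0 iodef "mailto:security@example.com"
example.com. IN CAA 0 issuewild ";"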
External resources
- DNS Certification Authority Authorization (CAA) Resource Record [IETF]
- CAA Records
- CAA Record Helper
🔰 Define security policies with security.txt
Rationale
Add a `security.txt` file to your site, with correct contact details inside, so that people reporting security issues won’t have to guess where to send their reports.
The main purpose of `security.txt` is to make things easier for companies and security researchers when trying to secure platforms. It also provides information to assist in disclosing security vulnerabilities.
When security researchers detect potential vulnerabilities in a page or application, they will try to contact someone «appropriate» to «responsibly» disclose the problem. It’s worth making sure they can reach the right address.
This file should be placed under the `/.well-known/` path, e.g. `/.well-known/security.txt` (RFC 5785 [IETF]) of a domain name or IP address for web properties.
Example
$ curl -ks https://example.com/.well-known/security.txt
Contact: security@example.com
Contact: +1-209-123-0123
Encryption: https://example.com/pgp.txt
Preferred-Languages: en
Canonical: https://example.com/.well-known/security.txt
Policy: https://example.com/security-policy.html
And from Google:
$ curl -ks https://www.google.com/.well-known/security.txt
Contact: https://g.co/vulnz
Contact: mailto:security@google.com
Encryption: https://services.google.com/corporate/publickey.txt
Acknowledgements: https://bughunter.withgoogle.com/
Policy: https://g.co/vrp
Hiring: https://g.co/SecurityPrivacyEngJobs
# Flag: BountyCon{075e1e5eef2bc8d49bfe4a27cd17f0bf4b2b85cf}
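On the NGINX side, a minimal sketch for serving the file (the /var/www/security.txt path is an assumption; adjust it to where the file actually lives):

location = /.well-known/security.txt {

  # map the URI directly to the static policy file on disk
  alias /var/www/security.txt;

}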
External resources
- A Method for Web Security Policies [IETF]
- security.txt
- Say hello to security.txt
🔰 Use tcpdump to diagnose and troubleshoot the HTTP issues
Rationale
Tcpdump is a swiss army knife (a well-known command-line packet analyzer/protocol decoder) for all administrators and developers when it comes to troubleshooting. You can use it to monitor HTTP traffic between a proxy (or clients) and your backends.
I use tcpdump for quick inspection. If I need a more in-depth look, I capture everything to a file and open it with a powerful sniffer that can decode lots of protocols and offers lots of filters, like Wireshark.
Run `man tcpdump | less -Ip examples` to see some examples.
Example
Capture everything and write to a file:
tcpdump -X -s0 -w dump.pcap <tcpdump_params>
Monitor incoming (on interface) traffic, filter by `<ip:port>`:
tcpdump -ne -i eth0 -Q in host 192.168.252.1 and port 443
- `-n` — don’t convert addresses (`-nn` will not resolve hostnames or ports)
- `-e` — print the link-level headers
- `-i [iface|any]` — set interface
- `-Q|-D [in|out|inout]` — choose send/receive direction (`-D` — for old tcpdump versions)
- `host [ip|hostname]` — set host, also `[host not]`
- `[and|or]` — set logic
- `port [1-65535]` — set port number, also `[port not]`
Monitor incoming (on interface) traffic, filter by `<ip:port>`, and write to a file:
tcpdump -ne -i eth0 -Q in host 192.168.252.10 and port 80 -c 5 -w dump.pcap
- `-c [num]` — capture only num number of packets
- `-w [filename]` — write packets to file, `-r [filename]` — read from file
Monitor all HTTP GET traffic/requests:
tcpdump -i eth0 -s 0 -A -vv 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420'
- `tcp[((tcp[12:1] & 0xf0) >> 2):4]` — first determines the location of the bytes we are interested in (just after the TCP header) and then selects the 4 bytes we wish to match against
- `0x47455420` — represents the ASCII values of characters ‘G’, ‘E’, ‘T’, ‘ ’
Monitor all HTTP POST traffic/requests, filter by destination port:
tcpdump -i eth0 -s 0 -A -vv 'tcp dst port 80 and (tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x504f5354)'
- `0x504F5354` — represents the ASCII values of characters ‘P’, ‘O’, ‘S’, ‘T’
Monitor HTTP traffic including request and response headers and message body, filter by port:
tcpdump -A -s 0 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'
Monitor HTTP traffic including request and response headers and message body, filter by `<src-ip:port>`:
tcpdump -A -s 0 'src 192.168.252.10 and tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'
External resources
- TCPDump Capture HTTP GET/POST requests – Apache, Weblogic & Websphere
- Wireshark tcp filter: `tcp[((tcp[12:1] & 0xf0) >> 2):4]`
- How to Use tcpdump Commands with Examples
- Helpers — Debugging (from this handbook)
Whenever NGINX encounters an error while trying to process a client’s request, it returns an error.
Each error includes an HTTP response code and a brief description.
The error is usually displayed to the user via a simple default HTML page.
Fortunately, you can configure NGINX to display custom error pages to users of your site or web application.
This can be achieved with NGINX’s error_page directive, which defines the URI that will be shown for a specified error.
You can also, optionally, use it to change the HTTP status code in the response headers sent to the client.
In this guide, we will show how to configure NGINX to use custom error pages.
Creating a single custom page for all NGINX errors
You can configure NGINX to use a single custom error page for all errors it returns to the client.
Start by creating the error page.
Here is an example of a simple HTML page that displays the message "Sorry, the page could not be loaded! Contact the site administrator or support for help." to the client.
A sample custom NGINX HTML page:
<!DOCTYPE html>
<html>
<head>
<style type="text/css">
  * { -webkit-box-sizing: border-box; box-sizing: border-box; }
  body { padding: 0; margin: 0; }
  #notfound { position: relative; height: 100vh; }
  #notfound .notfound { position: absolute; left: 50%; top: 50%; -webkit-transform: translate(-50%, -50%); -ms-transform: translate(-50%, -50%); transform: translate(-50%, -50%); }
  .notfound { max-width: 520px; width: 100%; line-height: 1.4; text-align: center; }
  .notfound .notfound-error { position: relative; height: 200px; margin: 0px auto 20px; z-index: -1; }
  .notfound .notfound-error h1 { font-family: 'Montserrat', sans-serif; font-size: 200px; font-weight: 300; margin: 0px; color: #211b19; position: absolute; left: 50%; top: 50%; -webkit-transform: translate(-50%, -50%); -ms-transform: translate(-50%, -50%); transform: translate(-50%, -50%); }
  @media only screen and (max-width: 767px) {
    .notfound .notfound-error h1 { font-size: 148px; }
  }
  @media only screen and (max-width: 480px) {
    .notfound .notfound-error { height: 148px; margin: 0px auto 10px; }
    .notfound .notfound-error h1 { font-size: 120px; font-weight: 200; }
    .notfound .notfound-error h2 { font-size: 30px; }
    .notfound a { padding: 7px 15px; font-size: 24px; }
    .h2 { font-size: 148px; }
  }
</style>
</head>
<body>
<div id="notfound">
  <div class="notfound">
    <h1>Sorry, the page could not be loaded!</h1>
    <div class="notfound-error">
      <p>Contact the site administrator or support for help.</p>
    </div>
  </div>
</div>
</body>
</html>
Save the file with a suitable name, e.g. error-page.html, and close it.
Then move the file to the document root directory (/var/www/html/).
If the directory does not exist, you can create it with the mkdir command as shown next:
$ sudo mkdir -p /var/www/html/
$ sudo cp error-page.html /var/www/html/
Next, configure NGINX to use the custom error page via the error_page directive.
Create a configuration file custom-error-page.conf in the /etc/nginx/snippets/ directory as shown below:
$ sudo mkdir /etc/nginx/snippets/
$ sudo vim /etc/nginx/snippets/custom-error-page.conf
Add the following lines to it:
error_page 404 403 500 503 /error-page.html;

location = /error-page.html {

  root /var/www/html;
  internal;

}
This configuration triggers an internal redirect to the URI /error-page.html whenever NGINX encounters any of the specified HTTP errors: 404, 403, 500, or 503.
The location context tells NGINX where to find your error page.
Save the file and close it.
Now include the file in the http context, so that all server blocks use the error page, in the /etc/nginx/nginx.conf file:
$ sudo vim /etc/nginx/nginx.conf
The include directive tells NGINX to include the configuration from the specified .conf file:
include snippets/custom-error-page.conf;
Alternatively, you can include the file for a specific server block (commonly known as a vhost), e.g. /etc/nginx/conf.d/mywebsite.conf.
Add the above include directive inside the server {} context.
Save the NGINX configuration file and reload the service as follows:
$ sudo systemctl reload nginx.service
Then check in a browser that everything works as expected.
Creating different custom pages for each NGINX error
You can also configure a different custom error page for each HTTP error in NGINX.
We found a good collection of custom NGINX error pages on GitHub.
To install the repository on your server, run the following commands:
$ sudo git clone https://github.com/denysvitali/nginx-error-pages /srv/http/default
$ sudo mkdir /etc/nginx/snippets/
$ sudo ln -s /srv/http/default/snippets/error_pages.conf /etc/nginx/snippets/error_pages.conf
$ sudo ln -s /srv/http/default/snippets/error_pages_content.conf /etc/nginx/snippets/error_pages_content.conf
Then add the following configuration either in the http context or in each server block/vhost:
include snippets/error_pages.conf;
Save the NGINX configuration file and reload the service as follows:
$ sudo systemctl reload nginx.service
Again, check with a browser that the configuration works as intended.
The error_page directive in NGINX lets you redirect users to a specific page, resource, or URL when an error occurs.
It also lets you change the HTTP status code in the response sent to the client.
For more information, check the nginx error_page documentation.
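A short sketch of the status-code change mentioned above: the `=` modifier in error_page rewrites the response code sent to the client (the /empty.gif URI is illustrative):

# respond with 200 and a placeholder instead of 404:
error_page 404 =200 /empty.gif;

location = /empty.gif {

  root /var/www/html;
  internal;

}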
See also:
- 🌐 How to increase the request timeout in NGINX
- 🌐 How to take advantage of dynamic DNS resolution in NGINX
- How to limit the file upload size in Nginx