Socket error on read socket redis


I am receiving this error:

Redis server error: socket error on read socket
Error received as ‘JobQueueError from line xxx of JobQueueRedis.php: Redis server error: socket error on read socket’

I tried setting the persistent connection option to true:

$wgObjectCaches['redis'] = [
  'class'       => 'RedisBagOStuff',
  'servers'     => [ $redisserver ],
  'persistent'  => true
];
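If timeouts rather than connection reuse are the issue, raising the connect and read timeouts may also help. This is only a sketch: the 'connectTimeout' and 'readTimeout' keys are assumptions about the MediaWiki Redis client options and should be checked against your MediaWiki version's documentation.

```php
$wgObjectCaches['redis'] = [
  'class'          => 'RedisBagOStuff',
  'servers'        => [ $redisserver ],
  'persistent'     => true,
  // Assumed option names; verify against your MediaWiki release.
  'connectTimeout' => 2, // seconds to wait when establishing the connection
  'readTimeout'    => 2, // seconds to wait for a reply before a read error
];
```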

asked Sep 25, 2019 at 9:27 by Vrushali

In our case, the following errors occurred because our AWS ElastiCache Redis cluster was using 100% of its available memory:

"RedisException","message":"socket error on read socket","code":0
"RedisClusterException","message":"Error processing response from Redis node!"

Increasing the number of nodes and/or allowing more memory on the instance seemed to solve the problem. But it appears our cached values take a lot of memory and need to expire faster.
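On the Redis side, a sketch of the settings that keep a full cache from failing (on ElastiCache these are set through a parameter group rather than redis.conf; the directive is real, the choice of policy is illustrative):

```
# Evict least-recently-used keys instead of erroring when memory is full.
maxmemory-policy allkeys-lru
```

Giving cached values an explicit TTL (e.g. SET key value EX 3600) also keeps the working set bounded.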


answered Aug 7, 2020 at 12:03 by wlarcheveque

For me, it was an error in the health-check script, which used the --no-auth-warning option. That option does not exist in Redis 4, so redis-cli --no-auth-warning -a password ping returned an error; the script then kept restarting the Redis server, and the server produced the socket error.

answered Nov 11, 2022 at 7:22 by Ramratan Gupta

Nobody feels like working on the afternoon of Friday, April 1st: the hardware might decide to pull a prank of its own. So I decided to write about something instead.
Not long ago, an article here on Habr indiscriminately bashed Unix sockets, MySQL, PHP, and Redis all at once. We will not try to cover everything in one article; let's focus on sockets, and a little on Redis.
So, the question: which are faster, Unix sockets or TCP sockets?
The question is hardly worth arguing about, yet it keeps being stirred up, and I would not have written this were it not for the poll in that very article, according to which nearly half of the respondents believe that TCP sockets are better/more reliable/more stable to use.
Those who already choose AF_UNIX can stop reading here.

Let's start with a brief summary of the theory.
A socket is one of the interfaces for inter-process communication that allows building client-server systems for local or network use. Since one side of our comparison is Unix sockets, from here on we will be talking about IPC within a single machine.
Unlike named pipes, sockets distinguish between client and server: the socket mechanism lets you create a server to which many clients connect.

How the interaction works on the server side:
— the socket system call creates a socket, but this socket cannot yet be used together with other processes;
— the socket is given a name. For local sockets in the AF_UNIX (AF_LOCAL) domain the address is a file name. Network AF_INET sockets are named by their IP address and port;
— the listen(int socket, int backlog) system call forms a queue of incoming connections. The second parameter, backlog, determines the length of that queue;
— the server accepts these connections with the accept call, which creates a new socket, distinct from the named one. This new socket is used only for communicating with that particular client.

From the client's point of view the connection is somewhat simpler:
— call socket;
— then connect, using the server's named socket as the address.

Let's look more closely at the call int socket(int domain, int type, int protocol), whose second parameter determines the type of data exchange used with this socket. In our comparison we will consider its value SOCK_STREAM, which provides a reliable, ordered, bidirectional byte stream. That is, the sockets under consideration are of the form
sockfd = socket(AF_UNIX, SOCK_STREAM, 0);
and
sockfd = socket(AF_INET, SOCK_STREAM, 0);

The socket address structure in the AF_UNIX domain is simple:

struct sockaddr_un {
        unsigned char   sun_len;        /* sockaddr len including null */
        sa_family_t     sun_family;     /* AF_UNIX */
        char    sun_path[104];          /* path name (gag) */
};

In the AF_INET domain it is somewhat more complex:

struct sockaddr_in {
        uint8_t sin_len;
        sa_family_t     sin_family;
        in_port_t       sin_port;
        struct  in_addr sin_addr;
        char    sin_zero[8];
};

and filling it in costs us extra. In particular, there may be the cost of name resolution (gethostbyname) and/or of deciding which end of the egg to break (htons).

Also, sockets in the AF_INET domain, even when connecting to localhost, do not "know" that they are running on the local system, and so they make no effort to bypass the network stack for better performance. We therefore "pay" for context switches, ACKs, TCP flow control, routing, splitting of large packets, and so on. In other words, this is full-blown TCP work even though we never leave the local interface.

AF_UNIX sockets, in turn, "know" that they are running within a single system. They avoid the cost of building IP headers, routing, checksum calculation, and so on. Moreover, since the AF_UNIX domain uses the file system as its address space, we get a bonus: ordinary file permissions can be used to control access to the socket. We can thus restrict which processes may access a socket with no real effort and, again, pay nothing for extra security machinery.

Let's test the theory in practice.
I am too lazy to write the server side, so I will use redis-server itself. Its functionality suits the task perfectly, and along the way we can check whether the accusations against it were justified. We will write the client sides ourselves, issuing the simplest command, INCR, with complexity O(1).
Socket creation is deliberately placed inside the loops.
The TCP client:

AF_INET

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>   /* bzero, bcopy */
#include <unistd.h>    /* read, write, close */

#include <sys/types.h>
#include <sys/socket.h>

#include <netdb.h>
#include <netinet/in.h>

int main(int argc, char *argv[]) {
   int sockfd, portno, n;
   struct sockaddr_in serv_addr;
   struct hostent *server;
   
   char buffer[256];
   
   if (argc < 4) {
      fprintf(stderr, "usage %s hostname port count_req\n", argv[0]);
      exit(0);
   }
	
   portno = atoi(argv[2]);
   
   int i=0;
   int ci = atoi(argv[3]);
   for (; i < ci; i++)
   {
      
      sockfd = socket(AF_INET, SOCK_STREAM, 0);
      
      if (sockfd < 0) {
         perror("ERROR opening socket");
         exit(1);
      }
           
      server = gethostbyname(argv[1]);
      
      if (server == NULL) {
         fprintf(stderr, "ERROR, no such host\n");
         exit(0);
      }
      
      bzero((char *) &serv_addr, sizeof(serv_addr));
      serv_addr.sin_family = AF_INET;
      bcopy((char *)server->h_addr, (char *)&serv_addr.sin_addr.s_addr, server->h_length);
      serv_addr.sin_port = htons(portno);
      
      if (connect(sockfd, (struct sockaddr*)&serv_addr, sizeof(serv_addr)) < 0) {
         perror("ERROR connecting");
         exit(1);
      }
      
      char str[] = "*2\r\n$4\r\nincr\r\n$3\r\nfoo\r\n";
      int len = sizeof(str);
      bzero(buffer, len);
      memcpy ( buffer, str, len );

      n = write(sockfd, buffer, strlen(buffer));
      
      if (n < 0) {
         perror("ERROR writing to socket");
         exit(1);
      }
      
      bzero(buffer,256);
      n = read(sockfd, buffer, 255);
      
      if (n < 0) {
         perror("ERROR reading from socket");
         exit(1);
      }
      
      printf("%s\n", buffer);
      close(sockfd);
   }
   return 0;
}

The UNIX client:

AF_UNIX

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>   /* bzero */
#include <unistd.h>    /* read, write, close */

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(int argc, char *argv[]) {
   int sockfd, n;
   struct sockaddr_un serv_addr;
   
   char buffer[256];
   
   if (argc < 2) {
      fprintf(stderr, "usage %s count_req\n", argv[0]);
      exit(0);
   }
	
   int i=0;
   int ci = atoi(argv[1]);
   for (; i < ci; i++)
   {
      
      sockfd = socket(AF_UNIX, SOCK_STREAM, 0);
      
      if (sockfd < 0) {
         perror("ERROR opening socket");
         exit(1);
      }
      
      bzero((char *) &serv_addr, sizeof(serv_addr));
      serv_addr.sun_family = AF_UNIX;
      strcpy(serv_addr.sun_path, "/tmp/redis.sock");
      
      if (connect(sockfd, (struct sockaddr*)&serv_addr, sizeof(serv_addr)) < 0) {
         perror("ERROR connecting");
         exit(1);
      }
      
      char str[] = "*2\r\n$4\r\nincr\r\n$3\r\nfoo\r\n";
      int len = sizeof(str);
      bzero(buffer, len);
      memcpy ( buffer, str, len );
      
      n = write(sockfd, buffer, strlen(buffer));
      
      if (n < 0) {
         perror("ERROR writing to socket");
         exit(1);
      }
      
      bzero(buffer,256);
      n = read(sockfd, buffer, 255);
      
      if (n < 0) {
         perror("ERROR reading from socket");
         exit(1);
      }
      
      printf("%s\n", buffer);
      close(sockfd);
   }
   return 0;
}

Testing with a single client:

# redis-cli set foo 0 ; time ./redistcp 127.0.0.1 6379 1000000 > /dev/null ; redis-cli get foo
OK
2.108u 21.991s 1:13.75 32.6%    9+158k 0+0io 0pf+0w
"1000000"

# redis-cli set foo 0 ; time ./redisunix 1000000 > /dev/null ; redis-cli get foo
OK
0.688u 9.806s 0:36.90 28.4%     4+151k 0+0io 0pf+0w
"1000000"

And now for twenty parallel clients, each sending 500,000 requests.
for TCP: 6:12.86

# redis-cli info Commandstats
cmdstat_set:calls=1,usec=5,usec_per_call=5.00
cmdstat_incr:calls=10000000,usec=24684314,usec_per_call=2.47

for UNIX: 4:11.23

# redis-cli info Commandstats
cmdstat_set:calls=1,usec=8,usec_per_call=8.00
cmdstat_incr:calls=10000000,usec=22258069,usec_per_call=2.23

Thus, on the whole, the only arguments in favor of TCP sockets are flexibility of deployment and easy scaling. If you only need to work within a single machine, the choice is clearly UNIX sockets. The choice between TCP and UNIX sockets is, above all, a choice between portability and performance.

On that note, I suggest we love Unix sockets and leave the big-endian/little-endian disputes to the inhabitants of Lilliput and Blefuscu.

Contents

  1. ‘read error on connection’ #492
  2. Comments
  3. Server
  4. Clients
  5. Memory
  6. Persistence
  7. Stats
  8. Replication
  9. Redis connection to xxxxxx failed — read ECONNRESET #980
  10. Comments
  11. Redis client handling
  12. Accepting Client Connections
  13. What Order are Client Requests Served In?
  14. Maximum Concurrent Connected Clients
  15. Output Buffer Limits
  16. Query Buffer Hard Limit
  17. Client Eviction
  18. Client Timeouts
  19. The CLIENT Command
  20. TCP keepalive

‘read error on connection’ #492

Hi, how can I solve this error?
PHP message: PHP Fatal error: Uncaught exception 'RedisException' with message 'read error on connection'
I'm getting it for this call: Redis->hGet('fechainicio', '1')

My system:
PHP version 5.4.4-14+
phpinfo reports phpredis 2.2.5, but redis-cli reports redis_version:2.6.17
With 17 clients connected, it takes 6 seconds for the prompt to appear when I type redis-cli.

Memory
used_memory:91931192
used_memory_human:87.67M
used_memory_rss:105947136
used_memory_peak:5178554456
used_memory_peak_human:4.82G
used_memory_lua:31744
mem_fragmentation_ratio:1.15
mem_allocator:jemalloc-3.2.0

Persistence to disk is disabled.
Thanks


My guess is you’re hitting a timeout, either because Redis is under heavy load and/or paging, or because of network issues?

Have you tried running:

redis-cli --latency

That will let you know if you're having latency issues.

min: 1, max: 4762, avg: 580.68 (81 samples)

To me this looks like a timeout failure. A max latency of 4762ms is quite high, average 580 is also quite high. Is the server under tremendous load or paging to disk?

The server only gets 3k HTTP requests per minute; 99% of them follow this schema:
nginx => php => echo content fetched from Redis, plus some atomic increments in Redis.
The other 1% of requests call the database and store the result in Redis.
The server has 64 GB of RAM.
About paging to disk: if you mean Redis saving to disk, I have disabled it; if you mean something else, please explain.
Thanks

It’s difficult for me to diagnose the problem without knowing more information. These latency numbers are really, really high:

min: 1, max: 4762, avg: 580.68 (81 samples)

Here is a random production server we use, which is under constant load (>50k ops/sec):

min: 0, max: 4, avg: 0.69 (898 samples)

The most likely possibilities:

  1. You have a network issue. You could test that by running --latency on the server itself (assuming you didn't).
  2. You have operations that are incredibly expensive (e.g. KEYS, ZUNIONSTORE on huge sets, etc)
  3. The server is out of ram and paging. Honestly though, Redis is still generally fast in this case as long as you aren’t doing expensive ops.

Tell me what more info you need.
Network issue: the test was done on localhost.
Operations: I only use hget, hset, lpop, and hincr.
Redis is configured with 100 MB of memory.

I’m happy to try and help, but this doesn’t appear to be related to phpredis at all. 😄

Could you send me the output from:

redis-cli info

and

slowlog get 10

This is the data from redis-cli info; the server was recently restarted. I'm also seeing high CPU usage in the PHP pool that calls Redis.
SLOWLOG GET returns an empty list.

Server

redis_version:2.6.17
redis_git_sha1:00000000
redis_git_dirty:0
redis_mode:standalone
os:Linux 3.10.23-xxxx-std-ipv6-64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.7.2
process_id:5107
run_id:74a23fe027297315ab832f7483326d057f29785b
tcp_port:6379
uptime_in_seconds:406
uptime_in_days:0
hz:10
lru_clock:218698

Clients

connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

Memory

used_memory:47619360
used_memory_human:45.41M
used_memory_rss:49885184
used_memory_peak:5178553432
used_memory_peak_human:4.82G
used_memory_lua:31744
mem_fragmentation_ratio:1.05
mem_allocator:jemalloc-3.2.0

Persistence

loading:0
rdb_changes_since_last_save:195548
rdb_bgsave_in_progress:0
rdb_last_save_time:1407278415
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok

Stats

total_connections_received:39364
total_commands_processed:313049
instantaneous_ops_per_sec:1053
rejected_connections:0
expired_keys:0
evicted_keys:764
keyspace_hits:0
keyspace_misses:78610
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0

CPU

used_cpu_sys:10.82
used_cpu_user:25.61
used_cpu_sys_children:0.00
used_cpu_user_children:0.00


Redis connection to xxxxxx failed — read ECONNRESET #980

Please advise what info is needed, and how to obtain it, to help diagnose this.

Running on OpenShift
Node Version: v4.2.3
Redis Server: 2.18.24
Node_Redis: 2.4.2


Got a little more info today:

The server connection is lost, and since the errors are unhandled I guess you did not attach an error event listener to the client?

UNCERTAIN_STATE means that the connection got lost after firing a command but without the answer being returned. Your call took 108439ms so that’s a huge delay.

Please check your server connection and add an error handler. That is likely all I can help you with right now.

@BridgeAR I added the following code:

The first and the third errors are still not handled. Are you using different client instances? And please check why you get ECONNREFUSED and ECONNRESET errors; there's a problem with your server connection.

Why closed?
I am using Kue, which also uses Redis, and I provide the configuration pieces since it uses a different Redis client.
What should I do?
I have contacted OpenShift.

I see that Kue can share a Redis client. I will change that when I get back.

After changing Kue to share the Redis client:

I opened an issue with Kue also.

I've updated my configuration with Kue so that it now also uses node_redis. But I still get the error:

I’ve opened a support ticket with OpenShift also.

There are unhandled errors, so there are definitely Redis connections open that do not have an attached error listener.

I closed the issue as I'm quite certain there is not much more I can help you with in this case. The thrown errors are not thrown by node_redis directly but by the net module from Node.js. The error is just passed to the library and emitted by node_redis. I hope you have found a solution in the meantime.

Hello, I have the same problem and I have no idea how to fix it

I'm having the same issue:

CLIENT: 3.0.2
REDIS: 5.0.3

  • Running within Docker / node:14.15.4-alpine3.12
  • Running inside GCLOUD (latency < 10ms)
  • After starting, the app pings the Redis server every 30 seconds
  • The server crashes after 4 minutes (usually), just after a PING (the PING timestamp === the ERROR timestamp)
  • Not a single event is registered in the log (end, error, uncaughtException, reconnecting)
  • I've tried ECHO instead of PING but I got the same results.

    Redis client handling

    How the Redis server manages client connections

    This document provides information about how Redis handles clients at the network layer level: connections, timeouts, buffers, and other similar topics are covered here.

    The information contained in this document is only applicable to Redis version 2.6 or greater.

    Accepting Client Connections

    Redis accepts client connections on the configured TCP port and on the Unix socket, if enabled. When a new client connection is accepted, the following operations are performed:

    • The client socket is put in the non-blocking state since Redis uses multiplexing and non-blocking I/O.
    • The TCP_NODELAY option is set in order to ensure that there are no delays to the connection.
    • A readable file event is created so that Redis is able to collect the client queries as soon as new data is available to read on the socket.

    After the client is initialized, Redis checks if it is already at the limit configured for the number of simultaneous clients (configured using the maxclients configuration directive, see the next section of this document for further information).

    When Redis can’t accept a new client connection because the maximum number of clients has been reached, it tries to send an error to the client in order to make it aware of this condition, closing the connection immediately. The error message will reach the client even if the connection is closed immediately by Redis because the new socket output buffer is usually big enough to contain the error, so the kernel will handle transmission of the error.

    What Order are Client Requests Served In?

    The order is determined by a combination of the client socket file descriptor number and order in which the kernel reports events, so the order should be considered as unspecified.

    However, Redis does the following two things when serving clients:

    • It only performs a single read() system call every time there is something new to read from the client socket. This ensures that if we have multiple clients connected, and a few send queries at a high rate, other clients are not penalized and will not experience latency issues.
    • However once new data is read from a client, all the queries contained in the current buffers are processed sequentially. This improves locality and does not need iterating a second time to see if there are clients that need some processing time.

    Maximum Concurrent Connected Clients

    In Redis 2.4 there was a hard-coded limit for the maximum number of clients that could be handled simultaneously.

    In Redis 2.6 and newer, this limit is dynamic: by default it is set to 10000 clients, unless otherwise stated by the maxclients directive in redis.conf .

    However, Redis checks with the kernel what the maximum number of file descriptors that we are able to open is (the soft limit is checked). If the limit is less than the maximum number of clients we want to handle, plus 32 (that is the number of file descriptors Redis reserves for internal uses), then the maximum number of clients is updated to match the number of clients it is really able to handle under the current operating system limit.

    When maxclients is set to a number greater than Redis can support, a message is logged at startup:

    When Redis is configured in order to handle a specific number of clients it is a good idea to make sure that the operating system limit for the maximum number of file descriptors per process is also set accordingly.

    Under Linux these limits can be set both in the current session and as a system-wide setting with the following commands:

    • ulimit -Sn 100000 # This will only work if hard limit is big enough.
    • sysctl -w fs.file-max=100000

    Output Buffer Limits

    Redis needs to handle a variable-length output buffer for every client, since a command can produce a large amount of data that needs to be transferred to the client.

    However it is possible that a client sends more commands producing more output to serve at a faster rate than that which Redis can send the existing output to the client. This is especially true with Pub/Sub clients in case a client is not able to process new messages fast enough.

    Both conditions will cause the client output buffer to grow and consume more and more memory. For this reason by default Redis sets limits to the output buffer size for different kind of clients. When the limit is reached the client connection is closed and the event logged in the Redis log file.

    There are two kind of limits Redis uses:

    • The hard limit is a fixed limit that when reached will make Redis close the client connection as soon as possible.
    • The soft limit instead is a limit that depends on the time, for instance a soft limit of 32 megabytes per 10 seconds means that if the client has an output buffer bigger than 32 megabytes for, continuously, 10 seconds, the connection gets closed.

    Different kind of clients have different default limits:

    • Normal clients have a default limit of 0, meaning no limit at all, because most normal clients use blocking implementations, sending a single command and waiting for the reply to be completely read before sending the next command, so it is never desirable to close the connection of a normal client.
    • Pub/Sub clients have a default hard limit of 32 megabytes and a soft limit of 8 megabytes per 60 seconds.
    • Replicas have a default hard limit of 256 megabytes and a soft limit of 64 megabytes per 60 seconds.

    It is possible to change the limit at runtime using the CONFIG SET command or in a permanent way using the Redis configuration file redis.conf . See the example redis.conf in the Redis distribution for more information about how to set the limit.
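In redis.conf syntax, the defaults listed above look like this (the three values are hard limit, soft limit, and soft seconds; the 'replica' class was called 'slave' in older versions):

```
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit pubsub 32mb 8mb 60
client-output-buffer-limit replica 256mb 64mb 60
```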

    Query Buffer Hard Limit

    Every client is also subject to a query buffer limit. This is a non-configurable hard limit that will close the connection when the client query buffer (that is the buffer we use to accumulate commands from the client) reaches 1 GB, and is actually only an extreme limit to avoid a server crash in case of client or server software bugs.

    Client Eviction

    Redis is built to handle a very large number of client connections. Client connections tend to consume memory, and when there are many of them, the aggregate memory consumption can be extremely high, leading to data eviction or out-of-memory errors. These cases can be mitigated to an extent using output buffer limits, but Redis allows us a more robust configuration to limit the aggregate memory used by all clients’ connections.

    This mechanism is called client eviction, and it’s essentially a safety mechanism that will disconnect clients once the aggregate memory usage of all clients is above a threshold. The mechanism first attempts to disconnect clients that use the most memory. It disconnects the minimal number of clients needed to return below the maxmemory-clients threshold.

    maxmemory-clients defines the maximum aggregate memory usage of all clients connected to Redis. The aggregation takes into account all the memory used by the client connections: the query buffer, the output buffer, and other intermediate buffers.

    Note that replica and master connections aren’t affected by the client eviction mechanism. Therefore, such connections are never evicted.

    maxmemory-clients can be set permanently in the configuration file ( redis.conf ) or via the CONFIG SET command. This setting can either be 0 (meaning no limit), a size in bytes (possibly with mb / gb suffix), or a percentage of maxmemory by using the % suffix (e.g. setting it to 10% would mean 10% of the maxmemory configuration).

    The default setting is 0, meaning client eviction is turned off by default. However, for any large production deployment, it is highly recommended to configure some non-zero maxmemory-clients value. A value of 5%, for example, can be a good place to start.
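The accepted forms, as a redis.conf sketch (the specific values are illustrative):

```
maxmemory-clients 0     # no limit (default)
maxmemory-clients 1g    # absolute cap in bytes
maxmemory-clients 5%    # percentage of maxmemory
```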

    It is possible to flag a specific client connection to be excluded from the client eviction mechanism. This is useful for control path connections. If, for example, you have an application that monitors the server via the INFO command and alerts you in case of a problem, you might want to make sure this connection isn't evicted. You can do so using the following command (from the relevant client's connection):

    CLIENT NO-EVICT on

    And you can revert that with:

    CLIENT NO-EVICT off

    For more information and an example refer to the maxmemory-clients section in the default redis.conf file.

    Client eviction is available from Redis 7.0.

    Client Timeouts

    By default recent versions of Redis don’t close the connection with the client if the client is idle for many seconds: the connection will remain open forever.

    However if you don’t like this behavior, you can configure a timeout, so that if the client is idle for more than the specified number of seconds, the client connection will be closed.

    You can configure this limit via redis.conf or simply using the CONFIG SET timeout command.

    Note that the timeout only applies to normal clients and it does not apply to Pub/Sub clients, since a Pub/Sub connection is a push style connection so a client that is idle is the norm.

    Even if by default connections are not subject to timeout, there are two conditions when it makes sense to set a timeout:

    • Mission critical applications where a bug in the client software may saturate the Redis server with idle connections, causing service disruption.
    • As a debugging mechanism in order to be able to connect with the server if a bug in the client software saturates the server with idle connections, making it impossible to interact with the server.

    Timeouts are not to be considered very precise: Redis avoids setting timer events or running O(N) algorithms in order to check idle clients, so the check is performed incrementally from time to time. This means that it is possible that while the timeout is set to 10 seconds, the client connection will be closed, for instance, after 12 seconds if many clients are connected at the same time.

    The CLIENT Command

    The Redis CLIENT command allows you to inspect the state of every connected client, to kill a specific client, and to name connections. It is a very powerful debugging tool if you use Redis at scale.

    CLIENT LIST is used in order to obtain a list of connected clients and their state:

    In the above example two clients are connected to the Redis server. Let’s look at what some of the data returned represents:

    • addr: The client address, that is, the client IP and the remote port number it used to connect with the Redis server.
    • fd: The client socket file descriptor number.
    • name: The client name as set by CLIENT SETNAME .
    • age: The number of seconds the connection existed for.
    • idle: The number of seconds the connection is idle.
    • flags: The kind of client (N means normal client, check the full list of flags).
    • omem: The amount of memory used by the client for the output buffer.
    • cmd: The last executed command.

    See the CLIENT LIST documentation (https://redis.io/commands/client-list) for the full listing of fields and their purpose.

    Once you have the list of clients, you can close a client’s connection using the CLIENT KILL command, specifying the client address as its argument.

    The commands CLIENT SETNAME and CLIENT GETNAME can be used to set and get the connection name. Starting with Redis 4.0, the client name is shown in the SLOWLOG output, to help identify clients that create latency issues.

    TCP keepalive

    From version 3.2 onwards, Redis has TCP keepalive (the SO_KEEPALIVE socket option) enabled by default and set to about 300 seconds. This option is useful in order to detect dead peers (clients that cannot be reached even if they look connected). Moreover, if there is network equipment between clients and servers that needs to see some traffic to keep the connection open, the option will prevent unexpected connection-closed events.
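The corresponding redis.conf directive (300 is the default mentioned above; 0 disables keepalive):

```
tcp-keepalive 300
```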


    Hi,

    I've been using this nice plugin for years and have had very few problems with it.
    But now a strange thing is happening:
    all websites updated to WordPress 5.5/5.5.1 can no longer connect to the Redis server.
    Any other website on the same server still on WP 5.4.2 or lower works flawlessly…

    here's a working diagnostic from WP 5.4.2:

    Status: Connected
    Client: PhpRedis (v5.0.2)
    Drop-in: Valid
    Disabled: No
    Filesystem: Working
    Ping: 1
    Errors: []
    PhpRedis: 5.0.2
    Predis: Not loaded
    Credis: Not loaded
    PHP Version: 7.3.21
    Plugin Version: 2.0.13
    Redis Version: 3.2.12
    Multisite: No
    Global Prefix: "wp1g_"
    Blog Prefix: "wp1g_"
    WP_REDIS_PREFIX: "*****"
    WP_CACHE_KEY_SALT: "*****"
    Global Groups: [
        "blog-details",
        "blog-id-cache",
        "blog-lookup",
        "global-posts",
        "networks",
        "rss",
        "sites",
        "site-details",
        "site-lookup",
        "site-options",
        "site-transient",
        "users",
        "useremail",
        "userlogins",
        "usermeta",
        "user_meta",
        "userslugs",
        "redis-cache",
        "blog_meta"
    ]
    Ignored Groups: [
        "counts",
        "plugins",
        "themes"
    ]
    Unflushable Groups: []
    Drop-ins: [
        "advanced-cache.php v by ",
        "Redis Object Cache Drop-In v2.0.13 by Till Krüss"
    ]
    

    and here's a non-working one from WP 5.5.1 (same server, same cPanel account):

    Status: Not connected
    Client: PhpRedis (v5.0.2)
    Drop-in: Valid
    Disabled: No
    Filesystem: Working
    Ping: 1
    Errors: [
        "socket error on read socket"
    ]
    PhpRedis: 5.0.2
    Predis: Not loaded
    Credis: Not loaded
    PHP Version: 7.3.21
    Plugin Version: 2.0.13
    Redis Version: 3.2.12
    Multisite: No
    Global Prefix: "wplu_"
    Blog Prefix: "wplu_"
    WP_REDIS_PREFIX: "*****"
    WP_CACHE_KEY_SALT: "*****"
    Global Groups: [
        "blog-details",
        "blog-id-cache",
        "blog-lookup",
        "global-posts",
        "networks",
        "rss",
        "sites",
        "site-details",
        "site-lookup",
        "site-options",
        "site-transient",
        "users",
        "useremail",
        "userlogins",
        "usermeta",
        "user_meta",
        "userslugs",
        "redis-cache",
        "blog_meta"
    ]
    Ignored Groups: [
        "counts",
        "plugins",
        "themes",
        "blog-details",
        "blog-id-cache",
        "blog-lookup",
        "global-posts",
        "networks",
        "rss",
        "sites",
        "site-details",
        "site-lookup",
        "site-options",
        "site-transient",
        "users",
        "useremail",
        "userlogins",
        "usermeta",
        "user_meta",
        "userslugs",
        "redis-cache",
        "blog_meta"
    ]
    Unflushable Groups: []
    Drop-ins: [
        "advanced-cache.php v by ",
        "Redis Object Cache Drop-In v2.0.13 by Till Krüss"
    ]
    

    Any advice?
    Thanks in advance!
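Given that the only difference between the two sites is the WordPress version, one low-risk check is to loosen the plugin’s connection settings in wp-config.php. The sketch below is only an assumption, not a confirmed fix: the constant names follow the Redis Object Cache plugin’s documented options, and the host, port, and timeout values are placeholders to adapt.

```php
// wp-config.php — sketch only; adapt host/port to your setup.
define( 'WP_REDIS_HOST', '127.0.0.1' );
define( 'WP_REDIS_PORT', 6379 );

// Connection and read timeouts in seconds. The defaults are tight,
// and on a busy shared host a slow read can surface as
// "socket error on read socket" even though the server is reachable.
define( 'WP_REDIS_TIMEOUT', 5 );
define( 'WP_REDIS_READ_TIMEOUT', 5 );
```

If the error persists with generous timeouts, the next suspects are the fairly old Redis 3.2.12 server itself and server-side `timeout`/`maxmemory` settings.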

node.js — socket.io cause php redis read error on connection?

I use PHP (Laravel 5.8) to broadcast, and I use laravel-echo-server; it works. But recently I needed to do something myself, so I wrote a socket service in Node.js. This is my code:

    const jwt = require('jsonwebtoken');
    const server = require('http').Server();
    const io = require('socket.io')(server);
    
    ... 
    
    // ==== The question is here =====================
    const Redis = require('ioredis');
    const redis = new Redis({
      port: REDIS_PORT,
      host: REDIS_HOST,
      password: REDIS_PASSWORD
    });
    
    redis.psubscribe('myTestChannel.*');
    
    redis.on('pmessage', function(pattern, channel, message) {
      console.log(channel, message);
      const object_message = JSON.parse(message);
      io.sockets.emit(channel, object_message.data);
    });
    // ==== The question is here =====================
    
    io.sockets.use(function (socket, next) {
      if (socket.handshake.query && socket.handshake.query.token){
        // auth
      } else {
        next(new Error('Authentication error'));
      }
    }).on('connection', function(socket) {
      console.log('==========connection============');
      console.log('Socket Connect with ID: ' + socket.id);
    
      socket.on('join', (data) => {
        ... do something
      });
    
      socket.on('disconnect', () => {
        ... do something
    
      })
    });
    
    server.listen(LISTEN_PORT, function () {
      console.log(`Start listen ${LISTEN_PORT} port`);
    });
    

It works, but after running for a long time my PHP gets an error message: phpredis read error on connection. I’m not sure of the real reason, but I guess it’s something about my socket service, because laravel-echo-server worked great.

I’m not sure the position of my redis.psubscribe is right. Maybe it keeps a long-lived connection and that causes the PHP read error on connection?

Should I move the redis.psubscribe into on('connection') and unsubscribe on disconnect?

I want to know whether redis.psubscribe is the main cause of the problem. Thanks for your help.

    2021-11-18
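A note on the diagnosis: the long-lived psubscribe connection in Node and the PHP publisher use separate Redis connections, so the subscription itself is an unlikely direct cause. In phpredis, read error on connection usually means PHP tried to read from a socket that the server (for example via Redis’s idle `timeout` setting) had already closed — typical with persistent connections and long gaps between requests. Below is a hypothetical sketch of the publisher side that sets explicit timeouts and retries once on a stale socket; the host, port, and helper name are assumptions, not from the question.

```php
// Sketch, assuming the phpredis extension; values are placeholders.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379, 2.5);           // 2.5 s connect timeout
$redis->setOption(Redis::OPT_READ_TIMEOUT, 2.5);   // fail fast instead of hanging

// Publish with one reconnect attempt: a dropped idle socket throws a
// RedisException ("read error on connection") on the first command.
function publishWithRetry(Redis $redis, string $channel, string $payload): void
{
    try {
        $redis->publish($channel, $payload);
    } catch (RedisException $e) {
        $redis->connect('127.0.0.1', 6379, 2.5);   // re-open the stale socket
        $redis->publish($channel, $payload);
    }
}
```

On the server side, `CONFIG GET timeout` shows whether idle clients are being disconnected; a non-zero value there combined with persistent PHP connections reproduces exactly this symptom.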




