On our production server we got the error below, and in the meantime the app was not running.
In redis-cli I ran the following command:
config set stop-writes-on-bgsave-error no
After that the error stopped and everything has been working fine since.
[1095] 19 Nov 09:47:47.017 * 1 changes in 900 seconds. Saving...
[1095] 19 Nov 09:47:47.018 * Background saving started by pid 10588
[10588] 19 Nov 09:47:47.102 #
=== REDIS BUG REPORT START: Cut & paste starting from here ===
[10588] 19 Nov 09:47:47.102 # Redis 2.8.14 crashed by signal: 11
[10588] 19 Nov 09:47:47.102 # Failed assertion: <no assertion failed> (<no file>:0)
[10588] 19 Nov 09:47:47.102 # --- STACK TRACE
redis-rdb-bgsave 127.0.0.1:6379(logStackTrace+0x33)[0x4468c3]
redis-rdb-bgsave 127.0.0.1:6379(lzf_compress+0x3ff)[0x41ed6f]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x10340)[0x7f3bddc8f340]
redis-rdb-bgsave 127.0.0.1:6379(lzf_compress+0x3ff)[0x41ed6f]
redis-rdb-bgsave 127.0.0.1:6379(rdbSaveLzfStringObject+0x5c)[0x42d13c]
redis-rdb-bgsave 127.0.0.1:6379(rdbSaveRawString+0x18e)[0x42d53e]
redis-rdb-bgsave 127.0.0.1:6379(rdbSaveObject+0xf5)[0x42dfd5]
redis-rdb-bgsave 127.0.0.1:6379(rdbSaveKeyValuePair+0x9a)[0x42e2fa]
redis-rdb-bgsave 127.0.0.1:6379(rdbSave+0x20e)[0x42e53e]
redis-rdb-bgsave 127.0.0.1:6379(rdbSaveBackground+0x64)[0x42e844]
redis-rdb-bgsave 127.0.0.1:6379(serverCron+0x4e7)[0x41a2d7]
redis-rdb-bgsave 127.0.0.1:6379(aeProcessEvents+0x202)[0x414e72]
redis-rdb-bgsave 127.0.0.1:6379(aeMain+0x2b)[0x4150fb]
redis-rdb-bgsave 127.0.0.1:6379(main+0x310)[0x413e90]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7f3bdd8daec5]
redis-rdb-bgsave 127.0.0.1:6379[0x413fe8]
[10588] 19 Nov 09:47:47.103 # --- INFO OUTPUT
[10588] 19 Nov 09:47:47.103 # # Server
redis_version:2.8.14
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:b84ccf22550d3c52
redis_mode:standalone
os:Linux 3.13.0-36-generic x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.8.2
process_id:10588
run_id:45241b74590e9dd371ba88f65626c40366a2f249
tcp_port:6379
uptime_in_seconds:4408083
uptime_in_days:51
hz:10
lru_clock:7084523
config_file:/etc/redis/redis.conf
# Clients
connected_clients:15
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:99906320
used_memory_human:95.28M
used_memory_rss:21938176
used_memory_peak:99906320
used_memory_peak_human:95.28M
used_memory_lua:68608
mem_fragmentation_ratio:0.22
mem_allocator:jemalloc-3.6.0
# Persistence
loading:0
rdb_changes_since_last_save:2
rdb_bgsave_in_progress:0
rdb_last_save_time:1416345974
rdb_last_bgsave_status:err
rdb_last_bgsave_time_sec:1
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
# Stats
total_connections_received:8446
total_commands_processed:4802554
instantaneous_ops_per_sec:0
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:77886
evicted_keys:0
keyspace_hits:3878118
keyspace_misses:134845
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:824
# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:0.01
used_cpu_user:0.07
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
# Commandstats
cmdstat_get:calls=3479505,usec=8420599,usec_per_call=2.42
cmdstat_set:calls=82510,usec=251311,usec_per_call=3.05
cmdstat_setex:calls=155792,usec=642233,usec_per_call=4.12
cmdstat_del:calls=13667,usec=26460,usec_per_call=1.94
cmdstat_exists:calls=2414,usec=3876,usec_per_call=1.61
cmdstat_hset:calls=75,usec=420,usec_per_call=5.60
cmdstat_hmset:calls=266321,usec=841845,usec_per_call=3.16
cmdstat_hmget:calls=267062,usec=908177,usec_per_call=3.40
cmdstat_hincrby:calls=150,usec=603,usec_per_call=4.02
cmdstat_select:calls=2302,usec=3719,usec_per_call=1.62
cmdstat_expire:calls=266396,usec=336560,usec_per_call=1.26
cmdstat_flushdb:calls=11,usec=22560,usec_per_call=2050.91
cmdstat_flushall:calls=11,usec=13008,usec_per_call=1182.55
cmdstat_eval:calls=83054,usec=4571113,usec_per_call=55.04
cmdstat_evalsha:calls=183284,usec=9222035,usec_per_call=50.32
# Keyspace
db0:keys=17368,expires=36,avg_ttl=7364823
hash_init_value: 1412144605
[10588] 19 Nov 09:47:47.103 # --- CLIENT LIST OUTPUT
[10588] 19 Nov 09:47:47.103 # id=6493 addr=127.0.0.1:33020 fd=5 name= age=1025927 idle=1025927 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=get
id=6496 addr=127.0.0.1:33026 fd=7 name= age=1025906 idle=412463 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=del
id=6497 addr=127.0.0.1:33029 fd=8 name= age=1025906 idle=584851 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=get
id=6498 addr=127.0.0.1:33032 fd=9 name= age=1025906 idle=468 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=get
id=6499 addr=127.0.0.1:33037 fd=10 name= age=1025906 idle=3166 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=get
id=6500 addr=127.0.0.1:33041 fd=11 name= age=1025906 idle=164 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=get
id=6501 addr=127.0.0.1:33043 fd=12 name= age=1025906 idle=2267 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=get
id=7031 addr=127.0.0.1:44340 fd=6 name= age=750644 idle=144818 flags=N db=1 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=get
id=7165 addr=127.0.0.1:47417 fd=18 name= age=673710 idle=142116 flags=N db=1 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=get
id=7166 addr=127.0.0.1:47430 fd=13 name= age=673645 idle=149319 flags=N db=1 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=get
id=7167 addr=127.0.0.1:47437 fd=19 name= age=673645 idle=579749 flags=N db=1 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=get
id=7172 addr=127.0.0.1:47490 fd=14 name= age=672741 idle=584851 flags=N db=1 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=get
id=7177 addr=127.0.0.1:47539 fd=15 name= age=671841 idle=155627 flags=N db=1 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=get
id=8380 addr=127.0.0.1:44956 fd=23 name= age=56718 idle=37 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=get
id=8381 addr=127.0.0.1:44958 fd=24 name= age=56718 idle=37 flags=N db=1 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=evalsha
[10588] 19 Nov 09:47:47.103 # --- REGISTERS
[10588] 19 Nov 09:47:47.103 #
RAX:00000000000000ec RBX:00007f3bdc3d4a06
RCX:00007f3bdbbfff14 RDX:00007f3bd4800000
RDI:00007f3bd8e50008 RSI:00007f3bd4f0ad90
RBP:00007f3bd7d849fc RSP:00007fff819c2858
R8 :0000000000000000 R9 :00000000000000eb
R10:00007f3bdbbfff13 R11:0000000000000108
R12:00007f3bdc3d4a08 R13:0000000000000108
R14:00000000000000ec R15:0000000000000000
RIP:000000000041ed6f EFL:0000000000010212
CSGSFS:0000000000000033
[10588] 19 Nov 09:47:47.103 # (00007fff819c2867) -> 00007f3bdb92c849
[10588] 19 Nov 09:47:47.103 # (00007fff819c2866) -> 00007f3bdb21f94a
[10588] 19 Nov 09:47:47.103 # (00007fff819c2865) -> 00007f3bdbbf354e
[10588] 19 Nov 09:47:47.103 # (00007fff819c2864) -> 00007f3bdbbef243
[10588] 19 Nov 09:47:47.103 # (00007fff819c2863) -> 00007f3bdb92c822
[10588] 19 Nov 09:47:47.103 # (00007fff819c2862) -> 00007f3bdb7b8cda
[10588] 19 Nov 09:47:47.103 # (00007fff819c2861) -> 00007f3bdbbf1079
[10588] 19 Nov 09:47:47.103 # (00007fff819c2860) -> 00007f3bdb21e973
[10588] 19 Nov 09:47:47.103 # (00007fff819c285f) -> 00007f3bdaec93f8
[10588] 19 Nov 09:47:47.103 # (00007fff819c285e) -> 00007f3bdb226632
[10588] 19 Nov 09:47:47.103 # (00007fff819c285d) -> 00007f3bdb13b169
[10588] 19 Nov 09:47:47.103 # (00007fff819c285c) -> 00007f3bdb6e05e2
[10588] 19 Nov 09:47:47.103 # (00007fff819c285b) -> 00007f3bdb753000
[10588] 19 Nov 09:47:47.103 # (00007fff819c285a) -> 00007f3bdb940bf6
[10588] 19 Nov 09:47:47.103 # (00007fff819c2859) -> 00007f3bdb7667dd
[10588] 19 Nov 09:47:47.103 # (00007fff819c2858) -> 00007f3bdb93fa4f
[10588] 19 Nov 09:47:47.103 # --- FAST MEMORY TEST
[10588] 19 Nov 09:47:48.197 # Fast memory test PASSED, however your memory can still be broken. Please run a memory test for several hours if possible.
[10588] 19 Nov 09:47:48.197 #
=== REDIS BUG REPORT END. Make sure to include from START to END. ===
Please report the crash opening an issue on github:
http://github.com/antirez/redis/issues
Suspect RAM error? Use redis-server --test-memory to verify it.
Now, while checking the log, it keeps printing the entries below:
product@app-srv01:~$ sudo tail -f /var/log/redis/redis-server.log
[913] 19 Nov 10:24:07.082 * 10 changes in 300 seconds. Saving...
[913] 19 Nov 10:24:07.083 * Background saving started by pid 16825
[16825] 19 Nov 10:24:07.084 * DB saved on disk
[16825] 19 Nov 10:24:07.084 * RDB: 4 MB of memory used by copy-on-write
[913] 19 Nov 10:24:07.183 * Background saving terminated with success
[913] 19 Nov 10:29:08.000 * 10 changes in 300 seconds. Saving...
[913] 19 Nov 10:29:08.001 * Background saving started by pid 21391
[21391] 19 Nov 10:29:08.003 * DB saved on disk
[21391] 19 Nov 10:29:08.003 * RDB: 4 MB of memory used by copy-on-write
[913] 19 Nov 10:29:08.101 * Background saving terminated with success
Any suggestions to fix the issue?
Sometimes Redis will throw the following error:
MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.
Override the Error in the Redis Configuration
Using redis-cli, you can stop it from trying to save the snapshot:
config set stop-writes-on-bgsave-error no
This is a quick workaround, but if you care about the data you are using Redis for, you should check why BGSAVE failed in the first place.
This gives a temporary fix to the problem. However, it is a poor way to handle the error, since all this option does is stop Redis from notifying you that writes have been stopped and carry on without writing the data to a snapshot. It simply ignores the error.
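To actually see why BGSAVE failed before silencing it, a quick check (the log path and field names are taken from the output shown above; adjust them to your setup) is:

sudo tail -n 50 /var/log/redis/redis-server.log
redis-cli INFO persistence | grep -E 'rdb_last_bgsave_status|rdb_bgsave_in_progress'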
Save Redis on Low Memory
There might be errors during the bgsave process due to low memory.
This error occurs because BGSAVE failed. During BGSAVE, Redis forks a child process to save the data to disk. Although the exact reason for the BGSAVE failure can be checked in the logs (usually at /var/log/redis/redis-server.log on Linux machines), a lot of the time BGSAVE fails because the fork can't allocate memory. Often the fork fails to allocate memory (even though the machine has enough RAM available) because of a conflicting optimization by the OS.
As can be read from Redis FAQ:
Background saving is failing with a fork() error under Linux even if I’ve a lot of free RAM!
Redis background saving schema relies on the copy-on-write semantic of fork in modern operating systems: Redis forks (creates a child process) that is an exact copy of the parent. The child process dumps the DB on disk and finally exits. In theory the child should use as much memory as the parent being a copy, but actually thanks to the copy-on-write semantic implemented by most modern operating systems the parent and child process will share the common memory pages. A page will be duplicated only when it changes in the child or in the parent. Since in theory all the pages may change while the child process is saving, Linux can't tell in advance how much memory the child will take, so if the overcommit_memory setting is set to zero fork will fail unless there is as much free RAM as required to really duplicate all the parent memory pages, with the result that if you have a Redis dataset of 3 GB and just 2 GB of free memory it will fail.
Setting overcommit_memory to 1 tells Linux to relax and perform the fork in a more optimistic allocation fashion, and this is indeed what you want for Redis.
# echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
# sysctl vm.overcommit_memory=1
Redis doesn't need as much memory as the OS thinks it does to write to disk, so the OS may pre-emptively fail the fork.
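One way to verify that the new setting is active (just a sanity check, not part of the original FAQ answer):

cat /proc/sys/vm/overcommit_memory
sysctl vm.overcommit_memory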
During writes to Redis (SET foo bar) I am getting the following error:
MISCONF Redis is configured to save RDB snapshots, but is currently
not able to persist on disk. Commands that may modify the data set are
disabled. Please check Redis logs for details about the error.
Basically I understand that the problem is that Redis is not able to save data to disk, but I have no idea how to get rid of the problem.
Also, the following question describes the same problem; it was abandoned a long time ago with no answers and most probably no attempts to solve it.
30 Answers
In case you encounter the error and some important data cannot be discarded on the running Redis instance (problems with permissions for the rdb file or its directory, or running out of disk space), you can always redirect the rdb file to be written somewhere else.
Using redis-cli, you can do something like this:
CONFIG SET dir /tmp/some/directory/other/than/var
CONFIG SET dbfilename temp.rdb
After this, you might want to execute a BGSAVE command to make sure that the data will be written to the rdb file. Make sure that when you execute INFO persistence, rdb_bgsave_in_progress is already 0 and rdb_last_bgsave_status is ok. After that, you can start backing up the generated rdb file somewhere safe.
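Put together as a single redis-cli session (the directory is just an example), the flow looks roughly like this:

redis-cli CONFIG SET dir /tmp/some/directory/other/than/var
redis-cli CONFIG SET dbfilename temp.rdb
redis-cli BGSAVE
redis-cli INFO persistence | grep -E 'rdb_bgsave_in_progress|rdb_last_bgsave_status'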
Using redis-cli, you can stop it from trying to save the snapshot:
config set stop-writes-on-bgsave-error no
This is a quick workaround, but if you care about the data you are using it for, you should check why bgsave failed in the first place.
Restart your Redis server.
- macOS (brew): brew services restart redis
- Linux: sudo service redis restart / sudo systemctl restart redis
- Windows: Windows + R -> type services.msc, Enter -> search for Redis, then click restart.
I personally had this issue after upgrading Redis with Brew (brew upgrade).
After rebooting the laptop, it immediately worked.
There might be errors during the bgsave process due to low memory. Try this (from redis background save FAQ)
echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
sysctl vm.overcommit_memory=1
This error occurs because BGSAVE failed. During BGSAVE, Redis forks a child process to save the data to disk. Although the exact reason for the BGSAVE failure can be checked in the logs (usually at /var/log/redis/redis-server.log on Linux machines), a lot of the time BGSAVE fails because the fork can't allocate memory. Often the fork fails to allocate memory (even though the machine has enough RAM available) because of a conflicting optimization by the OS.
As can be read from Redis FAQ:
Redis background saving schema relies on the copy-on-write semantic of fork in modern operating systems: Redis forks (creates a child process) that is an exact copy of the parent. The child process dumps the DB on disk and finally exits. In theory the child should use as much memory as the parent being a copy, but actually thanks to the copy-on-write semantic implemented by most modern operating systems the parent and child process will share the common memory pages. A page will be duplicated only when it changes in the child or in the parent. Since in theory all the pages may change while the child process is saving, Linux can’t tell in advance how much memory the child will take, so if the overcommit_memory setting is set to zero fork will fail unless there is as much free RAM as required to really duplicate all the parent memory pages, with the result that if you have a Redis dataset of 3 GB and just 2 GB of free memory it will fail.
Setting overcommit_memory to 1 tells Linux to relax and perform the fork in a more optimistic allocation fashion, and this is indeed what you want for Redis.
Redis doesn't need as much memory as the OS thinks it does to write to disk, so the OS may pre-emptively fail the fork.
To resolve this, you can:
Modify /etc/sysctl.conf and add:
vm.overcommit_memory=1
Then reload the sysctl settings:
On FreeBSD:
sudo /etc/rc.d/sysctl reload
On Linux:
sudo sysctl -p /etc/sysctl.conf
In case you are working on a Linux machine, also recheck the file and folder permissions of the database.
The DB file and the path to it can be obtained in redis-cli via:
CONFIG GET dir
CONFIG GET dbfilename
and on the command line via ls -l. The permissions for the directory should be 755, and those for the file should be 644. Also, redis-server normally executes as the user redis, therefore it is also good to give the user redis ownership of the folder by executing sudo chown -R redis:redis /path/to/rdb/folder. This has been elaborated in the answer here.
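A short check-and-fix sequence along those lines (the target is whatever CONFIG GET dir returns, captured here in a shell variable purely for illustration):

DATADIR=$(redis-cli CONFIG GET dir | tail -n 1)
ls -ld "$DATADIR"
sudo chown -R redis:redis "$DATADIR"
sudo chmod 755 "$DATADIR"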
Thanks everyone for checking the problem; apparently the error was produced during bgsave.
For me, typing config set stop-writes-on-bgsave-error no in a shell and restarting Redis solved the problem.
Start Redis Server in a directory where Redis has write permissions
The answers above will definitely solve your problem, but here’s what’s actually going on:
The default location for storing the dump.rdb file is ./ (denoting the current directory). You can verify this in your redis.conf file. Therefore, the directory from which you start the Redis server is where a dump.rdb file will be created and updated.
It seems you have started the Redis server in a directory where Redis does not have the correct permissions to create the dump.rdb file.
To make matters worse, Redis will probably not allow you to shut down the server until it is able to create the rdb file, to ensure the proper saving of data.
To solve this problem, you must go into the active Redis client environment using redis-cli, update the dir key, and set its value to your project folder or any folder where a non-root user has permission to save. Then run BGSAVE to invoke the creation of the dump.rdb file.
CONFIG SET dir "/hardcoded/path/to/your/project/folder"
BGSAVE
(Now, if you need to save the dump.rdb file in the directory that you started the server in, then you will need to change permissions for the directory so that Redis can write to it. You can search Stack Overflow for how to do that.)
You should now be able to shut down the Redis server. Note that we hardcoded the path. Hardcoding is rarely a good practice, and I highly recommend starting the Redis server from your project directory and changing the dir key back to ./.
CONFIG SET dir "./"
BGSAVE
That way, when you need Redis for another project, the dump file will be created in your current project's directory and not in the hardcoded path's project directory.
If you're running macOS and have recently upgraded to Catalina, you may need to run brew services restart redis, as suggested in this issue.
I had encountered this error and was able to figure out from the log that it was caused by insufficient disk space. All the data that had been inserted was, in my case, no longer needed, so I tried FLUSHALL. Since the redis-rdb-bgsave process was running, it would not allow flushing the data either. I followed the steps below and was able to continue.
- Log in to the Redis client (redis-cli)
- Execute config set stop-writes-on-bgsave-error no
- Execute FLUSHALL (Data stored was not needed)
- Execute config set stop-writes-on-bgsave-error yes
The process redis-rdb-bgsave was no longer running after the above steps.
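As a redis-cli session, the steps above look roughly like this (only run FLUSHALL if the data really is disposable):

127.0.0.1:6379> CONFIG SET stop-writes-on-bgsave-error no
OK
127.0.0.1:6379> FLUSHALL
OK
127.0.0.1:6379> CONFIG SET stop-writes-on-bgsave-error yes
OK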
A more permanent fix might be to look in /etc/redis/redis.conf, around lines 200-250, where the settings for the rdb features live (these were not part of Redis back in the 2.x days).
Notably,
dir ./
can be changed to
dir /home/someuser/redislogfiledirectory
or you could comment out all the save lines and not worry about persistence. (See the comments in /etc/redis/redis.conf.)
Also, don’t forget
service redis-server stop
service redis-server start
I faced a similar issue; the main reason behind it was the memory (RAM) consumption by Redis.
My EC2 machine had 8 GB of RAM (around 7.4 GB available for consumption).
When my program was running, RAM usage went up to 7.2 GB, leaving hardly ~100 MB free, and this generally triggers the MISCONF Redis error...
You can determine the RAM consumption using the htop command. Look at the Mem row after running htop. If it shows high consumption (in my case it was 7.2GB/7.4GB), it's better to upgrade to an instance with more memory.
In this scenario, using config set stop-writes-on-bgsave-error no would be a disaster for the server and may end up disrupting other services running on it (if any). So it is better to avoid that config command and UPGRADE YOUR REDIS MACHINE.
FYI: You may need to install htop for this to work: sudo apt-get install htop
One more possible cause is some other RAM-heavy service running on your system; check the other services running on your server/machine/instance and stop them if they are not necessary. To list all the services running on your machine, use service --status-all.
And a suggestion for people directly pasting the config command: please do a bit of research and at least warn the user before suggesting such commands. As @Rodrigo mentioned in his comment: «It does not look cool to ignore the errors.»
—UPDATE—
You can also configure maxmemory and maxmemory-policy to define the behavior of Redis when a specific memory limit is reached.
For example, if I want to keep the memory limit at 6 GB and delete the least recently used keys from the DB to make sure that Redis memory usage does not exceed 6 GB, then we can set these two parameters (in redis.conf or via the CONFIG SET command):
maxmemory 6gb
maxmemory-policy allkeys-lru
There are a lot of other values you can set for these two parameters; you can read about them here: https://redis.io/topics/lru-cache
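The runtime equivalent via redis-cli, using the same example values as above, would be:

redis-cli CONFIG SET maxmemory 6gb
redis-cli CONFIG SET maxmemory-policy allkeys-lru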
All of those answers do not explain the reason why the RDB save failed.
In my case, I checked the Redis log and found:
14975:M 18 Jun 13:23:07.354 # Background saving terminated by signal 9
Run the following command in a terminal:
sudo egrep -i -r 'killed process' /var/log/
It displays:
/var/log/kern.log.1:Jun 18 13:23:07 10-10-88-16 kernel: [28152358.208108] Killed process 28416 (redis-server) total-vm:7660204kB, anon-rss:2285492kB, file-rss:0kB
That is it! This process (the Redis RDB save) was killed by the OOM killer.
References:
https://github.com/antirez/redis/issues/1886
Finding which process was killed by Linux OOM killer
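If your distribution does not keep kern.log, the kernel ring buffer is another place to look for OOM kills (assuming dmesg is available):

dmesg -T | egrep -i 'killed process|out of memory'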
Nowadays the Redis write-access problems that give this error message to the client have re-emerged in the official redis Docker containers.
Redis from the official redis image tries to write the .rdb file in the container's /data folder, which is rather unfortunate, as it is a root-owned folder and a non-persistent location too (data written there will disappear if your container/pod crashes).
So after an hour of inactivity, if you have run your redis container as a non-root user (e.g. docker run -u 1007 rather than the default docker run -u 0), you will get a nicely detailed error message in your server log (see docker logs redis):
1:M 29 Jun 2019 21:11:22.014 * 1 changes in 3600 seconds. Saving...
1:M 29 Jun 2019 21:11:22.015 * Background saving started by pid 499
499:C 29 Jun 2019 21:11:22.015 # Failed opening the RDB file dump.rdb (in server root dir /data) for saving: Permission denied
1:M 29 Jun 2019 21:11:22.115 # Background saving error
So what you need to do is map the container's /data folder to an external location where the non-root user (here: 1007) has write access, such as /tmp on the host machine, e.g.:
docker run --rm -d --name redis -p 6379:6379 -u 1007 -v /tmp:/data redis
So it is a misconfiguration of the official Docker image (which should write to /tmp, not /data) that produces this «time bomb» that you will most likely encounter only in production… overnight, over some particularly quiet holiday weekend :/
For me, running config set stop-writes-on-bgsave-error no and then rebooting my Mac made it work:
$ redis-cli
config set stop-writes-on-bgsave-error no
According to Redis documentation, this is recommended only if you don’t have RDB snapshots enabled or if you don’t care about data persistence in the snapshots.
«By default Redis will stop accepting writes if RDB snapshots are enabled (at least one save point) and the latest background save failed. This will make the user aware (in a hard way) that data is not persisting on disk properly, otherwise chances are that no one will notice and some disaster will happen.»
What you should be doing instead is:
redis-cli
127.0.0.1:6379> CONFIG SET dir /data/tmp
OK
127.0.0.1:6379> CONFIG SET dbfilename temp.rdb
OK
127.0.0.1:6379> BGSAVE
Background saving started
127.0.0.1:6379>
Please make sure /data/tmp has enough disk space.
I too was facing the same issue. Both answers (the most upvoted one and the accepted one) just give a temporary fix.
Moreover, config set stop-writes-on-bgsave-error no is a horrible way to overlook this error, since all this option does is stop Redis from notifying you that writes have been stopped and carry on without writing the data to a snapshot. This simply ignores the error.
Refer to this.
As for setting dir via config in redis-cli: once you restart the Redis service, that setting gets cleared too, and the same error pops up again. The default value of dir in redis.conf is ./, and if you start Redis as the root user, then ./ is /, to which write permissions aren't granted, and hence the error.
The best way is to set the dir parameter in the redis.conf file and set proper permissions on that directory. Most Debian distributions have it in /etc/redis/redis.conf.
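A sketch of that permanent fix (the directory below is only an example, any path owned by the redis user works, and the service name varies by distro):

# in /etc/redis/redis.conf
dir /var/lib/redis

sudo mkdir -p /var/lib/redis
sudo chown -R redis:redis /var/lib/redis
sudo systemctl restart redis-server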
After banging my head through so many SO questions, finally @Axel Advento's answer worked for me, but with a few extra steps.
I was still facing permission issues, so I had to switch user to redis, create a new dir in its home dir and then set it as Redis's dir:
sudo su - redis -s /bin/bash
mkdir redis_dir
redis-cli CONFIG SET dir $(realpath redis_dir)
exit # to logout from redis user (optional)
I know this thread is slightly older, but here's what worked for me when I got this error earlier, knowing I was nowhere near the memory limit; both answers were found above.
Hopefully this could help someone in the future if they need it.
- Checked CHMOD on the dir folder… found that somehow the permissions (symbolic notation) were different. CHMOD'ed the dir folder back to 755
- dbfilename permissions were good, no changes needed
- Restarted redis-server
- (Should've done this first, but ah well) Checked redis-server.log and found that the error was the result of access being denied
Again, unsure how the permissions on the dir folder got changed, but I'm assuming CHMODing back to 755 and restarting redis-server took care of it, as I was able to ping the Redis server afterwards.
Also, to note: redis did have ownership of the dbfilename and the dir folder.
In redis.conf, around line ~235, try changing the config like this:
- stop-writes-on-bgsave-error yes
+ stop-writes-on-bgsave-error no
I hit this problem while working on a server with AFS disk space because my authentication token had expired, which yielded Permission Denied responses when redis-server tried to save. I solved this by refreshing my token:
kinit USERNAME_HERE -l 30d && aklog
In case you are using docker/docker-compose and want to prevent Redis from writing to a file, you can create a Redis config and mount it into the container.
docker-compose.override.yml
redis:
  volumes:
    - ./redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - 6379:6379
You can download the default config from here
In the redis.conf file, make sure you comment out these 3 lines:
save 900 1
save 300 10
save 60 10000
You can view more solutions for removing the persistent data here.
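Note that the official redis image only picks up a mounted config file if the container is also started with it; a sketch of the compose service combining both (same paths as above):

redis:
  volumes:
    - ./redis.conf:/usr/local/etc/redis/redis.conf
  command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
  ports:
    - 6379:6379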
In my case it happened because I had just installed Redis using the quick way, so Redis was not running as root.
I was able to solve this problem by following the instructions under the "Installing Redis more properly" section of their Quick Start guide. After doing so, the problem was solved and redis is now running as root. Check it out.
Check your Redis log before taking any action. Some of the solutions in this thread may erase your Redis data, so be careful about what you are doing.
In my case, the machine was running out of RAM. This can also happen when there is no more free disk space on the host.
In my case it was related to free disk space (you can check it with the df -h bash command). When I freed some space, the error disappeared.
If you are running Redis locally on a Windows machine, try to "run as administrator" and see if it works. For me, the problem was that Redis was located in the "Program Files" folder, which restricts permissions by default. As it should.
However, do not just keep running Redis as an administrator. You don't want to grant it more rights than it is supposed to have. You want to solve this by the book.
So, we have been able to quickly identify the problem by running it as administrator, but this is not the cure. A likely scenario is that you have put Redis in a folder that doesn't have write rights and, as a consequence, the DB file is stored in that same location.
You can solve this by opening redis.windows.conf and searching for the following configuration:
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./
Change dir ./ to a path you have regular read/write permissions for.
You could also just move the Redis folder in its entirety to a folder you know has the right permissions.
Please be aware that this error can appear when your server is under attack. I found that Redis was failing to write to '/etc/cron.d/web'; after correcting the permissions, a new file containing a mining payload with some hiding options was added there.
# on redis 6.0.4
# if you see the error 'MISCONF Redis is configured to save RDB snapshots'
# it is because redis doesn't have permission to create the dump.rdb file
sudo redis/bin/redis-server
sudo redis/bin/redis-cli
Check the permissions on the dir (e.g. /var/lib/redis): they should be redis:redis.
As @Chris pointed out, the problem is most likely low memory. We started experiencing it when we had allocated too much RAM to MySQL (innodb_buffer_pool_size).
To ensure there is enough RAM for Redis and the other services, we reduced innodb_buffer_pool_size for MySQL.
In my case the cause was very little free disk space (only 35 MB). I did the following:
- Stopped all Redis-related processes
- Deleted some files on the disk to make enough free space
- Removed the Redis dump file (only if the existing data is not needed):
sudo rm /var/lib/redis/*
- Removed all keys from all existing databases:
sudo redis-cli flushall
- Restarted all Celery tasks and checked the corresponding logs for any problems
A follow-up to @Govind Rai's answer, for saving the dump.rdb file in the directory you started the server in (on Windows):
Right-click the Redis folder, select Properties, and then go to the Security tab.
Click the Edit button to open the permissions dialog, then click ALL APPLICATION PACKAGES.
In the permissions box, check Allow for "Full control".
1. View all current Redis keys:
KEYS *
2. View the current Redis configuration information:
CONFIG GET *
3. MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.
This means the user Redis runs as has no permission to write the rdb file, or the disk is full. A workaround:
config set stop-writes-on-bgsave-error no
For example:
set 'name' 'shenhui'
-MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.
config set stop-writes-on-bgsave-error no
+OK
set 'name' 'shenhui'
+OK
4. redis 127.0.0.1:6379> CONFIG SET logfile "/var/log/redis/redis-server.log"
(error) ERR Unsupported CONFIG parameter: logfile
The logfile cannot be set dynamically through CONFIG SET.
5. (error) OOM command not allowed when used memory > 'maxmemory'
The maxmemory option is set, and Redis memory usage has hit the cap.
You can free up space by setting an LRU policy so that some keys are deleted.
By default eviction is based on expiration time; if keys are stored without an expiration time, data accumulates until maxmemory is reached.
On 32-bit systems Redis uses at most 3 GB of memory if maxmemory is unset or set to 0; 64-bit systems have no such implicit limit.
volatile-lru -> evict keys with an expiration set, according to the LRU algorithm.
allkeys-lru -> evict any key, according to the LRU algorithm.
volatile-random -> randomly evict keys that have an expiration set.
allkeys-random -> randomly evict any key, without distinction.
volatile-ttl -> evict the keys closest to expiring (smallest TTL).
noeviction -> evict nothing, and return an error on write operations.
6. Redis log location
The logfile directive controls logging; it defaults to stdout. If it is left as stdout and Redis runs as a daemon, the log is redirected to /dev/null, which means no logging.
7. Details of the Redis configuration parameters
# daemonize no: by default Redis does not run in the background; if you need it to, change this value to yes
daemonize yes
# When Redis runs in the background, it writes its pid file to /var/run/redis.pid by default; you can configure a different path.
# When running multiple Redis services, you need to specify different pid files and ports.
pidfile /var/run/redis_6379.pid
# The port Redis listens on, 6379 by default
port 6379
# In a high-concurrency environment a large backlog is needed to avoid slow-client connection issues
tcp-backlog 511
# Makes Redis accept requests only from the given IP addresses; if not set, requests on all interfaces are processed
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1
# Client connection timeout in seconds: if a client issues no command for this long, the connection is closed.
# 0 disables the timeout
timeout 0
# TCP keepalive
# On Linux, this value (in seconds) is the interval used for sending ACKs. Note that it takes twice this time to actually close the connection. The default is 0.
tcp-keepalive 0
# Logging level; notice is recommended for production environments
# Redis supports 4 levels: debug, verbose, notice, warning; the default is verbose
# debug: records a lot of information, for development and testing
# verbose: useful information, but not as much as debug
# notice: moderately verbose, often used in production environments
# warning: only very important or serious messages are logged
loglevel notice
# Location of the log file
# The default is stdout (standard output); in daemon mode this ends up in /dev/null
logfile /var/log/redis/redis.log
# Number of available databases
# The default is 16; the default database is 0, and the valid range is 0 to (databases - 1)
databases 16
################################ Snapshotting #################################
# Save the data to disk in the following format:
# save <seconds> <changes>
# meaning: if at least <changes> keys are changed within <seconds> seconds, synchronize the data to the rdb file.
# This is a conditional snapshot trigger, and multiple conditions can be combined.
# For example, the default configuration file sets 3 conditions:
# save 900 1      at least 1 key changed within 900 seconds
# save 300 10     at least 10 keys changed within 300 seconds
# save 60 10000   at least 10000 keys changed within 60 seconds
# save 900 1
# save 300 10
# save 60 10000
# Stop accepting writes when a background save fails.
stop-writes-on-bgsave-error yes
# Whether to compress data when persisting to the local rdb file; the default is yes
rdbcompression yes
# Whether the RDB file carries a checksum
rdbchecksum yes
# Name of the local persistence file; the default is dump.rdb
dbfilename dump.rdb
# Working directory
# The path where the database snapshot files are placed.
# The path and file name are configured separately because when Redis makes a backup it first writes the current database state to a temporary file, and when the backup completes
# the temporary file replaces the file named above; both the temporary file and the configured backup file live in this path.
# AOF files will also be stored in this directory.
# Note that you must specify a directory here, not a file name.
dir /var/lib/redis-server/
################################# Replication #################################
# Master-slave replication. Set this instance to be a slave of another instance.
# When this machine is a slave, set the master's IP address and port; when Redis starts, it will automatically synchronize data from the master.
# slaveof <masterip> <masterport>
# When the master is password protected (with a password set via requirepass),
# this is the password the slave uses to connect to the master.
# masterauth <master-password>
# When the slave loses its connection to the master, or replication is still in progress, the slave can operate in two modes:
# 1) if slave-serve-stale-data is set to yes (the default), the slave continues to respond to client requests
# 2) if slave-serve-stale-data is set to no, any request other than INFO and SLAVEOF returns
# the error "SYNC with master in progress"
slave-serve-stale-data yes
# Configures whether a slave instance accepts writes. A writable slave can be useful for storing transient data (which is easily deleted after resyncing with the master), but client writes can cause problems if left unconfigured.
# Since Redis 2.6, slaves are read-only by default.
slave-read-only yes
# The slave sends PINGs to the master at a fixed interval. The interval can be set with repl-ping-slave-period; the default is 10 seconds.
# repl-ping-slave-period 10
# repl-timeout sets the timeout for bulk data transfers from the master or for pings; the default is 60 seconds.
# Make sure repl-timeout is greater than repl-ping-slave-period.
# repl-timeout 60
# Disable TCP_NODELAY on the slave socket after SYNC?
# If you choose "yes", Redis will use fewer TCP packets and less bandwidth to send data to slaves, but this adds a delay on the slave side, up to 40 ms with the default Linux kernel configuration.
# If you choose "no", the latency on the slave side is reduced, but more bandwidth is used for replication.
repl-disable-tcp-nodelay no
# Set the replication backlog size.
# The larger the replication backlog, the longer a slave can be disconnected and still perform a partial resynchronization later.
# The backlog is only allocated once, as soon as at least one slave is connected.
# repl-backlog-size 1mb
# After the master no longer has any connected slaves, the backlog is released. The following setting defines how long (in seconds) after the last slave disconnects the backlog is freed.
# 0 means the backlog is never released.
# repl-backlog-ttl 3600
# If the master can no longer work normally, the slave with the lowest priority value is promoted to master; a priority value of 0 means the slave can never be promoted.
slave-priority 100
# The master can be configured to stop accepting writes if fewer than N slaves are connected with a lag of <= M seconds.
# For example, to require at least 3 connected slaves with a lag of <= 10 seconds:
# min-slaves-to-write 3
# min-slaves-max-lag 10
# Setting either to 0 disables the feature.
# The defaults are min-slaves-to-write 0 (disabled) and min-slaves-max-lag 10.
################################## Security ###################################
# Require clients to authenticate with a password before issuing any other command.
# Warning: since Redis is quite fast, on a decent server an outside attacker can attempt up to 150k passwords per second, which means you need a very strong password to prevent brute forcing.
# requirepass foobared
# Command renaming.
# In a shared environment you can rename relatively dangerous commands, such as CONFIG, to something hard to guess.
# For example:
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
# If you want to disable a command entirely, rename it to the empty string "", as follows:
# rename-command CONFIG ""
################################### Limits ###################################
# Set the maximum number of simultaneous client connections; unlimited by default.
# The number of client connections Redis can open at once is bounded by the maximum number of file descriptors the Redis process may open.
# Setting maxclients to 0 means no limit.
# When the connection limit is reached, Redis closes new connections and returns the error "max number of clients reached" to the client.
# maxclients 10000
# The maximum memory limit for Redis. Redis loads data into memory at startup, and when the maximum is reached, Redis tries to evict keys according to the eviction policy.
# If Redis cannot free enough space after applying the policy, or the policy is set to "noeviction", then commands that use more space (for example SET, LPUSH, and so on) will report an error, but reads still work.
# Note: Redis's new vm mechanism stores keys in memory while values are stored in a swap area.
# This option works together with the LRU policies.
# The maxmemory setting is better suited when Redis is used as a memcached-like cache, and less appropriate when it is used as a real DB.
# When Redis is used as a real database, memory usage is a significant cost.
# maxmemory <bytes>
# When memory reaches the maximum, which data will Redis choose to delete? There are 5 policies to choose from:
# volatile-lru -> remove keys with an expire set, using the LRU algorithm (LRU: Least Recently Used)
# allkeys-lru -> remove any key, using the LRU algorithm
# volatile-random -> remove random keys with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the keys closest to expiring (smallest TTL)
# noeviction -> don't remove anything, just return an error on writes
# Note: with the above policies, if no suitable key can be removed, Redis returns an error on writes.
# The default is: volatile-lru
# maxmemory-policy volatile-lru
# The LRU and minimal-TTL algorithms are not exact algorithms but approximations (to save memory); you can tune the sample size.
# Redis checks 3 samples by default; this can be set with maxmemory-samples.
# maxmemory-samples 3
############################## AOF###############################
# By default, redis The database image will be asynchronously backed up to disk in the background, but the backup is very time-consuming, and the backup is not very frequent, if there is such a situation as power cut, unplug, then will cause a large range of data loss.
# so redis Provides additional 1 A more efficient way of database backup and disaster recovery.
# open append only After the pattern, redis It will take everything it receives 1 Each write request is appended to appendonly.aof In the file, when redis When restarted, the previous state is restored from the file.
# But that's what happens appendonly.aof The file is too big, so redis Also supports the BGREWRITEAOF Instruction, appendonly.aof Reorganize.
# You can turn it on at the same time asynchronous dumps and AOF
appendonly no
# AOF The file name ( The default : "appendonly.aof")
# appendfilename appendonly.aof
# Redis support 3 A synchronous AOF Document policy :
# no: Without synchronization, the system operates . Faster.
# always: always Means that each write operation is synchronized . Slow, Safest.
# everysec: Represents the accumulation of write operations, synchronized per second 1 time . Compromise.
# The default is "everysec" This is the best compromise between speed and safety.
# If you want Redis To run more efficiently, you can also set the "no" , let the operating system decide when to execute
# Or if you want to make your data more secure you can set it to zero "always"
# Use it if you're not sure "everysec".
# appendfsync always
appendfsync everysec
# appendfsync no
# AOF Policy set to always or everysec , the background processing process ( Background save or AOF Log to rewrite ) It does a lot of things I/O operation
# In some Linux Overlength is prevented in the configuration fsync() The request. Note that there are no fixes now, even though fsync On the other 1 It is processed by three threads
# To mitigate the problem, set the following parameter no-appendfsync-on-rewrite
no-appendfsync-on-rewrite no
# Automatic AOF rewrite
# When the AOF file grows beyond a configured size, Redis can automatically call BGREWRITEAOF to rewrite the log file.
# How it works: Redis remembers the size of the AOF file after the last rewrite (if no rewrite has happened since startup, the size of the AOF at startup is used).
# That base size is compared with the current size; if the current size exceeds the base size by the specified percentage, a rewrite is triggered.
# You also need to specify a minimum size for AOF rewrites; this prevents the file from being rewritten while it is still small, even though the percentage growth has been reached.
# Setting the percentage to 0 disables this feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
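AOF can also be switched on for a running instance and a rewrite triggered manually from redis-cli; a minimal sketch using the defaults above (CONFIG SET is not persisted to redis.conf):
./redis-cli
config set appendonly yes # start appending every write to appendonly.aof
config set appendfsync everysec # fsync once per second, the recommended compromise
bgrewriteaof # compact the AOF file in the background
info persistence # check aof_enabled, aof_rewrite_in_progress and aof_last_bgrewrite_status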
################################ LUA SCRIPTING #############################
# The maximum execution time of a Lua script is 5000 milliseconds (5 seconds); a value of 0 or a negative number means unlimited execution time.
lua-time-limit 5000
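A quick illustration from redis-cli (the key name mykey is only an example). Note that Redis does not kill a script that runs past lua-time-limit; it starts replying BUSY to other clients, and the script can then be stopped with SCRIPT KILL (if it has not written anything yet) or SHUTDOWN NOSAVE:
./redis-cli
config get lua-time-limit # show the current limit in milliseconds
eval "return redis.call('set', KEYS[1], ARGV[1])" 1 mykey hello # run a short Lua script
get mykey # returns "hello"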
################################ SLOW LOG ################################
# The Redis Slow Log records commands that exceed a specified execution time. The time does not include I/O work such as talking to the client or sending the reply; it is only the command execution time.
# The slow log is configured with two parameters: one tells Redis to log any command whose execution time exceeds slowlog-log-slower-than (in microseconds).
# The other is the length of the slow log: when a new command is logged and the log is full, the oldest entry is removed from the queue.
# The time below is expressed in microseconds, so 1000000 is equivalent to 1 second.
# Note that a negative number disables the slow log, while a value of 0 forces the logging of every command.
slowlog-log-slower-than 10000
# There is no limit to the log length, just be aware that it consumes memory.
# The memory used by the slow log can be reclaimed with SLOWLOG RESET.
# The recommended default is 128; once the slow log exceeds 128 entries, the oldest entry in the queue is removed.
slowlog-max-len 128
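With the two parameters above in place, the slow log itself is queried from redis-cli; a minimal sketch:
./redis-cli
slowlog get 10 # show the 10 most recent slow commands (id, timestamp, duration in microseconds, arguments)
slowlog len # number of entries currently in the slow log
slowlog reset # clear the log and reclaim the memory it uses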
################################ Event notification #############################
# Redis can notify Pub/Sub clients when events happen in the key space.
# You can select the classes of events to be notified from the table below. Each class is identified by a single character:
# K Keyspace events, published with a __keyspace@<db>__ prefix
# E Keyevent events, published with a __keyevent@<db>__ prefix
# g Generic commands (non type-specific), like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (generated every time a key expires)
# e Evicted events (generated when a key is evicted to reclaim memory)
# A Alias for "g$lshzxe", so that "AKE" means all events
# notify-keyspace-events takes a string argument composed of zero or more of the characters above. An empty string means notifications are disabled.
# Example: to enable list and generic events:
# notify-keyspace-events Elg
# Notifications are disabled by default because most users do not need the feature and it has a performance cost.
# Note that if you do not specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""
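As an illustration of the flags above, the following sketch enables keyevent notifications for expired keys and listens for them; it assumes a local instance and database 0, and demo_key is only an example name:
./redis-cli config set notify-keyspace-events Ex # E = keyevent events, x = expired events
./redis-cli psubscribe '__keyevent@0__:expired' # run this in a second terminal and leave it listening
./redis-cli set demo_key hello px 100 # the key expires after 100 milliseconds
# shortly afterwards the subscriber prints a pmessage on __keyevent@0__:expired with payload demo_key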
############################## Advanced configuration ###############################
# When a hash contains no more than the specified number of entries and its largest entry does not exceed the given threshold,
# the hash is stored in a special memory-efficient way; these two thresholds can be configured below, and they greatly reduce memory usage.
# The value of a Redis Hash is internally a HashMap, but there are actually 2 different implementations:
# when the hash has few members, Redis saves memory by storing it compactly in a one-dimensional array instead of a real HashMap structure; the encoding of the corresponding value redisObject is then zipmap,
# and as the number of members grows it is automatically converted to a real HashMap, at which point the encoding becomes ht.
hash-max-zipmap-entries 512
hash-max-zipmap-value 64
# Like hashes, small lists are also encoded in a special way to save space.
# A list is stored in the compact format when it has no more than the configured number of entries and each entry value is smaller than the configured number of bytes.
list-max-ziplist-entries 512
list-max-ziplist-value 64
# A set is stored in a compact format when all of its members are integers and it contains no more than the configured number of entries.
set-max-intset-entries 512
# Like hashes and lists, sorted sets within the specified limits are also stored in a special encoding to save space.
# A sorted set is stored in the compact format when it has no more than the configured number of entries and each entry value is smaller than the configured number of bytes.
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
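The encoding actually chosen for a given key can be checked with OBJECT ENCODING; a small sketch (the key names are examples only, and the exact encoding names vary by Redis version):
./redis-cli
rpush small_list a b c # a few short elements -> compact encoding
object encoding small_list # "ziplist" on the 2.8 series
sadd small_set 1 2 3 # all-integer members -> compact encoding
object encoding small_set # "intset"
hset small_hash field value # small hash -> compact encoding
object encoding small_hash # "ziplist" on the 2.8 series (zipmap on very old versions)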
# Every 100 milliseconds Redis uses 1 millisecond of CPU time to incrementally rehash its main hash table, which helps reduce memory usage.
# If your use case has very strict latency requirements and an occasional 2 millisecond delay on requests is unacceptable, configure this as no.
# If you do not have such strict latency requirements, set it to yes so that memory is freed as quickly as possible.
activerehashing yes
# The client output buffer limits can be used to force disconnection of clients that, for some reason, do not read data from the server fast enough
# (a common reason is that a publish/subscribe client cannot consume messages as fast as the publisher produces them).
# The limits can be set separately for 3 classes of clients:
# normal -> normal clients
# slave -> slave and MONITOR clients
# pubsub -> clients subscribed to at least 1 pubsub channel or pattern
# The syntax of every client-output-buffer-limit directive is:
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
# A client is disconnected immediately once the hard limit is reached, or when the soft limit is exceeded for the specified number of seconds (continuously).
# For example, if the hard limit is 32 megabytes and the soft limit is 16 megabytes / 10 seconds, the client disconnects immediately
# if the output buffer reaches 32 megabytes, and also disconnects if it reaches 16 megabytes and stays over that limit for 10 consecutive seconds.
# By default normal clients are not limited, because they only receive data (in a push fashion) after a request,
# so only asynchronous clients can create a scenario where data is produced faster than it can be read.
# Set both the hard limit and the soft limit to 0 to disable the feature.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
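These limits can also be changed on a running instance by passing the whole class/hard/soft/seconds tuple as a single argument; a minimal sketch, with the limits expressed in bytes (33554432 = 32mb, 8388608 = 8mb):
./redis-cli
config get client-output-buffer-limit # show the current limits for all three classes
config set client-output-buffer-limit "pubsub 33554432 8388608 60" # 32mb hard / 8mb soft for 60 seconds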
# Redis calls an internal function to perform many background tasks, such as closing timed-out client connections and purging expired keys.
# Not all tasks are performed with the same frequency, but Redis runs these checks according to the specified "hz" value.
# By default "hz" is set to 10.
# Raising the value makes Redis use more CPU when idle, but it also makes Redis more responsive when many keys expire at the same time, and allows timeouts to be handled with greater precision.
# The range is between 1 and 500, although a value above 100 is usually not a good idea.
# Most users should use the default of 10 and raise it, up to a maximum of 100, only in environments that require very low latency.
hz 10
# When a child process rewrites the AOF file, if the following option is enabled, the file is fsynced every 32 MB of data generated.
aof-rewrite-incremental-fsync yes
8. Redis official documentation recommendations for using VM:
When your keys are small and your values are large, VM works better because it saves more memory.
When your keys are not that small, consider using some tricks to turn a large key into a large value; for example, consider combining the key and the value into a new value.
It is better to store your swap file on a file system with good support for sparse files, such as Linux ext3.
As for the vm-max-threads parameter, it sets the number of threads used to access the swap file; it is best not to exceed the number of cores on the machine. If it is set to 0, all operations on the swap file are serialized, which may cause relatively long delays but is a good guarantee of data integrity.
With the VM feature, Redis is finally free of the memory-limitation nightmare; it seems we can really call it the Redis database, and we can imagine how many new uses this will enable. Hopefully this feature will not affect the excellent in-memory performance Redis already has.
9. Modifying the redis persistence path and log path
vim redis.conf
logfile /data/redis_cache/logs/redis.log # Log path
dir /data/redis_cache # persistence path. Remember to copy the dump.rdb persistence file to /data/redis_cache
First stop redis, copy dump.rdb to the new directory, then start redis again.
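After the restart, the paths the running instance is actually using can be verified from redis-cli; a quick sketch:
./redis-cli
config get dir # should return /data/redis_cache
config get logfile # should return /data/redis_cache/logs/redis.log
config get dbfilename # name of the RDB file inside dir (dump.rdb by default)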
10. Clear redis cache
./redis-cli # enter the redis-cli shell
dbsize # check how many keys are in the current database
flushall # execute the flush
exit
11. Delete all keys in the current redis database
flushdb
12. Delete keys from all databases of redis
flushall
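To make the difference between the two commands concrete, a short sketch (it assumes databases 0 and 1 are in use; the key names are examples only, and both commands delete data irreversibly, so be careful on production instances):
./redis-cli
select 0 # work in database 0
set key_db0 a # example key in db 0
select 1
set key_db1 b # example key in db 1
flushdb # removes only the keys of the current database (db 1)
dbsize # (integer) 0
select 0
dbsize # (integer) 1 -- db 0 is untouched
flushall # removes the keys of every database
dbsize # (integer) 0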