From 39460eac1a4d079f7d938b8e2477e9f0599ecdd8 Mon Sep 17 00:00:00 2001 From: spikefoo Date: Fri, 25 Aug 2017 09:13:52 +0300 Subject: [PATCH 0001/1457] Fix mistake in bitfield.md Make the example command match the explanation. --- commands/bitfield.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/bitfield.md b/commands/bitfield.md index 3141be9fd5..2f3e7b661a 100644 --- a/commands/bitfield.md +++ b/commands/bitfield.md @@ -2,7 +2,7 @@ The command treats a Redis string as a array of bits, and is capable of addressi `BITFIELD` is able to operate with multiple bit fields in the same command call. It takes a list of operations to perform, and returns an array of replies, where each array matches the corresponding operation in the list of arguments.

-For example the following command increments an 8 bit signed integer at bit offset 100, and gets the value of the 4 bit unsigned integer at bit offset 0:
+For example the following command increments a 5 bit signed integer at bit offset 100, and gets the value of the 4 bit unsigned integer at bit offset 0:

 > BITFIELD mykey INCRBY i5 100 1 GET u4 0 1) (integer) 1

From 2d05ed1bf26b583109b9cc626c7538181ae419a4 Mon Sep 17 00:00:00 2001 From: Eugene Ponizovsky Date: Fri, 1 Dec 2017 18:10:54 +0300 Subject: [PATCH 0002/1457] Added Redis Cluster client Redis::ClusterRider for Perl --- clients.json | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/clients.json b/clients.json index 256e5ea2f1..d494dbe4e6 100644 --- a/clients.json +++ b/clients.json @@ -360,6 +360,16 @@ "active": true },

+  {
+    "name": "Redis::ClusterRider",
+    "language": "Perl",
+    "url": "http://search.cpan.org/dist/Redis-ClusterRider/",
+    "repository": "https://github.com/iph0/Redis-ClusterRider",
+    "description": "Daring Redis Cluster client",
+    "authors": ["iph0"],
+    "active": true
+  },
+
   {
     "name": "AnyEvent::Hiredis",
     "language": "Perl",

From 67b2c43861db8eaf5bbb7306d166ad6b0f7feeea Mon Sep 17 00:00:00 2001 From: Stefan Toman Date: Thu, 8 Feb 2018 17:53:20 +0100 Subject: [PATCH 0003/1457] Make the format of variable names more consistent While most variable names and commands in `twitter-clone.md` are highlighted using backticks, a few of them are instead wrapped in asterisks or single quotes. I propose adjusting the format to be consistent, making the article easier to read. --- topics/twitter-clone.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/topics/twitter-clone.md b/topics/twitter-clone.md index e27f350831..d52a079191 100644 --- a/topics/twitter-clone.md +++ b/topics/twitter-clone.md @@ -92,7 +92,7 @@ Sorted Sets, which are kind of a more capable version of Sets, it is better to start introducing Sets first (which are a very useful data structure per se), and later Sorted Sets.

-There are more data types than just Lists. Redis also supports Sets, which are unsorted collections of elements. It is possible to add, remove, and test for existence of members, and perform the intersection between different Sets. Of course it is possible to get the elements of a Set. Some examples will make it more clear. Keep in mind that `SADD` is the _add to set_ operation, `SREM` is the _remove from set_ operation, _sismember_ is the _test if member_ operation, and `SINTER` is the _perform intersection_ operation. Other operations are `SCARD` to get the cardinality (the number of elements) of a Set, and `SMEMBERS` to return all the members of a Set.
+There are more data types than just Lists.
Redis also supports Sets, which are unsorted collections of elements. It is possible to add, remove, and test for existence of members, and perform the intersection between different Sets. Of course it is possible to get the elements of a Set. Some examples will make it more clear. Keep in mind that `SADD` is the _add to set_ operation, `SREM` is the _remove from set_ operation, `SISMEMBER` is the _test if member_ operation, and `SINTER` is the _perform intersection_ operation. Other operations are `SCARD` to get the cardinality (the number of elements) of a Set, and `SMEMBERS` to return all the members of a Set.

     SADD myset a
     SADD myset b

@@ -108,7 +108,7 @@ Note that `SMEMBERS` does not return the elements in the same order we added the

     SADD mynewset hello
     SINTER myset mynewset => foo,b

-`SINTER` can return the intersection between Sets but it is not limited to two Sets. You may ask for the intersection of 4,5, or 10000 Sets. Finally let's check how SISMEMBER works:
+`SINTER` can return the intersection between Sets but it is not limited to two Sets. You may ask for the intersection of 4, 5, or 10000 Sets. Finally let's check how `SISMEMBER` works:

     SISMEMBER myset foo => 1
     SISMEMBER myset notamember => 0

@@ -277,7 +277,7 @@ This happens every time a user logs in, but we also need a function `isLoggedIn`
 * Get the "auth" cookie from the user. If there is no cookie, the user is not logged in, of course. Let's call the value of the cookie ``.
 * Check if `` field in the `auths` Hash exists, and what the value (the user ID) is (1000 in the example).
 * In order for the system to be more robust, also verify that user:1000 auth field also matches.
- * OK the user is authenticated, and we loaded a bit of information in the $User global variable.
+ * OK the user is authenticated, and we loaded a bit of information in the `$User` global variable.

 The code is simpler than the description, possibly:

@@ -367,7 +367,7 @@ After we create a post and we obtain the post ID, we need to LPUSH the ID in the

     header("Location: index.php");

-The core of the function is the `foreach` loop. We use `ZRANGE` to get all the followers of the current user, then the loop will LPUSH the push the post in every follower timeline List.
+The core of the function is the `foreach` loop. We use `ZRANGE` to get all the followers of the current user, then the loop will `LPUSH` the post in every follower timeline List.

 Note that we also maintain a global timeline for all the posts, so that in the Retwis home page we can show everybody's updates easily. This requires just doing an `LPUSH` to the `timeline` List. Let's face it, aren't you starting to think it was a bit strange to have to sort things added in chronological order using `ORDER BY` with SQL? I think so.

@@ -426,7 +426,7 @@ It is not hard, but we did not yet check how we create following / follower rela

     ZADD following:1000 5000
     ZADD followers:5000 1000

-Note the same pattern again and again. In theory with a relational database, the list of following and followers would be contained in a single table with fields like `following_id` and `follower_id`. You can extract the followers or following of every user using an SQL query. With a key-value DB things are a bit different since we need to set both the `1000 is following 5000` and `5000 is followed by 1000` relations. This is the price to pay, but on the other hand accessing the data is simpler and extremely fast. Having these things as separate sets allows us to do interesting stuff.
For example, using `ZINTERSTORE` we can have the intersection of 'following' of two different users, so we may add a feature to our Twitter clone so that it is able to tell you very quickly when you visit somebody else's profile, "you and Alice have 34 followers in common", and things like that. +Note the same pattern again and again. In theory with a relational database, the list of following and followers would be contained in a single table with fields like `following_id` and `follower_id`. You can extract the followers or following of every user using an SQL query. With a key-value DB things are a bit different since we need to set both the `1000 is following 5000` and `5000 is followed by 1000` relations. This is the price to pay, but on the other hand accessing the data is simpler and extremely fast. Having these things as separate sets allows us to do interesting stuff. For example, using `ZINTERSTORE` we can have the intersection of `following` of two different users, so we may add a feature to our Twitter clone so that it is able to tell you very quickly when you visit somebody else's profile, "you and Alice have 34 followers in common", and things like that. You can find the code that sets or removes a following / follower relation in the `follow.php` file. From 24be362750dc64f3119b2b90096610029e8009d1 Mon Sep 17 00:00:00 2001 From: Benjamin Cass Date: Wed, 14 Feb 2018 12:24:50 -0600 Subject: [PATCH 0004/1457] Fix typo/grammar issue in appendix --- topics/pipelining.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/pipelining.md b/topics/pipelining.md index 24c86b33ba..32b40fe991 100644 --- a/topics/pipelining.md +++ b/topics/pipelining.md @@ -131,7 +131,7 @@ Using [Redis scripting](/commands/eval) (available in Redis version 2.6 or great Sometimes the application may also want to send `EVAL` or `EVALSHA` commands in a pipeline. This is entirely possible and Redis explicitly supports it with the [SCRIPT LOAD](http://redis.io/commands/script-load) command (it guarantees that `EVALSHA` can be called without the risk of failing). -Appendix: why a busy loops are slow even on the loopback interface? +Appendix: Why are busy loops slow even on the loopback interface? --- Even with all the background covered in this page, you may still wonder why From 8bb9fa5c1cc75d324850d280b50296d0fba6e3e8 Mon Sep 17 00:00:00 2001 From: Paul Giberson Date: Wed, 28 Feb 2018 16:46:27 -0700 Subject: [PATCH 0005/1457] Minor typo --- topics/rediscli.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/rediscli.md b/topics/rediscli.md index 0e9727be6a..c05917c122 100644 --- a/topics/rediscli.md +++ b/topics/rediscli.md @@ -238,7 +238,7 @@ is sent to the server, processed, and the reply is parsed back and rendered into a simpler form to read. 
Nothing special is needed for running the CLI in interactive mode -
-just lunch it without any arguments and you are in:
+just launch it without any arguments and you are in:

     $ redis-cli
     127.0.0.1:6379> ping

From 9298431d7ae56c7d86e41ef7b3d15ff2d7df8357 Mon Sep 17 00:00:00 2001 From: Jey Kottalam Date: Mon, 19 Mar 2018 17:28:59 -0700 Subject: [PATCH 0006/1457] Document that SIGINT is handled like SIGTERM Document that SIGINT is handled like SIGTERM, as indicated in https://github.com/antirez/redis/blob/8d92885bac17d8d82177a2f51d5b2a03c11ac8fe/src/server.c#L3488 --- topics/signals.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/signals.md b/topics/signals.md index a73b4eb9bb..6d82cc8579 100644 --- a/topics/signals.md +++ b/topics/signals.md @@ -6,11 +6,11 @@ of different POSIX signals such as `SIGTERM`, `SIGSEGV` and so forth. The information contained in this document is **only applicable to Redis version 2.6 or greater**.

-Handling of SIGTERM
+Handling of SIGTERM and SIGINT
 ---

-The `SIGTERM` signals tells Redis to shutdown gracefully. When this signal is
-received the server does not actually exits as a result, but it schedules
+The `SIGTERM` and `SIGINT` signals tell Redis to shut down gracefully. When either signal is
+received, the server does not immediately exit as a result, but it schedules
 a shutdown very similar to the one performed when the `SHUTDOWN` command is
 called. The scheduled shutdown starts ASAP, specifically as long as the
 current command in execution terminates (if any), with a possible additional

From 1ba7152eafe74024675b51d5a3d502d8d6a11efc Mon Sep 17 00:00:00 2001 From: Paul Boyd Date: Mon, 2 Apr 2018 15:29:01 -0400 Subject: [PATCH 0007/1457] update select command --- commands/select.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands/select.md b/commands/select.md index d8efd0a253..653f684fa2 100644 --- a/commands/select.md +++ b/commands/select.md @@ -1,9 +1,9 @@ Select the Redis logical database having the specified zero-based numeric index. New connections always use the database 0.

-Redis different selectable databases are a form of namespacing: all the databases are anyway persisted together in the same RDB / AOF file. However different databases can have keys having the same name, and there are commands available like `FLUSHDB`, `SWAPDB` or `RANDOMKEY` that work on specific databases.
+Selectable Redis databases are a form of namespacing: all databases are still persisted in the same RDB / AOF file. However different databases can have keys with the same name, and commands like `FLUSHDB`, `SWAPDB` or `RANDOMKEY` work on specific databases.

-In practical terms, Redis databases should mainly used in order to, if needed, separate different keys belonging to the same application, and not in order to use a single Redis instance for multiple unrelated applications.
+In practical terms, Redis databases should be used to separate different keys belonging to the same application (if needed), and not to serve multiple unrelated applications from a single Redis instance.

 When using Redis Cluster, the `SELECT` command cannot be used, since Redis Cluster only supports database zero. In the case of Redis Cluster, having multiple databases would be useless, and a worthless source of complexity, because anyway commands operating atomically on a single database would not be possible with the Redis Cluster design and goals.
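As a quick sketch of the database-selection behavior described in the rewritten `SELECT` text above (assuming a local non-cluster instance with default configuration; `mykey` is just an illustrative name):

    $ redis-cli
    127.0.0.1:6379> SET mykey "hello"
    OK
    127.0.0.1:6379> SELECT 1
    OK
    127.0.0.1:6379[1]> GET mykey
    (nil)
    127.0.0.1:6379[1]> SELECT 0
    OK
    127.0.0.1:6379> GET mykey
    "hello"

Note how `redis-cli` reflects any database other than 0 in its prompt, which makes it easier to notice when a connection is operating on a different namespace.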
From fbbc383560875eb81d97d64670fb91bd8d2e1949 Mon Sep 17 00:00:00 2001 From: hkhere Date: Sun, 15 Apr 2018 21:23:07 +0800 Subject: [PATCH 0008/1457] fix a small typo in rediscli --- topics/rediscli.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/rediscli.md b/topics/rediscli.md index 0e9727be6a..9a2ee1645f 100644 --- a/topics/rediscli.md +++ b/topics/rediscli.md @@ -78,7 +78,7 @@ preform authentication saving the need of explicitly using the `AUTH` command: $ redis-cli -a myUnguessablePazzzzzword123 ping PONG -Finally, it's possible to send a command that operates a on a database number +Finally, it's possible to send a command that operates on a database number other than the default number zero by using the `-n ` option: $ redis-cli flushall From 7d47eecf31187045ddbb32d4e8704cc485c3d651 Mon Sep 17 00:00:00 2001 From: Jakob Petersson Date: Sat, 21 Apr 2018 02:50:16 +0200 Subject: [PATCH 0009/1457] Fixes typo in notifications.md sunionostore -> sunionstore --- topics/notifications.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/notifications.md b/topics/notifications.md index 07f684669d..513a58df02 100644 --- a/topics/notifications.md +++ b/topics/notifications.md @@ -121,7 +121,7 @@ Different commands generate different kind of events according to the following * `SREM` generates a single `srem` event, and an additional `del` event if the resulting set is empty and the key is removed. * `SMOVE` generates an `srem` event for the source key, and an `sadd` event for the destination key. * `SPOP` generates an `spop` event, and an additional `del` event if the resulting set is empty and the key is removed. -* `SINTERSTORE`, `SUNIONSTORE`, `SDIFFSTORE` generate `sinterstore`, `sunionostore`, `sdiffstore` events respectively. In the special case the resulting set is empty, and the key where the result is stored already exists, a `del` event is generated since the key is removed. +* `SINTERSTORE`, `SUNIONSTORE`, `SDIFFSTORE` generate `sinterstore`, `sunionstore`, `sdiffstore` events respectively. In the special case the resulting set is empty, and the key where the result is stored already exists, a `del` event is generated since the key is removed. * `ZINCR` generates a `zincr` event. * `ZADD` generates a single `zadd` event even when multiple elements are added. * `ZREM` generates a single `zrem` event even when multiple elements are deleted. When the resulting sorted set is empty and the key is generated, an additional `del` event is generated. From 918b2cfe41faa034a5757f14f40f05f076a0d637 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Wed, 23 May 2018 16:02:52 +0300 Subject: [PATCH 0010/1457] Updates allowed commands in SUBSCRIBE Adds the previously-undocumented `PING` and `QUIT`. --- commands/subscribe.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands/subscribe.md b/commands/subscribe.md index eb05702462..997670cf7d 100644 --- a/commands/subscribe.md +++ b/commands/subscribe.md @@ -1,5 +1,5 @@ Subscribes the client to the specified channels. Once the client enters the subscribed state it is not supposed to issue any -other commands, except for additional `SUBSCRIBE`, `PSUBSCRIBE`, `UNSUBSCRIBE` -and `PUNSUBSCRIBE` commands. +other commands, except for additional `SUBSCRIBE`, `PSUBSCRIBE`, `UNSUBSCRIBE`, +`PUNSUBSCRIBE`, `PING` and `QUIT` commands. 
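A small sketch of the subscribed state documented above, using two terminals (the channel name and payload are illustrative, and the exact `redis-cli` banner may vary between versions):

    $ redis-cli
    127.0.0.1:6379> SUBSCRIBE news
    Reading messages... (press Ctrl-C to quit)
    1) "subscribe"
    2) "news"
    3) (integer) 1

    $ redis-cli
    127.0.0.1:6379> PUBLISH news "hello"
    (integer) 1

After the `PUBLISH`, the subscriber prints a three element message (`"message"`, `"news"`, `"hello"`). While in this state, the subscribing connection may only issue the commands listed above, which per this patch now includes `PING` and `QUIT`.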
From 1e52840e5f6bf56d5b5cb166103ab9ccf122992a Mon Sep 17 00:00:00 2001 From: herzrasen Date: Thu, 24 May 2018 20:57:24 +0200 Subject: [PATCH 0011/1457] fixed command usage instead of name --- commands.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands.json b/commands.json index f067c97301..b0a9ca9a4f 100644 --- a/commands.json +++ b/commands.json @@ -2225,7 +2225,7 @@ "type": "string" }, { - "command": "expiration", + "name": "expiration", "type": "enum", "enum": ["EX seconds", "PX milliseconds"], "optional": true From 1a4a352bb033c9de4f3cf40a665f52db8d43daa1 Mon Sep 17 00:00:00 2001 From: Kem Tekinay Date: Sat, 2 Jun 2018 16:52:42 -0400 Subject: [PATCH 0012/1457] Added Redis Server GUI by Kem Tekinay --- tools.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/tools.json b/tools.json index 25b021e4b9..8fe133a37e 100644 --- a/tools.json +++ b/tools.json @@ -640,5 +640,13 @@ "repository": "https://github.com/anandtrex/redis-browser", "description": "Cross platform GUI tool for redis that includes support for ReJSON", "authors": ["anandtrex"] + }, + { + "name": "Redis Server", + "language": "Xojo", + "repository": "https://github.com/ktekinay/XOJO-Redis", + "description": "Cross platform GUI to spin up and control redis-server, included in the project", + "url":"https://github.com/ktekinay/XOJO-Redis/releases/", + "authors": ["KemTekinay"] } ] From 10ddef7c3104010f0f43b01ed07575c8701f456b Mon Sep 17 00:00:00 2001 From: Dennis Collinson Date: Mon, 4 Jun 2018 14:11:19 -0600 Subject: [PATCH 0013/1457] Typo: to don't => to not --- topics/replication.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/replication.md b/topics/replication.md index 3dfe767964..ccb5e38435 100644 --- a/topics/replication.md +++ b/topics/replication.md @@ -69,7 +69,7 @@ is wiped from the master and all its slaves: 3. Nodes B and C will replicate from node A, which is empty, so they'll effectively destroy their copy of the data. When Redis Sentinel is used for high availability, also turning off persistence -on the master, together with auto restart of the process, is dangerous. For example the master can restart fast enough for Sentinel to don't detect a failure, so that the failure mode described above happens. +on the master, together with auto restart of the process, is dangerous. For example the master can restart fast enough for Sentinel to not detect a failure, so that the failure mode described above happens. Every time data safety is important, and replication is used with master configured without persistence, auto restart of instances should be disabled. From 162c8b918b9f51c17c1442dabc484d77faeb75f5 Mon Sep 17 00:00:00 2001 From: rufo Date: Sat, 9 Jun 2018 23:29:02 +0100 Subject: [PATCH 0014/1457] Update ZPOP pair order to reflect change in 5.0.0-RC2 --- commands/zpopmax.md | 2 +- commands/zpopmin.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/commands/zpopmax.md b/commands/zpopmax.md index 630d70d580..8f6750a75d 100644 --- a/commands/zpopmax.md +++ b/commands/zpopmax.md @@ -8,7 +8,7 @@ be the first, followed by the elements with lower scores. @return -@array-reply: list of popped scores and elements. +@array-reply: list of popped elements and scores. @examples diff --git a/commands/zpopmin.md b/commands/zpopmin.md index 5533830fb7..16f7c97ac1 100644 --- a/commands/zpopmin.md +++ b/commands/zpopmin.md @@ -8,7 +8,7 @@ be the first, followed by the elements with greater scores. 
@return -@array-reply: list of popped scores and elements. +@array-reply: list of popped elements and scores. @examples From 4cc8e8f791909ee1bacaade2276ac03ebb8e10c8 Mon Sep 17 00:00:00 2001 From: Jake Angerman Date: Fri, 15 Jun 2018 16:14:09 -0400 Subject: [PATCH 0015/1457] Update memory-optimization.md grammar fixes --- topics/memory-optimization.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/topics/memory-optimization.md b/topics/memory-optimization.md index b7e61c8dbf..34b75777ea 100644 --- a/topics/memory-optimization.md +++ b/topics/memory-optimization.md @@ -57,7 +57,7 @@ is small, the amortized time for HGET and HSET commands is still O(1): the hash will be converted into a real hash table as soon as the number of elements it contains will grow too much (you can configure the limit in redis.conf). -This does not work well just from the point of view of time complexity, but +This works well not just from the point of view of time complexity, but also from the point of view of constant times, since a linear array of key value pairs happens to play very well with the CPU cache (it has a better cache locality than a hash table). @@ -84,15 +84,15 @@ Now let's assume the objects we want to cache are numbered, like: * object:1234 * object:5 -This is what we can do. Every time there is to perform a +This is what we can do. Every time we perform a SET operation to set a new value, we actually split the key into two parts, -one used as a key, and used as field name for the hash. For instance the +one part used as a key, and the other part used as the field name for the hash. For instance the object named "object:1234" is actually split into: * a Key named object:12 * a Field named 34 -So we use all the characters but the latest two for the key, and the final +So we use all the characters but the last two for the key, and the final two characters for the hash field name. To set our key we use the following command: From f02692216702808e163fbd009d814d000db9a918 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Yann=20Sala=C3=BCn?= <1910607+yansal@users.noreply.github.com> Date: Mon, 18 Jun 2018 20:40:05 +0200 Subject: [PATCH 0016/1457] typo --- topics/streams-intro.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/streams-intro.md b/topics/streams-intro.md index ab694d72ec..38b1df5784 100644 --- a/topics/streams-intro.md +++ b/topics/streams-intro.md @@ -443,7 +443,7 @@ Basically we say, for this specific key and group, I want that the message IDs s ``` Client 1: XCLAIM mystream mygroup Alice 3600000 1526569498055-0 -Clinet 2: XCLAIM mystream mygroup Lora 3600000 1526569498055-0 +Client 2: XCLAIM mystream mygroup Lora 3600000 1526569498055-0 ``` However claiming a message, as a side effect will reset its idle time! And will increment its number of deliveries counter, so the second client will fail claiming it. In this way we avoid trivial re-processing of messages (even if in the general case you cannot obtain exactly once processing). From 136101d505c4587a7baa2be7580adbb546ef510e Mon Sep 17 00:00:00 2001 From: Guy Benoish Date: Wed, 20 Jun 2018 17:34:19 +0700 Subject: [PATCH 0017/1457] Updated RESTORE docs Due to commit b5197f1fc9d6fde776168951094e44d5e8742a89 in the Redis repo. 
--- commands.json | 19 ++++++++++++++++++- commands/restore.md | 9 +++++++++ 2 files changed, 27 insertions(+), 1 deletion(-) diff --git a/commands.json b/commands.json index a3f1ff8afb..9227b6ab9f 100644 --- a/commands.json +++ b/commands.json @@ -2015,8 +2015,25 @@ "type": "enum", "enum": ["REPLACE"], "optional": true + }, + { + "name": "absttl", + "type": "enum", + "enum": ["ABSTTL"], + "optional": true + }, + { + "command": "IDLETIME", + "name": "seconds", + "type": "integer", + "optional": true + }, + { + "command": "FREQ", + "name": "frequency", + "type": "integer", + "optional": true } - ], "since": "2.6.0", "group": "generic" diff --git a/commands/restore.md b/commands/restore.md index b6ff886d13..30b083033e 100644 --- a/commands/restore.md +++ b/commands/restore.md @@ -4,6 +4,15 @@ provided serialized value (obtained via `DUMP`). If `ttl` is 0 the key is created without any expire, otherwise the specified expire time (in milliseconds) is set. +If the `ABSTTL` modifier was used, `ttl` should represent an absolute +[Unix timestamp][hewowu] (in milliseconds) in which the key will expire. +(Redis 5.0 or greater). + +[hewowu]: http://en.wikipedia.org/wiki/Unix_time + +For eviction purposes, you may use the `IDLETIME` or `FREQ` modifiers. See +`OBJECT` for more information (Redis 5.0 or greater). + `RESTORE` will return a "Target key name is busy" error when `key` already exists unless you use the `REPLACE` modifier (Redis 3.0 or greater). From 44c6614c2663cd87cb73dc6764d16584b9a92ebb Mon Sep 17 00:00:00 2001 From: Jeff Jo Date: Sat, 14 Jul 2018 19:10:03 -0700 Subject: [PATCH 0018/1457] Clarify that keys must belong to the same slot when performing multi-key operations --- topics/cluster-spec.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md index b9a33e9259..51b093e889 100644 --- a/topics/cluster-spec.md +++ b/topics/cluster-spec.md @@ -26,10 +26,10 @@ Implemented subset Redis Cluster implements all the single key commands available in the non-distributed version of Redis. Commands performing complex multi-key operations like Set type unions or intersections are implemented as well -as long as the keys all belong to the same node. +as long as the keys all hash to the same slot. -Redis Cluster implements a concept called **hash tags** that can be used -in order to force certain keys to be stored in the same node. However during +Redis Cluster implements a concept called **hash tags** that can be used in +order to force certain keys to be stored in the same hash slot. However during manual reshardings, multi-key operations may become unavailable for some time while single key operations are always available. @@ -629,9 +629,9 @@ For example the following operation is valid: Multi-key operations may become unavailable when a resharding of the hash slot the keys belong to is in progress. -More specifically, even during a resharding the multi-key operations -targeting keys that all exist and are all still in the same node (either -the source or destination node) are still available. +More specifically, even during a resharding the multi-key operations targeting +keys that all exist and all still hash to the same slot (either the source or +destination node) are still available. Operations on keys that don't exist or are - during the resharding - split between the source and destination nodes, will generate a `-TRYAGAIN` error. 
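To make the same-slot rule above concrete, here is a sketch using `CLUSTER KEYSLOT`, which reports the hash slot a key maps to (the port and the slot numbers shown are illustrative; when a hash tag is present, only the substring between `{` and `}` is hashed):

    127.0.0.1:7000> CLUSTER KEYSLOT foo
    (integer) 12182
    127.0.0.1:7000> CLUSTER KEYSLOT {user:1000}.following
    (integer) 1649
    127.0.0.1:7000> CLUSTER KEYSLOT {user:1000}.followers
    (integer) 1649
    127.0.0.1:7000> SUNION {user:1000}.following {user:1000}.followers

Because the two tagged keys always hash to the same slot, the multi-key `SUNION` is valid in cluster mode, while mixing in `foo` would be rejected with a `CROSSSLOT` error.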
From 4b6205f230107cf20ebace8f97610f4e3ff0f5fe Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=C3=98yvind?= Date: Tue, 7 Aug 2018 12:54:54 +0200 Subject: [PATCH 0019/1457] Fix spelling mistake - syncrhonous => synchronous --- topics/replication.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/replication.md b/topics/replication.md index 9b780f6f66..bc67d85fc1 100644 --- a/topics/replication.md +++ b/topics/replication.md @@ -14,7 +14,7 @@ high performance, is the natural replication mode for the vast majority of Redis use cases. However Redis slaves asynchronously acknowledge the amount of data they received periodically with the master. So the master does not wait every time for a command to be processed by the slaves, however it knows, if needed, what
-slave already processed what command. This allows to have optional syncrhonous replication.
+slave already processed what command. This allows optional synchronous replication.

 Synchronous replication of certain data can be requested by the clients using
 the `WAIT` command. However `WAIT` is only able to ensure that there are the

From 94a4137131041a5c5555aeee480cf95968f51281 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 8 Aug 2018 19:50:41 +0200 Subject: [PATCH 0020/1457] Trademark guidelines updated. --- topics/trademark.md | 31 ++++++++++++++++--------------- 1 file changed, 16 insertions(+), 15 deletions(-) diff --git a/topics/trademark.md b/topics/trademark.md index 0e3a052bce..53177e4a34 100644 --- a/topics/trademark.md +++ b/topics/trademark.md @@ -4,10 +4,10 @@ 2. **PURPOSE** To outline the policy and guidelines for using the Redis trademark (“Mark”) and logo (“Logo”) by members of the Redis developer and user community.

-3. **WHY IS THIS IMPORTANT?** The Mark and Logo are symbols of the quality and community support associated with the open source Redis. Trademarks protect not only its owners, but its users and the entire open source community. Our community members need to know that they can rely on the quality represented by the brand. No one should use the Mark or Logo in any way that misleads anyone, either directly or by omission, or in any way that is likely to confuse or take advantage of the community, or constitutes unfair competition.
For example, you cannot say you are distributing Redis software when you are distributing a modified version of it, because people will be confused when they are not getting the same features and functionality they would get if they downloaded the software directly from us, or will think that the modified software is endorsed or sponsored by us or the community. You also cannot use the Mark or Logo on your website or in connection with any services in a way that suggests that your website is an official Redis website or service, or that suggests that we endorse your website or services. 4. **PROPER USE OF THE Redis TRADEMARKS AND LOGO.** You may do any of the following: - * a. When you use an unaltered, unmodified copy of open source Redis (the “Software”) as a data source for your application, you may use the Mark and Logo to identify your use. For avoidance of any doubt, the open source Redis software combined with, or integrated into, any other software program, including but not limited to software for offering Redis as a cloud service or orchestration software for offering Redis in containers is considered “modified” Redis software and does not entitle you to use the Mark or the Logo, except in a case of nominative use, as described below. + * a. When you use an unaltered, unmodified copy of open source Redis downloaded from https://redis.io (the “Software”) as a data source for your application, you may use the Mark and Logo to identify your use. For avoidance of any doubt, the open source Redis software combined with, or integrated into, any other software program, including but not limited to automation software for offering Redis as a cloud service or orchestration software for offering Redis in containers is considered “modified” Redis software and does not entitle you to use the Mark or the Logo, except in a case of nominative use, as described below. Integrating the Software with other software or service can introduce performance or quality control problems that can devalue the goodwill in the Redis brand and we want to be sure that such problems do not confuse users as to the quality of the product. * b. The Software is developed by and for the Redis community. If you are engaged in community advocacy, you can use the Mark but not the Logo in the context of showing support for the open source Redis project, provided that: * i. The Mark is used in a manner consistent with this policy. * ii. There is no commercial purpose behind the use and you are not offering Redis commercially under the same domain name. @@ -18,23 +18,24 @@ * i. Offering an XYZ software, which is an altered, modified or combined copy of the open source Redis software, including but not limited to offering Redis as a cloud service or as a container service, and while fully complying with the open source Redis API - you may only state that **"XYZ software is compatible with the Redis API"** No other term or description of your software is allowed. * ii. Offering an XYZ application, which uses an altered, modified or combined copy of the open source Redis software as a data source, including but not limited to using Redis as a cloud service or a container service, and while the modified Redis fully complies with the open source Redis API - you may only state that **"XYZ application is using a software which is compatible with the Redis API"**. No other term or description of your application is allowed. * iii. 
If, however, the offered XYZ software, or service based thereof, uses an altered, modified or combined copy of the open source Redis software that does not fully comply with the open source Redis API - you may not use the Mark and Logo at all. - * iv. Finally, while our previous trademark policy suggested that the reference “XYZ Software for Redis” would be a permissible nominative use of the mark, after further consideration, such a use inappropriately suggests an endorsement or sponsorship by us and the community, which we believe creates an unreasonable likelihood of confusion, so this is no longer permitted. + * iv. Finally, while our previous trademark policy suggested that the reference “XYZ Software for Redis” would be a permissible nominative use of the mark, after further consideration, such a use inappropriately suggests an endorsement or sponsorship by us and the community, which we believe creates an unreasonable likelihood of confusion, so this is no longer permitted. See more below under section 5.b. 5. **IMPROPER USE OF THE REDIS TRADEMARKS AND LOGOS**. Any use of the Mark -or Logo other than as expressly described as permitted above, is not permitted because we believe that it would likely cause impermissible public confusion. As an example, combining the open source Redis software with another software program, or orchestrating its operations (orchestrating replication, for example), may change its behavior in a non-trivial way and alter the user experience in a significant manner. Absent our prior endorsement of any other use, you should expect that we will consider any such use as infringing our legal rights. Use of the Mark that we will likely consider infringing without permission for use include: - * a. Altered or Combined Software. You may not use the Mark with any software distribution in which the open source Redis software has been altered, modified or combined with any other software program, including software for offering Redis as a cloud service or orchestration software for offering Redis in containers. In this context, phrases like “XYZ for Redis”, “Redis based”, “Redis compatible” are not allowed. However, provided that you meet the requirements of Nominative Use in sections 4.c. and 4.d. hereabove, you can state that **"XYZ software is compatible with the Redis API"**, or **"XYZ application is using a software which is compatible with the Redis API"**. - * b. Entity Names. You may not form a company, use a company name, or create a software product or service name that includes the Mark or implies any that such company is the source or sponsor of Redis. If you wish to form an entity for a user or developer group, please contact us and we will be glad to discuss a license for a suitable name. - * c. Class or Quality. You may not imply that you are providing a class or quality of Redis (e.g., "enterprise-class" or "commercial quality" or “fully managed”) in a way that implies Redis is not of that class, grade or quality, nor that other parties are not of that class, grade, or quality. - * d. False or Misleading Statements. You may not make false or misleading statements regarding your use of Redis (e.g., "we wrote the majority of the code" or "we are major contributors" or "we are committers"). - * e. Domain Names and Subdomains. You must not use Redis or any confusingly similar phrase in a domain name or subdomain. For instance “www.Redishost.com” is not allowed. 
If you wish to use such a domain name for a user or developer group, please contact us and we will be glad to discuss a license for a suitable domain name. Because of the many persons who, unfortunately, seek to spoof, swindle or deceive the community by using confusing domain names, we must be very strict about this rule.
- * f. Websites. You must not use our Mark or Logo on your website in a way that suggests that your website is an official website or that we endorse your website.
- * g. Merchandise. You must not manufacture, sell or give away merchandise items, such as T-shirts and mugs, bearing the Mark or Logo, or create any mascot for Redis. If you wish to use the Mark or Logo for a user or developer group, please contact us and we will be glad to discuss a license to do this.
- * h. Variations, takeoffs or abbreviations. You may not use a variation of the Mark for any purpose. For example, the following are not acceptable:
+or Logo other than as expressly described as permitted above, is not permitted because we believe that it would likely cause impermissible public confusion. Use of the Mark that we will likely consider infringing without permission for use include:
+ * a. Altered or Combined Software. You may not use the Mark with any software distribution in which the open source Redis software has been altered, modified or combined with any other software program, including automation software for offering Redis as a cloud service or orchestration software for offering Redis in containers. In particular, phrases like “XYZ for Redis”, “Redis based”, “Redis compatible” are not allowed. However, provided that you meet the requirements of Nominative Use in sections 4.c. and 4.d. hereabove, you can state that **"XYZ software is compatible with the Redis API"**, or **"XYZ application is using a software which is compatible with the Redis API"**.
+ * b. The terms "X for Redis" or "Redis for X" are particularly confusing and many companies misunderstand this formulation. Use of the word "for" does not, on its own, qualify the use as nominative use. "X for Y" is nominative use when it signifies that the developer of X has built X for use with Y. For example, "Adobe Acrobat for Mac" is developed by Adobe for the Mac platform. Likewise, "ServiceStack.Redis" is a C# client for Redis built by ServiceStack. But if you customize the Software for use with another application or platform, it misleads users to believe that such product or service has passed the quality control of Redis and is qualified by it.
+ * c. Entity Names. You may not form a company, use a company name, or create a software product or service name that includes the Mark or implies that any such company is the source or sponsor of Redis. If you wish to form an entity for a user or developer group, please contact us and we will be glad to discuss a license for a suitable name.
+ * d. Class or Quality. You may not imply that you are providing a class or quality of Redis (e.g., "enterprise-class" or "commercial quality" or “fully managed”) in a way that implies Redis is not of that class, grade or quality, nor that other parties are not of that class, grade, or quality.
+ * e. False or Misleading Statements. You may not make false or misleading statements regarding your use of Redis (e.g., "we wrote the majority of the code" or "we are major contributors" or "we are committers").
+ * f. Domain Names and Subdomains. You must not use Redis or any confusingly similar phrase in a domain name or subdomain.
For instance “www.Redishost.com” is not allowed. If you wish to use such a domain name for a user or developer group, please contact us and we will be glad to discuss a license for a suitable domain name. Because of the many persons who, unfortunately, seek to spoof, swindle or deceive the community by using confusing domain names, we must be very strict about this rule. + * g. Websites. You must not use our Mark or Logo on your website in a way that suggests that your website is an official website or that we endorse your website. + * h. Merchandise. You must not manufacture, sell or give away merchandise items, such as T-shirts and mugs, bearing the Mark or Logo, or create any mascot for Redis. If you wish to use the Mark or Logo for a user or developer group, please contact us and we will be glad to discuss a license to do this. + * i. Variations, takeoffs or abbreviations. You may not use a variation of the Mark for any purpose. For example, the following are not acceptable: * i. Red * ii. MyRedis * iii. RedisHost - * i. Rebranding. You may not change the Mark or Logo on a redistributed (unmodified) Software to your own brand or logo. You may not hold yourself out as the source of the Redis software, except to the extent you have modified it as allowed under the three-clause BSD license, and you make it clear that you are the source only of the modification. - * j. Combination Marks. Do not use our Mark or Logo in combination with any other marks or logos. For example Foobar Redis, or the name of your company or product typeset to look like the Redis logo. - * k. Web Tags. Do not use the Mark in a title or metatag of a web page to influence search engine rankings or result listings, rather than for discussion or advocacy of the Redis project. + * j. Rebranding. You may not change the Mark or Logo on a redistributed (unmodified) Software to your own brand or logo. You may not hold yourself out as the source of the Redis software, except to the extent you have modified it as allowed under the three-clause BSD license, and you make it clear that you are the source only of the modification. + * k. Combination Marks. Do not use our Mark or Logo in combination with any other marks or logos. For example Foobar Redis, or the name of your company or product typeset to look like the Redis logo. + * l. Web Tags. Do not use the Mark in a title or metatag of a web page to influence search engine rankings or result listings, rather than for discussion or advocacy of the Redis project. 6. **GENERAL USE INFORMATION.** * a. Attribution. The appropriate trademark symbol (i.e., TM or ® ) must appear at least with the first use of the Mark and all occurrences of the Logo. When you use the Mark or Logo, you must include a statement attributing ownership of the trademark to Redis Labs Ltd. For example, "Redis and the Redis logo are trademarks owned by Redis Labs Ltd. in the U.S. and other countries." * b. Capitalization. Always distinguish the Mark from surrounding text with at least initial capital letters or in all capital letters, e.g., as Redis or REDIS. 
From 5395c8b1486ece034dac7d35f0ceb42b33de96c4 Mon Sep 17 00:00:00 2001 From: "yuuji.yaginuma" Date: Sat, 25 Aug 2018 20:40:25 +0900 Subject: [PATCH 0021/1457] Fix license of Redis Labs modules Ref: https://redislabs.com/community/redis-modules-hub/ --- modules.json | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/modules.json b/modules.json index 91bd88a7c0..0afa44f322 100644 --- a/modules.json +++ b/modules.json @@ -41,7 +41,7 @@ },
   {
     "name": "ReJSON",
-    "license": "AGPL",
+    "license": "Apache 2.0 modified with Commons Clause",
     "repository": "https://github.com/RedisLabsModules/ReJSON",
     "description": "A JSON data type for Redis",
     "authors": [
@@ -52,7 +52,7 @@ },
   {
     "name": "Redis-ML",
-    "license": "AGPL",
+    "license": "Apache 2.0 modified with Commons Clause",
     "repository": "https://github.com/RedisLabsModules/redis-ml",
     "description": "Machine Learning Model Server",
     "authors": [
@@ -63,7 +63,7 @@ },
   {
     "name": "RediSearch",
-    "license": "AGPL",
+    "license": "Apache 2.0 modified with Commons Clause",
     "repository": "https://github.com/RedisLabsModules/RediSearch",
     "description": "Full-Text search over Redis",
     "authors": [
@@ -74,7 +74,7 @@ },
   {
     "name": "topk",
-    "license": "AGPL",
+    "license": "Apache 2.0 modified with Commons Clause",
     "repository": "https://github.com/RedisLabsModules/topk",
     "description": "An almost deterministic top k elements counter",
     "authors": [
@@ -85,7 +85,7 @@ },
   {
     "name": "countminsketch",
-    "license": "AGPL",
+    "license": "Apache 2.0 modified with Commons Clause",
     "repository": "https://github.com/RedisLabsModules/countminsketch",
     "description": "An apporximate frequency counter",
     "authors": [
@@ -96,7 +96,7 @@ },
   {
     "name": "rebloom",
-    "license": "AGPL",
+    "license": "Apache 2.0 modified with Commons Clause",
     "repository": "https://github.com/RedisLabsModules/rebloom",
     "description": "Scalable Bloom filters",
     "authors": [
@@ -186,4 +186,4 @@ ],
     "stars": 563
   }
-] \ No newline at end of file
+]

From d322db86d31fb02c8f90b9db19772728854a46ad Mon Sep 17 00:00:00 2001 From: Bobby Calderwood <8336+bobby@users.noreply.github.com> Date: Tue, 28 Aug 2018 11:57:07 -0400 Subject: [PATCH 0022/1457] Add XGROUP, XACK, XTRIM, XCLAIM, and XDEL to commands.json - Needs analysis for time "complexity" entries - Summaries need review, and the XCLAIM summary needs to be added --- commands.json | 156 ++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 156 insertions(+) diff --git a/commands.json b/commands.json index 29a181e23f..ed7a11642c 100644 --- a/commands.json +++ b/commands.json @@ -3351,6 +3351,50 @@ "since": "5.0.0", "group": "stream" },
+  "XTRIM": {
+    "summary": "Trims the stream to (approximately if '~' is passed) a certain size",
+    "complexity": "O(log(N)) with N being the number of entries in the stream prior to trim.",
+    "arguments": [
+      {
+        "name": "key",
+        "type": "key"
+      },
+      {
+        "name": "strategy",
+        "type": "enum",
+        "enum": ["MAXLEN"]
+      },
+      {
+        "name": "approx",
+        "type": "enum",
+        "enum": ["~"],
+        "optional": true
+      },
+      {
+        "name": "count",
+        "type": "integer"
+      }
+    ],
+    "since": "5.0.0",
+    "group": "stream"
+  },
+  "XDEL": {
+    "summary": "Removes the specified entries from the stream.
Returns the number of items actually deleted, that may be different from the number of IDs passed in case certain IDs do not exist.", + "complexity": "O(log(N)) with N being the number of items in the stream.", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "name": "ID", + "type": "string", + "multiple": "true" + } + ], + "since": "5.0.0", + "group": "stream" + }, "XRANGE": { "summary": "Return a range of elements in a stream, with IDs matching the specified IDs interval", "complexity": "O(log(N)+M) with N being the number of elements in the stream and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(log(N)).", @@ -3450,6 +3494,38 @@ "since": "5.0.0", "group": "stream" }, + "XGROUP": { + "summary": "Create, destroy, and manage consumer groups.", + "complexity": "", + "arguments": [ + { + "command": "CREATE", + "name": ["key", "groupname", "id-or-$"], + "type": ["key", "string", "string"], + "optional": true + }, + { + "command": "SETID", + "name": ["key", "id-or-$"], + "type": ["key", "string"], + "optional": true + }, + { + "command": "DESTROY", + "name": ["key", "groupname"], + "type": ["key", "string"], + "optional": true + }, + { + "command": "DELCONSUMER", + "name": ["key", "groupname", "consumername"], + "type": ["key", "string", "string"], + "optional": true + } + ], + "since": "5.0.0", + "group": "stream" + }, "XREADGROUP": { "summary": "Return new entries from a stream using a consumer group, or access the history of the pending entries for a given consumer. Can block.", "complexity": "For each stream mentioned: O(log(N)+M) with N being the number of elements in the stream and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(log(N)). On the other side, XADD will pay the O(N) time in order to serve the N clients blocked on the stream getting new data.", @@ -3490,6 +3566,86 @@ "since": "5.0.0", "group": "stream" }, + "XACK": { + "summary": "Marks a pending message as correctly processed. 
+ +Return value of the command is the number of messages successfully acknowledged, that is, the IDs we were actually able to resolve in the PEL.", + "complexity": "", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "name": "group", + "type": "string" + }, + { + "name": "ID", + "type": "string", + "multiple": true + } + ], + "since": "5.0.0", + "group": "stream" + }, + "XCLAIM": { + "summary": "", + "complexity": "", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "name": "group", + "type": "string" + }, + { + "name": "consumer", + "type": "string" + }, + { + "name": "min-idle-time", + "type": "string" + }, + { + "name": "ID", + "type": "string", + "multiple": true + }, + { + "command": "IDLE", + "name": "ms", + "type": "integer", + "optional": true + }, + { + "command": "TIME", + "name": "ms-unix-time", + "type": "integer", + "optional": true + }, + { + "command": "RETRYCOUNT", + "name": "count", + "type": "integer", + "optional": true + }, + { + "name": "force", + "enum": ["FORCE"], + "optional": true + }, + { + "name": "justid", + "enum": ["JUSTID"], + "optional": true + } + ], + "since": "5.0.0", + "group": "stream" + }, "XPENDING": { "summary": "Return information and entries from a stream consumer group pending entries list, that are messages fetched but never acknowledged.", "complexity": "O(log(N)+M) with N being the number of elements in the consumer group pending entries list, and M the number of elements being returned. When the command returns just the summary it runs in O(1) time assuming the list of consumers is small, otherwise there is additional O(N) time needed to iterate every consumer.", From e56f35c15e1ec560cd0c4edb8e3c464309f4bac9 Mon Sep 17 00:00:00 2001 From: Bobby Calderwood <8336+bobby@users.noreply.github.com> Date: Tue, 28 Aug 2018 12:15:55 -0400 Subject: [PATCH 0023/1457] Remove newline in docstring that was confusing jsonlint --- commands.json | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/commands.json b/commands.json index ed7a11642c..8a6badcad4 100644 --- a/commands.json +++ b/commands.json @@ -3567,9 +3567,7 @@ "group": "stream" }, "XACK": { - "summary": "Marks a pending message as correctly processed. - -Return value of the command is the number of messages successfully acknowledged, that is, the IDs we were actually able to resolve in the PEL.", + "summary": "Marks a pending message as correctly processed. 
Return value of the command is the number of messages successfully acknowledged, that is, the IDs we were actually able to resolve in the PEL.",
     "complexity": "",
     "arguments": [

From 691d9df0d5aed7da1065479bc83f09c99211c29d Mon Sep 17 00:00:00 2001 From: Bobby Calderwood <8336+bobby@users.noreply.github.com> Date: Tue, 4 Sep 2018 22:23:55 -0400 Subject: [PATCH 0024/1457] Add XINFO to commands.json --- commands.json | 32 ++++++++++++++++++++++++++++++++ 1 file changed, 32 insertions(+) diff --git a/commands.json b/commands.json index 8a6badcad4..6828a10b50 100644 --- a/commands.json +++ b/commands.json @@ -3330,6 +3330,38 @@ "since": "2.8.0", "group": "sorted_set" },
+  "XINFO": {
+    "summary": "Get information on streams and consumer groups",
+    "complexity": "",
+    "arguments": [
+      {
+        "command": "CONSUMERS",
+        "name": ["key", "groupname"],
+        "type": ["key", "string"],
+        "optional": true
+      },
+      {
+        "command": "GROUPS",
+        "name": "key",
+        "type": "key",
+        "optional": true
+      },
+      {
+        "command": "STREAM",
+        "name": "key",
+        "type": "key",
+        "optional": true
+      },
+      {
+        "name": "help",
+        "type": "enum",
+        "enum": ["HELP"],
+        "optional": true
+      }
+    ],
+    "since": "5.0.0",
+    "group": "stream"
+  },
   "XADD": {
     "summary": "Appends a new entry to a stream",
     "complexity": "O(log(N)) with N being the number of items already into the stream.",

From 90f921152911f555e117fff629b9d7bdd7376df3 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 6 Sep 2018 10:26:29 +0200 Subject: [PATCH 0025/1457] Document that Redis 5 defaults to scripts effect replication. --- commands/eval.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/commands/eval.md b/commands/eval.md index e77cc71d0b..bf44ffa1b3 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -325,6 +325,8 @@ SCRIPT currently accepts three different commands:

 ## Scripts as pure functions

+*Note: starting with Redis 5, scripts are always replicated as effects, not by sending the script verbatim. So the following section is mostly applicable to Redis version 4 or older.*
+
 A very important part of scripting is writing scripts that are pure functions.
 Scripts executed in a Redis instance are, by default, replicated on slaves
 and into the AOF file by sending the script itself -- not the resulting
@@ -466,6 +468,8 @@ output.

 ## Replicating commands instead of scripts

+*Note: starting with Redis 5, the replication method described in this section (script effects replication) is the default and does not need to be explicitly enabled.*
+
 Starting with Redis 3.2, it is possible to select an
 alternative replication method. Instead of replication whole scripts, we
 can just replicate single write commands generated by the script.

From 5a6f5ec88da38a6d529a85f647839b15318f03d3 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 13 Sep 2018 11:04:00 +0200 Subject: [PATCH 0026/1457] Clients.json: remove slave term.
--- clients.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 5a38d0c4f9..9223037763 100644 --- a/clients.json +++ b/clients.json @@ -1345,7 +1345,7 @@ "language": "C++", "url": "http://xredis.0xsky.com/", "repository": "https://github.com/0xsky/xredis",
-    "description": "Redis C++ client with data slice storage, Redis cluster, connection pool, master slave connection, read/write separation; requires hiredis only",
+    "description": "Redis C++ client with data slice storage, Redis cluster, connection pool, master replica connection, read/write separation; requires hiredis only",
     "authors": ["0xsky"],
     "active": true
   },

From 52eea3db3aa0df345121412e7b96156fab709d95 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 13 Sep 2018 11:07:16 +0200 Subject: [PATCH 0027/1457] Commands.json: remove slave term. Add aliases. --- commands.json | 41 ++++++++++++++++++++++++++++++++++------- 1 file changed, 34 insertions(+), 7 deletions(-) diff --git a/commands.json b/commands.json index 29a181e23f..3061ab0cff 100644 --- a/commands.json +++ b/commands.json @@ -361,7 +361,7 @@ "group": "cluster" },
   "CLUSTER FAILOVER": {
-    "summary": "Forces a slave to perform a manual failover of its master.",
+    "summary": "Forces a replica to perform a manual failover of its master.",
     "complexity": "O(1)",
     "arguments": [
       {
@@ -443,7 +443,7 @@ "group": "cluster" },
   "CLUSTER REPLICATE": {
-    "summary": "Reconfigure a node as a slave of the specified master node",
+    "summary": "Reconfigure a node as a replica of the specified master node",
     "complexity": "O(1)",
     "arguments": [
       {
@@ -509,7 +509,7 @@ "group": "cluster" },
   "CLUSTER SLAVES": {
-    "summary": "List slave nodes of the specified master node",
+    "summary": "List replica nodes of the specified master node",
     "complexity": "O(1)",
     "arguments": [
       {
@@ -520,6 +520,18 @@ "since": "3.0.0", "group": "cluster" },
+  "CLUSTER REPLICAS": {
+    "summary": "List replica nodes of the specified master node",
+    "complexity": "O(1)",
+    "arguments": [
+      {
+        "name": "node-id",
+        "type": "string"
+      }
+    ],
+    "since": "5.0.0",
+    "group": "cluster"
+  },
   "CLUSTER SLOTS": {
     "summary": "Get array of Cluster slot to node mappings",
     "complexity": "O(N) where N is the total number of Cluster nodes",
@@ -1951,13 +1963,13 @@ "group": "generic" },
   "READONLY": {
-    "summary": "Enables read queries for a connection to a cluster slave node",
+    "summary": "Enables read queries for a connection to a cluster replica node",
     "complexity": "O(1)",
     "since": "3.0.0",
     "group": "cluster"
   },
   "READWRITE": {
-    "summary": "Disables read queries for a connection to a cluster slave node",
+    "summary": "Disables read queries for a connection to a cluster replica node",
     "complexity": "O(1)",
     "since": "3.0.0",
     "group": "cluster"
@@ -2376,7 +2388,7 @@ "group": "set" },
   "SLAVEOF": {
-    "summary": "Make the server a slave of another instance, or promote it as master",
+    "summary": "Make the server a replica of another instance, or promote it as master. Deprecated starting with Redis 5.
Use REPLICAOF instead.", "arguments": [ { "name": "host", @@ -2390,6 +2402,21 @@ "since": "1.0.0", "group": "server" }, + "REPLICAOF": { + "summary": "Make the server a replica of another instance, or promote it as master.", + "arguments": [ + { + "name": "host", + "type": "string" + }, + { + "name": "port", + "type": "string" + } + ], + "since": "5.0.0", + "group": "server" + }, "SLOWLOG": { "summary": "Manages the Redis slow queries log", "arguments": [ @@ -2694,7 +2721,7 @@ "complexity": "O(1)", "arguments": [ { - "name": "numslaves", + "name": "numreplicas", "type": "integer" }, { From dabc29ce9b9902e5dd8fb6e7e9ee75d50b3a1601 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 13 Sep 2018 11:11:45 +0200 Subject: [PATCH 0028/1457] Add the REPLICAOF command. Deprecate SLAVEOF in doc. --- commands/replicaof.md | 11 +++++++++++ commands/slaveof.md | 18 ++++++++---------- 2 files changed, 19 insertions(+), 10 deletions(-) create mode 100644 commands/replicaof.md diff --git a/commands/replicaof.md b/commands/replicaof.md new file mode 100644 index 0000000000..d202cf5edb --- /dev/null +++ b/commands/replicaof.md @@ -0,0 +1,11 @@ +The `REPLICAOF` command can change the replication settings of a replica on the fly. + +If a Redis server is already acting as replica, the command `REPLICAOF` NO ONE will turn off the replication, turning the Redis server into a MASTER. In the proper form `REPLICAOF` hostname port will make the server a replica of another server listening at the specified hostname and port. + +If a server is already a replica of some master, `REPLICAOF` hostname port will stop the replication against the old server and start the synchronization against the new one, discarding the old dataset. + +The form `REPLICAOF` NO ONE will stop replication, turning the server into a MASTER, but will not discard the replication. So, if the old master stops working, it is possible to turn the replica into a master and set the application to use this new master in read/write. Later when the other Redis server is fixed, it can be reconfigured to work as a replica. + +@return + +@simple-string-reply diff --git a/commands/slaveof.md b/commands/slaveof.md index 2dc61a8147..04b48eaaca 100644 --- a/commands/slaveof.md +++ b/commands/slaveof.md @@ -1,24 +1,22 @@ -The `SLAVEOF` command can change the replication settings of a slave on the fly. -If a Redis server is already acting as slave, the command `SLAVEOF` NO ONE will +**A note about the word slave used in this man page and command name**: Starting with Redis 5 this command: starting with Redis version 5, if not for backward compatibility, the Redis project no longer uses the word slave. Please use the new command `REPLICAOF`. The command `SLAVEOF` will continue to work for backward compatibility. + +The `SLAVEOF` command can change the replication settings of a replica on the fly. +If a Redis server is already acting as replica, the command `SLAVEOF` NO ONE will turn off the replication, turning the Redis server into a MASTER. -In the proper form `SLAVEOF` hostname port will make the server a slave of +In the proper form `SLAVEOF` hostname port will make the server a replica of another server listening at the specified hostname and port. -If a server is already a slave of some master, `SLAVEOF` hostname port will stop +If a server is already a replica of some master, `SLAVEOF` hostname port will stop the replication against the old server and start the synchronization against the new one, discarding the old dataset. 
The form `SLAVEOF` NO ONE will stop replication, turning the server into a
MASTER, but will not discard the previously replicated dataset.
-So, if the old master stops working, it is possible to turn the slave into a
+So, if the old master stops working, it is possible to turn the replica into a
master and set the application to use this new master in read/write.
Later when the other Redis server is fixed, it can be reconfigured to work as a
-slave.
+replica.

@return

@simple-string-reply
-
-**A note about slavery**: it's unfortunate that originally the master-slave terminology was picked for databases. When Redis was designed the existing terminology was used without much analysis of alternatives, however a **SLAVEOF NO ONE** command was added as a freedom message. Instead of changing the terminology, which would require breaking backward compatibility in the API and `INFO` output, we want to use this page to remind you that slavery is both **a crime against humanity today** and something that has been perpetuated [throughout all human history](https://en.wikipedia.org/wiki/Slavery).
-
-*If slavery is not wrong, nothing is wrong.* -- Abraham Lincoln

From 9788aa5eb5c408d181b271135101a8eb1c7e5eed Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 13 Sep 2018 11:13:46 +0200
Subject: [PATCH 0029/1457] Add the CLUSTER REPLICAS command. Deprecate CLUSTER SLAVES in doc.

---
 commands/cluster-slaves.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/commands/cluster-slaves.md b/commands/cluster-slaves.md
index bec20fa29c..749bc6aad8 100644
--- a/commands/cluster-slaves.md
+++ b/commands/cluster-slaves.md
@@ -1,10 +1,12 @@
-The command provides a list of slave nodes replicating from the specified
+**A note about the word slave used in this man page and command name**: starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Please use the new command `CLUSTER REPLICAS`. The command `CLUSTER SLAVES` will continue to work for backward compatibility.
+
+The command provides a list of replica nodes replicating from the specified
master node. The list is provided in the same format used by `CLUSTER NODES` (please refer to its documentation for the specification of the format).

The command will fail if the specified node is not known or if it is not
a master according to the node table of the node receiving the command.

-Note that if a slave is added, moved, or removed from a given master node,
+Note that if a replica is added, moved, or removed from a given master node,
and we ask `CLUSTER SLAVES` to a node that has not yet received the
configuration update, it may show stale information. However eventually
(in a matter of seconds if there are no network partitions) all the nodes

From f560405b3cac0e3058bb6c43af74d71065489599 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 13 Sep 2018 11:13:32 +0200
Subject: [PATCH 0030/1457] Tools.js: remove slave word.
--- tools.json | 11 ++--------- 1 file changed, 2 insertions(+), 9 deletions(-) diff --git a/tools.json b/tools.json index 25b021e4b9..ab4aba9c93 100644 --- a/tools.json +++ b/tools.json @@ -88,7 +88,7 @@ "name": "Redis-sync", "language": "Javascript", "repository": "https://github.com/pconstr/redis-sync", - "description": "A node.js Redis replication slave toolkit", + "description": "A node.js Redis replication toolkit", "authors": ["pconstr"] }, { @@ -159,7 +159,7 @@ "name": "Redis_failover", "language": "Ruby", "repository": "https://github.com/ryanlecompte/redis_failover", - "description": "Redis Failover is a ZooKeeper-based automatic master/slave failover solution for Ruby.", + "description": "Redis Failover is a ZooKeeper-based automatic master/replica failover solution for Ruby.", "authors": ["ryanlecompte"] }, { @@ -460,13 +460,6 @@ "description": "Redis-tool - Little helpers for Redis (ztrim, del-all, rename)", "authors": ["fgribreau"] }, - { - "name": "Redis_failover", - "language": "Ruby", - "repository": "https://github.com/ryanlecompte/redis_failover", - "description": "Redis Failover is a ZooKeeper-based automatic master/slave failover solution for Ruby.", - "authors": ["ryanlecompte"] - }, { "name": "redis-in-labview", "language": "LabVIEW", From 875e85d6bb80cbab41bc4bac01d906f1f698ca2d Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 13 Sep 2018 11:22:03 +0200 Subject: [PATCH 0031/1457] Actually add cluster-replicas.md. --- commands/cluster-replicas.md | 15 +++++++++++++++ 1 file changed, 15 insertions(+) create mode 100644 commands/cluster-replicas.md diff --git a/commands/cluster-replicas.md b/commands/cluster-replicas.md new file mode 100644 index 0000000000..4e6192e117 --- /dev/null +++ b/commands/cluster-replicas.md @@ -0,0 +1,15 @@ +The command provides a list of replica nodes replicating from the specified +master node. The list is provided in the same format used by `CLUSTER NODES` (please refer to its documentation for the specification of the format). + +The command will fail if the specified node is not known or if it is not +a master according to the node table of the node receiving the command. + +Note that if a replica is added, moved, or removed from a given master node, +and we ask `CLUSTER REPLICAS` to a node that has not yet received the +configuration update, it may show stale information. However eventually +(in a matter of seconds if there are no network partitions) all the nodes +will agree about the set of nodes associated with a given master. + +@return + +The command returns data in the same format as `CLUSTER NODES`. From 14c6e6eeb965a295c5846a50f23a1ffddf0f2de9 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 13 Sep 2018 11:41:16 +0200 Subject: [PATCH 0032/1457] Substitute everywhere was possible slave with replica. 
--- commands/bitop.md | 2 +- commands/client-kill.md | 6 ++++-- commands/client-list.md | 4 ++-- commands/client-pause.md | 8 +++---- commands/cluster-failover.md | 28 ++++++++++++------------- commands/cluster-forget.md | 4 ++-- commands/cluster-nodes.md | 16 +++++++------- commands/cluster-replicate.md | 14 ++++++------- commands/cluster-reset.md | 2 +- commands/eval.md | 25 +++++++++++----------- commands/expire.md | 6 +++--- commands/georadius.md | 4 ++-- commands/georadiusbymember.md | 2 +- commands/info.md | 39 +++++++++++++++++------------------ commands/memory-stats.md | 4 +++- commands/readonly.md | 14 ++++++------- commands/role.md | 18 +++++++++------- commands/shutdown.md | 2 +- commands/wait.md | 30 +++++++++++++-------------- 19 files changed, 117 insertions(+), 111 deletions(-) diff --git a/commands/bitop.md b/commands/bitop.md index 794b2a8b66..95155b693f 100644 --- a/commands/bitop.md +++ b/commands/bitop.md @@ -58,5 +58,5 @@ bitmaps][hbgc212fermurb]" for a interesting use cases. Care should be taken when running it against long input strings. For real-time metrics and statistics involving large inputs a good approach is -to use a slave (with read-only option disabled) where the bit-wise +to use a replica (with read-only option disabled) where the bit-wise operations are performed to avoid blocking the master instance. diff --git a/commands/client-kill.md b/commands/client-kill.md index 879231ac05..092d07d897 100644 --- a/commands/client-kill.md +++ b/commands/client-kill.md @@ -17,11 +17,13 @@ instead of killing just by address. The following filters are available: * `CLIENT KILL TYPE type`, where *type* is one of `normal`, `master`, `slave` and `pubsub` (the `master` type is available from v3.2). This closes the connections of **all the clients** in the specified class. Note that clients blocked into the `MONITOR` command are considered to belong to the `normal` class. * `CLIENT KILL SKIPME yes/no`. By default this option is set to `yes`, that is, the client calling the command will not get killed, however setting this option to `no` will have the effect of also killing the client calling the command. +**Note: starting with Redis 5 the project is no longer using the slave word. You can use `TYPE replica` instead, however the old form is still supported for backward compatibility.** + It is possible to provide multiple filters at the same time. The command will handle multiple filters via logical AND. For example: - CLIENT KILL addr 127.0.0.1:6379 type slave + CLIENT KILL addr 127.0.0.1:12345 type pubsub -is valid and will kill only a slaves with the specified address. This format containing multiple filters is rarely useful currently. +is valid and will kill only a pubsub client with the specified address. This format containing multiple filters is rarely useful currently. When the new form is used the command no longer returns `OK` or an error, but instead the number of killed clients, that may be zero. 
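As a sketch, a session combining the old and new forms might look like this (the address and the reply counts are illustrative):

    > CLIENT KILL ADDR 127.0.0.1:12345
    (integer) 1
    > CLIENT KILL TYPE replica SKIPME no
    (integer) 2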
diff --git a/commands/client-list.md b/commands/client-list.md
index d9a91f327b..d7dfca6de0 100644
--- a/commands/client-list.md
+++ b/commands/client-list.md
@@ -32,8 +32,8 @@ Here is the meaning of the fields:
The client flags can be a combination of:

```
-O: the client is a slave in MONITOR mode
-S: the client is a normal slave server
+O: the client is a client in MONITOR mode
+S: the client is a replica node connection to this instance
M: the client is a master
x: the client is in a MULTI/EXEC context
b: the client is waiting in a blocking operation
diff --git a/commands/client-pause.md b/commands/client-pause.md
index 957163fff9..eb7a2f10c4 100644
--- a/commands/client-pause.md
+++ b/commands/client-pause.md
@@ -2,18 +2,18 @@

The command performs the following actions:

-* It stops processing all the pending commands from normal and pub/sub clients. However interactions with slaves will continue normally.
+* It stops processing all the pending commands from normal and pub/sub clients. However interactions with replicas will continue normally.
* However it returns OK to the caller ASAP, so the `CLIENT PAUSE` command execution is not paused by itself.
* When the specified amount of time has elapsed, all the clients are unblocked: this will trigger the processing of all the commands accumulated in the query buffer of every client during the pause.

This command is useful as it makes it possible to switch clients from a Redis instance to another one in a controlled way. For example during an instance upgrade the system administrator could do the following:

* Pause the clients using `CLIENT PAUSE`
-* Wait a few seconds to make sure the slaves processed the latest replication stream from the master.
-* Turn one of the slaves into a master.
+* Wait a few seconds to make sure the replicas processed the latest replication stream from the master.
+* Turn one of the replicas into a master.
* Reconfigure clients to connect with the new master.

-It is possible to send `CLIENT PAUSE` in a MULTI/EXEC block together with the `INFO replication` command in order to get the current master offset at the time the clients are blocked. This way it is possible to wait for a specific offset in the slave side in order to make sure all the replication stream was processed.
+It is possible to send `CLIENT PAUSE` in a MULTI/EXEC block together with the `INFO replication` command in order to get the current master offset at the time the clients are blocked. This way it is possible to wait for a specific offset in the replica side in order to make sure all the replication stream was processed.

Since Redis 3.2.10 / 4.0.0, this command also prevents keys from being evicted or
expired during the time clients are paused. This way the dataset is guaranteed
diff --git a/commands/cluster-failover.md b/commands/cluster-failover.md
index 2c4df76322..45c584ba44 100644
--- a/commands/cluster-failover.md
+++ b/commands/cluster-failover.md
@@ -1,50 +1,50 @@
-This command, that can only be sent to a Redis Cluster slave node, forces
-the slave to start a manual failover of its master instance.
+This command, which can only be sent to a Redis Cluster replica node, forces
+the replica to start a manual failover of its master instance.
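In its basic form the command takes no arguments; a minimal sketch of a session against a replica node:

    > CLUSTER FAILOVER
    OK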
A manual failover is a special kind of failover that is usually executed when
there are no actual failures, but we wish to swap the current master with one
-of its slaves (which is the node we send the command to), in a safe way,
+of its replicas (which is the node we send the command to), in a safe way,
without any window for data loss. It works in the following way:

-1. The slave tells the master to stop processing queries from clients.
-2. The master replies to the slave with the current *replication offset*.
-3. The slave waits for the replication offset to match on its side, to make sure it processed all the data from the master before it continues.
-4. The slave starts a failover, obtains a new configuration epoch from the majority of the masters, and broadcasts the new configuration.
+1. The replica tells the master to stop processing queries from clients.
+2. The master replies to the replica with the current *replication offset*.
+3. The replica waits for the replication offset to match on its side, to make sure it processed all the data from the master before it continues.
+4. The replica starts a failover, obtains a new configuration epoch from the majority of the masters, and broadcasts the new configuration.
5. The old master receives the configuration update: unblocks its clients and starts replying with redirection messages so that they'll continue the chat with the new master.

This way clients are moved away from the old master to the new master
-atomically and only when the slave that is turning into the new master
+atomically and only when the replica that is turning into the new master
has processed all of the replication stream from the old master.

## FORCE option: manual failover when the master is down

The command behavior can be modified by two options: **FORCE** and **TAKEOVER**.

-If the **FORCE** option is given, the slave does not perform any handshake
+If the **FORCE** option is given, the replica does not perform any handshake
with the master, that may not be reachable, but instead just starts a
failover ASAP starting from point 4. This is useful when we want to start
a manual failover while the master is no longer reachable.

However using **FORCE** we still need the majority of masters to be available
in order to authorize the failover and generate a new configuration epoch
-for the slave that is going to become master.
+for the replica that is going to become master.

## TAKEOVER option: manual failover without cluster consensus

-There are situations where this is not enough, and we want a slave to failover
+There are situations where this is not enough, and we want a replica to failover
without any agreement with the rest of the cluster. A real world use case
-for this is to mass promote slaves in a different data center to masters
+for this is to mass promote replicas in a different data center to masters
in order to perform a data center switch, while all the masters are down
or partitioned away.

The **TAKEOVER** option implies everything **FORCE** implies, but also does
-not uses any cluster authorization in order to failover. A slave receiving
+not use any cluster authorization in order to failover. A replica receiving
`CLUSTER FAILOVER TAKEOVER` will instead:

1. Generate a new `configEpoch` unilaterally, just taking the current greatest epoch available and incrementing it if its local configuration epoch is not already the greatest.
2.
Assign itself all the hash slots of its master, and propagate the new configuration to every node which is reachable ASAP, and eventually to every other node. -Note that **TAKEOVER violates the last-failover-wins principle** of Redis Cluster, since the configuration epoch generated by the slave violates the normal generation of configuration epochs in several ways: +Note that **TAKEOVER violates the last-failover-wins principle** of Redis Cluster, since the configuration epoch generated by the replica violates the normal generation of configuration epochs in several ways: 1. There is no guarantee that it is actually the higher configuration epoch, since, for example, we can use the **TAKEOVER** option within a minority, nor any message exchange is performed to generate the new configuration epoch. 2. If we generate a configuration epoch which happens to collide with another instance, eventually our configuration epoch, or the one of another instance with our same epoch, will be moved away using the *configuration epoch collision resolution algorithm*. diff --git a/commands/cluster-forget.md b/commands/cluster-forget.md index 63b69a0772..2f16cd79de 100644 --- a/commands/cluster-forget.md +++ b/commands/cluster-forget.md @@ -7,7 +7,7 @@ Because when a given node is part of the cluster, all the other nodes participating in the cluster knows about it, in order for a node to be completely removed from a cluster, the `CLUSTER FORGET` command must be sent to all the remaining nodes, regardless of the fact they are masters -or slaves. +or replicas. However the command cannot simply drop the node from the internal node table of the node receiving the command, it also implements a ban-list, not @@ -49,7 +49,7 @@ we want to remove a node. The command does not succeed and returns an error in the following cases: 1. The specified node ID is not found in the nodes table. -2. The node receiving the command is a slave, and the specified node ID identifies its current master. +2. The node receiving the command is a replica, and the specified node ID identifies its current master. 3. The node ID identifies the same node we are sending the command to. @return diff --git a/commands/cluster-nodes.md b/commands/cluster-nodes.md index 87985efce8..5e74d0b694 100644 --- a/commands/cluster-nodes.md +++ b/commands/cluster-nodes.md @@ -40,10 +40,10 @@ The meaning of each filed is the following: 1. `id`: The node ID, a 40 characters random string generated when a node is created and never changed again (unless `CLUSTER RESET HARD` is used). 2. `ip:port`: The node address where clients should contact the node to run queries. 3. `flags`: A list of comma separated flags: `myself`, `master`, `slave`, `fail?`, `fail`, `handshake`, `noaddr`, `noflags`. Flags are explained in detail in the next section. -4. `master`: If the node is a slave, and the master is known, the master node ID, otherwise the "-" character. +4. `master`: If the node is a replica, and the master is known, the master node ID, otherwise the "-" character. 5. `ping-sent`: Milliseconds unix time at which the currently active ping was sent, or zero if there are no pending pings. 6. `pong-recv`: Milliseconds unix time the last pong was received. -7. `config-epoch`: The configuration epoch (or version) of the current node (or of the current master if the node is a slave). Each time there is a failover, a new, unique, monotonically increasing configuration epoch is created. 
If multiple nodes claim to serve the same hash slots, the one with higher configuration epoch wins. +7. `config-epoch`: The configuration epoch (or version) of the current node (or of the current master if the node is a replica). Each time there is a failover, a new, unique, monotonically increasing configuration epoch is created. If multiple nodes claim to serve the same hash slots, the one with higher configuration epoch wins. 8. `link-state`: The state of the link used for the node-to-node cluster bus. We use this link to communicate with the node. Can be `connected` or `disconnected`. 9. `slot`: A hash slot number or range. Starting from argument number 9, but there may be up to 16384 entries in total (limit never reached). This is the list of hash slots served by this node. If the entry is just a number, is parsed as such. If it is a range, it is in the form `start-end`, and means that the node is responsible for all the hash slots from `start` to `end` including the start and end values. @@ -51,7 +51,7 @@ Meaning of the flags (field number 3): * `myself`: The node you are contacting. * `master`: Node is a master. -* `slave`: Node is a slave. +* `slave`: Node is a replica. * `fail?`: Node is in `PFAIL` state. Not reachable for the node you are contacting, but still logically reachable (not in `FAIL` state). * `fail`: Node is in `FAIL` state. It was not reachable for multiple nodes that promoted the `PFAIL` state to `FAIL`. * `handshake`: Untrusted node, we are handshaking. @@ -60,12 +60,12 @@ Meaning of the flags (field number 3): ## Notes on published config epochs -Slaves broadcast their master's config epochs (in order to get an `UPDATE` +Replicas broadcast their master's config epochs (in order to get an `UPDATE` message if they are found to be stale), so the real config epoch of the -slave (which is meaningless more or less, since they don't serve hash slots) +replica (which is meaningless more or less, since they don't serve hash slots) can be only obtained checking the node flagged as `myself`, which is the entry -of the node we are asking to generate `CLUSTER NODES` output. The other slaves -epochs reflect what they publish in heartbeat packets, which is, the +of the node we are asking to generate `CLUSTER NODES` output. The other +replicas epochs reflect what they publish in heartbeat packets, which is, the configuration epoch of the masters they are currently replicating. ## Special slot entries @@ -105,3 +105,5 @@ Note that: @return @bulk-string-reply: The serialized cluster configuration. + +**A note about the word slave used in this man page and command name**: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated. diff --git a/commands/cluster-replicate.md b/commands/cluster-replicate.md index 555634acd6..5b403aaa8b 100644 --- a/commands/cluster-replicate.md +++ b/commands/cluster-replicate.md @@ -1,25 +1,25 @@ -The command reconfigures a node as a slave of the specified master. +The command reconfigures a node as a replica of the specified master. If the node receiving the command is an *empty master*, as a side effect -of the command, the node role is changed from master to slave. +of the command, the node role is changed from master to replica. 
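A quick sketch of the call, where the 40-character node ID is made up for illustration:

    > CLUSTER REPLICATE 3fc0d5ea0f1a8a2b33096f1b71b7b3f78a4e3d1c
    OK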
-Once a node is turned into the slave of another master node, there is no need +Once a node is turned into the replica of another master node, there is no need to inform the other cluster nodes about the change: heartbeat packets exchanged between nodes will propagate the new configuration automatically. -A slave will always accept the command, assuming that: +A replica will always accept the command, assuming that: 1. The specified node ID exists in its nodes table. 2. The specified node ID does not identify the instance we are sending the command to. 3. The specified node ID is a master. -If the node receiving the command is not already a slave, but is a master, -the command will only succeed, and the node will be converted into a slave, +If the node receiving the command is not already a replica, but is a master, +the command will only succeed, and the node will be converted into a replica, only if the following additional conditions are met: 1. The node is not serving any hash slots. 2. The node is empty, no keys are stored at all in the key space. -If the command succeeds the new slave will immediately try to contact its master in order to replicate from it. +If the command succeeds the new replica will immediately try to contact its master in order to replicate from it. @return diff --git a/commands/cluster-reset.md b/commands/cluster-reset.md index 5eb4e0fac7..02ffe9eb95 100644 --- a/commands/cluster-reset.md +++ b/commands/cluster-reset.md @@ -8,7 +8,7 @@ Effects on the node: 1. All the other nodes in the cluster are forgotten. 2. All the assigned / open slots are reset, so the slots-to-nodes mapping is totally cleared. -3. If the node is a slave it is turned into an (empty) master. Its dataset is flushed, so at the end the node will be an empty master. +3. If the node is a replica it is turned into an (empty) master. Its dataset is flushed, so at the end the node will be an empty master. 4. **Hard reset only**: a new Node ID is generated. 5. **Hard reset only**: `currentEpoch` and `configEpoch` vars are set to 0. 6. The new configuration is persisted on disk in the node cluster configuration file. diff --git a/commands/eval.md b/commands/eval.md index bf44ffa1b3..d4593f4e11 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -328,14 +328,14 @@ SCRIPT currently accepts three different commands: *Note: starting with Redis 5, scripts are always replicated as effects and not sending the script verbatim. So the following section is mostly applicable to Redis version 4 or older.* A very important part of scripting is writing scripts that are pure functions. -Scripts executed in a Redis instance are, by default, replicated on slaves -and into the AOF file by sending the script itself -- not the resulting +Scripts executed in a Redis instance are, by default, propagated to replicas +and to the AOF file by sending the script itself -- not the resulting commands. The reason is that sending a script to another Redis instance is often much faster than sending the multiple commands the script generates, so if the client is sending many scripts to the master, converting the scripts into -individual commands for the slave / AOF would result in too much bandwidth +individual commands for the replica / AOF would result in too much bandwidth for the replication link or the Append Only File (and also too much CPU since dispatching a command received via network is a lot more work for Redis compared to dispatching a command invoked by Lua scripts). 
@@ -458,7 +458,7 @@ changing one of the arguments in every invocation, generating the random seed client-side. The seed will be propagated as one of the arguments both in the replication link and in the Append Only File, guaranteeing that the same changes will be -generated when the AOF is reloaded or when the slave processes the script. +generated when the AOF is reloaded or when the replica processes the script. Note: an important part of this behavior is that the PRNG that Redis implements as `math.random` and `math.randomseed` is guaranteed to have the same output @@ -479,12 +479,12 @@ In this replication mode, while Lua scripts are executed, Redis collects all the commands executed by the Lua scripting engine that actually modify the dataset. When the script execution finishes, the sequence of commands that the script generated are wrapped into a MULTI / EXEC transaction and -are sent to slaves and AOF. +are sent to replicas and AOF. This is useful in several ways depending on the use case: * When the script is slow to compute, but the effects can be summarized by -a few write commands, it is a shame to re-compute the script on the slaves +a few write commands, it is a shame to re-compute the script on the replicas or when reloading the AOF. In this case to replicate just the effect of the script is much better. * When script effects replication is enabled, the controls about non @@ -505,9 +505,9 @@ is used. ## Selective replication of commands When script effects replication is selected (see the previous section), it -is possible to have more control in the way commands are replicated to slaves +is possible to have more control in the way commands are replicated to replicas and AOF. This is a very advanced feature since **a misuse can do damage** by -breaking the contract that the master, slaves, and AOF, all must contain the +breaking the contract that the master, replicas, and AOF, all must contain the same logical content. However this is a useful feature since, sometimes, we need to execute certain @@ -528,13 +528,14 @@ an error if called when script effects replication is disabled. The command can be called with four different arguments: - redis.set_repl(redis.REPL_ALL) -- Replicate to AOF and slaves. + redis.set_repl(redis.REPL_ALL) -- Replicate to AOF and replicas. redis.set_repl(redis.REPL_AOF) -- Replicate only to AOF. - redis.set_repl(redis.REPL_SLAVE) -- Replicate only to slaves. + redis.set_repl(redis.REPL_REPLICA) -- Replicate only to replicas (Redis >= 5) + redis.set_repl(redis.REPL_SLAVE) -- Used for backward compatibility, the same as REPL_REPLICA. redis.set_repl(redis.REPL_NONE) -- Don't replicate at all. By default the scripting engine is always set to `REPL_ALL`. By calling -this function the user can switch on/off AOF and or slaves replication, and +this function the user can switch on/off AOF and or replicas propagation, and turn them back later at her/his wish. A simple example follows: @@ -547,7 +548,7 @@ A simple example follows: redis.call('set','C','3') After running the above script, the result is that only keys A and C -will be created on slaves and AOF. +will be created on replicas and AOF. ## Global variables protection diff --git a/commands/expire.md b/commands/expire.md index 2006a9dcce..fbd86172a2 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -162,12 +162,12 @@ second divided by 4. 
In order to obtain a correct behavior without sacrificing consistency, when a
key expires, a `DEL` operation is synthesized in the AOF file and sent to
-the attached slaves.
+all the attached replica nodes.
This way the expiration process is centralized in the master instance, and there
is no chance of consistency errors.

-However while the slaves connected to a master will not expire keys
+However while the replicas connected to a master will not expire keys
independently (but will wait for the `DEL` coming from the master), they'll
still take the full state of the expires existing in the dataset, so when a
-slave is elected to a master it will be able to expire the keys independently,
+replica is elected to master it will be able to expire the keys independently,
fully acting as a master.
diff --git a/commands/georadius.md b/commands/georadius.md
index 87585d02b9..6ab77716a2 100644
--- a/commands/georadius.md
+++ b/commands/georadius.md
@@ -43,9 +43,9 @@ So for example the command `GEORADIUS Sicily 15 37 200 km WITHCOORD WITHDIST` wi

## Read only variants

-Since `GEORADIUS` and `GEORADIUSBYMEMBER` have a `STORE` and `STOREDIST` option they are technically flagged as writing commands in the Redis command table. For this reason read-only slaves will flag them, and Redis Cluster slaves will redirect them to the master instance even if the connection is in read only mode (See the `READONLY` command of Redis Cluster).
+Since `GEORADIUS` and `GEORADIUSBYMEMBER` have a `STORE` and `STOREDIST` option they are technically flagged as writing commands in the Redis command table. For this reason read-only replicas will flag them, and Redis Cluster replicas will redirect them to the master instance even if the connection is in read only mode (See the `READONLY` command of Redis Cluster).

-Breaking the compatibility with the past was considered but rejected, at least for Redis 4.0, so instead two read only variants of the commands were added. They are exactly like the original commands but refuse the `STORE` and `STOREDIST` options. The two variants are called `GEORADIUS_RO` and `GEORADIUSBYMEMBER_RO`, and can safely be used in slaves.
+Breaking the compatibility with the past was considered but rejected, at least for Redis 4.0, so instead two read only variants of the commands were added. They are exactly like the original commands but refuse the `STORE` and `STOREDIST` options. The two variants are called `GEORADIUS_RO` and `GEORADIUSBYMEMBER_RO`, and can safely be used in replicas.

Both commands were introduced in Redis 3.2.10 and Redis 4.0.0 respectively.

diff --git a/commands/georadiusbymember.md b/commands/georadiusbymember.md
index ad10745b31..5eab55d831 100644
--- a/commands/georadiusbymember.md
+++ b/commands/georadiusbymember.md
@@ -5,7 +5,7 @@ The position of the specified member is used as the center of the query.
Please check the example below and the `GEORADIUS` documentation for more
information about the command and its options.

-Note that `GEORADIUSBYMEMBER_RO` is also available since Redis 3.2.10 and Redis 4.0.0 in order to provide a read-only command that can be used in slaves. See the `GEORADIUS` page for more information.
+Note that `GEORADIUSBYMEMBER_RO` is also available since Redis 3.2.10 and Redis 4.0.0 in order to provide a read-only command that can be used in replicas. See the `GEORADIUS` page for more information.
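As a sketch, the read-only variant is invoked exactly like the original command, just without the store options (the `Sicily` key and `Agrigento` member follow the usual geo example data):

    > GEORADIUSBYMEMBER_RO Sicily Agrigento 100 km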
@examples diff --git a/commands/info.md b/commands/info.md index 4d6ff88528..421c588b26 100644 --- a/commands/info.md +++ b/commands/info.md @@ -8,7 +8,7 @@ The optional parameter can be used to select a specific section of information: * `memory`: Memory consumption related information * `persistence`: RDB and AOF related information * `stats`: General statistics -* `replication`: Master/slave replication information +* `replication`: Master/replica replication information * `cpu`: CPU consumption statistics * `commandstats`: Redis command statistics * `cluster`: Redis Cluster section @@ -68,7 +68,7 @@ Here is the meaning of all fields in the **server** section: Here is the meaning of all fields in the **clients** section: * `connected_clients`: Number of client connections (excluding connections - from slaves) + from replicas) * `client_longest_output_list`: longest output list among current client connections * `client_biggest_input_buf`: biggest input buffer among current client @@ -199,7 +199,7 @@ Here is the meaning of all fields in the **stats** section: * `instantaneous_output_kbps`: The network's write rate per second in KB/sec * `rejected_connections`: Number of connections rejected because of `maxclients` limit -* `sync_full`: The number of full resyncs with slaves +* `sync_full`: The number of full resyncs with replicas * `sync_partial_ok`: The number of accepted partial resync requests * `sync_partial_err`: The number of denied partial resync requests * `expired_keys`: Total number of key expiration events @@ -213,7 +213,7 @@ Here is the meaning of all fields in the **stats** section: * `latest_fork_usec`: Duration of the latest fork operation in microseconds * `migrate_cached_sockets`: The number of sockets open for `MIGRATE` purposes * `slave_expires_tracked_keys`: The number of keys tracked for expiry purposes - (applicable only to writable slaves) + (applicable only to writable replicas) * `active_defrag_hits`: Number of value reallocations performed by active the defragmentation process * `active_defrag_misses`: Number of aborted value reallocations started by the @@ -224,12 +224,10 @@ Here is the meaning of all fields in the **stats** section: Here is the meaning of all fields in the **replication** section: -* `role`: Value is "master" if the instance is slave of no one, or "slave" if - the instance is enslaved to master. - Note that a slave can be master of another slave (daisy chaining). -* `master_replid`: The replication ID of the Redis server, if it is a master -* `master_replid2`: The repliation ID of the Redis server's master, if it is - enslaved +* `role`: Value is "master" if the instance is replica of no one, or "slave" if the instance is a replica of some master instance. + Note that a replica can be master of another replica (chained replication). +* `master_replid`: The replication ID of the Redis server. +* `master_replid2`: The secondary replication ID, used for PSYNC after a failover. 
* `master_repl_offset`: The server's current replication offset * `second_repl_offset`: The offset up to which replication IDs are accepted * `repl_backlog_active`: Flag indicating replication backlog is active @@ -239,17 +237,17 @@ Here is the meaning of all fields in the **replication** section: * `repl_backlog_histlen`: Size in bytes of the data in the replication backlog buffer -If the instance is a slave, these additional fields are provided: +If the instance is a replica, these additional fields are provided: * `master_host`: Host or IP address of the master * `master_port`: Master listening TCP port * `master_link_status`: Status of the link (up/down) * `master_last_io_seconds_ago`: Number of seconds since the last interaction with master -* `master_sync_in_progress`: Indicate the master is syncing to the slave -* `slave_repl_offset`: The replication offset of the slave instance +* `master_sync_in_progress`: Indicate the master is syncing to the replica +* `slave_repl_offset`: The replication offset of the replica instance * `slave_priority`: The priority of the instance as a candidate for failover -* `slave_read_only`: Flag indicating if the slave is read-only +* `slave_read_only`: Flag indicating if the replica is read-only If a SYNC operation is on-going, these additional fields are provided: @@ -257,20 +255,19 @@ If a SYNC operation is on-going, these additional fields are provided: * `master_sync_last_io_seconds_ago`: Number of seconds since last transfer I/O during a SYNC operation -If the link between master and slave is down, an additional field is provided: +If the link between master and replica is down, an additional field is provided: * `master_link_down_since_seconds`: Number of seconds since the link is down The following field is always provided: -* `connected_slaves`: Number of connected slaves +* `connected_slaves`: Number of connected replicas -If the server is configured with the `min-slaves-to-write` directive, an -additional field is provided: +If the server is configured with the `min-slaves-to-write` (or starting with Redis 5 with the `min-replicas-to-write`) directive, an additional field is provided: -* `min_slaves_good_slaves`: Number of slaves currently considered good +* `min_slaves_good_slaves`: Number of replicas currently considered good -For each slave, the following line is added: +For each replica, the following line is added: * `slaveXXX`: id, IP address, port, state, offset, lag @@ -302,3 +299,5 @@ For each database, the following line is added: * `dbXXX`: `keys=XXX,expires=XXX` [hcgcpgp]: http://code.google.com/p/google-perftools/ + +**A note about the word slave used in this man page**: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated. diff --git a/commands/memory-stats.md b/commands/memory-stats.md index 17d417a8bf..a2db9347a7 100644 --- a/commands/memory-stats.md +++ b/commands/memory-stats.md @@ -12,7 +12,7 @@ values. 
The following metrics are reported:
  in bytes (see `INFO`'s `used_memory_startup`)
* `replication.backlog`: Size in bytes of the replication backlog (see
  `INFO`'s `repl_backlog_size`)
-* `clients.slaves`: The total size in bytes of all slaves overheads (output
+* `clients.slaves`: The total size in bytes of all replicas overheads (output
  and query buffers, connection contexts)
* `clients.normal`: The total size in bytes of all clients overheads (output
  and query buffers, connection contexts)
@@ -41,3 +41,5 @@ values. The following metrics are reported:
@return

@array-reply: nested list of memory usage metrics and their values
+
+**A note about the word slave used in this man page**: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated.
diff --git a/commands/readonly.md b/commands/readonly.md
index 697010af76..bc73b9b980 100644
--- a/commands/readonly.md
+++ b/commands/readonly.md
@@ -1,18 +1,18 @@
-Enables read queries for a connection to a Redis Cluster slave node.
+Enables read queries for a connection to a Redis Cluster replica node.

-Normally slave nodes will redirect clients to the authoritative master for
-the hash slot involved in a given command, however clients can use slaves
+Normally replica nodes will redirect clients to the authoritative master for
+the hash slot involved in a given command, however clients can use replicas
in order to scale reads using the `READONLY` command.

-`READONLY` tells a Redis Cluster slave node that the client is willing to
+`READONLY` tells a Redis Cluster replica node that the client is willing to
read possibly stale data and is not interested in running write queries.

When the connection is in readonly mode, the cluster will send a redirection
-to the client only if the operation involves keys not served by the slave's
+to the client only if the operation involves keys not served by the replica's
master node. This may happen because:

-1. The client sent a command about hash slots never served by the master of this slave.
-2. The cluster was reconfigured (for example resharded) and the slave is no longer able to serve commands for a given hash slot.
+1. The client sent a command about hash slots never served by the master of this replica.
+2. The cluster was reconfigured (for example resharded) and the replica is no longer able to serve commands for a given hash slot.

@return

diff --git a/commands/role.md b/commands/role.md
index 6d261328b5..6353dd5928 100644
--- a/commands/role.md
+++ b/commands/role.md
@@ -29,12 +29,12 @@ An example of output when `ROLE` is called in a master instance:

The master output is composed of the following parts:

1. The string `master`.
-2. The current master replication offset, which is an offset that masters and slaves share to understand, in partial resynchronizations, the part of the replication stream the slave needs to fetch to continue.
-3. An array composed of three elements array representing the connected slaves. Every sub-array contains the slave IP, port, and the last acknowledged replication offset.
+2. The current master replication offset, which is an offset that masters and replicas share to understand, in partial resynchronizations, the part of the replication stream the replica needs to fetch to continue.
+3. An array of three-element arrays representing the connected replicas.
Every sub-array contains the replica IP, port, and the last acknowledged replication offset.

-## Slave output
+## Output of the command on replicas

-An example of output when `ROLE` is called in a slave instance:
+An example of output when `ROLE` is called in a replica instance:

```
1) "slave"
@@ -44,13 +44,13 @@ An example of output when `ROLE` is called in a slave instance:
5) (integer) 3167038
```

-The slave output is composed of the following parts:
+The replica output is composed of the following parts:

-1. The string `slave`.
+1. The string `slave`, because of backward compatibility (see note at the end of this page).
2. The IP of the master.
3. The port number of the master.
-4. The state of the replication from the point of view of the master, that can be `connect` (the instance needs to connect to its master), `connecting` (the slave-master connection is in progress), `sync` (the master and slave are trying to perform the synchronization), `connected` (the slave is online).
-5. The amount of data received from the slave so far in terms of master replication offset.
+4. The state of the replication from the point of view of the master, that can be `connect` (the instance needs to connect to its master), `connecting` (the master-replica connection is in progress), `sync` (the master and replica are trying to perform the synchronization), `connected` (the replica is online).
+5. The amount of data received by the replica so far, in terms of master replication offset.

## Sentinel output

@@ -82,3 +82,5 @@ The sentinel output is composed of the following parts:
```cli
ROLE
```
+
+**A note about the word slave used in this man page**: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated.
diff --git a/commands/shutdown.md b/commands/shutdown.md
index a05b414f65..9fd1ea4478 100644
--- a/commands/shutdown.md
+++ b/commands/shutdown.md
@@ -40,7 +40,7 @@ unsafe to do so, and the **SHUTDOWN** command will be refused with an error
instead. This happens when:

* The user just turned on AOF, and the server triggered the first AOF rewrite in order to create the initial AOF file. In this context, stopping will result in losing the dataset at all: once restarted, the server will potentially have AOF enabled without having any AOF file at all.
-* A slave with AOF enabled, reconnected with its master, performed a full resynchronization, and restarted the AOF file, triggering the initial AOF creation process. In this case not completing the AOF rewrite is dangerous because the latest dataset received from the master would be lost. The new master can actually be even a different instance (if the **SLAVEOF** command was used in order to reconfigure the slave), so it is important to finish the AOF rewrite and start with the correct data set representing the data set in memory when the server was terminated.
+* A replica with AOF enabled, reconnected with its master, performed a full resynchronization, and restarted the AOF file, triggering the initial AOF creation process. In this case not completing the AOF rewrite is dangerous because the latest dataset received from the master would be lost.
The new master can actually even be a different instance (if the **REPLICAOF** or **SLAVEOF** command was used in order to reconfigure the replica), so it is important to finish the AOF rewrite and start with the correct data set representing the data set in memory when the server was terminated.

There are conditions when we want just to terminate a Redis instance ASAP, regardless of what its content is. In such a case, the right combination of commands is to send a **CONFIG appendonly no** followed by a **SHUTDOWN NOSAVE**. The first command will turn off the AOF if needed, and will terminate the AOF rewriting child if there is one active. The second command will not have any problem to execute since the AOF is no longer enabled.
diff --git a/commands/wait.md b/commands/wait.md
index 65722c26fa..d3636ae0b4 100644
--- a/commands/wait.md
+++ b/commands/wait.md
@@ -1,47 +1,45 @@
This command blocks the current client until all the previous write commands
are successfully transferred and acknowledged by at least the specified number
-of slaves. If the timeout, specified in milliseconds, is reached, the command
-returns even if the specified number of slaves were not yet reached.
+of replicas. If the timeout, specified in milliseconds, is reached, the command
+returns even if the specified number of replicas were not yet reached.

-The command **will always return** the number of slaves that acknowledged
+The command **will always return** the number of replicas that acknowledged
the write commands sent before the `WAIT` command, both in the case where
-the specified number of slaves are reached, or when the timeout is reached.
+the specified number of replicas are reached, or when the timeout is reached.

A few remarks:

-1. When `WAIT` returns, all the previous write commands sent in the context of the current connection are guaranteed to be received by the number of slaves returned by `WAIT`.
-2. If the command is sent as part of a `MULTI` transaction, the command does not block but instead just return ASAP the number of slaves that acknowledged the previous write commands.
+1. When `WAIT` returns, all the previous write commands sent in the context of the current connection are guaranteed to be received by the number of replicas returned by `WAIT`.
+2. If the command is sent as part of a `MULTI` transaction, the command does not block but instead just returns ASAP the number of replicas that acknowledged the previous write commands.
3. A timeout of 0 means to block forever.
-4. Since `WAIT` returns the number of slaves reached both in case of failure and success, the client should check that the returned value is equal or greater to the replication level it demanded.
+4. Since `WAIT` returns the number of replicas reached both in case of failure and success, the client should check that the returned value is equal to or greater than the replication level it demanded.

Consistency and WAIT
---

Note that `WAIT` does not make Redis a strongly consistent store: while synchronous replication is part of a replicated state machine, it is not the only thing needed. However in the context of Sentinel or Redis Cluster failover, `WAIT` improves the real world data safety.

-Specifically if a given write is transferred to one or more slaves, it is more likely (but not guaranteed) that if the master fails, we'll be able to promote, during a failover, a slave that received the write: both Sentinel and Redis Cluster will do a best-effort attempt to promote the best slave among the set of available slaves.
+Specifically if a given write is transferred to one or more replicas, it is more likely (but not guaranteed) that if the master fails, we'll be able to promote, during a failover, a replica that received the write: both Sentinel and Redis Cluster will do a best-effort attempt to promote the best replica among the set of available replicas. -However this is just a best-effort attempt so it is possible to still lose a write synchronously replicated to multiple slaves. +However this is just a best-effort attempt so it is possible to still lose a write synchronously replicated to multiple replicas. Implementation details --- -Since the introduction of partial resynchronization with slaves (PSYNC feature) -Redis slaves asynchronously ping their master with the offset they already -processed in the replication stream. This is used in multiple ways: +Since the introduction of partial resynchronization with replicas (PSYNC feature) Redis replicas asynchronously ping their master with the offset they already processed in the replication stream. This is used in multiple ways: -1. Detect timed out slaves. +1. Detect timed out replicas. 2. Perform a partial resynchronization after a disconnection. 3. Implement `WAIT`. In the specific case of the implementation of `WAIT`, Redis remembers, for each client, the replication offset of the produced replication stream when a given write command was executed in the context of a given client. When `WAIT` is -called Redis checks if the specified number of slaves already acknowledged +called Redis checks if the specified number of replicas already acknowledged this offset or a greater one. @return -@integer-reply: The command returns the number of slaves reached by all the writes performed in the context of the current connection. +@integer-reply: The command returns the number of replicas reached by all the writes performed in the context of the current connection. @examples @@ -54,4 +52,4 @@ OK (integer) 1 ``` -In the following example the first call to `WAIT` does not use a timeout and asks for the write to reach 1 slave. It returns with success. In the second attempt instead we put a timeout, and ask for the replication of the write to two slaves. Since there is a single slave available, after one second `WAIT` unblocks and returns 1, the number of slaves reached. +In the following example the first call to `WAIT` does not use a timeout and asks for the write to reach 1 replica. It returns with success. In the second attempt instead we put a timeout, and ask for the replication of the write to two replicas. Since there is a single replica available, after one second `WAIT` unblocks and returns 1, the number of replicas reached. From e44d82bf0c3106ffacd0136479864697d53ddf9c Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 14 Sep 2018 16:41:13 +0200 Subject: [PATCH 0033/1457] Update Stream doc with more Kafka VS Redis consumer groups explanation. 
See this thread for more info:
https://groups.google.com/d/topic/redis-db/td-aPJKycH0/discussion
---
 topics/streams-intro.md | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/topics/streams-intro.md b/topics/streams-intro.md
index ab694d72ec..61d0fd5b7f 100644
--- a/topics/streams-intro.md
+++ b/topics/streams-intro.md
@@ -556,6 +556,15 @@ Similarly, if a given consumer is much faster at processing messages than the ot

However, this also means that in Redis if you really want to partition messages about the same stream into multiple Redis instances, you have to use multiple keys and some sharding system such as Redis Cluster or some other application-specific sharding system. A single Redis stream is not automatically partitioned to multiple instances.

+We could say that schematically the following is true:
+
+* If you use 1 stream -> 1 consumer, you are processing messages in order.
+* If you use N streams with N consumers, so that only a given consumer hits a subset of the N streams, you can scale the above model of 1 stream -> 1 consumer.
+* If you use 1 stream -> N consumers, you are load balancing to N consumers, however in that case, messages about the same logical item may be consumed out of order, because a given consumer may process message 3 faster than another consumer is processing message 4.
+
+So basically Kafka partitions are more similar to using N different Redis keys,
+while Redis consumer groups are a server-side load balancing system that delivers messages from a given stream to N different consumers.
+
## Capped Streams

Many applications do not want to collect data into a stream forever. Sometimes it is useful to have at maximum a given number of items inside a stream, other times once a given size is reached, it is useful to move data from Redis to a storage which is not in memory and not as fast but suited to take the history for potentially decades to come. Redis streams have some support for this. One is the **MAXLEN** option of the **XADD** command. Such option is very simple to use:

From 806488a13e5c163ef851eef13e7c9e6c3b92bf27 Mon Sep 17 00:00:00 2001
From: antirez
Date: Tue, 25 Sep 2018 17:05:12 +0200
Subject: [PATCH 0034/1457] Modules API reference updated.

---
 topics/modules-api-ref.md | 548 ++++++++++++++++++++++++++++++++++++--
 1 file changed, 531 insertions(+), 17 deletions(-)

diff --git a/topics/modules-api-ref.md b/topics/modules-api-ref.md
index c99a80da7d..8f3b8e203c 100644
--- a/topics/modules-api-ref.md
+++ b/topics/modules-api-ref.md
@@ -114,7 +114,7 @@ The command function type is the following:

And is supposed to always return `REDISMODULE_OK`.

The set of flags 'strflags' specify the behavior of the command, and should
-be passed as a C string compoesd of space separated words, like for
+be passed as a C string composed of space separated words, like for
example "write deny-oom". The set of flags are:

* **"write"**: The command may modify the data set (it may also read
@@ -135,7 +135,7 @@ example "write deny-oom". The set of flags are:
* **"allow-stale"**: The command is allowed to run on slaves that don't
  serve stale data. Don't use if you don't know what this
  means.
-* **"no-monitor"**: Don't propoagate the command on monitor. Use this if
+* **"no-monitor"**: Don't propagate the command on monitor. Use this if
  the command has sensible data among the arguments.
* **"fast"**: The command time complexity is not greater
  than O(log(N)) where N is the size of the collection or
@@ -159,6 +159,13 @@ Called by `RM_Init()` to setup the `ctx->module` structure.
This is an internal function, Redis modules developers don't need to use it. +## `RedisModule_IsModuleNameBusy` + + int RedisModule_IsModuleNameBusy(const char *name); + +Return non-zero if the module name is busy. +Otherwise zero is returned. + ## `RedisModule_Milliseconds` long long RedisModule_Milliseconds(void); @@ -184,6 +191,11 @@ with `RedisModule_FreeString()`, unless automatic memory is enabled. The string is created by copying the `len` bytes starting at `ptr`. No reference is retained to the passed buffer. +The module context 'ctx' is optional and may be NULL if you want to create +a string out of the context scope. However in that case, the automatic +memory management will not be available, and the string memory must be +managed manually. + ## `RedisModule_CreateStringPrintf` RedisModuleString *RedisModule_CreateStringPrintf(RedisModuleCtx *ctx, const char *fmt, ...); @@ -194,6 +206,9 @@ automatic memory is enabled. The string is created using the sds formatter function sdscatvprintf(). +The passed context 'ctx' may be NULL if necessary, see the +`RedisModule_CreateString()` documentation for more info. + ## `RedisModule_CreateStringFromLongLong` RedisModuleString *RedisModule_CreateStringFromLongLong(RedisModuleCtx *ctx, long long ll); @@ -204,6 +219,9 @@ integer instead of taking a buffer and its length. The returned string must be released with `RedisModule_FreeString()` or by enabling automatic memory management. +The passed context 'ctx' may be NULL if necessary, see the +`RedisModule_CreateString()` documentation for more info. + ## `RedisModule_CreateStringFromString` RedisModuleString *RedisModule_CreateStringFromString(RedisModuleCtx *ctx, const RedisModuleString *str); @@ -214,6 +232,9 @@ RedisModuleString. The returned string must be released with `RedisModule_FreeString()` or by enabling automatic memory management. +The passed context 'ctx' may be NULL if necessary, see the +`RedisModule_CreateString()` documentation for more info. + ## `RedisModule_FreeString` void RedisModule_FreeString(RedisModuleCtx *ctx, RedisModuleString *str); @@ -225,6 +246,12 @@ It is possible to call this function even when automatic memory management is enabled. In that case the string will be released ASAP and removed from the pool of string to release at the end. +If the string was created with a NULL context 'ctx', it is also possible to +pass ctx as NULL when releasing the string (but passing a context will not +create any issue). Strings created with a context should be freed also passing +the context, so if you want to free a string out of context later, make sure +to create it using a NULL context. + ## `RedisModule_RetainString` void RedisModule_RetainString(RedisModuleCtx *ctx, RedisModuleString *str); @@ -252,6 +279,8 @@ any call to RetainString() since creating a string will always result into a string that lives after the callback function returns, if no FreeString() call is performed. +It is possible to call this function with a NULL context. + ## `RedisModule_StringPtrLen` const char *RedisModule_StringPtrLen(const RedisModuleString *str, size_t *len); @@ -289,9 +318,9 @@ binary blobs without any encoding care / collation attempt. int RedisModule_StringAppendBuffer(RedisModuleCtx *ctx, RedisModuleString *str, const char *buf, size_t len); -Append the specified buffere to the string 'str'. The string must be a +Append the specified buffer to the string 'str'. 
The string must be a string created by the user that is referenced only a single time, otherwise -`REDISMODULE_ERR` is returend and the operation is not performed. +`REDISMODULE_ERR` is returned and the operation is not performed. ## `RedisModule_WrongArity` @@ -382,7 +411,7 @@ could write: Note that in the above example there is no reason to postpone the array length, since we produce a fixed number of elements, but in the practice -the code may use an interator or other ways of creating the output so +the code may use an iterator or other ways of creating the output so that is not easy to calculate in advance the number of elements. ## `RedisModule_ReplyWithStringBuffer` @@ -494,6 +523,43 @@ to fetch the ID in the context the function was currently called. Return the currently selected DB. +## `RedisModule_GetContextFlags` + + int RedisModule_GetContextFlags(RedisModuleCtx *ctx); + +Return the current context's flags. The flags provide information on the +current request context (whether the client is a Lua script or in a MULTI), +and about the Redis instance in general, i.e replication and persistence. + +The available flags are: + + * REDISMODULE_CTX_FLAGS_LUA: The command is running in a Lua script + + * REDISMODULE_CTX_FLAGS_MULTI: The command is running inside a transaction + + * REDISMODULE_CTX_FLAGS_MASTER: The Redis instance is a master + + * REDISMODULE_CTX_FLAGS_SLAVE: The Redis instance is a slave + + * REDISMODULE_CTX_FLAGS_READONLY: The Redis instance is read-only + + * REDISMODULE_CTX_FLAGS_CLUSTER: The Redis instance is in cluster mode + + * REDISMODULE_CTX_FLAGS_AOF: The Redis instance has AOF enabled + + * REDISMODULE_CTX_FLAGS_RDB: The instance has RDB enabled + + * REDISMODULE_CTX_FLAGS_MAXMEMORY: The instance has Maxmemory set + + * REDISMODULE_CTX_FLAGS_EVICT: Maxmemory is set and has an eviction + policy that may delete keys + + * REDISMODULE_CTX_FLAGS_OOM: Redis is out of memory according to the + maxmemory setting. + + * REDISMODULE_CTX_FLAGS_OOM_WARNING: Less than 25% of memory remains before + reaching the maxmemory level. + ## `RedisModule_SelectDb` int RedisModule_SelectDb(RedisModuleCtx *ctx, int newid); @@ -517,7 +583,7 @@ Return an handle representing a Redis key, so that it is possible to call other APIs with the key handle as argument to perform operations on the key. -The return value is the handle repesenting the key, that must be +The return value is the handle representing the key, that must be closed with `RM_CloseKey()`. If the key does not exist and WRITE mode is requested, the handle @@ -560,6 +626,16 @@ accept new writes as an empty key (that will be created on demand). On success `REDISMODULE_OK` is returned. If the key is not open for writing `REDISMODULE_ERR` is returned. +## `RedisModule_UnlinkKey` + + int RedisModule_UnlinkKey(RedisModuleKey *key); + +If the key is open for writing, unlink it (that is delete it in a +non-blocking way, not reclaiming memory immediately) and setup the key to +accept new writes as an empty key (that will be created on demand). +On success `REDISMODULE_OK` is returned. If the key is not open for +writing `REDISMODULE_ERR` is returned. + ## `RedisModule_GetExpire` mstime_t RedisModule_GetExpire(RedisModuleKey *key); @@ -645,7 +721,7 @@ unless the new length value requested is zero. int RedisModule_ListPush(RedisModuleKey *key, int where, RedisModuleString *ele); -Push an element into a list, on head or tail depending on 'where' argumnet. +Push an element into a list, on head or tail depending on 'where' argument. 
If the key pointer is about an empty key opened for writing, the key is created. On error (key opened for read-only operations or of the wrong type) `REDISMODULE_ERR` is returned, otherwise `REDISMODULE_OK` is returned. @@ -719,7 +795,7 @@ zero. The input and output flags, and the return value, have the same exact meaning, with the only difference that this function will return `REDISMODULE_ERR` even when 'score' is a valid double number, but adding it -to the existing score resuts into a NaN (not a number) condition. +to the existing score results into a NaN (not a number) condition. This function has an additional field 'newscore', if not NULL is filled with the new score of the element after the increment, if no error @@ -857,7 +933,9 @@ hash value, in order to set the specified field. The function is variadic and the user must specify pairs of field names and values, both as RedisModuleString pointers (unless the -CFIELD option is set, see later). +CFIELD option is set, see later). At the end of the field/value-ptr pairs, +NULL must be specified as last argument to signal the end of the arguments +in the variadic function. Example to set the hash argv[1] to the value argv[2]: @@ -1099,7 +1177,7 @@ writing or there is an active iterator, `REDISMODULE_ERR` is returned. moduleType *RedisModule_ModuleTypeGetType(RedisModuleKey *key); Assuming `RedisModule_KeyType()` returned `REDISMODULE_KEYTYPE_MODULE` on -the key, returns the moduel type pointer of the value stored at key. +the key, returns the module type pointer of the value stored at key. If the key is NULL, is not associated with a module type, or is empty, then NULL is returned instead. @@ -1269,7 +1347,7 @@ that gets converted into a string before adding it to the digest. void RedisModule_DigestEndSequence(RedisModuleDigest *md); -See the doucmnetation for ``RedisModule_DigestAddElement()``. +See the documentation for ``RedisModule_DigestAddElement()``. ## `RedisModule_EmitAOF` @@ -1305,7 +1383,7 @@ level to use when emitting the log, and must be one of the following: If the specified log level is invalid, verbose is used by default. There is a fixed limit to the length of the log line this function is able -to emit, this limti is not specified but is guaranteed to be more than +to emit, this limit is not specified but is guaranteed to be more than a few lines of text. ## `RedisModule_LogIOError` @@ -1320,10 +1398,10 @@ critical reason. ## `RedisModule_BlockClient` - RedisModuleBlockedClient *RedisModule_BlockClient(RedisModuleCtx *ctx, RedisModuleCmdFunc reply_callback, RedisModuleCmdFunc timeout_callback, void (*free_privdata)(void*), long long timeout_ms); + RedisModuleBlockedClient *RedisModule_BlockClient(RedisModuleCtx *ctx, RedisModuleCmdFunc reply_callback, RedisModuleCmdFunc timeout_callback, void (*free_privdata)(RedisModuleCtx*,void*), long long timeout_ms); Block a client in the context of a blocking command, returning an handle -which will be used, later, in order to block the client with a call to +which will be used, later, in order to unblock the client with a call to `RedisModule_UnblockClient()`. The arguments specify callback functions and a timeout after which the client is unblocked. @@ -1335,7 +1413,7 @@ The callbacks are called in the following contexts: reply_timeout: called when the timeout is reached in order to send an error to the client. 
- free_privdata: called in order to free the privata data that is passed
+ free_privdata: called in order to free the private data that is passed
   by RedisModule_UnblockClient() call.

 ## `RedisModule_UnblockClient`

@@ -1359,7 +1437,27 @@ Note: this function can be called from threads spawned by the module.

 ## `RedisModule_AbortBlock`

     int RedisModule_AbortBlock(RedisModuleBlockedClient *bc);

 Abort a blocked client blocking operation: the client will be unblocked
-without firing the reply callback.
+without firing any callback.
+
+## `RedisModule_SetDisconnectCallback`
+
+    void RedisModule_SetDisconnectCallback(RedisModuleBlockedClient *bc, RedisModuleDisconnectFunc callback);
+
+Set a callback that will be called if a blocked client disconnects
+before the module has a chance to call `RedisModule_UnblockClient()`.
+
+Usually what you want to do there is to clean up your module state
+so that you can call `RedisModule_UnblockClient()` safely, otherwise
+the client will remain blocked forever if the timeout is large.
+
+Notes:
+
+1. It is not safe to call Reply* family functions here, it is also
+   useless since the client is gone.
+
+2. This callback is not called if the client disconnects because of
+   a timeout. In such a case, the client is unblocked automatically
+   and the timeout callback is called.

 ## `RedisModule_IsBlockedReplyRequest`

@@ -1379,7 +1477,24 @@ reply for a blocked client that timed out.

     void *RedisModule_GetBlockedClientPrivateData(RedisModuleCtx *ctx);

-Get the privata data set by `RedisModule_UnblockClient()`
+Get the private data set by `RedisModule_UnblockClient()`
+
+## `RedisModule_GetBlockedClientHandle`
+
+    RedisModuleBlockedClient *RedisModule_GetBlockedClientHandle(RedisModuleCtx *ctx);
+
+Get the blocked client associated with a given context.
+This is useful in the reply and timeout callbacks of blocked clients,
+since sometimes the module does not have the blocked client handle
+around, and needs it in order to clean it up.
+
+## `RedisModule_BlockedClientDisconnected`
+
+    int RedisModule_BlockedClientDisconnected(RedisModuleCtx *ctx);
+
+Return true if, when the free callback of a blocked client is called,
+the reason for the client being unblocked is that it disconnected
+while it was blocked.

 ## `RedisModule_GetThreadSafeContext`

@@ -1425,3 +1540,402 @@ a blocked client connected to the thread safe context.

 Release the server lock after a thread safe API call was executed.
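+
+A minimal sketch of how the blocking and thread-safe-context calls are
+typically combined (the worker function, the INCR key and the NULL private
+data are illustrative assumptions, not part of the reference):
+
+    /* Runs in a thread spawned by the module; 'arg' is the handle that
+     * RedisModule_BlockClient() returned in the command callback. */
+    void *workerThread(void *arg) {
+        RedisModuleBlockedClient *bc = arg;
+        RedisModuleCtx *ctx = RedisModule_GetThreadSafeContext(bc);
+
+        RedisModule_ThreadSafeContextLock(ctx);
+        /* While the lock is held it is safe to access the keyspace. */
+        RedisModuleCallReply *reply = RedisModule_Call(ctx,"INCR","c","hits");
+        RedisModule_ThreadSafeContextUnlock(ctx);
+
+        if (reply) RedisModule_FreeCallReply(reply);
+        RedisModule_FreeThreadSafeContext(ctx);
+        RedisModule_UnblockClient(bc,NULL); /* Fires the reply callback. */
+        return NULL;
+    }
+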
+## `RedisModule_SubscribeToKeyspaceEvents`
+
+    int RedisModule_SubscribeToKeyspaceEvents(RedisModuleCtx *ctx, int types, RedisModuleNotificationFunc callback);
+
+Subscribe to keyspace notifications. This is a low-level version of the
+keyspace-notifications API. A module can register callbacks to be notified
+when keyspace events occur.
+
+Notification events are filtered by their type (string events, set events,
+etc), and the subscriber callback receives only events that match a specific
+mask of event types.
+
+When subscribing to notifications with `RedisModule_SubscribeToKeyspaceEvents`
+the module must provide an event type-mask, denoting the events the subscriber
+is interested in. This can be an ORed mask of any of the following flags:
+
+ - REDISMODULE_NOTIFY_GENERIC: Generic commands like DEL, EXPIRE, RENAME
+ - REDISMODULE_NOTIFY_STRING: String events
+ - REDISMODULE_NOTIFY_LIST: List events
+ - REDISMODULE_NOTIFY_SET: Set events
+ - REDISMODULE_NOTIFY_HASH: Hash events
+ - REDISMODULE_NOTIFY_ZSET: Sorted Set events
+ - REDISMODULE_NOTIFY_EXPIRED: Expiration events
+ - REDISMODULE_NOTIFY_EVICTED: Eviction events
+ - REDISMODULE_NOTIFY_STREAM: Stream events
+ - REDISMODULE_NOTIFY_ALL: All events
+
+We do not distinguish between key events and keyspace events, and it is up
+to the module to filter the actions taken based on the key.
+
+The subscriber signature is:
+
+    int (*RedisModuleNotificationFunc) (RedisModuleCtx *ctx, int type,
+                                        const char *event,
+                                        RedisModuleString *key);
+
+`type` is the event type bit, that must match the mask given at registration
+time. The event string is the actual command being executed, and key is the
+relevant Redis key.
+
+The notification callback gets executed with a Redis context that can not be
+used to send anything to the client, and has the db number where the event
+occurred as its selected db number.
+
+Notice that it is not necessary to enable notifications in redis.conf for
+module notifications to work.
+
+Warning: the notification callbacks are performed in a synchronous manner,
+so notification callbacks must be fast, or they will slow Redis down.
+If you need to take long actions, use threads to offload them.
+
+See https://redis.io/topics/notifications for more information.
+
+## `RedisModule_RegisterClusterMessageReceiver`
+
+    void RedisModule_RegisterClusterMessageReceiver(RedisModuleCtx *ctx, uint8_t type, RedisModuleClusterMessageReceiver callback);
+
+Register a callback receiver for cluster messages of type 'type'. If there
+was already a registered callback, this will replace the callback function
+with the one provided, otherwise if the callback is set to NULL and there
+is already a callback for this message type, the callback is unregistered
+(so this API call is also used in order to delete the receiver).
+
+## `RedisModule_SendClusterMessage`
+
+    int RedisModule_SendClusterMessage(RedisModuleCtx *ctx, char *target_id, uint8_t type, unsigned char *msg, uint32_t len);
+
+Send a message to all the nodes in the cluster if `target` is NULL, otherwise
+at the specified target, which is a `REDISMODULE_NODE_ID_LEN` bytes node ID, as
+returned by the receiver callback or by the nodes iteration functions.
+
+The function returns `REDISMODULE_OK` if the message was successfully sent,
+otherwise if the node is not connected or such node ID does not map to any
+known cluster node, `REDISMODULE_ERR` is returned.
+
+## `RedisModule_GetClusterNodesList`
+
+    char **RedisModule_GetClusterNodesList(RedisModuleCtx *ctx, size_t *numnodes);
+
+Return an array of string pointers, each string pointer points to a cluster
+node ID of exactly `REDISMODULE_NODE_ID_SIZE` bytes (without any null term).
+The number of returned node IDs is stored into `*numnodes`.
+However if this function is called by a module not running on a Redis
+instance with Redis Cluster enabled, NULL is returned instead.
+
+The IDs returned can be used with `RedisModule_GetClusterNodeInfo()` in order
+to get more information about single nodes.
+
+The array returned by this function must be freed using the function
+`RedisModule_FreeClusterNodesList()`.
+
+Example:
+
+    size_t count, j;
+    char **ids = RedisModule_GetClusterNodesList(ctx,&count);
+    for (j = 0; j < count; j++) {
+        RedisModule_Log(ctx,"notice","Node %.*s",
+            REDISMODULE_NODE_ID_LEN,ids[j]);
+    }
+    RedisModule_FreeClusterNodesList(ids);
+
+## `RedisModule_FreeClusterNodesList`
+
+    void RedisModule_FreeClusterNodesList(char **ids);
+
+Free the node list obtained with `RedisModule_GetClusterNodesList`.
+
+## `RedisModule_GetMyClusterID`
+
+    const char *RedisModule_GetMyClusterID(void);
+
+Return this node ID (`REDISMODULE_CLUSTER_ID_LEN` bytes) or NULL if the cluster
+is disabled.
+
+## `RedisModule_GetClusterSize`
+
+    size_t RedisModule_GetClusterSize(void);
+
+Return the number of nodes in the cluster, regardless of their state
+(handshake, noaddress, ...) so that the number of active nodes may actually
+be smaller, but not greater than this number. If the instance is not in
+cluster mode, zero is returned.
+
+## `RedisModule_SetClusterFlags`
+
+    void RedisModule_SetClusterFlags(RedisModuleCtx *ctx, uint64_t flags);
+
+Set Redis Cluster flags in order to change the normal behavior of
+Redis Cluster, especially with the goal of disabling certain functions.
+This is useful for modules that use the Cluster API in order to create
+a different distributed system, but still want to use the Redis Cluster
+message bus. Flags that can be set:
+
+    CLUSTER_MODULE_FLAG_NO_FAILOVER
+    CLUSTER_MODULE_FLAG_NO_REDIRECTION
+
+With the following effects:
+
+    NO_FAILOVER: prevent Redis Cluster slaves from failing over a failing
+                 master. Also disables the replica migration feature.
+
+    NO_REDIRECTION: Every node will accept any key, without trying to perform
+                    partitioning according to the user Redis Cluster algorithm.
+                    Slot information will still be propagated across the
+                    cluster, but without effect.
+
+## `RedisModule_CreateTimer`
+
+    RedisModuleTimerID RedisModule_CreateTimer(RedisModuleCtx *ctx, mstime_t period, RedisModuleTimerProc callback, void *data);
+
+Create a new timer that will fire after `period` milliseconds, and will call
+the specified function using `data` as argument. The returned timer ID can be
+used to get information from the timer or to stop it before it fires.
+
+## `RedisModule_StopTimer`
+
+    int RedisModule_StopTimer(RedisModuleCtx *ctx, RedisModuleTimerID id, void **data);
+
+Stop a timer, returns `REDISMODULE_OK` if the timer was found, belonged to the
+calling module, and was stopped, otherwise `REDISMODULE_ERR` is returned.
+If not NULL, the data pointer is set to the value of the data argument when
+the timer was created.
+
+## `RedisModule_GetTimerInfo`
+
+    int RedisModule_GetTimerInfo(RedisModuleCtx *ctx, RedisModuleTimerID id, uint64_t *remaining, void **data);
+
+Obtain information about a timer: its remaining time before firing
+(in milliseconds), and the private data pointer associated with the timer.
+If the timer specified does not exist or belongs to a different module
+no information is returned and the function returns `REDISMODULE_ERR`, otherwise
+`REDISMODULE_OK` is returned. The arguments remaining or data can be NULL if
+the caller does not need certain information.
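+
+A minimal usage sketch for the timer calls above (the handler name, the
+one-second period and the string payload are illustrative; it assumes the
+RedisModuleTimerProc callback receives the context and the data pointer):
+
+    /* Called by Redis when the timer fires. */
+    void timerHandler(RedisModuleCtx *ctx, void *data) {
+        RedisModule_Log(ctx,"notice","Timer fired: %s",(char*)data);
+    }
+
+    /* Inside a command implementation: arm a one-shot timer... */
+    RedisModuleTimerID tid = RedisModule_CreateTimer(ctx,1000,timerHandler,"hi");
+
+    /* ...and optionally disarm it before it fires. */
+    void *data;
+    if (RedisModule_StopTimer(ctx,tid,&data) == REDISMODULE_OK) {
+        /* 'data' is the pointer that was passed at creation time. */
+    }
+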
+## `RedisModule_CreateDict`
+
+    RedisModuleDict *RedisModule_CreateDict(RedisModuleCtx *ctx);
+
+Create a new dictionary. The 'ctx' pointer can be the current module context
+or NULL, depending on what you want. Please follow the following rules:
+
+1. Use a NULL context if you plan to retain a reference to this dictionary
+   that will survive the time of the module callback where you created it.
+2. Use a NULL context if no context is available at the time you are creating
+   the dictionary (of course...).
+3. However use the current callback context as 'ctx' argument if the
+   dictionary time to live is just limited to the callback scope. In this
+   case, if enabled, you can enjoy the automatic memory management that will
+   reclaim the dictionary memory, as well as the strings returned by the
+   Next / Prev dictionary iterator calls.
+
+## `RedisModule_FreeDict`
+
+    void RedisModule_FreeDict(RedisModuleCtx *ctx, RedisModuleDict *d);
+
+Free a dictionary created with `RM_CreateDict()`. You need to pass the
+context pointer 'ctx' only if the dictionary was created using the
+context instead of passing NULL.
+
+## `RedisModule_DictSize`
+
+    uint64_t RedisModule_DictSize(RedisModuleDict *d);
+
+Return the size of the dictionary (number of keys).
+
+## `RedisModule_DictSetC`
+
+    int RedisModule_DictSetC(RedisModuleDict *d, void *key, size_t keylen, void *ptr);
+
+Store the specified key into the dictionary, setting its value to the
+pointer 'ptr'. If the key was added with success, since it did not
+already exist, `REDISMODULE_OK` is returned. Otherwise if the key already
+exists the function returns `REDISMODULE_ERR`.
+
+## `RedisModule_DictReplaceC`
+
+    int RedisModule_DictReplaceC(RedisModuleDict *d, void *key, size_t keylen, void *ptr);
+
+Like `RedisModule_DictSetC()` but will replace the key with the new
+value if the key already exists.
+
+## `RedisModule_DictSet`
+
+    int RedisModule_DictSet(RedisModuleDict *d, RedisModuleString *key, void *ptr);
+
+Like `RedisModule_DictSetC()` but takes the key as a RedisModuleString.
+
+## `RedisModule_DictReplace`
+
+    int RedisModule_DictReplace(RedisModuleDict *d, RedisModuleString *key, void *ptr);
+
+Like `RedisModule_DictReplaceC()` but takes the key as a RedisModuleString.
+
+## `RedisModule_DictGetC`
+
+    void *RedisModule_DictGetC(RedisModuleDict *d, void *key, size_t keylen, int *nokey);
+
+Return the value stored at the specified key. The function returns NULL
+both in the case the key does not exist, or if you actually stored
+NULL at key. So, optionally, if the 'nokey' pointer is not NULL, it will
+be set by reference to 1 if the key does not exist, or to 0 if the key
+exists.
+
+## `RedisModule_DictGet`
+
+    void *RedisModule_DictGet(RedisModuleDict *d, RedisModuleString *key, int *nokey);
+
+Like `RedisModule_DictGetC()` but takes the key as a RedisModuleString.
+
+## `RedisModule_DictDelC`
+
+    int RedisModule_DictDelC(RedisModuleDict *d, void *key, size_t keylen, void *oldval);
+
+Remove the specified key from the dictionary, returning `REDISMODULE_OK` if
+the key was found and deleted, or `REDISMODULE_ERR` if instead there was
+no such key in the dictionary. When the operation is successful, if
+'oldval' is not NULL, then '*oldval' is set to the value stored at the
+key before it was deleted. Using this feature it is possible to get
+a pointer to the value (for instance in order to release it), without
+having to call `RedisModule_DictGet()` before deleting the key.
+
+## `RedisModule_DictDel`
+
+    int RedisModule_DictDel(RedisModuleDict *d, RedisModuleString *key, void *oldval);
+
+Like `RedisModule_DictDelC()` but gets the key as a RedisModuleString.
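+
+Putting the dictionary setters and getters above together, a minimal sketch
+(the key and value strings are illustrative; a NULL context is used so the
+dictionary may outlive the callback, as per the creation rules above):
+
+    RedisModuleDict *d = RedisModule_CreateDict(NULL);
+    RedisModule_DictSetC(d,"banana",6,"yellow");
+
+    int nokey;
+    char *color = RedisModule_DictGetC(d,"banana",6,&nokey);
+    /* Here nokey is 0 and color points to "yellow". */
+
+    RedisModule_DictDelC(d,"banana",6,NULL);
+    RedisModule_FreeDict(NULL,d);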
+
+## `RedisModule_DictIteratorStartC`
+
+    RedisModuleDictIter *RedisModule_DictIteratorStartC(RedisModuleDict *d, const char *op, void *key, size_t keylen);
+
+Return an iterator, set up in order to start iterating from the specified
+key by applying the operator 'op', which is just a string specifying the
+comparison operator to use in order to seek the first element. The
+operators available are:
+
+"^" -- Seek the first (lexicographically smaller) key.
+"$" -- Seek the last (lexicographically bigger) key.
+">" -- Seek the first element greater than the specified key.
+">=" -- Seek the first element greater than or equal to the specified key.
+"<" -- Seek the first element smaller than the specified key.
+"<=" -- Seek the first element smaller than or equal to the specified key.
+"==" -- Seek the first element matching exactly the specified key.
+
+Note that for "^" and "$" the passed key is not used, and the user may
+just pass NULL with a length of 0.
+
+If the element to start the iteration cannot be seeked based on the
+key and operator passed, `RedisModule_DictNext()` / Prev() will just return
+`REDISMODULE_ERR` at the first call, otherwise they'll produce elements.
+
+## `RedisModule_DictIteratorStart`
+
+    RedisModuleDictIter *RedisModule_DictIteratorStart(RedisModuleDict *d, const char *op, RedisModuleString *key);
+
+Exactly like `RedisModule_DictIteratorStartC`, but the key is passed as a
+RedisModuleString.
+
+## `RedisModule_DictIteratorStop`
+
+    void RedisModule_DictIteratorStop(RedisModuleDictIter *di);
+
+Release the iterator created with `RedisModule_DictIteratorStart()`. This call
+is mandatory otherwise a memory leak is introduced in the module.
+
+## `RedisModule_DictIteratorReseekC`
+
+    int RedisModule_DictIteratorReseekC(RedisModuleDictIter *di, const char *op, void *key, size_t keylen);
+
+After its creation with `RedisModule_DictIteratorStart()`, it is possible to
+change the currently selected element of the iterator by using this
+API call. The result based on the operator and key is exactly like
+the function `RedisModule_DictIteratorStart()`, however in this case the
+return value is just `REDISMODULE_OK` in case the seeked element was found,
+or `REDISMODULE_ERR` in case it was not possible to seek the specified
+element. It is possible to reseek an iterator as many times as you want.
+
+## `RedisModule_DictIteratorReseek`
+
+    int RedisModule_DictIteratorReseek(RedisModuleDictIter *di, const char *op, RedisModuleString *key);
+
+Like `RedisModule_DictIteratorReseekC()` but takes the key as a
+RedisModuleString.
+
+## `RedisModule_DictNextC`
+
+    void *RedisModule_DictNextC(RedisModuleDictIter *di, size_t *keylen, void **dataptr);
+
+Return the current item of the dictionary iterator 'di' and step to the
+next element. If the iterator already yielded the last element and there
+are no other elements to return, NULL is returned, otherwise a pointer
+to a string representing the key is provided, and the '*keylen' length
+is set by reference (if keylen is not NULL). The '*dataptr', if not NULL
+is set to the value of the pointer stored at the returned key as auxiliary
+data (as set by the `RedisModule_DictSet` API).
+
+Usage example:
+
+     ... create the iterator here ...
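+     /* A sketch of the elided setup (assuming 'd' is a previously
+      * populated RedisModuleDict): seek the first key and declare the
+      * variables used by the loop below. */
+     RedisModuleDictIter *iter = RedisModule_DictIteratorStartC(d,"^",NULL,0);
+     size_t keylen;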
+     char *key;
+     void *data;
+     while((key = RedisModule_DictNextC(iter,&keylen,&data)) != NULL) {
+         printf("%.*s %p\n", (int)keylen, key, data);
+     }
+
+The returned pointer is of type void because sometimes it makes sense
+to cast it to a char* and sometimes to an unsigned char*, depending on
+whether it contains binary data or not, so this API ends up being more
+comfortable to use.
+
+The validity of the returned pointer is until the next call to the
+next/prev iterator step. Also the pointer is no longer valid once the
+iterator is released.
+
+## `RedisModule_DictPrevC`
+
+    void *RedisModule_DictPrevC(RedisModuleDictIter *di, size_t *keylen, void **dataptr);
+
+This function is exactly like `RedisModule_DictNextC()` but after returning
+the currently selected element in the iterator, it selects the previous
+element (lexicographically smaller) instead of the next one.
+
+## `RedisModule_DictNext`
+
+    RedisModuleString *RedisModule_DictNext(RedisModuleCtx *ctx, RedisModuleDictIter *di, void **dataptr);
+
+Like `RedisModule_DictNextC()`, but instead of returning an internally allocated
+buffer and key length, it returns directly a module string object allocated
+in the specified context 'ctx' (that may be NULL exactly like for the main
+API `RedisModule_CreateString()`).
+
+The returned string object should be deallocated after use, either manually
+or by using a context that has automatic memory management active.
+
+## `RedisModule_DictPrev`
+
+    RedisModuleString *RedisModule_DictPrev(RedisModuleCtx *ctx, RedisModuleDictIter *di, void **dataptr);
+
+Like `RedisModule_DictNext()` but after returning the currently selected
+element in the iterator, it selects the previous element (lexicographically
+smaller) instead of the next one.
+
+## `RedisModule_GetRandomBytes`
+
+    void RedisModule_GetRandomBytes(unsigned char *dst, size_t len);
+
+Return random bytes using SHA1 in counter mode with a /dev/urandom
+initialized seed. This function is fast so it can be used to generate
+many bytes without any effect on the operating system entropy pool.
+Currently this function is not thread safe.
+
+## `RedisModule_GetRandomHexChars`
+
+    void RedisModule_GetRandomHexChars(char *dst, size_t len);
+
+Like `RedisModule_GetRandomBytes()` but instead of setting the string to
+random bytes the string is set to random characters in the
+hex charset [0-9a-f].
+

From db3ead6e92230431b4896f16e311a71857a0dd59 Mon Sep 17 00:00:00 2001
From: antirez
Date: Fri, 28 Sep 2018 11:56:33 +0200
Subject: [PATCH 0035/1457] Add XGROUP complexity.

---
 commands.json | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/commands.json b/commands.json
index a5a2f36b84..c7e2965c71 100644
--- a/commands.json
+++ b/commands.json
@@ -3555,7 +3555,7 @@
     },
     "XGROUP": {
         "summary": "Create, destroy, and manage consumer groups.",
-        "complexity": "",
+        "complexity": "O(log N) for all the subcommands, with N being the number of consumer groups registered in the stream, with the exception of the DESTROY subcommand which takes an additional O(M) time in order to delete the M entries inside the consumer group pending entries list (PEL).",
         "arguments": [
             {
                 "command": "CREATE",

From 7749cfda4b884cd414249de6e1eff371acbec46b Mon Sep 17 00:00:00 2001
From: antirez
Date: Fri, 28 Sep 2018 11:59:45 +0200
Subject: [PATCH 0036/1457] Fix XTRIM time complexity.
--- commands.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands.json b/commands.json index c7e2965c71..50ed76f4e3 100644 --- a/commands.json +++ b/commands.json @@ -3412,7 +3412,7 @@ }, "XTRIM": { "summary": "Trims the stream to (approximately if '~' is passed) a certain size", - "complexity": "O(log(N)) with N being the number of in the stream prior to trim.", + "complexity": "O(log(N)) + M, with N being the number of entries in the stream prior to trim, and M being the number of evicted entries. M constant times are very small however, since entries are organized in macro nodes containing multiple entries that can be released with a single deallocation.", "arguments": [ { "name": "key", From 7b420c0dd3eb93a92bdf80ebac223f29c70db8ae Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 28 Sep 2018 12:15:16 +0200 Subject: [PATCH 0037/1457] Add XINFO complexity. --- commands.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands.json b/commands.json index 50ed76f4e3..b8098c7b87 100644 --- a/commands.json +++ b/commands.json @@ -3359,7 +3359,7 @@ }, "XINFO": { "summary": "Get information on streams and consumer groups", - "complexity": "", + "complexity": "O(N) with N being the number of returned items for the subcommands CONSUMERS and GROUPS. The STREAM subcommand is O(log N) with N being the number of items in the stream.", "arguments": [ { "command": "CONSUMERS", From 533c3597bf43440bf69cad59a81c1ad3a6b3c776 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 28 Sep 2018 12:23:36 +0200 Subject: [PATCH 0038/1457] Add XACK complexity. --- commands.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands.json b/commands.json index b8098c7b87..de5d980ca6 100644 --- a/commands.json +++ b/commands.json @@ -3626,8 +3626,8 @@ "group": "stream" }, "XACK": { - "summary": "Marks a pending message as correctly processed. Return value of the command is the number of messages successfully acknowledged, that is, the IDs we were actually able to resolve in the PEL.", - "complexity": "", + "summary": "Marks a pending message as correctly processed, effectively removing it from the pending entries list of the consumer group. Return value of the command is the number of messages successfully acknowledged, that is, the IDs we were actually able to resolve in the PEL.", + "complexity": "O(log N) for each message ID processed.", "arguments": [ { "name": "key", From 4dc56ffe989ae57b4f6d6517ea40dc5c94de5278 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 28 Sep 2018 12:26:16 +0200 Subject: [PATCH 0039/1457] ADD XCLAIM complexity and description. --- commands.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands.json b/commands.json index de5d980ca6..dc8fb4877e 100644 --- a/commands.json +++ b/commands.json @@ -3647,8 +3647,8 @@ "group": "stream" }, "XCLAIM": { - "summary": "", - "complexity": "", + "summary": "Changes (or acquires) ownership of a message in a consumer group, as if the message was delivered to the specified consumer.", + "complexity": "O(log N) with N being the number of messages in the PEL of the consumer group.", "arguments": [ { "name": "key", From 897e8ad70429f19bb8923cb19b25eb2de40fbf9c Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 28 Sep 2018 17:29:03 +0200 Subject: [PATCH 0040/1457] XACK doc. 
---
 commands/xack.md | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)
 create mode 100644 commands/xack.md

diff --git a/commands/xack.md b/commands/xack.md
new file mode 100644
index 0000000000..145ce21b38
--- /dev/null
+++ b/commands/xack.md
@@ -0,0 +1,28 @@
+The `XACK` command removes one or multiple messages from the
+*pending entries list* (PEL) of a stream consumer group. A message is pending,
+and as such stored inside the PEL, when it was delivered to some consumer,
+normally as a side effect of calling `XREADGROUP`, or when a consumer took
+ownership of a message calling `XCLAIM`. The pending message was delivered to
+some consumer but the server is not yet sure it was processed at least once.
+So new calls to `XREADGROUP` to grab the message history for a consumer
+(for instance using an ID of 0) will return such messages.
+Similarly the pending message will be listed by the `XPENDING` command,
+that inspects the PEL.
+
+Once a consumer *successfully* processes a message, it should call `XACK`
+so that such message does not get processed again, and as a side effect,
+the PEL entry about this message is also purged, releasing memory from the
+Redis server.
+
+@return
+
+@integer-reply, specifically:
+
+The command returns the number of messages successfully acknowledged.
+Certain message IDs may no longer be part of the PEL (for example because
+they have already been acknowledged), and XACK will not count them as
+successfully acknowledged.
+
+```cli
+XACK mystream mygroup 1526569495631-0
+```

From c5574ddc084317bdc0b084a26ece2e7ad67017d7 Mon Sep 17 00:00:00 2001
From: antirez
Date: Fri, 28 Sep 2018 17:52:24 +0200
Subject: [PATCH 0041/1457] XCLAIM doc added.

---
 commands/xclaim.md | 50 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)
 create mode 100644 commands/xclaim.md

diff --git a/commands/xclaim.md b/commands/xclaim.md
new file mode 100644
index 0000000000..48f5b6e3f6
--- /dev/null
+++ b/commands/xclaim.md
@@ -0,0 +1,50 @@
+In the context of a stream consumer group, this command changes the ownership
+of a pending message, so that the new owner is the consumer specified as the
+command argument. Normally this is what happens:
+
+1. There is a stream with an associated consumer group.
+2. Some consumer A reads a message via `XREADGROUP` from a stream, in the context of that consumer group.
+3. As a side effect a pending message entry is created in the pending entries list (PEL) of the consumer group: it means the message was delivered to a given consumer, but it was not yet acknowledged via `XACK`.
+4. Then suddenly that consumer fails forever.
+5. Other consumers may inspect the list of pending messages, that are stale for quite some time, using the `XPENDING` command. In order to continue processing such messages, they use `XCLAIM` to acquire the ownership of the message and continue.
+
+This dynamic is clearly explained in the [Stream intro documentation](/topics/streams-intro).
+
+Note that the message is claimed only if its idle time is greater than the minimum idle time we specify when calling `XCLAIM`. Because as a side effect `XCLAIM` will also reset the idle time (since this is a new attempt at processing the message), two consumers trying to claim a message at the same time will never both succeed: only one will successfully claim the message. This avoids processing a given message multiple times in a trivial way (yet multiple processing is possible and unavoidable in the general case).
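+
+For instance, here is a sketch of the race just described, assuming the
+entry `1526569498055-0` has been idle for more than an hour (the exact
+rendering of the empty reply depends on the client):
+
+```
+(Alice) > XCLAIM mystream mygroup Alice 3600000 1526569498055-0
+1) 1) 1526569498055-0
+   2) 1) "message"
+      2) "orange"
+(Bob)   > XCLAIM mystream mygroup Bob 3600000 1526569498055-0
+(empty list or set)
+```
+
+Bob's attempt returns nothing because Alice's successful claim has just
+reset the idle time of the entry below the requested 3600000 milliseconds.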
+
+Moreover, as a side effect, `XCLAIM` will increment the count of attempted
+deliveries of the message. In this way messages that cannot be processed for
+some reason, for instance because the consumers crash attempting to process
+them, will start to have a larger counter and can be detected inside the system.
+
+## Command options
+
+The command has multiple options, however most are mainly for internal use in
+order to transfer the effects of `XCLAIM` or other commands to the AOF file
+and to propagate the same effects to the slaves, and are unlikely to be
+useful to normal users:
+
+1. `IDLE <ms>`: Set the idle time (last time it was delivered) of the message. If IDLE is not specified, an IDLE of 0 is assumed, that is, the time count is reset because the message has now a new owner trying to process it.
+2. `TIME <ms-unix-time>`: This is the same as IDLE but instead of a relative amount of milliseconds, it sets the idle time to a specific Unix time (in milliseconds). This is useful in order to rewrite the AOF file generating `XCLAIM` commands.
+3. `RETRYCOUNT <count>`: Set the retry counter to the specified value. This counter is incremented every time a message is delivered again. Normally `XCLAIM` does not alter this counter, which is just served to clients when the XPENDING command is called: this way clients can detect anomalies, like messages that are never processed for some reason after a big number of delivery attempts.
+4. `FORCE`: Creates the pending message entry in the PEL even if certain specified IDs are not already in the PEL assigned to a different client. However the message must exist in the stream, otherwise the IDs of non-existing messages are ignored.
+5. `JUSTID`: Return just an array of IDs of messages successfully claimed, without returning the actual message.
+
+@return
+
+@array-reply, specifically:
+
+The command returns all the messages successfully claimed, in the same format
+as `XRANGE`. However if the `JUSTID` option was specified, only the message
+IDs are reported, without including the actual message.
+
+Example:
+
+```
+> XCLAIM mystream mygroup Alice 3600000 1526569498055-0
+1) 1) 1526569498055-0
+   2) 1) "message"
+      2) "orange"
+```
+
+In the above example we claim the message with ID `1526569498055-0`, only if the message is idle for at least one hour without the original consumer or some other consumer making progress (acknowledging or claiming it), and assign the ownership to the consumer `Alice`.
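+
+As a further sketch (the ID reuses the example above; the reply shape
+follows from the `JUSTID` description), claiming while asking only for IDs:
+
+```
+> XCLAIM mystream mygroup Alice 3600000 1526569498055-0 JUSTID
+1) 1526569498055-0
+```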
From 2658cfc8b5801b682cd600c574503c1682da4c68 Mon Sep 17 00:00:00 2001
From: antirez
Date: Mon, 1 Oct 2018 11:38:33 +0200
Subject: [PATCH 0042/1457] XINFO command doc.

---
 commands/xinfo.md | 106 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 106 insertions(+)
 create mode 100644 commands/xinfo.md

diff --git a/commands/xinfo.md b/commands/xinfo.md
new file mode 100644
index 0000000000..25c8cef400
--- /dev/null
+++ b/commands/xinfo.md
@@ -0,0 +1,106 @@
+This is an introspection command used in order to retrieve different information
+about the streams and associated consumer groups. Three forms are possible:
+
+* `XINFO STREAM <key>`
+
+In this form the command returns general information about the stream stored
+at the specified key.
+
+```
+> XINFO STREAM mystream
+ 1) length
+ 2) (integer) 2
+ 3) radix-tree-keys
+ 4) (integer) 1
+ 5) radix-tree-nodes
+ 6) (integer) 2
+ 7) groups
+ 8) (integer) 2
+ 9) last-generated-id
+10) 1538385846314-0
+11) first-entry
+12) 1) 1538385820729-0
+    2) 1) "foo"
+       2) "bar"
+13) last-entry
+14) 1) 1538385846314-0
+    2) 1) "field"
+       2) "value"
+```
+
+In the above example you can see that the reported information is the number
+of elements of the stream, details about the radix tree representing the
+stream mostly useful for optimization and debugging tasks, the number of
+consumer groups associated with the stream, and the last generated ID that may
+not be the same as the last entry ID in case some entry was deleted. Finally
+the full first and last entry in the stream are shown, in order to give some
+sense of what the stream content is.
+
+* `XINFO GROUPS <key>`
+
+In this form we just get as output all the consumer groups associated with the
+stream:
+
+```
+> XINFO GROUPS mystream
+1) 1) name
+   2) "mygroup"
+   3) consumers
+   4) (integer) 2
+   5) pending
+   6) (integer) 2
+2) 1) name
+   2) "some-other-group"
+   3) consumers
+   4) (integer) 1
+   5) pending
+   6) (integer) 0
+```
+
+For each consumer group listed the command also shows the number of consumers
+known in that group and the pending messages (delivered but not yet acknowledged)
+in that group.
+
+* `XINFO CONSUMERS <key> <group>`
+
+Finally it is possible to get the list of every consumer in a specific consumer
+group:
+
+```
+> XINFO CONSUMERS mystream mygroup
+1) 1) name
+   2) "Alice"
+   3) pending
+   4) (integer) 1
+   5) idle
+   6) (integer) 9104628
+2) 1) name
+   2) "Bob"
+   3) pending
+   4) (integer) 1
+   5) idle
+   6) (integer) 83841983
+```
+
+We can see the idle time in milliseconds (last field) together with the
+consumer name and the number of pending messages for this specific
+consumer.
+
+**Note that you should not rely on the exact position of the fields**, nor on
+the number of fields: new fields may be added in the future. So a well behaving
+client should fetch the whole list, and report it to the user, for example,
+as a dictionary data structure. Low level clients such as C clients where
+the items will likely be reported back in a linear array should document
+that the order is undefined.
+
+Finally it is possible to get help from the command, in case the user can't
+remember the exact syntax, by using the `HELP` subcommand:
+
+```
+> XINFO HELP
+1) XINFO <subcommand> arg arg ... arg. Subcommands are:
+2) CONSUMERS <key> <groupname>  -- Show consumer groups of group <groupname>.
+3) GROUPS <key>                 -- Show the stream consumer groups.
+4) STREAM <key>                 -- Show information about the stream.
+5) HELP
+```

From 13f41ace02586bc768338bfb7f5b00e09af95b99 Mon Sep 17 00:00:00 2001
From: antirez
Date: Mon, 1 Oct 2018 11:45:46 +0200
Subject: [PATCH 0043/1457] XTRIM command documented.

---
 commands/xtrim.md | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)
 create mode 100644 commands/xtrim.md

diff --git a/commands/xtrim.md b/commands/xtrim.md
new file mode 100644
index 0000000000..1754fa6823
--- /dev/null
+++ b/commands/xtrim.md
@@ -0,0 +1,37 @@
+`XTRIM` trims the stream to a given number of items, evicting older items
+(items with lower IDs) if needed. The command is conceived to accept multiple
+trimming strategies, however currently only a single one is implemented,
+which is `MAXLEN`, and works exactly as the `MAXLEN` option in `XADD`.
+
+For example the following command will trim the stream to exactly
+the latest 1000 items:
+
+```
+XTRIM mystream MAXLEN 1000
+```
+
+It is possible to give the command in the following special form in
+order to make it more efficient:
+
+```
+XTRIM mystream MAXLEN ~ 1000
+```
+
+The `~` argument between the **MAXLEN** option and the actual count means that
+the user is not really requesting that the stream length is exactly 1000 items,
+but instead it could be a few tens of entries more, but never less than 1000
+items. When this option modifier is used, the trimming is performed only when
+Redis is able to remove a whole macro node. This makes it much more efficient,
+and it is usually what you want.
+
+@return
+
+@integer-reply, specifically:
+
+The command returns the number of entries deleted from the stream.
+
+```cli
+XADD mystream * field1 A field2 B field3 C field4 D
+XTRIM mystream MAXLEN 2
+XRANGE mystream - +
+```

From 8b2e3fc9c2f8c7e8dd05ff4792ae6c341ab3612c Mon Sep 17 00:00:00 2001
From: antirez
Date: Mon, 1 Oct 2018 12:16:42 +0200
Subject: [PATCH 0044/1457] XGROUP documented.

---
 commands/xgroup.md | 59 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)
 create mode 100644 commands/xgroup.md

diff --git a/commands/xgroup.md b/commands/xgroup.md
new file mode 100644
index 0000000000..6a2df5e77a
--- /dev/null
+++ b/commands/xgroup.md
@@ -0,0 +1,59 @@
+This command is used in order to manage the consumer groups associated
+with a stream data structure. Using `XGROUP` you can:
+
+* Create a new consumer group associated with a stream.
+* Destroy a consumer group.
+* Remove a specific consumer from a consumer group.
+* Set the consumer group *last delivered ID* to something else.
+
+To create a new consumer group, use the following form:
+
+    XGROUP CREATE mystream consumer-group-name $
+
+The last argument is the ID of the last item in the stream to consider already
+delivered. In the above case we used the special ID '$' (that means: the ID
+of the last item in the stream). In this case the consumers fetching data
+from that consumer group will only see new elements arriving in the stream.
+
+If instead you want consumers to fetch the whole stream history, use
+zero as the starting ID for the consumer group:
+
+    XGROUP CREATE mystream consumer-group-name 0
+
+Of course it is also possible to use any other valid ID. If the specified
+consumer group already exists, the command returns a `-BUSYGROUP` error.
+Otherwise the operation is performed and OK is returned. There are no hard
+limits to the number of consumer groups you can associate to a given stream.
+
+A consumer group can be destroyed completely by using the following form:
+
+    XGROUP DESTROY mystream some-consumer-group
+
+The consumer group will be destroyed even if there are active consumers
+and pending messages, so make sure to call this command only when really
+needed.
+
+To just remove a given consumer from a consumer group, the following
+form is used:
+
+    XGROUP DELCONSUMER mystream consumergroup myconsumer123
+
+Consumers in a consumer group are auto-created every time a new consumer
+name is mentioned by some command. However sometimes it may be useful to
+remove old consumers since they are no longer used. This form returns
+the number of pending messages that the consumer had before it was deleted.
+
+Finally it is possible to set the next message to deliver using the
+`SETID` subcommand. Normally the next ID is set when the consumer group is
+created, as the last argument of `XGROUP CREATE`.
However using this form +the next ID can be modified later without deleting and creating the consumer +group again. For instance if you want the consumers in a consumer group +to re-process all the messages in a stream, you may want to set its next +ID to 0: + + XGROUP SETID mystream my-consumer-group 0 + +Finally to get some help if you don't remember the syntax, use the +HELP subcommand: + + XGROUP HELP From 1d08de58f95ea3f0bb55461774c1d8f80475cfc3 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 3 Oct 2018 12:15:57 +0200 Subject: [PATCH 0045/1457] XDEL documented. --- commands/xdel.md | 51 ++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 51 insertions(+) create mode 100644 commands/xdel.md diff --git a/commands/xdel.md b/commands/xdel.md new file mode 100644 index 0000000000..6ac1477fd8 --- /dev/null +++ b/commands/xdel.md @@ -0,0 +1,51 @@ +Removes the specified entries from a stream, and returns the number of entries +deleted, that may be different from the number of IDs passed to the command in +case certain IDs do not exist. + +Normally you may think at a Redis stream as an append-only data structure, +however Redis streams are represented in memory, so we are able to also +delete entries. This may be useful, for instance, in order to comply with +certain privacy policies. + +# Understanding the low level details of entries deletion + +Redis streams are represented in a way that makes them memory efficient: +a radix tree is used in order to index macro-nodes that pack linearly tens +of stream entries. Normally what happens when you delete an entry from a stream +is that the entry is not *really* evicted, it just gets marked as deleted. + +Eventually if all the entries in a macro-node are marked as deleted, the whole +node is destroyed and the memory reclaimed. This means that if you delete +a large amount of entries from a stream, for instance more than 50% of the +entries appended to the stream, the memory usage per entry may increment, since +what happens is that the stream will start to be fragmented. However the stream +performances will remain the same. + +In future versions of Redis it is possible that we'll trigger a node garbage +collection in case a given macro-node reaches a given amount of deleted +entries. Currently with the usage we anticipate for this data structure, it is +not a good idea to add such complexity. + +@return + +@integer-reply: the number of entries actually deleted. + +@examples + +``` +> XADD mystream * a 1 +1538561698944-0 +> XADD mystream * b 2 +1538561700640-0 +> XADD mystream * c 3 +1538561701744-0 +> XDEL mystream 1538561700640-0 +(integer) 1 +127.0.0.1:6379> XRANGE mystream - + +1) 1) 1538561698944-0 + 2) 1) "a" + 2) "1" +2) 1) 1538561701744-0 + 2) 1) "c" + 2) "3" +``` From 2850ba810e3100cc824591637e84e60939891609 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 3 Oct 2018 16:12:19 +0200 Subject: [PATCH 0046/1457] CLIENT UNBLOCK added to commands.json. 
---
 commands.json | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/commands.json b/commands.json
index dc8fb4877e..5bc07e5e6b 100644
--- a/commands.json
+++ b/commands.json
@@ -310,6 +310,24 @@
         ],
         "group": "server"
     },
+    "CLIENT UNBLOCK": {
+        "summary": "Unblock a client blocked in a blocking command from a different connection",
+        "complexity": "O(log N) where N is the number of client connections",
+        "arguments": [
+            {
+                "name": "client-id",
+                "type": "string"
+            },
+            {
+                "name": "unblock-type",
+                "type": "enum",
+                "enum": ["TIMEOUT", "ERROR"],
+                "optional": true
+            }
+        ],
+        "since": "5.0.0",
+        "group": "server"
+    },
     "CLUSTER ADDSLOTS": {
         "summary": "Assign new hash slots to receiving node",
         "complexity": "O(N) where N is the total number of hash slot arguments",

From adaff90ffdc7ec67a65236f1f4627c8f92c0a89e Mon Sep 17 00:00:00 2001
From: antirez
Date: Wed, 3 Oct 2018 16:15:04 +0200
Subject: [PATCH 0047/1457] CLIENT ID added to commands.json.

---
 commands.json | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/commands.json b/commands.json
index 5bc07e5e6b..01424dff55 100644
--- a/commands.json
+++ b/commands.json
@@ -224,6 +224,12 @@
         "since": "5.0.0",
         "group": "sorted_set"
     },
+    "CLIENT ID": {
+        "summary": "Returns the client ID for the current connection",
+        "complexity": "O(1)",
+        "since": "5.0.0",
+        "group": "server"
+    },
     "CLIENT KILL": {
         "summary": "Kill the connection of a client",
         "complexity": "O(N) where N is the number of client connections",

From e7a01b8843f8248950cd4601daea7b2db53a6097 Mon Sep 17 00:00:00 2001
From: antirez
Date: Wed, 3 Oct 2018 16:28:25 +0200
Subject: [PATCH 0048/1457] CLIENT ID documented.

---
 commands/client-id.md | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
 create mode 100644 commands/client-id.md

diff --git a/commands/client-id.md b/commands/client-id.md
new file mode 100644
index 0000000000..f3715586f4
--- /dev/null
+++ b/commands/client-id.md
@@ -0,0 +1,14 @@
+The command just returns the ID of the current connection. Every connection
+ID has certain guarantees:
+
+1. It is never repeated, so if `CLIENT ID` returns the same number, the caller can be sure that the underlying client did not disconnect and reconnect the connection, but it is still the same connection.
+2. The ID is monotonically incremental. If the ID of a connection is greater than the ID of another connection, it is guaranteed that the second connection was established with the server at a later time.
+
+This command is especially useful together with `CLIENT UNBLOCK` which was
+introduced also in Redis 5 together with `CLIENT ID`. Check the `CLIENT UNBLOCK` command page for a pattern involving the two commands.
+
+@examples
+
+```cli
+CLIENT ID
+```

From ee0dee9d98b1c1b071a46c535e7dd6da15f45c1c Mon Sep 17 00:00:00 2001
From: antirez
Date: Wed, 3 Oct 2018 16:38:54 +0200
Subject: [PATCH 0049/1457] CLIENT UNBLOCK documented.

---
 commands/client-unblock.md | 51 ++++++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)
 create mode 100644 commands/client-unblock.md

diff --git a/commands/client-unblock.md b/commands/client-unblock.md
new file mode 100644
index 0000000000..1a43e4c72f
--- /dev/null
+++ b/commands/client-unblock.md
@@ -0,0 +1,51 @@
+This command can unblock, from a different connection, a client blocked in a blocking operation, such as for instance `BRPOP` or `XREAD` or `WAIT`.
+
+By default the client is unblocked as if the timeout of the command was
+reached, however if an additional (and optional) argument is passed, it is possible to specify the unblocking behavior, that can be **TIMEOUT** (the default) or **ERROR**. If **ERROR** is specified, the behavior is to unblock the client returning as error the fact that the client was force-unblocked. Specifically the client will receive the following error:
+
+    -UNBLOCKED client unblocked via CLIENT UNBLOCK
+
+Note: of course as usual it is not guaranteed that the error text remains
+the same, however the error code will remain `-UNBLOCKED`.
+
+This command is useful especially when we are monitoring many keys with
+a limited number of connections. For instance we may want to monitor multiple
+streams with `XREAD` without using more than N connections. However at some
+point the consumer process is informed that there is one more stream key
+to monitor. In order to avoid using more connections, the best behavior would
+be to stop the blocking command from one of the connections in the pool, add
+the new key, and issue the blocking command again.
+
+To obtain this behavior the following pattern is used. The process uses
+an additional *control connection* in order to send the `CLIENT UNBLOCK` command
+if needed. In the meantime, before running the blocking operation on the other
+connections, the process runs `CLIENT ID` in order to get the ID associated
+with that connection. When a new key should be added, or when a key should
+no longer be monitored, the relevant connection blocking command is aborted
+by sending `CLIENT UNBLOCK` in the control connection. The blocking command
+will return and can be finally reissued.
+
+This example shows the application in the context of Redis streams, however
+the pattern is a general one and can be applied to other cases.
+
+@examples
+
+```
+Connection A (blocking connection):
+> CLIENT ID
+2934
+> BRPOP key1 key2 key3 0
+(client is blocked)
+
+... Now we want to add a new key ...
+
+Connection B (control connection):
+> CLIENT UNBLOCK 2934
+1
+
+Connection A (blocking connection):
+... BRPOP reply with timeout ...
+NULL
+> BRPOP key1 key2 key3 key4 0
+(client is blocked again)
+```

From e55888580bf25270689690bb13fcddcd4526e303 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 4 Oct 2018 15:43:27 +0200
Subject: [PATCH 0050/1457] Trademark policy changes.

---
 topics/trademark.md | 35 ++++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 17 deletions(-)

diff --git a/topics/trademark.md b/topics/trademark.md
index 53177e4a34..e2d088d512 100644
--- a/topics/trademark.md
+++ b/topics/trademark.md
@@ -15,29 +15,30 @@
     * iv. You must include attribution according to section 6.a. herein.
 * c. __Nominative Use__: Trademark law permits third parties the use of a mark to identify the trademark holder’s product or service so long as such use is not likely to cause unnecessary consumer or public confusion. This is referred to as a nominative or fair use. When you distribute, or offer an altered, modified or combined copy of the Software, such as in the case of a cloud service or a container service, you may engage in “nominative use” of the Mark, but this does not allow you to use the Logo.
 * d. Examples of Nominative Use:
- * i.
Offering an XYZ software, which is an altered, modified or combined copy of the open source Redis software, including but not limited to offering Redis as a cloud service or as a container service, and while fully complying with the open source Redis API - you may only state that **"XYZ software is compatible with the Redis API"** No other term or description of your software is allowed. - * ii. Offering an XYZ application, which uses an altered, modified or combined copy of the open source Redis software as a data source, including but not limited to using Redis as a cloud service or a container service, and while the modified Redis fully complies with the open source Redis API - you may only state that **"XYZ application is using a software which is compatible with the Redis API"**. No other term or description of your application is allowed. - * iii. If, however, the offered XYZ software, or service based thereof, uses an altered, modified or combined copy of the open source Redis software that does not fully comply with the open source Redis API - you may not use the Mark and Logo at all. - * iv. Finally, while our previous trademark policy suggested that the reference “XYZ Software for Redis” would be a permissible nominative use of the mark, after further consideration, such a use inappropriately suggests an endorsement or sponsorship by us and the community, which we believe creates an unreasonable likelihood of confusion, so this is no longer permitted. See more below under section 5.b. + * i. Offering an XYZ software, which is an altered, modified or combined copy of the open source Redis software, including but not limited to offering Redis as a cloud service or as a container service, and while fully complying with the open source Redis API - you may only name it **"XYZ for Redis™"** or state that **"XYZ software is compatible with the Redis™ API"** No other term or description of your software is allowed. + * ii. Offering an ABC application, which uses an altered, modified or combined copy of the open source Redis software as a data source, including but not limited to using Redis as a cloud service or a container service, and while the modified Redis fully complies with the open source Redis API - you may only state that **"ABC application is using XYZ for Redis™"**, or **"ABC application is using a software which is compatible with the Redis™ API"**. No other term or description of your application is allowed. + * iii. If, however, the offered XYZ software, or service based thereof, or application ABC uses an altered, modified or combined copy of the open source Redis software that does not fully comply with the open source Redis API - you may not use the Mark and Logo at all. + * e. In any use (or nominative use) of the Mark or the Logo as per the above, you should comply with all the provisions of Section 6 (General Use). 5. **IMPROPER USE OF THE REDIS TRADEMARKS AND LOGOS**. Any use of the Mark or Logo other than as expressly described as permitted above, is not permitted because we believe that it would likely cause impermissible public confusion. Use of the Mark that we will likely consider infringing without permission for use include: - * a. Altered or Combined Software. You may not use the Mark with any software distribution in which the open source Redis software has been altered, modified or combined with any other software program, including automation software for offering Redis as a cloud service or orchestration software for offering Redis in containers. 
In particular, phrases like “XYZ for Redis”, “Redis based”, “Redis compatible” are not allowed. However, provided that you meet the requirements of Nominative Use in sections 4.c. and 4.d. hereabove, you can state that **"XYZ software is compatible with the Redis API"**, or **"XYZ application is using a software which is compatible with the Redis API"**. - * b. The terms "X for Redis" or "Redis for X" are particularly confusing and many companies misunderstand this formulation. Use of the word "for" does not, on its own, qualify the use as nominative use. "X for Y" is nominative use when it signifies that the developer of X has built X for use with Y. For example, "Adobe Acrobat for Mac" is developed by Adobe for the Mac platform. Likewise, "ServiceStack.Redis" is a c# client for Redis built by ServiceStack. But if you customize the Software for use with another application or platform, it misleads users to believe that such product or service has passed the quality control of Redis and is qualified by it. - * c. Entity Names. You may not form a company, use a company name, or create a software product or service name that includes the Mark or implies any that such company is the source or sponsor of Redis. If you wish to form an entity for a user or developer group, please contact us and we will be glad to discuss a license for a suitable name. - * d. Class or Quality. You may not imply that you are providing a class or quality of Redis (e.g., "enterprise-class" or "commercial quality" or “fully managed”) in a way that implies Redis is not of that class, grade or quality, nor that other parties are not of that class, grade, or quality. - * e. False or Misleading Statements. You may not make false or misleading statements regarding your use of Redis (e.g., "we wrote the majority of the code" or "we are major contributors" or "we are committers"). - * f. Domain Names and Subdomains. You must not use Redis or any confusingly similar phrase in a domain name or subdomain. For instance “www.Redishost.com” is not allowed. If you wish to use such a domain name for a user or developer group, please contact us and we will be glad to discuss a license for a suitable domain name. Because of the many persons who, unfortunately, seek to spoof, swindle or deceive the community by using confusing domain names, we must be very strict about this rule. - * g. Websites. You must not use our Mark or Logo on your website in a way that suggests that your website is an official website or that we endorse your website. - * h. Merchandise. You must not manufacture, sell or give away merchandise items, such as T-shirts and mugs, bearing the Mark or Logo, or create any mascot for Redis. If you wish to use the Mark or Logo for a user or developer group, please contact us and we will be glad to discuss a license to do this. - * i. Variations, takeoffs or abbreviations. You may not use a variation of the Mark for any purpose. For example, the following are not acceptable: + * a. Entity Names. You may not form a company, use a company name, or create a software product or service name that includes the Mark or implies any that such company is the source or sponsor of Redis. If you wish to form an entity for a user or developer group, please contact us and we will be glad to discuss a license for a suitable name. + * b. Class or Quality. 
You may not imply that you are providing a class or quality of Redis (e.g., "enterprise-class" or "commercial quality" or “fully managed”) in a way that implies Redis is not of that class, grade or quality, nor that other parties are not of that class, grade, or quality. + * c. False or Misleading Statements. You may not make false or misleading statements regarding your use of Redis (e.g., "we wrote the majority of the code" or "we are major contributors" or "we are committers"). + * d. Domain Names and Subdomains. You must not use Redis or any confusingly similar phrase in a domain name or subdomain. For instance “www.Redishost.com” is not allowed. If you wish to use such a domain name for a user or developer group, please contact us and we will be glad to discuss a license for a suitable domain name. Because of the many persons who, unfortunately, seek to spoof, swindle or deceive the community by using confusing domain names, we must be very strict about this rule. + * e. Websites. You must not use our Mark or Logo on your website in a way that suggests that your website is an official website or that we endorse your website. + * f. Merchandise. You must not manufacture, sell or give away merchandise items, such as T-shirts and mugs, bearing the Mark or Logo, or create any mascot for Redis. If you wish to use the Mark or Logo for a user or developer group, please contact us and we will be glad to discuss a license to do this. + * g. Variations, takeoffs or abbreviations. You may not use a variation of the Mark for any purpose. For example, the following are not acceptable: * i. Red * ii. MyRedis * iii. RedisHost - * j. Rebranding. You may not change the Mark or Logo on a redistributed (unmodified) Software to your own brand or logo. You may not hold yourself out as the source of the Redis software, except to the extent you have modified it as allowed under the three-clause BSD license, and you make it clear that you are the source only of the modification. - * k. Combination Marks. Do not use our Mark or Logo in combination with any other marks or logos. For example Foobar Redis, or the name of your company or product typeset to look like the Redis logo. - * l. Web Tags. Do not use the Mark in a title or metatag of a web page to influence search engine rankings or result listings, rather than for discussion or advocacy of the Redis project. + * h. Rebranding. You may not change the Mark or Logo on a redistributed (unmodified) Software to your own brand or logo. You may not hold yourself out as the source of the Redis software, except to the extent you have modified it as allowed under the three-clause BSD license, and you make it clear that you are the source only of the modification. + * i. Combination Marks. Do not use our Mark or Logo in combination with any other marks or logos. For example Foobar Redis, or the name of your company or product typeset to look like the Redis logo. + * j. Web Tags. Do not use the Mark in a title or metatag of a web page to influence search engine rankings or result listings, rather than for discussion or advocacy of the Redis project. 6. **GENERAL USE INFORMATION.** - * a. Attribution. The appropriate trademark symbol (i.e., TM or ® ) must appear at least with the first use of the Mark and all occurrences of the Logo. When you use the Mark or Logo, you must include a statement attributing ownership of the trademark to Redis Labs Ltd. For example, "Redis and the Redis logo are trademarks owned by Redis Labs Ltd. in the U.S. and other countries." + * a. 
Attribution. Any permitted use of the Mark or Logo, as indicated above, should comply with the following provisions: + * i. You should add an asterisk (`*`) to the first mention of the word "Redis™`*`" as part of or in connection with a product name. + * ii. Whenever "Redis™`*`" is shown - add the following legend (with an asterisk) in a noticeable and readable format: "`*` Redis is a trademark of Redis Labs Ltd. Any rights therein are reserved to Redis Labs Ltd. Any use by is for referential purposes only and does not indicate any sponsorship, endorsement or affiliation between Redis and ";. + * iii. Sections i. And ii. above apply to any appearance of the word "Redis" in: (a) any web page, gated or un-gated; (b) any marketing collateral, white paper, or other promotional material, whether printed or electronic; and (c) any advertisement, in any format. * b. Capitalization. Always distinguish the Mark from surrounding text with at least initial capital letters or in all capital letters, e.g., as Redis or REDIS. * c. Adjective. Always use the Mark as an adjective modifying a noun, such as “the Redis software.” * d. Do not make any changes to the Logo. This means you may not add decorative elements, change the colors, change the proportions, distort it, add elements or combine it with other logos. From 9faa44b4207b2f4e7e4bf65998235e6b35105e27 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 4 Oct 2018 16:20:43 +0200 Subject: [PATCH 0051/1457] Trademark fixes. --- topics/trademark.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/trademark.md b/topics/trademark.md index e2d088d512..25a095c803 100644 --- a/topics/trademark.md +++ b/topics/trademark.md @@ -36,8 +36,8 @@ or Logo other than as expressly described as permitted above, is not permitted b * j. Web Tags. Do not use the Mark in a title or metatag of a web page to influence search engine rankings or result listings, rather than for discussion or advocacy of the Redis project. 6. **GENERAL USE INFORMATION.** * a. Attribution. Any permitted use of the Mark or Logo, as indicated above, should comply with the following provisions: - * i. You should add an asterisk (`*`) to the first mention of the word "Redis™`*`" as part of or in connection with a product name. - * ii. Whenever "Redis™`*`" is shown - add the following legend (with an asterisk) in a noticeable and readable format: "`*` Redis is a trademark of Redis Labs Ltd. Any rights therein are reserved to Redis Labs Ltd. Any use by is for referential purposes only and does not indicate any sponsorship, endorsement or affiliation between Redis and ";. + * i. You should add the TM mark (™) and an asterisk (`*`) to the first mention of the word "Redis™`*`" as part of or in connection with a product name. + * ii. Whenever "Redis™`*`" is shown - add the following legend (with an asterisk) in a noticeable and readable format: "`*` Redis is a trademark of Redis Labs Ltd. Any rights therein are reserved to Redis Labs Ltd. Any use by `<`company XYZ`>` is for referential purposes only and does not indicate any sponsorship, endorsement or affiliation between Redis and `<`company XYZ`>`";. * iii. Sections i. And ii. above apply to any appearance of the word "Redis" in: (a) any web page, gated or un-gated; (b) any marketing collateral, white paper, or other promotional material, whether printed or electronic; and (c) any advertisement, in any format. * b. Capitalization. 
Always distinguish the Mark from surrounding text with at least initial capital letters or in all capital letters, e.g., as Redis or REDIS. * c. Adjective. Always use the Mark as an adjective modifying a noun, such as “the Redis software.” From 2cb03bb4d8a6f5dce19a35948efced5fdcb2e862 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 4 Oct 2018 16:59:00 +0200 Subject: [PATCH 0052/1457] Another minor fix to the Trademark document. --- topics/trademark.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/trademark.md b/topics/trademark.md index 25a095c803..24452afdba 100644 --- a/topics/trademark.md +++ b/topics/trademark.md @@ -36,7 +36,7 @@ or Logo other than as expressly described as permitted above, is not permitted b * j. Web Tags. Do not use the Mark in a title or metatag of a web page to influence search engine rankings or result listings, rather than for discussion or advocacy of the Redis project. 6. **GENERAL USE INFORMATION.** * a. Attribution. Any permitted use of the Mark or Logo, as indicated above, should comply with the following provisions: - * i. You should add the TM mark (™) and an asterisk (`*`) to the first mention of the word "Redis™`*`" as part of or in connection with a product name. + * i. You should add the TM mark (™) and an asterisk (`*`) to the first mention of the word "Redis" as part of or in connection with a product name. * ii. Whenever "Redis™`*`" is shown - add the following legend (with an asterisk) in a noticeable and readable format: "`*` Redis is a trademark of Redis Labs Ltd. Any rights therein are reserved to Redis Labs Ltd. Any use by `<`company XYZ`>` is for referential purposes only and does not indicate any sponsorship, endorsement or affiliation between Redis and `<`company XYZ`>`";. * iii. Sections i. And ii. above apply to any appearance of the word "Redis" in: (a) any web page, gated or un-gated; (b) any marketing collateral, white paper, or other promotional material, whether printed or electronic; and (c) any advertisement, in any format. * b. Capitalization. Always distinguish the Mark from surrounding text with at least initial capital letters or in all capital letters, e.g., as Redis or REDIS. From 7dae4de70c7e1f7c9f58f8c6b26d893b69924c32 Mon Sep 17 00:00:00 2001 From: artix Date: Wed, 27 Jun 2018 16:40:49 +0200 Subject: [PATCH 0053/1457] Cluster tutorial: replaced redis-trib with redis-cli. --- topics/cluster-tutorial.md | 70 +++++++++++++++++--------------------- 1 file changed, 32 insertions(+), 38 deletions(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index 6f6dac2e07..3d932ab3c7 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -301,31 +301,25 @@ Now that we have a number of instances running, we need to create our cluster by writing some meaningful configuration to the nodes. This is very easy to accomplish as we are helped by the Redis Cluster -command line utility called `redis-trib`, a Ruby program -executing special commands on instances in order to create new clusters, +command line utility embedded into `redis-cli`, that can be used to create new clusters, check or reshard an existing cluster, and so forth. -The `redis-trib` utility is in the `src` directory of the Redis source code -distribution. -You need to install `redis` gem to be able to run `redis-trib`. 
- - gem install redis - To create your cluster simply type: - ./redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 \ - 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 + redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 \ + 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \ + --cluster-replicas 1 The command used here is **create**, since we want to create a new cluster. -The option `--replicas 1` means that we want a slave for every master created. +The option `--cluster-replicas 1` means that we want a slave for every master created. The other arguments are the list of addresses of the instances I want to use to create the new cluster. Obviously the only setup with our requirements is to create a cluster with 3 masters and 3 slaves. -Redis-trib will propose you a configuration. Accept the proposed configuration by typing **yes**. +Redis-cli will propose you a configuration. Accept the proposed configuration by typing **yes**. The cluster will be configured and *joined*, which means, instances will be bootstrapped into talking with each other. Finally, if everything went well, you'll see a message like that: @@ -351,7 +345,7 @@ commands: 1. `create-cluster start` 2. `create-cluster create` -Reply to `yes` in step 2 when the `redis-trib` utility wants you to accept +Reply to `yes` in step 2 when the `redis-cli` utility wants you to accept the cluster layout. You can now interact with the cluster, the first node will start at port 30001 @@ -544,16 +538,16 @@ call in order to have some more serious write load during resharding. Resharding basically means to move hash slots from a set of nodes to another set of nodes, and like cluster creation it is accomplished using the -redis-trib utility. +redis-cli utility. To start a resharding just type: - ./redis-trib.rb reshard 127.0.0.1:7000 + redis-cli --cluster reshard 127.0.0.1:7000 -You only need to specify a single node, redis-trib will find the other nodes +You only need to specify a single node, redis-cli will find the other nodes automatically. -Currently redis-trib is only able to reshard with the administrator support, +Currently redis-cli is only able to reshard with the administrator support, you can't just say move 5% of slots from this node to the other one (but this is pretty trivial to implement). So it starts with questions. The first is how much a big resharding do you want to do: @@ -564,11 +558,11 @@ We can try to reshard 1000 hash slots, that should already contain a non trivial amount of keys if the example is still running without the sleep call. -Then redis-trib needs to know what is the target of the resharding, that is, +Then redis-cli needs to know what is the target of the resharding, that is, the node that will receive the hash slots. I'll use the first master node, that is, 127.0.0.1:7000, but I need to specify the Node ID of the instance. This was already printed in a -list by redis-trib, but I can always find the ID of a node with the following +list by redis-cli, but I can always find the ID of a node with the following command if I need: ``` @@ -583,7 +577,7 @@ I'll just type `all` in order to take a bit of hash slots from all the other master nodes. After the final confirmation you'll see a message for every slot that -redis-trib is going to move from a node to another, and a dot will be printed +redis-cli is going to move from a node to another, and a dot will be printed for every actual key moved from one side to the other. 
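For reference, the interactive dialog just described looks roughly like the following — an abridged sketch reusing the node ID and ports shown above; the exact prompt wording may differ between redis-cli versions:

```
$ redis-cli --cluster reshard 127.0.0.1:7000
How many slots do you want to move (from 1 to 16384)? 1000
What is the receiving node ID? 97a3a64667477371c4479320d683e4c8db5858b1
Please enter all the source node IDs.
Source node #1: all
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 5460 from 127.0.0.1:7001 to 127.0.0.1:7000: ....
```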
While the resharding is in progress you should be able to see your @@ -593,7 +587,7 @@ during the resharding if you want. At the end of the resharding, you can test the health of the cluster with the following command: - ./redis-trib.rb check 127.0.0.1:7000 + redis-cli --cluster check 127.0.0.1:7000 All the slots will be covered as usually, but this time the master at 127.0.0.1:7000 will have more hash slots, something around 6461. @@ -605,10 +599,10 @@ Reshardings can be performed automatically without the need to manually enter the parameters in an interactive way. This is possible using a command line like the following: - ./redis-trib.rb reshard --from --to --slots --yes : + redis-cli reshard : --cluster-from --cluster-to --cluster-slots --cluster-yes This allows to build some automatism if you are likely to reshard often, -however currently there is no way for `redis-trib` to automatically +however currently there is no way for `redis-cli` to automatically rebalance the cluster checking the distribution of keys across the cluster nodes and intelligently moving slots as needed. This feature will be added in the future. @@ -818,20 +812,20 @@ do in order to conform with the setup we used for the previous nodes: At this point the server should be running. -Now we can use **redis-trib** as usually in order to add the node to +Now we can use **redis-cli** as usually in order to add the node to the existing cluster. - ./redis-trib.rb add-node 127.0.0.1:7006 127.0.0.1:7000 + redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 As you can see I used the **add-node** command specifying the address of the new node as first argument, and the address of a random existing node in the cluster as second argument. -In practical terms redis-trib here did very little to help us, it just +In practical terms redis-cli here did very little to help us, it just sent a `CLUSTER MEET` message to the node, something that is also possible -to accomplish manually. However redis-trib also checks the state of the +to accomplish manually. However redis-cli also checks the state of the cluster before to operate, so it is a good idea to perform cluster operations -always via redis-trib even when you know how the internals work. +always via redis-cli even when you know how the internals work. Now we can connect to the new node to see if it really joined the cluster: @@ -854,7 +848,7 @@ the cluster. However it has two peculiarities compared to the other masters: * Because it is a master without assigned slots, it does not participate in the election process when a slave wants to become a master. Now it is possible to assign hash slots to this node using the resharding -feature of `redis-trib`. It is basically useless to show this as we already +feature of `redis-cli`. It is basically useless to show this as we already did in a previous section, there is no difference, it is just a resharding having as a target the empty node. @@ -862,19 +856,19 @@ Adding a new node as a replica --- Adding a new Replica can be performed in two ways. The obvious one is to -use redis-trib again, but with the --slave option, like this: +use redis-cli again, but with the --cluster-slave option, like this: - ./redis-trib.rb add-node --slave 127.0.0.1:7006 127.0.0.1:7000 + redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 --cluster-slave Note that the command line here is exactly like the one we used to add a new master, so we are not specifying to which master we want to add -the replica. 
In this case what happens is that redis-trib will add the new +the replica. In this case what happens is that redis-cli will add the new node as replica of a random master among the masters with less replicas. However you can specify exactly what master you want to target with your new replica with the following command line: - ./redis-trib.rb add-node --slave --master-id 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7006 127.0.0.1:7000 + redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 --cluster-slave --cluster-master-id 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e This way we assign the new replica to a specific master. @@ -905,9 +899,9 @@ The node 3c3a0c... now has two slaves, running on ports 7002 (the existing one) Removing a node --- -To remove a slave node just use the `del-node` command of redis-trib: +To remove a slave node just use the `del-node` command of redis-cli: - ./redis-trib del-node 127.0.0.1:7000 `` + redis-cli --cluster del-node 127.0.0.1:7000 `` The first argument is just a random node in the cluster, the second argument is the ID of the node you want to remove. @@ -1022,12 +1016,12 @@ in order to migrate your data set to Redis Cluster: 4. Create a Redis Cluster composed of N masters and zero slaves. You'll add slaves later. Make sure all your nodes are using the append only file for persistence. 5. Stop all the cluster nodes, substitute their append only file with your pre-existing append only files, aof-1 for the first node, aof-2 for the second node, up to aof-N. 6. Restart your Redis Cluster nodes with the new AOF files. They'll complain that there are keys that should not be there according to their configuration. -7. Use `redis-trib fix` command in order to fix the cluster so that keys will be migrated according to the hash slots each node is authoritative or not. -8. Use `redis-trib check` at the end to make sure your cluster is ok. +7. Use `redis-cli --cluster fix` command in order to fix the cluster so that keys will be migrated according to the hash slots each node is authoritative or not. +8. Use `redis-cli --cluster check` at the end to make sure your cluster is ok. 9. Restart your clients modified to use a Redis Cluster aware client library. There is an alternative way to import data from external instances to a Redis -Cluster, which is to use the `redis-trib import` command. +Cluster, which is to use the `redis-cli --cluster import` command. The command moves all the keys of a running instance (deleting the keys from the source instance) to the specified pre-existing Redis Cluster. However From 53d5aefc16efebc52abc29503396591ef8ab7d37 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 5 Oct 2018 12:15:24 +0200 Subject: [PATCH 0054/1457] Stars udpated in modules. 
--- modules.json | 38 +++++++++++++++++++------------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/modules.json b/modules.json index 0afa44f322..0b8c2a1fb3 100644 --- a/modules.json +++ b/modules.json @@ -7,7 +7,7 @@ "authors": [ "aviggiano" ], - "stars": 49 + "stars": 60 }, { "name": "redis-cell", @@ -17,7 +17,7 @@ "authors": [ "brandur" ], - "stars": 343 + "stars": 403 }, { "name": "Redis Graph", @@ -27,7 +27,7 @@ "authors": [ "swilly22" ], - "stars": 372 + "stars": 401 }, { "name": "redis-tdigest", @@ -37,7 +37,7 @@ "authors": [ "usmanm" ], - "stars": 41 + "stars": 45 }, { "name": "ReJSON", @@ -48,7 +48,7 @@ "itamarhaber", "RedisLabs" ], - "stars": 535 + "stars": 641 }, { "name": "Redis-ML", @@ -59,7 +59,7 @@ "shaynativ", "RedisLabs" ], - "stars": 160 + "stars": 212 }, { "name": "RediSearch", @@ -70,7 +70,7 @@ "dvirsky", "RedisLabs" ], - "stars": 1093 + "stars": 1323 }, { "name": "topk", @@ -81,7 +81,7 @@ "itamarhaber", "RedisLabs" ], - "stars": 20 + "stars": 21 }, { "name": "countminsketch", @@ -92,7 +92,7 @@ "itamarhaber", "RedisLabs" ], - "stars": 27 + "stars": 31 }, { "name": "rebloom", @@ -103,7 +103,7 @@ "mnunberg", "RedisLabs" ], - "stars": 95 + "stars": 136 }, { "name": "neural-redis", @@ -113,7 +113,7 @@ "authors": [ "antirez" ], - "stars": 2021 + "stars": 2073 }, { "name": "redis-timerseries", @@ -123,7 +123,7 @@ "authors": [ "danni-m" ], - "stars": 140 + "stars": 186 }, { "name": "ReDe", @@ -133,7 +133,7 @@ "authors": [ "daTokenizer" ], - "stars": 17 + "stars": 23 }, { "name": "commentDis", @@ -143,7 +143,7 @@ "authors": [ "daTokenizer" ], - "stars": 5 + "stars": 6 }, { "name": "redis-cuckoofilter", @@ -153,7 +153,7 @@ "authors": [ "kristoff-it" ], - "stars": 62 + "stars": 66 }, { "name": "cthulhu", @@ -163,7 +163,7 @@ "authors": [ "sklivvz" ], - "stars": 112 + "stars": 117 }, { "name": "Session Gate", @@ -173,7 +173,7 @@ "authors": [ "f0rmiga" ], - "stars": 29 + "stars": 32 }, { "name": "rediSQL", @@ -184,6 +184,6 @@ "siscia", "RedBeardLab" ], - "stars": 563 + "stars": 613 } -] +] \ No newline at end of file From 7dcf945b5a7bb7bdbafe387269dc4e069c9d638c Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 12 Oct 2018 16:41:38 +0200 Subject: [PATCH 0055/1457] Fix ownership of Redis name/logo in doc. --- topics/license.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/license.md b/topics/license.md index fcec35880c..d7ad7e9296 100644 --- a/topics/license.md +++ b/topics/license.md @@ -2,7 +2,7 @@ Redis is **open source software** released under the terms of the **three clause BSD license**. Most of the Redis source code was written and is copyrighted by Salvatore Sanfilippo and Pieter Noordhuis. A list of other contributors can be found in the git history. -The Redis trademark and logo are owned by Salvatore Sanfilippo and can be +The Redis trademark and logo are owned by Redis Labs and can be used in accordance with the [Redis Trademark Guidelines](/topics/trademark). # Three clause BSD license From 94a41a6e7a4ae1b288bd75bb70edc1de413c128b Mon Sep 17 00:00:00 2001 From: Chirag Date: Sat, 13 Oct 2018 15:53:18 +0530 Subject: [PATCH 0056/1457] Fixes grammer in Redus Cluster Tutorial Doc --- topics/cluster-tutorial.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index 3d932ab3c7..eb0471024d 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -31,7 +31,7 @@ some nodes fail or are not able to communicate. 
However the cluster stops to operate in the event of larger failures (for example when the majority of masters are unavailable). -So in practical terms, what you get with Redis Cluster? +So in practical terms, what do you get with Redis Cluster? * The ability to **automatically split your dataset among multiple nodes**. * The ability to **continue operations when a subset of the nodes are experiencing failures** or are unable to communicate with the rest of the cluster. From c717dfcb21aaf21f041f3616d6754ee9d8a81b12 Mon Sep 17 00:00:00 2001 From: Roey Prat Date: Mon, 15 Oct 2018 16:04:36 +0300 Subject: [PATCH 0057/1457] XPENDING - missing word 'group' --- commands/xpending.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/commands/xpending.md b/commands/xpending.md index f2768cbb53..d1a0f65fcc 100644 --- a/commands/xpending.md +++ b/commands/xpending.md @@ -20,10 +20,11 @@ explained in the [streams intro](/topics/streams-intro) and in the ## Summary form of XPENDING -When `XPENDING` is called with just a key name and a consumer name, it -just outputs a summary about the pending messages in a given consumer -group. In the following example, we create a consumed group and immediately -create a pending message by reading from the group with `XREADGROUP`. +When `XPENDING` is called with just a key name and a consumer group +name, it just outputs a summary about the pending messages in a given +consumer group. In the following example, we create a consumer group and +immediately create a pending message by reading from the group with +`XREADGROUP`. ``` > XGROUP CREATE mystream group55 0-0 From bc2dad350364650887a7455d3164b91fa2151b31 Mon Sep 17 00:00:00 2001 From: pushthat Date: Fri, 19 Oct 2018 10:48:36 +0200 Subject: [PATCH 0058/1457] Update replication.md, fix a typo --- topics/replication.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/replication.md b/topics/replication.md index 9b780f6f66..8544a9923f 100644 --- a/topics/replication.md +++ b/topics/replication.md @@ -120,7 +120,7 @@ that are promoted to masters. After a failover, the promoted slave requires to still remember what was its past replication ID, because such replication ID was the one of the former master. In this way, when other slaves will synchronize with the new master, they will try to perform a partial resynchronization using the -old master replication ID. This will work as expected, becuase when the slave +old master replication ID. This will work as expected, because when the slave is promoted to master it sets its secondary ID to its main ID, remembering what was the offset when this ID switch happend. Later it will select a new random replication ID, because a new history begins. When handling the new slaves From 5df909aca706feeecf04e1b34379ed760e9e94a8 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 19 Oct 2018 12:12:17 +0200 Subject: [PATCH 0059/1457] ARM page updated. --- topics/ARM.md | 21 +++++++++++---------- 1 file changed, 11 insertions(+), 10 deletions(-) diff --git a/topics/ARM.md b/topics/ARM.md index 7c649b5e1e..845ed7bbb5 100644 --- a/topics/ARM.md +++ b/topics/ARM.md @@ -1,13 +1,12 @@ # Redis on ARM -Since the Redis 4.0 version (currently in release candidate state) Redis -supports the ARM processor in general, and the Raspberry Pi specifically, as a -main platform, exactly like it happens for Linux/x86.
It means that every new -release of Redis is tested on the Pi environment, and that we take -this documentation page updated with information about supported devices -and information. While Redis already runs on Android, in the future we look -forward to extend our testing efforts to Android to also make it an officially -supported platform. +Both Redis 4 and Redis 5 versions supports the ARM processor in general, and +the Raspberry Pi specifically, as a main platform, exactly like it happens +for Linux/x86. It means that every new release of Redis is tested on the Pi +environment, and that we take this documentation page updated with information +about supported devices and other useful info. While Redis already runs on +Android, in the future we look forward to extend our testing efforts to Android +to also make it an officially supported platform. We believe that Redis is ideal for IoT and Embedded devices for several reasons: @@ -17,6 +16,7 @@ reasons: * Modeling data inside Redis can be very useful in order to make in-device decisions for appliances that must respond very quickly or when the remote servers are offline. * Redis can be used as an interprocess communication system between the processes running in the device. * The append only file storage of Redis is well suited for the SSD cards. +* The Redis 5 stream data structure was specifically designed for time series applications and has a very low memory overhead. ## Redis /proc/cpu/alignment requirements @@ -29,7 +29,7 @@ run as expected. ## Building Redis in the Pi -* Grab the latest commit of the Redis 4.0 branch. +* Download Redis verison 4 or 5. * Just use `make` as usually to create the executable. There is nothing special in the process. The only difference is that by @@ -45,7 +45,8 @@ Performance testing of Redis was performed in the Raspberry Pi 3 and in the original model B Pi. The difference between the two Pis in terms of delivered performance is quite big. The benchmarks were performed via the loopback interface, since most use cases will probably use Redis from within -the device and not via the network. +the device and not via the network. The following numbers were obtained using +Redis 4. Raspberry Pi 3: From f04dd7142481a3023a2a11f68ff83350fc330f92 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Fri, 19 Oct 2018 15:14:35 +0300 Subject: [PATCH 0060/1457] Adds streams to intro/homepage Signed-off-by: Itamar Haber --- topics/introduction.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/introduction.md b/topics/introduction.md index bb17783022..b1a4d29534 100644 --- a/topics/introduction.md +++ b/topics/introduction.md @@ -2,7 +2,7 @@ Introduction to Redis === Redis is an open source (BSD licensed), in-memory **data structure store**, used as a database, cache and message broker. It supports data structures such as -[strings](/topics/data-types-intro#strings), [hashes](/topics/data-types-intro#hashes), [lists](/topics/data-types-intro#lists), [sets](/topics/data-types-intro#sets), [sorted sets](/topics/data-types-intro#sorted-sets) with range queries, [bitmaps](/topics/data-types-intro#bitmaps), [hyperloglogs](/topics/data-types-intro#hyperloglogs) and [geospatial indexes](/commands/geoadd) with radius queries. 
Redis has built-in [replication](/topics/replication), [Lua scripting](/commands/eval), [LRU eviction](/topics/lru-cache), [transactions](/topics/transactions) and different levels of [on-disk persistence](/topics/persistence), and provides high availability via [Redis Sentinel](/topics/sentinel) and automatic partitioning with [Redis Cluster](/topics/cluster-tutorial). +[strings](/topics/data-types-intro#strings), [hashes](/topics/data-types-intro#hashes), [lists](/topics/data-types-intro#lists), [sets](/topics/data-types-intro#sets), [sorted sets](/topics/data-types-intro#sorted-sets) with range queries, [bitmaps](/topics/data-types-intro#bitmaps), [hyperloglogs](/topics/data-types-intro#hyperloglogs), [geospatial indexes](/commands/geoadd) with radius queries and [streams](/topics/streams-intro). Redis has built-in [replication](/topics/replication), [Lua scripting](/commands/eval), [LRU eviction](/topics/lru-cache), [transactions](/topics/transactions) and different levels of [on-disk persistence](/topics/persistence), and provides high availability via [Redis Sentinel](/topics/sentinel) and automatic partitioning with [Redis Cluster](/topics/cluster-tutorial). You can run **atomic operations** on these types, like [appending to a string](/commands/append); From a8ac842b4e918f56646b682c6e41d08077c19259 Mon Sep 17 00:00:00 2001 From: andrewsensus <7727721+andrewsensus@users.noreply.github.com> Date: Fri, 19 Oct 2018 10:11:56 -0600 Subject: [PATCH 0061/1457] typo: s/CLIETN ID/CLIENT ID/ --- commands/client-id.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/client-id.md b/commands/client-id.md index f3715586f4..53fcac5016 100644 --- a/commands/client-id.md +++ b/commands/client-id.md @@ -5,7 +5,7 @@ ID has certain guarantees: 2. The ID is monotonically incremental. If the ID of a connection is greater than the ID of another connection, it is guaranteed that the second connection was established with the server at a later time. This command is especially useful together with `CLIENT UNBLOCK` which was -introduced also in Redis 5 together with `CLIETN ID`. Check the `CLIENT UNBLOCK` command page for a pattern involving the two commands. +introduced also in Redis 5 together with `CLIENT ID`. Check the `CLIENT UNBLOCK` command page for a pattern involving the two commands. @examples ```cli CLIENT ID ``` From e6574a2b7b70c58606cd9962c2a626be080695f3 Mon Sep 17 00:00:00 2001 From: charpty Date: Sun, 21 Oct 2018 10:27:45 +0800 Subject: [PATCH 0062/1457] Fix typo in module-intro. Signed-off-by: charpty --- topics/modules-intro.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/modules-intro.md b/topics/modules-intro.md index 3ac6a46732..3fad95fe8f 100644 --- a/topics/modules-intro.md +++ b/topics/modules-intro.md @@ -573,7 +573,7 @@ To create a new key, open it for writing and then write to it using one of the key writing functions.
Example: RedisModuleKey *key; - key = RedisModule_OpenKey(ctx,argv[1],REDISMODULE_READ); + key = RedisModule_OpenKey(ctx,argv[1],REDISMODULE_WRITE); if (RedisModule_KeyType(key) == REDISMODULE_KEYTYPE_EMPTY) { RedisModule_StringSet(key,argv[2]); } From 3796c2eb13b62d29db1d123ba81ac1fa756a9a10 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Sun, 21 Oct 2018 18:07:27 +0300 Subject: [PATCH 0063/1457] Add TYPE filter and description of name field to CLIENT LIST Signed-off-by: Itamar Haber --- commands.json | 8 ++++++++ commands/client-list.md | 3 +++ 2 files changed, 11 insertions(+) diff --git a/commands.json b/commands.json index 01424dff55..9a0a492d2d 100644 --- a/commands.json +++ b/commands.json @@ -270,6 +270,14 @@ "CLIENT LIST": { "summary": "Get the list of client connections", "complexity": "O(N) where N is the number of client connections", + "arguments": [ + { + "command": "TYPE", + "type": "enum", + "enum": ["normal", "master", "replica", "pubsub"], + "optional": true + } + ], "since": "2.4.0", "group": "server" }, diff --git a/commands/client-list.md b/commands/client-list.md index d7dfca6de0..844cc419cc 100644 --- a/commands/client-list.md +++ b/commands/client-list.md @@ -1,6 +1,8 @@ The `CLIENT LIST` command returns information and statistics about the client connections server in a mostly human readable format. +As of v5.0, the optional `TYPE type` subcommand can be used to filter the list by clients' type, where *type* is one of `normal`, `master`, `replica` and `pubsub`. Note that clients blocked into the `MONITOR` command are considered to belong to the `normal` class. + @return @bulk-string-reply: a unique string, formatted as follows: @@ -12,6 +14,7 @@ connections server in a mostly human readable format. Here is the meaning of the fields: * `id`: an unique 64-bit client ID (introduced in Redis 2.8.12). +* `name`: the name set by the client with `CLIENT SETNAME` * `addr`: address/port of the client * `fd`: file descriptor corresponding to the socket * `age`: total duration of the connection in seconds From cad83c8e6af8b98b9c715bfec4bcac9a5edaf573 Mon Sep 17 00:00:00 2001 From: Chirag Date: Mon, 22 Oct 2018 00:53:05 +0530 Subject: [PATCH 0064/1457] Fixes redis cluster grammer --- topics/cluster-tutorial.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index eb0471024d..359fe6ff1e 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -218,7 +218,7 @@ let's introduce the configuration parameters that Redis Cluster introduces in the `redis.conf` file. Some will be obvious, others will be more clear as you continue reading. -* **cluster-enabled ``**: If yes enables Redis Cluster support in a specific Redis instance. Otherwise the instance starts as a stand alone instance as usually. +* **cluster-enabled ``**: If yes enables Redis Cluster support in a specific Redis instance. Otherwise the instance starts as a stand alone instance as usual. * **cluster-config-file ``**: Note that despite the name of this option, this is not an user editable configuration file, but the file where a Redis Cluster node automatically persists the cluster configuration (the state, basically) every time there is a change, in order to be able to re-read it at startup. The file lists things like the other nodes in the cluster, their state, persistent variables, and so forth. Often this file is rewritten and flushed on disk as a result of some message reception. 
* **cluster-node-timeout `<milliseconds>`**: The maximum amount of time a Redis Cluster node can be unavailable, without it being considered as failing. If a master node is not reachable for more than the specified amount of time, it will be failed over by its slaves. This parameter controls other important things in Redis Cluster. Notably, every node that can't reach the majority of master nodes for the specified amount of time, will stop accepting queries. * **cluster-slave-validity-factor `<factor>`**: If set to zero, a slave will always try to failover a master, regardless of the amount of time the link between the master and the slave remained disconnected. If the value is positive, a maximum disconnection time is calculated as the *node timeout* value multiplied by the factor provided with this option, and if the node is a slave, it will not try to start a failover if the master link was disconnected for more than the specified amount of time. For example if the node timeout is set to 5 seconds, and the validity factor is set to 10, a slave disconnected from the master for more than 50 seconds will not try to failover its master. Note that any value different than zero may result in Redis Cluster to be unavailable after a master failure if there is no slave able to failover it. In that case the cluster will return back available only when the original master rejoins the cluster. @@ -589,7 +589,7 @@ the following command: redis-cli --cluster check 127.0.0.1:7000 -All the slots will be covered as usually, but this time the master at +All the slots will be covered as usual, but this time the master at 127.0.0.1:7000 will have more hash slots, something around 6461. Scripting a resharding operation @@ -812,7 +812,7 @@ do in order to conform with the setup we used for the previous nodes: At this point the server should be running. -Now we can use **redis-cli** as usually in order to add the node to +Now we can use **redis-cli** as usual in order to add the node to the existing cluster.
redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 From 405725323f8eecee3f2e0a7e9f51cb09daf5f0ca Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Sun, 21 Oct 2018 23:13:01 +0300 Subject: [PATCH 0065/1457] Adds P (Pub/Sub) flag description Signed-off-by: Itamar Haber --- commands/client-list.md | 1 + 1 file changed, 1 insertion(+) diff --git a/commands/client-list.md b/commands/client-list.md index 844cc419cc..c148b98ed3 100644 --- a/commands/client-list.md +++ b/commands/client-list.md @@ -48,6 +48,7 @@ U: the client is connected via a Unix domain socket r: the client is in readonly mode against a cluster node A: connection to be closed ASAP N: no specific flag set +P: the client is a Pub/Sub subscriber ``` The file descriptor events can be: From 9d8c18418f1d15df711fd3594df11d6cd9c7a9c7 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Sun, 21 Oct 2018 23:16:31 +0300 Subject: [PATCH 0066/1457] Orders flags man-like Signed-off-by: Itamar Haber --- commands/client-list.md | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/commands/client-list.md b/commands/client-list.md index c148b98ed3..3843689be1 100644 --- a/commands/client-list.md +++ b/commands/client-list.md @@ -35,20 +35,20 @@ Here is the meaning of the fields: The client flags can be a combination of: ``` -O: the client is a client in MONITOR mode -S: the client is a replica node connection to this instance -M: the client is a master -x: the client is in a MULTI/EXEC context +A: connection to be closed ASAP b: the client is waiting in a blocking operation -i: the client is waiting for a VM I/O (deprecated) -d: a watched keys has been modified - EXEC will fail c: connection to be closed after writing entire reply -u: the client is unblocked -U: the client is connected via a Unix domain socket -r: the client is in readonly mode against a cluster node -A: connection to be closed ASAP +d: a watched keys has been modified - EXEC will fail +i: the client is waiting for a VM I/O (deprecated) +M: the client is a master N: no specific flag set +O: the client is a client in MONITOR mode P: the client is a Pub/Sub subscriber +r: the client is in readonly mode against a cluster node +S: the client is a replica node connection to this instance +u: the client is unblocked +U: the client is connected via a Unix domain socket +x: the client is in a MULTI/EXEC context ``` The file descriptor events can be: From c81727fe6450c25a6b296756f4b76590cd2cea3b Mon Sep 17 00:00:00 2001 From: Benoit Larroque Date: Tue, 23 Oct 2018 23:49:11 +0200 Subject: [PATCH 0067/1457] Add missing groupname in XGROUP SETID From https://github.com/antirez/redis/blob/144832ee67c058dfec9910e8953588d23ae3224c/src/t_stream.c#L842 and the doc this subcommand also should take the groupname as argument --- commands.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands.json b/commands.json index 01424dff55..3682c42f11 100644 --- a/commands.json +++ b/commands.json @@ -3589,8 +3589,8 @@ }, { "command": "SETID", - "name": ["key", "id-or-$"], - "type": ["key", "string"], + "name": ["key", "groupname", "id-or-$"], + "type": ["key", "string", "string"], "optional": true }, { From da2417590ec3f152cdfacb95d1f771eb45ae22a0 Mon Sep 17 00:00:00 2001 From: Benoit Larroque Date: Tue, 23 Oct 2018 23:53:30 +0200 Subject: [PATCH 0068/1457] Use the same consumergroup name in all example --- commands/xgroup.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/commands/xgroup.md 
b/commands/xgroup.md index 6a2df5e77a..15fb235d46 100644 --- a/commands/xgroup.md +++ b/commands/xgroup.md @@ -27,7 +27,7 @@ limits to the number of consumer groups you can associate to a given stream. A consumer can be destroyed completely by using the following form: - XGROUP DESTROY mystream some-consumer-group + XGROUP DESTROY mystream consumer-group-name The consumer group will be destroyed even if there are active consumers and pending messages, so make sure to call this command only when really @@ -36,7 +36,7 @@ needed. To just remove a given consumer from a consumer group, the following form is used: - XGROUP DELCONSUMER mystream consumergrouo myconsumer123 + XGROUP DELCONSUMER mystream consumer-group-name myconsumer123 Consumers in a consumer group are auto-created every time a new consumer name is mentioned by some command. However sometimes it may be useful to @@ -51,7 +51,7 @@ group again. For instance if you want the consumers in a consumer group to re-process all the messages in a stream, you may want to set its next ID to 0: - XGROUP SETID mystream my-consumer-group 0 + XGROUP SETID mystream consumer-group-name 0 Finally to get some help if you don't remember the syntax, use the HELP subcommand: From b6459bba957c2e7b1b2d8a50f1e4d328cba8b8fa Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 24 Oct 2018 13:08:06 +0200 Subject: [PATCH 0069/1457] Fix streams operations time complexity. --- commands.json | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/commands.json b/commands.json index 01424dff55..3712ad3953 100644 --- a/commands.json +++ b/commands.json @@ -3415,7 +3415,7 @@ }, "XADD": { "summary": "Appends a new entry to a stream", - "complexity": "O(log(N)) with N being the number of items already into the stream.", + "complexity": "O(1) but with non trivial constant times", "arguments": [ { "name": "key", @@ -3436,7 +3436,7 @@ }, "XTRIM": { "summary": "Trims the stream to (approximately if '~' is passed) a certain size", - "complexity": "O(log(N)) + M, with N being the number of entries in the stream prior to trim, and M being the number of evicted entries. M constant times are very small however, since entries are organized in macro nodes containing multiple entries that can be released with a single deallocation.", + "complexity": "O(N), with N being the number of evicted entries. Constant times are very small however, since entries are organized in macro nodes containing multiple entries that can be released with a single deallocation.", "arguments": [ { "name": "key", @@ -3463,7 +3463,7 @@ }, "XDEL": { "summary": "Removes the specified entries from the stream. Returns the number of items actually deleted, that may be different from the number of IDs passed in case certain IDs do not exist.", - "complexity": "O(log(N)) with N being the number of items in the stream.", + "complexity": "O(1) for each single item to delete in the stream, regardless of the stream size.", "arguments": [ { "name": "key", @@ -3480,7 +3480,7 @@ }, "XRANGE": { "summary": "Return a range of elements in a stream, with IDs matching the specified IDs interval", - "complexity": "O(log(N)+M) with N being the number of elements in the stream and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(log(N)).", + "complexity": "O(N) with N being the number of elements being returned. If N is constant (e.g. 
always asking for the first 10 elements with COUNT), you can consider it O(1).", "arguments": [ { "name": "key", @@ -3506,7 +3506,7 @@ }, "XREVRANGE": { "summary": "Return a range of elements in a stream, with IDs matching the specified IDs interval, in reverse order (from greater to smaller IDs) compared to XRANGE", - "complexity": "O(log(N)+M) with N being the number of elements in the stream and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(log(N)).", + "complexity": "O(N) with N being the number of elements returned. If N is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(1).", "arguments": [ { "name": "key", "type": "key" }, @@ -3544,7 +3544,7 @@ }, "XREAD": { "summary": "Return never seen elements in multiple streams, with IDs greater than the ones reported by the caller for each stream. Can block.", - "complexity": "For each stream mentioned: O(log(N)+M) with N being the number of elements in the stream and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(log(N)). On the other side, XADD will pay the O(N) time in order to serve the N clients blocked on the stream getting new data.", + "complexity": "For each stream mentioned: O(N) with N being the number of elements being returned, it means that XREAD-ing with a fixed COUNT is O(1), even if with non-trivial constant times. Note that when the BLOCK option is used, XADD will pay O(M) time in order to serve the M clients blocked on the stream getting new data.", "arguments": [ { "command": "COUNT", @@ -3579,7 +3579,7 @@ }, "XGROUP": { "summary": "Create, destroy, and manage consumer groups.", - "complexity": "O(log N) for all the subcommands, with N being the number of consumer groups registered in the stream, with the exception of the DESTROY subcommand which takes an additional O(M) time in order to delete the M entries inside the consumer group pending entries list (PEL).", + "complexity": "O(1) for all the subcommands, with the exception of the DESTROY subcommand which takes an additional O(M) time in order to delete the M entries inside the consumer group pending entries list (PEL).", "arguments": [ { "command": "CREATE", @@ -3611,7 +3611,7 @@ }, "XREADGROUP": { "summary": "Return new entries from a stream using a consumer group, or access the history of the pending entries for a given consumer. Can block.", - "complexity": "For each stream mentioned: O(log(N)+M) with N being the number of elements in the stream and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(log(N)). On the other side, XADD will pay the O(N) time in order to serve the N clients blocked on the stream getting new data.", + "complexity": "For each stream mentioned: O(M) with M being the number of elements returned. If M is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(1). On the other side when XREADGROUP blocks, XADD will pay the O(N) time in order to serve the N clients blocked on the stream getting new data.", "arguments": [ { "command": "GROUP", @@ -3651,7 +3651,7 @@ }, "XACK": { "summary": "Marks a pending message as correctly processed, effectively removing it from the pending entries list of the consumer group.
Return value of the command is the number of messages successfully acknowledged, that is, the IDs we were actually able to resolve in the PEL.", - "complexity": "O(log N) for each message ID processed.", + "complexity": "O(1) for each message ID processed.", "arguments": [ { "name": "key", @@ -3729,7 +3729,7 @@ }, "XPENDING": { "summary": "Return information and entries from a stream consumer group pending entries list, that are messages fetched but never acknowledged.", - "complexity": "O(log(N)+M) with N being the number of elements in the consumer group pending entries list, and M the number of elements being returned. When the command returns just the summary it runs in O(1) time assuming the list of consumers is small, otherwise there is additional O(N) time needed to iterate every consumer.", + "complexity": "O(N) with N being the number of elements returned, so asking for a small fixed number of entries per call is O(1). When the command returns just the summary it runs in O(1) time assuming the list of consumers is small, otherwise there is additional O(N) time needed to iterate every consumer.", "arguments": [ { "name": "key", From 6f51caff912df321b53eea64c288ebf2838fe416 Mon Sep 17 00:00:00 2001 From: Guy Korland Date: Thu, 25 Oct 2018 00:40:44 +0300 Subject: [PATCH 0070/1457] missing key in example --- commands/xrevrange.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands/xrevrange.md b/commands/xrevrange.md index bf03635557..5ddcbcd4e7 100644 --- a/commands/xrevrange.md +++ b/commands/xrevrange.md @@ -7,12 +7,12 @@ between (or exactly like) the two IDs, starting from the *end* side. So for instance, to get all the elements from the higher ID to the lower ID one could use: - XREVRANGE + - + XREVRANGE somestream + - Similarly to get just the last element added into the stream it is enough to send: - XREVRANGE + - COUNT 1 + XREVRANGE somestream + - COUNT 1 ## Iterating with XREVRANGE From cbf6f505ceaaf9392763bc08d077b66a0f24c680 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 25 Oct 2018 13:27:46 +0200 Subject: [PATCH 0071/1457] Add info about redis-trib back into the Cluster tutorial. See #1006. --- topics/cluster-tutorial.md | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index 3d932ab3c7..cdc4eeecbb 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -300,17 +300,25 @@ Creating the cluster Now that we have a number of instances running, we need to create our cluster by writing some meaningful configuration to the nodes. -This is very easy to accomplish as we are helped by the Redis Cluster -command line utility embedded into `redis-cli`, that can be used to create new clusters, -check or reshard an existing cluster, and so forth. +If you are using Redis 5, this is very easy to accomplish as we are helped by the Redis Cluster command line utility embedded into `redis-cli`, that can be used to create new clusters, check or reshard an existing cluster, and so forth. +For Redis version 3 or 4, there is the older tool called `redis-trib.rb` which is very similar. You can find it in the `src` directory of the Redis source code distribution. You need to install `redis` gem to be able to run `redis-trib`. - To create your cluster simply type: + gem install redis + +The first example, that is, the cluster creation, will be shown using both `redis-cli` in Redis 5 and `redis-trib` in Redis 3 and 4. 
However all the next examples will only use `redis-cli`, since as you can see the syntax is very similar, and you can trivially change one command line into the other by using `redis-trib.rb help` to get info about the old syntax. **Important:** note that you can use Redis 5 `redis-cli` against Redis 4 clusters without issues if you wish.
+
+To create your cluster for Redis 5 with `redis-cli` simply type:

     redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 \
     127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
     --cluster-replicas 1

+Using `redis-trib.rb` for Redis 4 or 3 type:
+
+    ./redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 \
+    127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
+
 The command used here is **create**, since we want to create a new cluster.
 The option `--cluster-replicas 1` means that we want a slave for every master created.
 The other arguments are the list of addresses of the instances I want to use

From 19f0de6b0ecffa034c7e3e0cb4b1ca704f6e7a73 Mon Sep 17 00:00:00 2001
From: Itamar Haber
Date: Mon, 29 Oct 2018 14:35:03 +0200
Subject: [PATCH 0072/1457] Adds NOACK to XREADGROUP

And fixes a couple of typos in streams-intro.

Signed-off-by: Itamar Haber

---
 commands.json           | 6 ++++++
 commands/xreadgroup.md  | 6 +++++-
 topics/streams-intro.md | 4 ++--
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/commands.json b/commands.json
index 9ebb8d9e77..5af826c849 100644
--- a/commands.json
+++ b/commands.json
@@ -3638,6 +3638,12 @@
         "type": "integer",
         "optional": true
       },
+      {
+        "name": "noack",
+        "type": "enum",
+        "enum": ["NOACK"],
+        "optional": true
+      },
       {
         "name": "streams",
         "type": "enum",

diff --git a/commands/xreadgroup.md b/commands/xreadgroup.md
index 80a6b12c3c..e19324e1f5 100644
--- a/commands/xreadgroup.md
+++ b/commands/xreadgroup.md
@@ -16,7 +16,7 @@ Without consumer groups, just using `XREAD`, all the clients are served with all

 Within a consumer group, a given consumer (that is, just a client consuming messages from the stream), has to identify with a unique *consumer name*. Which is just a string.

-One of the guarantees of consumer groups is that a given consumer can only see the history of messages that were delivered to it, so a message has just a single owner. However there is a special feature called *message claiming* that allows other consumers to claim messages in case there is a non recoverable failure of some consumer. In order to implement such semantics, consumer groups require explicit acknowledged of the messages successfully processed by the consumer, via the `XACK` command. This is needed because the stream will track, for each consumer group, who is processing what message.
+One of the guarantees of consumer groups is that a given consumer can only see the history of messages that were delivered to it, so a message has just a single owner. However there is a special feature called *message claiming* that allows other consumers to claim messages in case there is a non recoverable failure of some consumer. In order to implement such semantics, consumer groups require explicit acknowledgement of the messages successfully processed by the consumer, via the `XACK` command. This is needed because the stream will track, for each consumer group, who is processing what message.

 This is how to understand if you want to use a consumer group or not:

@@ -45,6 +45,10 @@ The client will have to acknowledge the message processing using `XACK` in
 order for the pending entry to be removed from the PEL.
The PEL can be inspected using the `XPENDING` command. +The `NOACK` subcommand can be used to avoid adding the message to the PEL in +cases where reliability is not a requirement and the occasional message loss +is acceptable. This is equivalent to acknowledging the message when it is read. + The ID to specify in the **STREAMS** option when using `XREADGROUP` can be one of the following two: diff --git a/topics/streams-intro.md b/topics/streams-intro.md index 61d0fd5b7f..76e8c02653 100644 --- a/topics/streams-intro.md +++ b/topics/streams-intro.md @@ -273,7 +273,7 @@ There is another very important detail in the command line above, after the mand This is almost always what you want, however it is also possible to specify a real ID, such as `0` or any other valid ID, in this case however what happens is that we request to **XREADGROUP** to just provide us with the **history of pending messages**, and in such case, will never see new messages in the group. So basically **XREADGROUP** has the following behavior based on the ID we specify: * If the ID is the special ID `>` then the command will return only new messages never delivered to other consumers so far, and as a side effect, will update the consumer group *last ID*. -* If the ID is any other valid numerical ID, then the command will let us access our *history of pending messages*. That is, the set of messages that were delivered to this specified consumer (identified by the provided name), and never acknowledged so var with **XACK**. +* If the ID is any other valid numerical ID, then the command will let us access our *history of pending messages*. That is, the set of messages that were delivered to this specified consumer (identified by the provided name), and never acknowledged so far with **XACK**. We can test this behavior immediately specifying an ID of 0, without any **COUNT** option: we'll just see the only pending message, that is, the one about apples: @@ -377,7 +377,7 @@ while true end ``` -As you can see the idea here is to start consuming the history, that is, our list of pending messages. This is useful because the consumer may have crashed before, so in the event of a restart, we want to read again messages that were delivered to us without getting acknowledged. This way we can process a message multiple times or one time (at least in the case of consumers failures, but there are also the limits of Redis persistence and replication involved, see the specific section about this topic). +As you can see the idea here is to start consuming the history, that is, our list of pending messages. This is useful because the consumer may have crashed before, so in the event of a restart we want to read again messages that were delivered to us without getting acknowledged. This way we can process a message multiple times or one time (at least in the case of consumers failures, but there are also the limits of Redis persistence and replication involved, see the specific section about this topic). Once the history was consumed, and we get an empty list of messages, we can switch to use the `>` special ID in order to consume new messages. From 127e72e947ba1696218452bbc9776bd212abccb8 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 29 Oct 2018 18:56:52 +0100 Subject: [PATCH 0073/1457] Some admin hints refreshed. 
---
 topics/admin.md | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/topics/admin.md b/topics/admin.md
index 9adc011c54..85cec9d9e7 100644
--- a/topics/admin.md
+++ b/topics/admin.md
@@ -7,16 +7,18 @@ Every topic is self contained in form of a FAQ. New topics will be created in th
 Redis setup hints
 -----------------
-+ We suggest deploying Redis using the **Linux operating system**. Redis is also tested heavily on OS X, and tested from time to time on FreeBSD and OpenBSD systems. However Linux is where we do all the major stress testing, and where most production deployments are working.
++ We suggest deploying Redis using the **Linux operating system**. Redis is also tested heavily on OS X, and tested from time to time on FreeBSD and OpenBSD systems. However Linux is where we do all the major stress testing, and where most production deployments are running.
 + Make sure to set the Linux kernel **overcommit memory setting to 1**. Add `vm.overcommit_memory = 1` to `/etc/sysctl.conf` and then reboot or run the command `sysctl vm.overcommit_memory=1` for this to take effect immediately.
 * Make sure to disable Linux kernel feature *transparent huge pages*, it will affect greatly both memory usage and latency in a negative way. This is accomplished with the following command: `echo never > /sys/kernel/mm/transparent_hugepage/enabled`.
-+ Make sure to **setup some swap** in your system (we suggest as much as swap as memory). If Linux does not have swap and your Redis instance accidentally consumes too much memory, either Redis will crash for out of memory or the Linux kernel OOM killer will kill the Redis process.
-+ Set an explicit `maxmemory` option limit in your instance in order to make sure that the instance will report errors instead of failing when the system memory limit is near to be reached.
++ Make sure to **setup some swap** in your system (we suggest as much swap as memory). If Linux does not have swap and your Redis instance accidentally consumes too much memory, either Redis will crash due to running out of memory or the Linux kernel OOM killer will kill the Redis process. When swapping is enabled Redis will work in a bad way, but you'll likely notice the latency spikes and do something before it's too late.
++ Set an explicit `maxmemory` option limit in your instance in order to make sure that the instance will report errors instead of failing when the system memory limit is near to be reached. Note that maxmemory should be set calculating the overhead that Redis has, other than data, and the fragmentation overhead. So if you think you have 10 GB of free memory, set it to 8 or 9.
 + If you are using Redis in a very write-heavy application, while saving an RDB file on disk or rewriting the AOF log **Redis may use up to 2 times the memory normally used**. The additional memory used is proportional to the number of memory pages modified by writes during the saving process, so it is often proportional to the number of keys (or aggregate types items) touched during this time. Make sure to size your memory accordingly.
-+ Use `daemonize no` when run under daemontools.
-+ Even if you have persistence disabled, Redis will need to perform RDB saves if you use replication, unless you use the new diskless replication feature, which is currently experimental.
-+ If you are using replication, make sure that either your master has persistence enabled, or that it does not automatically restarts on crashes: slaves will try to be an exact copy of the master, so if a master restarts with an empty data set, slaves will be wiped as well.
++ Use `daemonize no` when running under daemontools.
++ Make sure to setup some non trivial replication backlog, which must be set in proportion to the amount of memory Redis is using. In a 20 GB instance it does not make sense to have just 1 MB of backlog. The backlog will allow replicas to resynchronize with the master instance much more easily.
++ Even if you have persistence disabled, Redis will need to perform RDB saves if you use replication, unless you use the new diskless replication feature. If you have no disk usage on the master, make sure to enable diskless replication.
++ If you are using replication, make sure that either your master has persistence enabled, or that it does not automatically restart on crashes: replicas will try to be an exact copy of the master, so if a master restarts with an empty data set, replicas will be wiped as well.
 + By default Redis does not require **any authentication and listens to all the network interfaces**. This is a big security issue if you leave Redis exposed on the internet or other places where attackers can reach it. See for example [this attack](http://antirez.com/news/96) to see how dangerous it can be. Please check our [security page](/topics/security) and the [quick start](/topics/quickstart) for information about how to secure Redis.
++ `LATENCY DOCTOR` and `MEMORY DOCTOR` are your friends.

 Running Redis on EC2
 --------------------
@@ -24,7 +26,7 @@
 + Use HVM based instances, not PV based instances.
 + Don't use old instances families, for example: use m3.medium with HVM instead of m1.medium with PV.
 + The use of Redis persistence with **EC2 EBS volumes** needs to be handled with care since sometimes EBS volumes have high latency characteristics.
-+ You may want to try the new **diskless replication** if you have issues when slaves are synchronizing with the master.
++ You may want to try the new **diskless replication** if you have issues when replicas are synchronizing with the master.

 Upgrading or restarting a Redis instance without downtime
 -------------------------------------------------------

@@ -43,9 +45,9 @@ The following steps provide a very commonly used way in order to avoid any downt

 * Wait for the replication initial synchronization to complete (check the slave log file).
 * Make sure using INFO that there are the same number of keys in the master and in the slave. Check with redis-cli that the slave is working as you wish and is replying to your commands.
 * Allow writes to the slave using **CONFIG SET slave-read-only no**
-* Configure all your clients in order to use the new instance (that is, the slave).
+* Configure all your clients in order to use the new instance (that is, the slave). Note that you may want to use the `CLIENT PAUSE` command in order to make sure that no client can write to the old master during the switch.
 * Once you are sure that the master is no longer receiving any query (you can check this with the [MONITOR command](/commands/monitor)), elect the slave to master using the **SLAVEOF NO ONE** command, and shut down your master.
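The switch described in the steps above boils down to very few commands. The following is a minimal sketch, assuming a hypothetical setup with the old master on port 6379 and the slave to be promoted on port 6380 (the addresses and the pause time are arbitrary):

```
# On the slave that will become the new master:
redis-cli -p 6380 CONFIG SET slave-read-only no

# Optionally stop writes on the old master for a few seconds during the switch:
redis-cli -p 6379 CLIENT PAUSE 5000

# Once all the clients point to the new instance, promote it:
redis-cli -p 6380 SLAVEOF NO ONE
```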
-If you are using [Redis Sentinel](/topics/sentinel) or [Redis Cluster](/topics/cluster-tutorial), the simplest way in order to upgrade to newer versions, is to upgrade a slave after the other, then perform a manual fail-over in order to promote one of the upgraded slaves as master, and finally promote the last slave.
+If you are using [Redis Sentinel](/topics/sentinel) or [Redis Cluster](/topics/cluster-tutorial), the simplest way in order to upgrade to newer versions, is to upgrade one slave after the other, then perform a manual fail-over in order to promote one of the upgraded replicas as master, and finally promote the last slave.

-Note however that Redis Cluster 4.0 is not compatible with Redis Cluster 3.2 at cluster bus protocol level, so a mass restart is needed in this case.
+Note however that Redis Cluster 4.0 is not compatible with Redis Cluster 3.2 at cluster bus protocol level, so a mass restart is needed in this case. However Redis 5 cluster bus is backward compatible with Redis 4.

From 38d0daf81be62c72b0f77e32153a5d27e94de43c Mon Sep 17 00:00:00 2001
From: Pieter Cailliau
Date: Wed, 31 Oct 2018 10:56:38 +0000
Subject: [PATCH 0074/1457] Updating RedisGraph in modules.json

---
 modules.json | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/modules.json b/modules.json
index 0b8c2a1fb3..527f79586c 100644
--- a/modules.json
+++ b/modules.json
@@ -20,12 +20,13 @@
     "stars": 403
   },
   {
-    "name": "Redis Graph",
-    "license": "AGPL",
-    "repository": "https://github.com/swilly22/redis-graph",
-    "description": "A graph database with a Cypher-based querying language",
+    "name": "RedisGraph",
+    "license": "Apache 2.0 modified with Commons Clause",
+    "repository": "https://github.com/RedisLabsModules/redis-graph",
+    "description": "A graph database with a Cypher-based querying language using sparse adjacency matrices",
     "authors": [
-      "swilly22"
+      "swilly22",
+      "RedisLabs"
     ],
     "stars": 401
   },
@@ -186,4 +187,4 @@
     ],
     "stars": 613
   }
-]
\ No newline at end of file
+]

From e03d10e92c5a877feb91a7dfc6c65bb84f5c50ab Mon Sep 17 00:00:00 2001
From: antirez
Date: Wed, 31 Oct 2018 13:07:11 +0100
Subject: [PATCH 0075/1457] XADD / XREAD constant times are actually not
 higher than other commands.

Moreover such statement is going to confuse the hell out of most users. XADD for instance is probably faster than ZADD, so it does not make sense to add this sentence.

---
 commands.json | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/commands.json b/commands.json
index 5af826c849..9ce4963543 100644
--- a/commands.json
+++ b/commands.json
@@ -3423,7 +3423,7 @@
     },
     "XADD": {
         "summary": "Appends a new entry to a stream",
-        "complexity": "O(1) but with non trivial constant times",
+        "complexity": "O(1)",
         "arguments": [
             {
                 "name": "key",
@@ -3552,7 +3552,7 @@
     },
     "XREAD": {
         "summary": "Return never seen elements in multiple streams, with IDs greater than the ones reported by the caller for each stream. Can block.",
-        "complexity": "For each stream mentioned: O(N) with N being the number of elements being returned, it menas that XREAD-ing with a fixed COUNT is O(1), even if with non-trivial constant times. Note that when the BLOCK option is used, XADD will pay O(M) time in order to serve the M clients blocked on the stream getting new data.",
+        "complexity": "For each stream mentioned: O(N) with N being the number of elements being returned, it means that XREAD-ing with a fixed COUNT is O(1). Note that when the BLOCK option is used, XADD will pay O(M) time in order to serve the M clients blocked on the stream getting new data.",
         "arguments": [
             {
                 "command": "COUNT",

From 5f967165efae3fed5b52b9531a01b757cfcfab92 Mon Sep 17 00:00:00 2001
From: antirez
Date: Wed, 31 Oct 2018 16:30:48 +0100
Subject: [PATCH 0076/1457] Document the new Sentinel authentication feature.

---
 topics/sentinel.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/topics/sentinel.md b/topics/sentinel.md
index f8c98629ad..614176c2d2 100644
--- a/topics/sentinel.md
+++ b/topics/sentinel.md
@@ -783,6 +783,27 @@ configured with `requirepass`, the Sentinel configuration must include the

     sentinel auth-pass <master-group-name> <password>

+Configuring Sentinel instances with authentication
+---
+
+You can also configure the Sentinel instance itself in order to require
+client authentication via the `AUTH` command, however this feature is
+only available starting with Redis 5.0.1.
+
+In order to do so, just add the following configuration directive to
+all your Sentinel instances:
+
+    requirepass "your_password_here"
+
+When configured this way, Sentinels will do two things:
+
+1. A password will be required from clients in order to send commands to Sentinels. This is obvious since this is how such configuration directive works in Redis in general.
+2. Moreover the same password configured to access the local Sentinel will be used by this Sentinel instance in order to authenticate to all the other Sentinel instances it connects to.
+
+This means that **you will have to configure the same `requirepass` password in all the Sentinel instances**. This way every Sentinel can talk with every other Sentinel without any need to configure for each Sentinel the password to access all the other Sentinels, that would be very impractical.
+
+Before using this configuration make sure your client library is able to send the `AUTH` command to Sentinel instances.
+
 Sentinel clients implementation
 ---

From caaea035179ed89e7053664feb1cfdc80e32d3cf Mon Sep 17 00:00:00 2001
From: antirez
Date: Wed, 7 Nov 2018 17:05:03 +0100
Subject: [PATCH 0077/1457] Don't rely on Lua return value ordering.

See issue #5538.

---
 commands/eval.md | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/commands/eval.md b/commands/eval.md
index d4593f4e11..b9039af7e7 100644
--- a/commands/eval.md
+++ b/commands/eval.md
@@ -373,13 +373,19 @@ In order to enforce this behavior in scripts Redis does the following:

 Note that a _random command_ does not necessarily mean a command that uses
 random numbers: any non-deterministic command is considered a random command
 (the best example in this regard is the `TIME` command).
-* Redis commands that may return elements in random order, like `SMEMBERS`
-  (because Redis Sets are _unordered_) have a different behavior when called
-  from Lua, and undergo a silent lexicographical sorting filter before
-  returning data to Lua scripts.
-  So `redis.call("smembers",KEYS[1])` will always return the Set elements
-  in the same order, while the same command invoked from normal clients may
-  return different results even if the key contains exactly the same elements.
+* In Redis version 4, commands that may return elements in random order, like
+  `SMEMBERS` (because Redis Sets are _unordered_) have a different behavior
+  when called from Lua, and undergo a silent lexicographical sorting filter
+  before returning data to Lua scripts.
So `redis.call("smembers",KEYS[1])` + will always return the Set elements in the same order, while the same + command invoked from normal clients may return different results even if + the key contains exactly the same elements. However starting with Redis 5 + there is no longer such ordering step, because Redis 5 replicates scripts + in a way that no longer need non-deterministic commands to be converted + into deterministic ones. In general, even when developing for Redis 4, never + assume that certain commands in Lua will be ordered, but instead rely on + the documentation of the original command you call to see the properties + it provides. * Lua pseudo random number generation functions `math.random` and `math.randomseed` are modified in order to always have the same seed every time a new script is executed. From 658233eb1cc4ae963bf520ea9f51bd2f826226bb Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 7 Nov 2018 17:45:56 +0100 Subject: [PATCH 0078/1457] Minor grammar fix: need -> needs. --- commands/eval.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/eval.md b/commands/eval.md index b9039af7e7..a01bdc6f18 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -381,7 +381,7 @@ In order to enforce this behavior in scripts Redis does the following: command invoked from normal clients may return different results even if the key contains exactly the same elements. However starting with Redis 5 there is no longer such ordering step, because Redis 5 replicates scripts - in a way that no longer need non-deterministic commands to be converted + in a way that no longer needs non-deterministic commands to be converted into deterministic ones. In general, even when developing for Redis 4, never assume that certain commands in Lua will be ordered, but instead rely on the documentation of the original command you call to see the properties From a59c016fabe113539731621c99d52ef0d656dbae Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 8 Nov 2018 09:36:57 +0100 Subject: [PATCH 0079/1457] TYPE command now can return 'stream'. --- commands/type.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands/type.md b/commands/type.md index 3c2bec72aa..8a818e0544 100644 --- a/commands/type.md +++ b/commands/type.md @@ -1,6 +1,6 @@ Returns the string representation of the type of the value stored at `key`. -The different types that can be returned are: `string`, `list`, `set`, `zset` -and `hash`. +The different types that can be returned are: `string`, `list`, `set`, `zset`, +`hash` and `stream`. 
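A quick session illustrating the amended line above (the key names are arbitrary and the returned stream ID is just an example):

```
> XADD mystream * sensor-id 1234
"1538561700640-0"
> TYPE mystream
stream
> SET mykey "Hello"
OK
> TYPE mykey
string
```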
@return

From 393d44959a54be5dcaf0d2610baafed1fbe0bf88 Mon Sep 17 00:00:00 2001
From: Romuald Brunet
Date: Tue, 13 Nov 2018 12:20:54 +0100
Subject: [PATCH 0080/1457] Add example for SET command with expiration

---
 commands/set.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/commands/set.md b/commands/set.md
index 2a27498382..d691e9e7f4 100644
--- a/commands/set.md
+++ b/commands/set.md
@@ -24,6 +24,8 @@ Note: Since the `SET` command options can replace `SETNX`, `SETEX`, `PSETEX`, it

 ```cli
 SET mykey "Hello"
 GET mykey
+
+SET anotherkey "will expire in a minute" EX 60
 ```

 ## Patterns

From 8b3a1067c387d382d4fdd7083495ba7f277cd310 Mon Sep 17 00:00:00 2001
From: Balazs Brunaczky
Date: Sun, 25 Nov 2018 17:51:32 +0100
Subject: [PATCH 0081/1457] wiredis added as c++ client

---
 clients.json | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/clients.json b/clients.json
index 9223037763..564a1a38ee 100644
--- a/clients.json
+++ b/clients.json
@@ -1554,6 +1554,18 @@
     "description": "A Xojo library to connect to a Redis server.",
     "authors": ["kemtekinay"],
     "active": true
-  }
+  },
+
+  {
+    "name": "wiredis",
+    "language": "C++",
+    "url": "https://github.com/nokia/wiredis",
+    "repository": "https://github.com/nokia/wiredis",
+    "description": "Standalone, asynchronous Redis client library based on ::boost::asio and c++11 standard",
+    "authors": ["winnieelte"],
+    "active": true
+  }
+
+
 ]

From b21266b0aebd8d29abcdb2082886cd3f25f8fb03 Mon Sep 17 00:00:00 2001
From: antirez
Date: Wed, 28 Nov 2018 12:52:22 +0100
Subject: [PATCH 0082/1457] Latency of streams: WIP 1.

---
 topics/streams-intro.md | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/topics/streams-intro.md b/topics/streams-intro.md
index 76e8c02653..ca7305d5ab 100644
--- a/topics/streams-intro.md
+++ b/topics/streams-intro.md
@@ -652,3 +652,16 @@ However in the current implementation, memory is not really reclaimed until a ma

 A difference between streams and other Redis data structures is that when the other data structures have no longer elements, as a side effect of calling commands that remove elements, the key itself will be removed. So for instance, a sorted set will be completely removed when a call to **ZREM** will remove the last element in the sorted set. Streams instead are allowed to stay at zero elements, both as a result of using a **MAXLEN** option with a count of zero (**XADD** and **XTRIM** commands), or because **XDEL** was called.

 The reason why such an asymmetry exists is because Streams may have associated consumer groups, and we do not want to lose the state that the consumer groups define just because there are no longer items inside the stream. Currently the stream is not deleted even when it has no associated consumer groups, but this may change in the future.
+
+## Total latency of consuming a message
+
+Non blocking stream commands like XRANGE and XREAD or XREADGROUP without the BLOCK option are served synchronously like any other Redis command, so to discuss the latency of such commands is meaningless: it is more interesting to check the time complexity of the commands in the Redis documentation. It should be enough to say that stream commands are at least as fast as sorted set commands when extracting ranges, and that `XADD` is very fast and can easily insert from half a million to one million items per second on an average machine if pipelining is used.
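As a rough, non-authoritative way to observe the kind of insertion rate mentioned above, `redis-benchmark` can run an arbitrary command with pipelining enabled; the stream name and the counts below are arbitrary:

```
redis-benchmark -P 32 -n 1000000 XADD benchstream '*' field value
```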
However latency becomes an interesting parameter if we want to understand the delay of processing the message, in the context of blocking consumers in a consumer group, from the moment the message is produced via `XADD`, to the moment the message is obtained by the consumer because `XREADGROUP` returned with the message.
+
+In order to check this latency characteristics a test was performed using multiple instances of Ruby programs pushing messages having as an additional field the computer millisecond time, and Ruby programs reading the messages from the consumer group and processing them. The message processing step consisted in comparing the current computer time with the message timestamp, in order to understand the total latecy.
+
+Such programs were not optimized and were executed in a small two core instance also running Redis, in order to try to provide the latency figures you could expect in non optimal conditions. Messages were produced at a rate of 10k per second, with ten simultaneous consumers consuming and acknowledging the messages from the same Redis stream and consumer group.
+
+
+Results obtained:

From 1631f4e3dc274f0dd510c5e502f1e8dd52204692 Mon Sep 17 00:00:00 2001
From: antirez
Date: Wed, 28 Nov 2018 13:03:15 +0100
Subject: [PATCH 0083/1457] Latency of streams: WIP 2.

---
 topics/streams-intro.md | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/topics/streams-intro.md b/topics/streams-intro.md
index ca7305d5ab..3c13d63275 100644
--- a/topics/streams-intro.md
+++ b/topics/streams-intro.md
@@ -659,6 +659,20 @@ Non blocking stream commands like XRANGE and XREAD or XREADGROUP without the BLO

 However latency becomes an interesting parameter if we want to understand the delay of processing the message, in the context of blocking consumers in a consumer group, from the moment the message is produced via `XADD`, to the moment the message is obtained by the consumer because `XREADGROUP` returned with the message.

+## How serving blocked consumers works
+
+Before providing the results of the performed tests, it is interesting to understand what model Redis uses in order to route stream messages (and in general actually how any blocking operation waiting for data is managed).
+
+* The blocked client is referenced in a hash table that maps keys for which there is at least one blocking consumer, to a list of consumers that are waiting for such key. This way, given a key that received data, we can resolve all the clients that are waiting for such data.
+* When a write happens, in this case when the `XADD` command is called, it calls the `signalKeyAsReady()` function. This function will put the key into a list of keys that need to be processed, because such keys may have new data for blocked consumers. Note that such *ready keys* will be processed later, so in the course of the same event loop cycle, it is possible that the key will receive other writes.
+* Finally, before returning into the event loop, the *ready keys* are processed. For each key the list of clients waiting for data is run, and if applicable, such clients will receive the new data that arrived. In the case of streams the data is the messages in the applicable range requested by the consumer.
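The push-based flow described by the list above can be observed with two plain `redis-cli` sessions (the stream, group and consumer names are arbitrary):

```
# Session A: create the group and block waiting for new entries.
> XGROUP CREATE mystream mygroup $ MKSTREAM
OK
> XREADGROUP GROUP mygroup consumer1 BLOCK 0 STREAMS mystream >

# Session B: this XADD unblocks session A before Redis returns to the event loop.
> XADD mystream * field value
```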
As you can see, basically, before returning to the event loop both the client calling `XADD` and the clients blocked to consume messages will have their replies in the output buffers, so the caller of `XADD` should receive the reply from Redis at about the same time the consumers will receive the new messages.
+
+This model is *push based*, since adding data to the consumers' buffers will be performed directly by the action of calling `XADD`, so the latency tends to be quite predictable.
+
+## Latency tests results
+
 In order to check this latency characteristics a test was performed using multiple instances of Ruby programs pushing messages having as an additional field the computer millisecond time, and Ruby programs reading the messages from the consumer group and processing them. The message processing step consisted in comparing the current computer time with the message timestamp, in order to understand the total latecy.

 Such programs were not optimized and were executed in a small two core instance also running Redis, in order to try to provide the latency figures you could expect in non optimal conditions. Messages were produced at a rate of 10k per second, with ten simultaneous consumers consuming and acknowledging the messages from the same Redis stream and consumer group.


 Results obtained:

From 77971afaf573cf8a775b5c564fb5e75e9adc5db3 Mon Sep 17 00:00:00 2001
From: antirez
Date: Wed, 28 Nov 2018 16:18:27 +0100
Subject: [PATCH 0084/1457] Latency of streams: WIP 3.

---
 topics/streams-intro.md | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/topics/streams-intro.md b/topics/streams-intro.md
index 3c13d63275..e949e3a1aa 100644
--- a/topics/streams-intro.md
+++ b/topics/streams-intro.md
@@ -679,3 +679,22 @@

 Results obtained:
+
+```
+Processed between 0 and 1 ms -> 74.11%
+Processed between 1 and 2 ms -> 25.80%
+Processed between 2 and 3 ms -> 0.06%
+Processed between 3 and 4 ms -> 0.01%
+Processed between 4 and 5 ms -> 0.02%
+```
+
+So 99.9% of requests have a latency <= 2 milliseconds, with the remaining outliers still very close to the average.
+
+Adding a few millions of not acknowledged messages in the stream does not change the gist of the benchmark, with most queries still processed with very shor latency.
+
+A few remarks:
+
+* Here we processed up to 10k messages per iteration; this means that the `COUNT` parameter of XREADGROUP was set to 10000. This adds a lot of latency but is needed in order to allow the slow consumers to be able to keep up with the message flow. So you can expect a real world latency that is a lot smaller.
+* The system used for this benchmark is very slow compared to today's standards.
+
+

From 714e0dc04da640470ce8f239d13e3cecf510d719 Mon Sep 17 00:00:00 2001
From: antirez
Date: Wed, 28 Nov 2018 17:00:02 +0100
Subject: [PATCH 0085/1457] Latency of streams: typos fixed.
---
 topics/streams-intro.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/topics/streams-intro.md b/topics/streams-intro.md
index e949e3a1aa..f699731fa6 100644
--- a/topics/streams-intro.md
+++ b/topics/streams-intro.md
@@ -673,7 +673,7 @@ This model is *push based*, since adding data to the consumers' buffers will be p

 ## Latency tests results

-In order to check this latency characteristics a test was performed using multiple instances of Ruby programs pushing messages having as an additional field the computer millisecond time, and Ruby programs reading the messages from the consumer group and processing them. The message processing step consisted in comparing the current computer time with the message timestamp, in order to understand the total latecy.
+In order to check these latency characteristics a test was performed using multiple instances of Ruby programs pushing messages having as an additional field the computer millisecond time, and Ruby programs reading the messages from the consumer group and processing them. The message processing step consisted in comparing the current computer time with the message timestamp, in order to understand the total latency.

 Such programs were not optimized and were executed in a small two core instance also running Redis, in order to try to provide the latency figures you could expect in non optimal conditions. Messages were produced at a rate of 10k per second, with ten simultaneous consumers consuming and acknowledging the messages from the same Redis stream and consumer group.

@@ -690,7 +690,7 @@

 So 99.9% of requests have a latency <= 2 milliseconds, with the remaining outliers still very close to the average.

-Adding a few millions of not acknowledged messages in the stream does not change the gist of the benchmark, with most queries still processed with very shor latency.
+Adding a few million not acknowledged messages in the stream does not change the gist of the benchmark, with most queries still processed with very short latency.

From e1ec4a9b0f9d6a4892d528c7193fded06d7bfa1b Mon Sep 17 00:00:00 2001
From: Simon Willison
Date: Wed, 28 Nov 2018 09:57:40 -0800
Subject: [PATCH 0086/1457] Typo

---
 topics/streams-intro.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/topics/streams-intro.md b/topics/streams-intro.md
index f699731fa6..223e5e0b57 100644
--- a/topics/streams-intro.md
+++ b/topics/streams-intro.md
@@ -66,7 +66,7 @@ Redis streams support all the three query modes described above via different co

 ### Querying by range: XRANGE and XREVRANGE

-To query the stream by range we are only required to specify two IDs, *start* end *end*. The range returned will include the elements having start or end as ID, so the range is inclusive. The two special IDs `-` and `+` respectively means the smallest and the greatest ID possible.
+To query the stream by range we are only required to specify two IDs, *start* and *end*. The range returned will include the elements having start or end as ID, so the range is inclusive. The two special IDs `-` and `+` respectively mean the smallest and the greatest ID possible.
``` > XRANGE mystream - + From 392c11e28914217638fccda2fc4c550ffa31d145 Mon Sep 17 00:00:00 2001 From: Simon Willison Date: Wed, 28 Nov 2018 21:40:43 -0800 Subject: [PATCH 0087/1457] Typos and tiny tweaks --- topics/streams-intro.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/topics/streams-intro.md b/topics/streams-intro.md index 223e5e0b57..450ea112c1 100644 --- a/topics/streams-intro.md +++ b/topics/streams-intro.md @@ -169,11 +169,11 @@ The blocking form of **XREAD** is also able to listen to multiple Streams, just Similarly to blocking list operations, blocking stream reads are *fair* from the point of view of clients waiting for data, since the semantics is FIFO style. The first client that blocked for a given stream is the first that will be unblocked as new items are available. -**XREAD** has no other options than **COUNT** and **BLOCK**, so it's a pretty basic command with a specific purpose to attack consumers to one or multiple streams. More powerful features to consume streams are available using the consumer groups API, however reading via consumer groups is implemented by a different command called **XREADGROUP**, covered in the next section of this guide. +**XREAD** has no other options than **COUNT** and **BLOCK**, so it's a pretty basic command with a specific purpose to attach consumers to one or multiple streams. More powerful features to consume streams are available using the consumer groups API, however reading via consumer groups is implemented by a different command called **XREADGROUP**, covered in the next section of this guide. ## Consumer groups -When the task at hand is to consume the same stream from different clients, then **XREAD** already offers a way to *fan-out* to N clients, potentially also using slaves in order to provide more read scalability. However in certain problems what we want to do is not to provide the same stream of messages to many clients, but to provide a *different subset* of messages from the same stream to many clients. An obvious case where this is useful is the case of slow to process messages: the ability to have N different workers that will receive different parts of the stream allow to scale message processing, by routing different messages to different workers that are ready to do more work. +When the task at hand is to consume the same stream from different clients, then **XREAD** already offers a way to *fan-out* to N clients, potentially also using slaves in order to provide more read scalability. However in certain problems what we want to do is not to provide the same stream of messages to many clients, but to provide a *different subset* of messages from the same stream to many clients. An obvious case where this is useful is the case of slow to process messages: the ability to have N different workers that will receive different parts of the stream allows us to scale message processing, by routing different messages to different workers that are ready to do more work. In practical terms, if we imagine having three consumers C1, C2, C3, and a stream that contains the messages 1, 2, 3, 4, 5, 6, 7 then what we want is to serve the messages like in the following diagram: @@ -237,7 +237,7 @@ As you can see in the command above when creating the consumer group we have to Now that the consumer group is created we can immediately start trying to read messages via the consumer group, by using the **XREADGROUP** command. 
We'll read from the consumers, that we will call Alice and Bob, to see how the system will return different messages to Alice and Bob. -**XREADGROUP** is very similar yo **XREAD** and provides the same **BLOCK** option, otherwise it is a synchronous command. However there is a *mandatory* option that must be always specified, which is **GROUP** and has two arguments: the name of the consumer group, and the name of the consumer that is attempting to read. The option **COUNT** is also supported and is identical to the one in **XREAD**. +**XREADGROUP** is very similar to **XREAD** and provides the same **BLOCK** option, otherwise it is a synchronous command. However there is a *mandatory* option that must be always specified, which is **GROUP** and has two arguments: the name of the consumer group, and the name of the consumer that is attempting to read. The option **COUNT** is also supported and is identical to the one in **XREAD**. Before reading from the stream, let's put some messages inside: @@ -403,7 +403,7 @@ When called in this way the command just outputs the total number of pending mes We can ask for more info by giving more arguments to **XPENDING**, because the full command signature is the following: ``` -XPENDING [ []] +XPENDING [ []] ``` By providing a start and end ID (that can be just `-` and `+` as in **XRANGE**) and a count to control the amount of information returned by the command, we are able to know more about the pending messages. The optional final argument, the consumer group name, is used if we want to limit the output to just messages pending for a given consumer group, but we'll not use this feature in the following example. From 4eea070c8f11059a215be65963218627cb7ee2c5 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 18 Dec 2018 16:41:50 +0100 Subject: [PATCH 0088/1457] Clarify streams special IDs. --- topics/streams-intro.md | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/topics/streams-intro.md b/topics/streams-intro.md index f699731fa6..1aa12c00ab 100644 --- a/topics/streams-intro.md +++ b/topics/streams-intro.md @@ -613,6 +613,24 @@ However, **XTRIM** is designed to accept different trimming strategies, even if One useful eviction strategy that **XTRIM** should have is probably the ability to remove by a range of IDs. This is currently not possible, but will be likely implemented in the future in order to more easily use **XRANGE** and **XTRIM** together to move data from Redis to other storage systems if needed. +## Special IDs in the streams API + +You may have noticed that there are several special IDs that can be +used in the Redis API. Here is a short recap, so that they can make more +sense in the future. + +The first two special IDs are `-` and `+`, and are used in range queries with the `XRANGE` command. Those two IDs respectively means the smallest ID possible (that is basically `0-1`) and the greatest ID possible (that is `18446744073709551615-18446744073709551615`). As you can see it is a log cleaner to write `-` and `+` instead of those numbers. + +Then there are APIs where we want to say, the ID of the item with the greatest ID inside the stream. This is what `$` means. So for instance if I want only new entires with `XREADGROUP` I use such ID to tell that I already have all the existing entries, but not the news that will be inserted in the future. Similarly when I create or set the ID of a consumer group, I can set the last delivered item to `$` in order to just deliver new entires to the consumers using the group. 
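A short session showing `$` and `*` side by side (the names and the returned ID are illustrative):

```
# Create a group that will only see entries added from now on:
> XGROUP CREATE mystream mygroup $
OK
# Let Redis auto select the ID of the new entry:
> XADD mystream * message "hello"
"1545056556635-0"
```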
+ +As you can see `$` does not mean `+`, they are two different things, as `+` is the greatest ID possible in every possible stream, while `$` is the greated ID in a given stream containing given entries. Moreover APIs will usually only understand `+` or `$`, yet it was useful to avoid loading a given symbol of multiple meanings. + +Another special ID is `>`, that is a special meaning only related to consumer groups and only when the `XREADGROUP` command is used. Such special ID means that we want only entires that were never delivered to other consumers so far. So basically the `>` ID is the *last delivered ID* of a consumer group. + +Finally the special ID `*`, that can be used only with the `XADD` command, means to auto select an ID for us for the new entry. + +So we have `-`, `+`, `$`, `>` and `*`, and all have a different meaning, and most of the times, can be used in different contexts. + ## Persistence, replication and message safety A Stream, like any other Redis data structure, is asynchronously replicated to slaves and persisted into AOF and RDB files. However what may not be so obvious is that also consumer groups full state is propagated to AOF, RDB and slaves, so if a message is pending in the master, also the slave will have the same information. Similarly, after a restart, the AOF will restore the consumer groups state. From b84735ef6e460f40c209d33e2388fc0c76f8d80a Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 18 Dec 2018 16:51:37 +0100 Subject: [PATCH 0089/1457] log -> lot typo fix. --- topics/streams-intro.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/streams-intro.md b/topics/streams-intro.md index 10c7a13e2b..97f35337d6 100644 --- a/topics/streams-intro.md +++ b/topics/streams-intro.md @@ -619,7 +619,7 @@ You may have noticed that there are several special IDs that can be used in the Redis API. Here is a short recap, so that they can make more sense in the future. -The first two special IDs are `-` and `+`, and are used in range queries with the `XRANGE` command. Those two IDs respectively means the smallest ID possible (that is basically `0-1`) and the greatest ID possible (that is `18446744073709551615-18446744073709551615`). As you can see it is a log cleaner to write `-` and `+` instead of those numbers. +The first two special IDs are `-` and `+`, and are used in range queries with the `XRANGE` command. Those two IDs respectively means the smallest ID possible (that is basically `0-1`) and the greatest ID possible (that is `18446744073709551615-18446744073709551615`). As you can see it is a lot cleaner to write `-` and `+` instead of those numbers. Then there are APIs where we want to say, the ID of the item with the greatest ID inside the stream. This is what `$` means. So for instance if I want only new entires with `XREADGROUP` I use such ID to tell that I already have all the existing entries, but not the news that will be inserted in the future. Similarly when I create or set the ID of a consumer group, I can set the last delivered item to `$` in order to just deliver new entires to the consumers using the group. From 6536981681057ff8ebd77b2b6564d9113045da52 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 18 Dec 2018 16:52:53 +0100 Subject: [PATCH 0090/1457] Fix typo: greatest. 
---
 topics/streams-intro.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/topics/streams-intro.md b/topics/streams-intro.md
index 97f35337d6..5563a51f5c 100644
--- a/topics/streams-intro.md
+++ b/topics/streams-intro.md
@@ -623,7 +623,7 @@ The first two special IDs are `-` and `+`, and are used in range queries with th

 Then there are APIs where we want to say, the ID of the item with the greatest ID inside the stream. This is what `$` means. So for instance if I want only new entires with `XREADGROUP` I use such ID to tell that I already have all the existing entries, but not the news that will be inserted in the future. Similarly when I create or set the ID of a consumer group, I can set the last delivered item to `$` in order to just deliver new entires to the consumers using the group.

-As you can see `$` does not mean `+`, they are two different things, as `+` is the greatest ID possible in every possible stream, while `$` is the greated ID in a given stream containing given entries. Moreover APIs will usually only understand `+` or `$`, yet it was useful to avoid loading a given symbol of multiple meanings.
+As you can see `$` does not mean `+`, they are two different things, as `+` is the greatest ID possible in every possible stream, while `$` is the greatest ID in a given stream containing given entries. Moreover APIs will usually only understand `+` or `$`, yet it was useful to avoid loading a given symbol of multiple meanings.

 Another special ID is `>`, that is a special meaning only related to consumer groups and only when the `XREADGROUP` command is used. Such special ID means that we want only entires that were never delivered to other consumers so far. So basically the `>` ID is the *last delivered ID* of a consumer group.

From 6cd6ff9e5cf784fbd0d693a9bc19c13253c42d4d Mon Sep 17 00:00:00 2001
From: antirez
Date: Tue, 18 Dec 2018 16:54:22 +0100
Subject: [PATCH 0091/1457] Improve grammar.

---
 topics/streams-intro.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/topics/streams-intro.md b/topics/streams-intro.md
index 5563a51f5c..4c423d84d4 100644
--- a/topics/streams-intro.md
+++ b/topics/streams-intro.md
@@ -625,11 +625,11 @@ Then there are APIs where we want to say, the ID of the item with the greatest I

 As you can see `$` does not mean `+`, they are two different things, as `+` is the greatest ID possible in every possible stream, while `$` is the greatest ID in a given stream containing given entries. Moreover APIs will usually only understand `+` or `$`, yet it was useful to avoid loading a given symbol of multiple meanings.

-Another special ID is `>`, that is a special meaning only related to consumer groups and only when the `XREADGROUP` command is used. Such special ID means that we want only entires that were never delivered to other consumers so far. So basically the `>` ID is the *last delivered ID* of a consumer group.
+Another special ID is `>`, that has a special meaning only in the context of consumer groups and only when the `XREADGROUP` command is used. Such special ID means that we want only entries that were never delivered to other consumers so far. So basically the `>` ID is the *last delivered ID* of a consumer group.

-Finally the special ID `*`, that can be used only with the `XADD` command, means to auto select an ID for us for the new entry.
+Finally the special ID `*`, that can be used only with the `XADD` command, means to auto select an ID for us for the new entry that we are going to create.

-So we have `-`, `+`, `$`, `>` and `*`, and all have a different meaning, and most of the times, can be used in different contexts.
+So we have `-`, `+`, `$`, `>` and `*`, and all have different meanings, and most of the times, can only be used in different contexts.

From 902f843b58dc9e304ba771774e15835ea99b2fa8 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 20 Dec 2018 11:07:11 +0100
Subject: [PATCH 0092/1457] Clearly mark non OSI licensed modules.

Related to #984.

---
 modules.json | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/modules.json b/modules.json
index 527f79586c..f6a9296f92 100644
--- a/modules.json
+++ b/modules.json
@@ -21,7 +21,7 @@
   },
   {
     "name": "RedisGraph",
-    "license": "Apache 2.0 modified with Commons Clause",
+    "license": "(non OSI license) Apache 2.0 modified with Commons Clause",
     "repository": "https://github.com/RedisLabsModules/redis-graph",
     "description": "A graph database with a Cypher-based querying language using sparse adjacency matrices",
     "authors": [
@@ -42,7 +42,7 @@
   },
   {
     "name": "ReJSON",
-    "license": "Apache 2.0 modified with Commons Clause",
+    "license": "(non OSI license) Apache 2.0 modified with Commons Clause",
     "repository": "https://github.com/RedisLabsModules/ReJSON",
     "description": "A JSON data type for Redis",
     "authors": [
@@ -53,7 +53,7 @@
   },
   {
     "name": "Redis-ML",
-    "license": "Apache 2.0 modified with Commons Clause",
+    "license": "(non OSI license) Apache 2.0 modified with Commons Clause",
     "repository": "https://github.com/RedisLabsModules/redis-ml",
     "description": "Machine Learning Model Server",
     "authors": [
@@ -64,7 +64,7 @@
   },
   {
     "name": "RediSearch",
-    "license": "Apache 2.0 modified with Commons Clause",
+    "license": "(non OSI license) Apache 2.0 modified with Commons Clause",
     "repository": "https://github.com/RedisLabsModules/RediSearch",
     "description": "Full-Text search over Redis",
     "authors": [
@@ -75,7 +75,7 @@
   },
   {
     "name": "topk",
-    "license": "Apache 2.0 modified with Commons Clause",
+    "license": "(non OSI license) Apache 2.0 modified with Commons Clause",
     "repository": "https://github.com/RedisLabsModules/topk",
     "description": "An almost deterministic top k elements counter",
     "authors": [
@@ -86,7 +86,7 @@
   },
   {
     "name": "countminsketch",
-    "license": "Apache 2.0 modified with Commons Clause",
+    "license": "(non OSI license) Apache 2.0 modified with Commons Clause",
     "repository": "https://github.com/RedisLabsModules/countminsketch",
     "description": "An approximate frequency counter",
     "authors": [
@@ -97,7 +97,7 @@
   },
   {
     "name": "rebloom",
-    "license": "Apache 2.0 modified with Commons Clause",
+    "license": "(non OSI license) Apache 2.0 modified with Commons Clause",
     "repository": "https://github.com/RedisLabsModules/rebloom",
     "description": "Scalable Bloom filters",
     "authors": [

From 542cfb956500c6f96df23cd2bb985ec592fe8eac Mon Sep 17 00:00:00 2001
From: Itamar Haber
Date: Wed, 9 Jan 2019 14:48:26 +0200
Subject: [PATCH 0093/1457] Adds Streams to data types intro

Added a reference to the Streams Intro, and an exception to the rule of
empty aggregate data types.
--- topics/data-types-intro.md | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md index 7e7431ad9b..6f0b392bdd 100644 --- a/topics/data-types-intro.md +++ b/topics/data-types-intro.md @@ -24,6 +24,9 @@ by Redis, which will be covered separately in this tutorial: * HyperLogLogs: this is a probabilistic data structure which is used in order to estimate the cardinality of a set. Don't be scared, it is simpler than it seems... See later in the HyperLogLog section of this tutorial. +* Streams: append-only collections of map-like entries that provide an abstract + log data type. They are covered in depth in the + [Introduction to Redis Streams](/topics/streams-intro). It's not always trivial to grasp how these data types work and what to use in order to solve a given problem from the [command reference](/commands), so this @@ -455,12 +458,12 @@ an empty list if the key does not exist and we are trying to add elements to it, for example, with `LPUSH`. This is not specific to lists, it applies to all the Redis data types -composed of multiple elements -- Sets, Sorted Sets and Hashes. +composed of multiple elements -- Streams, Sets, Sorted Sets and Hashes. Basically we can summarize the behavior with three rules: 1. When we add an element to an aggregate data type, if the target key does not exist, an empty aggregate data type is created before adding the element. -2. When we remove elements from an aggregate data type, if the value remains empty, the key is automatically destroyed. +2. When we remove elements from an aggregate data type, if the value remains empty, the key is automatically destroyed. The Stream data type is the only exception to this rule. 3. Calling a read-only command such as `LLEN` (which returns the length of the list), or a write command removing elements, with an empty key, always produces the same result as if the key is holding an empty aggregate type of the type the command expects to find. Examples of rule 1: From b71d216928746ccd2a02ca17558fedd2f2506553 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Sat, 12 Jan 2019 15:50:54 +0200 Subject: [PATCH 0094/1457] Documents negative count for ZRANGEBYSCORE/LEX LIMIT Signed-off-by: Itamar Haber --- commands/zrangebylex.md | 3 ++- commands/zrangebyscore.md | 3 ++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/commands/zrangebylex.md b/commands/zrangebylex.md index b1f7bd1a5f..4eefffc05a 100644 --- a/commands/zrangebylex.md +++ b/commands/zrangebylex.md @@ -5,7 +5,8 @@ If the elements in the sorted set have different scores, the returned elements a The elements are considered to be ordered from lower to higher strings as compared byte-by-byte using the `memcmp()` C function. Longer strings are considered greater than shorter strings if the common part is identical. The optional `LIMIT` argument can be used to only get a range of the matching -elements (similar to _SELECT LIMIT offset, count_ in SQL). +elements (similar to _SELECT LIMIT offset, count_ in SQL). A negative `count` +returns all elements from the `offset`. Keep in mind that if `offset` is large, the sorted set needs to be traversed for `offset` elements before getting to the elements to return, which can add up to O(N) time complexity. 
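An illustrative session for the negative count behavior documented above (the key and members are arbitrary):

```
> ZADD myzset 0 a 0 b 0 c 0 d
(integer) 4
> ZRANGEBYLEX myzset - + LIMIT 2 -1
1) "c"
2) "d"
```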
diff --git a/commands/zrangebyscore.md b/commands/zrangebyscore.md index 3cbc05723a..bc81708533 100644 --- a/commands/zrangebyscore.md +++ b/commands/zrangebyscore.md @@ -7,7 +7,8 @@ follows from a property of the sorted set implementation in Redis and does not involve further computation). The optional `LIMIT` argument can be used to only get a range of the matching -elements (similar to _SELECT LIMIT offset, count_ in SQL). +elements (similar to _SELECT LIMIT offset, count_ in SQL). A negative `count` +returns all elements from the `offset`. Keep in mind that if `offset` is large, the sorted set needs to be traversed for `offset` elements before getting to the elements to return, which can add up to O(N) time complexity. From 1388819a0588083dbe04f836fc9cf3212c38dc2c Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 8 Feb 2019 11:54:36 +0100 Subject: [PATCH 0095/1457] ACL doc: placeholder with initial TODOs. --- topics/acl.md | 8 ++++++++ 1 file changed, 8 insertions(+) create mode 100644 topics/acl.md diff --git a/topics/acl.md b/topics/acl.md new file mode 100644 index 0000000000..bbab5a8489 --- /dev/null +++ b/topics/acl.md @@ -0,0 +1,8 @@ +# ACL + +TODO list: + +* Make sure to specify that modules commands are ignored when adding/removing categories. +* Document cost of keys matching with some benchmark. +* Document how +@all also includes module commands and every future command. +* Document how ACL SAVE is not included in CONFIG REWRITE. From 1451937bec21b5902becee4f021450d0308ce393 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 8 Feb 2019 18:26:42 +0100 Subject: [PATCH 0096/1457] ACL intro + one new TODO list item added. --- topics/acl.md | 27 +++++++++++++++++++++++++++ 1 file changed, 27 insertions(+) diff --git a/topics/acl.md b/topics/acl.md index bbab5a8489..421ea15ec0 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -1,8 +1,35 @@ # ACL +Redis ACLs, short for Access Control List, is the feature that allows certain +connections to be limited in the commands that can be executed and the keys +that can be accessed. The way it works is that, after connecting, a client +requires to authenticate providing a username and one valid password: if +the authentication stage succeeded, the connection is associated with a given +user and the limits the user has. Redis can be configured so that new +connections are already authenticated with a "default" user (this is the +default configuration), so configuring the default user has, as a side effect, +the ability to provide only a specific subset of functionalities to connections +that are not authenticated. + +In the default configuration, Redis 6, the first version to have ACLs, works +exactly like older versions of Redis, that is, every new connection is +capable of calling every possible command and accessing every key, so the +new feature is backward compatible with old clients and applications. Moreover +the old way to configure a password, using the **requirepass** configuration +directive, still works as expected, however now what it does is just to +set a password for the default user. + +Before using ACLs you may want to ask yourself what's the goal you want to +accomplish by implementing this layer of protection. Normally there are +two main goals that are well served by ACLs: + +1. You want to improve security by restricting the access to commands and keys, so that unstrusted clients have no access and less trusted clients are not able to do bad things. +2. 
You want to improve operational safety, so that processes or humans accessing Redis are not allowed, because of software errors or mistakes, to damage the data or the configuration. For instance there is no reason for a worker that fetches delayed jobs from Redis to be able to call the `FLUSHALL` command. + TODO list: * Make sure to specify that modules commands are ignored when adding/removing categories. * Document cost of keys matching with some benchmark. * Document how +@all also includes module commands and every future command. * Document how ACL SAVE is not included in CONFIG REWRITE. +* Document backward compatibility with requirepass and single argument AUTH. From bf94f66124e3336bcd97e099277a35c5d1e316cd Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 11 Feb 2019 13:20:30 +0100 Subject: [PATCH 0097/1457] ACL doc: explain the default configuration and ACL LIST. --- topics/acl.md | 70 +++++++++++++++++++++++++++++++++++++++++++++------ 1 file changed, 62 insertions(+), 8 deletions(-) diff --git a/topics/acl.md b/topics/acl.md index 421ea15ec0..6d39dfc446 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -1,30 +1,84 @@ # ACL -Redis ACLs, short for Access Control List, is the feature that allows certain +The Redis ACL, short for Access Control List, is the feature that allows certain connections to be limited in the commands that can be executed and the keys that can be accessed. The way it works is that, after connecting, a client -requires to authenticate providing a username and one valid password: if +requires to authenticate providing a username and a valid password: if the authentication stage succeeded, the connection is associated with a given user and the limits the user has. Redis can be configured so that new connections are already authenticated with a "default" user (this is the default configuration), so configuring the default user has, as a side effect, the ability to provide only a specific subset of functionalities to connections -that are not authenticated. +that are not explicitly authenticated. -In the default configuration, Redis 6, the first version to have ACLs, works +In the default configuration, Redis 6 (the first version to have ACLs) works exactly like older versions of Redis, that is, every new connection is capable of calling every possible command and accessing every key, so the -new feature is backward compatible with old clients and applications. Moreover +ACL feature is backward compatible with old clients and applications. Also the old way to configure a password, using the **requirepass** configuration -directive, still works as expected, however now what it does is just to +directive, still works as expected, but now what it does is just to set a password for the default user. +The Redis `AUTH` command was extended in Redis 6, so now it is possible to +use it in the two-arguments form: + + AUTH <username> <password> + +When it is used according to the old form, that is: + + AUTH <password> + +What happens is that the username used to authenticate is "default", so +just specifying the password implies that we want to authenticate against +the default user. This provides perfect backward compatibility with the past. + +## When ACLs are useful + Before using ACLs you may want to ask yourself what's the goal you want to accomplish by implementing this layer of protection. Normally there are two main goals that are well served by ACLs: -1.
You want to improve security by restricting the access to commands and keys, so that unstrusted clients have no access and less trusted clients are not able to do bad things. -2. You want to improve operational safety, so that processes or humans accessing Redis are not allowed, because of software errors or mistakes, to damage the data or the configuration. For instance there is no reason for a worker that fetches delayed jobs from Redis to be able to call the `FLUSHALL` command. +1. You want to improve security by restricting the access to commands and keys, so that untrusted clients have no access and trusted clients have just the minimum access level to the database in order to perform the work needed. For instance certain clients may just be able to execute read only commands. +2. You want to improve operational safety, so that processes or humans accessing Redis are not allowed, because of software errors or manual mistakes, to damage the data or the configuration. For instance there is no reason for a worker that fetches delayed jobs from Redis to be able to call the `FLUSHALL` command. + +Another typical usage of ACLs is related to managed Redis instances. Redis is +often provided as a managed service both by internal company teams that handle +the Redis infrastructure for the other internal customers they have, or is +provided in a software-as-a-service setup by cloud providers. In both such +setups we want to be sure that configuration commands are excluded for the +customers. The way this was accomplished in the past, via command renaming, was +a trick that allowed us to survive without ACLs for a long time, but is not +ideal. + +## Configuring ACLs using the ACL command + +ACLs are defined using a DSL (domain specific language) that describes what +a given user is able to do or not. Such rules are always implemented from the +first to the last, left-to-right, because sometimes the order of the rules is +important to understand what the user is really able to do. + +By default there is a single user defined, that is called *default*. We +can use the `ACL LIST` command in order to check the currently active ACLs +and verify what the configuration of a freshly started and unconfigured Redis +instance is: + + > ACL LIST + 1) "user default on nopass ~* +@all" + +This command is able to report the list of users in the same format that is +used in the Redis configuration files, by translating the current ACLs set +for the users back into their description. + +The first two words in each line are "user" followed by the username. The +following words are ACL rules that describe different things. We'll show in +detail how the rules work, but for now it is enough to say that the default +user is configured to be active (on), to require no password (nopass), to +access every possible key (`~*`) and be able to call every possible command +(+@all). + +Also, in the special case of the default user, having the *nopass* rule means +that new connections are automatically authenticated with the default user +without any explicit `AUTH` call needed. TODO list: From d6b61492c55b35da5786d9ac906a1ee5c12b450a Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 11 Feb 2019 17:03:50 +0100 Subject: [PATCH 0098/1457] ACL doc: first serious attempt at ACL rules documentation.
--- topics/acl.md | 30 ++++++++++++++++++++++++++-- 1 file changed, 28 insertions(+), 2 deletions(-) diff --git a/topics/acl.md b/topics/acl.md index 6d39dfc446..591253eaa0 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -65,12 +65,12 @@ instance is: > ACL LIST 1) "user default on nopass ~* +@all" -This command is able to report the list of users in the same format that is +The command above reports the list of users in the same format that is used in the Redis configuration files, by translating the current ACLs set for the users back into their description. The first two words in each line are "user" followed by the username. The -following words are ACL rules that describe different things. We'll show in +next words are ACL rules that describe different things. We'll show in detail how the rules work, but for now it is enough to say that the default user is configured to be active (on), to require no password (nopass), to access every possible key (`~*`) and be able to call every possible command @@ -80,6 +80,32 @@ Also, in the special case of the default user, having the *nopass* rule means that new connections are automatically authenticated with the default user without any explicit `AUTH` call needed. +## ACL rules + +The following is the list of the valid ACL rules. Certain rules are just +single words that are used in order to activate or remove a flag, or to +perform a given change to the user ACL. Other rules are char prefixes that +are concatenated with command or categories names, or key patterns, and +so forth. + +* `on`: Enable the user: it is possible to authenticate as this user. +* `off`: Disable the user: it's no longer possible to authenticate with this user, however the already authenticated connections will still work. Note that if the default user is flagged as *off*, new connections will start not authenticated and will require the user to send `AUTH` or `HELLO` with the AUTH option in order to authenticate in some way, regardless of the default user configuration. +* `+<command>`: Add the command to the list of commands the user can call. +* `-<command>`: Remove the command from the list of commands the user can call. +* `+@<category>`: Add all the commands in such category to be called by the user, with valid categories being like @admin, @set, @sortedset, ... and so forth, see the full list by calling the `ACL CAT` command. The special category @all means all the commands, both the ones currently present in the server, and the ones that will be loaded in the future via modules. +* `-@<category>`: Like `+@<category>` but removes the commands from the list of commands the client can call. +* `+<command>|subcommand`: Allow a specific subcommand of an otherwise disabled command. Note that this form is not allowed as negative like `-DEBUG|SEGFAULT`, but only additive starting with "+". This ACL will cause an error if the command is already active as a whole. +* `allcommands`: Alias for +@all. Note that it implies the ability to execute all the future commands loaded via the modules system. +* `nocommands`: Alias for -@all. +* `~<pattern>`: Add a pattern of keys that can be mentioned as part of commands. For instance `~*` allows all the keys. The pattern is a glob-style pattern like the one of KEYS. It is possible to specify multiple patterns. +* `allkeys`: Alias for `~*`. +* `resetkeys`: Flush the list of allowed keys patterns. For instance the ACL `~foo:* ~bar:* resetkeys ~objects:*`, will result in the client only being able to access keys matching the pattern `objects:*`.
+* `><password>`: Add this passowrd to the list of valid passwords for the user. For example `>mypass` will add "mypass" to the list of valid passwords. This directive clears the *nopass* flag (see later). Every user can have any number of passwords. +* `<<password>`: Remove this password from the list of valid passwords. Emits an error in case the password you are trying to remove is actually not set. +* `nopass`: All the set passwords of the user are removed, and the user is flagged as requiring no password: it means that every password will work against this user. If this directive is used for the default user, every new connection will be immediately authenticated with the default user without any explicit AUTH command required. Note that the *resetpass* directive will clear this condition. +* `resetpass`: Flush the list of allowed passwords. Moreover removes the *nopass* status. After *resetpass* the user has no associated passwords and there is no way to authenticate without adding some password (or setting it as *nopass* later). +* `reset`: Performs the following actions: resetpass, resetkeys, off, -@all. The user returns to the same state it has immediately after its creation. From a87f4b9767d7c31a2f5d6511018f3f70c9bc13ac Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 11 Feb 2019 17:15:13 +0100 Subject: [PATCH 0099/1457] ACL doc: split ACL rules into sections. --- topics/acl.md | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/topics/acl.md b/topics/acl.md index 591253eaa0..236fa4ffa3 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -88,8 +88,13 @@ perform a given change to the user ACL. Other rules are char prefixes that are concatenated with command or categories names, or key patterns, and so forth. +Enable and disallow users: + * `on`: Enable the user: it is possible to authenticate as this user. * `off`: Disable the user: it's no longer possible to authenticate with this user, however the already authenticated connections will still work. Note that if the default user is flagged as *off*, new connections will start not authenticated and will require the user to send `AUTH` or `HELLO` with the AUTH option in order to authenticate in some way, regardless of the default user configuration. + +Allow and disallow commands: + * `+<command>`: Add the command to the list of commands the user can call. * `-<command>`: Remove the command from the list of commands the user can call. * `+@<category>`: Add all the commands in such category to be called by the user, with valid categories being like @admin, @set, @sortedset, ... and so forth, see the full list by calling the `ACL CAT` command. The special category @all means all the commands, both the ones currently present in the server, and the ones that will be loaded in the future via modules. @@ -97,15 +102,28 @@ so forth. * `+<command>|subcommand`: Allow a specific subcommand of an otherwise disabled command. Note that this form is not allowed as negative like `-DEBUG|SEGFAULT`, but only additive starting with "+". This ACL will cause an error if the command is already active as a whole. * `allcommands`: Alias for +@all. Note that it implies the ability to execute all the future commands loaded via the modules system. * `nocommands`: Alias for -@all. + +Allow and disallow certain keys: + * `~<pattern>`: Add a pattern of keys that can be mentioned as part of commands. For instance `~*` allows all the keys. The pattern is a glob-style pattern like the one of KEYS.
It is possible to specify multiple patterns. * `allkeys`: Alias for `~*`. * `resetkeys`: Flush the list of allowed keys patterns. For instance the ACL `~foo:* ~bar:* resetkeys ~objects:*`, will result in the client only being able to access keys matching the pattern `objects:*`. + +Configure valid passwords for the user: + * `><password>`: Add this passowrd to the list of valid passwords for the user. For example `>mypass` will add "mypass" to the list of valid passwords. This directive clears the *nopass* flag (see later). Every user can have any number of passwords. * `<<password>`: Remove this password from the list of valid passwords. Emits an error in case the password you are trying to remove is actually not set. * `nopass`: All the set passwords of the user are removed, and the user is flagged as requiring no password: it means that every password will work against this user. If this directive is used for the default user, every new connection will be immediately authenticated with the default user without any explicit AUTH command required. Note that the *resetpass* directive will clear this condition. * `resetpass`: Flush the list of allowed passwords. Moreover removes the *nopass* status. After *resetpass* the user has no associated passwords and there is no way to authenticate without adding some password (or setting it as *nopass* later). + +*Note: a user that is not flagged with nopass, and has no list of valid passwords, is effectively impossible to use, because there will be no way to log in as such a user.* + +Reset the user: + * `reset`: Performs the following actions: resetpass, resetkeys, off, -@all. The user returns to the same state it has immediately after its creation. +# Creating and editing users ACLs with the ACL SETUSER command + TODO list: * Make sure to specify that modules commands are ignored when adding/removing categories. From ecd58320423677e7f5d2d34d2766265a55d9274a Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 11 Feb 2019 18:47:26 +0100 Subject: [PATCH 0100/1457] ACL doc: ACL SETUSER first steps. --- topics/acl.md | 93 ++++++++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 92 insertions(+), 1 deletion(-) diff --git a/topics/acl.md b/topics/acl.md index 236fa4ffa3..b791d7fc9a 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -124,7 +124,98 @@ Reset the user: -TODO list: +Users can be created and modified in two main ways: + +1. Using the ACL command and its `ACL SETUSER` subcommand. +2. Modifying the server configuration, where users can be defined, and restarting the server, or if we are using an *external ACL file*, just issuing `ACL LOAD`. + +In this section we'll learn how to define users using the `ACL` command. +With such knowledge it will be trivial to do the same things via the +configuration files. Defining users in the configuration deserves its own +section and will be discussed later separately. + +To start let's try the simplest `ACL SETUSER` command call: + + > ACL SETUSER alice + OK + +The `SETUSER` command takes the username and a list of ACL rules to apply +to the user. However in the above example I did not specify any rule at all. +This will just create the user if it did not exist, using the default +attributes of a just-created user. If the user already exists, the command +above will do nothing at all.
+ +Let's check what the default user status is: + + > ACL LIST 1) "user alice off -@all" 2) "user default on nopass ~* +@all" +The just created user "alice" is: + +* In off status, that is, it's disabled. AUTH will not work. +* Cannot access any command. Note that the user is created by default without the ability to access any command, so the `-@all` in the output above could be omitted, however `ACL LIST` attempts to be explicit rather than implicit. +* Finally there are no key patterns that the user can access. +* The user also has no passwords set. + +Such a user is completely useless. Let's try to define the user so that +it is active, has a password, and can access, with only the `GET` command, +key names starting with the string "cached:". + + > ACL SETUSER alice on >p1pp0 ~cached:* +get + OK + +Now the user can do something, but will refuse to do other things: + + > AUTH alice p1pp0 + OK + > GET foo + (error) NOPERM this user has no permissions to access one of the keys used as arguments + > GET cached:1234 + (nil) + > SET cached:1234 zap + (error) NOPERM this user has no permissions to run the 'set' command or its subcommand + +Things are working as expected. In order to inspect the configuration of the +user alice (remember that user names are case sensitive), it is possible to +use an alternative to `ACL LIST` which is designed to be more suitable for +computers to read, while `ACL LIST` is more biased towards humans. + + > ACL GETUSER alice + 1) "flags" + 2) 1) "on" + 3) "passwords" + 4) 1) "p1pp0" + 5) "commands" + 6) "-@all +get" + 7) "keys" + 8) 1) "cached:*" + +The `ACL GETUSER` returns a field-value array describing the user in more parsable terms. The output includes the set of flags, a list of key patterns, passwords and so forth. The output is probably more readalbe if we use RESP3, so that it is returned as as map reply: + + > ACL GETUSER alice + 1# "flags" => 1~ "on" + 2# "passwords" => 1) "p1pp0" + 3# "commands" => "-@all +get" + 4# "keys" => 1) "cached:*" + +*Note: from now on we'll continue using the Redis default protocol, version 2, because it will take some time for the community to switch to the new one.* + +Using another `ACL SETUSER` command (from a different user, because alice cannot run the `ACL` command) we can add multiple patterns to the user: + + > ACL SETUSER alice ~objects:* ~items:* ~public:* + OK + > ACL LIST + 1) "user alice on >p1pp0 ~cached:* ~objects:* ~items:* ~public:* -@all +get" + 2) "user default on nopass ~* +@all" + +The user representation in memory is now as we expect it to be. + +## Playing with command categories + +## +@all VS -@all + +## TODO list for this document * Make sure to specify that modules commands are ignored when adding/removing categories. * Document cost of keys matching with some benchmark. * Document how +@all also includes module commands and every future command. * Document how ACL SAVE is not included in CONFIG REWRITE. From f0ad4e0e0997100dcdfd98f8de6036c3de08e6f2 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 12 Feb 2019 08:52:03 +0100 Subject: [PATCH 0101/1457] ACL doc: fix title level. --- topics/acl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/acl.md b/topics/acl.md index b791d7fc9a..af2771e647 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -122,7 +122,7 @@ Reset the user: * `reset`: Performs the following actions: resetpass, resetkeys, off, -@all. The user returns to the same state it has immediately after its creation.
-# Creating and editing users ACLs with the ACL SETUSER command +## Creating and editing users ACLs with the ACL SETUSER command Users can be created and modified in two main ways: From 57760a287225bbaccd7db2b0632c6bf508e4c4d9 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 13 Feb 2019 12:43:31 +0100 Subject: [PATCH 0102/1457] ACL doc: categories. --- topics/acl.md | 89 +++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 89 insertions(+) diff --git a/topics/acl.md b/topics/acl.md index af2771e647..b1ca6d742e 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -211,8 +211,97 @@ Using another `ACL SETUSER` command (from a different user, because alice cannot The user representation in memory is now as we expect it to be. +## What happens when calling ACL SETUSER multiple times + +It is very important to understand what happens when ACL SETUSER is called +multiple times. What is critical to know is that every `SETUSER` call will +NOT reset the user, but will just apply the ACL rules to the existing user. +The user is reset only if it was not known before: in that case a brand new +user is created with zeroed-ACLs, that is, the user cannot do anything, is +disabled, has no passwords and so forth: for safety this is the best default. + +However later calls will just modify the user incrementally, so for instance +the following sequence: + + > ACL SETUSER myuser +set + OK + > ACL SETUSER myuser +get + OK + +Will result in myuser being able to call both `GET` and `SET`: + + > ACL LIST + 1) "user default on nopass ~* +@all" + 2) "user myuser off -@all +set +get" + ## Playing with command categories +Setting users ACLs by specifying all the commands one after the other is +really annoying, so instead we do things like this: + + > ACL SETUSER antirez on +@all -@dangerous >somepassword ~* + +By saying +@all and -@dangerous we included all the commands and later removed +all the commands that are tagged as dangerous inside the Redis command table. +Please note that command categories **never include modules commands** with +the exception of +@all. If you say +@all all the commands can be executed by +the user, even future commands loaded via the modules system. However if you +use the ACL rule +@readonly or any other, the modules commands are always +excluded. This is very important because you should just trust the Redis +internal command table for sanity. Moudles my expose dangerous things and in +the case of an ACL that is just additive, that is, in the form of `+@all -...`, +you should be absolutely sure that you'll never include what you did not mean +to. + +However, remembering what categories are defined, and what commands each +category exactly includes, is impossible and would be super boring, so the +Redis `ACL` command exports the `CAT` subcommand that can be used in two forms: + + ACL CAT -- Will just list all the categories available + ACL CAT <category> -- Will list all the commands inside the category + +Examples: + + > ACL CAT + 1) "keyspace" + 2) "read" + 3) "write" + 4) "set" + 5) "sortedset" + 6) "list" + 7) "hash" + 8) "string" + 9) "bitmap" + 10) "hyperloglog" + 11) "geo" + 12) "stream" + 13) "pubsub" + 14) "admin" + 15) "fast" + 16) "slow" + 17) "blocking" + 18) "dangerous" + 19) "connection" + 20) "transaction" + 21) "scripting" + +As you can see so far there are 21 distinct categories.
Now let's check what +command is part of the *geo* category: + + > ACL CAT geo + 1) "geohash" + 2) "georadius_ro" + 3) "georadiusbymember" + 4) "geopos" + 5) "geoadd" + 6) "georadiusbymember_ro" + 7) "geodist" + 8) "georadius" + +Note that commands may be part of multiple categories, so for instance an +ACL rule like `+@geo -@readonly` will result in certain geo commands being +excluded because they are readonly commands. + +## +@all VS -@all + +## TODO list for this document From 08d96b8158b2b7e8570d716b3c8aac2aeb30a5a6 Mon Sep 17 00:00:00 2001 From: Simon Willison Date: Sat, 16 Feb 2019 15:13:21 -0800 Subject: [PATCH 0103/1457] Small copy tweaks --- topics/acl.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/acl.md b/topics/acl.md index b1ca6d742e..7542a8be94 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -1,9 +1,9 @@ # ACL The Redis ACL, short for Access Control List, is the feature that allows certain -connections to be limited in the commands that can be executed and the keys -that can be accessed. The way it works is that, after connecting, a client -requires to authenticate providing a username and a valid password: if +connections to be limited in terms of the commands that can be executed and the +keys that can be accessed. The way it works is that, after connecting, a client +is required to authenticate providing a username and a valid password: if the authentication stage succeeded, the connection is associated with a given user and the limits the user has. Redis can be configured so that new From 8e3067314fdd3a506fc2b02b163016e3dd8f2ea4 Mon Sep 17 00:00:00 2001 From: lwjli Date: Thu, 21 Feb 2019 14:14:02 +0800 Subject: [PATCH 0104/1457] appendfsync --- topics/persistence.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/topics/persistence.md b/topics/persistence.md index ef75f689f8..6106a49b67 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -134,18 +134,18 @@ You can configure how many times Redis will [`fsync`](http://linux.die.net/man/2/fsync) data on disk. There are three options: -* `fsync` every time a new command is appended to the AOF. Very very +* appendfsync always: `fsync` every time a new command is appended to the AOF. Very very slow, very safe. -* `fsync` every second. Fast enough (in 2.4 likely to be as fast as snapshotting), and you can lose 1 second of data if there is a disaster. +* appendfsync everysec: `fsync` every second. Fast enough (in 2.4 likely to be as fast as snapshotting), and you can lose 1 second of data if there is a disaster. -* Never `fsync`, just put your data in the hands of the Operating +* appendfsync no: Never `fsync`, just put your data in the hands of the Operating System. The faster and less safe method. The suggested (and default) policy is to `fsync` every second. It is both very fast and pretty safe. The `always` policy is very slow in practice (although it was improved in Redis 2.0) – there is no way to -make `fsync` faster than it is. +make `fsync` slower than it is. ### What should I do if my AOF gets corrupted? From 99493a234a37d919e46546de51270e8bccb00c9e Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 21 Feb 2019 12:42:31 +0100 Subject: [PATCH 0105/1457] Update persistence doc about AOF truncation.
--- topics/persistence.md | 53 ++++++++++++++++++++++++++----------------- 1 file changed, 32 insertions(+), 21 deletions(-) diff --git a/topics/persistence.md b/topics/persistence.md index 6106a49b67..d9eaa2943d 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -134,36 +134,47 @@ You can configure how many times Redis will [`fsync`](http://linux.die.net/man/2/fsync) data on disk. There are three options: -* appendfsync always: `fsync` every time a new command is appended to the AOF. Very very -slow, very safe. - +* appendfsync always: `fsync` every time a new command is appended to the AOF. Very very slow, very safe. * appendfsync everysec: `fsync` every second. Fast enough (in 2.4 likely to be as fast as snapshotting), and you can lose 1 second of data if there is a disaster. - -* appendfsync no: Never `fsync`, just put your data in the hands of the Operating -System. The faster and less safe method. +* appendfsync no: Never `fsync`, just put your data in the hands of the Operating System. The faster and less safe method. Normally Linux will flush data every 30 seconds with this configuration, but it's up to the kernel's exact tuning. The suggested (and default) policy is to `fsync` every second. It is both very fast and pretty safe. The `always` policy is very slow in -practice (although it was improved in Redis 2.0) – there is no way to -make `fsync` slower than it is. - -### What should I do if my AOF gets corrupted? - -It is possible that the server crashes while writing the AOF file (this -still should never lead to inconsistencies), corrupting the file in a -way that is no longer loadable by Redis. When this happens you can fix -this problem using the following procedure: +practice, but it supports group commit, so if there are multiple parallel +writes Redis will try to perform a single `fsync` operation. + +### What should I do if my AOF gets truncated? + +It is possible that the server crashed while writing the AOF file, or that the +volume where the AOF file is stored was full. When this happens the +AOF still contains consistent data representing a given point-in-time version +of the dataset (which may be up to one second old with the default AOF fsync +policy), but the last command in the AOF could be truncated. +The latest major versions of Redis will be able to load the AOF anyway, just +discarding the last non-well-formed command in the file. In this case the +server will emit a log like the following: + +``` +* Reading RDB preamble from AOF file... +* Reading the remaining AOF tail... +# !!! Warning: short read while loading the AOF file !!! +# !!! Truncating the AOF at offset 439 !!! +# AOF loaded anyway because aof-load-truncated is enabled +``` + +You can change the default configuration to force Redis to stop in such +cases if you want, but the default configuration is to continue regardless +of the fact that the last command in the file is not well-formed, in order to guarantee +availability after a restart. + +Older versions of Redis may not recover, and may require the following steps (see the example session below): * Make a backup copy of your AOF file. * Fix the original file using the `redis-check-aof` tool that ships with Redis: $ redis-check-aof --fix <filename> * Optionally use `diff -u` to check what is the difference between two files. * Restart the server with the fixed file.
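Assuming the default `appendonly.aof` file name (it depends on your configuration), the recovery might look like the following shell session:

    $ cp appendonly.aof appendonly.aof.bak          # backup copy first
    $ redis-check-aof --fix appendonly.aof          # truncate at the last valid command
    $ diff -u appendonly.aof.bak appendonly.aof     # optionally inspect what was removed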
### How it works From d62ca28ebd2fb24899065be5e21bf1ea073c4f44 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 21 Feb 2019 12:47:33 +0100 Subject: [PATCH 0106/1457] Persistence doc: document AOF corruption. --- topics/persistence.md | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/topics/persistence.md b/topics/persistence.md index d9eaa2943d..553f9c4113 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -177,6 +177,26 @@ Older versions of Redis may not recover, and may require the following steps: * Optionally use `diff -u` to check what is the difference between two files. * Restart the server with the fixed file. +### What should I do if my AOF gets corrupted? + +If the AOF file is not just truncated, but corrupted with invalid byte +sequences in the middle, things are more complex. Redis will complain +at startup and will abort: + +``` +* Reading the remaining AOF tail... +# Bad file format reading the append only file: make a backup of your AOF file, then use ./redis-check-aof --fix +``` + +The best thing to do is to run the `redis-check-aof` utility, initially without +the `--fix` option, then understand the problem, jump to the given +offset in the file, and see if it is possible to manually repair the file: +the AOF uses the same format as the Redis protocol and is quite simple to fix +manually. Otherwise it is possible to let the utility fix the file for us, but +in that case all the AOF portion from the invalid part to the end of the +file may be discarded, leading to a massive amount of data loss if the +corruption happens to be in the initial part of the file. + ### How it works Log rewriting uses the same copy-on-write trick already in use for From 5cd6ffdc5810826ff6cc5602d12a51993bb7d802 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 21 Feb 2019 12:51:50 +0100 Subject: [PATCH 0107/1457] Persistence doc: a few more improvements around AOF and backups. --- topics/persistence.md | 19 ++++++++++++------- 1 file changed, 12 insertions(+), 7 deletions(-) diff --git a/topics/persistence.md b/topics/persistence.md index 553f9c4113..0fc00b3451 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -287,6 +287,11 @@ running. This is what we suggest: * Every time the cron script runs, make sure to call the `find` command to make sure too old snapshots are deleted: for instance you can take hourly snapshots for the latest 48 hours, and daily snapshots for one or two months. Make sure to name the snapshots with date and time information. * At least one time every day make sure to transfer an RDB snapshot *outside your data center* or at least *outside the physical machine* running your Redis instance. +If you run a Redis instance with only AOF persistence enabled, you can still +copy the AOF in order to create backups. The file may lack the final part +but Redis will still be able to load it (see the previous sections about +truncated AOF files). + Disaster recovery --- Since many Redis users are in the startup scene and thus don't have plenty of money to spend we'll review the most interesting disaster recovery techniques that don't have too high costs. -* Amazon S3 and other similar services are a good way for mounting your disaster recovery system. Simply transfer your daily or hourly RDB snapshot to S3 in an encrypted form. You can encrypt your data using `gpg -c` (in symmetric encryption mode).
Make sure to store your password in many different safe places (for instance give a copy to the most important people of your organization). It is recommended to use multiple storage services for improved data safety. -* Transfer your snapshots using SCP (part of SSH) to far servers. This is a fairly simple and safe route: get a small VPS in a place that is very far from you, install ssh there, and generate an ssh client key without passphrase, then add it in the authorized_keys file of your small VPS. You are ready to transfer -backups in an automated fashion. Get at least two VPS in two different providers +* Amazon S3 and other similar services are a good way for implementing your disaster recovery system. Simply transfer your daily or hourly RDB snapshot to S3 in an encrypted form. You can encrypt your data using `gpg -c` (in symmetric encryption mode). Make sure to store your password in many different safe places (for instance give a copy to the most important people of your organization). It is recommended to use multiple storage services for improved data safety. +* Transfer your snapshots using SCP (part of SSH) to far servers. This is a fairly simple and safe route: get a small VPS in a place that is very far from you, install ssh there, and generate an ssh client key without passphrase, then add it in the `authorized_keys` file of your small VPS. You are ready to transfer backups in an automated fashion. Get at least two VPS in two different providers for best results. It is important to understand that this system can easily fail if not coded in the right way. At least make absolutely sure that after the transfer is completed you are able to verify the file size (that should match the one of the file you copied) and possibly the SHA1 digest if you are using a VPS. It is important to understand that this system can easily fail if not implemented in the right way. At least make absolutely sure that after the transfer is completed you are able to verify the file size (that should match the one of the file you copied) and possibly the SHA1 digest if you are using a VPS. You also need some kind of independent alert system if the transfer of fresh backups is not working for some reason. From 14acd0d698e9adfdf667b2b22abd86a511571f08 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 22 Feb 2019 17:31:36 +0100 Subject: [PATCH 0108/1457] ACL doc: subcommands matching. --- topics/acl.md | 42 ++++++++++++++++++++++++++++++++++++++-- 1 file changed, 40 insertions(+), 2 deletions(-) diff --git a/topics/acl.md b/topics/acl.md index 7542a8be94..93bde5d508 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -248,7 +248,7 @@ the exception of +@all. If you say +@all all the commands can be executed by the user, even future commands loaded via the modules system. However if you use the ACL rule +@readonly or any other, the modules commands are always excluded. This is very important because you should just trust the Redis -internal command table for sanity. Moudles my expose dangerous things and in +internal command table for sanity. Modules may expose dangerous things and in the case of an ACL that is just additive, that is, in the form of `+@all -...`, you should be absolutely sure that you'll never include what you did not mean to. @@ -300,10 +300,48 @@ command is part of the *geo* category: Note that commands may be part of multiple categories, so for instance an ACL rule like `+@geo -@readonly` will result in certain geo commands being -excluded because they are readonly commands.
+excluded because they are read-only commands. + +## Adding subcommands + +Often the ability to exclude or include a command as a whole is not enough. +Many Redis commands do multiple things based on the subcommand passed as +argument. For example the `CLIENT` command can be used in order to do +dangerous and non-dangerous operations. Many deployments may not be happy to +provide the ability to execute `CLIENT KILL` to non admin-level users, but may +still want them to be able to run `CLIENT SETNAME`. + +_Note: probably the new RESP3 `HELLO` command will provide a SETNAME option soon, but this is still a good example anyway._ + +In such a case I could alter the ACL of a user in the following way: + + ACL SETUSER myuser -client +client|setname +client|getname + +I started by removing the `CLIENT` command, and later added the two allowed +subcommands. Note that **it is not possible to do the reverse**, the subcommands +can only be added, and not excluded, because it is possible that in the future +new subcommands may be added: it is a lot safer to specify all the subcommands +that are valid for some user. Moreover, if you add a subcommand of a command +that is not already disabled, an error is generated, because this can only +be a bug in the ACL rules: + + > ACL SETUSER default +debug|segfault + (error) ERR Error in ACL SETUSER modifier '+debug|segfault': Adding a + subcommand of a command already fully added is not allowed. Remove the + command to start. Example: -DEBUG +DEBUG|DIGEST + +Note that subcommand matching may add some performance penalty, however such +penalty is very hard to measure even with synthetic benchmarks, and the +additional CPU cost is only paid when such a command is called, and not when +other commands are called. ## +@all VS -@all +In the previous section it was observed how it is possible to define commands +ACLs based on adding/removing single commands. + +## Using an external ACL file + ## TODO list for this document * Make sure to specify that modules commands are ignored when adding/removing categories. * Document cost of keys matching with some benchmark. * Document how +@all also includes module commands and every future command. * Document how ACL SAVE is not included in CONFIG REWRITE. From e965afa2166f50b9138e1d518b015d71adf69266 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 22 Feb 2019 17:32:44 +0100 Subject: [PATCH 0109/1457] ACL: fix a few typos. --- topics/acl.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/acl.md b/topics/acl.md index 93bde5d508..7f5156debe 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -111,7 +111,7 @@ Allow and disallow certain keys: Configure valid passwords for the user: -* `><password>`: Add this passowrd to the list of valid passwords for the user. For example `>mypass` will add "mypass" to the list of valid passwords. This directive clears the *nopass* flag (see later). Every user can have any number of passwords. +* `><password>`: Add this password to the list of valid passwords for the user. For example `>mypass` will add "mypass" to the list of valid passwords. This directive clears the *nopass* flag (see later). Every user can have any number of passwords. * `<<password>`: Remove this password from the list of valid passwords. Emits an error in case the password you are trying to remove is actually not set. * `nopass`: All the set passwords of the user are removed, and the user is flagged as requiring no password: it means that every password will work against this user. If this directive is used for the default user, every new connection will be immediately authenticated with the default user without any explicit AUTH command required. Note that the *resetpass* directive will clear this condition.
* `resetpass`: Flush the list of allowed passwords. Moreover removes the *nopass* status. After *resetpass* the user has no associated passwords and there is no way to authenticate without adding some password (or setting it as *nopass* later). @@ -191,7 +191,7 @@ computers to read, while `ACL LIST` is more biased towards humans. 7) "keys" 8) 1) "cached:*" -The `ACL GETUSER` returns a field-value array describing the user in more parsable terms. The output includes the set of flags, a list of key patterns, passwords and so forth. The output is probably more readalbe if we use RESP3, so that it is returned as as map reply: +The `ACL GETUSER` returns a field-value array describing the user in more parsable terms. The output includes the set of flags, a list of key patterns, passwords and so forth. The output is probably more readable if we use RESP3, so that it is returned as a map reply: From 84115f392b62a22e641ca79a8d25bc8bc39c3476 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 25 Feb 2019 20:03:02 +0100 Subject: [PATCH 0110/1457] Gopher page added. --- topics/gopher.md | 52 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 52 insertions(+) create mode 100644 topics/gopher.md diff --git a/topics/gopher.md b/topics/gopher.md new file mode 100644 index 0000000000..50e78baddd --- /dev/null +++ b/topics/gopher.md @@ -0,0 +1,52 @@ +Redis contains an implementation of the Gopher protocol, as specified in +the [RFC 1436](https://www.ietf.org/rfc/rfc1436.txt). + +The Gopher protocol was very popular in the late '90s. It is an alternative +to the web, and the implementation, both server and client side, is so simple +that the Redis server needs just 100 lines of code in order to implement this +support. + +What do you do with Gopher nowadays? Well Gopher never *really* died, and +lately there is a movement to resurrect Gopher's more hierarchical content, +composed of just plain text documents. Some want a simpler +internet, others believe that the mainstream internet became too +controlled, and it's cool to create an alternative space for people that +want a bit of fresh air. + +Anyway, for the 10th birthday of Redis, we gave it the Gopher protocol +as a gift. + +## How does it work? + +The Redis Gopher support uses the inline protocol of Redis, and specifically +two kinds of inline requests that were otherwise illegal: an empty request +or any request that starts with "/" (there are no Redis commands starting +with such a slash). Normal RESP2/RESP3 requests are completely out of the +path of the Gopher protocol implementation and are served as usual. + +If you open a connection to Redis when Gopher is enabled and send it +a string like "/foo", if there is a key named "/foo" it is served via the +Gopher protocol. + +In order to create a real Gopher "hole" (the name of a Gopher site in Gopher +parlance), you likely need a script like the following: + + https://github.com/antirez/gopher2redis + +## SECURITY WARNING + +If you plan to put Redis on the internet in a publicly accessible address +to serve Gopher pages, **make sure to set a password** for the instance. +Once a password is set: + +1. The Gopher server (when enabled, not by default) will still serve content via Gopher. +2. However, other commands cannot be called before the client authenticates. + +So use the `requirepass` option to protect your instance. + +To enable Gopher support use the following configuration line.
+ + gopher-enabled yes + +Accessing keys that are not strings or do not exist will generate +an error in Gopher protocol format. From 0b44b34b505e6319e5e97bde1e4a385b185cd46e Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 25 Feb 2019 21:32:36 +0100 Subject: [PATCH 0111/1457] Page with titles are nicer. --- topics/gopher.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/topics/gopher.md b/topics/gopher.md index 50e78baddd..8381a74fc5 100644 --- a/topics/gopher.md +++ b/topics/gopher.md @@ -1,3 +1,5 @@ +# Redis Gopher support + Redis contains an implementation of the Gopher protocol, as specified in the [RFC 1436](https://www.ietf.org/rfc/rfc1436.txt). From ccebe098ef715c1f8ee9e70c73640d401a4cb96a Mon Sep 17 00:00:00 2001 From: antirez Date: Sat, 2 Mar 2019 18:57:08 +0100 Subject: [PATCH 0112/1457] Modules updated. --- modules.json | 42 ++++++++++++++++++++++++++---------------- 1 file changed, 26 insertions(+), 16 deletions(-) diff --git a/modules.json b/modules.json index f6a9296f92..d9f07952a6 100644 --- a/modules.json +++ b/modules.json @@ -21,7 +21,7 @@ }, { "name": "RedisGraph", - "license": "(non OSI license) Apache 2.0 modified with Commons Clause", + "license": "Redis Source Available License", "repository": "https://github.com/RedisLabsModules/redis-graph", "description": "A graph database with a Cypher-based querying language using sparse adjacency matrices", "authors": [ @@ -41,9 +41,9 @@ "stars": 45 }, { - "name": "ReJSON", - "license": "(non OSI license) Apache 2.0 modified with Commons Clause", - "repository": "https://github.com/RedisLabsModules/ReJSON", + "name": "RedisJSON", + "license": "Redis Source Available License", + "repository": "https://github.com/RedisLabsModules/redisjson", "description": "A JSON data type for Redis", "authors": [ "itamarhaber", @@ -52,9 +52,9 @@ "stars": 641 }, { - "name": "Redis-ML", - "license": "(non OSI license) Apache 2.0 modified with Commons Clause", - "repository": "https://github.com/RedisLabsModules/redis-ml", + "name": "RedisML", - "license" hmm + "license": "Redis Source Available License", + "repository": "https://github.com/RedisLabsModules/redisml", "description": "Machine Learning Model Server", "authors": [ "shaynativ", @@ -64,7 +64,7 @@ }, { "name": "RediSearch", - "license": "(non OSI license) Apache 2.0 modified with Commons Clause", + "license": "Redis Source Available License", "repository": "https://github.com/RedisLabsModules/RediSearch", "description": "Full-Text search over Redis", "authors": [ @@ -75,7 +75,7 @@ }, { "name": "topk", - "license": "(non OSI license) Apache 2.0 modified with Commons Clause", + "license": "Redis Source Available License", "repository": "https://github.com/RedisLabsModules/topk", "description": "An almost deterministic top k elements counter", "authors": [ @@ -86,7 +86,7 @@ }, { "name": "countminsketch", - "license": "(non OSI license) Apache 2.0 modified with Commons Clause", + "license": "Redis Source Available License", "repository": "https://github.com/RedisLabsModules/countminsketch", "description": "An approximate frequency counter", "authors": [ @@ -96,9 +96,9 @@ "stars": 31 }, { - "name": "rebloom", - "license": "(non OSI license) Apache 2.0 modified with Commons Clause", - "repository": "https://github.com/RedisLabsModules/rebloom", + "name": "RedisBloom", + "license": "Redis Source Available License", + "repository":
"https://github.com/RedisLabsModules/redisbloom", "description": "Scalable Bloom filters", "authors": [ "mnunberg", @@ -117,15 +117,25 @@ "stars": 2073 }, { - "name": "redis-timerseries", - "license": "AGPL", - "repository": "https://github.com/danni-m/redis-timeseries", + "name": "RedisTimeSeries", + "license": "Redis Source Available License", + "repository": "https://github.com/RedisLabsModules/RedisTimeSeries", "description": "Time-series data structure for redis", "authors": [ "danni-m" ], "stars": 186 }, + { + "name": "RedisAI", + "license": "AGPL", + "repository": "https://github.com/RedisAI/RedisAI", + "description": "A Redis module for serving tensors and executing deep learning graphs", + "authors": [ + "lantiga" + ], + "stars": 61 + }, { "name": "ReDe", "license": "MIT", From b77a93f2e6b2695361829c2dbe8558d278f88330 Mon Sep 17 00:00:00 2001 From: Steve Webster Date: Tue, 12 Mar 2019 20:37:06 +0000 Subject: [PATCH 0113/1457] Clarify JUSTID option doesn't increment delivery count See antires/redis#5194 for discussion on `JUSTID` semantics --- commands/xclaim.md | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/commands/xclaim.md b/commands/xclaim.md index 48f5b6e3f6..a8ca2b1176 100644 --- a/commands/xclaim.md +++ b/commands/xclaim.md @@ -12,10 +12,7 @@ This dynamic is clearly explained in the [Stream intro documentation](/topics/st Note that the message is claimed only if its idle time is greater the minimum idle time we specify when calling `XCLAIM`. Because as a side effect `XCLAIM` will also reset the idle time (since this is a new attempt at processing the message), two consumers trying to claim a message at the same time will never both succeed: only one will successfully claim the message. This avoids that we process a given message multiple times in a trivial way (yet multiple processing is possible and unavoidable in the general case). -Moreover, as a side effect, `XCLAIM` will increment the count of attempted -deliveries of the message. In this way messages that cannot be processed for -some reason, for instance because the consumers crash attempting to process -them, will start to have a larger counter and can be detected inside the system. +Moreover, as a side effect, `XCLAIM` will increment the count of attempted deliveries of the message unless the `JUSTID` option has been specified (which only delivers the message ID, not the message itself). In this way messages that cannot be processed for some reason, for instance because the consumers crash attempting to process them, will start to have a larger counter and can be detected inside the system. ## Command options @@ -28,7 +25,7 @@ useful to normal users: 2. `TIME `: This is the same as IDLE but instead of a relative amount of milliseconds, it sets the idle time to a specific Unix time (in milliseconds). This is useful in order to rewrite the AOF file generating `XCLAIM` commands. 3. `RETRYCOUNT `: Set the retry counter to the specified value. This counter is incremented every time a message is delivered again. Normally `XCLAIM` does not alter this counter, which is just served to clients when the XPENDING command is called: this way clients can detect anomalies, like messages that are never processed for some reason after a big number of delivery attempts. 4. `FORCE`: Creates the pending message entry in the PEL even if certain specified IDs are not already in the PEL assigned to a different client. 
However the message must exist in the stream, otherwise the IDs of non-existing messages are ignored. -5. `JUSTID`: Return just an array of IDs of messages successfully claimed, without returning the actual message. +5. `JUSTID`: Return just an array of IDs of messages successfully claimed, without returning the actual message. Using this option means the retry counter is not incremented. @return From caeeddd093647b8dcfa5f774289713fc1e8c93b5 Mon Sep 17 00:00:00 2001 From: Doug Nelson Date: Mon, 3 Jun 2019 11:15:29 +0100 Subject: [PATCH 0114/1457] fix: consumer-name misspelled --- topics/streams-intro.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/streams-intro.md b/topics/streams-intro.md index 4c423d84d4..c74ea33cc9 100644 --- a/topics/streams-intro.md +++ b/topics/streams-intro.md @@ -403,7 +403,7 @@ When called in this way the command just outputs the total number of pending mes We can ask for more info by giving more arguments to **XPENDING**, because the full command signature is the following: ``` -XPENDING <key> <groupname> [<start> <stop> <count> [<conusmer-name>]] +XPENDING <key> <groupname> [<start> <stop> <count> [<consumer-name>]] ``` By providing a start and end ID (that can be just `-` and `+` as in **XRANGE**) and a count to control the amount of information returned by the command, we are able to know more about the pending messages. The optional final argument, the consumer group name, is used if we want to limit the output to just messages pending for a given consumer group, but we'll not use this feature in the following example. From 9d8f5053ed6e3ad45b827f443a57fa574d458a0a Mon Sep 17 00:00:00 2001 From: Benoit de Chezelles Date: Tue, 25 Jun 2019 11:39:11 +0200 Subject: [PATCH 0115/1457] Add minimal doc for MIGRATE AUTH option --- commands.json | 6 ++++++ commands/migrate.md | 4 +++- 2 files changed, 9 insertions(+), 1 deletion(-) diff --git a/commands.json b/commands.json index 92ca1e299a..97345576bb 100644 --- a/commands.json +++ b/commands.json @@ -1706,6 +1706,12 @@ "enum": ["REPLACE"], "optional": true }, + { + "command": "AUTH", + "name": "password", + "type": "string", + "optional": true + }, { "name": "key", "command": "KEYS", diff --git a/commands/migrate.md b/commands/migrate.md index 6c082005bb..928c953cfc 100644 --- a/commands/migrate.md +++ b/commands/migrate.md @@ -63,11 +63,13 @@ just a single key exists. * `COPY` -- Do not remove the key from the local instance. * `REPLACE` -- Replace existing key on the remote instance. * `KEYS` -- If the key argument is an empty string, the command will instead migrate all the keys that follow the `KEYS` option (see the above section for more info). +* `AUTH` -- Authenticate with the given password to the remote instance. `COPY` and `REPLACE` are available only in 3.0 and above. `KEYS` is available starting with Redis 3.0.6. +`AUTH` is available starting with Redis 4.0.7. @return @simple-string-reply: The command returns OK on success, or `NOKEY` if no keys were -found in the source instance.
From 8f6bd9ef135f709a84db62c554645028c6d70358 Mon Sep 17 00:00:00 2001 From: Delius Date: Wed, 3 Jul 2019 12:29:26 +0300 Subject: [PATCH 0116/1457] Fix bzpopmax example --- commands/bzpopmax.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/bzpopmax.md b/commands/bzpopmax.md index 6b8235f509..6eb712f520 100644 --- a/commands/bzpopmax.md +++ b/commands/bzpopmax.md @@ -31,7 +31,7 @@ redis> DEL zset1 zset2 redis> ZADD zset1 0 a 1 b 2 c (integer) 3 redis> BZPOPMAX zset1 zset2 0 -1) "zet1" +1) "zset1" 2) "2" 2) "c" ``` From 34396f2ada571c6f25236c0ec6c11ff9efc60214 Mon Sep 17 00:00:00 2001 From: Angus Pearson Date: Mon, 8 Jul 2019 12:53:31 +0100 Subject: [PATCH 0117/1457] Add SCAN TYPE documentation, re. Redis PR #6116 --- commands.json | 6 ++++++ commands/scan.md | 24 ++++++++++++++++++++++++ 2 files changed, 30 insertions(+) diff --git a/commands.json b/commands.json index 92ca1e299a..ed3c296a2b 100644 --- a/commands.json +++ b/commands.json @@ -3317,6 +3317,12 @@ "name": "count", "type": "integer", "optional": true + }, + { + "command": "TYPE", + "name": "type", + "type": "string", + "optional": true } ], "since": "2.8.0", diff --git a/commands/scan.md b/commands/scan.md index e8f9853ae7..4a435ab2a1 100644 --- a/commands/scan.md +++ b/commands/scan.md @@ -139,6 +139,30 @@ redis 127.0.0.1:6379> As you can see most of the calls returned zero elements, but the last call where a COUNT of 1000 was used in order to force the command to do more scanning for that iteration. + +## The TYPE option + +This option asks `SCAN` to only return objects that match a given `type`, allowing you to iterate through the database looking for keys of a specific type. The **TYPE** option is only available on the whole-database `SCAN`, not `HSCAN` or `ZSCAN` etc. + +The `type` argument is the same string name that the `TYPE` command returns. Note a quirk where some Redis types, such as GeoHashes, HyperLogLogs, Bitmaps, and Bitfields, may internally be implemented using other Redis types, such as a string or zset, so can't be distinguished from other keys of that same type by `SCAN`. For example, a ZSET and GEOHASH: + +``` +redis 127.0.0.1:6379> GEOADD geokey 0 0 value +(integer) 1 +redis 127.0.0.1:6379> ZADD zkey 1000 value +(integer) 1 +redis 127.0.0.1:6379> TYPE geokey +zset +redis 127.0.0.1:6379> TYPE zkey +zset +redis 127.0.0.1:6379> SCAN 0 TYPE zset +1) "0" +2) 1) "geokey" + 2) "zkey" +``` + +It is important to note that the **TYPE** filter is also applied after elements are retrieved from the database, so the option does not reduce the amount of work the server has to do to complete a full iteration, and for rare types you may receive no elements in many iterations. + ## Multiple parallel iterations It is possible for an infinite number of clients to iterate the same collection at the same time, as the full state of the iterator is in the cursor, that is obtained and returned to the client at every call. Server side no state is taken at all. From b7753dc32987950f0c9b2c0814e12a55bd8e5af5 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 19 Jul 2019 18:38:32 +0200 Subject: [PATCH 0118/1457] Client side caching: intro. 
From b7753dc32987950f0c9b2c0814e12a55bd8e5af5 Mon Sep 17 00:00:00 2001
From: antirez
Date: Fri, 19 Jul 2019 18:38:32 +0200
Subject: [PATCH 0118/1457] Client side caching: intro.

---
 topics/client-side-caching.md | 71 +++++++++++++++++++++++++++++++++++
 1 file changed, 71 insertions(+)
 create mode 100644 topics/client-side-caching.md

diff --git a/topics/client-side-caching.md b/topics/client-side-caching.md
new file mode 100644
index 0000000000..83f71d1a1d
--- /dev/null
+++ b/topics/client-side-caching.md
@@ -0,0 +1,71 @@
+# Redis server-assisted client side caching
+
+Client side caching is a technique used in order to create high performance
+services. It exploits the available memory in the application servers, that
+usually are distinct computers compared to the database nodes, in order to
+store some subset of the database information directly in the application side.
+
+Normally when some data is required, the application servers will ask the
+database about such information, like in the following picture:
+
+
+    +-------------+                                +----------+
+    |             | ------- GET user:1234 -------> |          |
+    | Application |                                | Database |
+    |             | <---- username = Alice ------- |          |
+    +-------------+                                +----------+
+
+When client side caching is used, the application will store the reply of
+popular queries directly inside the application memory, so that it can
+reuse such replies later, without contacting the database again.
+
+    +-------------+                                +----------+
+    |             |                                |          |
+    | Application |       ( No chat needed )       | Database |
+    |             |                                |          |
+    +-------------+                                +----------+
+    | Local cache |
+    |             |
+    | user:1234 = |
+    | username    |
+    | Alice       |
+    +-------------+
+
+While the application memory used for the local cache may not be very big,
+the time needed in order to access the local computer memory is orders of
+magnitude smaller compared to asking a networked service like a database.
+Since often the same small percentage of data are accessed very frequently
+this pattern can greatly reduce the latency for the application to get data
+and, at the same time, the load in the database side.
+
+## There are only two big problems in computer science...
+
+A problem with the above pattern is how to invalidate the information that
+the application is holding, in order to avoid presenting to the user stale
+data. For example after the application above locally cached the user:1234
+information, Alice may update her username to Flora. Yet the application
+may continue to serve the old username for user 1234.
+
+Sometimes this problem is not a big deal, so the client will just use a
+"time to live" for the cached information. Once a given amount of time has
+elapsed, the information will no longer be considered valid. More complex
+patterns, when using Redis, leverage Pub/Sub messages in order to
+send invalidation messages to clients listening. This can be made to work
+but is tricky and costly from the point of view of the bandwidth used, because
+often such patterns involve sending the invalidation messages to every client
+in the application, even if certain clients may not have any copy of the
+invalidated data.
+
+Yet many very big applications use client side caching: it is in some way
+the next logical strategy, after using a fast store like Redis, in order
+to cut on latency and be able to handle more queries per second. Because of the
+usefulness of such a pattern, making it more accessible could be a real
+advantage for Redis users. For this reason Redis 6, currently under
+development, already implements server-assisted client side caching. Once
+the database is an active part of the protocol, it can remember what
+keys a given client requested (if it enabled client side caching in the
+connection), and send invalidation messages if such keys get modified.
+
+
+
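The naive "time to live" approach mentioned in the document added above can be sketched in a few lines of Python. This is only an illustration of the pattern that server-assisted invalidation improves upon; the class name and the 30 second TTL are invented for the example:

```python
import time

class TTLCache:
    """Local cache where entries simply become stale after `ttl` seconds,
    trading freshness for simplicity (no invalidation messages needed)."""

    def __init__(self, ttl=30):
        self.ttl = ttl
        self.data = {}  # key -> (value, timestamp)

    def get(self, key):
        entry = self.data.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.time() - stored_at > self.ttl:
            del self.data[key]  # stale: drop it and fetch from the server
            return None
        return value

    def set(self, key, value):
        self.data[key] = (value, time.time())
```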
From 46d445691f65f7b22ff114c52d974664bf271a78 Mon Sep 17 00:00:00 2001
From: Gustavo Kishima
Date: Wed, 31 Jul 2019 15:18:57 -0300
Subject: [PATCH 0119/1457] Update ARM.md

---
 topics/ARM.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/topics/ARM.md b/topics/ARM.md
index 845ed7bbb5..23d2f22cbc 100644
--- a/topics/ARM.md
+++ b/topics/ARM.md
@@ -29,7 +29,7 @@ run as expected.

## Building Redis in the Pi

-* Download Redis verison 4 or 5.
+* Download Redis version 4 or 5.
* Just use `make` as usual to create the executable.

There is nothing special in the process. The only difference is that by

From 0940645b3526e9084a29f22d21298016ae6dc04c Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 1 Aug 2019 16:44:09 +0200
Subject: [PATCH 0120/1457] Client side caching: details about the Redis
 implementation.

---
 topics/client-side-caching.md | 67 ++++++++++++++++++++++++++++++-----
 1 file changed, 58 insertions(+), 9 deletions(-)

diff --git a/topics/client-side-caching.md b/topics/client-side-caching.md
index 83f71d1a1d..88925241da 100644
--- a/topics/client-side-caching.md
+++ b/topics/client-side-caching.md
@@ -56,16 +56,65 @@ often such patterns involve sending the invalidation messages to every client
in the application, even if certain clients may not have any copy of the
invalidated data.

-Yet many very big applications use client side caching: it is in some way
-the next logical strategy, after using a fast store like Redis, in order
-to cut on latency and be able to handle more queries per second. Because of the
-usefulness of such a pattern, making it more accessible could be a real
-advantage for Redis users. For this reason Redis 6, currently under
-development, already implements server-assisted client side caching. Once
-the database is an active part of the protocol, it can remember what
-keys a given client requested (if it enabled client side caching in the
-connection), and send invalidation messages if such keys get modified.
+Regardless of what schema is used, there is however a simple fact: many
+very large applications implement some form of client side caching, because
+it is the next logical step to having a fast store or a fast cache server.
+Once clients can retrieve an important amount of information without even
+asking a networked server at all, but just accessing their local memory,
+then it is possible to fetch more data per second (since many queries will
+not hit the database or the cache at all) with much smaller latency.
+For this reason Redis 6 implements direct support for client side caching,
+in order to make this pattern much simpler to implement, more accessible,
+reliable and efficient.

+## The Redis implementation of client side caching

+The Redis client side caching support is called _Tracking_. It basically
+consists of a few very simple ideas:

+1. Clients can enable tracking if they want. Connections start without tracking enabled.
+2. When tracking is enabled, the server remembers what keys each client requested during the connection lifetime (by sending read commands about such keys).
+3. 
When a key is modified by some client, or is evicted because it has an associated expire time, or evicted because of a _maxmemory_ policy, all the clients with tracking enabled that may have the key cached, are notified with an _invalidation message_.
+4. When clients receive invalidation messages, they are required to remove the corresponding keys, in order to avoid serving stale data.

+This is an example of the protocol (the actual details are very different as you'll discover reading this document till the end):

+* Client 1 `->` Server: CLIENT TRACKING ON
+* Client 1 `->` Server: GET foo
+* (The server remembers that Client 1 may have the key "foo" cached)
+* (Client 1 may remember the value of "foo" inside its local memory)
+* Client 2 `->` Server: SET foo SomeOtherValue
+* Server `->` Client 1: INVALIDATE "foo"

+While this is the general idea, the actual implementation and the details are very different, because the vanilla implementation of what is exposed above would be extremely inefficient. For instance a Redis instance may have 10k clients all caching 1 million keys each. In such a situation Redis would be required to remember 10 billion distinct pieces of information, including the key names themselves, which could be quite expensive. Moreover once a client disconnects, all the no longer useful information associated with it has to be garbage collected.

+In order to make client side caching more viable, the actual Redis
+implementation uses the following ideas:

+* The keyspace is divided into a bit more than 16 million caching slots. Given a key, the caching slot is obtained by taking the CRC64(key) modulo 16777216 (this basically means that just the lower 24 bits of the result are taken).
+* The server remembers which clients may have cached keys about a given caching slot. To do so we just need a table with 16 million entries (one for each caching slot), associated with a dictionary of all the clients that may have keys about it. This table is called the **Invalidation Table**.
+* Inside the invalidation table we don't really need to store pointers to client structures and do any garbage collection when the client disconnects: instead what we do is just storing client IDs (each Redis client has a unique numerical ID). If a client disconnects, the information will be incrementally garbage collected as caching slots are invalidated.

+This means that clients also have to organize their local cache according to the caching slots, so that when they receive an invalidation message about a given caching slot, such group of keys are no longer considered valid.

+Another advantage of caching slots, other than being more space efficient, is that, once the memory used in the server side in order to track client side information becomes too big, it is very simple to release some memory, just picking a random caching slot and evicting it, even if there was no actual modification hitting any key of such caching slot.

+Note that by using 16 million caching slots, it is still possible to have plenty of keys per instance, with just a few keys hashing to the same caching slot: this means that invalidation messages will invalidate just a couple of keys in the average case, even if the instance has tens of millions of keys.

+## Two connections mode

+Using the new version of the Redis protocol, RESP3, supported by Redis 6, it is possible to run the data queries and receive the invalidation messages in the same connection.
However many client implementations may prefer to implement client side caching using two separate connections: one for data, and one for invalidation messages. For this reason when a client enables tracking, it can specify to redirect the invalidation messages to another connection by specifying the "client ID" of a different connection. Many data connections can redirect invalidation messages to the same connection, which is useful for clients implementing connection pooling. The two connections model is the only one that is also supported for RESP2 (which lacks the ability to multiplex different kinds of information in the same connection).

+We'll show an example, this time using the actual Redis protocol in the old RESP2 mode, of how a complete session works, involving the following steps: enabling tracking with redirection to another connection, asking for a key, and getting an invalidation message once such key gets modified.

+## Opt-in caching

+## When client side caching makes sense

+# Implementing client side caching in client libraries

+## What to cache

+## Limiting the amount of memory used by clients

+## Avoiding race conditions
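To make the caching-slot idea introduced in the patch above concrete, a client's local cache could be organized per slot, along these lines. This is a rough Python sketch: `crc64` stands for a CRC64 implementation compatible with the one used by Redis, which is assumed to exist and is not shown here:

```python
CACHING_SLOTS = 16777216  # 2^24, as described in the document above

def caching_slot(key):
    # crc64() is a hypothetical helper assumed to match the CRC64
    # function Redis uses; only the lower 24 bits of the result matter.
    return crc64(key) % CACHING_SLOTS

class SlotCache:
    def __init__(self):
        self.slots = {}  # slot number -> {key: value}

    def set(self, key, value):
        self.slots.setdefault(caching_slot(key), {})[key] = value

    def get(self, key):
        return self.slots.get(caching_slot(key), {}).get(key)

    def invalidate(self, slot):
        # Called when an invalidation message for `slot` arrives:
        # every key hashing to that slot must be dropped at once.
        self.slots.pop(slot, None)
```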
From 5469e43aa57d21927e371ccf1117c93db84845e6 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 1 Aug 2019 19:01:28 +0200
Subject: [PATCH 0121/1457] Client side caching: show actual protocol and
 usage.

---
 topics/client-side-caching.md | 111 +++++++++++++++++++++++++++++++++-
 1 file changed, 110 insertions(+), 1 deletion(-)

diff --git a/topics/client-side-caching.md b/topics/client-side-caching.md
index 88925241da..562dcbad65 100644
--- a/topics/client-side-caching.md
+++ b/topics/client-side-caching.md
@@ -107,14 +107,123 @@ Using the new version of the Redis protocol, RESP3, supported by Redis 6, it is

We'll show an example, this time using the actual Redis protocol in the old RESP2 mode, of how a complete session works, involving the following steps: enabling tracking with redirection to another connection, asking for a key, and getting an invalidation message once such key gets modified.

+To start, the client opens a first connection that will be used for invalidations, requests the connection ID, and subscribes via Pub/Sub to the special channel that is used to get invalidation messages when in RESP2 mode (remember that RESP2 is the usual Redis protocol, and not the more advanced protocol that you can use, optionally, with Redis 6 using the `HELLO` command):
+
+```
+(Connection 1 -- used for invalidations)
+CLIENT ID
+:4
+SUBSCRIBE __redis__:invalidate
+*3
+$9
+subscribe
+$20
+__redis__:invalidate
+:1
+```
+
+Now we can enable tracking from the data connection:
+
+```
+(Connection 2 -- data connection)
+CLIENT TRACKING ON redirect 4
++OK
+
+GET foo
+$3
+bar
+```
+
+The client may decide to cache `"foo" => "bar"` in the local memory.
+
+A different client will now modify the value of the "foo" key:
+
+```
+(Some other unrelated connection)
+SET foo bar
++OK
+```
+
+As a result, the invalidations connection will receive a message that invalidates caching slot 1872974. That number is obtained by taking the CRC64("foo") and keeping only the 24 least significant bits.
+
+```
+(Connection 1 -- used for invalidations)
+*3
+$7
+message
+$20
+__redis__:invalidate
+$7
+1872974
+```
+
+The client will check if there are cached keys in such caching slot, and will evict the information that is no longer valid.
+
+## What tracking tracks
+
+As you can see clients do not need, by default, to tell the server what keys
+they are caching. Every key that is mentioned in the context of a read only
+command is tracked by the server, because it *could be cached*.
+
+This has the obvious advantage of not requiring the client to tell the server
+what it is caching. Moreover in many client implementations, this is what
+you want, because a good solution could be to just cache everything that is not
+already cached, using a first-in first-out approach: we may want to cache a
+fixed number of objects, so that every new piece of data we retrieve, we cache it,
+discarding the oldest cached object. More advanced implementations may instead
+drop the least used object or similar.
+
+Note that anyway if there is write traffic on the server, caching slots
+will get invalidated during the course of the time. In general when the
+server assumes that what we get we also cache, we are making a tradeoff:
+
+1. It is more efficient when the client tends to cache many things with a policy that welcomes new objects.
+2. The server will be forced to retain more data about the client keys.
+3. The client will receive useless invalidation messages about objects it did not cache.
+
+So there is an alternative described in the next section.
+
+## Opt-in caching
+
+(Note: this part is a work in progress and is not yet implemented inside Redis)
+
+Client implementations may want to cache only selected keys, and communicate
+explicitly to the server what they'll cache and what not: this will require
+more bandwidth when caching new objects, but at the same time will reduce
+the amount of data that the server has to remember, and the amount of
+invalidation messages received by the client.

+In order to do so, tracking must be enabled using the OPTIN option:

+    CLIENT TRACKING on REDIRECT 1234 OPTIN

+In this mode, by default keys mentioned in read queries *are not supposed to be cached*, instead when a client wants to cache something, it must send a special command immediately before the actual command to retrieve the data:

+    CACHING
+    +OK
+    GET foo
+    "bar"

+To make the protocol more efficient, the `CACHING` command can be sent with the
+`NOREPLY` option: in this case it will be totally silent:

+    CACHING NOREPLY
+    GET foo
+    "bar"

+The `CACHING` command affects the command executed immediately after it,
+however in case the next command is `MULTI`, all the commands in the
+transaction will be tracked. Similarly in case of Lua scripts, all the
+commands executed by the script will be tracked.

## When client side caching makes sense

# Implementing client side caching in client libraries

## What to cache

## Avoiding race conditions

## Limiting the amount of memory used by clients

## Limiting the amount of memory used by Redis
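Putting the protocol trace from the patch above into client terms, the two connections mode could be driven from redis-py roughly as follows. This is an assumption-laden sketch, not a reference implementation: it presumes a redis-py version exposing `client_id()`, and it omits error handling and the local cache itself:

```python
import redis

# Connection 1: receives invalidation messages.
inv = redis.Redis()
inv_id = inv.client_id()                 # CLIENT ID
sub = inv.pubsub()
sub.subscribe("__redis__:invalidate")    # special invalidation channel

# Connection 2: normal data connection, redirecting invalidations.
data = redis.Redis()
data.execute_command("CLIENT", "TRACKING", "ON", "REDIRECT", inv_id)

value = data.get("foo")                  # "foo" is now tracked server side

# Elsewhere, poll the invalidation connection and evict local copies:
msg = sub.get_message(timeout=1.0)
if msg and msg["type"] == "message":
    invalidated_slot = msg["data"]       # a caching slot number
    # ... drop every locally cached key hashing to that slot ...
```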
From 928bac7fdb5b3b0230f72511cdcfafecab9798af Mon Sep 17 00:00:00 2001
From: Lev Kokotov
Date: Tue, 20 Aug 2019 17:30:10 -0700
Subject: [PATCH 0122/1457] Losign --> Loosig typo

---
 topics/replication.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/topics/replication.md b/topics/replication.md
index 8544a9923f..408e7fc798 100644
--- a/topics/replication.md
+++ b/topics/replication.md
@@ -21,7 +21,7 @@ the `WAIT` command. However `WAIT` is only able to ensure that there are the
specified number of acknowledged copies in the other Redis instances, it does
not turn a set of Redis instances into a CP system with strong consistency:
acknowledged writes can still be lost during a failover, depending on the exact configuration
-of the Redis persistence. However with `WAIT` the probability of losign a write
+of the Redis persistence. However with `WAIT` the probability of losing a write
after a failure event is greatly reduced to certain hard to trigger failure
modes.

From ff7389b0d3838aed4c2135d5cc1106833f3547a5 Mon Sep 17 00:00:00 2001
From: Roy
Date: Thu, 29 Aug 2019 15:38:54 -0700
Subject: [PATCH 0123/1457] Fix typo `123` -> `1234` in streams-intro

---
 topics/streams-intro.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/topics/streams-intro.md b/topics/streams-intro.md
index c74ea33cc9..7346b4f779 100644
--- a/topics/streams-intro.md
+++ b/topics/streams-intro.md
@@ -17,7 +17,7 @@ Because streams are an append only data structure, the fundamental write command
1518951480106-0
```

-The above call to the **XADD** command adds an entry `sensor-id: 123, temperature: 19.8` to the stream at key `mystream`, using an auto-generated entry ID, which is the one returned by the command, specifically `1518951480106-0`. It gets as first argument the key name `mystream`, the second argument is the entry ID that identifies every entry inside a stream. However, in this case, we passed `*` because we want the server to generate a new ID for us. Every new ID will be monotonically increasing, so in more simple terms, every new entry added will have a higher ID compared to all the past entries. Auto-generation of IDs by the server is almost always what you want, and the reasons for specifying an ID explicitly are very rare. We'll talk more about this later. The fact that each Stream entry has an ID is another similarity with log files, where line numbers, or the byte offset inside the file, can be used in order to identify a given entry. Returning back at our **XADD** example, after the key name and ID, the next arguments are the field-value pairs composing our stream entry.
+The above call to the **XADD** command adds an entry `sensor-id: 1234, temperature: 19.8` to the stream at key `mystream`, using an auto-generated entry ID, which is the one returned by the command, specifically `1518951480106-0`. It gets as first argument the key name `mystream`, the second argument is the entry ID that identifies every entry inside a stream. However, in this case, we passed `*` because we want the server to generate a new ID for us. Every new ID will be monotonically increasing, so in more simple terms, every new entry added will have a higher ID compared to all the past entries. Auto-generation of IDs by the server is almost always what you want, and the reasons for specifying an ID explicitly are very rare. We'll talk more about this later. The fact that each Stream entry has an ID is another similarity with log files, where line numbers, or the byte offset inside the file, can be used in order to identify a given entry. Returning back at our **XADD** example, after the key name and ID, the next arguments are the field-value pairs composing our stream entry.

It is possible to get the number of items inside a Stream just using the **XLEN** command:

From 6ffda587be33cc6ff90d9ad0c2e3a3f26eb8dfee Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 12 Sep 2019 18:20:58 +0200
Subject: [PATCH 0124/1457] Section about ACL passwords and hashing.
---
 topics/acl.md | 58 +++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 54 insertions(+), 4 deletions(-)

diff --git a/topics/acl.md b/topics/acl.md
index 7f5156debe..af84aa07bb 100644
--- a/topics/acl.md
+++ b/topics/acl.md
@@ -185,7 +185,7 @@ computers to read, while `ACL LIST` is more biased towards humans.
    1) "flags"
    2) 1) "on"
    3) "passwords"
-    4) 1) "p1pp0"
+    4) 1) "2d9c75..."
    5) "commands"
    6) "-@all +get"
    7) "keys"
@@ -195,7 +195,7 @@ The `ACL GETUSER` returns a field-value array describing the user in more parsab
    > ACL GETUSER alice
    1# "flags" => 1~ "on"
-    2# "passwords" => 1) "p1pp0"
+    2# "passwords" => 1) "2d9c75..."
    3# "commands" => "-@all +get"
    4# "keys" => 1) "cached:*"
@@ -206,7 +206,7 @@ Using another `ACL SETUSER` command (from a different user, because alice cannot
    > ACL SETUSER alice ~objects:* ~items:* ~public:*
    OK
    > ACL LIST
-    1) "user alice on >p1pp0 ~cached:* ~objects:* ~items:* ~public:* -@all +get"
+    1) "user alice on >2d9c75... ~cached:* ~objects:* ~items:* ~public:* -@all +get"
    2) "user default on nopass ~* +@all"

The user representation in memory is now as we expect it to be.
@@ -239,7 +239,7 @@ Will result into myuser to be able to call both `GET` and `SET`:
Setting users ACLs by specifying all the commands one after the other is
really annoying, so instead we do things like that:

-    > ACL SETUSER antirez on +@all -@dangerous >somepassword ~*
+    > ACL SETUSER antirez on +@all -@dangerous >42a979... ~*

By saying +@all and -@dangerous we included all the commands and later removed
all the commands that are tagged as dangerous inside the Redis command table.
@@ -340,6 +340,56 @@ other commands are called.
In the previous section it was observed how it is possible to define commands
ACLs based on adding/removing single commands.

+## How passwords are stored internally
+
+Redis internally stores passwords hashed with SHA256. If you set a password
+and check the output of `ACL LIST` or `GETUSER` you'll see a long hex
+string that looks pseudo random. Here is an example, because in the previous
+examples, for the sake of brevity, the long hex string was trimmed:
+
+    > ACL GETUSER default
+    1) "flags"
+    2) 1) "on"
+       2) "allkeys"
+       3) "allcommands"
+    3) "passwords"
+    4) 1) "2d9c75273d72b32df726fb545c8a4edc719f0a95a6fd993950b10c474ad9c927"
+    5) "commands"
+    6) "+@all"
+    7) "keys"
+    8) 1) "*"
+
+Also the old command `CONFIG GET requirepass` will, starting with Redis 6,
+no longer return the clear text password, but instead the hashed password.
+
+Using SHA256 provides the ability to avoid storing the password in clear text
+while still allowing for a very fast `AUTH` command, which is a very important
+feature of Redis and is coherent with what clients expect from Redis.
+
+However ACL *passwords* are not really passwords: they are shared secrets
+between the server and the client, because in that case the password is
+not an authentication token used by a human being. For instance:
+
+ * There are no length limits, the password will just be memorized in some client software, there is no human that needs to recall a password in this context.
+ * The ACL password does not protect any other thing: it will never be, for instance, the password for some email account.
+ * Often when you are able to access the hashed password itself, by having full access to the Redis commands of a given server, or corrupting the system itself, you already have access to what such password is protecting: the Redis instance stability and the data it contains.
+
+For this reason, slowing down the password authentication by using an
+algorithm that consumes time and space, in order to make password cracking
+hard, is a very poor choice here. What we suggest instead is to generate a
+very strong password, so that even having the hash nobody will be able to
+crack it using a dictionary or a brute force attack. For this reason there
+is a special ACL command that generates passwords using the system
+cryptographic pseudorandom generator:
+
+    > ACL GENPASS
+    "0e8ad12c1962355a3eb35e0ca686343b"
+
+The command outputs a 16 byte (128 bit) pseudorandom string converted to a
+32 byte alphanumerical string. This is long enough to avoid attacks and short
+enough to be easy to manage, cut & paste, store and so forth. This is what
+you should use in order to generate Redis passwords.
+
## Using an external ACL file

## TODO list for this document

* Make sure to specify that modules commands are ignored when adding/removing categories.
* Document cost of keys matching with some benchmark.
* Document how +@all also includes module commands and every future command.
* Document how ACL SAVE is not included in CONFIG REWRITE.
* Document backward compatibility with requirepass and single argument AUTH.
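Since, as documented in the patch above, ACL passwords are stored as their SHA256 digest, the hex string reported by `ACL LIST` can be reproduced with a couple of lines of Python (the password `p1pp0` is the example one used earlier in the document):

```python
import hashlib

# ACL stores only the SHA256 digest of the password: hashing the
# example password yields the kind of 64 character hex string that
# ACL LIST / ACL GETUSER report in the "passwords" field.
print(hashlib.sha256(b"p1pp0").hexdigest())
```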
From 8c01bf7912c4b0d3b9121fe2b1520991a3a32f79 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 12 Sep 2019 18:29:40 +0200
Subject: [PATCH 0125/1457] ACL: document ACL file use.

---
 topics/acl.md | 38 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 37 insertions(+), 1 deletion(-)

diff --git a/topics/acl.md b/topics/acl.md
index af84aa07bb..0e4cae5265 100644
--- a/topics/acl.md
+++ b/topics/acl.md
@@ -392,10 +392,46 @@ you should use in order to generate Redis passwords.

## Using an external ACL file

+There are two ways in order to store users inside the Redis configuration.
+
+ 1. Users can be specified directly inside the `redis.conf` file.
+ 2. It is possible to specify an external ACL file.
+
+The two methods are *mutually incompatible*: Redis will ask you to use one
+or the other. Specifying users inside `redis.conf` is a very simple approach,
+good for simple use cases. When there are multiple users to define, in a
+complex environment, we strongly suggest using the ACL file instead.
+
+The format used inside `redis.conf` and in the external ACL file is exactly
+the same, so it is trivial to switch from one to the other, and is
+the following:
+
+    user <username> ... acl rules ...
+
+For instance:
+
+    user worker +@list +@connection ~jobs:* on >ffa9203c493aa99
+
+When you want to use an external ACL file, you are required to specify
+the configuration directive called `aclfile`, like this:
+
+    aclfile /etc/redis/users.acl
+
+When you are just specifying a few users directly inside the `redis.conf`
+file, you can use `CONFIG REWRITE` in order to store the new user configuration
+inside the file by rewriting it.
+
+The external ACL file however is more powerful. You can do the following:
+
+ * Use `ACL LOAD` if you modified the ACL file manually and you want Redis to reload the new configuration. Note that this command is able to load the file *only if all the users are correctly specified*, otherwise an error is reported to the user, and the old configuration will remain valid.
+ * Use `ACL SAVE` in order to save the current ACL configuration to the ACL file.
+
+Note that `CONFIG REWRITE` does not also trigger `ACL SAVE`: when you use
+an ACL file the configuration and the ACLs are handled separately.
+
## TODO list for this document

* Make sure to specify that modules commands are ignored when adding/removing categories.
* Document cost of keys matching with some benchmark.
* Document how +@all also includes module commands and every future command.
-* Document how ACL SAVE is not included in CONFIG REWRITE.
* Document backward compatibility with requirepass and single argument AUTH.
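For concreteness, a small hypothetical `users.acl` file following the format documented in the patch above might look like this (user names, rules and passwords are invented, modeled on the `worker` example):

    user default on nopass ~* +@all
    user worker +@list +@connection ~jobs:* on >ffa9203c493aa99
    user metrics +get ~stats:* on >a9f1efd2c3b0a01

Each line is a complete user definition, so the file can be reloaded atomically with `ACL LOAD` after manual edits.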
From cd84f10e873a30bfc52ef6dbf88a3ada46de7e51 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 26 Sep 2019 16:19:16 +0200
Subject: [PATCH 0126/1457] Improve the BGREWRITEAOF documentation.

---
 commands/bgrewriteaof.md | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/commands/bgrewriteaof.md b/commands/bgrewriteaof.md
index 0acbcbdff1..19bad25de7 100644
--- a/commands/bgrewriteaof.md
+++ b/commands/bgrewriteaof.md
@@ -8,17 +8,13 @@ If `BGREWRITEAOF` fails, no data gets lost as the old AOF will be untouched.

The rewrite will be only triggered by Redis if there is not already a
background process doing persistence.
+
Specifically:

-* If a Redis child is creating a snapshot on disk, the AOF rewrite is
-  _scheduled_ but not started until the saving child producing the RDB file
-  terminates.
-  In this case the `BGREWRITEAOF` will still return an OK code, but with an
-  appropriate message.
-  You can check if an AOF rewrite is scheduled looking at the `INFO` command
-  as of Redis 2.6.
+* If a Redis child is creating a snapshot on disk, the AOF rewrite is _scheduled_ but not started until the saving child producing the RDB file terminates. In this case the `BGREWRITEAOF` will still return a positive status reply, but with an appropriate message. You can check if an AOF rewrite is scheduled looking at the `INFO` command as of Redis 2.6 or successive versions.
* If an AOF rewrite is already in progress the command returns an error and no
  AOF rewrite will be scheduled for a later time.
+* If the AOF rewrite could start, but the attempt at starting it fails (for instance because of an error in creating the child process), an error is returned to the caller.

Since Redis 2.4 the AOF rewrite is automatically triggered by Redis, however
the `BGREWRITEAOF` command can be used to trigger a rewrite at any time.

Please refer to the [persistence documentation][tp] for detailed information.

@return

-@simple-string-reply: always `OK`.
+@simple-string-reply: A simple string reply indicating that the rewriting started or is about to start ASAP, when the call is executed with success.
+
+The command may reply with an error in certain cases, as documented above.

From 7aac47dd25805cb534a427b42147c85d33586b7c Mon Sep 17 00:00:00 2001
From: Stefan Miller
Date: Sat, 28 Sep 2019 03:16:51 +0200
Subject: [PATCH 0127/1457] Fix boolean value

---
 commands.json | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/commands.json b/commands.json
index ed3c296a2b..a3a2bd1525 100644
--- a/commands.json
+++ b/commands.json
@@ -3503,7 +3503,7 @@
      {
        "name": "ID",
        "type": "string",
-        "multiple": "true"
+        "multiple": true
      }
    ],
    "since": "5.0.0",

From 84d34d89d71720dc1cf6d9c43975b716120afe21 Mon Sep 17 00:00:00 2001
From: antirez
Date: Mon, 30 Sep 2019 17:40:20 +0200
Subject: [PATCH 0128/1457] First attempt at documenting the new Lua RESP3
 support.

---
 commands/eval.md | 56 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 56 insertions(+)

diff --git a/commands/eval.md b/commands/eval.md
index a01bdc6f18..3fa49f1443 100644
--- a/commands/eval.md
+++ b/commands/eval.md
@@ -109,6 +109,8 @@ Redis to Lua conversion rule:

* Lua boolean true -> Redis integer reply with value of 1.

+**RESP3 mode conversion rules**: note that the Lua engine can work in RESP3 mode using the new Redis 6 protocol. In this case there are additional conversion rules, and certain conversions are also modified compared to the RESP2 mode.
Please refer to the RESP3 section of this document for more information.

Also there are two important rules to note:

* Lua has a single numerical type, Lua numbers. There is no distinction between integers and floats. So we always convert Lua numbers into integer replies, removing the decimal part of the number if any. **If you want to return a float from Lua you should return it as a string**, exactly like Redis itself does (see for instance the `ZSCORE` command).
@@ -597,6 +599,60 @@ The semantic change between patch level releases was needed since the old
behavior was inherently incompatible with the Redis replication layer and
was the cause of bugs.

+## Using Lua scripting in RESP3 mode
+
+Starting with Redis version 6, the server supports two different protocols.
+One is called RESP2, and is the old protocol: all the new connections to
+the server start in this mode. However clients are able to negotiate the
+new protocol using the `HELLO` command: this way the connection is put
+in RESP3 mode. In this mode certain commands, like for instance `HGETALL`,
+reply with a new data type (the Map data type in this specific case). The
+RESP3 protocol is semantically more powerful, however most scripts are ok
+with using just RESP2.
+
+The Lua engine always assumes to run in RESP2 mode when talking with Redis,
+so whether the connection that is invoking the `EVAL` or `EVALSHA` command
+is in RESP2 or RESP3 mode, Lua scripts will, by default, still see the
+same kind of replies they used to see in the past from Redis, when calling
+commands using the `redis.call()` built-in function.
+
+However Lua scripts running in Redis 6 or greater, are able to switch to
+RESP3 mode, and get the replies using the new available types. Similarly
+Lua scripts are able to reply to clients using the new types. Please make
+sure to understand
+[the capabilities for RESP3](https://github.com/antirez/resp3)
+before continuing to read this section.
+
+In order to switch to RESP3 a script should call this function:
+
+    redis.setresp(3)
+
+Note that a script can switch back and forth between RESP3 and RESP2 by
+calling the function with the argument '3' or '2'.
+
+At this point the new conversions are available, specifically:
+
+**Redis to Lua** conversion table specific to RESP3:
+
+* Redis map reply -> Lua table with a single `map` field containing a Lua table representing the fields and values of the map.
+* Redis set reply -> Lua table with a single `set` field containing a Lua table representing the elements of the set as fields, having as value just `true`.
+* Redis new RESP3 single null value -> Lua nil.
+* Redis true reply -> Lua true boolean value.
+* Redis false reply -> Lua false boolean value.
+* Redis double reply -> Lua table with a single `score` field containing a Lua number representing the double value.
+* All the RESP2 old conversions still apply.
+
+**Lua to Redis** conversion table specific for RESP3.
+
+* Lua boolean -> Redis boolean true or false. **Note that this is a change compared to the RESP2 mode**, where returning true from Lua returned the number 1 to the Redis client, and returning false used to return NULL.
+* Lua table with a single `map` field set to a field-value Lua table -> Redis map reply.
+* Lua table with a single `set` field set to a field-value Lua table -> Redis set reply, the values are discarded and can be anything.
+* Lua table with a single `double` field set to a Lua number -> Redis double reply.
+* Lua null -> Redis RESP3 new null reply (protocol `"_\r\n"`).
+* All the RESP2 old conversions still apply unless specified above.
+
+There is one key thing to understand: in case Lua replies with RESP3 types, but the connection calling Lua is in RESP2 mode, Redis will automatically convert the RESP3 protocol to RESP2 compatible protocol, as it happens for normal commands. For instance returning a map type to a connection in RESP2 mode will have the effect of returning a flat array of fields and values.
+
## Available libraries

The Redis Lua interpreter loads the following Lua libraries:
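A tiny session may help visualize the map conversion described in the patch above. This is a hypothetical example (key and field names are invented) against a Redis 6 server supporting `redis.setresp()`:

    > HSET myhash field1 value1
    (integer) 1
    > EVAL "redis.setresp(3) local h = redis.call('HGETALL', KEYS[1]) return h['map']['field1']" 1 myhash
    "value1"

Per the conversion tables above, in RESP3 mode `HGETALL` reaches the script as a Lua table with a single `map` field, so the field value is extracted through `h['map']`.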
From 4e3dd3577802f2bd819e2a507d5aaad4c6c8ea1c Mon Sep 17 00:00:00 2001
From: antirez
Date: Fri, 4 Oct 2019 11:48:24 +0200
Subject: [PATCH 0129/1457] Document the new RM_Call() A and R modifiers.

---
 topics/modules-intro.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/topics/modules-intro.md b/topics/modules-intro.md
index 3fad95fe8f..5c33bc73d2 100644
--- a/topics/modules-intro.md
+++ b/topics/modules-intro.md
@@ -287,7 +287,9 @@ This is the full list of format specifiers:
* **s** -- RedisModuleString as received in `argv` or by other Redis module APIs returning a RedisModuleString object.
* **l** -- Long long integer.
* **v** -- Array of RedisModuleString objects.
-* **!** -- This modifier just tells the function to replicate the command to slaves and AOF. It is ignored from the point of view of arguments parsing.
+* **!** -- This modifier just tells the function to replicate the command to replicas and AOF. It is ignored from the point of view of arguments parsing.
+* **A** -- This modifier, when `!` is given, tells to suppress AOF propagation: the command will be propagated only to replicas.
+* **R** -- This modifier, when `!` is given, tells to suppress replicas propagation: the command will be propagated only to the AOF if enabled.

The function returns a `RedisModuleCallReply` object on success, on error
NULL is returned.

From 6a82fe479f79f6bd395ddbee1b2f742aa112c65e Mon Sep 17 00:00:00 2001
From: Poga Po
Date: Mon, 31 Jul 2017 11:41:04 +0800
Subject: [PATCH 0130/1457] add redis-rating to modules.json

---
 modules.json | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/modules.json b/modules.json
index d9f07952a6..2523350ba1 100644
--- a/modules.json
+++ b/modules.json
@@ -196,5 +196,14 @@
      "RedBeardLab"
    ],
    "stars": 613
+  },
+
+  {
+    "name": "redis-rating",
+    "license" : "MIT",
+    "repository": "https://github.com/poga/redis-rating",
+    "description": "Estimate actual rating from positive/negative ratings",
+    "authors": ["devpoga"],
+    "stars": 14
  }
]

From 49f995677a1507363669257d49981e33a807d684 Mon Sep 17 00:00:00 2001
From: antirez
Date: Tue, 8 Oct 2019 10:49:55 +0200
Subject: [PATCH 0131/1457] LOLWUT man page.

---
 commands/lolwut.md | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)
 create mode 100644 commands/lolwut.md

diff --git a/commands/lolwut.md b/commands/lolwut.md
new file mode 100644
index 0000000000..b06890f73c
--- /dev/null
+++ b/commands/lolwut.md
@@ -0,0 +1,29 @@
+The LOLWUT command displays the Redis version: however as a side effect of
+doing so, it also creates a piece of generative computer art that is different
+with each version of Redis. The command was introduced in Redis 5 and announced
+with this [blog post](http://antirez.com/news/123).
+ +By default the `LOLWUT` command will display the piece corresponding to the +current Redis version, however it is possible to display a specific version +using the following form: + + LOLWUT VERSION 5 ... optional other arguments ... + +Of course the "5" above is an example. Each LOLWUT version takes a different +set of arguments in order to change the output. The user is encouraged to +play with it to discover how the output changes adding more numerical +arguments. + +LOLWUT wants to be a reminder that there is more in programming than just +putting some code together in order to create something useful. Every +LOLWUT version should have the following properties: + +1. It should display some computer art. There are no limits as long the output works well in a normal terminal display. However the output should not be limited to graphics (like LOLWUT 5 and 6 actually do), but can be even generative poetry and other things. +2. LOLWUT output should be completely useless. Displaying some useful Redis internal metrics does not count as a valid LOLWUT. +3. LOLWUT output should be fast to generate so that the command can be called in production instances without issues. It should remain fast even when the user experiments with odd parameters. +4. LOLWUT implementations should be safe and carefully checked for security, and resist to untrusted inputs if they take arguments. +5. LOLWUT must always dispaly the Redis version at the end. + +@return + +@bulk-string-reply (or verbatim reply if using the RESP3 protocol): the string containing the generative computer art, and a text with the Redis version. From 8311fdd8c15719173f63a48797fa955fe6c03b97 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 8 Oct 2019 10:52:22 +0200 Subject: [PATCH 0132/1457] Add LOLWUT to commands.json. --- commands.json | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/commands.json b/commands.json index b7d343161e..7e0a7e9108 100644 --- a/commands.json +++ b/commands.json @@ -1417,6 +1417,19 @@ "since": "1.0.0", "group": "server" }, + "LOLWUT": { + "summary": "Display some computer art and the Redis version", + "arguments": [ + { + "command": "VERSION", + "name": "version", + "type": "integer", + "optional": true + } + ], + "since": "5.0.0", + "group": "server" + }, "KEYS": { "summary": "Find all keys matching the given pattern", "complexity": "O(N) with N being the number of keys in the database, under the assumption that the key names in the database and the given pattern have limited length.", From 72b78634f23057b7eab3414aa88e6e3d714d8924 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 8 Oct 2019 11:00:22 +0200 Subject: [PATCH 0133/1457] LOLWUT: invert two words to fix grammar. --- commands/lolwut.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/lolwut.md b/commands/lolwut.md index b06890f73c..ba28e7d4d1 100644 --- a/commands/lolwut.md +++ b/commands/lolwut.md @@ -7,7 +7,7 @@ By default the `LOLWUT` command will display the piece corresponding to the current Redis version, however it is possible to display a specific version using the following form: - LOLWUT VERSION 5 ... optional other arguments ... + LOLWUT VERSION 5 ... other optional arguments ... Of course the "5" above is an example. Each LOLWUT version takes a different set of arguments in order to change the output. The user is encouraged to From 097fbfa75d4f2f5e868e010548cd27f9fe0cb4e6 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 8 Oct 2019 11:02:34 +0200 Subject: [PATCH 0134/1457] Other fix to LOLWUT page. 
---
 commands/lolwut.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/commands/lolwut.md b/commands/lolwut.md
index ba28e7d4d1..a767a381ea 100644
--- a/commands/lolwut.md
+++ b/commands/lolwut.md
@@ -18,12 +18,12 @@ LOLWUT wants to be a reminder that there is more in programming than just
putting some code together in order to create something useful. Every
LOLWUT version should have the following properties:

-1. It should display some computer art. There are no limits as long the output works well in a normal terminal display. However the output should not be limited to graphics (like LOLWUT 5 and 6 actually do), but can be even generative poetry and other things.
+1. It should display some computer art. There are no limits as long as the output works well in a normal terminal display. However the output should not be limited to graphics (like LOLWUT 5 and 6 actually do), but can be generative poetry and other non graphical things.
2. LOLWUT output should be completely useless. Displaying some useful Redis internal metrics does not count as a valid LOLWUT.
3. LOLWUT output should be fast to generate so that the command can be called in production instances without issues. It should remain fast even when the user experiments with odd parameters.
4. LOLWUT implementations should be safe and carefully checked for security, and resist to untrusted inputs if they take arguments.
-5. LOLWUT must always dispaly the Redis version at the end.
+5. LOLWUT must always display the Redis version at the end.

@return

-@bulk-string-reply (or verbatim reply if using the RESP3 protocol): the string containing the generative computer art, and a text with the Redis version.
+@bulk-string-reply (or verbatim reply when using the RESP3 protocol): the string containing the generative computer art, and a text with the Redis version.

From 9d3f8dce000c5d21b38eaf4020d0bc1bba5ee761 Mon Sep 17 00:00:00 2001
From: antirez
Date: Tue, 8 Oct 2019 17:09:28 +0200
Subject: [PATCH 0135/1457] Update GEOHASH man page.

---
 commands/geohash.md | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/commands/geohash.md b/commands/geohash.md
index 2517c3f28d..6bdbe003f5 100644
--- a/commands/geohash.md
+++ b/commands/geohash.md
@@ -10,14 +10,17 @@ described in the [Wikipedia article](https://en.wikipedia.org/wiki/Geohash) and
Geohash string properties
---

-The command returns 11 characters Geohash strings, so no precision is loss
-compared to the Redis internal 52 bit representation. The returned Geohashes
-have the following properties:
+The command returns 10 characters Geohash strings, so only two bits of
+precision are lost compared to the Redis internal 52 bit representation, but
+this loss doesn't affect the precision in any sensible way: normally geohashes
+are cut to up to 8 characters, giving anyway a precision of +/- 0.019 km.

-1. They can be shortened removing characters from the right. It will lose precision but will still point to the same area.
+1. Geo hashes can be shortened by removing characters from the right. It will lose precision but will still point to the same area.
2. It is possible to use them in `geohash.org` URLs such as `http://geohash.org/<geohash-string>`. This is an [example of such URL](http://geohash.org/sqdtr74hyu0).
3. Strings with a similar prefix are nearby, but the contrary is not true: it is possible that strings with different prefixes are nearby too.
+Note: older versions of Redis used to return 11 characters instead of 10, however because of a bug the last character was not correct and did not contribute to better precision.
+
@return

@array-reply, specifically:

From cb2f6507bafbc1ef648c4d40b29fa901085c8e5f Mon Sep 17 00:00:00 2001
From: pocams
Date: Thu, 10 Oct 2019 13:24:11 -0500
Subject: [PATCH 0136/1457] Fix typo in BZPOPMIN example

---
 commands/bzpopmin.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/commands/bzpopmin.md b/commands/bzpopmin.md
index 0fefff108b..94990eae62 100644
--- a/commands/bzpopmin.md
+++ b/commands/bzpopmin.md
@@ -31,7 +31,7 @@ redis> DEL zset1 zset2
redis> ZADD zset1 0 a 1 b 2 c
(integer) 3
redis> BZPOPMIN zset1 zset2 0
-1) "zet1"
+1) "zset1"
2) "0"
2) "a"
```

From 1eece810aea9da3f39f3bda5f711fb968732c192 Mon Sep 17 00:00:00 2001
From: "Kyle J. Davis"
Date: Fri, 6 Oct 2017 13:58:54 -0600
Subject: [PATCH 0137/1457] Added note about deprecation

---
 commands.json     | 11 ++++-------
 commands/hmset.md |  2 ++
 commands/hset.md  |  7 +++----
 3 files changed, 9 insertions(+), 11 deletions(-)

diff --git a/commands.json b/commands.json
index 06a9b10719..47783ce734 100644
--- a/commands.json
+++ b/commands.json
@@ -1217,19 +1217,16 @@
  },
  "HSET": {
    "summary": "Set the string value of a hash field",
-    "complexity": "O(1)",
+    "complexity": "O(1) for each field/value pair added, so O(N) to add N field/value pairs when the command is called with multiple field/value pairs.",
    "arguments": [
      {
        "name": "key",
        "type": "key"
      },
      {
-        "name": "field",
-        "type": "string"
-      },
-      {
-        "name": "value",
-        "type": "string"
+        "name": ["field", "value"],
+        "type": ["string", "string"],
+        "multiple": true
      }
    ],
    "since": "2.0.0",
diff --git a/commands/hmset.md b/commands/hmset.md
index 8cec77585e..b06013e2f8 100644
--- a/commands/hmset.md
+++ b/commands/hmset.md
@@ -3,6 +3,8 @@ Sets the specified fields to their respective values in the hash stored at
This command overwrites any specified fields already existing in the hash.
If `key` does not exist, a new key holding a hash is created.

+As of Redis 4.0.0, HMSET is considered deprecated. Please use `HSET` in new code.
+
@return

@simple-string-reply
diff --git a/commands/hset.md b/commands/hset.md
index b4e871ec8d..f560d3b258 100644
--- a/commands/hset.md
+++ b/commands/hset.md
@@ -2,12 +2,11 @@ Sets `field` in the hash stored at `key` to `value`.
If `key` does not exist, a new key holding a hash is created.
If `field` already exists in the hash, it is overwritten.

-@return
+As of Redis 4.0.0, HSET is variadic and allows for multiple `field`/`value` pairs.

-@integer-reply, specifically:
+@return

-* `1` if `field` is a new field in the hash and `value` was set.
-* `0` if `field` already exists in the hash and the value was updated.
+@integer-reply: The number of fields that were added.
@examples From b0e4a5f910e1c1bbd67e7e5e74e6ec8af4c5bcaa Mon Sep 17 00:00:00 2001 From: Kyle Date: Fri, 11 Oct 2019 12:28:14 -0600 Subject: [PATCH 0138/1457] update list complexity and nomeclature --- commands.json | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/commands.json b/commands.json index 7e0a7e9108..a1e0b93a55 100644 --- a/commands.json +++ b/commands.json @@ -171,7 +171,7 @@ "group": "list" }, "BRPOPLPUSH": { - "summary": "Pop a value from a list, push it to another list and return it; or block until one is available", + "summary": "Pop an element from a list, push it to another list and return it; or block until one is available", "complexity": "O(1)", "arguments": [ { @@ -1481,7 +1481,7 @@ "type": "string" }, { - "name": "value", + "name": "element", "type": "string" } ], @@ -1513,15 +1513,15 @@ "group": "list" }, "LPUSH": { - "summary": "Prepend one or multiple values to a list", - "complexity": "O(1)", + "summary": "Prepend one or multiple elements to a list", + "complexity": "O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.", "arguments": [ { "name": "key", "type": "key" }, { - "name": "value", + "name": "element", "type": "string", "multiple": true } @@ -1530,15 +1530,15 @@ "group": "list" }, "LPUSHX": { - "summary": "Prepend a value to a list, only if the list exists", - "complexity": "O(1)", + "summary": "Prepend an element to a list, only if the list exists", + "complexity": "O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.", "arguments": [ { "name": "key", "type": "key" }, { - "name": "value", + "name": "element", "type": "string" } ], @@ -2135,15 +2135,15 @@ "group": "list" }, "RPUSH": { - "summary": "Append one or multiple values to a list", - "complexity": "O(1)", + "summary": "Append one or multiple elements to a list", + "complexity": "O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.", "arguments": [ { "name": "key", "type": "key" }, { - "name": "value", + "name": "element", "type": "string", "multiple": true } @@ -2152,15 +2152,15 @@ "group": "list" }, "RPUSHX": { - "summary": "Append a value to a list, only if the list exists", - "complexity": "O(1)", + "summary": "Append an element to a list, only if the list exists", + "complexity": "O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.", "arguments": [ { "name": "key", "type": "key" }, { - "name": "value", + "name": "element", "type": "string" } ], From f5a74aad71b8530f97a71e599144ab806cfde080 Mon Sep 17 00:00:00 2001 From: landergate Date: Mon, 14 Oct 2019 10:57:51 +0300 Subject: [PATCH 0139/1457] Typo on xread.md --- commands/xread.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/xread.md b/commands/xread.md index 471c9a2620..87a2b80d4d 100644 --- a/commands/xread.md +++ b/commands/xread.md @@ -186,7 +186,7 @@ And so forth. ## How multiple clients blocked on a single stream are served Blocking list operations on lists or sorted sets have a *pop* behavior. -Bascially, the element is removed from the list or sorted set in order +Basically, the element is removed from the list or sorted set in order to be returned to the client. In this scenario you want the items to be consumed in a fair way, depending on the moment clients blocked on a given key arrived. 
Normally Redis uses the FIFO semantics in this From e7eba75585235ad020c592169e40e45ddae9ef1c Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Mon, 14 Oct 2019 20:07:12 +0300 Subject: [PATCH 0140/1457] Adds reference to ZPOPMIN and ZPOPMAX (#936) --- topics/transactions.md | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/topics/transactions.md b/topics/transactions.md index ac3786071d..2da9b4e2fd 100644 --- a/topics/transactions.md +++ b/topics/transactions.md @@ -223,9 +223,11 @@ transactions. ### Using `WATCH` to implement ZPOP A good example to illustrate how `WATCH` can be used to create new -atomic operations otherwise not supported by Redis is to implement ZPOP, -that is a command that pops the element with the lower score from a -sorted set in an atomic way. This is the simplest implementation: +atomic operations otherwise not supported by Redis is to implement ZPOP +(`ZPOPMIN`, `ZPOPMAX` and their blocking variants have only been added +in version 5.0), that is a command that pops the element with the lower +score from a sorted set in an atomic way. This is the simplest +implementation: WATCH zset element = ZRANGE zset 0 0 From 60dcac3031b5303fe5d16c7f3bbe71ec22e98ac8 Mon Sep 17 00:00:00 2001 From: Timofey Stolbov Date: Mon, 14 Oct 2019 22:53:31 +0500 Subject: [PATCH 0141/1457] Fix typo (#938) --- modules.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules.json b/modules.json index 2523350ba1..70db03f5cd 100644 --- a/modules.json +++ b/modules.json @@ -140,7 +140,7 @@ "name": "ReDe", "license": "MIT", "repository": "https://github.com/TamarLabs/ReDe", - "description": "Low Latancy timed queues (Dehydrators) as Redis data types.", + "description": "Low Latency timed queues (Dehydrators) as Redis data types.", "authors": [ "daTokenizer" ], From 5ca468607e2512fb346d209ad8c08531b29a5337 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Mon, 14 Oct 2019 20:57:37 +0300 Subject: [PATCH 0142/1457] Adds Lua metrics to INFO and MEMORY STATS (#947) As per https://github.com/antirez/redis/pull/4883 Signed-off-by: Itamar Haber --- commands/info.md | 2 ++ commands/memory-stats.md | 2 ++ 2 files changed, 4 insertions(+) diff --git a/commands/info.md b/commands/info.md index 421c588b26..b348c98531 100644 --- a/commands/info.md +++ b/commands/info.md @@ -102,6 +102,8 @@ Here is the meaning of all fields in the **memory** section: * `total_system_memory_human`: Human readable representation of previous value * `used_memory_lua`: Number of bytes used by the Lua engine * `used_memory_lua_human`: Human readable representation of previous value +* `used_memory_scripts`: Number of bytes used by cached Lua scripts +* `used_memory_scripts_human`: Human readable representation of previous value * `maxmemory`: The value of the `maxmemory` configuration directive * `maxmemory_human`: Human readable representation of previous value * `maxmemory_policy`: The value of the `maxmemory-policy` configuration diff --git a/commands/memory-stats.md b/commands/memory-stats.md index a2db9347a7..54d1728d0c 100644 --- a/commands/memory-stats.md +++ b/commands/memory-stats.md @@ -19,6 +19,8 @@ values. 
The following metrics are reported: * `aof.buffer`: The summed size in bytes of the current and rewrite AOF buffers (see `INFO`'s `aof_buffer_length` and `aof_rewrite_buffer_length`, respectively) +* `lua.caches`: the summed size in bytes of the overheads of the Lua scripts' + caches * `dbXXX`: For each of the server's databases, the overheads of the main and expiry dictionaries (`overhead.hashtable.main` and `overhead.hashtable.expires`, respectively) are reported in bytes From 1459c95d2146a3ae74083d99b9366547a5ce1008 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aleix=20Conchillo=20Flaqu=C3=A9?= Date: Mon, 14 Oct 2019 11:42:14 -0700 Subject: [PATCH 0143/1457] clients: added guile-redis (#955) --- clients.json | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/clients.json b/clients.json index e0211721d5..de07074dbf 100644 --- a/clients.json +++ b/clients.json @@ -1557,13 +1557,22 @@ "active": true }, - { + { "name": "Redis_MTC", "language": "Xojo", "repository": "https://github.com/ktekinay/XOJO-Redis/", "description": "A Xojo library to connect to a Redis server.", "authors": ["kemtekinay"], "active": true + }, + + { + "name": "guile-redis", + "language": "Scheme", + "repository": "https://github.com/aconchillo/guile-redis", + "description": "A Redis client for Guile", + "authors": ["aconchillo"], + "active": true } - + ] From 048f59de698a956bda387c830b1f6400e3b3ce77 Mon Sep 17 00:00:00 2001 From: Tuoris Date: Mon, 14 Oct 2019 21:45:55 +0300 Subject: [PATCH 0144/1457] Add a note about redis-check-dump to quickstart.md (#957) --- topics/quickstart.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/quickstart.md b/topics/quickstart.md index 1d915c0840..4818a16475 100644 --- a/topics/quickstart.md +++ b/topics/quickstart.md @@ -34,7 +34,7 @@ At this point you can try if your build works correctly by typing **make test**, * **redis-sentinel** is the Redis Sentinel executable (monitoring and failover). * **redis-cli** is the command line interface utility to talk with Redis. * **redis-benchmark** is used to check Redis performances. -* **redis-check-aof** and **redis-check-dump** are useful in the rare event of corrupted data files. +* **redis-check-aof** and **redis-check-rdb** (**redis-check-dump** in 3.0 and below) are useful in the rare event of corrupted data files. It is a good idea to copy both the Redis server and the command line interface in proper places, either manually using the following commands: From 46ebe3119bb7e99c5442b2eebd9658efba8ad43d Mon Sep 17 00:00:00 2001 From: Evan Summers Date: Mon, 14 Oct 2019 20:48:48 +0200 Subject: [PATCH 0145/1457] Typo corrections in protocol.md (#958) --- topics/protocol.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/protocol.md b/topics/protocol.md index 40f268acdd..9753cd0d34 100644 --- a/topics/protocol.md +++ b/topics/protocol.md @@ -263,8 +263,8 @@ The above RESP data type encodes a two elements Array consisting of an Array tha Null elements in Arrays ----------------------- -Single elements of an Array may be Null. This is used in Redis replies in -order to signal that this elements are missing and not empty strings. This +Single elements of an Array may be Null. This is used in Redis replies in +order to signal that the element is missing and not an empty string. This can happen with the SORT command when used with the GET _pattern_ option when the specified key is missing. 
Example of an Array reply containing a Null element: @@ -281,7 +281,7 @@ like this: ["foo",nil,"bar"] -Note that this is not an exception to what said in the previous sections, but +Note that this is not an exception to what was said in the previous sections, but just an example to further specify the protocol. Sending commands to a Redis Server From d4f499e694805ceb7a8d460aba16c8c9c4351b39 Mon Sep 17 00:00:00 2001 From: Howard Chung Date: Mon, 14 Oct 2019 12:26:52 -0700 Subject: [PATCH 0146/1457] Fix example in Streams (#976) * Fix example in Streams * fix typo --- topics/streams-intro.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/streams-intro.md b/topics/streams-intro.md index 1d00ec0788..17a9b09f75 100644 --- a/topics/streams-intro.md +++ b/topics/streams-intro.md @@ -237,7 +237,7 @@ As you can see in the command above when creating the consumer group we have to Now that the consumer group is created we can immediately start trying to read messages via the consumer group, by using the **XREADGROUP** command. We'll read from the consumers, that we will call Alice and Bob, to see how the system will return different messages to Alice and Bob. -**XREADGROUP** is very similar yo **XREAD** and provides the same **BLOCK** option, otherwise it is a synchronous command. However there is a *mandatory* option that must be always specified, which is **GROUP** and has two arguments: the name of the consumer group, and the name of the consumer that is attempting to read. The option **COUNT** is also supported and is identical to the one in **XREAD**. +**XREADGROUP** is very similar to **XREAD** and provides the same **BLOCK** option, otherwise it is a synchronous command. However there is a *mandatory* option that must be always specified, which is **GROUP** and has two arguments: the name of the consumer group, and the name of the consumer that is attempting to read. The option **COUNT** is also supported and is identical to the one in **XREAD**. 
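To give an idea of the shape of such a call before the examples that follow, the consumer Alice reading at most one new message from a hypothetical group `mygroup` on `mystream` could look like this (group and stream names are illustrative):

    XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS mystream >

The special `>` ID means: messages that were never delivered to any other consumer of this group so far.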
Before reading from the stream, let's put some messages inside: From b639908f5a730c276f90ee776e0d8d71ba6cfdee Mon Sep 17 00:00:00 2001 From: njb_said Date: Mon, 14 Oct 2019 20:28:32 +0100 Subject: [PATCH 0147/1457] Fix mixed content warning on pipelining (#977) Image was http, should be https --- topics/pipelining.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/pipelining.md b/topics/pipelining.md index 32b40fe991..a13c801de7 100644 --- a/topics/pipelining.md +++ b/topics/pipelining.md @@ -78,7 +78,7 @@ initially increases almost linearly with longer pipelines, and eventually reaches 10 times the baseline obtained not using pipelining, as you can see from the following graph: -![Pipeline size and IOPs](http://redis.io/images/redisdoc/pipeline_iops.png) +![Pipeline size and IOPs](https://redis.io/images/redisdoc/pipeline_iops.png) Some real world code example --- From acaa88e4b801f350f2f2360ac444cdc4306a5efa Mon Sep 17 00:00:00 2001 From: Shogo Date: Tue, 15 Oct 2019 04:29:25 +0900 Subject: [PATCH 0148/1457] fix typo: si -> is (#978) --- topics/modules-blocking-ops.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/modules-blocking-ops.md b/topics/modules-blocking-ops.md index 87409de7f5..6bb6e0adb9 100644 --- a/topics/modules-blocking-ops.md +++ b/topics/modules-blocking-ops.md @@ -16,7 +16,7 @@ Redis modules have the ability to implement blocking commands as well, this documentation shows how the API works and describes a few patterns that can be used in order to model blocking commands. -NOTE: This API si currently *experimental*, so it can only be used if +NOTE: This API is currently *experimental*, so it can only be used if the macro `REDISMODULE_EXPERIMENTAL_API` is defined. This is required because these calls are still not in their final stage of design, so may change in the future, certain parts may be reprecated and so forth. From e0695850b2994ef4df7cd2418e7bf4b3bc1fe3e9 Mon Sep 17 00:00:00 2001 From: Shogo Date: Tue, 15 Oct 2019 04:33:03 +0900 Subject: [PATCH 0149/1457] Fix some mistake of modules intro (#980) * fix typographical error, 'se' character * fix method name INCR -> INCRBY --- topics/modules-intro.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/topics/modules-intro.md b/topics/modules-intro.md index 5c33bc73d2..21dfb9c8be 100644 --- a/topics/modules-intro.md +++ b/topics/modules-intro.md @@ -270,7 +270,7 @@ number "10" as second argument (the increment), I'll use the following function call: RedisModuleCallReply *reply; - reply = RedisModule_Call(ctx,"INCR","sc",argv[1],"10"); + reply = RedisModule_Call(ctx,"INCRBY","sc",argv[1],"10"); The first argument is the context, and the second is always a null terminated C string with the command name. The third argument is the format specifier @@ -308,7 +308,7 @@ In order to obtain the type or reply (corresponding to one of the data types supported by the Redis protocol), the function `RedisModule_CallReplyType()` is used: - reply = RedisModule_Call(ctx,"INCR","sc",argv[1],"10"); + reply = RedisModule_Call(ctx,"INCRBY","sc",argv[1],"10"); if (RedisModule_CallReplyType(reply) == REDISMODULE_REPLY_INTEGER) { long long myval = RedisModule_CallReplyInteger(reply); /* Do something with myval. */ @@ -451,8 +451,8 @@ with a special argument to `RedisModule_ReplyWithArray()`: RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_ARRAY_LEN); The above call starts an array reply so we can use other `ReplyWith` calls -in order to produce the array items. 
Finally in order to set the length -se use the following call: +in order to produce the array items. Finally in order to set the length, +use the following call: RedisModule_ReplySetArrayLength(ctx, number_of_items); @@ -739,7 +739,7 @@ When using the higher level APIs to invoke commands, replication happens automatically if you use the "!" modifier in the format string of `RedisModule_Call()` as in the following example: - reply = RedisModule_Call(ctx,"INCR","!sc",argv[1],"10"); + reply = RedisModule_Call(ctx,"INCRBY","!sc",argv[1],"10"); As you can see the format specifier is `"!sc"`. The bang is not parsed as a format specifier, but it internally flags the command as "must replicate". From 01c868dc6cecb6887c8d0377e54e8ed0d2be02e4 Mon Sep 17 00:00:00 2001 From: Juan Mellado Date: Mon, 14 Oct 2019 22:38:27 +0200 Subject: [PATCH 0150/1457] Added dartis to clients.json (#1171) --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index de07074dbf..d233c762b2 100644 --- a/clients.json +++ b/clients.json @@ -1573,6 +1573,15 @@ "description": "A Redis client for Guile", "authors": ["aconchillo"], "active": true + }, + + { + "name": "dartis", + "language": "Dart", + "repository": "https://github.com/jcmellado/dartis", + "description": "A Redis client for Dart 2", + "authors": [], + "active": true } ] From 0ac10eb7e781c293403b99ab0bbaf385bc169fdc Mon Sep 17 00:00:00 2001 From: Robin Dupret Date: Mon, 14 Oct 2019 22:45:12 +0200 Subject: [PATCH 0151/1457] Fix a broken link (#1149) The article is no longer available so let's fix the link to a reworked version of it. Also, the 1 and 2 values are no longer swapped in this article but let's keep a link to the `proc` man page as it can be useful and simpler than the whole article. --- topics/faq.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/topics/faq.md b/topics/faq.md index 1b2be3c803..3dc209cec7 100644 --- a/topics/faq.md +++ b/topics/faq.md @@ -111,11 +111,10 @@ more optimistic allocation fashion, and this is indeed what you want for Redis. A good source to understand how Linux Virtual Memory works and other alternatives for `overcommit_memory` and `overcommit_ratio` is this classic from Red Hat Magazine, ["Understanding Virtual Memory"][redhatvm]. -Beware, this article had `1` and `2` configuration values for `overcommit_memory` -reversed: refer to the [proc(5)][proc5] man page for the right meaning of the +You can also refer to the [proc(5)][proc5] man page for explanations of the available values. -[redhatvm]: http://www.redhat.com/magazine/001nov04/features/vm/ +[redhatvm]: https://people.redhat.com/nhorman/papers/rhel3_vm.pdf [proc5]: http://man7.org/linux/man-pages/man5/proc.5.html ## Are Redis on-disk-snapshots atomic? From 0f18cdac1e6fead20e50b9d86223713fb42c2aa6 Mon Sep 17 00:00:00 2001 From: Kevin James Date: Mon, 14 Oct 2019 13:46:26 -0700 Subject: [PATCH 0152/1457] fix(cluster-nodes): include cport reference (#986) Fixes missing reference to `cport` value in CLUSTER NODES output as first introduced by antirez/redis#1c038379f7 and antirez/redis#b841f3ad1a and released in Redis v4.0.0. --- commands/cluster-nodes.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/commands/cluster-nodes.md b/commands/cluster-nodes.md index 5e74d0b694..0b8c9f9ad0 100644 --- a/commands/cluster-nodes.md +++ b/commands/cluster-nodes.md @@ -21,24 +21,24 @@ each line represents a node in the cluster. 
The following is an example of output: ``` -07c37dfeb235213a872192d90877d0cd55635b91 127.0.0.1:30004 slave e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca 0 1426238317239 4 connected -67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1 127.0.0.1:30002 master - 0 1426238316232 2 connected 5461-10922 -292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f 127.0.0.1:30003 master - 0 1426238318243 3 connected 10923-16383 -6ec23923021cf3ffec47632106199cb7f496ce01 127.0.0.1:30005 slave 67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1 0 1426238316232 5 connected -824fe116063bc5fcf9f4ffd895bc17aee7731ac3 127.0.0.1:30006 slave 292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f 0 1426238317741 6 connected -e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca 127.0.0.1:30001 myself,master - 0 0 1 connected 0-5460 +07c37dfeb235213a872192d90877d0cd55635b91 127.0.0.1:30004@31004 slave e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca 0 1426238317239 4 connected +67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1 127.0.0.1:30002@31002 master - 0 1426238316232 2 connected 5461-10922 +292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f 127.0.0.1:30003@31003 master - 0 1426238318243 3 connected 10923-16383 +6ec23923021cf3ffec47632106199cb7f496ce01 127.0.0.1:30005@31005 slave 67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1 0 1426238316232 5 connected +824fe116063bc5fcf9f4ffd895bc17aee7731ac3 127.0.0.1:30006@31006 slave 292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f 0 1426238317741 6 connected +e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca 127.0.0.1:30001@31001 myself,master - 0 0 1 connected 0-5460 ``` Each line is composed of the following fields: ``` - ... + ... ``` The meaning of each filed is the following: 1. `id`: The node ID, a 40 characters random string generated when a node is created and never changed again (unless `CLUSTER RESET HARD` is used). -2. `ip:port`: The node address where clients should contact the node to run queries. +2. `ip:port@cport`: The node address where clients should contact the node to run queries. 3. `flags`: A list of comma separated flags: `myself`, `master`, `slave`, `fail?`, `fail`, `handshake`, `noaddr`, `noflags`. Flags are explained in detail in the next section. 4. `master`: If the node is a replica, and the master is known, the master node ID, otherwise the "-" character. 5. `ping-sent`: Milliseconds unix time at which the currently active ping was sent, or zero if there are no pending pings. From 75edfaa51bb3fcccf1c92a9ed8185d33084b8759 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Mon, 14 Oct 2019 23:49:05 +0300 Subject: [PATCH 0153/1457] Adds return value to CLIENT UNBLOCK (#993) --- commands/client-unblock.md | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/commands/client-unblock.md b/commands/client-unblock.md index 1a43e4c72f..9e276df444 100644 --- a/commands/client-unblock.md +++ b/commands/client-unblock.md @@ -49,3 +49,10 @@ NULL > BRPOP key1 key2 key3 key4 0 (client is blocked again) ``` + +@return + +@integer-reply, specifically: + +* `1` if the client was unblocked successfully. +* `0` if the client wasn't unblocked. From 88c160e991ca3db0f187f2393b896845b468ae2c Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Mon, 14 Oct 2019 23:49:51 +0300 Subject: [PATCH 0154/1457] Adds return to CLIENT ID (#994) --- commands/client-id.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/commands/client-id.md b/commands/client-id.md index 53fcac5016..fe6723c513 100644 --- a/commands/client-id.md +++ b/commands/client-id.md @@ -12,3 +12,9 @@ introduced also in Redis 5 together with `CLIENT ID`. 
Check the `CLIENT UNBLOCK` ```cli CLIENT ID ``` + +@return + +@integer-reply + +The id of the client. From 3f8539688355de1f55ae800c76e9f82d89388f7c Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Mon, 14 Oct 2019 23:53:48 +0300 Subject: [PATCH 0155/1457] Adds warlus to clients.json (#996) Fixes #995 --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index d233c762b2..04ecd7af4a 100644 --- a/clients.json +++ b/clients.json @@ -1564,6 +1564,15 @@ "description": "A Xojo library to connect to a Redis server.", "authors": ["kemtekinay"], "active": true + } + + { + "name": "walrus", + "language": "Python", + "repository": "https://github.com/coleifer/walrus", + "description": "Lightweight Python utilities for working with Redis.", + "authors": [], + "recommended": true, }, { From f6a38d0e59c7a4d35fe4385b91f92625229b7c6d Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Tue, 15 Oct 2019 00:34:03 +0300 Subject: [PATCH 0156/1457] Revert "Adds warlus to clients.json (#996)" (#1172) This reverts commit 3f8539688355de1f55ae800c76e9f82d89388f7c. --- clients.json | 9 --------- 1 file changed, 9 deletions(-) diff --git a/clients.json b/clients.json index 04ecd7af4a..d233c762b2 100644 --- a/clients.json +++ b/clients.json @@ -1564,15 +1564,6 @@ "description": "A Xojo library to connect to a Redis server.", "authors": ["kemtekinay"], "active": true - } - - { - "name": "walrus", - "language": "Python", - "repository": "https://github.com/coleifer/walrus", - "description": "Lightweight Python utilities for working with Redis.", - "authors": [], - "recommended": true, }, { From 29cc7bce84be1951d768827fa124faccf7c2cefe Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Tue, 15 Oct 2019 00:45:01 +0300 Subject: [PATCH 0157/1457] Updates clients.json with Wurlus (#1173) --- clients.json | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/clients.json b/clients.json index d233c762b2..520b6af9a8 100644 --- a/clients.json +++ b/clients.json @@ -1582,6 +1582,15 @@ "description": "A Redis client for Dart 2", "authors": [], "active": true - } + }, + { + "name": "walrus", + "language": "Python", + "repository": "https://github.com/coleifer/walrus", + "description": "Lightweight Python utilities for working with Redis.", + "authors": [], + "recommended": true, + "active": true + } ] From ac8650d0b5f2e5a7ac95dc85a5929c8550fe5159 Mon Sep 17 00:00:00 2001 From: Artem Labazin Date: Tue, 15 Oct 2019 16:07:09 +0300 Subject: [PATCH 0158/1457] Update modules.json (#965) * Update modules.json Add redis-fpn module description * Update modules.json * Update modules.json Add redis-fpn module description --- modules.json | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/modules.json b/modules.json index 70db03f5cd..43c1bf4c6a 100644 --- a/modules.json +++ b/modules.json @@ -204,6 +204,16 @@ "repository": "https://github.com/poga/redis-rating", "description": "Estimate actual rating from postive/negative ratings", "authors": ["devpoga"], - "stars": 14 + "stars": 563 + }, + { + "name": "redis-fpn", + "license": "Apache 2.0", + "repository": "https://github.com/infobip/redis-fpn", + "description": "Redis module for Fixed Point Number data type", + "authors": [ + "xxlabaza" + ], + "stars": 7 } ] From d51f8dacb0123fd78765e9509cf2a76f1d6ad6f2 Mon Sep 17 00:00:00 2001 From: Gandalf Date: Tue, 15 Oct 2019 21:17:01 +0800 Subject: [PATCH 0159/1457] Limithit patch 1 
(#1174) * Update modules.json * Update modules.json --- modules.json | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/modules.json b/modules.json index 43c1bf4c6a..8b2d56ac3f 100644 --- a/modules.json +++ b/modules.json @@ -204,8 +204,18 @@ "repository": "https://github.com/poga/redis-rating", "description": "Estimate actual rating from postive/negative ratings", "authors": ["devpoga"], - "stars": 563 + "stars": 14 }, + + { + "name": "RedisPushIptables", + "license": "GPL-3.0", + "repository": "https://github.com/limithit/RedisPushIptables", + "description": "RedisPushIptables is used to update firewall rules to reject the IP addresses for a specified amount of time or forever reject.", + "authors": ["Gandalf"], + "stars": 16 + }, + { "name": "redis-fpn", "license": "Apache 2.0", From 0f5f9f81991bcc170f9c54e783c54053bbe5275b Mon Sep 17 00:00:00 2001 From: patrikx3 Date: Tue, 15 Oct 2019 15:28:21 +0200 Subject: [PATCH 0160/1457] added p3x-redis-ui to the redis-doc repository (#1175) * added p3x-redis-ui to the redis-doc repository * added p3x-redis-ui to the redis-doc repository --- tools.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/tools.json b/tools.json index cf025b7e0a..b2ab76bac5 100644 --- a/tools.json +++ b/tools.json @@ -634,6 +634,13 @@ "description": "Cross platform GUI tool for redis that includes support for ReJSON", "authors": ["anandtrex"] }, + { + "name": "p3x-redis-ui", + "language": "javascript", + "repository": "https://github.com/patrikx3/p3x-redis-ui", + "description": "📡 P3X Redis UI that uses Socket.IO, AngularJs Material and IORedis with statistics, console - terminal, tree, dark mode, internationalization, multiple connections, web and desktop by Electron. Works as an app without Node.JS GUI or with the latest Node.Js version. Can test it at https://p3x.redis.patrikx3.com/.", + "authors": ["patrikx3"] + }, { "name": "Redis Server", "language": "Xojo", @@ -642,4 +649,5 @@ "url":"https://github.com/ktekinay/XOJO-Redis/releases/", "authors": ["KemTekinay"] } + ] From 0abdbbe711b7abf139960c24a879757e98d562d8 Mon Sep 17 00:00:00 2001 From: Guy Korland Date: Tue, 15 Oct 2019 16:33:48 +0300 Subject: [PATCH 0161/1457] Update modules license and added smartcache (#948) 1. Update modules license 2. Added smarcache 3. 
updated stars --- modules.json | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-) diff --git a/modules.json b/modules.json index 8b2d56ac3f..85b2c6895b 100644 --- a/modules.json +++ b/modules.json @@ -190,14 +190,13 @@ "name": "rediSQL", "license": "AGPL-3.0", "repository": "https://github.com/RedBeardLab/rediSQL", - "description": "A redis module that provide full SQL capabilities embedding SQLite", + "description": "A redis module that provides a full SQL capabilities embedding SQLite", "authors": [ "siscia", "RedBeardLab" ], "stars": 613 }, - { "name": "redis-rating", "license" : "MIT", @@ -206,7 +205,18 @@ "authors": ["devpoga"], "stars": 14 }, - + + { + "name": "smartcache", + "license": "AGPL-3.0", + "repository": "https://github.com/fcerbell/redismodule-smartcache", + "description": "A redis module that provides a pass-through cache", + "authors": [ + "fcerbelle" + ], + "stars": 2 + }, + { "name": "RedisPushIptables", "license": "GPL-3.0", From b5830ec84b4a8ded14e176986c221ceb3df44824 Mon Sep 17 00:00:00 2001 From: patrikx3 Date: Tue, 15 Oct 2019 19:06:58 +0200 Subject: [PATCH 0162/1457] added p3x-redis-ui to the redis-doc repository - type/fix url for github (#1176) --- tools.json | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/tools.json b/tools.json index b2ab76bac5..6bd37c341a 100644 --- a/tools.json +++ b/tools.json @@ -637,7 +637,8 @@ { "name": "p3x-redis-ui", "language": "javascript", - "repository": "https://github.com/patrikx3/p3x-redis-ui", + "repository": "https://github.com/patrikx3/redis-ui/", + "url":"https://pages.corifeus.com/redis-ui/", "description": "📡 P3X Redis UI that uses Socket.IO, AngularJs Material and IORedis with statistics, console - terminal, tree, dark mode, internationalization, multiple connections, web and desktop by Electron. Works as an app without Node.JS GUI or with the latest Node.Js version. Can test it at https://p3x.redis.patrikx3.com/.", "authors": ["patrikx3"] }, From c8e9b740763af856c57eb875ef951a9f7cacb5af Mon Sep 17 00:00:00 2001 From: Roey Prat Date: Tue, 15 Oct 2019 23:40:56 +0300 Subject: [PATCH 0163/1457] BZPOPMAX and BZPOPMIN: correcting the order of returned values (#997) --- commands/bzpopmax.md | 6 +++--- commands/bzpopmin.md | 6 +++--- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/commands/bzpopmax.md b/commands/bzpopmax.md index 6eb712f520..1c99c247a4 100644 --- a/commands/bzpopmax.md +++ b/commands/bzpopmax.md @@ -20,8 +20,8 @@ with the highest scores instead of popping the ones with the lowest scores. * A `nil` multi-bulk when no element could be popped and the timeout expired. * A three-element multi-bulk with the first element being the name of the key - where a member was popped, the second element being the score of the popped - member, and the third element being the popped member itself. + where a member was popped, the second element is the popped member itself, + and the third element is the score of the popped element. @examples @@ -32,6 +32,6 @@ redis> ZADD zset1 0 a 1 b 2 c (integer) 3 redis> BZPOPMAX zset1 zset2 0 1) "zset1" -2) "2" 2) "c" +3) "2" ``` diff --git a/commands/bzpopmin.md b/commands/bzpopmin.md index 94990eae62..1b75aa4f4a 100644 --- a/commands/bzpopmin.md +++ b/commands/bzpopmin.md @@ -20,8 +20,8 @@ popped from. * A `nil` multi-bulk when no element could be popped and the timeout expired. 
* A three-element multi-bulk with the first element being the name of the key - where a member was popped, the second element being the score of the popped - member, and the third element being the popped member itself. + where a member was popped, the second element is the popped member itself, + and the third element is the score of the popped element. @examples @@ -32,6 +32,6 @@ redis> ZADD zset1 0 a 1 b 2 c (integer) 3 redis> BZPOPMIN zset1 zset2 0 1) "zset1" -2) "0" 2) "a" +3) "0" ``` From 5f8bd4a59c12e5487f20a23afd735e7874bb6298 Mon Sep 17 00:00:00 2001 From: Chirag Aggarwal Date: Wed, 16 Oct 2019 03:47:59 +0700 Subject: [PATCH 0164/1457] Fix/redis cluster doc consistency guarantees grammer (#1003) * Fixes grammer in Redus Cluster Tutorial Doc * Fixing grammer of Redis Cluster: Consistency Guarantees section --- topics/cluster-tutorial.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index c4f17b5a51..959a2f9d4f 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -155,7 +155,7 @@ happens: * The master B replies OK to your client. * The master B propagates the write to its slaves B1, B2 and B3. -As you can see B does not wait for an acknowledge from B1, B2, B3 before +As you can see, B does not wait for an acknowledgement from B1, B2, B3 before replying to the client, since this would be a prohibitive latency penalty for Redis, so if your client writes something, B acknowledges the write, but crashes before being able to send the write to its slaves, one of the From e182cab3bd2205b93fe6544722ec80fe607d13b2 Mon Sep 17 00:00:00 2001 From: Hamid Alaei Varnosfaderani Date: Wed, 16 Oct 2019 00:18:49 +0330 Subject: [PATCH 0165/1457] add laravel queue redis module (#998) --- modules.json | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/modules.json b/modules.json index 85b2c6895b..d665915400 100644 --- a/modules.json +++ b/modules.json @@ -197,6 +197,16 @@ ], "stars": 613 }, + + { + "name": "lqrm", + "license": "BSD", + "repository": "https://github.com/halaei/lqrm", + "description": "A Laravel compatible queue driver for Redis that supports reliable blocking pop from FIFO and scheduled queues.", + "authors": [], + "stars": 4 + }, + { "name": "redis-rating", "license" : "MIT", From 5e8bbe80f52cfc2f8b0481c4000eea527ddbf14c Mon Sep 17 00:00:00 2001 From: sewenew Date: Wed, 16 Oct 2019 04:49:54 +0800 Subject: [PATCH 0166/1457] Add a c++ client: redis-plus-plus (#1000) --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index 520b6af9a8..ceba053bd2 100644 --- a/clients.json +++ b/clients.json @@ -1566,6 +1566,15 @@ "active": true }, + { + "name": "redis-plus-plus", + "language": "C++", + "repository": "https://github.com/sewenew/redis-plus-plus", + "description": "This is a Redis client, based on hiredis and written in C++11. 
It supports scripting, pub/sub, pipeline, transaction, Redis Cluster, connection pool and thread safety.",
+    "authors": ["sewenew"],
+    "active": true
+  },
+
   {
     "name": "guile-redis",
     "language": "Scheme",

From 6e2d21ed651fc4a1614c56635c7012fb4ed13c9d Mon Sep 17 00:00:00 2001
From: Chirag Aggarwal
Date: Wed, 16 Oct 2019 03:57:26 +0700
Subject: [PATCH 0167/1457] Fixing redis cluster configurations section grammer
 (#1005)

---
 topics/cluster-tutorial.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md
index 959a2f9d4f..9e6b27ba37 100644
--- a/topics/cluster-tutorial.md
+++ b/topics/cluster-tutorial.md
@@ -218,8 +218,8 @@ let's introduce the configuration parameters that Redis Cluster introduces
 in the `redis.conf` file. Some will be obvious, others will be more clear
 as you continue reading.

-* **cluster-enabled ``**: If yes enables Redis Cluster support in a specific Redis instance. Otherwise the instance starts as a stand alone instance as usual.
-* **cluster-config-file ``**: Note that despite the name of this option, this is not an user editable configuration file, but the file where a Redis Cluster node automatically persists the cluster configuration (the state, basically) every time there is a change, in order to be able to re-read it at startup. The file lists things like the other nodes in the cluster, their state, persistent variables, and so forth. Often this file is rewritten and flushed on disk as a result of some message reception.
+* **cluster-enabled ``**: If yes, enables Redis Cluster support in a specific Redis instance. Otherwise the instance starts as a stand alone instance as usual.
+* **cluster-config-file ``**: Note that despite the name of this option, this is not a user editable configuration file, but the file where a Redis Cluster node automatically persists the cluster configuration (the state, basically) every time there is a change, in order to be able to re-read it at startup. The file lists things like the other nodes in the cluster, their state, persistent variables, and so forth. Often this file is rewritten and flushed on disk as a result of some message reception.
 * **cluster-node-timeout ``**: The maximum amount of time a Redis Cluster node can be unavailable, without it being considered as failing. If a master node is not reachable for more than the specified amount of time, it will be failed over by its slaves. This parameter controls other important things in Redis Cluster. Notably, every node that can't reach the majority of master nodes for the specified amount of time, will stop accepting queries.
 * **cluster-slave-validity-factor ``**: If set to zero, a slave will always try to failover a master, regardless of the amount of time the link between the master and the slave remained disconnected. If the value is positive, a maximum disconnection time is calculated as the *node timeout* value multiplied by the factor provided with this option, and if the node is a slave, it will not try to start a failover if the master link was disconnected for more than the specified amount of time. For example if the node timeout is set to 5 seconds, and the validity factor is set to 10, a slave disconnected from the master for more than 50 seconds will not try to failover its master. Note that any value different than zero may result in Redis Cluster to be unavailable after a master failure if there is no slave able to failover it.
In that case the cluster will return back available only when the original master rejoins the cluster. * **cluster-migration-barrier ``**: Minimum number of slaves a master will remain connected with, for another slave to migrate to a master which is no longer covered by any slave. See the appropriate section about replica migration in this tutorial for more information. From 95096fd8ebc6d7e52c1d34f9c6e531864f8dfb3c Mon Sep 17 00:00:00 2001 From: Chris Tanner Date: Tue, 15 Oct 2019 14:02:55 -0700 Subject: [PATCH 0168/1457] Spelling/grammar fixes (#1009) * grammar/spelling in ARM.md, internals, introduction, latency, ldb removed message about 3.2 being unstable in ldb.md * grammar corrections for ldb, lru-cache, mass-insert, memory optimisations * grammar/typos for notifications, partitioning, persistence, pipelining, problems, protocol.md * protocol, quickstart, rediscli.md grammar * rediscli, replication, security, sentinel.md grammar/typos --- topics/ARM.md | 10 +++---- topics/internals-rediseventlib.md | 4 +-- topics/internals-sds.md | 2 +- topics/introduction.md | 4 +-- topics/latency-monitor.md | 20 +++++++------- topics/latency.md | 29 ++++++++++----------- topics/ldb.md | 12 ++++----- topics/lru-cache.md | 10 +++---- topics/mass-insert.md | 7 +++-- topics/memory-optimization.md | 22 ++++++++-------- topics/notifications.md | 10 +++---- topics/partitioning.md | 10 +++---- topics/persistence.md | 15 +++++------ topics/pipelining.md | 4 +-- topics/problems.md | 2 +- topics/protocol.md | 14 +++++----- topics/quickstart.md | 12 ++++----- topics/rediscli.md | 14 +++++----- topics/replication.md | 14 +++++----- topics/security.md | 22 ++++++++-------- topics/sentinel-clients.md | 6 ++--- topics/sentinel.md | 43 +++++++++++++++---------------- 22 files changed, 139 insertions(+), 147 deletions(-) diff --git a/topics/ARM.md b/topics/ARM.md index 23d2f22cbc..a7048bc789 100644 --- a/topics/ARM.md +++ b/topics/ARM.md @@ -11,7 +11,7 @@ to also make it an officially supported platform. We believe that Redis is ideal for IoT and Embedded devices for several reasons: -* Redis has a very small memory footprint and CPU requirements. Can run in small devices like the Raspberry Pi Zero without impacting the overall performance, using a small amount of memory, while delivering good performance for many use cases. +* Redis has a very small memory footprint and CPU requirements. It can run in small devices like the Raspberry Pi Zero without impacting the overall performance, using a small amount of memory, while delivering good performance for many use cases. * The data structures of Redis are often a good way to model IoT/embedded use cases. For example in order to accumulate time series data, to receive or queue commands to execute or responses to send back to the remote servers and so forth. * Modeling data inside Redis can be very useful in order to make in-device decisions for appliances that must respond very quickly or when the remote servers are offline. * Redis can be used as an interprocess communication system between the processes running in the device. @@ -29,8 +29,8 @@ run as expected. ## Building Redis in the Pi -* Download Redis version 4 or 5. -* Just use `make` as usually to create the executable. +* Download Redis verison 4 or 5. +* Just use `make` as usual to create the executable. There is nothing special in the process. 
The only difference is that by default, Redis uses the libc allocator instead of defaulting to Jemalloc @@ -62,6 +62,4 @@ Raspberry Pi 1 model B: * Test 3: Like test 1 but with AOF enabled, fsync 1 sec: 1,820 ops/sec * Test 4: Like test 3, but with an AOF rewrite in progress: 1,000 ops/sec -The benchmarks above are referring to simple SET/GET operations. The performance is similar for all the Redis fast operations (not running in linear time). However sorted sets may show slightly slow numbers. - - +The benchmarks above are referring to simple SET/GET operations. The performance is similar for all the Redis fast operations (not running in linear time). However sorted sets may show slightly slower numbers. diff --git a/topics/internals-rediseventlib.md b/topics/internals-rediseventlib.md index 90db781e16..529fe8e21c 100644 --- a/topics/internals-rediseventlib.md +++ b/topics/internals-rediseventlib.md @@ -80,11 +80,11 @@ Event Loop Processing `ae.c:aeProcessEvents` looks for the time event that will be pending in the smallest amount of time by calling `ae.c:aeSearchNearestTimer` on the event loop. In our case there is only one timer event in the event loop that was created by `ae.c:aeCreateTimeEvent`. -Remember, that timer event created by `aeCreateTimeEvent` has by now probably elapsed because it had a expiry time of one millisecond. Since, the timer has already expired the seconds and microseconds fields of the `tvp` `timeval` structure variable is initialized to zero. +Remember, that the timer event created by `aeCreateTimeEvent` has probably elapsed by now because it had an expiry time of one millisecond. Since the timer has already expired, the seconds and microseconds fields of the `tvp` `timeval` structure variable is initialized to zero. The `tvp` structure variable along with the event loop variable is passed to `ae_epoll.c:aeApiPoll`. -`aeApiPoll` functions does a [`epoll_wait`](http://man.cx/epoll_wait) on the `epoll` descriptor and populates the `eventLoop->fired` table with the details: +`aeApiPoll` functions does an [`epoll_wait`](http://man.cx/epoll_wait) on the `epoll` descriptor and populates the `eventLoop->fired` table with the details: * `fd`: The descriptor that is now ready to do a read/write operation depending on the mask value. * `mask`: The read/write event that can now be performed on the corresponding descriptor. diff --git a/topics/internals-sds.md b/topics/internals-sds.md index d3bb8fc3a1..9edc4a6de9 100644 --- a/topics/internals-sds.md +++ b/topics/internals-sds.md @@ -91,4 +91,4 @@ Look at `sdslen` function and see this trick at work: Knowing this trick you could easily go through the rest of the functions in `sds.c`. -The Redis string implementation is hidden behind an interface that accepts only character pointers. The users of Redis strings need not care about how its implemented and treat Redis strings as a character pointer. +The Redis string implementation is hidden behind an interface that accepts only character pointers. The users of Redis strings need not care about how it's implemented and can treat Redis strings as a character pointer. diff --git a/topics/introduction.md b/topics/introduction.md index b1a4d29534..1268a6e140 100644 --- a/topics/introduction.md +++ b/topics/introduction.md @@ -30,9 +30,9 @@ Other features include: * [LRU eviction of keys](/topics/lru-cache) * [Automatic failover](/topics/sentinel) -You can use Redis from [most programming languages](/clients) out there. 
+You can use Redis from [most programming languages](/clients) out there. Redis is written in **ANSI C** and works in most POSIX systems like Linux, -\*BSD, OS X without external dependencies. Linux and OS X are the two operating systems where Redis is developed and more tested, and we **recommend using Linux for deploying**. Redis may work in Solaris-derived systems like SmartOS, but the support is *best effort*. There +\*BSD, OS X without external dependencies. Linux and OS X are the two operating systems where Redis is developed and tested the most, and we **recommend using Linux for deploying**. Redis may work in Solaris-derived systems like SmartOS, but the support is *best effort*. There is no official support for Windows builds, but Microsoft develops and maintains a [Win-64 port of Redis](https://github.com/MSOpenTech/redis). diff --git a/topics/latency-monitor.md b/topics/latency-monitor.md index 4e6a767e5d..c18927ee01 100644 --- a/topics/latency-monitor.md +++ b/topics/latency-monitor.md @@ -2,22 +2,22 @@ Redis latency monitoring framework === Redis is often used in the context of demanding use cases, where it -serves a big amount of queries per second per instance, and at the same +serves a large number of queries per second per instance, and at the same time, there are very strict latency requirements both for the average response time and for the worst case latency. -While Redis is an in memory system, it deals with the operating system in +While Redis is an in-memory system, it deals with the operating system in different ways, for example, in the context of persisting to disk. Moreover Redis implements a rich set of commands. Certain commands are fast and run in constant or logarithmic time, other commands are slower -O(N) commands, that can cause latency spikes. +O(N) commands that can cause latency spikes. Finally Redis is single threaded: this is usually an advantage from the point of view of the amount of work it can perform per core, and in the latency figures it is able to provide, but at the same time it poses a challenge from the point of view of latency, since the single -thread must be able to perform certain tasks incrementally, like for -example keys expiration, in a way that does not impact the other clients +thread must be able to perform certain tasks incrementally, for +example key expiration, in a way that does not impact the other clients that are served. For all these reasons, Redis 2.8.13 introduced a new feature called @@ -50,16 +50,16 @@ event. This is how the time series work: * Every time a latency spike happens, it is logged in the appropriate time series. * Every time series is composed of 160 elements. -* Each element is a pair: an unix timestamp of the time the latency spike was measured, and the number of milliseconds the event took to executed. +* Each element is a pair: a Unix timestamp of the time the latency spike was measured, and the number of milliseconds the event took to executed. * Latency spikes for the same event happening in the same second are merged (by taking the maximum latency), so even if continuous latency spikes are measured for a given event, for example because the user set a very low threshold, at least 180 seconds of history are available. * For every element the all-time maximum latency is recorded. How to enable latency monitoring --- -What is high latency for an use case, is not high latency for another. 
There are applications where all the queries must be served in less than 1 millisecond and applications where from time to time a small percentage of clients experiencing a 2 seconds latency is acceptable. +What is high latency for one use case is not high latency for another. There are applications where all the queries must be served in less than 1 millisecond and applications where from time to time a small percentage of clients experiencing a 2 second latency is acceptable. -So the first step to enable the latency monitor is to set a **latency threshold** in milliseconds. Only events that will take more than the specified threshold will be logged as latency spikes. The user should set the threshold according to its needs. For example if for the requirements of the application based on Redis the maximum acceptable latency is 100 milliseconds, the threshold should be set to such a value in order to log all the events blocking the server for a time equal or greater to 100 milliseconds. +So the first step to enable the latency monitor is to set a **latency threshold** in milliseconds. Only events that will take more than the specified threshold will be logged as latency spikes. The user should set the threshold according to their needs. For example if for the requirements of the application based on Redis the maximum acceptable latency is 100 milliseconds, the threshold should be set to such a value in order to log all the events blocking the server for a time equal or greater to 100 milliseconds. The latency monitor can easily be enabled at runtime in a production server with the following command: @@ -83,9 +83,9 @@ The `LATENCY LATEST` command reports the latest latency events logged. Each even * Event name. * Unix timestamp of the latest latency spike for the event. * Latest event latency in millisecond. -* All time maximum latency for this event. +* All-time maximum latency for this event. -All time does not really mean the maximum latency since the Redis instance was +All-time does not really mean the maximum latency since the Redis instance was started, because it is possible to reset events data using `LATENCY RESET` as we'll see later. The following is an example output: diff --git a/topics/latency.md b/topics/latency.md index 27a3ed3cef..7b42556120 100644 --- a/topics/latency.md +++ b/topics/latency.md @@ -36,7 +36,7 @@ And now for people with 15 minutes to spend, the details... Measuring latency ----------------- -If you are experiencing latency problems, probably you know how to measure +If you are experiencing latency problems, you probably know how to measure it in the context of your application, or maybe your latency problem is very evident even macroscopically. However redis-cli can be used to measure the latency of a Redis server in milliseconds, just try: @@ -49,11 +49,11 @@ Using the internal Redis latency monitoring subsystem Since Redis 2.8.13, Redis provides latency monitoring capabilities that are able to sample different execution paths to understand where the server is blocking. This makes debugging of the problems illustrated in -this documentation much simpler, so we suggest to enable latency monitoring +this documentation much simpler, so we suggest enabling latency monitoring ASAP. Please refer to the [Latency monitor documentation](/topics/latency-monitor). 
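As a minimal sketch, enabling the subsystem at runtime and querying it could look like this (the 100 milliseconds threshold is just an illustrative value):

    CONFIG SET latency-monitor-threshold 100
    LATENCY LATEST
    LATENCY HISTORY command

Here `command` is one of the event names the monitor may report, and `LATENCY RESET` can be used to clear the collected samples.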
While the latency monitoring sampling and reporting capabilities will make -simpler to understand the source of latency in your Redis system, it is still +it simpler to understand the source of latency in your Redis system, it is still advised that you read this documentation extensively to better understand the topic of Redis and latency spikes. @@ -65,7 +65,7 @@ you run Redis, that is the latency provided by your operating system kernel and, if you are using virtualization, by the hypervisor you are using. While this latency can't be removed it is important to study it because -it is the baseline, or in other words, you'll not be able to achieve a Redis +it is the baseline, or in other words, you won't be able to achieve a Redis latency that is better than the latency that every process running in your environment will experience because of the kernel or hypervisor implementation or setup. @@ -108,7 +108,7 @@ instance running Redis and Apache: Max latency so far: 9243 microseconds. Max latency so far: 9671 microseconds. -Here we have an intrinsic latency of 9.7 milliseconds: this means that we can't ask better than that to Redis. However other runs at different times in different virtualization environments with higher load or with noisy neighbors can easily show even worse values. We were able to measured up to 40 milliseconds in +Here we have an intrinsic latency of 9.7 milliseconds: this means that we can't ask better than that to Redis. However other runs at different times in different virtualization environments with higher load or with noisy neighbors can easily show even worse values. We were able to measure up to 40 milliseconds in systems otherwise apparently running normally. Latency induced by network and communication @@ -163,7 +163,7 @@ Redis uses a *mostly* single threaded design. This means that a single process serves all the client requests, using a technique called **multiplexing**. This means that Redis can serve a single request in every given moment, so all the requests are served sequentially. This is very similar to how Node.js -works as well. However, both products are often not perceived as being slow. +works as well. However, both products are not often perceived as being slow. This is caused in part by the small amount of time to complete a single request, but primarily because these products are designed to not block on system calls, such as reading data from or writing data to a socket. @@ -179,7 +179,7 @@ Latency generated by slow commands A consequence of being single thread is that when a request is slow to serve all the other clients will wait for this request to be served. When executing normal commands, like `GET` or `SET` or `LPUSH` this is not a problem -at all since this commands are executed in constant (and very small) time. +at all since these commands are executed in constant (and very small) time. However there are commands operating on many elements, like `SORT`, `LREM`, `SUNION` and others. For instance taking the intersection of two big sets can take a considerable amount of time. @@ -189,7 +189,7 @@ is to systematically check it when using commands you are not familiar with. If you have latency concerns you should either not use slow commands against values composed of many elements, or you should run a replica using Redis -replication where to run all your slow queries. +replication where you run all your slow queries. It is possible to monitor slow commands using the Redis [Slow Log feature](/commands/slowlog). 
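A quick sketch of using it from redis-cli (the 10000 microseconds threshold is only an example value):

    CONFIG SET slowlog-log-slower-than 10000
    SLOWLOG GET 2
    SLOWLOG RESET

`SLOWLOG GET` returns the latest entries, each reporting a timestamp, the execution time in microseconds and the arguments of the slow command.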
@@ -230,13 +230,13 @@ of a large memory chunk can be expensive. Fork time in different systems ------------------------------ -Modern hardware is pretty fast to copy the page table, but Xen is not. +Modern hardware is pretty fast at copying the page table, but Xen is not. The problem with Xen is not virtualization-specific, but Xen-specific. For instance using VMware or Virtual Box does not result into slow fork time. The following is a table that compares fork time for different Redis instance size. Data is obtained performing a BGSAVE and looking at the `latest_fork_usec` filed in the `INFO` command output. However the good news is that **new types of EC2 HVM based instances are much -better with fork times**, almost on pair with physical servers, so for example +better with fork times**, almost on par with physical servers, so for example using m3.medium (or better) instances will provide good results. * **Linux beefy VM on VMware** 6.0GB RSS forked in 77 milliseconds (12.8 milliseconds per GB). @@ -247,7 +247,7 @@ using m3.medium (or better) instances will provide good results. * **Linux VM on EC2, new instance types (Xen)** 1GB RSS forked in 10 milliseconds (10 milliseconds per GB). * **Linux VM on Linode (Xen)** 0.9GBRSS forked into 382 milliseconds (424 milliseconds per GB). -As you can see certain VM running on Xen have a performance hit that is between one order to two orders of magnitude. For EC2 users the suggestion is simple: use modern HVM based instances. +As you can see certain VMs running on Xen have a performance hit that is between one order to two orders of magnitude. For EC2 users the suggestion is simple: use modern HVM based instances. Latency induced by transparent huge pages ----------------------------------------- @@ -282,7 +282,7 @@ The kernel relocates Redis memory pages on disk mainly because of three reasons: * The system is under memory pressure since the running processes are demanding more physical memory than the amount that is available. The simplest instance of -this problem is simply Redis using more memory than the one available. +this problem is simply Redis using more memory than is available. * The Redis instance data set, or part of the data set, is mostly completely idle (never accessed by clients), so the kernel could swap idle memory pages on disk. This problem is very rare since even a moderately slow instance will touch all @@ -554,7 +554,7 @@ The active expiring is designed to be adaptive. An expire cycle is started every Given that `ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP` is set to 20 by default, and the process is performed ten times per second, usually just 200 keys per second are actively expired. This is enough to clean the DB fast enough even when already expired keys are not accessed for a long time, so that the *lazy* algorithm does not help. At the same time expiring just 200 keys per second has no effects in the latency a Redis instance. -However the algorithm is adaptive and will loop if it founds more than 25% of keys already expired in the set of sampled keys. But given that we run the algorithm ten times per second, this means that the unlucky event of more than 25% of the keys in our random sample are expiring at least *in the same second*. +However the algorithm is adaptive and will loop if it finds more than 25% of keys already expired in the set of sampled keys. 
But given that we run the algorithm ten times per second, this means that the unlucky event of more than 25% of the keys in our random sample are expiring at least *in the same second*. Basically this means that **if the database has many many keys expiring in the same second, and these make up at least 25% of the current population of keys with an expire set**, Redis can block in order to get the percentage of keys already expired below 25%. @@ -583,7 +583,7 @@ This is how this feature works: * If Redis detects that the server is blocked into some operation that is not returning fast enough, and that may be the source of the latency issue, a low level report about where the server is blocked is dumped on the log file. * The user contacts the developers writing a message in the Redis Google Group, including the watchdog report in the message. -Note that this feature can not be enabled using the redis.conf file, because it is designed to be enabled only in already running instances and only for debugging purposes. +Note that this feature cannot be enabled using the redis.conf file, because it is designed to be enabled only in already running instances and only for debugging purposes. To enable the feature just use the following: @@ -616,4 +616,3 @@ The following is an example of what you'll see printed in the log file once the Note: in the example the **DEBUG SLEEP** command was used in order to block the server. The stack trace is different if the server blocks in a different context. If you happen to collect multiple watchdog stack traces you are encouraged to send everything to the Redis Google Group: the more traces we obtain, the simpler it will be to understand what the problem with your instance is. - diff --git a/topics/ldb.md b/topics/ldb.md index 4749e28b69..0d454228f3 100644 --- a/topics/ldb.md +++ b/topics/ldb.md @@ -3,8 +3,6 @@ Starting with version 3.2 Redis includes a complete Lua debugger, that can be used in order to make the task of writing complex Redis scripts much simpler. -Because Redis 3.2 is still in beta, please download the `unstable` branch of Redis from Github and compile it in order to test the debugger. You can use Redis unstable in order to debug your scripts that you'll later run in a stable version of Redis, so the debugger is already usable in practical terms. - The Redis Lua debugger, codename LDB, has the following important features: * It uses a server-client model, so it's a remote debugger. The Redis server acts as the debugging server, while the default client is `redis-cli`. However other clients can be developed by following the simple protocol implemented by the server. @@ -40,7 +38,7 @@ Note that with the `--eval` option of `redis-cli` you can pass key names and arg ./redis-cli --ldb --eval /tmp/script.lua mykey somekey , arg1 arg2 You'll enter a special mode where `redis-cli` no longer accepts its normal -commands, but instead prints an help screen and passes the unmodified debugging +commands, but instead prints a help screen and passes the unmodified debugging commands directly to Redis. The only commands which are not passed to the Redis debugger are: @@ -104,7 +102,7 @@ Termination of the debugging session When the scripts terminates naturally, the debugging session ends and `redis-cli` returns in its normal non-debugging mode. You can restart the -session using the `restart` command as usually. +session using the `restart` command as usual. 
Another way to stop a debugging session is just interrupting `redis-cli` manually by pressing `Ctrl+C`. Note that also any event breaking the @@ -144,8 +142,8 @@ a breakpoint in the next line that will be executed. if counter > 10 then redis.breakpoint() end -This feature is extremely useful when debugging, so that we can avoid to -continue the script execution manually multiple times until a given condition +This feature is extremely useful when debugging, so that we can avoid +continuing the script execution manually multiple times until a given condition is encountered. Synchronous mode @@ -211,7 +209,7 @@ lua debugger> e redis.sha1hex('foo') Debugging clients --- -LDB uses the client-server model where the Redis servers acts as a debugging server that communicates using [RESP](/topics/protocol). While `redis-cli` is the default debug client, any [client](/clients) can be used for debugging as long as it meets one of the following conditions: +LDB uses the client-server model where the Redis server acts as a debugging server that communicates using [RESP](/topics/protocol). While `redis-cli` is the default debug client, any [client](/clients) can be used for debugging as long as it meets one of the following conditions: 1. The client provides a native interface for setting the debug mode and controlling the debug session. 2. The client provides an interface for sending arbitrary commands over RESP. diff --git a/topics/lru-cache.md b/topics/lru-cache.md index dadbcb3023..20d4f355f2 100644 --- a/topics/lru-cache.md +++ b/topics/lru-cache.md @@ -2,7 +2,7 @@ Using Redis as an LRU cache === When Redis is used as a cache, often it is handy to let it automatically -evict old data as you add new one. This behavior is very well known in the +evict old data as you add new data. This behavior is very well known in the community of developers, since it is the default behavior of the popular *memcached* system. @@ -55,9 +55,9 @@ The following policies are available: The policies **volatile-lru**, **volatile-random** and **volatile-ttl** behave like **noeviction** if there are no keys to evict matching the prerequisites. -To pick the right eviction policy is important depending on the access pattern -of your application, however you can reconfigure the policy at runtime while -the application is running, and monitor the number of cache misses and hits +Picking the right eviction policy is important depending on the access pattern +of your application, however you can reconfigure the policy at runtime while +the application is running, and monitor the number of cache misses and hits using the Redis `INFO` output in order to tune your setup. In general as a rule of thumb: @@ -68,7 +68,7 @@ In general as a rule of thumb: The **volatile-lru** and **volatile-random** policies are mainly useful when you want to use a single instance for both caching and to have a set of persistent keys. However it is usually a better idea to run two Redis instances to solve such a problem. -It is also worth to note that setting an expire to a key costs memory, so using a policy like **allkeys-lru** is more memory efficient since there is no need to set an expire for the key to be evicted under memory pressure. +It is also worth noting that setting an expire to a key costs memory, so using a policy like **allkeys-lru** is more memory efficient since there is no need to set an expire for the key to be evicted under memory pressure. 
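For instance, a sketch of switching policy at runtime and then watching the hit/miss counters (the 100mb limit is an arbitrary example):

    CONFIG SET maxmemory 100mb
    CONFIG SET maxmemory-policy allkeys-lru
    INFO stats

The `keyspace_hits` and `keyspace_misses` fields of the `INFO` output are the counters to watch while tuning the policy.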
How the eviction process works --- diff --git a/topics/mass-insert.md b/topics/mass-insert.md index 9f0c31f950..ea81e09094 100644 --- a/topics/mass-insert.md +++ b/topics/mass-insert.md @@ -20,7 +20,7 @@ make sure you are inserting as fast as possible. Only a small percentage of clients support non-blocking I/O, and not all the clients are able to parse the replies in an efficient way in order to maximize -throughput. For all this reasons the preferred way to mass import data into +throughput. For all of these reasons the preferred way to mass import data into Redis is to generate a text file containing the Redis protocol, in raw format, in order to call the commands needed to insert the required data. @@ -121,7 +121,7 @@ first mass import session. Last reply received from server. errors: 0, replies: 1000 -How the pipe mode works under the hoods +How the pipe mode works under the hood --------------------------------------- The magic needed inside the pipe mode of redis-cli is to be as fast as netcat @@ -133,9 +133,8 @@ This is obtained in the following way: + redis-cli --pipe tries to send data as fast as possible to the server. + At the same time it reads data when available, trying to parse it. + Once there is no more data to read from stdin, it sends a special **ECHO** command with a random 20 bytes string: we are sure this is the latest command sent, and we are sure we can match the reply checking if we receive the same 20 bytes as a bulk reply. -+ Once this special final command is sent, the code receiving replies starts to match replies with this 20 bytes. When the matching reply is reached it can exit with success. ++ Once this special final command is sent, the code receiving replies starts to match replies with these 20 bytes. When the matching reply is reached it can exit with success. Using this trick we don't need to parse the protocol we send to the server in order to understand how many commands we are sending, but just the replies. However while parsing the replies we take a counter of all the replies parsed so that at the end we are able to tell the user the amount of commands transferred to the server by the mass insert session. - diff --git a/topics/memory-optimization.md b/topics/memory-optimization.md index 34b75777ea..a1ab206240 100644 --- a/topics/memory-optimization.md +++ b/topics/memory-optimization.md @@ -16,7 +16,7 @@ Since this is a CPU / memory trade off it is possible to tune the maximum number zset-max-ziplist-value 64 set-max-intset-entries 512 -If a specially encoded value will overflow the configured max size, Redis will automatically convert it into normal encoding. This operation is very fast for small values, but if you change the setting in order to use specially encoded values for much larger aggregate types the suggestion is to run some benchmark and test to check the conversion time. +If a specially encoded value overflows the configured max size, Redis will automatically convert it into normal encoding. This operation is very fast for small values, but if you change the setting in order to use specially encoded values for much larger aggregate types the suggestion is to run some benchmarks and tests to check the conversion time. Using 32 bit instances ---------------------- @@ -38,7 +38,7 @@ If you want to know more about this, read the next section. 
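Going back to the specially encoded types described at the start of this section, one way to verify whether a given key is using the compact representation is `OBJECT ENCODING`; a small sketch (the key name is made up, and recent Redis versions may report `listpack` instead of `ziplist`):

    HSET smallhash field1 value1
    OBJECT ENCODING smallhash
    "ziplist"

Once the hash grows past the configured limits, such as `hash-max-ziplist-entries`, the same command reports `hashtable`.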
Using hashes to abstract a very memory efficient plain key-value store on top of Redis
--------------------------------------------------------------------------------------

-I understand the title of this section is a bit scaring, but I'm going to explain in details what this is about.
+I understand the title of this section is a bit scary, but I'm going to explain in detail what this is about.

Basically it is possible to model a plain key-value store using Redis
where values can just be strings, that is not just more memory efficient
@@ -55,9 +55,9 @@ instead just encode them in an O(N) data structure, like a linear array with length-prefixed key value pairs. Since we do this only when N
is small, the amortized time for HGET and HSET commands is still O(1): the
hash will be converted into a real hash table as soon as the number of elements
-it contains will grow too much (you can configure the limit in redis.conf).
+it contains grows too large (you can configure the limit in redis.conf).

-This works well not just from the point of view of time complexity, but
+This works well not only from the point of view of time complexity, but
also from the point of view of constant times, since a linear array of
key value pairs happens to play very well with the CPU cache (it has a better
cache locality than a hash table).

@@ -65,7 +65,7 @@ cache locality than a hash table).
However since hash fields and values are not (always) represented as full
featured Redis objects, hash fields can't have an associated time to live
(expire) like a real key, and can only contain a string. But we are okay with
-this, this was anyway the intention when the hash data type API was
+this; this was the intention anyway when the hash data type API was
designed (we trust simplicity more than features, so nested data structures
are not allowed, as expires of single fields are not allowed).

@@ -102,7 +102,7 @@ As you can see every hash will end containing 100 fields, that
is an optimal compromise between CPU and memory saved.

There is another very important thing to note, with this schema
-every hash will have more or
+every hash will have more or
less 100 fields regardless of the number of objects we cached. This is because our objects will always end with a number, and not a random string. In some way the final number can be considered as a form of implicit pre-sharding.

@@ -112,7 +112,7 @@ What about small numbers? Like object:2? We handle this case using just
So object:2 and object:10 will both end inside the key "object:", but one
as field name "2" and one as "10".

-How much memory we save this way?
+How much memory do we save this way?

I used the following Ruby program to test how this works:

@@ -168,12 +168,12 @@ of your keys and values:

    hash-max-zipmap-value 1024

-Every time a hash will exceed the number of elements or element size specified
+Every time a hash exceeds the number of elements or element size specified
it will be converted into a real hash table, and the memory saving will be lost.

You may ask, why don't you do this implicitly in the normal key space so that
I don't have to care? There are two reasons: one is that we tend to make
-trade offs explicit, and this is a clear tradeoff between many things: CPU,
+tradeoffs explicit, and this is a clear tradeoff between many things: CPU,
memory, max element size. The second is that the top level key space must
support a lot of interesting things like expires, LRU data, and so forth
so it is not practical to do this in a general way.
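A minimal sketch of the key-splitting scheme described above (the helper name is illustrative, and the full Ruby test program mentioned earlier is not reproduced here):

    require 'redis'

    # Split a key like "object:1234" into a bucket ("object:12") and a
    # field ("34"); IDs of one or two digits all fall into "object:".
    def hash_key_field(key)
        prefix, id = key.split(':')
        if id.length > 2
            [prefix + ':' + id[0..-3], id[-2..-1]]
        else
            [prefix + ':', id]
        end
    end

    r = Redis.new
    bucket, field = hash_key_field('object:1234')
    r.hset(bucket, field, 'some value')    # instead of SET object:1234 ...
    puts r.hget(bucket, field)             # instead of GET object:1234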
@@ -203,11 +203,11 @@ memory is around 3GB. This happens because the underlying allocator can't easil
most of the times 5GB could do, you need to provision for 10GB.
* However allocators are smart and are able to reuse free chunks of
memory, so after you freed 2GB of your 5GB data set, when you start adding more keys
-again, you'll see the RSS (Resident Set Size) to stay steady and don't grow
+again, you'll see the RSS (Resident Set Size) stay steady and not grow
more, as you add up to 2GB of additional keys. The allocator is basically
trying to reuse the 2GB of memory previously (logically) freed.
* Because of all this, the fragmentation ratio is not reliable when you
-had a memory usage that at peak is much larger than the currently used memory.
+have a memory usage that at peak is much larger than the currently used memory.
The fragmentation is calculated as the amount of memory currently in use
(as the sum of all the allocations performed by Redis) divided by the physical
memory actually used (the RSS value). Because the RSS reflects the peak memory,

diff --git a/topics/notifications.md b/topics/notifications.md
index 513a58df02..e82a8ba637 100644
--- a/topics/notifications.md
+++ b/topics/notifications.md
@@ -6,10 +6,10 @@ Redis Keyspace Notifications
Feature overview
---

-Keyspace notifications allows clients to subscribe to Pub/Sub channels in order
+Keyspace notifications allow clients to subscribe to Pub/Sub channels in order
to receive events affecting the Redis data set in some way.

-Examples of the events that is possible to receive are the following:
+Examples of the events it is possible to receive are the following:

* All the commands affecting a given key.
* All the keys receiving an LPUSH operation.

@@ -31,7 +31,7 @@ Pub/Sub messages to perform operations like pushing the events into a list.

Type of events
---

-Keyspace notifications are implemented sending two distinct type of events
+Keyspace notifications are implemented by sending two distinct types of events
for every operation affecting the Redis data space. For instance a `DEL`
operation targeting the key named `mykey` in database `0` will trigger
the delivering of two messages, exactly equivalent to the following two
@@ -40,7 +40,7 @@ the delivering of two messages, exactly equivalent to the following two

    PUBLISH __keyspace@0__:mykey del
    PUBLISH __keyevent@0__:del mykey

-It is easy to see how one channel allows to listen to all the events targeting
+It is easy to see how one channel allows us to listen to all the events targeting
the key `mykey`, and the other channel allows us to obtain information about all
the keys that are the target of a `del` operation.

@@ -60,7 +60,7 @@ just the subset of events we are interested in.

Configuration
---

-By default keyspace events notifications are disabled because while not
+By default keyspace event notifications are disabled because, while not
very significant, the feature uses some CPU power. Notifications are enabled
using the `notify-keyspace-events` parameter of redis.conf or via the **CONFIG SET** command.

diff --git a/topics/partitioning.md b/topics/partitioning.md
index 352e7af5bd..2989ae4727 100644
--- a/topics/partitioning.md
+++ b/topics/partitioning.md
@@ -33,7 +33,7 @@ Different implementations of partitioning

Partitioning can be the responsibility of different parts of a software stack.

* **Client side partitioning** means that the clients directly select the right node where to write or read a given key. Many Redis clients implement client side partitioning.
-* **Proxy assisted partitioning** means that our clients send requests to a proxy that is able to speak the Redis protocol, instead of sending requests directly to the right Redis instance. The proxy will make sure to forward our request to the right Redis instance accordingly to the configured partitioning schema, and will send the replies back to the client. The Redis and Memcached proxy [Twemproxy](https://github.com/twitter/twemproxy) implements proxy assisted partitioning.
+* **Proxy assisted partitioning** means that our clients send requests to a proxy that is able to speak the Redis protocol, instead of sending requests directly to the right Redis instance. The proxy will make sure to forward our request to the right Redis instance according to the configured partitioning schema, and will send the replies back to the client. The Redis and Memcached proxy [Twemproxy](https://github.com/twitter/twemproxy) implements proxy assisted partitioning.
* **Query routing** means that you can send your query to a random instance, and the instance will make sure to forward your query to the right node. Redis Cluster implements a hybrid form of query routing, with the help of the client (the request is not directly forwarded from a Redis instance to another, but the client gets *redirected* to the right node).

Disadvantages of partitioning
@@ -66,11 +66,11 @@ We learned that a problem with partitioning is that, unless we are using Redis a

However the data storage needs may vary over time. Today I can
live with 10 Redis nodes (instances), but tomorrow I may need 50 nodes.

-Since Redis is extremely small footprint and lightweight (a spare instance uses 1 MB of memory), a simple approach to this problem is to start with a lot of instances since the start. Even if you start with just one server, you can decide to live in a distributed world since your first day, and run multiple Redis instances in your single server, using partitioning.
+Since Redis has an extremely small footprint and is lightweight (a spare instance uses 1 MB of memory), a simple approach to this problem is to start with a lot of instances from the very beginning. Even if you start with just one server, you can decide to live in a distributed world from day one, and run multiple Redis instances in your single server, using partitioning.

-And you can select this number of instances to be quite big since the start. For example, 32 or 64 instances could do the trick for most users, and will provide enough room for growth.
+And you can select this number of instances to be quite big from the start. For example, 32 or 64 instances could do the trick for most users, and will provide enough room for growth.

-In this way as your data storage needs increase and you need more Redis servers, what to do is to simply move instances from one server to another. Once you add the first additional server, you will need to move half of the Redis instances from the first server to the second, and so forth.
+In this way, as your data storage needs increase and you need more Redis servers, what you do is simply move instances from one server to another. Once you add the first additional server, you will need to move half of the Redis instances from the first server to the second, and so forth.
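A sketch of what this pre-sharded, client side approach could look like (hosts, ports and the CRC-based hashing are illustrative assumptions, not a prescription):

    require 'redis'
    require 'zlib'

    # 32 pre-allocated instances: moving some of them to a new server
    # later only changes host/port entries here, not the key mapping.
    NODES = (0...32).map { |i| Redis.new(:host => '127.0.0.1', :port => 6379 + i) }

    def node_for(key)
        NODES[Zlib.crc32(key) % NODES.length]
    end

    node_for('user:1000').set('user:1000', 'somevalue')
    puts node_for('user:1000').get('user:1000')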
Using Redis replication you will likely be able to do the move with minimal or no downtime for your users:

@@ -94,7 +94,7 @@ Redis Cluster is the preferred way to get automatic sharding and high availabili
It is generally available and production-ready as of [April 1st, 2015](https://groups.google.com/d/msg/redis-db/dO0bFyD_THQ/Uoo2GjIx6qgJ). You can get more information about Redis Cluster in the [Cluster tutorial](/topics/cluster-tutorial).

-Once Redis Cluster will be available, and if a Redis Cluster compliant client is available for your language, Redis Cluster will be the de facto standard for Redis partitioning.
+Now that Redis Cluster is available, if a Redis Cluster compliant client is available for your language, Redis Cluster will be the de facto standard for Redis partitioning.

Redis Cluster is a mix between *query routing* and *client side partitioning*.

diff --git a/topics/persistence.md b/topics/persistence.md
index 0fc00b3451..49b23df3cb 100644
--- a/topics/persistence.md
+++ b/topics/persistence.md
@@ -1,4 +1,4 @@
-This page provides a technical description of Redis persistence, it is a suggested read for all the Redis users. For a wider overview of Redis persistence and the durability guarantees it provides you may want to also read [Redis persistence demystified](http://antirez.com/post/redis-persistence-demystified.html).
+This page provides a technical description of Redis persistence; it is a suggested read for all Redis users. For a wider overview of Redis persistence and the durability guarantees it provides you may also want to read [Redis persistence demystified](http://antirez.com/post/redis-persistence-demystified.html).

Redis Persistence
===

@@ -6,8 +6,8 @@ Redis Persistence
Redis provides a range of different persistence options:

* The RDB persistence performs point-in-time snapshots of your dataset at specified intervals.
-* the AOF persistence logs every write operation received by the server, that will be played again at server startup, reconstructing the original dataset. Commands are logged using the same format as the Redis protocol itself, in an append-only fashion. Redis is able to rewrite the log on background when it gets too big.
-* If you wish, you can disable persistence at all, if you want your data to just exist as long as the server is running.
+* The AOF persistence logs every write operation received by the server, which will be played again at server startup, reconstructing the original dataset. Commands are logged using the same format as the Redis protocol itself, in an append-only fashion. Redis is able to rewrite the log in the background when it gets too big.
+* If you wish, you can disable persistence completely, so that your data just exists as long as the server is running.
* It is possible to combine both AOF and RDB in the same instance. Notice that, in this case, when Redis restarts the AOF file will be used to reconstruct the original dataset since it is guaranteed to be the most complete.

The most important thing to understand is the different trade-offs between the
@@ -17,7 +17,7 @@ RDB advantages
---

* RDB is a very compact single-file point-in-time representation of your Redis data. RDB files are perfect for backups. For instance you may want to archive your RDB files every hour for the latest 24 hours, and to save an RDB snapshot every day for 30 days. This allows you to easily restore different versions of the data set in case of disasters.
-* RDB is very good for disaster recovery, being a single compact file can be transferred to far data centers, or on Amazon S3 (possibly encrypted).
+* RDB is very good for disaster recovery, being a single compact file that can be transferred to far data centers, or onto Amazon S3 (possibly encrypted).
* RDB maximizes Redis performance since the only work the Redis parent process needs to do in order to persist is forking a child that will do all the rest. The parent instance will never perform disk I/O or the like.
* RDB allows faster restarts with big datasets compared to AOF.

@@ -39,10 +39,10 @@ AOF disadvantages
---

* AOF files are usually bigger than the equivalent RDB files for the same dataset.
-* AOF can be slower than RDB depending on the exact fsync policy. In general with fsync set to *every second* performances are still very high, and with fsync disabled it should be exactly as fast as RDB even under high load. Still RDB is able to provide more guarantees about the maximum latency even in the case of an huge write load.
-* In the past we experienced rare bugs in specific commands (for instance there was one involving blocking commands like BRPOPLPUSH) causing the AOF produced to not reproduce exactly the same dataset on reloading. This bugs are rare and we have tests in the test suite creating random complex datasets automatically and reloading them to check everything is ok, but this kind of bugs are almost impossible with RDB persistence. To make this point more clear: the Redis AOF works incrementally updating an existing state, like MySQL or MongoDB does, while the RDB snapshotting creates everything from scratch again and again, that is conceptually more robust. However -
+* AOF can be slower than RDB depending on the exact fsync policy. In general with fsync set to *every second* performance is still very high, and with fsync disabled it should be exactly as fast as RDB even under high load. Still RDB is able to provide more guarantees about the maximum latency even in the case of a huge write load.
+* In the past we experienced rare bugs in specific commands (for instance there was one involving blocking commands like BRPOPLPUSH) causing the AOF produced to not reproduce exactly the same dataset on reloading. These bugs are rare and we have tests in the test suite creating random complex datasets automatically and reloading them to check everything is ok, but these kinds of bugs are almost impossible with RDB persistence. To make this point more clear: the Redis AOF works incrementally updating an existing state, like MySQL or MongoDB does, while the RDB snapshotting creates everything from scratch again and again, which is conceptually more robust. However -
  1) It should be noted that every time the AOF is rewritten by Redis it is recreated from scratch starting from the actual data contained in the data set, making resistance to bugs stronger compared to an always appending AOF file (or one rewritten reading the old AOF instead of reading the data in memory).
  2) We have never had a single report from users about an AOF corruption that was detected in the real world.

Ok, so what should I use?
---

@@ -317,4 +317,3 @@ a VPS.

You also need some kind of independent alert system if the
transfer of fresh backups is not working for some reason.
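A sketch of the hourly archiving idea mentioned in the RDB discussion above (paths are placeholder assumptions; adjust them to the `dir` and `dbfilename` settings in your redis.conf):

    require 'redis'
    require 'fileutils'

    r = Redis.new
    r.bgsave   # take a fresh point-in-time snapshot in the background
    sleep 0.1 while r.info['rdb_bgsave_in_progress'] == '1'

    # Archive the snapshot with a timestamp, so that different versions
    # of the data set can be restored in case of disasters.
    stamp = Time.now.strftime('%Y%m%d%H%M')
    FileUtils.cp('/var/redis/dump.rdb', "/backups/dump-#{stamp}.rdb")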
- diff --git a/topics/pipelining.md b/topics/pipelining.md index a13c801de7..20c30a2a71 100644 --- a/topics/pipelining.md +++ b/topics/pipelining.md @@ -37,7 +37,7 @@ A Request/Response server can be implemented so that it is able to process new r This is called pipelining, and is a technique widely in use since many decades. For instance many POP3 protocol implementations already supported this feature, dramatically speeding up the process of downloading new emails from the server. -Redis supports pipelining since the very early days, so whatever version you are running, you can use pipelining with Redis. This is an example using the raw netcat utility: +Redis has supported pipelining since the very early days, so whatever version you are running, you can use pipelining with Redis. This is an example using the raw netcat utility: $ (printf "PING\r\nPING\r\nPING\r\n"; sleep 1) | nc localhost 6379 +PONG @@ -57,7 +57,7 @@ To be very explicit, with pipelining the order of operations of our very first e * *Server:* 3 * *Server:* 4 -**IMPORTANT NOTE**: While the client sends commands using pipelining, the server will be forced to queue the replies, using memory. So if you need to send a lot of commands with pipelining, it is better to send them as batches having a reasonable number, for instance 10k commands, read the replies, and then send another 10k commands again, and so forth. The speed will be nearly the same, but the additional memory used will be at max the amount needed to queue the replies for this 10k commands. +**IMPORTANT NOTE**: While the client sends commands using pipelining, the server will be forced to queue the replies, using memory. So if you need to send a lot of commands with pipelining, it is better to send them as batches having a reasonable number, for instance 10k commands, read the replies, and then send another 10k commands again, and so forth. The speed will be nearly the same, but the additional memory used will be at max the amount needed to queue the replies for these 10k commands. It's not just a matter of RTT --- diff --git a/topics/problems.md b/topics/problems.md index fdb4303686..50ec957f7f 100644 --- a/topics/problems.md +++ b/topics/problems.md @@ -1,7 +1,7 @@ Problems with Redis? This is a good starting point. === -This page tries to help you about what to do if you have issues with Redis. Part of the Redis project is helping people that are experiencing problems because we don't like to let people alone with their issues. +This page tries to help you with what to do if you have issues with Redis. Part of the Redis project is helping people that are experiencing problems because we don't like to leave people alone with their issues. * If you have **latency problems** with Redis, that in some way appears to be idle for some time, read our [Redis latency troubleshooting guide](/topics/latency). * Redis stable releases are usually very reliable, however in the rare event you are **experiencing crashes** the developers can help a lot more if you provide debugging information. Please read our [Debugging Redis guide](/topics/debugging). diff --git a/topics/protocol.md b/topics/protocol.md index 9753cd0d34..2e9b6e890a 100644 --- a/topics/protocol.md +++ b/topics/protocol.md @@ -30,7 +30,7 @@ Once a command is received, it is processed and a reply is sent back to the clie This is the simplest model possible, however there are two exceptions: * Redis supports pipelining (covered later in this document). 
So it is possible for clients to send multiple commands at once, and wait for replies later.
-* When a Redis client subscribes to a Pub/Sub channel, the protocol changes semantics and becomes a *push* protocol, that is, the client no longer requires to send commands, because the server will automatically send to the client new messages (for the channels the client is subscribed to) as soon as they are received.
+* When a Redis client subscribes to a Pub/Sub channel, the protocol changes semantics and becomes a *push* protocol, that is, the client no longer needs to send commands, because the server will automatically send the client new messages (for the channels the client is subscribed to) as soon as they are received.

Excluding the above two exceptions, the Redis protocol is a simple request-response protocol.

@@ -264,7 +264,7 @@ Null elements in Arrays
-----------------------

Single elements of an Array may be Null. This is used in Redis replies in
-order to signal that the element is missing and not an empty string. This
+order to signal that these elements are missing and not empty strings. This
can happen with the SORT command when used with the GET _pattern_ option
when the specified key is missing. Example of an Array reply containing a
Null element:

@@ -288,10 +288,10 @@ Sending commands to a Redis Server
----------------------------------

Now that you are familiar with the RESP serialization format, writing an
-implementation of a Redis client library will be easy. We can further specify
+implementation of a Redis client library will be easy. We can further specify
how the interaction between the client and the server works:

-* A client sends to the Redis server a RESP Array consisting of just Bulk Strings.
+* A client sends the Redis server a RESP Array consisting of just Bulk Strings.
* A Redis server replies to clients sending any valid RESP data type as reply.

So for example a typical interaction could be the following.

@@ -306,7 +306,7 @@ The client sends the command **LLEN mylist** in order to get the length of the l

    S: :48293\r\n

-As usually we separate different parts of the protocol with newlines for simplicity, but the actual interaction is the client sending `*2\r\n$4\r\nLLEN\r\n$6\r\nmylist\r\n` as a whole.
+As usual we separate different parts of the protocol with newlines for simplicity, but the actual interaction is the client sending `*2\r\n$4\r\nLLEN\r\n$6\r\nmylist\r\n` as a whole.

Multiple commands and pipelining
--------------------------------

@@ -322,7 +322,7 @@ For more information please check our [page about Pipelining](/topics/pipelining

Inline Commands
---------------

-Sometimes you have only `telnet` in your hands and you need to send a command
+Sometimes you have only `telnet` at hand and you need to send a command
to the Redis server. While the Redis protocol is simple to implement it is
not ideal to use in interactive sessions, and `redis-cli` may not always be
available. For this reason Redis also accepts commands in a special way that

@@ -350,7 +350,7 @@ While the Redis protocol is very human readable and easy to implement it can
be implemented with a performance similar to that of a binary protocol.

RESP uses prefixed lengths to transfer bulk data, so there is
-never need to scan the payload for special characters like it happens for
+never a need to scan the payload for special characters, as happens for
instance with JSON, nor to quote the payload that needs to be sent to the
server.
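As a sketch of the serialization rule just described, this is how a client could encode a command as a RESP Array of Bulk Strings (the Ruby helper name is illustrative):

    # Encode a command as a RESP Array of Bulk Strings.
    def resp_command(*args)
        cmd = "*#{args.length}\r\n"
        args.each do |arg|
            arg = arg.to_s
            cmd << "$#{arg.bytesize}\r\n#{arg}\r\n"
        end
        cmd
    end

    p resp_command('LLEN', 'mylist')   # "*2\r\n$4\r\nLLEN\r\n$6\r\nmylist\r\n"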
diff --git a/topics/quickstart.md b/topics/quickstart.md index 4818a16475..54232b4830 100644 --- a/topics/quickstart.md +++ b/topics/quickstart.md @@ -21,14 +21,14 @@ discouraged as usually the available version is not the latest. You can either download the latest Redis tar ball from the [redis.io](http://redis.io) web site, or you can alternatively use this special URL that always points to the latest stable Redis version, that is, [http://download.redis.io/redis-stable.tar.gz](http://download.redis.io/redis-stable.tar.gz). -In order to compile Redis follow this simple steps: +In order to compile Redis follow these simple steps: wget http://download.redis.io/redis-stable.tar.gz tar xvzf redis-stable.tar.gz cd redis-stable make -At this point you can try if your build works correctly by typing **make test**, but this is an optional step. After the compilation the **src** directory inside the Redis distribution is populated with the different executables that are part of Redis: +At this point you can test if your build has worked correctly by typing **make test**, but this is an optional step. After compilation the **src** directory inside the Redis distribution is populated with the different executables that are part of Redis: * **redis-server** is the Redis Server itself. * **redis-sentinel** is the Redis Sentinel executable (monitoring and failover). @@ -36,7 +36,7 @@ At this point you can try if your build works correctly by typing **make test**, * **redis-benchmark** is used to check Redis performances. * **redis-check-aof** and **redis-check-rdb** (**redis-check-dump** in 3.0 and below) are useful in the rare event of corrupted data files. -It is a good idea to copy both the Redis server and the command line interface in proper places, either manually using the following commands: +It is a good idea to copy both the Redis server and the command line interface into the proper places, either manually using the following commands: * sudo cp src/redis-server /usr/local/bin/ * sudo cp src/redis-cli /usr/local/bin/ @@ -89,7 +89,7 @@ Securing Redis === By default Redis binds to **all the interfaces** and has no authentication at -all. If you use Redis into a very controlled environment, separated from the +all. If you use Redis in a very controlled environment, separated from the external internet and in general from attackers, that's fine. However if Redis without any hardening is exposed to the internet, it is a big security concern. If you are not 100% sure your environment is secured properly, please @@ -97,7 +97,7 @@ check the following steps in order to make Redis more secure, which are enlisted in order of increased security. 1. Make sure the port Redis uses to listen for connections (by default 6379 and additionally 16379 if you run Redis in cluster mode, plus 26379 for Sentinel) is firewalled, so that it is not possible to contact Redis from the outside world. -2. Use a configuration file where the `bind` directive is set in order to guarantee that Redis listens just in as little network interfaces you are using. For example only the loopback interface (127.0.0.1) if you are accessing Redis just locally from the same computer, and so forth. +2. Use a configuration file where the `bind` directive is set in order to guarantee that Redis listens on only the network interfaces you are using. For example only the loopback interface (127.0.0.1) if you are accessing Redis just locally from the same computer, and so forth. 3. 
Use the `requirepass` option in order to add an additional layer of security so that clients will be required to authenticate using the `AUTH` command.
4. Use [spiped](http://www.tarsnap.com/spiped.html) or another SSL tunnelling software in order to encrypt traffic between Redis servers and Redis clients if your environment requires encryption.

@@ -157,7 +157,7 @@ The following instructions can be used to perform a proper installation using th

We assume you already copied **redis-server** and **redis-cli** executables under /usr/local/bin.

-* Create a directory where to store your Redis config files and your data:
+* Create a directory in which to store your Redis config files and your data:

    sudo mkdir /etc/redis
    sudo mkdir /var/redis

diff --git a/topics/rediscli.md b/topics/rediscli.md
index 7a4b6a5a1f..18a2d6b52c 100644
--- a/topics/rediscli.md
+++ b/topics/rediscli.md
@@ -203,7 +203,7 @@ of Lua scripting, available starting with Redis 3.2. For this feature, please refer to the [Redis Lua debugger documentation](/topics/ldb).

However, even without using the debugger, you can use `redis-cli` to
-run scripts from a file in a way more comfortable compared to typing
+run scripts from a file in a more comfortable way compared to typing
the script interactively into the shell or as an argument:

    $ cat /tmp/script.lua

@@ -317,7 +317,7 @@ Because `redis-cli` uses the
always has line editing capabilities, without depending on `libreadline` or
other optional libraries.

-You can access an history of commands executed, in order to avoid retyping
+You can access a history of commands executed, in order to avoid retyping
them again and again, by pressing the arrow keys (up and down).
The history is preserved between restarts of the CLI, in a file called
`.rediscli_history` inside the user home directory, as specified

@@ -598,7 +598,7 @@ You can change the sampling sessions' length with the `-i ` option.

The most advanced latency study tool, but also a bit harder to
interpret for non experienced users, is the ability to use color terminals
-to show a spectrum of latencies. You'll see a colored output that indicate the
+to show a spectrum of latencies. You'll see a colored output that indicates the
different percentages of samples, and different ASCII characters that indicate
different latency figures. This mode is enabled using the `--latency-dist`
option:

@@ -714,11 +714,11 @@ different LRU settings (number of samples) and LRU's implementation, which
is approximated in Redis, changes a lot between different versions. Similarly
the amount of memory per key may change between versions. That is why this
tool was built: its main motivation was for testing the quality of Redis' LRU
-implementation, but now is also useful in for testing how a given version
+implementation, but now it is also useful for testing how a given version
behaves with the settings you had in mind for your deployment.

In order to use this mode, you need to specify the amount of keys
-in the test. You also need to configure a `maxmemory` setting that
+in the test. You also need to configure a `maxmemory` setting that
makes sense as a first try.

IMPORTANT NOTE: Configuring the `maxmemory` setting in the Redis configuration

@@ -754,8 +754,8 @@ the actual figure we can expect in the long time:

    124250 Gets/sec | Hits: 50147 (40.36%) | Misses: 74103 (59.64%)

A miss rate of 59% may not be acceptable for our use case. So we know that
After a few -minutes we'll see the output to stabilize to the following figures: +100MB of memory is not enough. Let's try with half gigabyte. After a few +minutes we'll see the output stabilize to the following figures: 140000 Gets/sec | Hits: 135376 (96.70%) | Misses: 4624 (3.30%) 141250 Gets/sec | Hits: 136523 (96.65%) | Misses: 4727 (3.35%) diff --git a/topics/replication.md b/topics/replication.md index be9687e9e4..18e83d4b40 100644 --- a/topics/replication.md +++ b/topics/replication.md @@ -138,8 +138,8 @@ two random instances mean they have the same data set. Diskless replication --- -Normally a full resynchronization requires to create an RDB file on disk, -then reload the same RDB from disk in order to feed the slaves with the data. +Normally a full resynchronization requires creating an RDB file on disk, +then reloading the same RDB from disk in order to feed the slaves with the data. With slow disks this can be a very stressing operation for the master. Redis version 2.8.18 is the first version to have support for diskless @@ -162,8 +162,8 @@ in memory by the master to perform the partial resynchronization. See the exampl `redis.conf` shipped with the Redis distribution for more information. Diskless replication can be enabled using the `repl-diskless-sync` configuration -parameter. The delay to start the transfer in order to wait more slaves to -arrive after the first one, is controlled by the `repl-diskless-sync-delay` +parameter. The delay to start the transfer in order to wait for more slaves to +arrive after the first one is controlled by the `repl-diskless-sync-delay` parameter. Please refer to the example `redis.conf` file in the Redis distribution for more details. @@ -251,13 +251,13 @@ scripts. To implement such a feature Redis cannot rely on the ability of the master and slave to have synchronized clocks, since this is a problem that cannot be solved -and would result into race conditions and diverging data sets, so Redis +and would result in race conditions and diverging data sets, so Redis uses three main techniques in order to make the replication of expired keys able to work: 1. Slaves don't expire keys, instead they wait for masters to expire the keys. When a master expires a key (or evict it because of LRU), it synthesizes a `DEL` command which is transmitted to all the slaves. -2. However because of master-driven expire, sometimes slaves may still have in memory keys that are already logically expired, since the master was not able to provide the `DEL` command in time. In order to deal with that the slave uses its logical clock in order to report that a key does not exist **only for read operations** that don't violate the consistency of the data set (as new commands from the master will arrive). In this way slaves avoid to report logically expired keys are still existing. In practical terms, an HTML fragments cache that uses slaves to scale will avoid returning items that are already older than the desired time to live. -3. During Lua scripts executions no keys expires are performed. As a Lua script runs, conceptually the time in the master is frozen, so that a given key will either exist or not for all the time the script runs. This prevents keys to expire in the middle of a script, and is needed in order to send the same script to the slave in a way that is guaranteed to have the same effects in the data set. +2. 
However because of master-driven expire, sometimes slaves may still have in memory keys that are already logically expired, since the master was not able to provide the `DEL` command in time. In order to deal with that the slave uses its logical clock in order to report that a key does not exist **only for read operations** that don't violate the consistency of the data set (as new commands from the master will arrive). In this way slaves avoid reporting logically expired keys as still existing. In practical terms, an HTML fragment cache that uses slaves to scale will avoid returning items that are already older than the desired time to live.
+3. During Lua script execution no key expiries are performed. As a Lua script runs, conceptually the time in the master is frozen, so that a given key will either exist or not for all the time the script runs. This prevents keys expiring in the middle of a script, and is needed in order to send the same script to the slave in a way that is guaranteed to have the same effects in the data set.

Once a slave is promoted to a master it will start to expire keys independently, and will not require any help from its old master.

diff --git a/topics/security.md b/topics/security.md
index 7bc153a0d3..a320eb2846 100644
--- a/topics/security.md
+++ b/topics/security.md
@@ -83,7 +83,7 @@ unauthenticated clients. A client can authenticate itself by sending the **AUTH** command followed by the password.

The password is set by the system administrator in clear text inside the
-redis.conf file. It should be long enough to prevent brute force attacks 
+redis.conf file. It should be long enough to prevent brute force attacks
for two reasons:

* Redis is very fast at serving queries. Many passwords per second can be tested by an external client.

@@ -91,11 +91,11 @@ for two reasons:

The goal of the authentication layer is to optionally provide a layer of
redundancy. If firewalling or any other system implemented to protect Redis
-from external attackers fail, an external client will still not be able to 
+from external attackers fail, an external client will still not be able to
access the Redis instance without knowledge of the authentication password.

-The AUTH command, like every other Redis command, is sent unencrypted, so it 
-does not protect against an attacker that has enough access to the network to 
+The AUTH command, like every other Redis command, is sent unencrypted, so it
+does not protect against an attacker that has enough access to the network to
perform eavesdropping.

Data encryption support

@@ -117,8 +117,8 @@ service. In this context, normal users should probably not be able to call
the Redis **CONFIG** command to alter the configuration of the instance,
but the systems that provide and remove instances should be able to do so.

-In this case, it is possible to either rename or completely shadow commands from 
-the command table. This feature is available as a statement that can be used 
+In this case, it is possible to either rename or completely shadow commands from
+the command table. This feature is available as a statement that can be used
inside the redis.conf configuration file. For example:

    rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52

@@ -136,21 +136,21 @@ the ability to insert data into Redis that triggers pathological (worst case)
algorithm complexity on data structures implemented inside Redis internals.
For instance an attacker could supply, via a web form, a set of strings that
-is known to hash to the same bucket into a hash table in order to turn the
+are known to hash to the same bucket of a hash table in order to turn the
O(1) expected time (the average time) to the O(N) worst case, consuming more
CPU than expected, and ultimately causing a Denial of Service.

To prevent this specific attack, Redis uses a per-execution pseudo-random
seed to the hash function.

-Redis implements the SORT command using the qsort algorithm. Currently, 
+Redis implements the SORT command using the qsort algorithm. Currently,
the algorithm is not randomized, so it is possible to trigger a quadratic
worst-case behavior by carefully selecting the right set of inputs.

String escaping and NoSQL injection
---

-The Redis protocol has no concept of string escaping, so injection 
+The Redis protocol has no concept of string escaping, so injection
is impossible under normal circumstances using a normal client library.
The protocol uses prefixed-length strings and is completely binary safe.

@@ -162,8 +162,8 @@ While it would be a very strange use case, the application should avoid composin
Code security
---

-In a classical Redis setup, clients are allowed full access to the command set, 
-but accessing the instance should never result in the ability to control the 
+In a classical Redis setup, clients are allowed full access to the command set,
+but accessing the instance should never result in the ability to control the
system where Redis is running.

Internally, Redis uses all the well known practices for writing secure code, to

diff --git a/topics/sentinel-clients.md b/topics/sentinel-clients.md
index 659b3114da..1fd8435987 100644
--- a/topics/sentinel-clients.md
+++ b/topics/sentinel-clients.md
@@ -6,8 +6,8 @@ Guidelines for Redis clients with support for Redis Sentinel

Redis Sentinel is a monitoring solution for Redis instances that handles
automatic failover of Redis masters and service discovery (who is the current
master for a given group of instances?). Since Sentinel is both responsible
-to reconfigure instances during failovers, and to provide configurations to
-clients connecting to Redis masters or slaves, clients require to have
+for reconfiguring instances during failovers, and providing configurations to
+clients connecting to Redis masters or slaves, clients are required to have
explicit support for Redis Sentinel. This document is targeted at Redis client developers that want to support Sentinel in their client implementations with the following goals:

@@ -20,7 +20,7 @@ For details about how Redis Sentinel works, please check the [Redis Documentatio

Redis service discovery via Sentinel
===

-Redis Sentinel identify every master with a name like "stats" or "cache".
+Redis Sentinel identifies every master with a name like "stats" or "cache".
Every name actually identifies a *group of instances*, composed of a master
and a variable number of slaves.

diff --git a/topics/sentinel.md b/topics/sentinel.md
index 614176c2d2..ccf79ac400 100644
--- a/topics/sentinel.md
+++ b/topics/sentinel.md
@@ -3,7 +3,7 @@ Redis Sentinel Documentation

Redis Sentinel provides high availability for Redis. In practical terms this
means that using Sentinel you can create a Redis deployment that resists
-without human intervention to certain kind of failures.
+certain kinds of failures without human intervention.
Redis Sentinel also provides other collateral tasks such as monitoring, notifications and acts as a configuration provider for clients. @@ -11,8 +11,8 @@ notifications and acts as a configuration provider for clients. This is the full list of Sentinel capabilities at a macroscopical level (i.e. the *big picture*): * **Monitoring**. Sentinel constantly checks if your master and slave instances are working as expected. -* **Notification**. Sentinel can notify the system administrator, another computer programs, via an API, that something is wrong with one of the monitored Redis instances. -* **Automatic failover**. If a master is not working as expected, Sentinel can start a failover process where a slave is promoted to master, the other additional slaves are reconfigured to use the new master, and the applications using the Redis server informed about the new address to use when connecting. +* **Notification**. Sentinel can notify the system administrator, or other computer programs, via an API, that something is wrong with one of the monitored Redis instances. +* **Automatic failover**. If a master is not working as expected, Sentinel can start a failover process where a slave is promoted to master, the other additional slaves are reconfigured to use the new master, and the applications using the Redis server are informed about the new address to use when connecting. * **Configuration provider**. Sentinel acts as a source of authority for clients service discovery: clients connect to Sentinels in order to ask for the address of the current Redis master responsible for a given service. If a failover occurs, Sentinels will report the new address. Distributed nature of Sentinel @@ -23,7 +23,7 @@ Redis Sentinel is a distributed system: Sentinel itself is designed to run in a configuration where there are multiple Sentinel processes cooperating together. The advantage of having multiple Sentinel processes cooperating are the following: 1. Failure detection is performed when multiple Sentinels agree about the fact a given master is no longer available. This lowers the probability of false positives. -2. Sentinel works even if not all the Sentinel processes are working, making the system robust against failures. There is no fun in having a fail over system which is itself a single point of failure, after all. +2. Sentinel works even if not all the Sentinel processes are working, making the system robust against failures. There is no fun in having a failover system which is itself a single point of failure, after all. The sum of Sentinels, Redis instances (masters and slaves) and clients connecting to Sentinel and Redis, are also a larger distributed system with @@ -110,7 +110,7 @@ order to retain the information in case of restart). The configuration is also rewritten every time a slave is promoted to master during a failover and every time a new Sentinel is discovered. -The example configuration above, basically monitor two sets of Redis +The example configuration above basically monitors two sets of Redis instances, each composed of a master and an undefined number of slaves. One set of instances is called `mymaster`, and the other `resque`. @@ -167,7 +167,7 @@ Example Sentinel deployments --- Now that you know the basic information about Sentinel, you may wonder where -you should place your Sentinel processes, how much Sentinel processes you need +you should place your Sentinel processes, how many Sentinel processes you need and so forth. This section shows a few example deployments. 
We use ASCII art in order to show you configuration examples in a *graphical*
@@ -262,10 +262,10 @@ a Redis process and a Sentinel process.

If the master M1 fails, S2 and S3 will agree about the failure and will
be able to authorize a failover, making clients able to continue.

-In every Sentinel setup, being Redis asynchronously replicated, there is
-always the risk of losing some write because a given acknowledged write
+In every Sentinel setup, as Redis uses asynchronous replication, there is
+always the risk of losing some writes because a given acknowledged write
may not be able to reach the slave which is promoted to master. However in
-the above setup there is an higher risk due to clients partitioned away
+the above setup there is a higher risk due to clients being partitioned away
with an old master, like in the following picture:

         +----+

discarding its data set.

This problem can be mitigated using the following Redis replication
feature, that allows a master to stop accepting writes if it detects that
-is no longer able to transfer its writes to the specified number of slaves.
+it is no longer able to transfer its writes to the specified number of slaves.

    min-slaves-to-write 1
    min-slaves-max-lag 10

With the above configuration (please see the self-commented `redis.conf` example in the Redis distribution for more information) a Redis instance, when acting as a master, will stop accepting writes if it can't write to at least 1 slave. Since replication is asynchronous *not being able to write* actually means that the slave is either disconnected, or is not sending us asynchronous acknowledges for more than the specified `max-lag` number of seconds.

-Using this configuration the old Redis master M1 in the above example, will become unavailable after 10 seconds. When the partition heals, the Sentinel configuration will converge to the new one, the client C1 will be able to fetch a valid configuration and will continue with the new master.
+Using this configuration, the old Redis master M1 in the above example will become unavailable after 10 seconds. When the partition heals, the Sentinel configuration will converge to the new one, the client C1 will be able to fetch a valid configuration and will continue with the new master.

However there is no free lunch. With this refinement, if the two slaves are down, the master will stop accepting writes. It's a trade off.

@@ -333,9 +333,8 @@ an application server, a Rails app, or something like that.

If the box where M1 and S1 are running fails, the failover will happen
without issues, however it is easy to see that different network partitions
will result in different behaviors. For example Sentinel will not be able
-to setup if the network between the clients and the Redis servers will
-get disconnected, since the Redis master and slave will be both not
-available.
+to set up if the network between the clients and the Redis servers is
+disconnected, since the Redis master and slave will both be unavailable.

Note that if C3 gets partitioned with M1 (hardly possible with
the network described above, but more likely possible with different

@@ -348,12 +347,12 @@ otherwise the master would never be available during slave failures.

So this is a valid setup but the setup in the Example 2 has advantages
such as the HA system of Redis running in the same boxes as Redis itself
which may be simpler to manage, and the ability to put a bound on the amount
-of time a master into the minority partition can receive writes.
+of time a master in the minority partition can receive writes.

Example 4: Sentinel client side with less than three clients
---

-The setup described in the Example 3 cannot be used if there are not enough
+The setup described in Example 3 cannot be used if there are fewer than
three boxes in the client side (for example three web servers). In this
case we need to resort to a mixed setup like the following:

         +----+

Configuration: quorum = 3

This is similar to the setup in Example 3, but here we run four Sentinels
-in the four boxes we have available. If the master M1 becomes not available
+in the four boxes we have available. If the master M1 becomes unavailable
the other three Sentinels will perform the failover.

In theory this setup works removing the box where C2 and S4 are running, and

@@ -728,7 +727,7 @@ more time than the configured Lua script time limit. When this happens
before triggering a fail over Redis Sentinel will try to send a `SCRIPT KILL`
command, that will only succeed if the script was read-only.

-If the instance will still be in an error condition after this try, it will
+If the instance is still in an error condition after this attempt, it will
eventually be failed over.

Slaves priority
---

@@ -812,8 +811,8 @@ Sentinel requires explicit client support, unless the system is configured to ex

More advanced concepts
===

-In the following sections we'll cover a few details about how Sentinel work,
-without to resorting to implementation details and algorithms that will be
+In the following sections we'll cover a few details about how Sentinel works,
+without resorting to implementation details and algorithms that will be
covered in the final part of this document.

SDOWN and ODOWN failure state

@@ -929,7 +928,7 @@ the time the master is also not available from the point of view of the
Sentinel doing the failover, is considered to be not suitable for the failover
and is skipped.

-In more rigorous terms, a slave whose the `INFO` output suggests to be
+In more rigorous terms, a slave whose `INFO` output suggests it has been
disconnected from the master for more than:

    (down-after-milliseconds * 10) + milliseconds_since_master_is_in_SDOWN_state

@@ -986,7 +985,7 @@ If instead the quorum is configured to 5, all the Sentinels must agree about the

This means that the quorum can be used to tune Sentinel in two ways:

-1. If a the quorum is set to a value smaller than the majority of Sentinels we deploy, we are basically making Sentinel more sensible to master failures, triggering a failover as soon as even just a minority of Sentinels is no longer able to talk with the master.
+1. If the quorum is set to a value smaller than the majority of Sentinels we deploy, we are basically making Sentinel more sensitive to master failures, triggering a failover as soon as even just a minority of Sentinels is no longer able to talk with the master.
2. If a quorum is set to a value greater than the majority of Sentinels, we are making Sentinel able to failover only when there are a very large number (larger than majority) of well connected Sentinels which agree about the master being down.
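For instance, assuming five deployed Sentinels, a hypothetical configuration line like the following asks for a quorum of 2, making failure detection more sensitive, while a value like 4 would require more than the majority to agree before starting a failover (master name and address are placeholders):

    sentinel monitor mymaster 127.0.0.1 6379 2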
Configuration epochs From 5163349c2dd7a7def80e1c8fab6a2d1be34bc15a Mon Sep 17 00:00:00 2001 From: rabotyaga Date: Wed, 16 Oct 2019 01:21:07 +0300 Subject: [PATCH 0169/1457] fix typos/grammar @ streams-intro.md (#1023) --- topics/streams-intro.md | 34 ++++++++++++++++------------------ 1 file changed, 16 insertions(+), 18 deletions(-) diff --git a/topics/streams-intro.md b/topics/streams-intro.md index 17a9b09f75..f88ff98b3b 100644 --- a/topics/streams-intro.md +++ b/topics/streams-intro.md @@ -169,7 +169,7 @@ The blocking form of **XREAD** is also able to listen to multiple Streams, just Similarly to blocking list operations, blocking stream reads are *fair* from the point of view of clients waiting for data, since the semantics is FIFO style. The first client that blocked for a given stream is the first that will be unblocked as new items are available. -**XREAD** has no other options than **COUNT** and **BLOCK**, so it's a pretty basic command with a specific purpose to attack consumers to one or multiple streams. More powerful features to consume streams are available using the consumer groups API, however reading via consumer groups is implemented by a different command called **XREADGROUP**, covered in the next section of this guide. +**XREAD** has no other options than **COUNT** and **BLOCK**, so it's a pretty basic command with a specific purpose to attach consumers to one or multiple streams. More powerful features to consume streams are available using the consumer groups API, however reading via consumer groups is implemented by a different command called **XREADGROUP**, covered in the next section of this guide. ## Consumer groups @@ -345,7 +345,7 @@ check_backlog = true while true # Pick the ID based on the iteration: the first time we want to # read our pending messages, in case we crashed and are recovering. - # Once we consumer our history, we can start getting new messages. + # Once we consumed our history, we can start getting new messages. if check_backlog myid = $lastid else @@ -353,7 +353,7 @@ while true end items = r.xreadgroup('GROUP',GroupName,ConsumerName,'BLOCK','2000','COUNT','10','STREAMS',:my_stream_key,myid) - + if items == nil puts "Timeout!" next @@ -362,7 +362,7 @@ while true # If we receive an empty reply, it means we were consuming our history # and that the history is now empty. Let's start to consume new messages. check_backlog = false if items[0][1].length == 0 - + items[0][1].each{|i| id,fields = i @@ -459,9 +459,9 @@ This is the result of the command execution: The message was successfully claimed by Alice, that can now process the message and acknowledge it, and move things forward even if the original consumer is not recovering. -It is clear from the example above that as a side effect of successfully claiming a given message, the **XCLAIM** command also returns it. However this is not mandatory. The **JUSTID** option can be used in order to return just the IDs of the message successfully claimed. This is useful if you want to reduce the bandwidth used between the client and the server, but also the performance of the command, and you are not interested in the message because later your consumer is implemented in a way that will rescan the history of pending messages from time to time. +It is clear from the example above that as a side effect of successfully claiming a given message, the **XCLAIM** command also returns it. However this is not mandatory. 
The **JUSTID** option can be used in order to return just the IDs of the messages successfully claimed. This is useful if you want to reduce the bandwidth used between the client and the server (and also improve the performance of the command), and you are not interested in the message because your consumer is implemented in a way that it will rescan the history of pending messages from time to time.

Claiming may also be implemented by a separate process: one that just checks the list of pending messages, and assigns idle messages to consumers that appear to be active. Active consumers can be obtained using one of the observability features of Redis streams. This is the topic of the next section.

## Claiming and the delivery counter

@@ -471,11 +471,11 @@ When there are failures, it is normal that messages are delivered multiple times

## Streams observability

Messaging systems that lack observability are very hard to work with. Not knowing who is consuming messages, what messages are pending, the set of consumer groups active in a given stream, makes everything opaque. For this reason, Redis streams and consumer groups have different ways to observe what is happening. We already covered **XPENDING**, which allows us to inspect the list of messages that are under processing at a given moment, together with their idle time and number of deliveries.

However we may want to do more than that, and the **XINFO** command is an observability interface that can be used with sub-commands in order to get information about streams or consumer groups.

This command uses subcommands in order to show different information about the status of the stream and its consumer groups. For instance **XINFO STREAM ** reports information about the stream itself.

```
> XINFO STREAM mystream
 1) length
 2) (integer) 13
 3) radix-tree-keys
 4) (integer) 1
 5) radix-tree-nodes
 6) (integer) 2
 7) groups
 8) (integer) 2
 9) first-entry
-10) 1) 1524494395530-0
-    2) 1) "a"
-       2) "1"
-       3) "b"
-       4) "2"
+10) 1) 1526569495631-0
+    2) 1) "message"
+       2) "apple"
11) last-entry
12) 1) 1526569544280-0
    2) 1) "message"
       2) "banana"
```

@@ -550,16 +548,16 @@ In case you do not remember the syntax of the command, just ask for help to the

## Differences with Kafka (TM) partitions

-Consumer groups in Redis streams may resemble in some way Kafka (TM) partitioning-based consumer groups, however note that Redis streams are practically very different.
The partitions are only *logical* and the messages are just put into a single Redis key, so the way the different clients are served is based on who is ready to process new messages, and not from which partition clients are reading. For instance, if the consumer C3 at some point fails permanently, Redis will continue to serve C1 and C2 will all the new messages arriving, as if now there are only two *logical* partitions. +Consumer groups in Redis streams may resemble in some way Kafka (TM) partitioning-based consumer groups, however note that Redis streams are practically very different. The partitions are only *logical* and the messages are just put into a single Redis key, so the way the different clients are served is based on who is ready to process new messages, and not from which partition clients are reading. For instance, if the consumer C3 at some point fails permanently, Redis will continue to serve C1 and C2 all the new messages arriving, as if now there are only two *logical* partitions. Similarly, if a given consumer is much faster at processing messages than the other consumers, this consumer will receive proportionally more messages in the same unit of time. This is possible since Redis tracks all the unacknowledged messages explicitly, and remembers who received which message and the ID of the first message never delivered to any consumer. -However, this also means that in Redis if you really want to partition messages about the same stream into multiple Redis instances, you have to use multiple keys and some sharding system such as Redis Cluster or some other application-specific sharding system. A single Redis stream is not automatically partitioned to multiple instances. +However, this also means that in Redis if you really want to partition messages in the same stream into multiple Redis instances, you have to use multiple keys and some sharding system such as Redis Cluster or some other application-specific sharding system. A single Redis stream is not automatically partitioned to multiple instances. We could say that schematically the following is true: * If you use 1 stream -> 1 consumer, you are processing messages in order. -* If you use N stream with N consumers, so only a given consumer hits a subset of the N streams, you can scale the above model of 1 stream -> 1 consumer. +* If you use N streams with N consumers, so that only a given consumer hits a subset of the N streams, you can scale the above model of 1 stream -> 1 consumer. * If you use 1 stream -> N consumers, you are load balancing to N consumers, however in that case, messages about the same logical item may be consumed out of order, because a given consumer may process message 3 faster than another consumer is processing message 4. So basically Kafka partitions are more similar to using N different Redis keys. @@ -567,7 +565,7 @@ While Redis consumer groups are a server-side load balancing system of messages ## Capped Streams -Many applications do not want to collect data into a stream forever. Sometimes it is useful to have at maximum a given number of items inside a stream, other times once a given size is reached, it is useful to move data from Redis to a storage which is not in memory and not as fast but suited to take the history for potentially decades to come. Redis streams have some support for this. One the **MAXLEN** option of the **XADD** command. Such option is very simple to use: +Many applications do not want to collect data into a stream forever. 
Sometimes it is useful to have at maximum a given number of items inside a stream, other times once a given size is reached, it is useful to move data from Redis to a storage which is not in memory and not as fast but suited to take the history for potentially decades to come. Redis streams have some support for this. One is the **MAXLEN** option of the **XADD** command. This option is very simple to use: ``` > XADD mystream MAXLEN 2 * value 1 @@ -597,7 +595,7 @@ XADD mystream MAXLEN ~ 1000 * ... entry fields here ... The `~` argument between the **MAXLEN** option and the actual count means, I don't really need this to be exactly 1000 items. It can be 1000 or 1010 or 1030, just make sure to save at least 1000 items. With this argument, the trimming is performed only when we can remove a whole node. This makes it much more efficient, and it is usually what you want. -There is also the **XTRIM** command available, which performs something very similar to what the **MAXLEN** option does above, but this command does not need to add anything, can be run against any stream in a standalone way. +There is also the **XTRIM** command available, which performs something very similar to what the **MAXLEN** option does above, but this command does not need to add anything, it can be run against any stream in a standalone way. ``` > XTRIM mystream MAXLEN 10 From 182b4ffa1fc1ec21d56cd08c6fa4ec1e055c1743 Mon Sep 17 00:00:00 2001 From: Guy Korland Date: Wed, 16 Oct 2019 01:24:26 +0300 Subject: [PATCH 0170/1457] adding missing GROUP keyword (#1028) --- commands/xreadgroup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/xreadgroup.md b/commands/xreadgroup.md index e19324e1f5..6379d3c779 100644 --- a/commands/xreadgroup.md +++ b/commands/xreadgroup.md @@ -72,7 +72,7 @@ process them. In pseudo-code: ``` WHILE true - entries = XREADGROUP $GroupName $ConsumerName BLOCK 2000 COUNT 10 STREAMS mystream > + entries = XREADGROUP GROUP $GroupName $ConsumerName BLOCK 2000 COUNT 10 STREAMS mystream > if entries == nil puts "Timeout... try again" CONTINUE From 83f4fad8ed2423b9c5f2181ff0b6d9dade0e2c89 Mon Sep 17 00:00:00 2001 From: Guy Korland Date: Wed, 16 Oct 2019 01:47:17 +0300 Subject: [PATCH 0171/1457] fix typo (#1032) --- commands/xack.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/xack.md b/commands/xack.md index 145ce21b38..0c110cebbe 100644 --- a/commands/xack.md +++ b/commands/xack.md @@ -9,7 +9,7 @@ So new calls to `XREADGROUP` to grab the messages history for a consumer Similarly the pending message will be listed by the `XPENDING` command, that inspects the PEL. -Once a consumer *succesfully* processes a message, it should call `XACK` +Once a consumer *successfully* processes a message, it should call `XACK` so that such message does not get processed again, and as a side effect, the PEL entry about this message is also purged, releasing memory from the Redis server. 
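A minimal sketch of the acknowledge cycle that the `XACK` patch above documents, reusing the illustrative stream, group, and entry values from the surrounding streams examples (`mystream`, `mygroup`, and a consumer named Alice are placeholder names, not anything fixed): the consumer reads an entry through the group, processes it, and then acknowledges it so the PEL entry is purged.

```
> XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS mystream >
1) 1) "mystream"
   2) 1) 1) "1526569495631-0"
         2) 1) "message"
            2) "apple"
> XACK mystream mygroup 1526569495631-0
(integer) 1
```

Once `XACK` returns 1, a subsequent `XPENDING mystream mygroup` would no longer list that entry.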
From f3ff2f58248a6071c5de9a57623c6448f6a26ee4 Mon Sep 17 00:00:00 2001 From: Guy Korland Date: Wed, 16 Oct 2019 01:47:59 +0300 Subject: [PATCH 0172/1457] fix typo (#1033) --- commands/xpending.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/xpending.md b/commands/xpending.md index d1a0f65fcc..1b7b0747e6 100644 --- a/commands/xpending.md +++ b/commands/xpending.md @@ -23,7 +23,7 @@ explained in the [streams intro](/topics/streams-intro) and in the When `XPENDING` is called with just a key name and a consumer group name, it just outputs a summary about the pending messages in a given consumer group. In the following example, we create a consumed group and -immediatelycreate a pending message by reading from the group with +immediately create a pending message by reading from the group with `XREADGROUP`. ``` From 2c646c932e0f7c1a75aeff4bdcba1607dc197d52 Mon Sep 17 00:00:00 2001 From: Guy Korland Date: Wed, 16 Oct 2019 01:48:36 +0300 Subject: [PATCH 0173/1457] fix typos (#1034) --- commands/xrange.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands/xrange.md b/commands/xrange.md index 1beac0f646..231edf5e6a 100644 --- a/commands/xrange.md +++ b/commands/xrange.md @@ -1,5 +1,5 @@ The command returns the stream entries matching a given range of IDs. -The range is specified by a minimum and maximum ID. All the entires having +The range is specified by a minimum and maximum ID. All the entries having an ID between the two specified or exactly one of the two IDs specified (closed interval) are returned. @@ -7,7 +7,7 @@ The `XRANGE` command has a number of applications: * Returning items in a specific time range. This is possible because Stream IDs are [related to time](/topics/streams-intro). -* Iteratating a stream incrementally, returning just +* Iterating a stream incrementally, returning just a few items at every iteration. However it is semantically much more robust than the `SCAN` family of functions. * Fetching a single entry from a stream, providing the ID of the entry From 16c4dc929cb7d2e335f724a9334f0eacc38a7cbd Mon Sep 17 00:00:00 2001 From: Guy Korland Date: Wed, 16 Oct 2019 01:49:22 +0300 Subject: [PATCH 0174/1457] fix typos (#1035) --- commands/xreadgroup.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands/xreadgroup.md b/commands/xreadgroup.md index 6379d3c779..a534c96265 100644 --- a/commands/xreadgroup.md +++ b/commands/xreadgroup.md @@ -12,7 +12,7 @@ so that following how this command works will be simpler. The difference between this command and the vanilla `XREAD` is that this one supports consumer groups. -Without consumer groups, just using `XREAD`, all the clients are served with all the entries arriving in a stream. Instead using consumer groups with `XREADGROUP`, it is possible to create groups of clients that consume different parts of the messages arriving in a given stream. If, for instance, the stream gets the new entires A, B, and C and there are two consumers reading via a consumer group, one client will get, for instance, the messages A and C, and the other the message B, and so forth. +Without consumer groups, just using `XREAD`, all the clients are served with all the entries arriving in a stream. Instead using consumer groups with `XREADGROUP`, it is possible to create groups of clients that consume different parts of the messages arriving in a given stream. 
If, for instance, the stream gets the new entries A, B, and C and there are two consumers reading via a consumer group, one client will get, for instance, the messages A and C, and the other the message B, and so forth. Within a consumer group, a given consumer (that is, just a client consuming messages from the stream), has to identify with an unique *consumer name*. Which is just a string. @@ -21,7 +21,7 @@ One of the guarantees of consumer groups is that a given consumer can only see t This is how to understand if you want to use a consumer group or not: 1. If you have a stream and multiple clients, and you want all the clients to get all the messages, you do not need a consumer group. -2. If you have a stream and multiple clients, and you want the stream to be *partitioned* or *shareded* across your clients, so that each client will get a sub set of the messages arriving in a stream, you need a consumer group. +2. If you have a stream and multiple clients, and you want the stream to be *partitioned* or *sharded* across your clients, so that each client will get a sub set of the messages arriving in a stream, you need a consumer group. ## Differences between XREAD and XREADGROUP From 594a27090ca3dbd9ed3b7da115be0c6acafefa20 Mon Sep 17 00:00:00 2001 From: Brian Picciano Date: Tue, 15 Oct 2019 16:50:55 -0600 Subject: [PATCH 0175/1457] Update project url of the radix go redis client (#1037) After a few years of design and testing, with lots of feedback from people testing it, I'm ready to say that the newest version of radix is ready to be used. This version is faster and more flexible, and is better able to handle new features like streams and RESP3. Due to go's new module dependency system, this will be the last time the url of the project needs to change due to a major version change. --- clients.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clients.json b/clients.json index ceba053bd2..9f2c16bfa0 100644 --- a/clients.json +++ b/clients.json @@ -106,7 +106,7 @@ { "name": "Radix", "language": "Go", - "repository": "https://github.com/mediocregopher/radix.v2", + "repository": "https://github.com/mediocregopher/radix", "description": "MIT licensed Redis client which supports pipelining, pooling, redis cluster, scripting, pub/sub, scanning, and more.", "authors": ["fzzbt", "mediocre_gopher"], "recommended": true, From f9d60cdad251815a2576f2cd9b6f62b16de2d1e9 Mon Sep 17 00:00:00 2001 From: Shubham Bhattar Date: Wed, 16 Oct 2019 04:22:09 +0530 Subject: [PATCH 0176/1457] fixed typo. In the quorum argument, it should mark the master as failing instead of the slave (#1039) --- topics/sentinel.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/sentinel.md b/topics/sentinel.md index ccf79ac400..eac3da92bc 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -125,7 +125,7 @@ The first line is used to tell Redis to monitor a master called *mymaster*, that is at address 127.0.0.1 and port 6379, with a quorum of 2. Everything is pretty obvious but the **quorum** argument: -* The **quorum** is the number of Sentinels that need to agree about the fact the master is not reachable, in order for really mark the slave as failing, and eventually start a fail over procedure if possible. +* The **quorum** is the number of Sentinels that need to agree about the fact the master is not reachable, in order to really mark the master as failing, and eventually start a failover procedure if possible. 
* However **the quorum is only used to detect the failure**. In order to actually perform a failover, one of the Sentinels need to be elected leader for the failover and be authorized to proceed. This only happens with the vote of the **majority of the Sentinel processes**. So for example if you have 5 Sentinel processes, and the quorum for a given From 6a686cc9df54a0ea9e73bc8841b75d23123c6ac9 Mon Sep 17 00:00:00 2001 From: Roman Filonenko Date: Wed, 16 Oct 2019 00:55:33 +0200 Subject: [PATCH 0177/1457] fix the streams link from the intro (#1041) --- topics/introduction.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/introduction.md b/topics/introduction.md index 1268a6e140..bbe24e2e80 100644 --- a/topics/introduction.md +++ b/topics/introduction.md @@ -2,7 +2,7 @@ Introduction to Redis === Redis is an open source (BSD licensed), in-memory **data structure store**, used as a database, cache and message broker. It supports data structures such as -[strings](/topics/data-types-intro#strings), [hashes](/topics/data-types-intro#hashes), [lists](/topics/data-types-intro#lists), [sets](/topics/data-types-intro#sets), [sorted sets](/topics/data-types-intro#sorted-sets) with range queries, [bitmaps](/topics/data-types-intro#bitmaps), [hyperloglogs](/topics/data-types-intro#hyperloglogs), [geospatial indexes](/commands/geoadd) with radius queries and [streams](/topics/streams-intro.md). Redis has built-in [replication](/topics/replication), [Lua scripting](/commands/eval), [LRU eviction](/topics/lru-cache), [transactions](/topics/transactions) and different levels of [on-disk persistence](/topics/persistence), and provides high availability via [Redis Sentinel](/topics/sentinel) and automatic partitioning with [Redis Cluster](/topics/cluster-tutorial). +[strings](/topics/data-types-intro#strings), [hashes](/topics/data-types-intro#hashes), [lists](/topics/data-types-intro#lists), [sets](/topics/data-types-intro#sets), [sorted sets](/topics/data-types-intro#sorted-sets) with range queries, [bitmaps](/topics/data-types-intro#bitmaps), [hyperloglogs](/topics/data-types-intro#hyperloglogs), [geospatial indexes](/commands/geoadd) with radius queries and [streams](/topics/streams-intro). Redis has built-in [replication](/topics/replication), [Lua scripting](/commands/eval), [LRU eviction](/topics/lru-cache), [transactions](/topics/transactions) and different levels of [on-disk persistence](/topics/persistence), and provides high availability via [Redis Sentinel](/topics/sentinel) and automatic partitioning with [Redis Cluster](/topics/cluster-tutorial). You can run **atomic operations** on these types, like [appending to a string](/commands/append); From bb95e26695793a89cfaf1ef6f1cb8df41de06733 Mon Sep 17 00:00:00 2001 From: Origin Date: Wed, 16 Oct 2019 06:57:34 +0800 Subject: [PATCH 0178/1457] Update clients.json (#1043) add client for nodejs --- clients.json | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/clients.json b/clients.json index 9f2c16bfa0..8d7018176d 100644 --- a/clients.json +++ b/clients.json @@ -759,6 +759,16 @@ "active": true }, + { + "name": "tedis", + "language": "Node.js", + "repository": "https://github.com/myour-cc/tedis", + "description": "Tedis is a redis client developed for nodejs platform. Its name was inspired by the Java platform jedis and the development language was typescript. 
Therefore, Tedis is named as Tedis", + "authors": ["dasoncheng"], + "recommended": true, + "active": true + }, + { "name": "redis-fast-driver", "language": "Node.js", From 9546410efd00ee953e40760d5833ae2448363eda Mon Sep 17 00:00:00 2001 From: LiZhen Date: Wed, 16 Oct 2019 07:04:34 +0800 Subject: [PATCH 0179/1457] add distlock csharp implementation repertory (#1046) --- topics/distlock.md | 1 + 1 file changed, 1 insertion(+) diff --git a/topics/distlock.md b/topics/distlock.md index 6fbd013800..ae70a53ab9 100644 --- a/topics/distlock.md +++ b/topics/distlock.md @@ -36,6 +36,7 @@ already available that can be used for reference. * [Redlock-cs](https://github.com/kidfashion/redlock-cs) (C#/.NET implementation). * [RedLock.net](https://github.com/samcook/RedLock.net) (C#/.NET implementation). Includes async and lock extension support. * [ScarletLock](https://github.com/psibernetic/scarletlock) (C# .NET implementation with configurable datastore) +* [Redlock4Net](https://github.com/LiZhenNet/Redlock4Net) (C# .NET implementation) * [node-redlock](https://github.com/mike-marcacci/node-redlock) (NodeJS implementation). Includes support for lock extension. Safety and Liveness guarantees From 9a179f9e3c0c23843afd987a72a16a752c2c364b Mon Sep 17 00:00:00 2001 From: Richard Lin Date: Tue, 15 Oct 2019 16:07:40 -0700 Subject: [PATCH 0180/1457] Update link to deprecated redsync.go repository (#1049) --- topics/distlock.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/distlock.md b/topics/distlock.md index ae70a53ab9..9697947062 100644 --- a/topics/distlock.md +++ b/topics/distlock.md @@ -29,7 +29,7 @@ already available that can be used for reference. * [Redlock-php](https://github.com/ronnylt/redlock-php) (PHP implementation). * [PHPRedisMutex](https://github.com/malkusch/lock#phpredismutex) (further PHP implementation) * [cheprasov/php-redis-lock](https://github.com/cheprasov/php-redis-lock) (PHP library for locks) -* [Redsync.go](https://github.com/hjr265/redsync.go) (Go implementation). +* [Redsync](https://github.com/go-redsync/redsync) (Go implementation). * [Redisson](https://github.com/mrniko/redisson) (Java implementation). * [Redis::DistLock](https://github.com/sbertrang/redis-distlock) (Perl implementation). * [Redlock-cpp](https://github.com/jacket-code/redlock-cpp) (C++ implementation). 
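All of the implementations listed above build on the single-instance locking primitive that `distlock.md` describes: acquiring the lock is one `SET` with the `NX` and `PX` options, where the value is a random token later used to verify ownership before releasing. A minimal sketch, with key name, token, and TTL purely illustrative:

```
> SET resource_name my_random_value NX PX 30000
OK
```

The reply is `OK` only if the key did not already exist, and the `PX 30000` part makes the lock auto-expire after 30 seconds so a crashed client cannot hold it forever.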
From 5c107c34b141eeeaa7c8170936540eb006c0f272 Mon Sep 17 00:00:00 2001 From: nashid Date: Wed, 16 Oct 2019 00:09:23 +0100 Subject: [PATCH 0181/1457] Add the laserdisc scala client (#1050) * add the laserdisc scala client * address PR comments --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index 278c48b99c..863c7cd40a 100644 --- a/clients.json +++ b/clients.json @@ -554,6 +554,15 @@ "authors": ["vitaliykhamin"] }, + { + "name": "laserdisc", + "language": "Scala", + "repository": "https://github.com/laserdisc-io/laserdisc", + "description": "Future free Fs2 native pure FP Redis client http://laserdisc.io", + "authors": ["JSirocchi", "barambani"], + "active": true + }, + { "name": "scala-redis", "language": "Scala", From 69b3ab2a8833acddd7a788a560bef8fd3d18c72e Mon Sep 17 00:00:00 2001 From: Santos Solorzano Date: Tue, 15 Oct 2019 16:09:50 -0700 Subject: [PATCH 0182/1457] Fix typo in rediscli topic (#1051) From a039a898af7dffd93087f5c43172e606774e57ef Mon Sep 17 00:00:00 2001 From: Mark Lavrynenko Date: Wed, 16 Oct 2019 02:15:25 +0300 Subject: [PATCH 0183/1457] Fix typo in streams doc (#1053) --- topics/streams-intro.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/streams-intro.md b/topics/streams-intro.md index f0ce798bf4..e48b363f46 100644 --- a/topics/streams-intro.md +++ b/topics/streams-intro.md @@ -406,7 +406,7 @@ We can ask for more info by giving more arguments to **XPENDING**, because the f XPENDING [ []] ``` -By providing a start and end ID (that can be just `-` and `+` as in **XRANGE**) and a count to control the amount of information returned by the command, we are able to know more about the pending messages. The optional final argument, the consumer group name, is used if we want to limit the output to just messages pending for a given consumer group, but we'll not use this feature in the following example. +By providing a start and end ID (that can be just `-` and `+` as in **XRANGE**) and a count to control the amount of information returned by the command, we are able to know more about the pending messages. The optional final argument, the consumer name, is used if we want to limit the output to just messages pending for a given consumer, but we'll not use this feature in the following example. ``` > XPENDING mystream mygroup - + 10 From 784c895db9c24e6ff2eec015e53165107b9fa14e Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Wed, 16 Oct 2019 02:15:35 +0300 Subject: [PATCH 0184/1457] Documents the REDISCLI_AUTH environment variable (#1052) See: https://github.com/antirez/redis/pull/5460 --- topics/rediscli.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/topics/rediscli.md b/topics/rediscli.md index 18a2d6b52c..06bca46caa 100644 --- a/topics/rediscli.md +++ b/topics/rediscli.md @@ -78,6 +78,9 @@ preform authentication saving the need of explicitly using the `AUTH` command: $ redis-cli -a myUnguessablePazzzzzword123 ping PONG +Alternatively, it is possible to provide the password to `redis-cli` via the +`REDISCLI_AUTH` environment variable. 
+ Finally, it's possible to send a command that operates on a database number other than the default number zero by using the `-n ` option: From 2c36cd827c351352665f22dd6ee99c84004050fd Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Wed, 16 Oct 2019 23:55:58 +0300 Subject: [PATCH 0185/1457] Documents the MODULE command(s) (#931) * Documents the MODULE command(s) * Update module-load.md --- commands.json | 36 ++++++++++++++++++++++++++++++++++++ commands/module-list.md | 10 ++++++++++ commands/module-load.md | 13 +++++++++++++ commands/module-unload.md | 13 +++++++++++++ 4 files changed, 72 insertions(+) create mode 100644 commands/module-list.md create mode 100644 commands/module-load.md create mode 100644 commands/module-unload.md diff --git a/commands.json b/commands.json index 7cd3a58fb8..fdac5a7a80 100644 --- a/commands.json +++ b/commands.json @@ -1733,6 +1733,42 @@ "since": "2.6.0", "group": "generic" }, + "MODULE LIST": { + "summary": "List all modules loaded by the server", + "complexity": "O(N) where N is the number of loaded modules.", + "since": "4.0.0", + "group": "server" + }, + "MODULE LOAD": { + "summary": "Load a module", + "complexity": "O(1)", + "arguments": [ + { + "name": "path", + "type": "string" + }, + { + "name": "arg", + "type": "string", + "variadic": true, + "optional": true + } + ], + "since": "4.0.0", + "group": "server" + }, + "MODULE UNLOAD": { + "summary": "Unload a module", + "complexity": "O(1)", + "arguments": [ + { + "name": "name", + "type": "string" + } + ], + "since": "4.0.0", + "group": "server" + }, "MONITOR": { "summary": "Listen for all requests received by the server in real time", "since": "1.0.0", diff --git a/commands/module-list.md b/commands/module-list.md new file mode 100644 index 0000000000..1bfa3e232b --- /dev/null +++ b/commands/module-list.md @@ -0,0 +1,10 @@ +Returns information about the modules loaded to the server. + +@return + +@array-reply: list of loaded modules. Each element in the list represents a +module, and is in itself a list of property names and their values. The +following properties is reported for each loaded module: + +* `name`: Name of the module +* `ver`: Version of the module diff --git a/commands/module-load.md b/commands/module-load.md new file mode 100644 index 0000000000..c5919c0077 --- /dev/null +++ b/commands/module-load.md @@ -0,0 +1,13 @@ +Loads a module from a dynamic library at runtime. + +This command loads and initializes the Redis module from the dynamic library +specified by the `path` argument. The `path` should be the absolute path of the +library, including the full filename. Any additional arguments are passed +unmodified to the module. + +**Note**: modules can also be loaded at server startup with 'loadmodule' +configuration directive in `redis.conf`. + +@return + +@simple-string-reply: `OK` if module was loaded. diff --git a/commands/module-unload.md b/commands/module-unload.md new file mode 100644 index 0000000000..84ebebf010 --- /dev/null +++ b/commands/module-unload.md @@ -0,0 +1,13 @@ +Unloads a module. + +This command unloads the module specified by `name`. Note that the module's name +is reported by the `MODULE LIST` command, and may differ from the dynamic +library's filename. + +Known limitations: + +* Modules that register custom data types can not be unloaded. + +@return + +@simple-string-reply: `OK` if module was unloaded. 
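A possible session exercising the three subcommands documented in the patch above; the library path and the reported module name and version are placeholders, since they depend entirely on the module being loaded:

```
> MODULE LOAD /path/to/mymodule.so
OK
> MODULE LIST
1) 1) "name"
   2) "mymodule"
   3) "ver"
   4) (integer) 1
> MODULE UNLOAD mymodule
OK
```

Note that `MODULE UNLOAD` takes the name reported by `MODULE LIST`, which may differ from the shared library's filename.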
From 3d3e40a8574e07ab7177203c536a4cc189c252b9 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Thu, 17 Oct 2019 00:03:00 +0300 Subject: [PATCH 0186/1457] Adds minimal references to SYNC and PSYNC (#1177) * Adds minimal references to SYNC and PSYNC * Update sync.md --- commands.json | 15 +++++++++++++++ commands/psync.md | 13 +++++++++++++ commands/sync.md | 13 ++++++++++++- 3 files changed, 40 insertions(+), 1 deletion(-) create mode 100644 commands/psync.md diff --git a/commands.json b/commands.json index fdac5a7a80..523b63ab6d 100644 --- a/commands.json +++ b/commands.json @@ -2741,6 +2741,21 @@ "since": "1.0.0", "group": "server" }, + "PSYNC": { + "summary": "Internal command used for replication", + "arguments": [ + { + "name": "replicationid", + "type": "integer" + }, + { + "name": "offset", + "type": "integer" + } + ], + "since": "2.8.0", + "group": "server" + }, "TIME": { "summary": "Return the current server time", "complexity": "O(1)", diff --git a/commands/psync.md b/commands/psync.md new file mode 100644 index 0000000000..8cbacf2fa6 --- /dev/null +++ b/commands/psync.md @@ -0,0 +1,13 @@ +Initiates a replication stream from the master. + +The `PSYNC` command is called by Redis replicas for initiating a replication +stream from the master. + +For more information about replication in Redis please check the +[replication page][tr]. + +[tr]: /topics/replication + +@return + +**Non standard return value**, a bulk transfer of the data followed by `PING` and write requests from the master. diff --git a/commands/sync.md b/commands/sync.md index e3159429b9..cb958479ca 100644 --- a/commands/sync.md +++ b/commands/sync.md @@ -1,3 +1,14 @@ -@examples +Initiates a replication stream from the master. + +The `SYNC` command is called by Redis replicas for initiating a replication +stream from the master. It has been replaced in newer versions of Redis by + `PSYNC`. + +For more information about replication in Redis please check the +[replication page][tr]. + +[tr]: /topics/replication @return + +**Non standard return value**, a bulk transfer of the data followed by `PING` and write requests from the master. From 79f4320cb96d4e394da46804ff310a886708cde3 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Thu, 17 Oct 2019 00:34:31 +0300 Subject: [PATCH 0187/1457] Clarifies the return of nil with XX or NX (#1042) --- commands/zadd.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands/zadd.md b/commands/zadd.md index 064673a6d9..e112a19118 100644 --- a/commands/zadd.md +++ b/commands/zadd.md @@ -58,12 +58,12 @@ If the user inserts all the elements in a sorted set with the same score (for ex @integer-reply, specifically: -* The number of elements added to the sorted sets, not including elements +* The number of elements added to the sorted set, not including elements already existing for which the score was updated. If the `INCR` option is specified, the return value will be @bulk-string-reply: -* the new score of `member` (a double precision floating point number), represented as string. +* The new score of `member` (a double precision floating point number) represented as string, or `nil` if the operation was aborted (when called with either the `XX` or the `NX` option). 
@history From 4e925b5a86e6b7e676ecdea9609e9f8823145f5a Mon Sep 17 00:00:00 2001 From: Aaron Schumacher Date: Wed, 16 Oct 2019 17:35:47 -0400 Subject: [PATCH 0188/1457] typo: "into" -> "in" (#1054) From e8a82090cf8a05f3b112b3cbd11c1a7cdb0fb5cd Mon Sep 17 00:00:00 2001 From: Miles Crawford Date: Wed, 16 Oct 2019 14:37:15 -0700 Subject: [PATCH 0189/1457] Specify behavior of flushall async on new keys (#1055) * Specify behavior of flushall with async option as regards subsequently inserted keys As suggested in https://github.com/antirez/redis-io/issues/163#issuecomment-447572197 * PR feedback fix. --- commands/flushall.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/commands/flushall.md b/commands/flushall.md index b31e0b51a8..fc7b597eed 100644 --- a/commands/flushall.md +++ b/commands/flushall.md @@ -10,6 +10,8 @@ keys in all existing databases. Redis is now able to delete keys in the background in a different thread without blocking the server. An `ASYNC` option was added to `FLUSHALL` and `FLUSHDB` in order to let the entire dataset or a single database to be freed asynchronously. +Asynchronous `FLUSHALL` and `FLUSHDB` commands only delete keys that were present at the time the command was invoked. Keys created during an asynchronous flush will be unaffected. + @return @simple-string-reply From 9206fe8fac06471d31725b69cc240e8eed55d3ce Mon Sep 17 00:00:00 2001 From: Simon Willison Date: Wed, 16 Oct 2019 14:41:30 -0700 Subject: [PATCH 0190/1457] Minor fixes to Streams documentation (#1057) --- topics/streams-intro.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/topics/streams-intro.md b/topics/streams-intro.md index e48b363f46..7ca8225ab6 100644 --- a/topics/streams-intro.md +++ b/topics/streams-intro.md @@ -614,20 +614,20 @@ One useful eviction strategy that **XTRIM** should have is probably the ability ## Special IDs in the streams API You may have noticed that there are several special IDs that can be -used in the Redis API. Here is a short recap, so that they can make more +used in the Redis streams API. Here is a short recap, so that they can make more sense in the future. -The first two special IDs are `-` and `+`, and are used in range queries with the `XRANGE` command. Those two IDs respectively means the smallest ID possible (that is basically `0-1`) and the greatest ID possible (that is `18446744073709551615-18446744073709551615`). As you can see it is a lot cleaner to write `-` and `+` instead of those numbers. +The first two special IDs are `-` and `+`, and are used in range queries with the `XRANGE` command. Those two IDs respectively mean the smallest ID possible (that is basically `0-1`) and the greatest ID possible (that is `18446744073709551615-18446744073709551615`). As you can see it is a lot cleaner to write `-` and `+` instead of those numbers. -Then there are APIs where we want to say, the ID of the item with the greatest ID inside the stream. This is what `$` means. So for instance if I want only new entires with `XREADGROUP` I use such ID to tell that I already have all the existing entries, but not the news that will be inserted in the future. Similarly when I create or set the ID of a consumer group, I can set the last delivered item to `$` in order to just deliver new entires to the consumers using the group. +Then there are APIs where we want to say, the ID of the item with the greatest ID inside the stream. This is what `$` means. 
So for instance if I want only new entries with `XREADGROUP` I use such ID to tell that I already have all the existing entries, but not the new ones that will be inserted in the future. Similarly when I create or set the ID of a consumer group, I can set the last delivered item to `$` in order to just deliver new entries to the consumers using the group. -As you can see `$` does not mean `+`, they are two different things, as `+` is the greatest ID possible in every possible stream, while `$` is the greatest ID in a given stream containing given entries. Moreover APIs will usually only understand `+` or `$`, yet it was useful to avoid loading a given symbol of multiple meanings. +As you can see `$` does not mean `+`, they are two different things, as `+` is the greatest ID possible in every possible stream, while `$` is the greatest ID in a given stream containing given entries. Moreover APIs will usually only understand `+` or `$`, yet it was useful to avoid loading a given symbol with multiple meanings. -Another special ID is `>`, that has a special meaning only in the context of consumer groups and only when the `XREADGROUP` command is used. Such special ID means that we want only entires that were never delivered to other consumers so far. So basically the `>` ID is the *last delivered ID* of a consumer group. +Another special ID is `>`, that is a special meaning only related to consumer groups and only when the `XREADGROUP` command is used. Such special ID means that we want only entries that were never delivered to other consumers so far. So basically the `>` ID is the *last delivered ID* of a consumer group. -Finally the special ID `*`, that can be used only with the `XADD` command, means to auto select an ID for us for the new entry that we are going to create. +Finally the special ID `*`, that can be used only with the `XADD` command, means to auto select an ID for us for the new entry. -So we have `-`, `+`, `$`, `>` and `*`, and all have a different meanings, and most of the times, can only be used in different contexts. +So we have `-`, `+`, `$`, `>` and `*`, and all have a different meaning, and most of the times, can be used in different contexts. ## Persistence, replication and message safety From 132cd34c852d87e733b20452fcfe3436256ec78b Mon Sep 17 00:00:00 2001 From: Sokolov Yura Date: Thu, 17 Oct 2019 00:51:51 +0300 Subject: [PATCH 0191/1457] Add link to 5.0 redis.conf (#1061) --- topics/config.md | 1 + 1 file changed, 1 insertion(+) diff --git a/topics/config.md b/topics/config.md index ba8e0e8479..d1fe0b68f3 100644 --- a/topics/config.md +++ b/topics/config.md @@ -26,6 +26,7 @@ The list of configuration directives, and their meaning and intended usage is available in the self documented example redis.conf shipped into the Redis distribution. +* The self documented [redis.conf for Redis 5.0](https://raw.githubusercontent.com/antirez/redis/5.0/redis.conf). * The self documented [redis.conf for Redis 4.0](https://raw.githubusercontent.com/antirez/redis/4.0/redis.conf). * The self documented [redis.conf for Redis 3.2](https://raw.githubusercontent.com/antirez/redis/3.2/redis.conf). * The self documented [redis.conf for Redis 3.0](https://raw.githubusercontent.com/antirez/redis/3.0/redis.conf). 
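To make the special-ID recap from the streams documentation patch above concrete, here is each ID in the command where it is typically used (the key, group, and consumer names are illustrative):

```
> XADD mystream * sensor-id 1234
> XRANGE mystream - +
> XGROUP CREATE mystream mygroup $
> XREADGROUP GROUP mygroup Alice COUNT 10 STREAMS mystream >
```

In order: `*` lets `XADD` auto-generate the entry ID, `-` and `+` select the full range of the stream, `$` positions the new group at the current last ID so only future entries are served, and `>` asks for entries never delivered to any consumer of the group.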
From f45e6ef726622086ca72883a508c8c01563f9c14 Mon Sep 17 00:00:00 2001 From: Sokolov Yura Date: Thu, 17 Oct 2019 00:52:03 +0300 Subject: [PATCH 0192/1457] Add Go's RedisPipe (#1060) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Add Go's RedisPipe Add link to RedisPipe - Go's client with implicit pipelining. Note: I've copied `"recommended": true` mark since there no rules about it, and I personally will recommend it :-) * Removes recommended There are always rules, even if unwritten and totally made up 😄As the property isn't maintained, I'd rather leave it unused until a better solution comes. --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index 863c7cd40a..94e2b5bae1 100644 --- a/clients.json +++ b/clients.json @@ -122,6 +122,15 @@ "recommended": true, "active": true }, + + { + "name": "RedisPipe", + "language": "Go", + "repository": "https://github.com/joomcode/redispipe", + "description": "RedisPipe is the high-throughput Go client with implicit pipelining and robust Cluster support.", + "authors": ["funny_falcon"], + "active": true + }, { "name": "Tideland Go Redis Client", From 554f8457d34d2727df6224b762510f9348b45ab5 Mon Sep 17 00:00:00 2001 From: Alex Offshore Date: Thu, 17 Oct 2019 00:53:38 +0300 Subject: [PATCH 0193/1457] Minor typo fix (#1062) --- topics/streams-intro.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/streams-intro.md b/topics/streams-intro.md index 7ca8225ab6..513cc3c742 100644 --- a/topics/streams-intro.md +++ b/topics/streams-intro.md @@ -671,7 +671,7 @@ The reason why such an asymmetry exists is because Streams may have associated c ## Total latency of consuming a message -Non blocking stream commands like XRANGE and XREAD or XREADGROUP without the BLOCK option are server synchronously like any other Redis command, so to discuss latency of such commands is meaningless: more interesting is to check the time complexity of the commands in the Redis documentation. It should be enough to say that stream commands are at least as fast as sorted set commands when extracting ranges, and that `XADD` is very fast and can easily insert from half million to one million of items per second in an average machine if pipelining is used. +Non blocking stream commands like XRANGE and XREAD or XREADGROUP without the BLOCK option are served synchronously like any other Redis command, so to discuss latency of such commands is meaningless: more interesting is to check the time complexity of the commands in the Redis documentation. It should be enough to say that stream commands are at least as fast as sorted set commands when extracting ranges, and that `XADD` is very fast and can easily insert from half million to one million of items per second in an average machine if pipelining is used. However latency becomes an interesting parameter if we want to understand the delay of processing the message, in the context of blocking consumers in a consumer group, from the moment the message is produced via `XADD`, to the moment the message is obtained by the consumer because `XREADGROUP` returned with the message. 
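One way to sanity-check the `XADD` throughput figure mentioned in the latency discussion above is `redis-benchmark`, which can exercise an arbitrary command; the pipeline depth of 16 and the request count below are arbitrary choices, and the resulting numbers will of course depend on the hardware:

```
$ redis-benchmark -P 16 -n 1000000 XADD mystream '*' field value
```

The `'*'` is quoted so the shell does not expand it; Redis receives it as the usual auto-ID placeholder.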
From d3dbcf3e29bcef9d77706be29ebfd08459faaeb1 Mon Sep 17 00:00:00 2001 From: Josh Leverette Date: Wed, 16 Oct 2019 17:55:01 -0400 Subject: [PATCH 0194/1457] Address typo / legibility issues (#1065) * Update commands.json * Update xread.md --- commands.json | 4 ++-- commands/xread.md | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/commands.json b/commands.json index 523b63ab6d..a03bc5817d 100644 --- a/commands.json +++ b/commands.json @@ -3642,7 +3642,7 @@ }, "XREAD": { "summary": "Return never seen elements in multiple streams, with IDs greater than the ones reported by the caller for each stream. Can block.", - "complexity": "For each stream mentioned: O(N) with N being the number of elements being returned, it menas that XREAD-ing with a fixed COUNT is O(1). Note that when the BLOCK option is used, XADD will pay O(M) time in order to serve the M clients blocked on the stream getting new data.", + "complexity": "For each stream mentioned: O(N) with N being the number of elements being returned, it means that XREAD-ing with a fixed COUNT is O(1). Note that when the BLOCK option is used, XADD will pay O(M) time in order to serve the M clients blocked on the stream getting new data.", "arguments": [ { "command": "COUNT", @@ -3667,7 +3667,7 @@ "multiple": true }, { - "name": "ID", + "name": "id", "type": "string", "multiple": true } diff --git a/commands/xread.md b/commands/xread.md index 87a2b80d4d..484ca3871f 100644 --- a/commands/xread.md +++ b/commands/xread.md @@ -128,7 +128,7 @@ the command is able to block if it could not return any data, according to the specified streams and IDs, and automatically unblock once one of the requested keys accept data. -It is important to understand that this command is *fans out* to all the +It is important to understand that this command *fans out* to all the clients that are waiting for the same range of IDs, so every consumer will get a copy of the data, unlike to what happens when blocking list pop operations are used. From ca2583276a2665b7e78ff0b763fdf485fd9bda73 Mon Sep 17 00:00:00 2001 From: Alexander Bird Date: Wed, 16 Oct 2019 16:24:50 -0600 Subject: [PATCH 0195/1457] update msetnx example to make it obvious that no partial updates are made (#1068) --- commands/msetnx.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/msetnx.md b/commands/msetnx.md index 138450f655..2c75bd5162 100644 --- a/commands/msetnx.md +++ b/commands/msetnx.md @@ -21,6 +21,6 @@ others are unchanged. ```cli MSETNX key1 "Hello" key2 "there" -MSETNX key2 "there" key3 "world" +MSETNX key2 "new" key3 "world" MGET key1 key2 key3 ``` From 4ecfd6e64417886cae0f2a81b7e0be21e7a418d8 Mon Sep 17 00:00:00 2001 From: Matthew Peterson Date: Wed, 16 Oct 2019 17:27:34 -0500 Subject: [PATCH 0196/1457] Fixed confusing description of the select command (#1071) --- commands/select.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/select.md b/commands/select.md index 653f684fa2..9ebc04e969 100644 --- a/commands/select.md +++ b/commands/select.md @@ -5,7 +5,7 @@ Selectable Redis databases are a form of namespacing: all databases are still pe In practical terms, Redis databases should be used to separate different keys belonging to the same application (if needed), and not to use a single Redis instance for multiple unrelated applications. -When using Redis Cluster, the `SELECT` command cannot be used, since Redis Cluster only supports database zero. 
In the case of Redis Cluster, having multiple databases would be useless, and a worthless source of complexity, because anyway commands operating atomically on a single database would not be possible with the Redis Cluster design and goals. +When using Redis Cluster, the `SELECT` command cannot be used, since Redis Cluster only supports database zero. In the case of a Redis Cluster, having multiple databases would be useless and an unnecessary source of complexity. Commands operating atomically on a single database would not be possible with the Redis Cluster design and goals. Since the currently selected database is a property of the connection, clients should track the currently selected database and re-select it on reconnection. While there is no command in order to query the selected database in the current connection, the `CLIENT LIST` output shows, for each client, the currently selected database. From 960802ce5256b9df2faef121b3a7eae0b83a182c Mon Sep 17 00:00:00 2001 From: appkins Date: Wed, 16 Oct 2019 17:31:07 -0500 Subject: [PATCH 0197/1457] Cpp Redis Repository (#1073) Simon is no longer maintaining cpp_redis. I am maintaining and providing updates for Redis 5 in [this fork](https://github.com/cpp-redis/cpp_redis). For verification, please view the README at [https://github.com/Cylix/cpp_redis](https://github.com/Cylix/cpp_redis) with a link to my fork. --- clients.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 94e2b5bae1..a0ef491d12 100644 --- a/clients.json +++ b/clients.json @@ -1391,7 +1391,7 @@ { "name": "cpp_redis", "language": "C++", - "repository": "https://github.com/cylix/cpp_redis", + "repository": "https://github.com/cpp-redis/cpp_redis", "description": "C++11 Lightweight Redis client: async, thread-safe, no dependency, pipelining, multi-platform.", "authors": ["simon_ninon"], "active": true From 81608a0d018429d9810515e972e06723d1f4b13c Mon Sep 17 00:00:00 2001 From: Gerard van Helden Date: Thu, 17 Oct 2019 00:32:43 +0200 Subject: [PATCH 0198/1457] add `drm/java-redis-client` (#1072) * add `drm/java-redis-client` * Update clients.json * Update clients.json Removes the recommended property. Auther property needs to be a Twitter handle, not GH. --- clients.json | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/clients.json b/clients.json index a0ef491d12..ebc884bb8f 100644 --- a/clients.json +++ b/clients.json @@ -214,7 +214,14 @@ "recommended": true, "active": true }, - + { + "name": "java-redis-client", + "language": "Java", + "repository": "https://github.com/drm/java-redis-client", + "description": "A very simple yet very complete java client in less than 200 lines with 0 dependencies.", + "authors": [], + "active": true + }, { "name": "Jedipus", "language": "Java", From a28a2db094d3126a421afea5b345ebb811398072 Mon Sep 17 00:00:00 2001 From: Grygorii Iermolenko Date: Thu, 17 Oct 2019 01:38:44 +0300 Subject: [PATCH 0199/1457] Update streams-intro.md (#1076) fixes two typos and one markdown update (Bold nested inside italics was rendered incorrectly). 
--- topics/streams-intro.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/streams-intro.md b/topics/streams-intro.md index 513cc3c742..776055a17d 100644 --- a/topics/streams-intro.md +++ b/topics/streams-intro.md @@ -135,7 +135,7 @@ Note that the **XREVRANGE** command takes the *start* and *stop* arguments in re When we do not want to access items by a range in a stream, usually what we want instead is to *subscribe* to new items arriving to the stream. This concept may appear related to Redis Pub/Sub, where you subscribe to a channel, or to Redis blocking lists, where you wait for a key to get new elements to fetch, but there are fundamental differences in the way you consume a stream: 1. A stream can have multiple clients (consumers) waiting for data. Every new item, by default, will be delivered to *every consumer* that is waiting for data in a given stream. This behavior is different than blocking lists, where each consumer will get a different element. However, the ability to *fan out* to multiple consumers is similar to Pub/Sub. -2. While in Pub/Sub messages are *fire and forget* and are never stored anyway, and while when using blocking lists, when a message is received by the client it is *popped* (effectively removed) form the list, streams work in a fundamentally different way. All the messages are appended in the stream indefinitely (unless the user explicitly asks to delete entries): different consumers will know what is a new message from its point of view by remembering the ID of the last message received. +2. While in Pub/Sub messages are *fire and forget* and are never stored anyway, and while when using blocking lists, when a message is received by the client it is *popped* (effectively removed) from the list, streams work in a fundamentally different way. All the messages are appended in the stream indefinitely (unless the user explicitly asks to delete entries): different consumers will know what is a new message from its point of view by remembering the ID of the last message received. 3. Streams Consumer Groups provide a level of control that Pub/Sub or blocking lists cannot achieve, with different groups for the same stream, explicit acknowledge of processed items, ability to inspect the pending items, claiming of unprocessed messages, and coherent history visibility for each single client, that is only able to see its private past history of messages. The command that provides the ability to listen for new messages arriving into a stream is called **XREAD**. It's a bit more complex than **XRANGE**, so we'll start showing simple forms, and later the whole command layout will be provided. @@ -231,7 +231,7 @@ Assuming I have a key `mystream` of type stream already existing, in order to cr OK ``` -Note: *Currently it is not possible to create consumer groups for non-existing streams, however it is possible that in the short future we'll add an option to the **XGROUP** command in order to create an empty stream in such cases.* +Note: _Currently it is not possible to create consumer groups for non-existing streams, however it is possible that in the short future we'll add an option to the **XGROUP** command in order to create an empty stream in such cases._ As you can see in the command above when creating the consumer group we have to specify an ID, which in the example is just `$`. 
This is needed because the consumer group, among the other states, must have an idea about what message to serve next at the first consumer connecting, that is, what is the current *last message ID* when the group was just created? If we provide `$` as we did, then only new messages arriving in the stream from now on will be provided to the consumers in the group. If we specify `0` instead the consumer group will consume *all* the messages in the stream history to start with. Of course, you can specify any other valid ID. What you know is that the consumer group will start delivering messages that are greater than the ID you specify. Because `$` means the current greatest ID in the stream, specifying `$` will have the effect of consuming only new messages. From 96346c780f74b58af37b4f4a14405e370ae62a8b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E5=AD=90=E9=AA=85?= Date: Thu, 17 Oct 2019 06:39:50 +0800 Subject: [PATCH 0200/1457] Add introduction to ioredis (#1077) --- topics/cluster-tutorial.md | 1 + 1 file changed, 1 insertion(+) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index 9e6b27ba37..c2b1c6b8b0 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -379,6 +379,7 @@ I'm aware of the following implementations: * [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) offers support for C# (and should work fine with most .NET languages; VB, F#, etc) * [thunk-redis](https://github.com/thunks/thunk-redis) offers support for Node.js and io.js, it is a thunk/promise-based redis client with pipelining and cluster. * [redis-go-cluster](https://github.com/chasex/redis-go-cluster) is an implementation of Redis Cluster for the Go language using the [Redigo library client](https://github.com/garyburd/redigo) as the base client. Implements MGET/MSET via result aggregation. +* [ioredis](https://github.com/luin/ioredis) is a popular Node.js client, providing a robust support for Redis Cluster. * The `redis-cli` utility in the unstable branch of the Redis repository at GitHub implements a very basic cluster support when started with the `-c` switch. An easy way to test Redis Cluster is either to try any of the above clients From db3b7f458a7f43899ad70e44acb7a74c396769bc Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Thu, 17 Oct 2019 01:51:17 +0300 Subject: [PATCH 0201/1457] Update cluster-tutorial.md (#1183) --- topics/cluster-tutorial.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index c2b1c6b8b0..120a4dbd62 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -380,7 +380,7 @@ I'm aware of the following implementations: * [thunk-redis](https://github.com/thunks/thunk-redis) offers support for Node.js and io.js, it is a thunk/promise-based redis client with pipelining and cluster. * [redis-go-cluster](https://github.com/chasex/redis-go-cluster) is an implementation of Redis Cluster for the Go language using the [Redigo library client](https://github.com/garyburd/redigo) as the base client. Implements MGET/MSET via result aggregation. * [ioredis](https://github.com/luin/ioredis) is a popular Node.js client, providing a robust support for Redis Cluster. -* The `redis-cli` utility in the unstable branch of the Redis repository at GitHub implements a very basic cluster support when started with the `-c` switch. 
+* The `redis-cli` utility implements basic cluster support when started with the `-c` switch. An easy way to test Redis Cluster is either to try any of the above clients or simply the `redis-cli` command line utility. The following is an example From ffd28e985ac66807596318c72aa8e6bbd0a288ac Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E6=9D=8E=E6=96=87=E6=9D=B0?= Date: Thu, 17 Oct 2019 06:55:43 +0800 Subject: [PATCH 0202/1457] fix typo (#1081) --- topics/cluster-tutorial.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index 120a4dbd62..a6e21893ed 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -691,7 +691,7 @@ In order to trigger the failover, the simplest thing we can do (that is also the semantically simplest failure that can occur in a distributed system) is to crash a single process, in our case a single master. -We can identify a cluster and crash it with the following command: +We can identify a master and crash it with the following command: ``` $ redis-cli -p 7000 cluster nodes | grep master From 5341a3785ec4dd3cc0302cca94918ba2ff668a24 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Thu, 17 Oct 2019 01:58:34 +0300 Subject: [PATCH 0203/1457] Fixes a couple of typos (#1184) --- topics/replication.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/replication.md b/topics/replication.md index 18e83d4b40..e3705eb0d2 100644 --- a/topics/replication.md +++ b/topics/replication.md @@ -98,7 +98,7 @@ Replication ID explained In the previous section we said that if two instances have the same replication ID and replication offset, they have exactly the same data. However it is useful -to understand what exctly is the replication ID, and why instances have actually +to understand what exactly is the replication ID, and why instances have actually two replication IDs the main ID and the secondary ID. A replication ID basically marks a given *history* of the data set. Every time @@ -122,7 +122,7 @@ was the one of the former master. In this way, when other slaves will synchroniz with the new master, they will try to perform a partial resynchronization using the old master replication ID. This will work as expected, because when the slave is promoted to master it sets its secondary ID to its main ID, remembering what -was the offset when this ID switch happend. Later it will select a new random +was the offset when this ID switch happened. Later it will select a new random replication ID, because a new history begins. When handling the new slaves connecting, the master will match their IDs and offsets both with the current ID and the secondary ID (up to a given offset, for safety). In short this means @@ -316,5 +316,5 @@ Moreover slaves when powered off gently and restarted, are able to store in the This is useful in case of upgrades. When this is needed, it is better to use the `SHUTDOWN` command in order to perform a `save & quit` operation on the slave. -It is not possilbe to partially resynchronize a slave that restarted via the AOF file. However the instance may be turned to RDB persistence before shutting down it, than can be restarted, and finally AOF can be enabled again. +It is not possible to partially resynchronize a slave that restarted via the AOF file. However the instance may be turned to RDB persistence before shutting down it, than can be restarted, and finally AOF can be enabled again. 
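The main and secondary replication IDs discussed in the replication patch above can be inspected at runtime with `INFO`. A trimmed example of the relevant section follows; the ID values and offsets are illustrative:

```
> INFO replication
# Replication
role:master
connected_slaves:1
master_replid:3d3e40a85ed744077dd0ea2ed65d2a1e0f8b9d5c
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:26
second_repl_offset:-1
```

On a replica promoted to master, `master_replid2` would hold the old master's ID and `second_repl_offset` the offset at which the switch happened, which is exactly the state that makes partial resynchronization of the other replicas possible.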
From bedbdd6c451bc60addd8045a1063fddd37c5b86b Mon Sep 17 00:00:00 2001 From: Samuel Colvin Date: Thu, 17 Oct 2019 00:04:29 +0100 Subject: [PATCH 0204/1457] fix typo in topics/transactions.md (#1090) --- topics/transactions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/transactions.md b/topics/transactions.md index 2da9b4e2fd..a624019585 100644 --- a/topics/transactions.md +++ b/topics/transactions.md @@ -245,7 +245,7 @@ usually the script will be both simpler and faster. This duplication is due to the fact that scripting was introduced in Redis 2.6 while transactions already existed long before. However we are unlikely to -remove the support for transactions in the short time because it seems +remove the support for transactions in the short-term because it seems semantically opportune that even without resorting to Redis scripting it is still possible to avoid race conditions, especially since the implementation complexity of Redis transactions is minimal. From 0438791682c687316144eedb1ad6b82eff43a2f8 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Thu, 17 Oct 2019 15:11:36 +0300 Subject: [PATCH 0205/1457] Rebases for merge --- topics/memory-optimization.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/topics/memory-optimization.md b/topics/memory-optimization.md index a1ab206240..85e7c460f1 100644 --- a/topics/memory-optimization.md +++ b/topics/memory-optimization.md @@ -207,12 +207,12 @@ again, you'll see the RSS (Resident Set Size) stay steady and not grow more, as you add up to 2GB of additional keys. The allocator is basically trying to reuse the 2GB of memory previously (logically) freed. * Because of all this, the fragmentation ratio is not reliable when you -have a memory usage that at peak is much larger than the currently used memory. -The fragmentation is calculated as the amount of memory currently in use -(as the sum of all the allocations performed by Redis) divided by the physical -memory actually used (the RSS value). Because the RSS reflects the peak memory, +had a memory usage that at peak is much larger than the currently used memory. +The fragmentation is calculated as the physical memory actually used (the RSS +value) divided by the amount of memory currently in use (as the sum of all +the allocations performed by Redis). Because the RSS reflects the peak memory, when the (virtually) used memory is low since a lot of keys / values were -freed, but the RSS is high, the ratio `mem_used / RSS` will be very high. +freed, but the RSS is high, the ratio `RSS / mem_used` will be very high. If `maxmemory` is not set Redis will keep allocating memory as it finds fit and thus it can (gradually) eat up all your free memory. 
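The corrected formula in the memory-optimization patch above is easy to verify against `INFO memory`; the values below are illustrative and rounded:

```
> INFO memory
# Memory
used_memory:1000000000
used_memory_rss:1200000000
mem_fragmentation_ratio:1.20
```

Here `mem_fragmentation_ratio` is `used_memory_rss / used_memory` = 1200000000 / 1000000000 = 1.20, and after freeing many keys the same instance could report a much higher ratio, since the RSS keeps reflecting the peak.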
From 32ef1f4a5a02991004b28e370d8d70f3478a9c91 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E6=9D=8E=E6=96=87=E6=9D=B0?=
Date: Thu, 17 Oct 2019 20:15:48 +0800
Subject: [PATCH 0206/1457] fix doc about fragmentation ratio (#1091)

From eb80b278fabb3e5c93d23fc4fee11a2afa9e4902 Mon Sep 17 00:00:00 2001
From: Guy Korland
Date: Thu, 17 Oct 2019 16:26:47 +0300
Subject: [PATCH 0207/1457] XREADGROUP ignores BLOCK and NOACK (#1185)

* XREADGROUP ignores BLOCK and NOACK

ref #1029

* Update xreadgroup.md

* Connected the note to the bullet

* Additional clarifications
---
 commands/xreadgroup.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/commands/xreadgroup.md b/commands/xreadgroup.md
index a534c96265..78e7284d53 100644
--- a/commands/xreadgroup.md
+++ b/commands/xreadgroup.md
@@ -53,7 +53,7 @@ The ID to specify in the **STREAMS** option when using `XREADGROUP` can
 be one of the following two:

 * The special `>` ID, which means that the consumer want to receive only messages that were *never delivered to any other consumer*. It just means, give me new messages.
-* Any other ID, that is, 0 or any other valid ID or incomplete ID (just the millisecond time part), will have the effect of returning entries that are pending for the consumer sending the command. So basically if the ID is not `>`, then the command will just let the client access its pending entries: delivered to it, but not yet acknowledged.
+* Any other ID, that is, 0 or any other valid ID or incomplete ID (just the millisecond time part), will have the effect of returning entries that are pending for the consumer sending the command, with IDs equal to or greater than the one provided. So basically if the ID is not `>`, then the command will just let the client access its pending entries: messages delivered to it, but not yet acknowledged. Note that in this case, both `BLOCK` and `NOACK` are ignored.

 Like `XREAD` the `XREADGROUP` command can be used in a blocking way. There
 are no differences in this regard.

From 8f6c9fd82c3f425f5643005c84e95021bab16456 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E6=9D=8E=E6=96=87=E6=9D=B0?=
Date: Thu, 17 Oct 2019 23:25:19 +0800
Subject: [PATCH 0208/1457] fix memory stats (#1094)

---
 commands/memory-stats.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/commands/memory-stats.md b/commands/memory-stats.md
index 54d1728d0c..4de15e7e6b 100644
--- a/commands/memory-stats.md
+++ b/commands/memory-stats.md
@@ -5,13 +5,13 @@ The information about memory usage is provided as metrics and their respective
 values. The following metrics are reported:

 * `peak.allocated`: Peak memory consumed by Redis in bytes (see `INFO`'s
-  `used_memory`)
+  `used_memory_peak`)
 * `total.allocated`: Total number of bytes allocated by Redis using its
   allocator (see `INFO`'s `used_memory`)
 * `startup.allocated`: Initial amount of memory consumed by Redis at startup
   in bytes (see `INFO`'s `used_memory_startup`)
 * `replication.backlog`: Size in bytes of the replication backlog (see
-  `INFO`'s `repl_backlog_size`)
+  `INFO`'s `repl_backlog_active`)
 * `clients.slaves`: The total size in bytes of all replicas overheads (output
   and query buffers, connection contexts)
 * `clients.normal`: The total size in bytes of all clients overheads (output

From 69208c8e29778f9053fea9a3be8af710d1fabbef Mon Sep 17 00:00:00 2001
From: Simon Prickett
Date: Thu, 17 Oct 2019 09:43:35 -0700
Subject: [PATCH 0209/1457] Fixed typos, made grammar improvements.
(#1095)

---
 topics/streams-intro.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/topics/streams-intro.md b/topics/streams-intro.md
index 776055a17d..ddcc0f36fe 100644
--- a/topics/streams-intro.md
+++ b/topics/streams-intro.md
@@ -385,7 +385,7 @@ Once the history was consumed, and we get an empty list of messages, we can swit

 The example above allows us to write consumers that participate to the same consumer group, taking each a subset of messages to process, and recovering from failures re-reading the pending messages that were delivered just to them. However in the real world consumers may permanently fail and never recover. What happens to the pending messages of the consumer that never recovers after stopping for any reason?

-Redis consumer groups offer a feature that is used exactly in this situations in order to *claim* the pending messages of a given consumer so that such messages will change ownership and will be re-assigned to a different consumer. The feature is very explicit, a consumer has to inspect the list of pending messages, and will have to claim specific messages using a special command, otherwise the server will take the messages pending forever assigned to the old consumer, in this way different applications can choose if to use such a feature or not, and exactly the way to use it.
+Redis consumer groups offer a feature that is used in these situations in order to *claim* the pending messages of a given consumer so that such messages will change ownership and will be re-assigned to a different consumer. The feature is very explicit: a consumer has to inspect the list of pending messages, and will have to claim specific messages using a special command; otherwise the server will leave the messages pending forever, assigned to the old consumer. In this way different applications can choose whether to use such a feature or not, and exactly how to use it.

 The first step of this process is just a command that provides observability of pending entries in the consumer group and is called **XPENDING**. This is just a read-only command which is always safe to call and will not change ownership of any message. In its simplest form, the command is just called with two arguments, which are the name of the stream and the name of the consumer group.

@@ -535,7 +535,7 @@ The output of the example above, where the **GROUPS** subcommand is used, should
 6) (integer) 83841983
 ```

-In case you do not remember the syntax of the command, just ask for help to the command itself:
+In case you do not remember the syntax of the command, just ask the command itself for help:

 ```
 > XINFO HELP
@@ -637,9 +637,9 @@ However note that Redis streams and consumer groups are persisted and replicated

 * AOF must be used with a strong fsync policy if persistence of messages is important in your application.
 * By default the asynchronous replication will not guarantee that **XADD** commands or consumer groups state changes are replicated: after a failover something can be missing depending on the ability of slaves to receive the data from the master.
-* The **WAIT** command may be used in order to force the propagation of the changes to a set of slaves. However note that while this makes very unlikely that data is lost, the Redis failover process as operated by Sentinel or Redis Cluster performs only a *best effort* check to failover to the slave which is the most updated, and under certain specific failures may promote a slave that lacks some data.
+* The **WAIT** command may be used in order to force the propagation of the changes to a set of slaves. However note that while this makes it very unlikely that data is lost, the Redis failover process as operated by Sentinel or Redis Cluster performs only a *best effort* check to failover to the slave which is the most updated, and under certain specific failures may promote a slave that lacks some data.

-So when designing application using Redis streams and consumer groups, make sure to understand the semantical properties your application should have during failures, and configure things accordingly, evaluating if it is safe enough for your use case.
+So when designing an application using Redis streams and consumer groups, make sure to understand the semantical properties your application should have during failures, and configure things accordingly, evaluating if it is safe enough for your use case.

 ## Removing single items from a stream

@@ -671,7 +671,7 @@ The reason why such an asymmetry exists is because Streams may have associated c

 ## Total latency of consuming a message

-Non blocking stream commands like XRANGE and XREAD or XREADGROUP without the BLOCK option are served synchronously like any other Redis command, so to discuss latency of such commands is meaningless: more interesting is to check the time complexity of the commands in the Redis documentation. It should be enough to say that stream commands are at least as fast as sorted set commands when extracting ranges, and that `XADD` is very fast and can easily insert from half million to one million of items per second in an average machine if pipelining is used.
+Non blocking stream commands like XRANGE and XREAD or XREADGROUP without the BLOCK option are served synchronously like any other Redis command, so to discuss latency of such commands is meaningless: it is more interesting to check the time complexity of the commands in the Redis documentation. It should be enough to say that stream commands are at least as fast as sorted set commands when extracting ranges, and that `XADD` is very fast and can easily insert from half a million to one million items per second on an average machine if pipelining is used.

 However latency becomes an interesting parameter if we want to understand the delay of processing the message, in the context of blocking consumers in a consumer group, from the moment the message is produced via `XADD`, to the moment the message is obtained by the consumer because `XREADGROUP` returned with the message.

@@ -706,7 +706,7 @@ Processed between 4 and 5 ms -> 0.02%

 So 99.9% of requests have a latency <= 2 milliseconds, with the outliers that remain still very close to the average.

-Adding a few millions of not acknowledged messages in the stream does not change the gist of the benchmark, with most queries still processed with very short latency.
+Adding a few million unacknowledged messages to the stream does not change the gist of the benchmark, with most queries still processed with very short latency.
A few remarks: From d02273bec0a800ed3e27172325b92b4b0e4dbea9 Mon Sep 17 00:00:00 2001 From: xmonader Date: Thu, 17 Oct 2019 19:13:59 +0200 Subject: [PATCH 0210/1457] Add nim-redisclient (#1102) --- clients.json | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/clients.json b/clients.json index ebc884bb8f..af25da1f69 100644 --- a/clients.json +++ b/clients.json @@ -1045,7 +1045,16 @@ "authors": [], "active": true }, - + + { + "name": "redisclient", + "language": "Nim", + "repository": "https://github.com/xmonader/nim-redisclient", + "description": "Redis client for Nim", + "authors": ["xmonader"], + "active": true + }, + { "name": "libvmod-redis", "language": "VCL", From 8412824104571db6b3918ed918eb3ad28a4632c0 Mon Sep 17 00:00:00 2001 From: ddbilik Date: Thu, 17 Oct 2019 19:14:08 +0200 Subject: [PATCH 0211/1457] Update clients.json. (#1101) * Update clients.json. Mention UniRedis for Swift. * Update clients.json Author should be a Twitter handle --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index af25da1f69..0bcbbc6944 100644 --- a/clients.json +++ b/clients.json @@ -1368,6 +1368,15 @@ "active": true }, + { + "name": "UniRedis", + "language": "Swift", + "repository": "https://github.com/seznam/swift-uniredis", + "description": "Redis client for Swift on macOS and Linux, capable of pipelining and transactions, with transparent support for authentication and sentinel.", + "authors": [], + "active": true + }, + { "name": "Rackdis", "language": "Racket", From ff0370bd6aafb9bbd7684c056d70cfd392f4e541 Mon Sep 17 00:00:00 2001 From: Quinn Diggity Date: Thu, 17 Oct 2019 10:15:02 -0700 Subject: [PATCH 0212/1457] fixes typo (#1103) From 39dc05ae808180c40edaac79f6fb00d3d0382581 Mon Sep 17 00:00:00 2001 From: stdupanda <865480187@qq.com> Date: Fri, 18 Oct 2019 01:15:39 +0800 Subject: [PATCH 0213/1457] fix typo in topics/replication (#1104) From 1ce89931238b8f47f53c1d54e29cb1e24ccc44eb Mon Sep 17 00:00:00 2001 From: Henry Date: Fri, 18 Oct 2019 01:16:12 +0800 Subject: [PATCH 0214/1457] add c# client BeetleX.Redis (#1099) * add c# client BeetleX.Redis * add repository url * change authors * Update clients.json * Update clients.json --- clients.json | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/clients.json b/clients.json index 0bcbbc6944..0f73e68157 100644 --- a/clients.json +++ b/clients.json @@ -1619,6 +1619,16 @@ "active": true }, + { + "name": "BeetleX.Redis", + "language": "C#", + "url": "https://github.com/IKende/BeetleX.Redis", + "repository": "https://github.com/IKende/BeetleX.Redis", + "description": "A high-performance async/non-blocking redis client components for dotnet core, default support json and protobuf data format", + "authors": [], + "active": true + }, + { "name": "wiredis", "language": "C++", From e885ac6e628a9e88875c656b66eeca00463c0c35 Mon Sep 17 00:00:00 2001 From: cherrydev Date: Thu, 17 Oct 2019 12:24:29 -0700 Subject: [PATCH 0215/1457] Update example of bit level operations (#1109) Existing example used sex of a user as being a binary male/female option. I believe this example has become outdated since it was written and can be better explained using a neutral example that is undeniably binary, such as a subscription to a mailing list. 
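Before the diff itself, a short sketch of the replacement example this commit proposes (the key name `newsletter:subscribed` and the user ID 4000 are hypothetical): the bit index is simply the user's progressive ID, and `SETBIT` returns the previous value of the bit, so subscribing, checking and unsubscribing a user looks like this:

```
> SETBIT newsletter:subscribed 4000 1
(integer) 0
> GETBIT newsletter:subscribed 4000
(integer) 1
> SETBIT newsletter:subscribed 4000 0
(integer) 1
> BITCOUNT newsletter:subscribed
(integer) 0
```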
--- topics/memory-optimization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/memory-optimization.md b/topics/memory-optimization.md index 85e7c460f1..eadf1bfc55 100644 --- a/topics/memory-optimization.md +++ b/topics/memory-optimization.md @@ -26,7 +26,7 @@ Redis compiled with 32 bit target uses a lot less memory per key, since pointers Bit and byte level operations ----------------------------- -Redis 2.2 introduced new bit and byte level operations: `GETRANGE`, `SETRANGE`, `GETBIT` and `SETBIT`. Using these commands you can treat the Redis string type as a random access array. For instance if you have an application where users are identified by a unique progressive integer number, you can use a bitmap in order to save information about the sex of users, setting the bit for females and clearing it for males, or the other way around. With 100 million users this data will take just 12 megabytes of RAM in a Redis instance. You can do the same using `GETRANGE` and `SETRANGE` in order to store one byte of information for each user. This is just an example but it is actually possible to model a number of problems in very little space with these new primitives. +Redis 2.2 introduced new bit and byte level operations: `GETRANGE`, `SETRANGE`, `GETBIT` and `SETBIT`. Using these commands you can treat the Redis string type as a random access array. For instance if you have an application where users are identified by a unique progressive integer number, you can use a bitmap in order to save information about the subscription of users in a mailing list, setting the bit for subscribed and clearing it for unsubscribed, or the other way around. With 100 million users this data will take just 12 megabytes of RAM in a Redis instance. You can do the same using `GETRANGE` and `SETRANGE` in order to store one byte of information for each user. This is just an example but it is actually possible to model a number of problems in very little space with these new primitives. Use hashes when possible ------------------------ From cd8201054a5deceff47d04c9dfe0aed83c3b66e0 Mon Sep 17 00:00:00 2001 From: boyang9602 <30889920+boyang9602@users.noreply.github.com> Date: Thu, 17 Oct 2019 15:24:46 -0400 Subject: [PATCH 0216/1457] =?UTF-8?q?Remove=20=E2=80=9Cabc=E2=80=9D=20in?= =?UTF-8?q?=20the=20example=20of=20transactions=20(#1106)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Remove “abc” in the example of transactions * Update transactions.md Changes 3 to abc to clarify it is a string value and not a mysterious count or something. --- topics/transactions.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/topics/transactions.md b/topics/transactions.md index a624019585..2bbc8e6bba 100644 --- a/topics/transactions.md +++ b/topics/transactions.md @@ -85,8 +85,7 @@ command will fail when executed even if the syntax is right: Escape character is '^]'. MULTI +OK - SET a 3 - abc + SET a abc +QUEUED LPOP a +QUEUED From 831f7fec81a7f0fd8d2e046569472db8a3146688 Mon Sep 17 00:00:00 2001 From: Nicholas Fitton Date: Thu, 17 Oct 2019 20:34:08 +0100 Subject: [PATCH 0217/1457] Update LLOOGG (#926) Previous link directs to a redirect page to the open-source github repo, suggest cutting out the middle man and directing to the github repo instead. 
--- topics/faq.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/faq.md b/topics/faq.md index 3dc209cec7..589e3b07f0 100644 --- a/topics/faq.md +++ b/topics/faq.md @@ -170,7 +170,7 @@ It means REmote DIctionary Server. Originally Redis was started in order to scale [LLOOGG][lloogg]. But after I got the basic server working I liked the idea to share the work with other people, and Redis was turned into an open source project. -[lloogg]: http://lloogg.com +[lloogg]: https://github.com/antirez/lloogg ## How is Redis pronounced? From 03b337dc9e32d00d2b7214881089184c565510e2 Mon Sep 17 00:00:00 2001 From: oneoneonepig Date: Fri, 18 Oct 2019 03:35:39 +0800 Subject: [PATCH 0218/1457] Typo fixed (#1113) From 3b31087c1e63d065ba5eec48a4b27fda840f8e9b Mon Sep 17 00:00:00 2001 From: Stephan Dilly Date: Thu, 17 Oct 2019 21:40:35 +0200 Subject: [PATCH 0219/1457] Update replication.md (#1116) fix typo From 1dc6fb247f1caa9509f79cd719de19f5c6736bff Mon Sep 17 00:00:00 2001 From: icerlion Date: Fri, 18 Oct 2019 03:41:42 +0800 Subject: [PATCH 0220/1457] Add FlyRedis (#1115) --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index 0f73e68157..3e12cb04de 100644 --- a/clients.json +++ b/clients.json @@ -1618,6 +1618,15 @@ "authors": ["kemtekinay"], "active": true }, + + { + "name": "FlyRedis", + "language": "C++", + "repository": "https://github.com/icerlion/FlyRedis", + "description": "C++ Redis Client, base on Boost.asio, Easy To Use", + "authors": [], + "active": true + }, { "name": "BeetleX.Redis", From 564a28dbe6f42f85d535efba5b736420eab6fbef Mon Sep 17 00:00:00 2001 From: Dainel Vera Date: Thu, 17 Oct 2019 16:27:08 -0400 Subject: [PATCH 0221/1457] Add the mini_redis client for Crystal (#1118) --- clients.json | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/clients.json b/clients.json index 3e12cb04de..2f6cdda417 100644 --- a/clients.json +++ b/clients.json @@ -1619,6 +1619,16 @@ "active": true }, + { + "name": "mini_redis", + "language": "Crystal", + "url": "http://github.vladfaust.com/mini_redis/", + "repository": "https://github.com/vladfaust/mini_redis", + "description": "A light-weight low-level Redis client for Crystal", + "authors": ["vladfaust"], + "active": true + }, + { "name": "FlyRedis", "language": "C++", From ecaa1ab97451f4d38f01468d029daeb24fd6b481 Mon Sep 17 00:00:00 2001 From: vreemt Date: Thu, 17 Oct 2019 21:34:18 +0100 Subject: [PATCH 0222/1457] Minor: move space (#1130) grammar typo --- topics/data-types-intro.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md index 6f0b392bdd..ed23e703ce 100644 --- a/topics/data-types-intro.md +++ b/topics/data-types-intro.md @@ -270,7 +270,7 @@ First steps with Redis Lists The `LPUSH` command adds a new element into a list, on the left (at the head), while the `RPUSH` command adds a new -element into a list ,on the right (at the tail). Finally the +element into a list, on the right (at the tail). 
Finally the `LRANGE` command extracts ranges of elements from lists: > rpush mylist A From a465628659014aae6dae79c7ee52b320089d84af Mon Sep 17 00:00:00 2001 From: vreemt Date: Thu, 17 Oct 2019 21:35:33 +0100 Subject: [PATCH 0223/1457] Seconds in a day in words (#1131) Not all days have 24 hours, not every hour has 3600 seconds Using words avoids spreading the practice of using flat numbers --- topics/data-types-intro.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md index ed23e703ce..dbf3f23678 100644 --- a/topics/data-types-intro.md +++ b/topics/data-types-intro.md @@ -966,7 +966,8 @@ For example imagine you want to know the longest streak of daily visits of your web site users. You start counting days starting from zero, that is the day you made your web site public, and set a bit with `SETBIT` every time the user visits the web site. As a bit index you simply take the current unix -time, subtract the initial offset, and divide by 3600\*24. +time, subtract the initial offset, and divide by the number of seconds in a day +(normally, 3600\*24). This way for each user you have a small string containing the visit information for each day. With `BITCOUNT` it is possible to easily get From 96bb2c6381dea1d704f4acf907bd920ae7512623 Mon Sep 17 00:00:00 2001 From: Tomer Brisker Date: Thu, 17 Oct 2019 23:54:58 +0300 Subject: [PATCH 0224/1457] correct typo in clients.json (#1141) --- clients.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 2f6cdda417..6a727f970d 100644 --- a/clients.json +++ b/clients.json @@ -3,7 +3,7 @@ "name": "redis-rb", "language": "Ruby", "repository": "https://github.com/redis/redis-rb", - "description": "Very stable and mature client. Install and require the hiredis gem before redis-rb for maximum performances.", + "description": "Very stable and mature client. 
Install and require the hiredis gem before redis-rb for maximum performance.", "authors": ["ezmobius", "soveran", "djanowski", "pnoordhuis"], "recommended": true, "active": true From 8e25bb8ad944a71572f4ad4b4fdfa76ec450e052 Mon Sep 17 00:00:00 2001 From: wusongwei <47240096+wusongwei@users.noreply.github.com> Date: Fri, 18 Oct 2019 05:00:11 +0800 Subject: [PATCH 0225/1457] add soce-redis (#1140) --- clients.json | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 6a727f970d..9d00f2df26 100644 --- a/clients.json +++ b/clients.json @@ -1618,7 +1618,17 @@ "authors": ["kemtekinay"], "active": true }, - + + { + "name": "soce-redis", + "language": "C++", + "url": "https://github.com/wusongwei/soce/tree/master/soce-redis", + "repository": "https://github.com/wusongwei/soce/tree/master/soce-redis", + "description": "Based on hiredis, accesses the sever(single, sentinel, cluster) with the same interface, supports pipeline and async(by coroutine)", + "authors": [], + "active": true + }, + { "name": "mini_redis", "language": "Crystal", @@ -1694,4 +1704,5 @@ "recommended": true, "active": true } + ] From d68fce3a703713cdbad6b6cce34eb0b45f573f00 Mon Sep 17 00:00:00 2001 From: Alexander Cheprasov Date: Thu, 17 Oct 2019 22:05:13 +0100 Subject: [PATCH 0226/1457] Updared description for cheprasov/php-redis-client (#1142) --- clients.json | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/clients.json b/clients.json index 9d00f2df26..25761a12f5 100644 --- a/clients.json +++ b/clients.json @@ -1461,7 +1461,7 @@ "name": "cheprasov/php-redis-client", "language": "PHP", "repository": "https://github.com/cheprasov/php-redis-client", - "description": "Supported PHP client for versions of Redis from 2.6 to 4.0", + "description": "Supported PHP client for Redis. PHP ver 5.5 - 7.3 / REDIS ver 2.6 - 5.0", "authors": ["cheprasov84"], "active": true }, @@ -1628,7 +1628,6 @@ "authors": [], "active": true }, - { "name": "mini_redis", "language": "Crystal", From 0c97b33c0e377be96f1960765c74432a7f1ce49b Mon Sep 17 00:00:00 2001 From: Quinton Parker Date: Thu, 17 Oct 2019 23:06:08 +0200 Subject: [PATCH 0227/1457] Sorted set score is double-precision float (#1146) At least this line contradicts line 26 Which is it? Float or integer? --- topics/indexes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/indexes.md b/topics/indexes.md index a7dd0b0c30..04e05bf4a5 100644 --- a/topics/indexes.md +++ b/topics/indexes.md @@ -138,7 +138,7 @@ retrieve elements by radius. Limits of the score --- -Sorted set elements scores are double precision integers. It means that +Sorted set elements scores are double precision floats. It means that they can represent different decimal or integer values with different errors, because they use an exponential representation internally. 
However what is interesting for indexing purposes is that the score is From 4b347f2d4b5d513dc7adc0ef3b7fd0dcba3e3454 Mon Sep 17 00:00:00 2001 From: Quinton Parker Date: Thu, 17 Oct 2019 23:06:33 +0200 Subject: [PATCH 0228/1457] Fix ZRANGEBYLEX syntax (#1147) Mandatory prefix missing --- topics/indexes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/indexes.md b/topics/indexes.md index 04e05bf4a5..7cd9a84f70 100644 --- a/topics/indexes.md +++ b/topics/indexes.md @@ -348,7 +348,7 @@ we just store the entry as `key:value`: And search for the key with: - ZRANGEBYLEX myindex mykey: + LIMIT 1 1 + ZRANGEBYLEX myindex [mykey: + LIMIT 1 1 1) "mykey:myvalue" Then we extract the part after the colon to retrieve the value. From 437759664f0a575710dc8a3fc4f67bb4cd809958 Mon Sep 17 00:00:00 2001 From: Grant Holly Date: Thu, 17 Oct 2019 14:19:25 -0700 Subject: [PATCH 0229/1457] fixing some spelling and grammar (#1152) * fixing some spelling and grammar * another one --- commands/info.md | 2 +- commands/object.md | 2 +- commands/role.md | 2 +- commands/xinfo.md | 2 +- commands/xpending.md | 2 +- commands/xrevrange.md | 4 ++-- topics/streams-intro.md | 4 ++-- wordlist | 17 +++++++++++++++++ 8 files changed, 26 insertions(+), 9 deletions(-) diff --git a/commands/info.md b/commands/info.md index b348c98531..fef8a28d82 100644 --- a/commands/info.md +++ b/commands/info.md @@ -149,7 +149,7 @@ Here is the meaning of all fields in the **persistence** section: * `rdb_current_bgsave_time_sec`: Duration of the on-going RDB save operation if any * `rdb_last_cow_size`: The size in bytes of copy-on-write allocations during - the last RBD save operation + the last RDB save operation * `aof_enabled`: Flag indicating AOF logging is activated * `aof_rewrite_in_progress`: Flag indicating a AOF rewrite operation is on-going diff --git a/commands/object.md b/commands/object.md index ee86c74b56..4561d54990 100644 --- a/commands/object.md +++ b/commands/object.md @@ -21,7 +21,7 @@ The `OBJECT` command supports multiple sub commands: * `OBJECT FREQ ` returns the logarithmic access frequency counter of the object stored at the specified key. This subcommand is available when `maxmemory-policy` is set to an LFU policy. -* `OBJECT HELP` returns a succint help text. +* `OBJECT HELP` returns a succinct help text. Objects can be encoded in different ways: diff --git a/commands/role.md b/commands/role.md index 6353dd5928..9ddf62d4b8 100644 --- a/commands/role.md +++ b/commands/role.md @@ -46,7 +46,7 @@ An example of output when `ROLE` is called in a replica instance: The replica output is composed of the following parts: -1. The string `slave`, because of backward compatbility (see note at the end of this page). +1. The string `slave`, because of backward compatibility (see note at the end of this page). 2. The IP of the master. 3. The port number of the master. 4. The state of the replication from the point of view of the master, that can be `connect` (the instance needs to connect to its master), `connecting` (the master-replica connection is in progress), `sync` (the master and replica are trying to perform the synchronization), `connected` (the replica is online). diff --git a/commands/xinfo.md b/commands/xinfo.md index 25c8cef400..e795f00254 100644 --- a/commands/xinfo.md +++ b/commands/xinfo.md @@ -94,7 +94,7 @@ the items will likely be reported back in a linear array should document that the order is undefined. 
Finally it is possible to get help from the command, in case the user can't
-remember the exact syntax, by using the `HELP` subcommnad:
+remember the exact syntax, by using the `HELP` subcommand:

 ```
 > XINFO HELP
diff --git a/commands/xpending.md b/commands/xpending.md
index 1b7b0747e6..aa8ee3734b 100644
--- a/commands/xpending.md
+++ b/commands/xpending.md
@@ -43,7 +43,7 @@ OK

 We expect the pending entries list for the consumer group `group55` to
 have a message right now: consumer named `consumer-123` fetched the
-message without acknowledging its processing. The simples `XPENDING`
+message without acknowledging its processing. The simplest `XPENDING`
 form will give us this information:

 ```
diff --git a/commands/xrevrange.md b/commands/xrevrange.md
index 5ddcbcd4e7..757693f6b5 100644
--- a/commands/xrevrange.md
+++ b/commands/xrevrange.md
@@ -18,10 +18,10 @@ enough to send:

 Like `XRANGE` this command can be used in order to iterate the whole
 stream content, however note that in this case, the next command calls
-should use the ID of the last entry, with the sequence number decremneted
+should use the ID of the last entry, with the sequence number decremented
 by one. However if the sequence number is already 0, the time part of
 the ID should be decremented by 1, and the sequence part should be set to
-the maxium possible sequence number, that is, 18446744073709551615, or
+the maximum possible sequence number, that is, 18446744073709551615, or
 could be omitted at all, and the command will automatically assume it to
 be such a number (see `XRANGE` for more info about incomplete IDs).

diff --git a/topics/streams-intro.md b/topics/streams-intro.md
index ddcc0f36fe..b89e1e44d0 100644
--- a/topics/streams-intro.md
+++ b/topics/streams-intro.md
@@ -1,4 +1,4 @@
-# Introduction to Redis Streams
+# Introduction to Redis Streams

 The Stream is a new data type introduced with Redis 5.0, which models a *log data structure* in a more abstract way, however the essence of the log is still intact: like a log file, often implemented as a file open in append only mode, Redis streams are primarily an append only data structure. At least conceptually, because being Redis Streams an abstract data type represented in memory, they implement more powerful operations, to overcome the limits of the log file itself.

@@ -475,7 +475,7 @@ Messaging systems that lack observability are very hard to work with. Not knowin

 However we may want to do more than that, and the **XINFO** command is an observability interface that can be used with sub-commands in order to get information about streams or consumer groups.

-This command uses subcommands in order to show different informations about the status of the stream and its consumer groups. For instance **XINFO STREAM <key>** reports information about the stream itself.
+This command uses subcommands in order to show different information about the status of the stream and its consumer groups. For instance **XINFO STREAM <key>** reports information about the stream itself.
``` > XINFO STREAM mystream diff --git a/wordlist b/wordlist index f3f63ac075..37ce95f651 100644 --- a/wordlist +++ b/wordlist @@ -3,7 +3,9 @@ ACLs AMD AOF API +Atomicvar BitOp +Bitfields CAS CJSON CJSON @@ -30,6 +32,7 @@ Fsyncing GCC GDB GEODEL +GeoHashes GETs GHz GPG @@ -55,11 +58,13 @@ JPEG JSON LDB LF +LFU LLOOGG LRU Linode Liveness Lua +MAXLEN MERCHANTABILITY MX MacBook @@ -78,11 +83,13 @@ NX Nehalem Netflix NoSQL +NOOP Noordhuis ODOWN OOM OSGEO Opteron +PEL PHP PINGs POSIX @@ -142,6 +149,7 @@ Xen Xeon Yukihiro ZPOP +ZSET addr afterwards allkeys @@ -187,6 +195,8 @@ dataset datasets decrement decrementing +defragmented +defragmentation denyoom deserialize deserializing @@ -278,6 +288,7 @@ mutex mylist mymaster myzset +namespacing netcat netsplits newjobs @@ -306,8 +317,10 @@ pubsub qsort queueing rdb +radix readonly readwrite +reallocations realtime rebalance rebalancing @@ -327,6 +340,7 @@ resharded resharding reshardings resync +resyncs resynchronization resynchronizations resynchronize @@ -336,8 +350,10 @@ rss runid runtime scalable +selectable semantical sharding +sharded sismember slowlog smaps @@ -396,3 +412,4 @@ vtype wildcards ziplist ziplists +zset From 0ff4bb0a17844ce7ecc3d26135513e4d1bce31e7 Mon Sep 17 00:00:00 2001 From: patpatbear Date: Thu, 17 Oct 2019 17:35:26 -0400 Subject: [PATCH 0230/1457] update LPUSHX and RPUSHX for variable elements support. (#1155) * update LPUSHX and RPUSHX for variable elements support. * update LPUSHX RPUSHX complexity and command history. --- commands.json | 10 ++++++---- commands/lpushx.md | 10 ++++++++-- commands/rpushx.md | 10 ++++++++-- 3 files changed, 22 insertions(+), 8 deletions(-) diff --git a/commands.json b/commands.json index a03bc5817d..7104636556 100644 --- a/commands.json +++ b/commands.json @@ -1535,8 +1535,9 @@ "type": "key" }, { - "name": "element", - "type": "string" + "name": "value", + "type": "string", + "multiple": true } ], "since": "2.2.0", @@ -2193,8 +2194,9 @@ "type": "key" }, { - "name": "element", - "type": "string" + "name": "value", + "type": "string", + "multiple": true } ], "since": "2.2.0", diff --git a/commands/lpushx.md b/commands/lpushx.md index fbaeed992e..af2c3ef402 100644 --- a/commands/lpushx.md +++ b/commands/lpushx.md @@ -1,5 +1,5 @@ -Inserts `value` at the head of the list stored at `key`, only if `key` already -exists and holds a list. +Inserts specified values at the head of the list stored at `key`, only if `key` +already exists and holds a list. In contrary to `LPUSH`, no operation will be performed when `key` does not yet exist. @@ -7,6 +7,12 @@ exist. @integer-reply: the length of the list after the push operation. +@history + +* `>= 4.0`: Accepts multiple `value` arguments. + In Redis versions older than 4.0 it was possible to push a single value per + command. + @examples ```cli diff --git a/commands/rpushx.md b/commands/rpushx.md index 5748a35fcb..0e79e3733c 100644 --- a/commands/rpushx.md +++ b/commands/rpushx.md @@ -1,5 +1,5 @@ -Inserts `value` at the tail of the list stored at `key`, only if `key` already -exists and holds a list. +Inserts specified values at the tail of the list stored at `key`, only if `key` +already exists and holds a list. In contrary to `RPUSH`, no operation will be performed when `key` does not yet exist. @@ -7,6 +7,12 @@ exist. @integer-reply: the length of the list after the push operation. +@history + +* `>= 4.0`: Accepts multiple `value` arguments. + In Redis versions older than 4.0 it was possible to push a single value per + command. 
+ @examples ```cli From 3088f569a58d34f301f248635200990f8e9f4165 Mon Sep 17 00:00:00 2001 From: Doug Date: Thu, 17 Oct 2019 23:41:44 +0100 Subject: [PATCH 0231/1457] Assorted typos (#1162) --- topics/ARM.md | 2 +- topics/modules-blocking-ops.md | 6 +++--- topics/modules-intro.md | 2 +- topics/modules-native-types.md | 2 +- 4 files changed, 6 insertions(+), 6 deletions(-) diff --git a/topics/ARM.md b/topics/ARM.md index a7048bc789..d04da53add 100644 --- a/topics/ARM.md +++ b/topics/ARM.md @@ -29,7 +29,7 @@ run as expected. ## Building Redis in the Pi -* Download Redis verison 4 or 5. +* Download Redis version 4 or 5. * Just use `make` as usual to create the executable. There is nothing special in the process. The only difference is that by diff --git a/topics/modules-blocking-ops.md b/topics/modules-blocking-ops.md index 6bb6e0adb9..349fc6fab9 100644 --- a/topics/modules-blocking-ops.md +++ b/topics/modules-blocking-ops.md @@ -19,7 +19,7 @@ that can be used in order to model blocking commands. NOTE: This API is currently *experimental*, so it can only be used if the macro `REDISMODULE_EXPERIMENTAL_API` is defined. This is required because these calls are still not in their final stage of design, so may change -in the future, certain parts may be reprecated and so forth. +in the future, certain parts may be deprecated and so forth. To use this part of the modules API include the modules header like that: @@ -97,7 +97,7 @@ int his command, in order to take the example simple. RedisModule_UnblockClient(bc,NULL); } -The above command blocks the client ASAP, spawining a thread that will +The above command blocks the client ASAP, spawning a thread that will wait a second and will unblock the client. Let's check the reply and timeout callbacks, which are in our case very similar, since they just reply the client with a different reply type. @@ -148,7 +148,7 @@ caller. In order to make this working, we modify the functions as follow: As you can see, now the unblocking call is passing some private data, that is the `mynumber` pointer, to the reply callback. In order to obtain this private data, the reply callback will use the following -fnuction: +function: void *RedisModule_GetBlockedClientPrivateData(RedisModuleCtx *ctx); diff --git a/topics/modules-intro.md b/topics/modules-intro.md index 21dfb9c8be..900b226020 100644 --- a/topics/modules-intro.md +++ b/topics/modules-intro.md @@ -686,7 +686,7 @@ or head, using the following macros: REDISMODULE_LIST_HEAD REDISMODULE_LIST_TAIL -Elements returned by `RedisModule_ListPop()` are like strings craeted with +Elements returned by `RedisModule_ListPop()` are like strings created with `RedisModule_CreateString()`, they must be released with `RedisModule_FreeString()` or by enabling automatic memory management. diff --git a/topics/modules-native-types.md b/topics/modules-native-types.md index 4d497356a2..3b5da1b3dd 100644 --- a/topics/modules-native-types.md +++ b/topics/modules-native-types.md @@ -182,7 +182,7 @@ and to test if a given key is already associated to a value of a specific data type. The API uses the normal modules `RedisModule_OpenKey()` low level key access -interface in order to deal with this. This is an eaxmple of setting a +interface in order to deal with this. 
This is an example of setting a native type private data structure to a Redis key: RedisModuleKey *key = RedisModule_OpenKey(ctx,keyname,REDISMODULE_WRITE); From 6605fece7566eae09728014818dd735ee98b4d31 Mon Sep 17 00:00:00 2001 From: Stefan Miller Date: Fri, 18 Oct 2019 00:44:09 +0200 Subject: [PATCH 0232/1457] Fix XADD field value name type (#1165) --- commands.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands.json b/commands.json index 7104636556..507ebbeabc 100644 --- a/commands.json +++ b/commands.json @@ -3526,8 +3526,8 @@ "type": "string" }, { - "name": ["field", "string"], - "type": ["value", "string"], + "name": ["field", "value"], + "type": ["string", "string"], "multiple": true } ], From 9adf6381772df804d461539bdd2467afb7ca797d Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Fri, 18 Oct 2019 01:46:05 +0300 Subject: [PATCH 0233/1457] Uses `element` instead of `value` for lists (#1187) --- commands.json | 10 +++++----- commands/linsert.md | 2 +- commands/lpush.md | 2 +- commands/lpushx.md | 2 +- commands/lrem.md | 8 ++++---- commands/lset.md | 2 +- commands/rpush.md | 2 +- commands/rpushx.md | 2 +- 8 files changed, 15 insertions(+), 15 deletions(-) diff --git a/commands.json b/commands.json index 507ebbeabc..4885e8f394 100644 --- a/commands.json +++ b/commands.json @@ -1535,7 +1535,7 @@ "type": "key" }, { - "name": "value", + "name": "element", "type": "string", "multiple": true } @@ -1565,7 +1565,7 @@ }, "LREM": { "summary": "Remove elements from a list", - "complexity": "O(N) where N is the length of the list.", + "complexity": "O(N+M) where N is the length of the list and M is the number of elements removed.", "arguments": [ { "name": "key", @@ -1576,7 +1576,7 @@ "type": "integer" }, { - "name": "value", + "name": "element", "type": "string" } ], @@ -1596,7 +1596,7 @@ "type": "integer" }, { - "name": "value", + "name": "element", "type": "string" } ], @@ -2194,7 +2194,7 @@ "type": "key" }, { - "name": "value", + "name": "element", "type": "string", "multiple": true } diff --git a/commands/linsert.md b/commands/linsert.md index fb2edf2291..9fe8f6131b 100644 --- a/commands/linsert.md +++ b/commands/linsert.md @@ -1,4 +1,4 @@ -Inserts `value` in the list stored at `key` either before or after the reference +Inserts `element` in the list stored at `key` either before or after the reference value `pivot`. When `key` does not exist, it is considered an empty list and no operation is diff --git a/commands/lpush.md b/commands/lpush.md index fd15b5b8d4..297e5966df 100644 --- a/commands/lpush.md +++ b/commands/lpush.md @@ -16,7 +16,7 @@ containing `c` as first element, `b` as second element and `a` as third element. @history -* `>= 2.4`: Accepts multiple `value` arguments. +* `>= 2.4`: Accepts multiple `element` arguments. In Redis versions older than 2.4 it was possible to push a single value per command. diff --git a/commands/lpushx.md b/commands/lpushx.md index af2c3ef402..de28925f83 100644 --- a/commands/lpushx.md +++ b/commands/lpushx.md @@ -9,7 +9,7 @@ exist. @history -* `>= 4.0`: Accepts multiple `value` arguments. +* `>= 4.0`: Accepts multiple `element` arguments. In Redis versions older than 4.0 it was possible to push a single value per command. 
diff --git a/commands/lrem.md b/commands/lrem.md index 573deae958..36c0c7df00 100644 --- a/commands/lrem.md +++ b/commands/lrem.md @@ -1,10 +1,10 @@ -Removes the first `count` occurrences of elements equal to `value` from the list +Removes the first `count` occurrences of elements equal to `element` from the list stored at `key`. The `count` argument influences the operation in the following ways: -* `count > 0`: Remove elements equal to `value` moving from head to tail. -* `count < 0`: Remove elements equal to `value` moving from tail to head. -* `count = 0`: Remove all elements equal to `value`. +* `count > 0`: Remove elements equal to `element` moving from head to tail. +* `count < 0`: Remove elements equal to `element` moving from tail to head. +* `count = 0`: Remove all elements equal to `element`. For example, `LREM list -2 "hello"` will remove the last two occurrences of `"hello"` in the list stored at `list`. diff --git a/commands/lset.md b/commands/lset.md index 458b193ff6..8f1c391594 100644 --- a/commands/lset.md +++ b/commands/lset.md @@ -1,4 +1,4 @@ -Sets the list element at `index` to `value`. +Sets the list element at `index` to `element`. For more information on the `index` argument, see `LINDEX`. An error is returned for out of range indexes. diff --git a/commands/rpush.md b/commands/rpush.md index 182ec88a38..a6d4c642ab 100644 --- a/commands/rpush.md +++ b/commands/rpush.md @@ -16,7 +16,7 @@ containing `a` as first element, `b` as second element and `c` as third element. @history -* `>= 2.4`: Accepts multiple `value` arguments. +* `>= 2.4`: Accepts multiple `element` arguments. In Redis versions older than 2.4 it was possible to push a single value per command. diff --git a/commands/rpushx.md b/commands/rpushx.md index 0e79e3733c..0f255b9472 100644 --- a/commands/rpushx.md +++ b/commands/rpushx.md @@ -9,7 +9,7 @@ exist. @history -* `>= 4.0`: Accepts multiple `value` arguments. +* `>= 4.0`: Accepts multiple `element` arguments. In Redis versions older than 4.0 it was possible to push a single value per command. From f867eafe47d82371e2fec64634a10d0211e8dc8c Mon Sep 17 00:00:00 2001 From: "Pascal S. de Kloe" Date: Fri, 18 Oct 2019 00:47:01 +0200 Subject: [PATCH 0234/1457] Add another Go client (#1163) * Add another Go client. * Update clients.json --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index 25761a12f5..f2f2b87388 100644 --- a/clients.json +++ b/clients.json @@ -177,6 +177,15 @@ "active": true }, + { + "name": "Redis", + "language": "Go", + "repository": "https://github.com/pascaldekloe/redis", + "description": "clean, fully asynchronous, high-performance, low-memory", + "authors": [], + "active": true + }, + { "name": "shipwire/redis", "language": "Go", From 05e0a90ef8fd0aa4268f80747984bae0c5e1c463 Mon Sep 17 00:00:00 2001 From: Jamie Scott <5336227+IAmATeaPot418@users.noreply.github.com> Date: Sat, 19 Oct 2019 06:22:01 -0700 Subject: [PATCH 0235/1457] Adding Hashes to ACL Topic. (#1188) Adding documentation for adding hashes to the ACL topic. --- topics/acl.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/topics/acl.md b/topics/acl.md index 0e4cae5265..754bac07df 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -113,6 +113,8 @@ Configure valid passwords for the user: * `>`: Add this password to the list of valid passwords for the user. For example `>mypass` will add "mypass" to the list of valid passwords. This directive clears the *nopass* flag (see later). 
Every user can have any number of passwords. * `<`: Remove this password from the list of valid passwords. Emits an error in case the password you are trying to remove is actually not set. +* `#`: Add this SHA-256 hash value to the list of valid passwords for the user. This hash value will be compared to the hash of a password entered for an ACL user. This allows users to store hashes in the acl.conf file rather than storing cleartext passwords. Only SHA-256 hash values are accepted as the password hash must be 64 characters and only container lowercase hexadecimal characters. +* `!`: Remove this hash value from from the list of valid passwords. This is useful when you do not know the password specified by the hash value but would like to remove the password from the user. * `nopass`: All the set passwords of the user are removed, and the user is flagged as requiring no password: it means that every password will work against this user. If this directive is used for the default user, every new connection will be immediately authenticated with the default user without any explicit AUTH command required. Note that the *resetpass* directive will clear this condition. * `resetpass`: Flush the list of allowed passwords. Moreover removes the *nopass* status. After *resetpass* the user has no associated passwords and there is no way to authenticate without adding some password (or setting it as *nopass* later). From a1e82803efdb9c727c7e93a60522b92a508466d6 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Sat, 19 Oct 2019 19:13:07 +0300 Subject: [PATCH 0236/1457] Merge master --- clients.json | 20 +++++++++++++++----- 1 file changed, 15 insertions(+), 5 deletions(-) diff --git a/clients.json b/clients.json index 25761a12f5..a92f9b2d87 100644 --- a/clients.json +++ b/clients.json @@ -122,7 +122,7 @@ "recommended": true, "active": true }, - + { "name": "RedisPipe", "language": "Go", @@ -1045,7 +1045,7 @@ "authors": [], "active": true }, - + { "name": "redisclient", "language": "Nim", @@ -1054,7 +1054,7 @@ "authors": ["xmonader"], "active": true }, - + { "name": "libvmod-redis", "language": "VCL", @@ -1609,7 +1609,7 @@ "authors": [], "active": true }, - + { "name": "Redis_MTC", "language": "Xojo", @@ -1628,6 +1628,7 @@ "authors": [], "active": true }, + { "name": "mini_redis", "language": "Crystal", @@ -1637,7 +1638,7 @@ "authors": ["vladfaust"], "active": true }, - + { "name": "FlyRedis", "language": "C++", @@ -1702,6 +1703,15 @@ "authors": [], "recommended": true, "active": true + }, + + { + "name": "redisio", + "language": "Python", + "repository": "https://github.com/cf020031308/redisio", + "description": "A tiny and fast redis client for script boys.", + "authors": [], + "active": true } ] From 66986df1a96604a67dda78bee7f437cbe0d7faed Mon Sep 17 00:00:00 2001 From: joyield Date: Sun, 20 Oct 2019 00:35:58 +0800 Subject: [PATCH 0237/1457] Add predixy in tools.json (#853) --- tools.json | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/tools.json b/tools.json index 6bd37c341a..b94d310f9f 100644 --- a/tools.json +++ b/tools.json @@ -627,6 +627,15 @@ "description": "Synchronizes nonces across node instances.", "authors": [] }, + + { + "name": "predixy", + "language": "C++", + "repository": "https://github.com/joyieldInc/predixy", + "description": "A high performance and full features proxy for redis, supports redis sentinel and redis cluster.", + "authors": [] + }, + { "name": "redis-browser", "language": "javascript", @@ -634,6 +643,7 @@ "description": 
"Cross platform GUI tool for redis that includes support for ReJSON", "authors": ["anandtrex"] }, + { "name": "p3x-redis-ui", "language": "javascript", @@ -642,6 +652,7 @@ "description": "📡 P3X Redis UI that uses Socket.IO, AngularJs Material and IORedis with statistics, console - terminal, tree, dark mode, internationalization, multiple connections, web and desktop by Electron. Works as an app without Node.JS GUI or with the latest Node.Js version. Can test it at https://p3x.redis.patrikx3.com/.", "authors": ["patrikx3"] }, + { "name": "Redis Server", "language": "Xojo", From 0568e5a47359d690e6a3d21f2ec9cfeed8066265 Mon Sep 17 00:00:00 2001 From: Roland Rifandi Utama Date: Sat, 19 Oct 2019 23:41:21 +0700 Subject: [PATCH 0238/1457] add redis-cluster in clients.json (#858) * add redis-cluster in clients.json * Update clients.json * Update clients.json --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index 7b82e7cc08..9a83b29529 100644 --- a/clients.json +++ b/clients.json @@ -1619,6 +1619,15 @@ "active": true }, + { + "name": "redis-cluster", + "language": "Ruby", + "repository": "https://github.com/bukalapak/redis-cluster", + "description": "Redis cluster client on top of redis-rb. Support pipelining.", + "authors": ["bukalapak"], + "active": true + }, + { "name": "Redis_MTC", "language": "Xojo", From 524183d6c48d0088dc5c2b9726cf70bdbe78bc8b Mon Sep 17 00:00:00 2001 From: gisTao Date: Sun, 20 Oct 2019 00:46:06 +0800 Subject: [PATCH 0239/1457] Update clients.json (#866) --- clients.json | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 9a83b29529..d37d509735 100644 --- a/clients.json +++ b/clients.json @@ -1618,7 +1618,15 @@ "authors": [], "active": true }, - + + { + "name": "RedisGo-Async", + "language": "Go", + "repository": "https://github.com/gistao/RedisGo-Async", + "description": "RedisGo-Async is a Go client for Redis, both asynchronous and synchronous modes are supported,,its API is fully compatible with redigo.", + "authors": ["gistao"] + }, + { "name": "redis-cluster", "language": "Ruby", From 49a2a69a1839527cfa9171d54aa12af754f53aa8 Mon Sep 17 00:00:00 2001 From: Adam Wallner Date: Sat, 19 Oct 2019 18:52:55 +0200 Subject: [PATCH 0240/1457] Added Noderis client (#867) --- clients.json | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/clients.json b/clients.json index d37d509735..1b18fe9851 100644 --- a/clients.json +++ b/clients.json @@ -1619,6 +1619,14 @@ "active": true }, + { + "name": "Noderis", + "language": "Node.js", + "repository": "https://github.com/wallneradam/noderis", + "description": "A fast, standalone Redis client without external dependencies. It can be used with callbacks, Promises and async-await as well at the same time. Clean, well designed and documented source code. 
Because of this it supports code completion (WebStorm/PHPStorm).",
+    "authors": []
+  },
+
   {
     "name": "RedisGo-Async",
     "language": "Go",
     "repository": "https://github.com/gistao/RedisGo-Async",
     "description": "RedisGo-Async is a Go client for Redis, both asynchronous and synchronous modes are supported,,its API is fully compatible with redigo.",
     "authors": ["gistao"]
   },

From 67505b3b27dc8becde50b2a7d52e27014adc2224 Mon Sep 17 00:00:00 2001
From: Tom
Date: Sat, 19 Oct 2019 17:55:26 +0100
Subject: [PATCH 0241/1457] Restructure sentence (#890)

Restructured into two sentences so that it is clearer to read
---
 topics/persistence.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/topics/persistence.md b/topics/persistence.md
index 49b23df3cb..4ea79a0023 100644
--- a/topics/persistence.md
+++ b/topics/persistence.md
@@ -39,8 +39,8 @@ AOF disadvantages
 ---

 * AOF files are usually bigger than the equivalent RDB files for the same dataset.
-* AOF can be slower than RDB depending on the exact fsync policy. In general with fsync set to *every second* performance is still very high, and with fsync disabled it should be exactly as fast as RDB even under high load. Still RDB is able to provide more guarantees about the maximum latency even in the case of a huge write load.
-* In the past we experienced rare bugs in specific commands (for instance there was one involving blocking commands like BRPOPLPUSH) causing the AOF produced to not reproduce exactly the same dataset on reloading. These bugs are rare and we have tests in the test suite creating random complex datasets automatically and reloading them to check everything is ok, but these kind of bugs are almost impossible with RDB persistence. To make this point more clear: the Redis AOF works incrementally updating an existing state, like MySQL or MongoDB does, while the RDB snapshotting creates everything from scratch again and again, that is conceptually more robust. However -
+* AOF can be slower than RDB depending on the exact fsync policy. In general with fsync set to *every second* performance is still very high, and with fsync disabled it should be exactly as fast as RDB even under high load. Still RDB is able to provide more guarantees about the maximum latency even in the case of a huge write load.
+* In the past we experienced rare bugs in specific commands (for instance there was one involving blocking commands like BRPOPLPUSH) causing the AOF produced to not reproduce exactly the same dataset on reloading. These bugs are rare and we have tests in the test suite creating random complex datasets automatically and reloading them to check everything is fine. However, these kinds of bugs are almost impossible with RDB persistence. To make this point more clear: the Redis AOF works by incrementally updating an existing state, like MySQL or MongoDB does, while the RDB snapshotting creates everything from scratch again and again, that is conceptually more robust. However -

 1) It should be noted that every time the AOF is rewritten by Redis it is recreated from scratch starting from the actual data contained in the data set, making resistance to bugs stronger compared to an always appending AOF file (or one rewritten reading the old AOF instead of reading the data in memory).
 2) We have never had a single report from users about an AOF corruption that was detected in the real world.
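To connect the AOF trade-offs in the patch above to an actual setup, here is a minimal illustrative `redis-cli` sketch (the chosen policy is an example, not a recommendation): `everysec` bounds potential loss to roughly one second of writes, `always` fsyncs every write for maximum durability at a throughput cost, and `no` leaves fsync timing to the operating system.

```
> CONFIG SET appendonly yes
OK
> CONFIG SET appendfsync everysec
OK
> BGREWRITEAOF
Background append only file rewriting started
```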
From ad8e7df89baf11171c8bae9261bd3c3b74af12e3 Mon Sep 17 00:00:00 2001 From: Yaroslav Derman Date: Sat, 19 Oct 2019 19:56:54 +0300 Subject: [PATCH 0242/1457] added new redis scala client (#901) * added new redis scala client * Update clients.json Removed authors --- clients.json | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/clients.json b/clients.json index 1b18fe9851..2b33d815b5 100644 --- a/clients.json +++ b/clients.json @@ -595,6 +595,13 @@ "description": "", "authors": ["alejandrocrosa"] }, + { + "name": "scala-redis", + "language": "Scala", + "repository": "https://github.com/yarosman/redis-client-scala-netty", + "description": "Non-blocking, netty 4.1.x based Scala Redis client", + "authors": [] + }, { "name": "scala-redis", From bda9ff6fb9700b50b602042a34e1e5b160aaeed7 Mon Sep 17 00:00:00 2001 From: James Halsall Date: Mon, 21 Oct 2019 14:18:48 +0100 Subject: [PATCH 0243/1457] Fix some grammar and spelling in persistence.md (#1189) --- topics/persistence.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/persistence.md b/topics/persistence.md index 4ea79a0023..df01f7c90b 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -146,7 +146,7 @@ writes Redis will try to perform a single `fsync` operation. ### What should I do if my AOF gets truncated? It is possible that the server crashed while writing the AOF file, or that the -volume where the AOF file is stored is store was full. When this happens the +volume where the AOF file is stored was full at the time of writing. When this happens the AOF still contains consistent data representing a given point-in-time version of the dataset (that may be old up to one second with the default AOF fsync policy), but the last command in the AOF could be truncated. @@ -194,8 +194,8 @@ offset in the file, and see if it is possible to manually repair the file: the AOF uses the same format of the Redis protocol and is quite simple to fix manually. Otherwise it is possible to let the utility fix the file for us, but in that case all the AOF portion from the invalid part to the end of the -file may be discareded, leading to a massive amount of data lost if the -corruption happen to be in the initial part of the file. +file may be discarded, leading to a massive amount of data loss if the +corruption happened to be in the initial part of the file. ### How it works From 59e3d4325ae46a37679cade00b623db82008bc70 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Tue, 22 Oct 2019 14:35:34 +0300 Subject: [PATCH 0244/1457] Adds note about conversion of associative arrays (#803) * Adds note about conversion of associative arrays * Update eval.md --- commands/eval.md | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/commands/eval.md b/commands/eval.md index 3fa49f1443..20bc4d1de8 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -109,12 +109,13 @@ Redis to Lua conversion rule: * Lua boolean true -> Redis integer reply with value of 1. -**RESP3 mode conversion rules**: note that the Lua engine can work in RESP3 mode using the new Redis 6 protocol. In this case there are additional conversion rules, and certain conversions are also modified compared to the RESP2 mode. Please refer to the RESP3 section of this document for more information. - -Also there are two important rules to note: +Lastly, there are three important rules to note: * Lua has a single numerical type, Lua numbers. There is no distinction between integers and floats. 
So we always convert Lua numbers into integer replies, removing the decimal part of the number if any. **If you want to return a float from Lua you should return it as a string**, exactly like Redis itself does (see for instance the `ZSCORE` command).
* There is [no simple way to have nils inside Lua arrays](http://www.lua.org/pil/19.1.html), this is a result of Lua table semantics, so when Redis converts a Lua array into Redis protocol the conversion is stopped if a nil is encountered.
+* When a Lua table contains keys (and their values), the converted Redis reply will **not** include them.
+
+**RESP3 mode conversion rules**: note that the Lua engine can work in RESP3 mode using the new Redis 6 protocol. In this case there are additional conversion rules, and certain conversions are also modified compared to the RESP2 mode. Please refer to the RESP3 section of this document for more information.

 Here are a few conversion examples:
@@ -135,17 +136,17 @@ The last example shows how it is possible to receive the exact return value of
 `redis.call()` or `redis.pcall()` from Lua that would be returned if the
 command was called directly.

-In the following example we can see how floats and arrays with nils are handled:
+In the following example we can see how floats and arrays containing nils and keys are handled:

 ```
-> eval "return {1,2,3.3333,'foo',nil,'bar'}" 0
+> eval "return {1,2,3.3333,somekey='somevalue','foo',nil,'bar'}" 0
 1) (integer) 1
 2) (integer) 2
 3) (integer) 3
 4) "foo"
 ```

-As you can see 3.333 is converted into 3, and the *bar* string is never returned as there is a nil before.
+As you can see 3.3333 is converted into 3, *somekey* is excluded, and the *bar* string is never returned as there is a nil before.

 ## Helper functions to return Redis types

From d1e812455dd9f647ffdf22f509001de015188a02 Mon Sep 17 00:00:00 2001
From: Hamid Alaei Varnosfaderani
Date: Tue, 22 Oct 2019 21:05:24 +0330
Subject: [PATCH 0245/1457] update lqrm author (#1178)

---
 modules.json | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/modules.json b/modules.json
index d665915400..85b2bb4cfa 100644
--- a/modules.json
+++ b/modules.json
@@ -203,7 +203,9 @@
    "license": "BSD",
    "repository": "https://github.com/halaei/lqrm",
    "description": "A Laravel compatible queue driver for Redis that supports reliable blocking pop from FIFO and scheduled queues.",
-    "authors": [],
+    "authors": [
+      "halaei"
+    ],
    "stars": 4
  },

From 204dcfc38c06a8e520141fa7df8e7450753a39a4 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E8=B5=96=E4=BF=A1=E6=B6=9B?=
Date: Wed, 23 Oct 2019 20:17:18 +0800
Subject: [PATCH 0246/1457] Add a command line tool iredis.
(#1192)

---
 tools.json | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/tools.json b/tools.json
index b94d310f9f..373b988fa2 100644
--- a/tools.json
+++ b/tools.json
@@ -660,6 +660,14 @@
    "description": "Cross platform GUI to spin up and control redis-server, included in the project",
    "url":"https://github.com/ktekinay/XOJO-Redis/releases/",
    "authors": ["KemTekinay"]
-  }
+  },

+  {
+    "name": "iredis",
+    "language": "Python",
+    "repository": "https://github.com/laixintao/iredis",
+    "description": "A Terminal Client for Redis with AutoCompletion and Syntax Highlighting.",
+    "url": "https://iredis.io",
+    "authors": ["laixintao"]
+  }
 ]

From 3d06903511731d748d2e70d7f77ad702963eb6a7 Mon Sep 17 00:00:00 2001
From: Xiaodong
Date: Fri, 25 Oct 2019 23:05:04 +0800
Subject: [PATCH 0247/1457] Add Rediseen to tools.json (#1195)

* Add Rediseen to tools.json

Rediseen is a tool that helps you start a REST-like API service for your Redis database, without writing a single line of code.

* Use twitter ID for author field
---
 tools.json | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/tools.json b/tools.json
index 373b988fa2..87e143a8b8 100644
--- a/tools.json
+++ b/tools.json
@@ -669,5 +669,14 @@
    "description": "A Terminal Client for Redis with AutoCompletion and Syntax Highlighting.",
    "url": "https://iredis.io",
    "authors": ["laixintao"]
+  },
+
+  {
+    "name": "Rediseen",
+    "language": "Go",
+    "repository": "https://github.com/XD-DENG/rediseen",
+    "description": "Start a REST-like API service for your Redis database, without writing a single line of code.",
+    "url": "https://github.com/XD-DENG/rediseen",
+    "authors": ["XiaodongDENG1"]
  }
 ]

From 8a282fc9340f02e58b4c2c9dfeeb5ef75dd934a3 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E9=9D=9E=E6=B3=95=E6=93=8D=E4=BD=9C?=
Date: Fri, 25 Oct 2019 23:11:00 +0800
Subject: [PATCH 0248/1457] Add a redis web tool for flask user (#1193)

---
 tools.json | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/tools.json b/tools.json
index 87e143a8b8..e707e3c356 100644
--- a/tools.json
+++ b/tools.json
@@ -670,8 +670,16 @@
    "url": "https://iredis.io",
    "authors": ["laixintao"]
  },
-  
-  {
+  
+  {
+    "name": "flask-redisboard",
+    "language": "Python",
+    "repository": "https://github.com/hjlarry/flask-redisboard",
+    "description": "A Flask extension that lets users view and manage Redis through a beautiful interface.",
+    "authors": ["hjlarry"]
+  },
+  
+  {
    "name": "Rediseen",

From 118723db1c9dd3ca78edf904962f6f633da8e004 Mon Sep 17 00:00:00 2001
From: Itamar Haber
Date: Sun, 27 Oct 2019 15:02:43 +0200
Subject: [PATCH 0249/1457] Update modules.json (#1191)

---
 modules.json | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules.json b/modules.json
index 85b2bb4cfa..a9ebfef692 100644
--- a/modules.json
+++ b/modules.json
@@ -224,7 +224,7 @@
    "repository": "https://github.com/fcerbell/redismodule-smartcache",
    "description": "A redis module that provides a pass-through cache",
    "authors": [
-      "fcerbelle"
+      "fcerbell"
    ],
    "stars": 2
  },

From ca444d1d1fb6a92582f9ed5833a01c9a8c3ba399 Mon Sep 17 00:00:00 2001
From: Itamar Haber
Date: Sun, 27 Oct 2019 15:53:28 +0200
Subject: [PATCH 0250/1457] Adds access patterns to bitmaps (#1186)

* Adds access patterns to bitmaps

* Update setbit.md
---
 commands/setbit.md | 129 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 129
insertions(+)

diff --git a/commands/setbit.md b/commands/setbit.md
index 9163c90b54..770d4abb41 100644
--- a/commands/setbit.md
+++ b/commands/setbit.md
@@ -2,6 +2,7 @@ Sets or clears the bit at _offset_ in the string value stored at _key_.

 The bit is either set or cleared depending on _value_, which can be either 0 or
 1.
+
 When _key_ does not exist, a new string value is created.
 The string is grown to make sure it can hold a bit at _offset_.
 The _offset_ argument is required to be greater than or equal to 0, and smaller
@@ -30,3 +31,131 @@ SETBIT mykey 7 1
 SETBIT mykey 7 0
 GET mykey
 ```
+
+## Pattern: accessing the entire bitmap
+
+There are cases when you need to set all the bits of a single bitmap at once, for
+example when initializing it to a default non-zero value. It is possible to do
+this with multiple calls to the `SETBIT` command, one for each bit that needs to
+be set. However, as an optimization, you can use a single `SET` command to set
+the entire bitmap.
+
+Bitmaps are not an actual data type, but a set of bit-oriented operations
+defined on the String type (for more information refer to the
+[Bitmaps section of the Data Types Introduction page][ti]). This means that
+bitmaps can be used with string commands, and most importantly with `SET` and
+`GET`.
+
+Because Redis' strings are binary-safe, a bitmap is trivially encoded as a byte
+stream. The first byte of the string corresponds to offsets 0..7 of
+the bitmap, the second byte to the 8..15 range, and so forth.
+
+For example, after setting a few bits, getting the string value of the bitmap
+would look like this:
+
+```
+> SETBIT bitmapsarestrings 2 1
+> SETBIT bitmapsarestrings 3 1
+> SETBIT bitmapsarestrings 5 1
+> SETBIT bitmapsarestrings 10 1
+> SETBIT bitmapsarestrings 11 1
+> SETBIT bitmapsarestrings 14 1
+> GET bitmapsarestrings
+"42"
+```
+
+By getting the string representation of a bitmap, the client can then parse the
+response's bytes by extracting the bit values using native bit operations in its
+native programming language. Symmetrically, it is also possible to set an entire
+bitmap by performing the bits-to-bytes encoding in the client and calling `SET`
+with the resultant string.
+
+[ti]: /topics/data-types-intro#bitmaps
+
+## Pattern: setting multiple bits
+
+`SETBIT` excels at setting single bits, and can be called several times when
+multiple bits need to be set. To optimize this operation you can replace
+multiple `SETBIT` calls with a single call to the variadic `BITFIELD` command
+and the use of fields of type `u1`.
+
+For instance, the example above could be replaced by:
+
+```
+> BITFIELD bitsinabitmap SET u1 2 1 SET u1 3 1 SET u1 5 1 SET u1 10 1 SET u1 11 1 SET u1 14 1
+```
+
+## Advanced Pattern: accessing bitmap ranges
+
+It is also possible to use the `GETRANGE` and `SETRANGE` string commands to
+efficiently access a range of bit offsets in a bitmap. Below is a sample
+implementation in idiomatic Redis Lua scripting that can be run with the `EVAL`
+command:
+
+```
+--[[
+Sets a bitmap range
+
+Bitmaps are stored as Strings in Redis. A range spans one or more bytes,
+so we can call `SETRANGE` when entire bytes need to be set instead of flipping
+individual bits. Also, to avoid multiple internal memory allocations in
+Redis, we traverse in reverse.
+Expected input:
+    KEYS[1] - bitmap key
+    ARGV[1] - start offset (0-based, inclusive)
+    ARGV[2] - end offset (same, should be bigger than start, no error checking)
+    ARGV[3] - value (should be 0 or 1, no error checking)
+]]--
+
+-- A helper function to stringify a binary string to semi-binary format
+local function tobits(str)
+  local r = ''
+  for i = 1, string.len(str) do
+    local c = string.byte(str, i)
+    local b = ' '
+    for j = 0, 7 do
+      b = tostring(bit.band(c, 1)) .. b
+      c = bit.rshift(c, 1)
+    end
+    r = r .. b
+  end
+  return r
+end
+
+-- Main
+local k = KEYS[1]
+local s, e, v = tonumber(ARGV[1]), tonumber(ARGV[2]), tonumber(ARGV[3])
+
+-- First treat the dangling bits in the last byte
+local ms, me = s % 8, (e + 1) % 8
+if me > 0 then
+  local t = math.max(e - me + 1, s)
+  for i = e, t, -1 do
+    redis.call('SETBIT', k, i, v)
+  end
+  e = t
+end
+
+-- Then the dangling bits in the first byte
+if ms > 0 then
+  local t = math.min(s - ms + 7, e)
+  for i = s, t, 1 do
+    redis.call('SETBIT', k, i, v)
+  end
+  s = t + 1
+end
+
+-- Set a range accordingly, if at all
+local rs, re = s / 8, (e + 1) / 8
+local rl = re - rs
+if rl > 0 then
+  local b = '\255'
+  if 0 == v then
+    b = '\0'
+  end
+  redis.call('SETRANGE', k, rs, string.rep(b, rl))
+end
+```
+
+**Note:** the implementation for getting a range of bit offsets from a bitmap is
+left as an exercise to the reader.

From 42e933cd0c166ad825fb6cce7dbf7e402594f621 Mon Sep 17 00:00:00 2001
From: Itamar Haber
Date: Sun, 27 Oct 2019 17:21:10 +0200
Subject: [PATCH 0251/1457] Adds version history for SCAN TYPE (#1196)

addresses #1166
---
 commands/scan.md | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/commands/scan.md b/commands/scan.md
index 4a435ab2a1..5238a9a975 100644
--- a/commands/scan.md
+++ b/commands/scan.md
@@ -142,7 +142,7 @@ As you can see most of the calls returned zero elements, but the last call where

 ## The TYPE option

-This option asks `SCAN` to only return objects that match a given `type`, allowing you to iterate through the database looking for keys of a specific type. The **TYPE** option is only available on the whole-database `SCAN`, not `HSCAN` or `ZSCAN` etc.
+As of version 6.0 you can use this option to ask `SCAN` to only return objects that match a given `type`, allowing you to iterate through the database looking for keys of a specific type. The **TYPE** option is only available on the whole-database `SCAN`, not `HSCAN` or `ZSCAN` etc.

 The `type` argument is the same string name that the `TYPE` command returns. Note a quirk where some Redis types, such as GeoHashes, HyperLogLogs, Bitmaps, and Bitfields, may internally be implemented using other Redis types, such as a string or zset, so can't be distinguished from other keys of that same type by `SCAN`. For example, a ZSET and GEOHASH:

@@ -203,6 +203,10 @@ Also note that this behavior is specific to `SSCAN`, `HSCAN` and `ZSCAN`. `SCAN`
 * `HSCAN` array of elements contain two elements, a field and a value, for every returned element of the Hash.
 * `ZSCAN` array of elements contain two elements, a member and its associated score, for every returned element of the sorted set.
+@history
+
+ * `>= 6.0`: Supports the `TYPE` option.
+
 ## Additional examples

 Iteration of a Hash value.
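To make the `TYPE` option documented above concrete, here is a hypothetical `redis-cli` session against Redis 6.0 or later (the key names are invented for the example, and cursor values and ordering will vary):

```
> MSET color "blue" size "large"
OK
> LPUSH tasks "a" "b"
(integer) 2
> SCAN 0 TYPE string
1) "0"
2) 1) "color"
   2) "size"
> SCAN 0 TYPE list
1) "0"
2) 1) "tasks"
```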
From 3be7453b0315362c981f9586926bb19a21a5ee93 Mon Sep 17 00:00:00 2001
From: Itamar Haber
Date: Sun, 27 Oct 2019 21:18:14 +0200
Subject: [PATCH 0252/1457] Adds `CLUSTER`'s `BUMPEPOCH`, `FLUSHSLOTS` and `MYID` (#1197)

---
 commands.json | 18 ++++++++++++++++++
 commands/cluster-bumpepoch.md | 9 +++++++++
 commands/cluster-flushslots.md | 7 +++++++
 commands/cluster-myid.md | 7 +++++++
 4 files changed, 41 insertions(+)
 create mode 100644 commands/cluster-bumpepoch.md
 create mode 100644 commands/cluster-flushslots.md
 create mode 100644 commands/cluster-myid.md

diff --git a/commands.json b/commands.json
index 4885e8f394..33e4e677fc 100644
--- a/commands.json
+++ b/commands.json
@@ -355,6 +355,12 @@
    "since": "3.0.0",
    "group": "cluster"
  },
+  "CLUSTER BUMPEPOCH": {
+    "summary": "Advance the cluster config epoch",
+    "complexity": "O(1)",
+    "since": "3.0.0",
+    "group": "cluster"
+  },
  "CLUSTER COUNT-FAILURE-REPORTS": {
    "summary": "Return the number of failure reports active for a given node",
    "complexity": "O(N) where N is the number of failure reports",
    "since": "3.0.0",
    "group": "cluster"
  },
@@ -406,6 +412,12 @@
    "since": "3.0.0",
    "group": "cluster"
  },
+  "CLUSTER FLUSHSLOTS": {
+    "summary": "Delete a node's own slots information",
+    "complexity": "O(1)",
+    "since": "3.0.0",
+    "group": "cluster"
+  },
  "CLUSTER FORGET": {
    "summary": "Remove a node from the nodes table",
    "complexity": "O(1)",
    "since": "3.0.0",
    "group": "cluster"
  },
@@ -468,6 +480,12 @@
    "since": "3.0.0",
    "group": "cluster"
  },
+  "CLUSTER MYID": {
+    "summary": "Return the node id",
+    "complexity": "O(1)",
+    "since": "3.0.0",
+    "group": "cluster"
+  },
  "CLUSTER NODES": {
    "summary": "Get Cluster config for the node",
    "complexity": "O(N) where N is the total number of Cluster nodes",
diff --git a/commands/cluster-bumpepoch.md b/commands/cluster-bumpepoch.md
new file mode 100644
index 0000000000..b05694a442
--- /dev/null
+++ b/commands/cluster-bumpepoch.md
@@ -0,0 +1,9 @@
+Advances the cluster config epoch.
+
+The `CLUSTER BUMPEPOCH` command triggers an increment to the cluster's config epoch from the connected node. The epoch will be incremented if the node's config epoch is zero, or if it is less than the cluster's greatest epoch.
+
+**Note:** config epoch management is performed internally by the cluster, and relies on obtaining a consensus of nodes. The `CLUSTER BUMPEPOCH` command attempts to increment the config epoch **WITHOUT** getting the consensus, so using it may violate the "last failover wins" rule. Use it with caution.
+
+@return
+
+@simple-string-reply: `BUMPED` if the epoch was incremented, or `STILL` if the node already has the greatest config epoch in the cluster.
diff --git a/commands/cluster-flushslots.md b/commands/cluster-flushslots.md
new file mode 100644
index 0000000000..2279f3b738
--- /dev/null
+++ b/commands/cluster-flushslots.md
@@ -0,0 +1,7 @@
+Deletes all slots from a node.
+
+The `CLUSTER FLUSHSLOTS` command deletes all information about slots from the connected node. It can only be called when the database is empty.
+
+@return
+
+@simple-string-reply: `OK`
diff --git a/commands/cluster-myid.md b/commands/cluster-myid.md
new file mode 100644
index 0000000000..02e8b1d3b6
--- /dev/null
+++ b/commands/cluster-myid.md
@@ -0,0 +1,7 @@
+Returns the node's id.
+
+The `CLUSTER MYID` command returns the unique, auto-generated identifier that is associated with the connected cluster node.
+
+@return
+
+@bulk-string-reply: The node id.
\ No newline at end of file From 8895bae0f84e8a2d34f51f4854e59a50fbb87201 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Sun, 27 Oct 2019 21:57:32 +0200 Subject: [PATCH 0253/1457] Create LICENSE Pasted the plain-text version from https://creativecommons.org/licenses/by-sa/4.0/legalcode.txt --- LICENSE | 349 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 349 insertions(+) create mode 100644 LICENSE diff --git a/LICENSE b/LICENSE new file mode 100644 index 0000000000..7e3883593b --- /dev/null +++ b/LICENSE @@ -0,0 +1,349 @@ +Creative Commons Attribution-ShareAlike 4.0 International Public +License + +By exercising the Licensed Rights (defined below), You accept and agree +to be bound by the terms and conditions of this Creative Commons +Attribution-ShareAlike 4.0 International Public License ("Public +License"). To the extent this Public License may be interpreted as a +contract, You are granted the Licensed Rights in consideration of Your +acceptance of these terms and conditions, and the Licensor grants You +such rights in consideration of benefits the Licensor receives from +making the Licensed Material available under these terms and +conditions. + + +Section 1 -- Definitions. + + a. Adapted Material means material subject to Copyright and Similar + Rights that is derived from or based upon the Licensed Material + and in which the Licensed Material is translated, altered, + arranged, transformed, or otherwise modified in a manner requiring + permission under the Copyright and Similar Rights held by the + Licensor. For purposes of this Public License, where the Licensed + Material is a musical work, performance, or sound recording, + Adapted Material is always produced where the Licensed Material is + synched in timed relation with a moving image. + + b. Adapter's License means the license You apply to Your Copyright + and Similar Rights in Your contributions to Adapted Material in + accordance with the terms and conditions of this Public License. + + c. BY-SA Compatible License means a license listed at + creativecommons.org/compatiblelicenses, approved by Creative + Commons as essentially the equivalent of this Public License. + + d. Copyright and Similar Rights means copyright and/or similar rights + closely related to copyright including, without limitation, + performance, broadcast, sound recording, and Sui Generis Database + Rights, without regard to how the rights are labeled or + categorized. For purposes of this Public License, the rights + specified in Section 2(b)(1)-(2) are not Copyright and Similar + Rights. + + e. Effective Technological Measures means those measures that, in the + absence of proper authority, may not be circumvented under laws + fulfilling obligations under Article 11 of the WIPO Copyright + Treaty adopted on December 20, 1996, and/or similar international + agreements. + + f. Exceptions and Limitations means fair use, fair dealing, and/or + any other exception or limitation to Copyright and Similar Rights + that applies to Your use of the Licensed Material. + + g. License Elements means the license attributes listed in the name + of a Creative Commons Public License. The License Elements of this + Public License are Attribution and ShareAlike. + + h. Licensed Material means the artistic or literary work, database, + or other material to which the Licensor applied this Public + License. + + i. 
Licensed Rights means the rights granted to You subject to the + terms and conditions of this Public License, which are limited to + all Copyright and Similar Rights that apply to Your use of the + Licensed Material and that the Licensor has authority to license. + + j. Licensor means the individual(s) or entity(ies) granting rights + under this Public License. + + k. Share means to provide material to the public by any means or + process that requires permission under the Licensed Rights, such + as reproduction, public display, public performance, distribution, + dissemination, communication, or importation, and to make material + available to the public including in ways that members of the + public may access the material from a place and at a time + individually chosen by them. + + l. Sui Generis Database Rights means rights other than copyright + resulting from Directive 96/9/EC of the European Parliament and of + the Council of 11 March 1996 on the legal protection of databases, + as amended and/or succeeded, as well as other essentially + equivalent rights anywhere in the world. + + m. You means the individual or entity exercising the Licensed Rights + under this Public License. Your has a corresponding meaning. + + +Section 2 -- Scope. + + a. License grant. + + 1. Subject to the terms and conditions of this Public License, + the Licensor hereby grants You a worldwide, royalty-free, + non-sublicensable, non-exclusive, irrevocable license to + exercise the Licensed Rights in the Licensed Material to: + + a. reproduce and Share the Licensed Material, in whole or + in part; and + + b. produce, reproduce, and Share Adapted Material. + + 2. Exceptions and Limitations. For the avoidance of doubt, where + Exceptions and Limitations apply to Your use, this Public + License does not apply, and You do not need to comply with + its terms and conditions. + + 3. Term. The term of this Public License is specified in Section + 6(a). + + 4. Media and formats; technical modifications allowed. The + Licensor authorizes You to exercise the Licensed Rights in + all media and formats whether now known or hereafter created, + and to make technical modifications necessary to do so. The + Licensor waives and/or agrees not to assert any right or + authority to forbid You from making technical modifications + necessary to exercise the Licensed Rights, including + technical modifications necessary to circumvent Effective + Technological Measures. For purposes of this Public License, + simply making modifications authorized by this Section 2(a) + (4) never produces Adapted Material. + + 5. Downstream recipients. + + a. Offer from the Licensor -- Licensed Material. Every + recipient of the Licensed Material automatically + receives an offer from the Licensor to exercise the + Licensed Rights under the terms and conditions of this + Public License. + + b. Additional offer from the Licensor -- Adapted Material. + Every recipient of Adapted Material from You + automatically receives an offer from the Licensor to + exercise the Licensed Rights in the Adapted Material + under the conditions of the Adapter's License You apply. + + c. No downstream restrictions. You may not offer or impose + any additional or different terms or conditions on, or + apply any Effective Technological Measures to, the + Licensed Material if doing so restricts exercise of the + Licensed Rights by any recipient of the Licensed + Material. + + 6. No endorsement. 
Nothing in this Public License constitutes or + may be construed as permission to assert or imply that You + are, or that Your use of the Licensed Material is, connected + with, or sponsored, endorsed, or granted official status by, + the Licensor or others designated to receive attribution as + provided in Section 3(a)(1)(A)(i). + + b. Other rights. + + 1. Moral rights, such as the right of integrity, are not + licensed under this Public License, nor are publicity, + privacy, and/or other similar personality rights; however, to + the extent possible, the Licensor waives and/or agrees not to + assert any such rights held by the Licensor to the limited + extent necessary to allow You to exercise the Licensed + Rights, but not otherwise. + + 2. Patent and trademark rights are not licensed under this + Public License. + + 3. To the extent possible, the Licensor waives any right to + collect royalties from You for the exercise of the Licensed + Rights, whether directly or through a collecting society + under any voluntary or waivable statutory or compulsory + licensing scheme. In all other cases the Licensor expressly + reserves any right to collect such royalties. + + +Section 3 -- License Conditions. + +Your exercise of the Licensed Rights is expressly made subject to the +following conditions. + + a. Attribution. + + 1. If You Share the Licensed Material (including in modified + form), You must: + + a. retain the following if it is supplied by the Licensor + with the Licensed Material: + + i. identification of the creator(s) of the Licensed + Material and any others designated to receive + attribution, in any reasonable manner requested by + the Licensor (including by pseudonym if + designated); + + ii. a copyright notice; + + iii. a notice that refers to this Public License; + + iv. a notice that refers to the disclaimer of + warranties; + + v. a URI or hyperlink to the Licensed Material to the + extent reasonably practicable; + + b. indicate if You modified the Licensed Material and + retain an indication of any previous modifications; and + + c. indicate the Licensed Material is licensed under this + Public License, and include the text of, or the URI or + hyperlink to, this Public License. + + 2. You may satisfy the conditions in Section 3(a)(1) in any + reasonable manner based on the medium, means, and context in + which You Share the Licensed Material. For example, it may be + reasonable to satisfy the conditions by providing a URI or + hyperlink to a resource that includes the required + information. + + 3. If requested by the Licensor, You must remove any of the + information required by Section 3(a)(1)(A) to the extent + reasonably practicable. + + b. ShareAlike. + + In addition to the conditions in Section 3(a), if You Share + Adapted Material You produce, the following conditions also apply. + + 1. The Adapter's License You apply must be a Creative Commons + license with the same License Elements, this version or + later, or a BY-SA Compatible License. + + 2. You must include the text of, or the URI or hyperlink to, the + Adapter's License You apply. You may satisfy this condition + in any reasonable manner based on the medium, means, and + context in which You Share Adapted Material. + + 3. You may not offer or impose any additional or different terms + or conditions on, or apply any Effective Technological + Measures to, Adapted Material that restrict exercise of the + rights granted under the Adapter's License You apply. + + +Section 4 -- Sui Generis Database Rights. 
+ +Where the Licensed Rights include Sui Generis Database Rights that +apply to Your use of the Licensed Material: + + a. for the avoidance of doubt, Section 2(a)(1) grants You the right + to extract, reuse, reproduce, and Share all or a substantial + portion of the contents of the database; + + b. if You include all or a substantial portion of the database + contents in a database in which You have Sui Generis Database + Rights, then the database in which You have Sui Generis Database + Rights (but not its individual contents) is Adapted Material, + + including for purposes of Section 3(b); and + c. You must comply with the conditions in Section 3(a) if You Share + all or a substantial portion of the contents of the database. + +For the avoidance of doubt, this Section 4 supplements and does not +replace Your obligations under this Public License where the Licensed +Rights include other Copyright and Similar Rights. + + +Section 5 -- Disclaimer of Warranties and Limitation of Liability. + + a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE + EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS + AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF + ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, + IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, + WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR + PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, + ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT + KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT + ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU. + + b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE + TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, + NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, + INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, + COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR + USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN + ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR + DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR + IN PART, THIS LIMITATION MAY NOT APPLY TO YOU. + + c. The disclaimer of warranties and limitation of liability provided + above shall be interpreted in a manner that, to the extent + possible, most closely approximates an absolute disclaimer and + waiver of all liability. + + +Section 6 -- Term and Termination. + + a. This Public License applies for the term of the Copyright and + Similar Rights licensed here. However, if You fail to comply with + this Public License, then Your rights under this Public License + terminate automatically. + + b. Where Your right to use the Licensed Material has terminated under + Section 6(a), it reinstates: + + 1. automatically as of the date the violation is cured, provided + it is cured within 30 days of Your discovery of the + violation; or + + 2. upon express reinstatement by the Licensor. + + For the avoidance of doubt, this Section 6(b) does not affect any + right the Licensor may have to seek remedies for Your violations + of this Public License. + + c. For the avoidance of doubt, the Licensor may also offer the + Licensed Material under separate terms or conditions or stop + distributing the Licensed Material at any time; however, doing so + will not terminate this Public License. + + d. Sections 1, 5, 6, 7, and 8 survive termination of this Public + License. 
+ + +Section 7 -- Other Terms and Conditions. + + a. The Licensor shall not be bound by any additional or different + terms or conditions communicated by You unless expressly agreed. + + b. Any arrangements, understandings, or agreements regarding the + Licensed Material not stated herein are separate from and + independent of the terms and conditions of this Public License. + + +Section 8 -- Interpretation. + + a. For the avoidance of doubt, this Public License does not, and + shall not be interpreted to, reduce, limit, restrict, or impose + conditions on any use of the Licensed Material that could lawfully + be made without permission under this Public License. + + b. To the extent possible, if any provision of this Public License is + deemed unenforceable, it shall be automatically reformed to the + minimum extent necessary to make it enforceable. If the provision + cannot be reformed, it shall be severed from this Public License + without affecting the enforceability of the remaining terms and + conditions. + + c. No term or condition of this Public License will be waived and no + failure to comply consented to unless expressly agreed to by the + Licensor. + + d. Nothing in this Public License constitutes or may be interpreted + as a limitation upon, or waiver of, any privileges and immunities + that apply to the Licensor or You, including from the legal + processes of any jurisdiction or authority. From 4c6b8c6a3812818192a773601ba139a448d5bcea Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Sun, 27 Oct 2019 22:40:07 +0200 Subject: [PATCH 0254/1457] Corrects minor typos (#1198) --- topics/acl.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/topics/acl.md b/topics/acl.md index 754bac07df..985cb9aaad 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -59,8 +59,8 @@ important to understand what the user is really able to do. By default there is a single user defined, that is called *default*. We can use the `ACL LIST` command in order to check the currently active ACLs -and verify what the configuration of a freshly stared and unconfigured Redis -instance is: +and verify what the configuration of a freshly started, defaults-configured +Redis instance is: > ACL LIST 1) "user default on nopass ~* +@all" @@ -85,7 +85,7 @@ without any explicit `AUTH` call needed. The following is the list of the valid ACL rules. Certain rules are just single words that are used in order to activate or remove a flag, or to perform a given change to the user ACL. Other rules are char prefixes that -are concatenated with command or cagetories names, or key patterns, and +are concatenated with command or categories names, or key patterns, and so forth. Enable and disallow users: @@ -113,7 +113,7 @@ Configure valid passwords for the user: * `>`: Add this password to the list of valid passwords for the user. For example `>mypass` will add "mypass" to the list of valid passwords. This directive clears the *nopass* flag (see later). Every user can have any number of passwords. * `<`: Remove this password from the list of valid passwords. Emits an error in case the password you are trying to remove is actually not set. -* `#`: Add this SHA-256 hash value to the list of valid passwords for the user. This hash value will be compared to the hash of a password entered for an ACL user. This allows users to store hashes in the acl.conf file rather than storing cleartext passwords. 
Only SHA-256 hash values are accepted as the password hash must be 64 characters and only contain lowercase hexadecimal characters.
* `!`: Remove this hash value from the list of valid passwords. This is useful when you do not know the password specified by the hash value but would like to remove the password from the user.
* `nopass`: All the set passwords of the user are removed, and the user is flagged as requiring no password: it means that every password will work against this user. If this directive is used for the default user, every new connection will be immediately authenticated with the default user without any explicit AUTH command required. Note that the *resetpass* directive will clear this condition.
* `resetpass`: Flush the list of allowed passwords. Moreover removes the *nopass* status. After *resetpass* the user has no associated passwords and there is no way to authenticate without adding some password (or setting it as *nopass* later).
@@ -230,7 +230,7 @@ the following sequence:

    > ACL SETUSER myuser +get
    OK

-Will result into myuser to be able to call both `GET` and `SET`:
+Will result in myuser being able to call both `GET` and `SET`:

    > ACL LIST
    1) "user default on nopass ~* +@all"
@@ -245,7 +245,7 @@ really annoying, so instead we do things like that:

 By saying +@all and -@dangerous we included all the commands and later removed all the commands that are tagged as dangerous inside the Redis command table.
-Please note that command categories **never include modules commnads** with
+Please note that command categories **never include modules commands** with
 the exception of +@all. If you say +@all all the commands can be executed by
 the user, even future commands loaded via the modules system. However if you
 use the ACL rule +@readonly or any other, the modules commands are always
@@ -313,7 +313,7 @@ dangerous and non dangerous operations. Many deployments may not be happy to
 provide the ability to execute `CLIENT KILL` to non admin-level users, but may
 still want them to be able to run `CLIENT SETNAME`.

-_Note: probably the new RESP3 `HELLO` command will provide a SETNAME option soon, but this is still a good exmaple anyway._
+_Note: the new RESP3 `HELLO` command will probably provide a SETNAME option soon, but this is still a good example anyway._

 In such case I could alter the ACL of a user in the following way:
@@ -400,7 +400,7 @@ There are two ways in order to store users inside the Redis configuration.
 2. It is possible to specify an external ACL file.

 The two methods are *mutually incompatible*, Redis will ask you to use one
-or the other. To specify useres inside `redis.conf` is a very simple way
+or the other. To specify users inside `redis.conf` is a very simple way
 good for simple use cases. When there are multiple users to define, in a
 complex environment, we strongly suggest you use the ACL file.
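As a concrete sketch of the `#` password rule discussed above (the user name, password, and key pattern are invented for the example), the SHA-256 digest can be produced with any standard tool and then passed verbatim:

```
$ echo -n "hunter2" | sha256sum
f52fbd32b2b3b86ff88ef6c490628285f482af15ddcb29541f94bcf526a3f6c7  -

> ACL SETUSER myuser on #f52fbd32b2b3b86ff88ef6c490628285f482af15ddcb29541f94bcf526a3f6c7 ~cached:* +get +set
OK
```

Unlike the `>` rule, only the hash ever needs to appear in `redis.conf` or the ACL file.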
From f5d96cc19c3aacaa7b3f9046fd918876705ae042 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Sun, 27 Oct 2019 22:50:07 +0200 Subject: [PATCH 0255/1457] Fixes typos (#1199) --- topics/client-side-caching.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/topics/client-side-caching.md b/topics/client-side-caching.md index 562dcbad65..e96d6eb6e6 100644 --- a/topics/client-side-caching.md +++ b/topics/client-side-caching.md @@ -16,7 +16,7 @@ database about such information, like in the following picture: +-------------+ +----------+ When client side caching is used, the application will store the reply of -popular queries directily inside the application memory, so that it can +popular queries directly inside the application memory, so that it can reuse such replies later, without contacting the database again. +-------------+ +----------+ @@ -63,7 +63,7 @@ Once clients can retrieve an important amount of information without even asking a networked server at all, but just accessing their local memory, then it is possible to fetch more data per second (since many queries will not hit the database or the cache at all) with much smaller latency. -For this reason Redis 6 implements direct support for client side cachig, +For this reason Redis 6 implements direct support for client side caching, in order to make this pattern much simpler to implement, more accessible, reliable and efficient. @@ -73,7 +73,7 @@ The Redis client side caching support is called _Tracking_. It basically consist in a few very simple ideas: 1. Clients can enable tracking if they want. Connections start without tracking enabled. -2. When tracking is enabled, the server remembers what keys each client requseted during the connection lifetime (by sending read commands about such keys). +2. When tracking is enabled, the server remembers what keys each client requested during the connection lifetime (by sending read commands about such keys). 3. When a key is modified by some client, or is evicted because it has an associated expire time, or evicted because of a _maxmemory_ policy, all the clients with tracking enabled that may have the key cached, are notified with an _invalidation message_. 4. When clients receive invalidation messages, they are required to remove the corresponding keys, in order to avoid serving stale data. @@ -93,13 +93,13 @@ implementation uses the following ideas: * The keyspace is divided into a bit more than 16 millions caching slots. Given a key, the caching slot is obtained by taking the CRC64(key) modulo 16777216 (this basically means that just the lower 24 bits of the result are taken). * The server remembers which client may have cached keys about a given caching slots. To do so we just need a table with 16 millions of entries (one for each caching slot), associated with a dictionary of all the clients that may have keys about it. This table is called the **Invalidation Table**. -* Inside the invalidation table we don't really need to store pointers to clients structures and do any garbage collection when the client disconnects: instead what we do is just storing client IDs (each Redis client has an unique numberical ID). If a client disconnects, the information will be incrementally garbage collected as caching slots are invalidated. 
+* Inside the invalidation table we don't really need to store pointers to client structures and do any garbage collection when the client disconnects: instead what we do is just storing client IDs (each Redis client has a unique numerical ID). If a client disconnects, the information will be incrementally garbage collected as caching slots are invalidated.

 This means that clients also have to organize their local cache according to the caching slots, so that when they receive an invalidation message about a given caching slot, such group of keys are no longer considered valid.

 Another advantage of caching slots, other than being more space efficient, is that, once the user memory in the server side in order to track client side information become too big, it is very simple to release some memory, just picking a random caching slot and evicting it, even if there was no actual modification hitting any key of such caching slot.

-Note that by using 16 millions of caching slots, it is still possible to have plenty of keys per instance, with just a few keys hashing to the same caching slot: this means that invalidation messages will expire just a couple of keys in the avareage case, even if the instance has tens of millions of keys.
+Note that by using 16 million caching slots, it is still possible to have plenty of keys per instance, with just a few keys hashing to the same caching slot: this means that invalidation messages will expire just a couple of keys in the average case, even if the instance has tens of millions of keys.

 ## Two connections mode
@@ -144,7 +144,7 @@ SET foo bar
 +OK
 ```

-As a result, the invalidations connection will receive a message that invalidates cachign slot 1872974. That number is obtained by doing the CRC64("foo") taking the least 24 significant bits.
+As a result, the invalidations connection will receive a message that invalidates caching slot 1872974. That number is obtained by computing CRC64("foo") and taking the 24 least significant bits.

 ```
 (Connection 1 -- used for invalidations)
@@ -211,7 +211,7 @@ To make the protocol more efficient, the `CACHING` command can be sent with the
 GET foo
 "bar"

-The `CACHING` command affects the command executed immadietely after it,
+The `CACHING` command affects the command executed immediately after it,
 however in case the next command is `MULTI`, all the commands in the
 transaction will be tracked. Similarly in case of Lua scripts, all the
 commands executed by the script will be tracked.

From 806d231066e52377e38686717baf5095d54c2e04 Mon Sep 17 00:00:00 2001
From: Itamar Haber
Date: Mon, 28 Oct 2019 01:28:53 +0200
Subject: [PATCH 0256/1457] Misc typos and small edits to topics (#1201)

---
 topics/cluster-spec.md | 14 +-
 topics/cluster-tutorial.md | 4 +-
 topics/gopher.md | 8 +-
 topics/indexes.md | 2 +-
 topics/internals-sds.md | 4 +-
 topics/internals-vm.md | 8 +-
 topics/introduction.md | 5 +-
 topics/latency.md | 2 +-
 topics/modules-blocking-ops.md | 2 +-
 topics/modules-intro.md | 14 +-
 topics/modules-native-types.md | 4 +-
 topics/persistence.md | 18 +--
 topics/quickstart.md | 4 +-
 topics/replication.md | 177 +++++++++++++------------
 topics/security.md | 4 +-
 topics/sentinel-clients.md | 22 +--
 topics/sentinel.md | 236 ++++++++++++++++-----------
 17 files changed, 265 insertions(+), 263 deletions(-)

diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md
index 51b093e889..b8eb801420 100644
--- a/topics/cluster-spec.md
+++ b/topics/cluster-spec.md
@@ -30,7 +30,7 @@ as long as the keys all hash to the same slot.
Redis Cluster implements a concept called **hash tags** that can be used in order to force certain keys to be stored in the same hash slot. However during -manual reshardings, multi-key operations may become unavailable for some time +manual resharding, multi-key operations may become unavailable for some time while single key operations are always available. Redis Cluster does not support multiple databases like the stand alone version @@ -974,9 +974,9 @@ Liveness property: because of the second rule, eventually all nodes in the clust This mechanism in Redis Cluster is called **last failover wins**. -The same happens during reshardings. When a node importing a hash slot -completes the import operation, its configuration epoch is incremented to make -sure the change will be propagated throughout the cluster. +The same happens during resharding. When a node importing a hash slot completes +the import operation, its configuration epoch is incremented to make sure the +change will be propagated throughout the cluster. UPDATE messages, a closer look --- @@ -1110,14 +1110,14 @@ Both the events are system-administrator triggered: 1. `CLUSTER FAILOVER` command with `TAKEOVER` option is able to manually promote a slave node into a master *without the majority of masters being available*. This is useful, for example, in multi data center setups. 2. Migration of slots for cluster rebalancing also generates new configuration epochs inside the local node without agreement for performance reasons. -Specifically, during manual reshardings, when a hash slot is migrated from +Specifically, during manual resharding, when a hash slot is migrated from a node A to a node B, the resharding program will force B to upgrade its configuration to an epoch which is the greatest found in the cluster, plus 1 (unless the node is already the one with the greatest configuration epoch), without requiring agreement from other nodes. Usually a real world resharding involves moving several hundred hash slots (especially in small clusters). Requiring an agreement to generate new -configuration epochs during reshardings, for each hash slot moved, is +configuration epochs during resharding, for each hash slot moved, is inefficient. Moreover it requires an fsync in each of the cluster nodes every time in order to store the new configuration. Because of the way it is performed instead, we only need a new config epoch when the first hash slot is moved, @@ -1136,7 +1136,7 @@ When masters serving different hash slots have the same `configEpoch`, there are no issues. It is more important that slaves failing over a master have unique configuration epochs. -That said, manual interventions or reshardings may change the cluster +That said, manual interventions or resharding may change the cluster configuration in different ways. The Redis Cluster main liveness property requires that slot configurations always converge, so under every circumstance we really want all the master nodes to have a different `configEpoch`. diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index a6e21893ed..98ec829faf 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -604,7 +604,7 @@ All the slots will be covered as usual, but this time the master at Scripting a resharding operation --- -Reshardings can be performed automatically without the need to manually +Resharding can be performed automatically without the need to manually enter the parameters in an interactive way. 
This is possible using a command line like the following: @@ -947,7 +947,7 @@ resistant to failures as the number of replicas attached to a given master. For example a cluster where every master has a single replica can't continue operations if the master and its replica fail at the same time, simply because there is no other instance to have a copy of the hash slots the master was -serving. However while netsplits are likely to isolate a number of nodes +serving. However while net-splits are likely to isolate a number of nodes at the same time, many other kind of failures, like hardware or software failures local to a single node, are a very notable class of failures that are unlikely to happen at the same time, so it is possible that in your cluster where diff --git a/topics/gopher.md b/topics/gopher.md index 8381a74fc5..ab5303e9f8 100644 --- a/topics/gopher.md +++ b/topics/gopher.md @@ -3,7 +3,7 @@ Redis contains an implementation of the Gopher protocol, as specified in the [RFC 1436](https://www.ietf.org/rfc/rfc1436.txt). -The Gopher protocol was very popular in the late '90s. It is an alternative +The Gopher protocol was very popular in the late '90s. It is an alternative to the web, and the implementation both server and client side is so simple that the Redis server has just 100 lines of code in order to implement this support. @@ -31,9 +31,7 @@ a string like "/foo", if there is a key named "/foo" it is served via the Gopher protocol. In order to create a real Gopher "hole" (the name of a Gopher site in Gopher -talking), you likely need a script like the following: - - https://github.com/antirez/gopher2redis +talking), you likely need a script such as the one in [https://github.com/antirez/gopher2redis](https://github.com/antirez/gopher2redis). ## SECURITY WARNING @@ -46,7 +44,7 @@ Once a password is set: So use the `requirepass` option to protect your instance. -To enable Gopher support use hte following configuration line. +To enable Gopher support use the following configuration line. gopher-enabled yes diff --git a/topics/indexes.md b/topics/indexes.md index 7cd9a84f70..f15a285573 100644 --- a/topics/indexes.md +++ b/topics/indexes.md @@ -719,7 +719,7 @@ property or not. Similarly lists can be used in order to index items into a fixed order. I can add all my items into a Redis list and rotate the list with -RPOPLPUSH using the same key name as source and destination. This is useful +`RPOPLPUSH` using the same key name as source and destination. This is useful when I want to process a given set of items again and again forever in the same order. Think of an RSS feed system that needs to refresh the local copy periodically. diff --git a/topics/internals-sds.md b/topics/internals-sds.md index 9edc4a6de9..77210f6b9c 100644 --- a/topics/internals-sds.md +++ b/topics/internals-sds.md @@ -1,7 +1,9 @@ Hacking Strings === -The implementation of Redis strings is contained in `sds.c` (`sds` stands for Simple Dynamic Strings). +The implementation of Redis strings is contained in `sds.c` (`sds` stands for +Simple Dynamic Strings). The implementation is available as a standalone library +at [https://github.com/antirez/sds](https://github.com/antirez/sds). 
The C structure `sdshdr` declared in `sds.h` represents a Redis string: diff --git a/topics/internals-vm.md b/topics/internals-vm.md index b327195f6a..05afed7e73 100644 --- a/topics/internals-vm.md +++ b/topics/internals-vm.md @@ -81,7 +81,7 @@ As you can see if the VM system is not enabled we allocate just `sizeof(*o)-size The Swap File --- -The next step in order to understand how the VM subsystem works is understanding how objects are stored inside the swap file. The good news is that's not some kind of special format, we just use the same format used to store the objects in .rdb files, that are the usual dump files produced by Redis using the SAVE command. +The next step in order to understand how the VM subsystem works is understanding how objects are stored inside the swap file. The good news is that's not some kind of special format, we just use the same format used to store the objects in .rdb files, that are the usual dump files produced by Redis using the `SAVE` command. The swap file is composed of a given number of pages, where every page size is a given number of bytes. This parameters can be changed in redis.conf, since different Redis instances may work better with different values: it depends on the actual data you store inside it. The following are the default values: @@ -159,7 +159,7 @@ So this is what happens: * The command implementation calls the lookup function * The lookup function search for the key in the top level hash table. If the value associated with the requested key is swapped (we can see that checking the _storage_ field of the key object), we load it back in memory in a blocking way before to return to the user. -This is pretty straightforward, but things will get more _interesting_ with the threads. From the point of view of the blocking VM the only real problem is the saving of the dataset using another process, that is, handling BGSAVE and BGREWRITEAOF commands. +This is pretty straightforward, but things will get more _interesting_ with the threads. From the point of view of the blocking VM the only real problem is the saving of the dataset using another process, that is, handling `BGSAVE` and `BGREWRITEAOF` commands. Background saving when VM is active --- @@ -175,7 +175,7 @@ The child process will just store the whole dataset into the dump.rdb file and f In order to avoid problems while both the processes are accessing the same swap file we do a simple thing, that is, not allowing values to be swapped out in the parent process while a background saving is in progress. This way both the processes will access the swap file in read only. This approach has the problem that while the child process is saving no new values can be transferred on the swap file even if Redis is using more memory than the max memory parameters dictates. This is usually not a problem as the background saving will terminate in a short amount of time and if still needed a percentage of values will be swapped on disk ASAP. -An alternative to this scenario is to enable the Append Only File that will have this problem only when a log rewrite is performed using the BGREWRITEAOF command. +An alternative to this scenario is to enable the Append Only File that will have this problem only when a log rewrite is performed using the `BGREWRITEAOF` command. 
The problem with the blocking VM --- @@ -246,7 +246,7 @@ So you can think of this as a blocked VM that almost always happen to have the r If the function checking what argument is a key fails in some way, there is no problem: the lookup function will see that a given key is associated to a swapped out value and will block loading it. So our non blocking VM reverts to a blocking one when it is not possible to anticipate what keys are touched. -For instance in the case of the SORT command used together with the GET or BY options, it is not trivial to know beforehand what keys will be requested, so at least in the first implementation, SORT BY/GET resorts to the blocking VM implementation. +For instance in the case of the `SORT` command used together with the `GET` or `BY` options, it is not trivial to know beforehand what keys will be requested, so at least in the first implementation, `SORT BY/GET` resorts to the blocking VM implementation. Blocking clients on swapped keys --- diff --git a/topics/introduction.md b/topics/introduction.md index bbe24e2e80..19d8b9f42d 100644 --- a/topics/introduction.md +++ b/topics/introduction.md @@ -33,6 +33,5 @@ Other features include: You can use Redis from [most programming languages](/clients) out there. Redis is written in **ANSI C** and works in most POSIX systems like Linux, -\*BSD, OS X without external dependencies. Linux and OS X are the two operating systems where Redis is developed and tested the most, and we **recommend using Linux for deploying**. Redis may work in Solaris-derived systems like SmartOS, but the support is *best effort*. There -is no official support for Windows builds, but Microsoft develops and -maintains a [Win-64 port of Redis](https://github.com/MSOpenTech/redis). +\*BSD, OS X without external dependencies. Linux and OS X are the two operating systems where Redis is developed and tested the most, and we **recommend using Linux for deploying**. Redis may work in Solaris-derived systems like SmartOS, but the support is *best effort*. +There is no official support for Windows builds. diff --git a/topics/latency.md b/topics/latency.md index 7b42556120..664ef124ba 100644 --- a/topics/latency.md +++ b/topics/latency.md @@ -150,7 +150,7 @@ placement (taskset), cgroups, real-time priorities (chrt), NUMA configuration (numactl), or by using a low-latency kernel. Please note vanilla Redis is not really suitable to be bound on a **single** CPU core. Redis can fork background tasks that can be extremely CPU consuming -like bgsave or AOF rewrite. These tasks must **never** run on the same core +like `BGSAVE` or `BGREWRITEAOF`. These tasks must **never** run on the same core as the main event loop. In most situations, these kind of system level optimizations are not needed. diff --git a/topics/modules-blocking-ops.md b/topics/modules-blocking-ops.md index 349fc6fab9..10f6ba8640 100644 --- a/topics/modules-blocking-ops.md +++ b/topics/modules-blocking-ops.md @@ -172,7 +172,7 @@ long value must be freed. Our callback will look like the following: } NOTE: It is important to stress that the private data is best freed in the -`free_privdata` callback becaues the reply function may not be called +`free_privdata` callback because the reply function may not be called if the client disconnects or timeout. 
Also note that the private data is also accessible from the timeout diff --git a/topics/modules-intro.md b/topics/modules-intro.md index 900b226020..9fb08c4dd1 100644 --- a/topics/modules-intro.md +++ b/topics/modules-intro.md @@ -4,7 +4,7 @@ Redis Modules: an introduction to the API The modules documentation is composed of the following files: * `INTRO.md` (this file). An overview about Redis Modules system and API. It's a good idea to start your reading here. -* `API.md` is generated from module.c top comments of RedisMoule functions. It is a good reference in order to understand how each function works. +* `API.md` is generated from module.c top comments of RedisModule functions. It is a good reference in order to understand how each function works. * `TYPES.md` covers the implementation of native data types into modules. * `BLOCK.md` shows how to write blocking commands that will not reply immediately, but will block the client, without blocking the Redis server, and will provide a reply whenever will be possible. @@ -387,7 +387,7 @@ memory ASAP). Like normal Redis commands, new commands implemented via modules must be able to return values to the caller. The API exports a set of functions for this goal, in order to return the usual types of the Redis protocol, and -arrays of such types as elemented. Also errors can be returned with any +arrays of such types as elements. Also errors can be returned with any error string and code (the error code is the initial uppercase letters in the error message, like the "BUSY" string in the "BUSY the sever is busy" error message). @@ -423,7 +423,7 @@ two different functions: int RedisModule_ReplyWithString(RedisModuleCtx *ctx, RedisModuleString *str); -The first function gets a C pointer and length. The second a RedisMoudleString +The first function gets a C pointer and length. The second a RedisModuleString object. Use one or the other depending on the source type you have at hand. In order to reply with an array, you just need to use a function to emit the @@ -451,7 +451,7 @@ with a special argument to `RedisModule_ReplyWithArray()`: RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_ARRAY_LEN); The above call starts an array reply so we can use other `ReplyWith` calls -in order to produce the array items. Finally in order to set the length, +in order to produce the array items. Finally in order to set the length, use the following call: RedisModule_ReplySetArrayLength(ctx, number_of_items); @@ -537,7 +537,7 @@ both modes. Currently a key opened for writing can also be accessed for reading but this is to be considered an implementation detail. The right mode should be used in sane modules. -You can open non exisitng keys for writing, since the keys will be created +You can open non existing keys for writing, since the keys will be created when an attempt to write to the key is performed. However when opening keys just for reading, `RedisModule_OpenKey` will return NULL if the key does not exist. @@ -666,7 +666,7 @@ is used. Example: RedisModule_StringTruncate(mykey,1024); The function truncates, or enlarges the string as needed, padding it with -zero bytes if the previos length is smaller than the new length we request. +zero bytes if the previous length is smaller than the new length we request. If the string does not exist since `key` is associated to an open empty key, a string value is created and associated to the key. 
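A compact sketch of the open-for-writing and truncate flow described above, assuming a module command where `argv[1]` holds the key name (error checking omitted for brevity):

    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1],
        REDISMODULE_READ | REDISMODULE_WRITE);
    /* Resize the string value to exactly 1024 bytes: shorter values
     * are zero padded, and a missing value is created on the spot. */
    RedisModule_StringTruncate(key, 1024);
    RedisModule_CloseKey(key);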
@@ -822,7 +822,7 @@ They work exactly like their `libc` equivalent calls, however they use the same allocator Redis uses, and the memory allocated using these functions is reported by the `INFO` command in the memory section, is accounted when enforcing the `maxmemory` policy, and in general is -a first citizen of the Redis executable. On the contrar, the method +a first citizen of the Redis executable. On the contrary, the memory allocated inside modules with libc `malloc()` is transparent to Redis. Another reason to use the modules functions in order to allocate memory diff --git a/topics/modules-native-types.md b/topics/modules-native-types.md index 3b5da1b3dd..342e1fb3cc 100644 --- a/topics/modules-native-types.md +++ b/topics/modules-native-types.md @@ -272,7 +272,7 @@ that can automatically store inside the RDB file the following types: It is up to the module to find a viable representation using the above base types. However note that while the integer and double values are stored -and loaded in an architecture and *endianess* agnostic way, if you use +and loaded in an architecture and *endianness* agnostic way, if you use the raw string saving API to, for example, save a structure on disk, you have to care those details yourself. @@ -354,7 +354,7 @@ in order to allocate, reallocate and release heap memory used to implement the n This is not just useful in order for Redis to be able to account for the memory used by the module, but there are also more advantages: -* Redis uses the `jemalloc` allcator, that often prevents fragmentation problems that could be caused by using the libc allocator. +* Redis uses the `jemalloc` allocator, that often prevents fragmentation problems that could be caused by using the libc allocator. * When loading strings from the RDB file, the native types API is able to return strings allocated directly with `RedisModule_Alloc()`, so that the module can directly link this memory into the data structure representation, avoiding an useless copy of the data. Even if you are using external libraries implementing your data structures, the diff --git a/topics/persistence.md b/topics/persistence.md index df01f7c90b..967807110a 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -33,14 +33,14 @@ AOF advantages * Using AOF Redis is much more durable: you can have different fsync policies: no fsync at all, fsync every second, fsync at every query. With the default policy of fsync every second write performances are still great (fsync is performed using a background thread and the main thread will try hard to perform writes when no fsync is in progress.) but you can only lose one second worth of writes. * The AOF log is an append only log, so there are no seeks, nor corruption problems if there is a power outage. Even if the log ends with an half-written command for some reason (disk full or other reasons) the redis-check-aof tool is able to fix it easily. * Redis is able to automatically rewrite the AOF in background when it gets too big. The rewrite is completely safe as while Redis continues appending to the old file, a completely new one is produced with the minimal set of operations needed to create the current data set, and once this second file is ready Redis switches the two and starts appending to the new one. -* AOF contains a log of all the operations one after the other in an easy to understand and parse format. You can even easily export an AOF file.
For instance even if you flushed everything for an error using a FLUSHALL command, if no rewrite of the log was performed in the meantime you can still save your data set just stopping the server, removing the latest command, and restarting Redis again. +* AOF contains a log of all the operations one after the other in an easy to understand and parse format. You can even easily export an AOF file. For instance even if you flushed everything for an error using a `FLUSHALL` command, if no rewrite of the log was performed in the meantime you can still save your data set just stopping the server, removing the latest command, and restarting Redis again. AOF disadvantages --- * AOF files are usually bigger than the equivalent RDB files for the same dataset. * AOF can be slower than RDB depending on the exact fsync policy. In general with fsync set to *every second* performance is still very high, and with fsync disabled it should be exactly as fast as RDB even under high load. Still RDB is able to provide more guarantees about the maximum latency even in the case of an huge write load. -* In the past we experienced rare bugs in specific commands (for instance there was one involving blocking commands like BRPOPLPUSH) causing the AOF produced to not reproduce exactly the same dataset on reloading. These bugs are rare and we have tests in the test suite creating random complex datasets automatically and reloading them to check everything is fine. However, these kind of bugs are almost impossible with RDB persistence. To make this point more clear: the Redis AOF works by incrementally updating an existing state, like MySQL or MongoDB does, while the RDB snapshotting creates everything from scratch again and again, that is conceptually more robust. However - +* In the past we experienced rare bugs in specific commands (for instance there was one involving blocking commands like `BRPOPLPUSH`) causing the AOF produced to not reproduce exactly the same dataset on reloading. These bugs are rare and we have tests in the test suite creating random complex datasets automatically and reloading them to check everything is fine. However, these kind of bugs are almost impossible with RDB persistence. To make this point more clear: the Redis AOF works by incrementally updating an existing state, like MySQL or MongoDB does, while the RDB snapshotting creates everything from scratch again and again, that is conceptually more robust. However - 1) It should be noted that every time the AOF is rewritten by Redis it is recreated from scratch starting from the actual data contained in the data set, making resistance to bugs stronger compared to an always appending AOF file (or one rewritten reading the old AOF instead of reading the data in memory). 2) We have never had a single report from users about an AOF corruption that was detected in the real world. @@ -134,9 +134,9 @@ You can configure how many times Redis will [`fsync`](http://linux.die.net/man/2/fsync) data on disk. There are three options: -* appendfsync always: `fsync` every time a new command is appended to the AOF. Very very slow, very safe. -* appendfsync everysec: `fsync` every second. Fast enough (in 2.4 likely to be as fast as snapshotting), and you can lose 1 second of data if there is a disaster. -* appendfsync no: Never `fsync`, just put your data in the hands of the Operating System. The faster and less safe method. Normally Linux will flush data every 30 seconds with this configuration, but it's up to the kernel exact tuning. 
+* `appendfsync always`: `fsync` every time a new command is appended to the AOF. Very very slow, very safe. +* `appendfsync everysec`: `fsync` every second. Fast enough (in 2.4 likely to be as fast as snapshotting), and you can lose 1 second of data if there is a disaster. +* `appendfsync no`: Never `fsync`, just put your data in the hands of the Operating System. The faster and less safe method. Normally Linux will flush data every 30 seconds with this configuration, but it's up to the kernel exact tuning. The suggested (and default) policy is to `fsync` every second. It is both very fast and pretty safe. The `always` policy is very slow in @@ -165,7 +165,7 @@ server will emit a log like the following: You can change the default configuration to force Redis to stop in such cases if you want, but the default configuration is to continue regardless the fact the last command in the file is not well-formed, in order to guarantee -availabiltiy after a restart. +availability after a restart. Older versions of Redis may not recover, and may require the following steps: @@ -246,7 +246,7 @@ server will start again with the old configuration. * Make a backup of your latest dump.rdb file. * Transfer this backup into a safe place. * Stop all the writes against the database! -* Issue a redis-cli bgrewriteaof. This will create the append only file. +* Issue a `redis-cli BGREWRITEAOF`. This will create the append only file. * Stop the server when Redis finished generating the AOF dump. * Edit redis.conf end enable append only file persistence. * Restart the server. @@ -257,12 +257,12 @@ Interactions between AOF and RDB persistence --- Redis >= 2.4 makes sure to avoid triggering an AOF rewrite when an RDB -snapshotting operation is already in progress, or allowing a BGSAVE while the +snapshotting operation is already in progress, or allowing a `BGSAVE` while the AOF rewrite is in progress. This prevents two Redis background processes from doing heavy disk I/O at the same time. When snapshotting is in progress and the user explicitly requests a log -rewrite operation using BGREWRITEAOF the server will reply with an OK +rewrite operation using `BGREWRITEAOF` the server will reply with an OK status code telling the user the operation is scheduled, and the rewrite will start once the snapshotting is completed. diff --git a/topics/quickstart.md b/topics/quickstart.md index 54232b4830..733023884a 100644 --- a/topics/quickstart.md +++ b/topics/quickstart.md @@ -75,7 +75,7 @@ Running **redis-cli** followed by a command name and its arguments will send thi Another interesting way to run redis-cli is without arguments: the program will start in interactive mode, you can type different commands and see their replies. - $ redis-cli + $ redis-cli redis 127.0.0.1:6379> ping PONG redis 127.0.0.1:6379> set mykey somevalue @@ -99,7 +99,7 @@ enlisted in order of increased security. 1. Make sure the port Redis uses to listen for connections (by default 6379 and additionally 16379 if you run Redis in cluster mode, plus 26379 for Sentinel) is firewalled, so that it is not possible to contact Redis from the outside world. 2. Use a configuration file where the `bind` directive is set in order to guarantee that Redis listens on only the network interfaces you are using. For example only the loopback interface (127.0.0.1) if you are accessing Redis just locally from the same computer, and so forth. 3. 
Use the `requirepass` option in order to add an additional layer of security so that clients will require to authenticate using the `AUTH` command. -4. Use [spiped](http://www.tarsnap.com/spiped.html) or another SSL tunnelling software in order to encrypt traffic between Redis servers and Redis clients if your environment requires encryption. +4. Use [spiped](http://www.tarsnap.com/spiped.html) or another SSL tunneling software in order to encrypt traffic between Redis servers and Redis clients if your environment requires encryption. Note that a Redis exposed to the internet without any security [is very simple to exploit](http://antirez.com/news/96), so make sure you understand the above and apply **at least** a firewalling layer. After the firewalling is in place, try to connect with `redis-cli` from an external host in order to prove yourself the instance is actually not reachable. diff --git a/topics/replication.md b/topics/replication.md index e3705eb0d2..d4901ffd3b 100644 --- a/topics/replication.md +++ b/topics/replication.md @@ -1,20 +1,20 @@ Replication === -At the base of Redis replication (excluding the high availability features provided as an additional layer by Redis Cluster or Redis Sentinel) there is a very simple to use and configure *leader follower* (master-slave) replication: it allows slave Redis instances to be exact copies of master instances. The slave will automatically reconnect to the master every time the link breaks, and will attempt to be an exact copy of it *regardless* of what happens to the master. +At the base of Redis replication (excluding the high availability features provided as an additional layer by Redis Cluster or Redis Sentinel) there is a very simple to use and configure *leader follower* (master-slave) replication: it allows replica Redis instances to be exact copies of master instances. The replica will automatically reconnect to the master every time the link breaks, and will attempt to be an exact copy of it *regardless* of what happens to the master. This system works using three main mechanisms: -1. When a master and a slave instances are well-connected, the master keeps the slave updated by sending a stream of commands to the slave, in order to replicate the effects on the dataset happening in the master side due to: client writes, keys expired or evicted, any other action changing the master dataset. -2. When the link between the master and the slave breaks, for network issues or because a timeout is sensed in the master or the slave, the slave reconnects and attempts to proceed with a partial resynchronization: it means that it will try to just obtain the part of the stream of commands it missed during the disconnection. -3. When a partial resynchronization is not possible, the slave will ask for a full resynchronization. This will involve a more complex process in which the master needs to create a snapshot of all its data, send it to the slave, and then continue sending the stream of commands as the dataset changes. +1. When master and replica instances are well-connected, the master keeps the replica updated by sending a stream of commands to the replica, in order to replicate the effects on the dataset happening in the master side due to: client writes, keys expired or evicted, any other action changing the master dataset. +2.
When the link between the master and the replica breaks, for network issues or because a timeout is sensed in the master or the replica, the replica reconnects and attempts to proceed with a partial resynchronization: it means that it will try to just obtain the part of the stream of commands it missed during the disconnection. +3. When a partial resynchronization is not possible, the replica will ask for a full resynchronization. This will involve a more complex process in which the master needs to create a snapshot of all its data, send it to the replica, and then continue sending the stream of commands as the dataset changes. Redis uses by default asynchronous replication, which being low latency and high performance, is the natural replication mode for the vast majority of Redis -use cases. However Redis slaves asynchronously acknowledge the amount of data +use cases. However Redis replicas asynchronously acknowledge the amount of data they received periodically with the master. So the master does not wait every time -for a command to be processed by the slaves, however it knows, if needed, what -slave already processed what command. This allows to have optional synchronous replication. +for a command to be processed by the replicas, however it knows, if needed, what +replica already processed what command. This allows to have optional synchronous replication. Synchronous replication of certain data can be requested by the clients using the `WAIT` command. However `WAIT` is only able to ensure that there are the @@ -30,25 +30,25 @@ about high availability and failover. The rest of this document mainly describe The following are some very important facts about Redis replication: -* Redis uses asynchronous replication, with asynchronous slave-to-master acknowledges of the amount of data processed. -* A master can have multiple slaves. -* Slaves are able to accept connections from other slaves. Aside from connecting a number of slaves to the same master, slaves can also be connected to other slaves in a cascading-like structure. Since Redis 4.0, all the sub-slaves will receive exactly the same replication stream from the master. -* Redis replication is non-blocking on the master side. This means that the master will continue to handle queries when one or more slaves perform the initial synchronization or a partial resynchronization. -* Replication is also largely non-blocking on the slave side. While the slave is performing the initial synchronization, it can handle queries using the old version of the dataset, assuming you configured Redis to do so in redis.conf. Otherwise, you can configure Redis slaves to return an error to clients if the replication stream is down. However, after the initial sync, the old dataset must be deleted and the new one must be loaded. The slave will block incoming connections during this brief window (that can be as long as many seconds for very large datasets). Since Redis 4.0 it is possible to configure Redis so that the deletion of the old data set happens in a different thread, however loading the new initial dataset will still happen in the main thread and block the slave. -* Replication can be used both for scalability, in order to have multiple slaves for read-only queries (for example, slow O(N) operations can be offloaded to slaves), or simply for improving data safety and high availability. 
-* It is possible to use replication to avoid the cost of having the master writing the full dataset to disk: a typical technique involves configuring your master `redis.conf` to avoid persisting to disk at all, then connect a slave configured to save from time to time, or with AOF enabled. However this setup must be handled with care, since a restarting master will start with an empty dataset: if the slave tries to synchronized with it, the slave will be emptied as well. +* Redis uses asynchronous replication, with asynchronous replica-to-master acknowledges of the amount of data processed. +* A master can have multiple replicas. +* Replicas are able to accept connections from other replicas. Aside from connecting a number of replicas to the same master, replicas can also be connected to other replicas in a cascading-like structure. Since Redis 4.0, all the sub-replicas will receive exactly the same replication stream from the master. +* Redis replication is non-blocking on the master side. This means that the master will continue to handle queries when one or more replicas perform the initial synchronization or a partial resynchronization. +* Replication is also largely non-blocking on the replica side. While the replica is performing the initial synchronization, it can handle queries using the old version of the dataset, assuming you configured Redis to do so in redis.conf. Otherwise, you can configure Redis replicas to return an error to clients if the replication stream is down. However, after the initial sync, the old dataset must be deleted and the new one must be loaded. The replica will block incoming connections during this brief window (that can be as long as many seconds for very large datasets). Since Redis 4.0 it is possible to configure Redis so that the deletion of the old data set happens in a different thread, however loading the new initial dataset will still happen in the main thread and block the replica. +* Replication can be used both for scalability, in order to have multiple replicas for read-only queries (for example, slow O(N) operations can be offloaded to replicas), or simply for improving data safety and high availability. +* It is possible to use replication to avoid the cost of having the master writing the full dataset to disk: a typical technique involves configuring your master `redis.conf` to avoid persisting to disk at all, then connect a replica configured to save from time to time, or with AOF enabled. However this setup must be handled with care, since a restarting master will start with an empty dataset: if the replica tries to synchronize with it, the replica will be emptied as well. Safety of replication when master has persistence turned off --- In setups where Redis replication is used, it is strongly advised to have -persistence turned on in the master and in the slaves. When this is not possible, +persistence turned on in the master and in the replicas. When this is not possible, for example because of latency concerns due to very slow disks, instances should be configured to **avoid restarting automatically** after a reboot. To better understand why masters with persistence turned off configured to auto restart are dangerous, check the following failure mode where data -is wiped from the master and all its slaves: +is wiped from the master and all its replicas: 1. We have a setup with node A acting as master, with persistence turned down, and nodes B and C replicating from node A. 2.
Node A crashes, however it has some auto-restart system, that restarts the process. However since persistence is turned off, the node restarts with an empty data set. @@ -65,24 +65,24 @@ How Redis replication works Every Redis master has a replication ID: it is a large pseudo random string that marks a given story of the dataset. Each master also takes an offset that increments for every byte of replication stream that it is produced to be -sent to slaves, in order to update the state of the slaves with the new changes -modifying the dataset. The replication offset is incremented even if no slave +sent to replicas, in order to update the state of the replicas with the new changes +modifying the dataset. The replication offset is incremented even if no replica is actually connected, so basically every given pair of: Replication ID, offset Identifies an exact version of the dataset of a master. -When slaves connects to masters, they use the `PSYNC` command in order to send +When replicas connect to masters, they use the `PSYNC` command in order to send their old master replication ID and the offsets they processed so far. This way the master can send just the incremental part needed. However if there is not -enough *backlog* in the master buffers, or if the slave is referring to an +enough *backlog* in the master buffers, or if the replica is referring to an history (replication ID) which is no longer known, than a full resynchronization -happens: in this case the slave will get a full copy of the dataset, from scratch. +happens: in this case the replica will get a full copy of the dataset, from scratch. This is how a full synchronization works in more details: -The master starts a background saving process in order to produce an RDB file. At the same time it starts to buffer all new write commands received from the clients. When the background saving is complete, the master transfers the database file to the slave, which saves it on disk, and then loads it into memory. The master will then send all buffered commands to the slave. This is done as a stream of commands and is in the same format of the Redis protocol itself. +The master starts a background saving process in order to produce an RDB file. At the same time it starts to buffer all new write commands received from the clients. When the background saving is complete, the master transfers the database file to the replica, which saves it on disk, and then loads it into memory. The master will then send all buffered commands to the replica. This is done as a stream of commands and is in the same format of the Redis protocol itself. You can try it yourself via telnet. Connect to the Redis port while the server is doing some work and issue the `SYNC` command. You'll see a bulk @@ -91,7 +91,7 @@ in the telnet session. Actually `SYNC` is an old protocol no longer used by newer Redis instances, but is still there for backward compatibility: it does not allow partial resynchronizations, so now `PSYNC` is used instead. -As already said, slaves are able to automatically reconnect when the master-slave link goes down for some reason. If the master receives multiple concurrent slave synchronization requests, it performs a single background save in order to serve all of them. +As already said, replicas are able to automatically reconnect when the master-replica link goes down for some reason. If the master receives multiple concurrent replica synchronization requests, it performs a single background save in order to serve all of them. 
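If you want to try the telnet experiment described above, the session looks roughly like the following; the payload length is illustrative, and the RDB payload itself is binary:

    $ telnet 127.0.0.1 6379
    SYNC
    $178
    REDIS0008 ... (binary RDB payload, followed by the live stream of
                   write commands in the usual Redis protocol format)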
Replication ID explained --- @@ -102,8 +102,8 @@ to understand what exactly is the replication ID, and why instances have actuall two replication IDs the main ID and the secondary ID. A replication ID basically marks a given *history* of the data set. Every time -an instance restarts from scratch as a master, or a slave is promoted to master, -a new replication ID is generated for this instance. The slaves connected to +an instance restarts from scratch as a master, or a replica is promoted to master, +a new replication ID is generated for this instance. The replicas connected to a master will inherit its replication ID after the handshake. So two instances with the same ID are related by the fact that they hold the same data, but potentially at a different time. It is the offset that works as a logical time @@ -115,21 +115,21 @@ with offset 1000 and one with offset 1023, it means that the first lacks certain commands applied to the data set. It also means that A, by applying just a few commands, may reach exactly the same state of B. -The reason why Redis instances have two replication IDs is because of slaves -that are promoted to masters. After a failover, the promoted slave requires +The reason why Redis instances have two replication IDs is because of replicas +that are promoted to masters. After a failover, the promoted replica requires to still remember what was its past replication ID, because such replication ID -was the one of the former master. In this way, when other slaves will synchronize +was the one of the former master. In this way, when other replicas will synchronize with the new master, they will try to perform a partial resynchronization using the -old master replication ID. This will work as expected, because when the slave +old master replication ID. This will work as expected, because when the replica is promoted to master it sets its secondary ID to its main ID, remembering what was the offset when this ID switch happened. Later it will select a new random -replication ID, because a new history begins. When handling the new slaves +replication ID, because a new history begins. When handling the new replicas connecting, the master will match their IDs and offsets both with the current ID and the secondary ID (up to a given offset, for safety). In short this means -that after a failover, slaves connecting to the new promoted master don't have +that after a failover, replicas connecting to the new promoted master don't have to perform a full sync. -In case you wonder why a slave promoted to master needs to change its +In case you wonder why a replica promoted to master needs to change its replication ID after a failover: it is possible that the old master is still working as a master because of some network partition: retaining the same replication ID would violate the fact that the same ID and same offset of any @@ -139,68 +139,68 @@ Diskless replication --- Normally a full resynchronization requires creating an RDB file on disk, -then reloading the same RDB from disk in order to feed the slaves with the data. +then reloading the same RDB from disk in order to feed the replicas with the data. With slow disks this can be a very stressing operation for the master. Redis version 2.8.18 is the first version to have support for diskless replication. In this setup the child process directly sends the -RDB over the wire to slaves, without using the disk as intermediate storage. +RDB over the wire to replicas, without using the disk as intermediate storage. 
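The two replication IDs and the offset discussed above can be inspected in the `replication` section of `INFO`. A trimmed sample from a master (all field values here are made up):

    127.0.0.1:6379> INFO replication
    # Replication
    role:master
    master_replid:5f0e35a6c1e9a8d7b2c4e6f8a0b1c2d3e4f5a6b7
    master_replid2:0000000000000000000000000000000000000000
    master_repl_offset:26406
    second_repl_offset:-1

Here `master_replid2` and `second_repl_offset` carry the secondary ID: on a freshly started master they are unset (all zeroes and -1), while on a replica promoted after a failover they hold the former master's ID and the offset at which the switch happened.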
Configuration --- -To configure basic Redis replication is trivial: just add the following line to the slave configuration file: +To configure basic Redis replication is trivial: just add the following line to the replica configuration file: - slaveof 192.168.1.1 6379 + replicaof 192.168.1.1 6379 Of course you need to replace 192.168.1.1 6379 with your master IP address (or -hostname) and port. Alternatively, you can call the `SLAVEOF` command and the -master host will start a sync with the slave. +hostname) and port. Alternatively, you can call the `REPLICAOF` command and the +master host will start a sync with the replica. There are also a few parameters for tuning the replication backlog taken in memory by the master to perform the partial resynchronization. See the example `redis.conf` shipped with the Redis distribution for more information. Diskless replication can be enabled using the `repl-diskless-sync` configuration -parameter. The delay to start the transfer in order to wait for more slaves to +parameter. The delay to start the transfer in order to wait for more replicas to arrive after the first one is controlled by the `repl-diskless-sync-delay` parameter. Please refer to the example `redis.conf` file in the Redis distribution for more details. -Read-only slave +Read-only replica --- -Since Redis 2.6, slaves support a read-only mode that is enabled by default. -This behavior is controlled by the `slave-read-only` option in the redis.conf file, and can be enabled and disabled at runtime using `CONFIG SET`. +Since Redis 2.6, replicas support a read-only mode that is enabled by default. +This behavior is controlled by the `replica-read-only` option in the redis.conf file, and can be enabled and disabled at runtime using `CONFIG SET`. -Read-only slaves will reject all write commands, so that it is not possible to write to a slave because of a mistake. This does not mean that the feature is intended to expose a slave instance to the internet or more generally to a network where untrusted clients exist, because administrative commands like `DEBUG` or `CONFIG` are still enabled. However, security of read-only instances can be improved by disabling commands in redis.conf using the `rename-command` directive. +Read-only replicas will reject all write commands, so that it is not possible to write to a replica because of a mistake. This does not mean that the feature is intended to expose a replica instance to the internet or more generally to a network where untrusted clients exist, because administrative commands like `DEBUG` or `CONFIG` are still enabled. However, security of read-only instances can be improved by disabling commands in redis.conf using the `rename-command` directive. You may wonder why it is possible to revert the read-only setting -and have slave instances that can be targeted by write operations. -While those writes will be discarded if the slave and the master -resynchronize or if the slave is restarted, there are a few legitimate -use case for storing ephemeral data in writable slaves. +and have replica instances that can be targeted by write operations. +While those writes will be discarded if the replica and the master +resynchronize or if the replica is restarted, there are a few legitimate +use cases for storing ephemeral data in writable replicas. -For example computing slow Set or Sorted set operations and storing them into local keys is an use case for writable slaves that was observed multiple times.
+For example computing slow Set or Sorted set operations and storing them into local keys is a use case for writable replicas that was observed multiple times. -However note that **writable slaves before version 4.0 were incapable of expiring keys with a time to live set**. This means that if you use `EXPIRE` or other commands that set a maximum TTL for a key, the key will leak, and while you may no longer see it while accessing it with read commands, you will see it in the count of keys and it will still use memory. So in general mixing writable slaves (previous version 4.0) and keys with TTL is going to create issues. +However note that **writable replicas before version 4.0 were incapable of expiring keys with a time to live set**. This means that if you use `EXPIRE` or other commands that set a maximum TTL for a key, the key will leak, and while you may no longer see it while accessing it with read commands, you will see it in the count of keys and it will still use memory. So in general mixing writable replicas (prior to version 4.0) and keys with TTL is going to create issues. Redis 4.0 RC3 and greater versions totally solve this problem and now writable -slaves are able to evict keys with TTL as masters do, with the exceptions +replicas are able to evict keys with TTL as masters do, with the exceptions of keys written in DB numbers greater than 63 (but by default Redis instances only have 16 databases). -Also note that since Redis 4.0 slave writes are only local, and are not propagated to sub-slaves attached to the instance. Sub slaves instead will always receive the replication stream identical to the one sent by the top-level master to the intermediate slaves. So for example in the following setup: +Also note that since Redis 4.0 replica writes are only local, and are not propagated to sub-replicas attached to the instance. Sub-replicas instead will always receive the replication stream identical to the one sent by the top-level master to the intermediate replicas. So for example in the following setup: A ---> B ---> C Even if `B` is writable, C will not see `B` writes and will instead have identical dataset as the master instance `A`. -Setting a slave to authenticate to a master +Setting a replica to authenticate to a master --- If your master has a password via `requirepass`, it's trivial to configure the -slave to use that password in all sync operations. +replica to use that password in all sync operations. To do it on a running instance, use `redis-cli` and type: @@ -214,20 +214,20 @@ Allow writes only with N attached replicas --- Starting with Redis 2.8, it is possible to configure a Redis master to -accept write queries only if at least N slaves are currently connected to the +accept write queries only if at least N replicas are currently connected to the master. However, because Redis uses asynchronous replication it is not possible to ensure -the slave actually received a given write, so there is always a window for data +the replica actually received a given write, so there is always a window for data loss. This is how the feature works: -* Redis slaves ping the master every second, acknowledging the amount of replication stream processed. +* Redis replicas ping the master every second, acknowledging the amount of replication stream processed.
+* Redis masters will remember the last time they received a ping from every replica. +* The user can configure a minimum number of replicas that have a lag not greater than a maximum number of seconds. -If there are at least N slaves, with a lag less than M seconds, then the write will be accepted. +If there are at least N replicas, with a lag less than M seconds, then the write will be accepted. You may think of it as a best effort data safety mechanism, where consistency is not ensured for a given write, but at least the time window for data loss is restricted to a given number of seconds. In general bound data loss is better than unbound one. @@ -235,8 +235,8 @@ If the conditions are not met, the master will instead reply with an error and t There are two configuration parameters for this feature: -* min-slaves-to-write `<number of slaves>` -* min-slaves-max-lag `<number of seconds>` +* min-replicas-to-write `<number of replicas>` +* min-replicas-max-lag `<number of seconds>` For more information, please check the example `redis.conf` file shipped with the Redis source distribution. @@ -245,43 +245,43 @@ How Redis replication deals with expires on keys --- Redis expires allow keys to have a limited time to live. Such a feature depends -on the ability of an instance to count the time, however Redis slaves correctly +on the ability of an instance to count the time, however Redis replicas correctly replicate keys with expires, even when such keys are altered using Lua scripts. To implement such a feature Redis cannot rely on the ability of the master and -slave to have synchronized clocks, since this is a problem that cannot be solved +replica to have synchronized clocks, since this is a problem that cannot be solved and would result in race conditions and diverging data sets, so Redis uses three main techniques in order to make the replication of expired keys able to work: -1. Slaves don't expire keys, instead they wait for masters to expire the keys. When a master expires a key (or evict it because of LRU), it synthesizes a `DEL` command which is transmitted to all the slaves. -2. However because of master-driven expire, sometimes slaves may still have in memory keys that are already logically expired, since the master was not able to provide the `DEL` command in time. In order to deal with that the slave uses its logical clock in order to report that a key does not exist **only for read operations** that don't violate the consistency of the data set (as new commands from the master will arrive). In this way slaves avoid reporting logically expired keys are still existing. In practical terms, an HTML fragments cache that uses slaves to scale will avoid returning items that are already older than the desired time to live. -3. During Lua scripts executions no key expiries are performed. As a Lua script runs, conceptually the time in the master is frozen, so that a given key will either exist or not for all the time the script runs. This prevents keys expiring in the middle of a script, and is needed in order to send the same script to the slave in a way that is guaranteed to have the same effects in the data set. +1. Replicas don't expire keys, instead they wait for masters to expire the keys. When a master expires a key (or evicts it because of LRU), it synthesizes a `DEL` command which is transmitted to all the replicas. +2. However because of master-driven expire, sometimes replicas may still have in memory keys that are already logically expired, since the master was not able to provide the `DEL` command in time.
In order to deal with that the replica uses its logical clock in order to report that a key does not exist **only for read operations** that don't violate the consistency of the data set (as new commands from the master will arrive). In this way replicas avoid reporting logically expired keys as still existing. In practical terms, an HTML fragments cache that uses replicas to scale will avoid returning items that are already older than the desired time to live. +3. During Lua script executions no key expiries are performed. As a Lua script runs, conceptually the time in the master is frozen, so that a given key will either exist or not for all the time the script runs. This prevents keys expiring in the middle of a script, and is needed in order to send the same script to the replica in a way that is guaranteed to have the same effects in the data set. -Once a slave is promoted to a master it will start to expire keys independently, and will not require any help from its old master. +Once a replica is promoted to a master it will start to expire keys independently, and will not require any help from its old master. Configuring replication in Docker and NAT --- -When Docker, or other types of containers using port forwarding, or Network Address Translation is used, Redis replication needs some extra care, especially when using Redis Sentinel or other systems where the master `INFO` or `ROLE` commands output are scanned in order to discover slaves addresses. +When Docker, or other types of containers using port forwarding, or Network Address Translation is used, Redis replication needs some extra care, especially when using Redis Sentinel or other systems where the master `INFO` or `ROLE` commands output are scanned in order to discover replicas' addresses. The problem is that the `ROLE` command, and the replication section of -the `INFO` output, when issued into a master instance, will show slaves +the `INFO` output, when issued into a master instance, will show replicas as having the IP address they use to connect to the master, which, in environments using NAT may be different compared to the logical address of the -slave instance (the one that clients should use to connect to slaves). +replica instance (the one that clients should use to connect to replicas). -Similarly the slaves will be listed with the listening port configured +Similarly the replicas will be listed with the listening port configured into `redis.conf`, that may be different than the forwarded port in case the port is remapped. In order to fix both issues, it is possible, since Redis 3.2.2, to force -a slave to announce an arbitrary pair of IP and port to the master. +a replica to announce an arbitrary pair of IP and port to the master. The two configurations directives to use are: - slave-announce-ip 5.5.5.5 - slave-announce-port 1234 + replica-announce-ip 5.5.5.5 + replica-announce-port 1234 And are documented in the example `redis.conf` of recent Redis distributions. @@ -289,32 +289,35 @@ The INFO and ROLE command --- There are two Redis commands that provide a lot of information on the current -replication parameters of master and slave instances. One is `INFO`. If the +replication parameters of master and replica instances. One is `INFO`. If the command is called with the `replication` argument as `INFO replication` only information relevant to the replication are displayed.
Another more computer-friendly command is `ROLE`, that provides the replication status of -masters and slaves together with their replication offsets, list of connected -slaves and so forth. +masters and replicas together with their replication offsets, list of connected +replicas and so forth. Partial resynchronizations after restarts and failovers --- Since Redis 4.0, when an instance is promoted to master after a failover, -it will be still able to perform a partial resynchronization with the slaves -of the old master. To do so, the slave remembers the old replication ID and offset of its former master, so can provide part of the backlog to the connecting -slaves even if they ask for the old replication ID. +it will still be able to perform a partial resynchronization with the replicas +of the old master. To do so, the replica remembers the old replication ID and offset of its former master, so it can provide part of the backlog to the connecting +replicas even if they ask for the old replication ID. -However the new replication ID of the promoted slave will be different, since it +However the new replication ID of the promoted replica will be different, since it constitutes a different history of the data set. For example, the master can return available and can continue accepting writes for some time, so using the -same replication ID in the promoted slave would violate the rule that a +same replication ID in the promoted replica would violate the rule that a of replication ID and offset pair identifies only a single data set. -Moreover slaves when powered off gently and restarted, are able to store in the -`RDB` file the information needed in order to resynchronize with their master. -This is useful in case of upgrades. When this is needed, it is better to use -the `SHUTDOWN` command in order to perform a `save & quit` operation on the slave. +Moreover, replicas - when powered off gently and restarted - are able to store +in the `RDB` file the information needed in order to resynchronize with their +master. This is useful in case of upgrades. When this is needed, it is better to +use the `SHUTDOWN` command in order to perform a `save & quit` operation on the +replica. -It is not possible to partially resynchronize a slave that restarted via the AOF file. However the instance may be turned to RDB persistence before shutting down it, than can be restarted, and finally AOF can be enabled again. +It is not possible to partially resynchronize a replica that restarted via the +AOF file. However the instance may be turned to RDB persistence before shutting +it down, then it can be restarted, and finally AOF can be enabled again. diff --git a/topics/security.md b/topics/security.md index a320eb2846..4fb03cae0d 100644 --- a/topics/security.md +++ b/topics/security.md @@ -51,7 +51,7 @@ like the following to the **redis.conf** file: bind 127.0.0.1 Failing to protect the Redis port from the outside can have a big security -impact because of the nature of Redis. For instance, a single **FLUSHALL** command can be used by an external attacker to delete the whole data set. +impact because of the nature of Redis. For instance, a single `FLUSHALL` command can be used by an external attacker to delete the whole data set. Protected mode --- @@ -154,7 +154,7 @@ The Redis protocol has no concept of string escaping, so injection is impossible under normal circumstances using a normal client library. The protocol uses prefixed-length strings and is completely binary safe.
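To see why prefixed-length strings leave no room for injection, consider the actual wire encoding of `SET mykey somevalue`: every string is announced with its exact byte count, so its payload bytes are never interpreted as protocol (each line below is terminated by CRLF on the wire):

    *3
    $3
    SET
    $5
    mykey
    $9
    somevalue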
-Lua scripts executed by the **EVAL** and **EVALSHA** commands follow the +Lua scripts executed by the `EVAL` and `EVALSHA` commands follow the same rules, and thus those commands are also safe. While it would be a very strange use case, the application should avoid composing the body of the Lua script using strings obtained from untrusted sources. diff --git a/topics/sentinel-clients.md b/topics/sentinel-clients.md index 1fd8435987..61f0999bca 100644 --- a/topics/sentinel-clients.md +++ b/topics/sentinel-clients.md @@ -7,7 +7,7 @@ Redis Sentinel is a monitoring solution for Redis instances that handles automatic failover of Redis masters and service discovery (who is the current master for a given group of instances?). Since Sentinel is both responsible for reconfiguring instances during failovers, and providing configurations to -clients connecting to Redis masters or slaves, clients are required to have +clients connecting to Redis masters or replicas, clients are required to have explicit support for Redis Sentinel. This document is targeted at Redis clients developers that want to support Sentinel in their clients implementation with the following goals: @@ -22,7 +22,7 @@ Redis service discovery via Sentinel Redis Sentinel identifies every master with a name like "stats" or "cache". Every name actually identifies a *group of instances*, composed of a master -and a variable number of slaves. +and a variable number of replicas. The address of the Redis master that is used for a specific purpose inside a network may change after events like an automatic failover, a manually triggered failover (for instance in order to upgrade a Redis instance), and other reasons. @@ -85,32 +85,32 @@ Sentinel failover disconnection === Starting with Redis 2.8.12, when Redis Sentinel changes the configuration of -an instance, for example promoting a slave to a master, demoting a master to +an instance, for example promoting a replica to a master, demoting a master to replicate to the new master after a failover, or simply changing the master -address of a stale slave instance, it sends a `CLIENT KILL type normal` +address of a stale replica instance, it sends a `CLIENT KILL type normal` command to the instance in order to make sure all the clients are disconnected from the reconfigured instance. This will force clients to resolve the master address again. If the client will contact a Sentinel with yet not updated information, the verification of the Redis instance role via the `ROLE` command will fail, allowing the client to detect that the contacted Sentinel provided stale information, and will try again. -Note: it is possible that a stale master returns online at the same time a client contacts a stale Sentinel instance, so the client may connect with a stale master, and yet the ROLE output will match. However when the master is back again Sentinel will try to demote it to slave, triggering a new disconnection. The same reasoning applies to connecting to stale slaves that will get reconfigured to replicate with a different master. +Note: it is possible that a stale master returns online at the same time a client contacts a stale Sentinel instance, so the client may connect with a stale master, and yet the ROLE output will match. However when the master is back again Sentinel will try to demote it to replica, triggering a new disconnection. The same reasoning applies to connecting to stale replicas that will get reconfigured to replicate with a different master. 
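For reference, the verification described above only needs the first element of the `ROLE` reply. An abbreviated exchange against a healthy master, with made-up addresses and offsets:

    127.0.0.1:6379> ROLE
    1) "master"
    2) (integer) 3129659
    3) 1) 1) "127.0.0.1"
          2) "6380"
          3) "3129242"

A client that expected a master but sees `"slave"` as the first element (or the reverse) should discard the connection and run the discovery procedure again.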
-Connecting to slaves +Connecting to replicas === -Sometimes clients are interested to connect to slaves, for example in order to scale read requests. This protocol supports connecting to slaves by modifying step 2 slightly. Instead of calling the following command: +Sometimes clients are interested to connect to replicas, for example in order to scale read requests. This protocol supports connecting to replicas by modifying step 2 slightly. Instead of calling the following command: SENTINEL get-master-addr-by-name master-name The clients should call instead: - SENTINEL slaves master-name + SENTINEL replicas master-name -In order to retrieve a list of slave instances. +In order to retrieve a list of replica instances. Symmetrically the client should verify with the `ROLE` command that the -instance is actually a slave, in order to avoid scaling read queries with +instance is actually a replica, in order to avoid scaling read queries with the master. Connection pools @@ -146,7 +146,7 @@ Redis instances configurations. This mechanism can be used in order to speedup the reconfiguration of clients, that is, clients may listen to Pub/Sub in order to know when a configuration change happened in order to run the three steps protocol explained in this -document in order to resolve the new Redis master (or slave) address. +document in order to resolve the new Redis master (or replica) address. However update messages received via Pub/Sub should not substitute the above procedure, since there is no guarantee that a client is able to diff --git a/topics/sentinel.md b/topics/sentinel.md index eac3da92bc..123f309013 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -10,9 +10,9 @@ notifications and acts as a configuration provider for clients. This is the full list of Sentinel capabilities at a macroscopical level (i.e. the *big picture*): -* **Monitoring**. Sentinel constantly checks if your master and slave instances are working as expected. +* **Monitoring**. Sentinel constantly checks if your master and replica instances are working as expected. * **Notification**. Sentinel can notify the system administrator, or other computer programs, via an API, that something is wrong with one of the monitored Redis instances. -* **Automatic failover**. If a master is not working as expected, Sentinel can start a failover process where a slave is promoted to master, the other additional slaves are reconfigured to use the new master, and the applications using the Redis server are informed about the new address to use when connecting. +* **Automatic failover**. If a master is not working as expected, Sentinel can start a failover process where a replica is promoted to master, the other additional replicas are reconfigured to use the new master, and the applications using the Redis server are informed about the new address to use when connecting. * **Configuration provider**. Sentinel acts as a source of authority for clients service discovery: clients connect to Sentinels in order to ask for the address of the current Redis master responsible for a given service. If a failover occurs, Sentinels will report the new address. Distributed nature of Sentinel @@ -25,7 +25,7 @@ Sentinel itself is designed to run in a configuration where there are multiple S 1. Failure detection is performed when multiple Sentinels agree about the fact a given master is no longer available. This lowers the probability of false positives. 2. 
Sentinel works even if not all the Sentinel processes are working, making the system robust against failures. There is no fun in having a failover system which is itself a single point of failure, after all. -The sum of Sentinels, Redis instances (masters and slaves) and clients +The sum of Sentinels, Redis instances (masters and replicas) and clients connecting to Sentinel and Redis, are also a larger distributed system with specific properties. In this document concepts will be introduced gradually starting from basic information needed in order to understand the basic @@ -82,7 +82,7 @@ Fundamental things to know about Sentinel before deploying 3. Sentinel + Redis distributed system does not guarantee that acknowledged writes are retained during failures, since Redis uses asynchronous replication. However there are ways to deploy Sentinel that make the window to lose writes limited to certain moments, while there are other less secure ways to deploy it. 4. You need Sentinel support in your clients. Popular client libraries have Sentinel support, but not all. 5. There is no HA setup which is safe if you don't test from time to time in development environments, or even better if you can, in production environments, if they work. You may have a misconfiguration that will become apparent only when it's too late (at 3am when your master stops working). -6. **Sentinel, Docker, or other forms of Network Address Translation or Port Mapping should be mixed with care**: Docker performs port remapping, breaking Sentinel auto discovery of other Sentinel processes and the list of slaves for a master. Check the section about Sentinel and Docker later in this document for more information. +6. **Sentinel, Docker, or other forms of Network Address Translation or Port Mapping should be mixed with care**: Docker performs port remapping, breaking Sentinel auto discovery of other Sentinel processes and the list of replicas for a master. Check the section about Sentinel and Docker later in this document for more information. Configuring Sentinel --- @@ -103,15 +103,15 @@ following: sentinel parallel-syncs resque 5 You only need to specify the masters to monitor, giving to each separated -master (that may have any number of slaves) a different name. There is no -need to specify slaves, which are auto-discovered. Sentinel will update the -configuration automatically with additional information about slaves (in +master (that may have any number of replicas) a different name. There is no +need to specify replicas, which are auto-discovered. Sentinel will update the +configuration automatically with additional information about replicas (in order to retain the information in case of restart). The configuration is -also rewritten every time a slave is promoted to master during a failover +also rewritten every time a replica is promoted to master during a failover and every time a new Sentinel is discovered. The example configuration above basically monitors two sets of Redis -instances, each composed of a master and an undefined number of slaves. +instances, each composed of a master and an undefined number of replicas. One set of instances is called `mymaster`, and the other `resque`. 
The meaning of the arguments of `sentinel monitor` statements is the
following:

@@ -148,13 +148,13 @@ And are used for the following purposes:

* `down-after-milliseconds` is the time in milliseconds an instance should not
be reachable (either does not reply to our PINGs or it is replying with an
error) for a Sentinel starting to think it is down.
-* `parallel-syncs` sets the number of slaves that can be reconfigured to use
+* `parallel-syncs` sets the number of replicas that can be reconfigured to use
the new master after a failover at the same time. The lower the number, the
more time it will take for the failover process to complete, however if the
-slaves are configured to serve old data, you may not want all the slaves to
+replicas are configured to serve old data, you may not want all the replicas to
re-synchronize with the master at the same time. While the replication
-process is mostly non blocking for a slave, there is a moment when it stops to
-load the bulk data from the master. You may want to make sure only one slave
+process is mostly non blocking for a replica, there is a moment when it stops to
+load the bulk data from the master. You may want to make sure only one replica
at a time is not reachable by setting this option to the value of 1.

Additional options are described in the rest of this document and
@@ -202,7 +202,7 @@ Network partitions are shown as interrupted lines using slashes:

Also note that:

* Masters are called M1, M2, M3, ..., Mn.
-* Slaves are called R1, R2, R3, ..., Rn (R stands for *replica*).
+* Replicas are called R1, R2, R3, ..., Rn (R stands for *replica*).
* Sentinels are called S1, S2, S3, ..., Sn.
* Clients are called C1, C2, C3, ..., Cn.
* When an instance changes role because of Sentinel actions, we put it inside square brackets, so [M1] means an instance that is now a master because of Sentinel intervention.
@@ -264,7 +264,7 @@ be able to authorize a failover, making clients able to continue.

In every Sentinel setup, as Redis uses asynchronous replication, there is
always the risk of losing some writes because a given acknowledged write
-may not be able to reach the slave which is promoted to master. However in
+may not be able to reach the replica which is promoted to master. However in
the above setup there is a higher risk due to clients being partitioned
away with an old master, like in the following picture:

@@ -281,31 +281,31 @@ with an old master, like in the following picture:
            +------+          +----+

In this case a network partition isolated the old master M1, so the
-slave R2 is promoted to master. However clients, like C1, that are
+replica R2 is promoted to master. However clients, like C1, that are
in the same partition as the old master, may continue to write data to
the old master. This data will be lost forever since when the partition
-will heal, the master will be reconfigured as a slave of the new master,
+will heal, the master will be reconfigured as a replica of the new master,
discarding its data set.

This problem can be mitigated using the following Redis replication
feature, which allows a master to stop accepting writes if it detects that
-it is no longer able to transfer its writes to the specified number of slaves.
+it is no longer able to transfer its writes to the specified number of replicas.
- min-slaves-to-write 1 - min-slaves-max-lag 10 + min-replicas-to-write 1 + min-replicas-max-lag 10 -With the above configuration (please see the self-commented `redis.conf` example in the Redis distribution for more information) a Redis instance, when acting as a master, will stop accepting writes if it can't write to at least 1 slave. Since replication is asynchronous *not being able to write* actually means that the slave is either disconnected, or is not sending us asynchronous acknowledges for more than the specified `max-lag` number of seconds. +With the above configuration (please see the self-commented `redis.conf` example in the Redis distribution for more information) a Redis instance, when acting as a master, will stop accepting writes if it can't write to at least 1 replica. Since replication is asynchronous *not being able to write* actually means that the replica is either disconnected, or is not sending us asynchronous acknowledges for more than the specified `max-lag` number of seconds. Using this configuration, the old Redis master M1 in the above example, will become unavailable after 10 seconds. When the partition heals, the Sentinel configuration will converge to the new one, the client C1 will be able to fetch a valid configuration and will continue with the new master. -However there is no free lunch. With this refinement, if the two slaves are +However there is no free lunch. With this refinement, if the two replicas are down, the master will stop accepting writes. It's a trade off. Example 3: Sentinel in the client boxes --- Sometimes we have only two Redis boxes available, one for the master and -one for the slave. The configuration in the example 2 is not viable in +one for the replica. The configuration in the example 2 is not viable in that case, so we can resort to the following, where Sentinels are placed where clients are: @@ -334,15 +334,15 @@ If the box where M1 and S1 are running fails, the failover will happen without issues, however it is easy to see that different network partitions will result in different behaviors. For example Sentinel will not be able to setup if the network between the clients and the Redis servers is -disconnected, since the Redis master and slave will both be unavailable. +disconnected, since the Redis master and replica will both be unavailable. Note that if C3 gets partitioned with M1 (hardly possible with the network described above, but more likely possible with different layouts, or because of failures at the software layer), we have a similar issue as described in Example 2, with the difference that here we have -no way to break the symmetry, since there is just a slave and master, so -the master can't stop accepting queries when it is disconnected from its slave, -otherwise the master would never be available during slave failures. +no way to break the symmetry, since there is just a replica and master, so +the master can't stop accepting queries when it is disconnected from its replica, +otherwise the master would never be available during replica failures. So this is a valid setup but the setup in the Example 2 has advantages such as the HA system of Redis running in the same boxes as Redis itself @@ -362,7 +362,7 @@ case we need to resort to a mixed setup like the following: +----+ | +----+ | +------+-----+ - | | + | | | | +----+ +----+ | C1 | | C2 | @@ -394,13 +394,13 @@ not ports but also IP addresses. Remapping ports and addresses creates issues with Sentinel in two ways: 1. 
Sentinel auto-discovery of other Sentinels no longer works, since it is based on *hello* messages where each Sentinel announces the port and IP address at which it is listening for connections. However Sentinels have no way to understand that an address or port is remapped, so each announces information that other Sentinels cannot use to connect.
-2. Slaves are listed in the `INFO` output of a Redis master in a similar way: the address is detected by the master checking the remote peer of the TCP connection, while the port is advertised by the slave itself during the handshake, however the port may be wrong for the same reason as exposed in point 1.
+2. Replicas are listed in the `INFO` output of a Redis master in a similar way: the address is detected by the master checking the remote peer of the TCP connection, while the port is advertised by the replica itself during the handshake, however the port may be wrong for the same reason as exposed in point 1.

-Since Sentinels auto detect slaves using masters `INFO` output information,
-the detected slaves will not be reachable, and Sentinel will never be able to
-failover the master, since there are no good slaves from the point of view of
+Since Sentinels auto detect replicas using the master's `INFO` output information,
+the detected replicas will not be reachable, and Sentinel will never be able to
+failover the master, since there are no good replicas from the point of view of
the system, so there is currently no way to monitor with Sentinel a set of
-master and slave instances deployed with Docker, **unless you instruct Docker
+master and replica instances deployed with Docker, **unless you instruct Docker
to map the port 1:1**.

For the first problem, in case you want to run a set of Sentinel
@@ -423,7 +423,7 @@ how to configure and interact with 3 Sentinel instances.

Here we assume that the instances are executed at port 5000, 5001, 5002.
We also assume that you have a running Redis master at port 6379 with a
-slave running at port 6380. We will use the IPv4 loopback address 127.0.0.1
+replica running at port 6380. We will use the IPv4 loopback address 127.0.0.1
everywhere during the tutorial, assuming you are running the simulation
on your personal computer.

@@ -440,7 +440,7 @@ as port numbers.

A few things to note about the above configuration:

-* The master set is called `mymaster`. It identifies the master and its slaves. Since each *master set* has a different name, Sentinel can monitor different sets of masters and slaves at the same time.
+* The master set is called `mymaster`. It identifies the master and its replicas. Since each *master set* has a different name, Sentinel can monitor different sets of masters and replicas at the same time.
* The quorum was set to the value of 2 (last argument of `sentinel monitor` configuration directive).
* The `down-after-milliseconds` value is 5000 milliseconds, that is 5 seconds, so masters will be detected as failing as soon as we don't receive any reply from our pings within this amount of time.
@@ -508,7 +508,7 @@ a few that are of particular interest for us:

1. `num-other-sentinels` is 2, so we know the Sentinel already detected two more Sentinels for this master. If you check the logs you'll see the `+sentinel` events generated.
2. `flags` is just `master`. If the master was down we could expect to see `s_down` or `o_down` flag as well here.
-3.
`num-slaves` is correctly set to 1, so Sentinel also detected that there is an attached replica to our master. In order to explore more about this instance, you may want to try the following two commands: @@ -516,14 +516,14 @@ two commands: SENTINEL slaves mymaster SENTINEL sentinels mymaster -The first will provide similar information about the slaves connected to the +The first will provide similar information about the replicas connected to the master, and the second about the other Sentinels. Obtaining the address of the current master --- As we already specified, Sentinel also acts as a configuration provider for -clients that want to connect to a set of master and slaves. Because of +clients that want to connect to a set of master and replicas. Because of possible failovers or reconfigurations, clients have no idea about who is the currently active master for a given set of instances, so Sentinel exports an API to ask this question: @@ -565,7 +565,7 @@ Sentinel API === Sentinel provides an API in order to inspect its state, check the health -of monitored masters and slaves, subscribe in order to receive specific +of monitored masters and replicas, subscribe in order to receive specific notifications, and change the Sentinel configuration at run time. By default Sentinel runs using TCP port 26379 (note that 6379 is the normal @@ -589,10 +589,10 @@ order to modify the Sentinel configuration, which are covered later. * **PING** This command simply returns PONG. * **SENTINEL masters** Show a list of monitored masters and their state. * **SENTINEL master ``** Show the state and info of the specified master. -* **SENTINEL slaves ``** Show a list of slaves for this master, and their state. +* **SENTINEL slaves ``** Show a list of replicas for this master, and their state. * **SENTINEL sentinels ``** Show a list of sentinel instances for this master, and their state. -* **SENTINEL get-master-addr-by-name ``** Return the ip and port number of the master with that name. If a failover is in progress or terminated successfully for this master it returns the address and port of the promoted slave. -* **SENTINEL reset ``** This command will reset all the masters with matching name. The pattern argument is a glob-style pattern. The reset process clears any previous state in a master (including a failover in progress), and removes every slave and sentinel already discovered and associated with the master. +* **SENTINEL get-master-addr-by-name ``** Return the ip and port number of the master with that name. If a failover is in progress or terminated successfully for this master it returns the address and port of the promoted replica. +* **SENTINEL reset ``** This command will reset all the masters with matching name. The pattern argument is a glob-style pattern. The reset process clears any previous state in a master (including a failover in progress), and removes every replica and sentinel already discovered and associated with the master. * **SENTINEL failover ``** Force a failover as if the master was not reachable, and without asking for agreement to other Sentinels (however a new version of the configuration will be published so that the other Sentinels will update their configurations). * **SENTINEL ckquorum ``** Check if the current Sentinel configuration is able to reach the quorum needed to failover a master, and the majority needed to authorize the failover. This command should be used in monitoring systems to check if a Sentinel deployment is ok. 
* **SENTINEL flushconfig** Force Sentinel to rewrite its configuration on disk, including the current Sentinel state. Normally Sentinel rewrites the configuration every time something changes in its state (in the context of the subset of the state which is persisted on disk across restart). However sometimes it is possible that the configuration file is lost because of operation errors, disk failures, package upgrade scripts or configuration managers. In those cases a way to force Sentinel to rewrite the configuration file is handy. This command works even if the previous configuration file is completely missing.

@@ -625,7 +625,7 @@ Adding a new Sentinel to your deployment is a simple process because of the
auto-discover mechanism implemented by Sentinel. All you need to do is to
start the new Sentinel configured to monitor the currently active master.
Within 10 seconds the Sentinel will acquire the list of other Sentinels and
-the set of slaves attached to the master.
+the set of replicas attached to the master.

If you need to add multiple Sentinels at once, it is suggested to add them
one after the other, waiting for all the other Sentinels to already know
@@ -649,23 +649,23 @@ the following steps should be performed in absence of network partitions:

2. Send a `SENTINEL RESET *` command to all the other Sentinel instances (instead of `*` you can use the exact master name if you want to reset just a single master). One after the other, waiting at least 30 seconds between instances.
3. Check that all the Sentinels agree about the number of Sentinels currently active, by inspecting the output of `SENTINEL MASTER mastername` of every Sentinel.

-Removing the old master or unreachable slaves
+Removing the old master or unreachable replicas
---

-Sentinels never forget about slaves of a given master, even when they are
+Sentinels never forget about replicas of a given master, even when they are
unreachable for a long time. This is useful, because Sentinels should be able
-to correctly reconfigure a returning slave after a network partition or a
+to correctly reconfigure a returning replica after a network partition or a
failure event.

Moreover, after a failover, the failed over master is virtually added as a
-slave of the new master, this way it will be reconfigured to replicate with
+replica of the new master, this way it will be reconfigured to replicate with
the new master as soon as it is available again.

-However sometimes you want to remove a slave (that may be the old master)
-forever from the list of slaves monitored by Sentinels.
+However sometimes you want to remove a replica (that may be the old master)
+forever from the list of replicas monitored by Sentinels.

In order to do this, you need to send a `SENTINEL RESET mastername` command
-to all the Sentinels: they'll refresh the list of slaves within the next
+to all the Sentinels: they'll refresh the list of replicas within the next
10 seconds, only adding the ones listed as correctly replicating from the
current master `INFO` output.

@@ -694,12 +694,12 @@ The part identifying the master (from the @ argument to the end) is optional
and is only specified if the instance is not a master itself.

* **+reset-master** `<instance details>` -- The master was reset.
-* **+slave** `<instance details>` -- A new slave was detected and attached.
+* **+slave** `<instance details>` -- A new replica was detected and attached.
* **+failover-state-reconf-slaves** `<instance details>` -- Failover state changed to `reconf-slaves` state.
-* **+failover-detected** `<instance details>` -- A failover started by another Sentinel or any other external entity was detected (An attached slave turned into a master).
-* **+slave-reconf-sent** `<instance details>` -- The leader sentinel sent the `SLAVEOF` command to this instance in order to reconfigure it for the new slave.
-* **+slave-reconf-inprog** `<instance details>` -- The slave being reconfigured showed to be a slave of the new master ip:port pair, but the synchronization process is not yet complete.
-* **+slave-reconf-done** `<instance details>` -- The slave is now synchronized with the new master.
+* **+failover-detected** `<instance details>` -- A failover started by another Sentinel or any other external entity was detected (An attached replica turned into a master).
+* **+slave-reconf-sent** `<instance details>` -- The leader sentinel sent the `SLAVEOF` command to this instance in order to reconfigure it to replicate with the new master.
+* **+slave-reconf-inprog** `<instance details>` -- The replica being reconfigured shows itself to be a replica of the new master ip:port pair, but the synchronization process is not yet complete.
+* **+slave-reconf-done** `<instance details>` -- The replica is now synchronized with the new master.
* **-dup-sentinel** `<instance details>` -- One or more sentinels for the specified master were removed as duplicated (this happens for instance when a Sentinel instance is restarted).
* **+sentinel** `<instance details>` -- A new sentinel for this master was detected and attached.
* **+sdown** `<instance details>` -- The specified instance is now in Subjectively Down state.
@@ -709,12 +709,12 @@ and is only specified if the instance is not a master itself.
* **+new-epoch** `<instance details>` -- The current epoch was updated.
* **+try-failover** `<instance details>` -- New failover in progress, waiting to be elected by the majority.
* **+elected-leader** `<instance details>` -- Won the election for the specified epoch, can do the failover.
-* **+failover-state-select-slave** `<instance details>` -- New failover state is `select-slave`: we are trying to find a suitable slave for promotion.
-* **no-good-slave** `<instance details>` -- There is no good slave to promote. Currently we'll try after some time, but probably this will change and the state machine will abort the failover at all in this case.
-* **selected-slave** `<instance details>` -- We found the specified good slave to promote.
-* **failover-state-send-slaveof-noone** `<instance details>` -- We are trying to reconfigure the promoted slave as master, waiting for it to switch.
-* **failover-end-for-timeout** `<instance details>` -- The failover terminated for timeout, slaves will eventually be configured to replicate with the new master anyway.
-* **failover-end** `<instance details>` -- The failover terminated with success. All the slaves appears to be reconfigured to replicate with the new master.
+* **+failover-state-select-slave** `<instance details>` -- New failover state is `select-slave`: we are trying to find a suitable replica for promotion.
+* **no-good-slave** `<instance details>` -- There is no good replica to promote. Currently we'll retry after some time, but probably this will change and the state machine will just abort the failover in this case.
+* **selected-slave** `<instance details>` -- We found the specified good replica to promote.
+* **failover-state-send-slaveof-noone** `<instance details>` -- We are trying to reconfigure the promoted replica as master, waiting for it to switch.
+* **failover-end-for-timeout** `<instance details>` -- The failover terminated for timeout, replicas will eventually be configured to replicate with the new master anyway.
+* **failover-end** `<instance details>` -- The failover terminated with success. All the replicas appear to be reconfigured to replicate with the new master.
* **switch-master** `<master name> <oldip> <oldport> <newip> <newport>` -- The master's new IP and address is the specified one after a configuration change.
This is **the message most external users are interested in**.
* **+tilt** -- Tilt mode entered.
* **-tilt** -- Tilt mode exited.
@@ -730,49 +730,49 @@ command, that will only succeed if the script was read-only.
If the instance is still in an error condition after this try, it will
eventually be failed over.

-Slaves priority
+Replica priority
---

-Redis instances have a configuration parameter called `slave-priority`.
-This information is exposed by Redis slave instances in their `INFO` output,
-and Sentinel uses it in order to pick a slave among the ones that can be
+Redis instances have a configuration parameter called `replica-priority`.
+This information is exposed by Redis replica instances in their `INFO` output,
+and Sentinel uses it in order to pick a replica among the ones that can be
used in order to failover a master:

-1. If the slave priority is set to 0, the slave is never promoted to master.
-2. Slaves with a *lower* priority number are preferred by Sentinel.
+1. If the replica priority is set to 0, the replica is never promoted to master.
+2. Replicas with a *lower* priority number are preferred by Sentinel.

-For example if there is a slave S1 in the same data center of the current
-master, and another slave S2 in another data center, it is possible to set
+For example if there is a replica S1 in the same data center as the current
+master, and another replica S2 in another data center, it is possible to set
S1 with a priority of 10 and S2 with a priority of 100, so that if the master
fails and both S1 and S2 are available, S1 will be preferred.

-For more information about the way slaves are selected, please check the **slave selection and priority** section of this documentation.
+For more information about the way replicas are selected, please check the **replica selection and priority** section of this documentation.

Sentinel and Redis authentication
---

When the master is configured to require a password from clients,
-as a security measure, slaves need to also be aware of this password in
-order to authenticate with the master and create the master-slave connection
+as a security measure, replicas also need to be aware of this password in
+order to authenticate with the master and create the master-replica connection
used for the asynchronous replication protocol.

This is achieved using the following configuration directives:

* `requirepass` in the master, in order to set the authentication password, and to make sure the instance will not process requests for non authenticated clients.
-* `masterauth` in the slaves in order for the slaves to authenticate with the master in order to correctly replicate data from it.
+* `masterauth` in the replicas, in order for the replicas to authenticate with the master and correctly replicate data from it.

When Sentinel is used, there is not a single master, since after a failover
-slaves may play the role of masters, and old masters can be reconfigured in
-order to act as slaves, so what you want to do is to set the above directives
-in all your instances, both masters and slaves.
+replicas may play the role of masters, and old masters can be reconfigured in
+order to act as replicas, so what you want to do is to set the above directives
+in all your instances, both masters and replicas.

This is also usually a sane setup since you don't want to protect
-data only in the master, having the same data accessible in the slaves.
+data only in the master, while the same data is accessible in the replicas.
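
For example, a minimal sketch of the relevant `redis.conf` fragment, deployed identically to every instance of the group (the password is just a placeholder):

    # Set on masters and replicas alike, since any replica may be promoted.
    requirepass some-long-and-random-password
    masterauth some-long-and-random-password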
-However, in the uncommon case where you need a slave that is accessible
-without authentication, you can still do it by setting up **a slave priority
-of zero**, to prevent this slave from being promoted to master, and
-configuring in this slave only the `masterauth` directive, without
+However, in the uncommon case where you need a replica that is accessible
+without authentication, you can still do it by setting up **a replica priority
+of zero**, to prevent this replica from being promoted to master, and
+configuring in this replica only the `masterauth` directive, without
using the `requirepass` directive, so that data will be readable by
unauthenticated clients.

@@ -838,7 +838,7 @@ An acceptable reply to PING is one of the following:

* PING replied with -MASTERDOWN error.

Any other reply (or no reply at all) is considered non valid.
-However note that **a logical master that advertises itself as a slave in
+However note that **a logical master that advertises itself as a replica in
the INFO output is considered to be down**.

Note that SDOWN requires that no acceptable reply is received for the whole
@@ -860,13 +860,13 @@ order to really start the failover, but no failover can be triggered without
reaching the ODOWN state.

The ODOWN condition **only applies to masters**. For other kinds of instances
-Sentinel doesn't require to act, so the ODOWN state is never reached for slaves
+Sentinel is not required to act, so the ODOWN state is never reached for replicas
and other sentinels, but only SDOWN is.

-However SDOWN has also semantic implications. For example a slave in SDOWN
+However SDOWN also has semantic implications. For example a replica in SDOWN
state is not selected to be promoted by a Sentinel performing a failover.

-Sentinels and Slaves auto discovery
+Sentinels and replicas auto discovery
---

Sentinels stay connected with other Sentinels in order to reciprocally
@@ -874,16 +874,16 @@ check the availability of each other, and to exchange messages. However you
don't need to configure a list of other Sentinel addresses in every Sentinel
instance you run, as Sentinel uses the Redis instances' Pub/Sub capabilities
in order to discover the other Sentinels that are monitoring the same masters
-and slaves.
+and replicas.

This feature is implemented by sending *hello messages* into the channel named
`__sentinel__:hello`.

-Similarly you don't need to configure what is the list of the slaves attached
-to a master, as Sentinel will auto discover this list querying Redis.
+Similarly you don't need to configure the list of replicas attached
+to a master, as Sentinel will auto discover this list querying Redis.

-* Every Sentinel publishes a message to every monitored master and slave Pub/Sub channel `__sentinel__:hello`, every two seconds, announcing its presence with ip, port, runid.
-* Every Sentinel is subscribed to the Pub/Sub channel `__sentinel__:hello` of every master and slave, looking for unknown sentinels. When new sentinels are detected, they are added as sentinels of this master.
+* Every Sentinel publishes a message to every monitored master and replica Pub/Sub channel `__sentinel__:hello`, every two seconds, announcing its presence with ip, port, runid.
+* Every Sentinel is subscribed to the Pub/Sub channel `__sentinel__:hello` of every master and replica, looking for unknown sentinels. When new sentinels are detected, they are added as sentinels of this master.
* Hello messages also include the full current configuration of the master.
If the receiving Sentinel has a configuration for a given master which is older than the one received, it updates to the new configuration immediately.
* Before adding a new sentinel to a master a Sentinel always checks if there is already a sentinel with the same runid or the same address (ip and port pair). In that case all the matching sentinels are removed, and the new one added.

@@ -893,64 +893,64 @@ Sentinel reconfiguration of instances outside the failover procedure

Even when no failover is in progress, Sentinels will always try to set the
current configuration on monitored instances. Specifically:

-* Slaves (according to the current configuration) that claim to be masters, will be configured as slaves to replicate with the current master.
-* Slaves connected to a wrong master, will be reconfigured to replicate with the right master.
+* Replicas (according to the current configuration) that claim to be masters, will be configured as replicas to replicate with the current master.
+* Replicas connected to a wrong master, will be reconfigured to replicate with the right master.

-For Sentinels to reconfigure slaves, the wrong configuration must be observed for some time, that is greater than the period used to broadcast new configurations.
+For Sentinels to reconfigure replicas, the wrong configuration must be observed for some time that is greater than the period used to broadcast new configurations.

-This prevents Sentinels with a stale configuration (for example because they just rejoined from a partition) will try to change the slaves configuration before receiving an update.
+This prevents Sentinels with a stale configuration (for example because they just rejoined from a partition) from trying to change the replica configuration before receiving an update.

Also note how the semantics of always trying to impose the current
configuration makes the failover more resistant to partitions:

-* Masters failed over are reconfigured as slaves when they return available.
-* Slaves partitioned away during a partition are reconfigured once reachable.
+* Masters failed over are reconfigured as replicas when they become available again.
+* Replicas partitioned away during a partition are reconfigured once reachable.

The important lesson to remember about this section is: **Sentinel is a system where each process will always try to impose the last logical configuration on the set of monitored instances**.

-Slave selection and priority
+Replica selection and priority
---

When a Sentinel instance is ready to perform a failover, since the master
is in `ODOWN` state and the Sentinel received the authorization to failover
-from the majority of the Sentinel instances known, a suitable slave needs
+from the majority of the Sentinel instances known, a suitable replica needs
to be selected.

-The slave selection process evaluates the following information about slaves:
+The replica selection process evaluates the following information about replicas:

1. Disconnection time from the master.
-2. Slave priority.
+2. Replica priority.
3. Replication offset processed.
4. Run ID.

-A slave that is found to be disconnected from the master for more than ten
+A replica that is found to be disconnected from the master for more than ten
times the configured master timeout (down-after-milliseconds option), plus
the time the master is also not available from the point of view of the
Sentinel doing the failover, is considered to be not suitable for the failover
and is skipped.
-In more rigorous terms, a slave whose the `INFO` output suggests it has been
+In more rigorous terms, a replica whose `INFO` output suggests it has been
disconnected from the master for more than:

    (down-after-milliseconds * 10) + milliseconds_since_master_is_in_SDOWN_state

is considered to be unreliable and is disregarded entirely.

-The slave selection only considers the slaves that passed the above test,
+The replica selection only considers the replicas that passed the above test,
and sorts them based on the above criteria, in the following order.

-1. The slaves are sorted by `slave-priority` as configured in the `redis.conf` file of the Redis instance. A lower priority will be preferred.
-2. If the priority is the same, the replication offset processed by the slave is checked, and the slave that received more data from the master is selected.
-3. If multiple slaves have the same priority and processed the same data from the master, a further check is performed, selecting the slave with the lexicographically smaller run ID. Having a lower run ID is not a real advantage for a slave, but is useful in order to make the process of slave selection more deterministic, instead of resorting to select a random slave.
+1. The replicas are sorted by `replica-priority` as configured in the `redis.conf` file of the Redis instance. A lower priority will be preferred.
+2. If the priority is the same, the replication offset processed by the replica is checked, and the replica that received more data from the master is selected.
+3. If multiple replicas have the same priority and processed the same data from the master, a further check is performed, selecting the replica with the lexicographically smaller run ID. Having a lower run ID is not a real advantage for a replica, but is useful in order to make the process of replica selection more deterministic, instead of resorting to selecting a random replica.

-Redis masters (that may be turned into slaves after a failover), and slaves, all
-must be configured with a `slave-priority` if there are machines to be strongly
+Redis masters (that may be turned into replicas after a failover), and replicas, all
+must be configured with a `replica-priority` if there are machines to be strongly
preferred. Otherwise all the instances can run with the default run ID (which
-is the suggested setup, since it is far more interesting to select the slave
+is the suggested setup, since it is far more interesting to select the replica
by replication offset).

-A Redis instance can be configured with a special `slave-priority` of zero
+A Redis instance can be configured with a special `replica-priority` of zero
in order to be **never selected** by Sentinels as the new master.
-However a slave configured in this way will still be reconfigured by
+However a replica configured in this way will still be reconfigured by
Sentinels in order to replicate with the new master after a failover,
the only difference is that it will never become a master itself.

@@ -1007,14 +1007,14 @@ Configuration propagation

Once a Sentinel is able to failover a master successfully, it will start
to broadcast the new configuration so that the other Sentinels will update
their information about a given master.

-For a failover to be considered successful, it requires that the Sentinel was able to send the `SLAVEOF NO ONE` command to the selected slave, and that the switch to master was later observed in the `INFO` output of the master.
+For a failover to be considered successful, it requires that the Sentinel was able to send the `SLAVEOF NO ONE` command to the selected replica, and that the switch to master was later observed in the `INFO` output of the master.

-At this point, even if the reconfiguration of the slaves is in progress, the failover is considered to be successful, and all the Sentinels are required to start reporting the new configuration.
+At this point, even if the reconfiguration of the replicas is in progress, the failover is considered to be successful, and all the Sentinels are required to start reporting the new configuration.

The way a new configuration is propagated is the reason why we need every
Sentinel failover to be authorized with a different version number (configuration epoch).

-Every Sentinel continuously broadcast its version of the configuration of a master using Redis Pub/Sub messages, both in the master and all the slaves. At the same time all the Sentinels wait for messages to see what is the configuration
+Every Sentinel continuously broadcasts its version of the configuration of a master using Redis Pub/Sub messages, both in the master and all the replicas. At the same time all the Sentinels wait for messages to see what is the configuration
advertised by the other Sentinels.

Configurations are broadcast in the `__sentinel__:hello` Pub/Sub channel.

@@ -1061,7 +1061,7 @@ a Redis instance, and a Sentinel instance:
            +-------------+
            +------------+

In this system the original state was that Redis 3 was the master, while
-Redis 1 and 2 were slaves. A partition occurred isolating the old master.
+Redis 1 and 2 were replicas. A partition occurred isolating the old master.
Sentinels 1 and 2 started a failover promoting Redis 1 as the new master.

The Sentinel properties guarantee that Sentinel 1 and 2 now have the new
@@ -1073,7 +1073,7 @@ partition will heal, however what happens during the partition if there
are clients partitioned with the old master?

Clients will still be able to write to Redis 3, the old master. When the
-partition will rejoin, Redis 3 will be turned into a slave of Redis 1, and
+partition heals, Redis 3 will be turned into a replica of Redis 1, and
all the data written during the partition will be lost.

Depending on your configuration you may or may not want this scenario to happen:

@@ -1084,10 +1084,10 @@ Depending on your configuration you may want or not that this scenario happens:

Since Redis is asynchronously replicated, there is no way to totally prevent data loss in this scenario, however you can bound the divergence between Redis 3 and Redis 1 using the following Redis configuration option:

-   min-slaves-to-write 1
-   min-slaves-max-lag 10
+   min-replicas-to-write 1
+   min-replicas-max-lag 10

-With the above configuration (please see the self-commented `redis.conf` example in the Redis distribution for more information) a Redis instance, when acting as a master, will stop accepting writes if it can't write to at least 1 slave. Since replication is asynchronous *not being able to write* actually means that the slave is either disconnected, or is not sending us asynchronous acknowledges for more than the specified `max-lag` number of seconds.
+With the above configuration (please see the self-commented `redis.conf` example in the Redis distribution for more information) a Redis instance, when acting as a master, will stop accepting writes if it can't write to at least 1 replica.
Since replication is asynchronous *not being able to write* actually means that the replica is either disconnected, or is not sending us asynchronous acknowledges for more than the specified `max-lag` number of seconds. Using this configuration the Redis 3 in the above example will become unavailable after 10 seconds. When the partition heals, the Sentinel 3 configuration will converge to the new one, and Client B will be able to fetch a valid configuration and continue. From ef3473f9f6e34fde2527becc15dc21e611a0746f Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Wed, 30 Oct 2019 20:48:58 +0200 Subject: [PATCH 0257/1457] Refactors latency-monitoring to subcommands (#1194) --- commands.json | 50 ++++++++++- commands/latency-doctor.md | 45 ++++++++++ commands/latency-graph.md | 64 ++++++++++++++ commands/latency-help.md | 10 +++ commands/latency-history.md | 44 ++++++++++ commands/latency-latest.md | 37 ++++++++ commands/latency-reset.md | 34 ++++++++ topics/latency-monitor.md | 163 ++++++------------------------------ 8 files changed, 309 insertions(+), 138 deletions(-) create mode 100644 commands/latency-doctor.md create mode 100644 commands/latency-graph.md create mode 100644 commands/latency-help.md create mode 100644 commands/latency-history.md create mode 100644 commands/latency-latest.md create mode 100644 commands/latency-reset.md diff --git a/commands.json b/commands.json index 33e4e677fc..175b53e692 100644 --- a/commands.json +++ b/commands.json @@ -1650,7 +1650,6 @@ "summary": "Show helpful text about the different subcommands", "since": "4.0.0", "group": "server" - }, "MEMORY MALLOC-STATS": { "summary": "Show allocator internal stats", @@ -3876,5 +3875,54 @@ ], "since": "5.0.0", "group": "stream" + }, + "LATENCY DOCTOR": { + "summary": "Return a human readable latency analysis report.", + "since": "2.8.13", + "group": "server" + }, + "LATENCY GRAPH": { + "summary": "Return a latency graph for the event.", + "arguments": [ + { + "name": "event", + "type": "string" + } + ], + "since": "2.8.13", + "group": "server" + }, + "LATENCY HISTORY": { + "summary": "Return timestamp-latency samples for the event.", + "arguments": [ + { + "name": "event", + "type": "string" + } + ], + "since": "2.8.13", + "group": "server" + }, + "LATENCY LATEST": { + "summary": "Return the latest latency samples for all events.", + "since": "2.8.13", + "group": "server" + }, + "LATENCY RESET": { + "summary": "Reset latency data for one or more events.", + "arguments": [ + { + "name": "event", + "type": "string", + "optional": true + } + ], + "since": "2.8.13", + "group": "server" + }, + "LATENCY HELP": { + "summary": "Show helpful text about the different subcommands.", + "since": "2.8.13", + "group": "server" } } diff --git a/commands/latency-doctor.md b/commands/latency-doctor.md new file mode 100644 index 0000000000..c8081d2421 --- /dev/null +++ b/commands/latency-doctor.md @@ -0,0 +1,45 @@ +The `LATENCY DOCTOR` command reports about different latency-related issues and advises about possible remedies. + +This command is the most powerful analysis tool in the latency monitoring +framework, and is able to provide additional statistical data like the average +period between latency spikes, the median deviation, and a human-readable +analysis of the event. For certain events, like `fork`, additional information +is provided, like the rate at which the system forks processes. + +This is the output you should post in the Redis mailing list if you are +looking for help about Latency related issues. 
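+
+Note that the latency monitor must be enabled before the doctor has any data
+to analyze; a minimal sketch (the 100 milliseconds threshold is just an
+illustrative value):
+
+```
+127.0.0.1:6379> CONFIG SET latency-monitor-threshold 100
+OK
+```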
+
+@example
+
+```
+127.0.0.1:6379> latency doctor
+
+Dave, I have observed latency spikes in this Redis instance.
+You don't mind talking about it, do you Dave?
+
+1. command: 5 latency spikes (average 300ms, mean deviation 120ms,
+   period 73.40 sec). Worst all time event 500ms.
+
+I have a few advices for you:
+
+- Your current Slow Log configuration only logs events that are
+  slower than your configured latency monitor threshold. Please
+  use 'CONFIG SET slowlog-log-slower-than 1000'.
+- Check your Slow Log to understand what are the commands you are
+  running which are too slow to execute. Please check
+  http://redis.io/commands/slowlog for more information.
+- Deleting, expiring or evicting (because of maxmemory policy)
+  large objects is a blocking operation. If you have very large
+  objects that are often deleted, expired, or evicted, try to
+  fragment those objects into multiple smaller objects.
+```
+
+**Note:** the doctor has erratic psychological behaviors, so we recommend interacting with it carefully.
+
+For more information refer to the [Latency Monitoring Framework page][lm].
+
+[lm]: /topics/latency-monitor
+
+@return
+
+@bulk-string-reply
diff --git a/commands/latency-graph.md b/commands/latency-graph.md
new file mode 100644
index 0000000000..6431feda47
--- /dev/null
+++ b/commands/latency-graph.md
@@ -0,0 +1,64 @@
+Produces an ASCII-art style graph for the specified event.
+
+`LATENCY GRAPH` lets you intuitively understand the latency trend of an `event` via state-of-the-art visualization. It can be used for quickly grasping the situation before resorting to means such as parsing the raw data from `LATENCY HISTORY` or external tooling.
+
+Valid values for `event` are:
+* `active-defrag-cycle`
+* `aof-fsync-always`
+* `aof-fstat`
+* `aof-rewrite-diff-write`
+* `aof-rename`
+* `aof-write`
+* `aof-write-active-child`
+* `aof-write-alone`
+* `aof-write-pending-fsync`
+* `command`
+* `expire-cycle`
+* `eviction-cycle`
+* `eviction-del`
+* `fast-command`
+* `fork`
+* `rdb-unlink-temp-file`
+
+@example
+
+```
+127.0.0.1:6379> latency reset command
+(integer) 0
+127.0.0.1:6379> debug sleep .1
+OK
+127.0.0.1:6379> debug sleep .2
+OK
+127.0.0.1:6379> debug sleep .3
+OK
+127.0.0.1:6379> debug sleep .5
+OK
+127.0.0.1:6379> debug sleep .4
+OK
+127.0.0.1:6379> latency graph command
+command - high 500 ms, low 101 ms (all time high 500 ms)
+--------------------------------------------------------------------------------
+   #_
+  _||
+ _|||
+_||||
+
+11186
+542ss
+sss
+```
+
+The vertical labels under each graph column represent the amount of seconds,
+minutes, hours or days ago the event happened. For example "15s" means that the
+first graphed event happened 15 seconds ago.
+
+The graph is normalized in the min-max scale so that the zero (the underscore
+in the lower row) is the minimum, and a # in the higher row is the maximum.
+
+For more information refer to the [Latency Monitoring Framework page][lm].
+
+[lm]: /topics/latency-monitor
+
+@return
+
+@bulk-string-reply
\ No newline at end of file
diff --git a/commands/latency-help.md b/commands/latency-help.md
new file mode 100644
index 0000000000..8077bf07d9
--- /dev/null
+++ b/commands/latency-help.md
@@ -0,0 +1,10 @@
+The `LATENCY HELP` command returns a helpful text describing the different
+subcommands.
+
+For more information refer to the [Latency Monitoring Framework page][lm].
+
+[lm]: /topics/latency-monitor
+
+@return
+
+@array-reply: a list of subcommands and their descriptions
diff --git a/commands/latency-history.md b/commands/latency-history.md
new file mode 100644
index 0000000000..dc8b3305f5
--- /dev/null
+++ b/commands/latency-history.md
@@ -0,0 +1,44 @@
+The `LATENCY HISTORY` command returns the raw data of the `event`'s latency spikes time series.
+
+This is useful to an application that wants to fetch raw data in order to perform monitoring, display graphs, and so forth.
+
+The command will return up to 160 timestamp-latency pairs for the `event`.
+
+Valid values for `event` are:
+* `active-defrag-cycle`
+* `aof-fsync-always`
+* `aof-fstat`
+* `aof-rewrite-diff-write`
+* `aof-rename`
+* `aof-write`
+* `aof-write-active-child`
+* `aof-write-alone`
+* `aof-write-pending-fsync`
+* `command`
+* `expire-cycle`
+* `eviction-cycle`
+* `eviction-del`
+* `fast-command`
+* `fork`
+* `rdb-unlink-temp-file`
+
+@example
+
+```
+127.0.0.1:6379> latency history command
+1) 1) (integer) 1405067822
+   2) (integer) 251
+2) 1) (integer) 1405067941
+   2) (integer) 1001
+```
+
+For more information refer to the [Latency Monitoring Framework page][lm].
+
+[lm]: /topics/latency-monitor
+
+@return
+
+@array-reply: specifically:
+
+The command returns an array where each element is a two-element array
+representing the timestamp and the latency of the event.
\ No newline at end of file
diff --git a/commands/latency-latest.md b/commands/latency-latest.md
new file mode 100644
index 0000000000..39ffc55336
--- /dev/null
+++ b/commands/latency-latest.md
@@ -0,0 +1,37 @@
+The `LATENCY LATEST` command reports the latest latency events logged.
+
+Each reported event has the following fields:
+
+* Event name.
+* Unix timestamp of the latest latency spike for the event.
+* Latest event latency in milliseconds.
+* All-time maximum latency for this event.
+
+"All-time" means the maximum latency since the Redis instance was
+started, or the time that events were reset with `LATENCY RESET`.
+
+@example
+
+```
+127.0.0.1:6379> debug sleep 1
+OK
+(1.00s)
+127.0.0.1:6379> debug sleep .25
+OK
+127.0.0.1:6379> latency latest
+1) 1) "command"
+   2) (integer) 1405067976
+   3) (integer) 251
+   4) (integer) 1001
+```
+
+For more information refer to the [Latency Monitoring Framework page][lm].
+
+[lm]: /topics/latency-monitor
+
+@return
+
+@array-reply: specifically:
+
+The command returns an array where each element is a four-element array
+representing the event's name, timestamp, latest and all-time latency measurements.
diff --git a/commands/latency-reset.md b/commands/latency-reset.md
new file mode 100644
index 0000000000..826cf71ac1
--- /dev/null
+++ b/commands/latency-reset.md
@@ -0,0 +1,34 @@
+The `LATENCY RESET` command resets the latency spikes time series of all, or only some, events.
+
+When the command is called without arguments, it resets all the
+events, discarding the currently logged latency spike events, and resetting
+the maximum event time register.
+
+It is possible to reset only specific events by providing the `event` names
+as arguments.
+
+Valid values for `event` are:
+* `active-defrag-cycle`
+* `aof-fsync-always`
+* `aof-fstat`
+* `aof-rewrite-diff-write`
+* `aof-rename`
+* `aof-write`
+* `aof-write-active-child`
+* `aof-write-alone`
+* `aof-write-pending-fsync`
+* `command`
+* `expire-cycle`
+* `eviction-cycle`
+* `eviction-del`
+* `fast-command`
+* `fork`
+* `rdb-unlink-temp-file`
+
+For more information refer to the [Latency Monitoring Framework page][lm].
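+
+A minimal usage sketch (the reply counts how many of the named event time
+series actually existed and were reset, so the exact number shown here is
+illustrative):
+
+```
+127.0.0.1:6379> LATENCY RESET command fork
+(integer) 2
+```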
+
+[lm]: /topics/latency-monitor
+
+@return
+
+@integer-reply: the number of event time series that were reset.
diff --git a/topics/latency-monitor.md b/topics/latency-monitor.md
index c18927ee01..ff10c77716 100644
--- a/topics/latency-monitor.md
+++ b/topics/latency-monitor.md
@@ -54,6 +54,25 @@ event. This is how the time series work:

* Latency spikes for the same event happening in the same second are merged (by taking the maximum latency), so even if continuous latency spikes are measured for a given event, for example because the user set a very low threshold, at least 180 seconds of history are available.
* For every element the all-time maximum latency is recorded.

+The framework monitors and logs latency spikes in the execution time of these events:
+
+* `command`: regular commands.
+* `fast-command`: O(1) and O(log N) commands.
+* `fork`: the `fork(2)` system call.
+* `rdb-unlink-temp-file`: the `unlink(2)` system call.
+* `aof-write`: writing to the AOF - a catch-all event for `fsync(2)` system calls.
+* `aof-fsync-always`: the `fsync(2)` system call when invoked by the `appendfsync always` policy.
+* `aof-write-pending-fsync`: the `fsync(2)` system call when there are pending writes.
+* `aof-write-active-child`: the `fsync(2)` system call when performed by a child process.
+* `aof-write-alone`: the `fsync(2)` system call when performed by the main process.
+* `aof-fstat`: the `fstat(2)` system call.
+* `aof-rename`: the `rename(2)` system call for renaming the temporary file after completing `BGREWRITEAOF`.
+* `aof-rewrite-diff-write`: writing the differences accumulated while performing `BGREWRITEAOF`.
+* `active-defrag-cycle`: the active defragmentation cycle.
+* `expire-cycle`: the expiration cycle.
+* `eviction-cycle`: the eviction cycle.
+* `eviction-del`: deletes during the eviction cycle.
+
How to enable latency monitoring
---

@@ -72,142 +91,12 @@ Information reporting with the LATENCY command
---

The user interface to the latency monitoring subsystem is the `LATENCY` command.
-Like many other Redis commands, `LATENCY` accept subcommands that modify the
-behavior of the command. The next sections document each subcommand.
-
-LATENCY LATEST
----
-
-The `LATENCY LATEST` command reports the latest latency events logged. Each event has the following fields:
-
-* Event name.
-* Unix timestamp of the latest latency spike for the event.
-* Latest event latency in millisecond.
-* All-time maximum latency for this event.
-
-All-time does not really mean the maximum latency since the Redis instance was
-started, because it is possible to reset events data using `LATENCY RESET` as we'll see later.
-
-The following is an example output:
-
-```
-127.0.0.1:6379> debug sleep 1
-OK
-(1.00s)
-127.0.0.1:6379> debug sleep .25
-OK
-127.0.0.1:6379> latency latest
-1) 1) "command"
-   2) (integer) 1405067976
-   3) (integer) 251
-   4) (integer) 1001
-```
-
-LATENCY HISTORY `event-name`
----
-
-The `LATENCY HISTORY` command is useful in order to fetch raw data from the
-event time series, as timestamp-latency pairs. The command will return up
-to 160 elements for a given event. An application may want to fetch raw data
-in order to perform monitoring, display graphs, and so forth.
-
-Example output:
-
-```
-127.0.0.1:6379> latency history command
-1) 1) (integer) 1405067822
-   2) (integer) 251
-2) 1) (integer) 1405067941
-   2) (integer) 1001
-```
-
-LATENCY RESET [`event-name` ...
`event-name`]
----
-
-The `LATENCY RESET` command, if called without arguments, resets all the
-events, discarding the currently logged latency spike events, and resetting
-the maximum event time register.
-
-It is possible to reset only specific events by providing the event names
-as arguments. The command returns the number of events time series that were
-reset during the command execution.
-
-LATENCY GRAPH `event-name`
----
-
-Produces an ASCII-art style graph for the specified event:
-
-```
-127.0.0.1:6379> latency reset command
-(integer) 0
-127.0.0.1:6379> debug sleep .1
-OK
-127.0.0.1:6379> debug sleep .2
-OK
-127.0.0.1:6379> debug sleep .3
-OK
-127.0.0.1:6379> debug sleep .5
-OK
-127.0.0.1:6379> debug sleep .4
-OK
-127.0.0.1:6379> latency graph command
-command - high 500 ms, low 101 ms (all time high 500 ms)
---------------------------------------------------------------------------------
-   #_
-  _||
- _|||
-_||||
-
-11186
-542ss
-sss
-```
-
-The vertical labels under each graph column represent the amount of seconds,
-minutes, hours or days ago the event happened. For example "15s" means that the
-first graphed event happened 15 seconds ago.
-
-The graph is normalized in the min-max scale so that the zero (the underscore
-in the lower row) is the minimum, and a # in the higher row is the maximum.
-
-The graph subcommand is useful in order to get a quick idea about the trend
-of a given latency event without using additional tooling, and without the
-need to interpret raw data as provided by `LATENCY HISTORY`.
-
-LATENCY DOCTOR
----
-
-The `LATENCY DOCTOR` command is the most powerful analysis tool in the latency
-monitoring, and is able to provide additional statistical data like the average
-period between latency spikes, the median deviation, and an human readable
-analysis of the event. For certain events, like `fork`, additional information
-is provided, like the rate at which the system forks processes.
-
-This is the output you should post in the Redis mailing list if you are
-looking for help about Latency related issues.
-
-Example output:
-
-    127.0.0.1:6379> latency doctor
-
-    Dave, I have observed latency spikes in this Redis instance.
-    You don't mind talking about it, do you Dave?
-
-    1. command: 5 latency spikes (average 300ms, mean deviation 120ms,
-       period 73.40 sec). Worst all time event 500ms.
-
-    I have a few advices for you:
+Like many other Redis commands, `LATENCY` accepts subcommands that modify its behavior. These subcommands are:

-    - Your current Slow Log configuration only logs events that are
-      slower than your configured latency monitor threshold. Please
-      use 'CONFIG SET slowlog-log-slower-than 1000'.
-    - Check your Slow Log to understand what are the commands you are
-      running which are too slow to execute. Please check
-      http://redis.io/commands/slowlog for more information.
-    - Deleting, expiring or evicting (because of maxmemory policy)
-      large objects is a blocking operation. If you have very large
-      objects that are often deleted, expired, or evicted, try to
-      fragment those objects into multiple smaller objects.
+* `LATENCY LATEST` - returns the latest latency samples for all events.
+* `LATENCY HISTORY` - returns latency time series for a given event.
+* `LATENCY RESET` - resets latency time series data for one or more events.
+* `LATENCY GRAPH` - renders an ASCII-art graph of an event's latency samples.
+* `LATENCY DOCTOR` - replies with a human-readable latency analysis report.
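+
+For instance, a minimal end-to-end sketch - `DEBUG SLEEP` produces an artificial latency spike, and the threshold and reply values shown are illustrative:
+
+```
+127.0.0.1:6379> CONFIG SET latency-monitor-threshold 100
+OK
+127.0.0.1:6379> DEBUG SLEEP 1
+OK
+127.0.0.1:6379> LATENCY LATEST
+1) 1) "command"
+   2) (integer) 1405067976
+   3) (integer) 1001
+   4) (integer) 1001
+```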
-The doctor has erratic psychological behaviors, so we recommend interacting with -it carefully. +Refer to each subcommand's documentation page for further information. From 62bfc1b8a22ac394d014f0889bde107044416f3e Mon Sep 17 00:00:00 2001 From: Avram Lyon Date: Wed, 30 Oct 2019 12:07:21 -0700 Subject: [PATCH 0258/1457] PFMERGE includes target HLL in the merge (#605) This surprised me and wasn't a documented behavior, but it is 100% reproducible. The docs state that the source sets are merged into a new HyperLogLog, which set to the destination key; they do not state that the destination key is party to the merge. ``` 127.0.0.1:6379> PFADD test 1 (integer) 1 127.0.0.1:6379> PFADD test 2 (integer) 1 127.0.0.1:6379> PFADD test 3 (integer) 1 127.0.0.1:6379> PFCOUNT test (integer) 3 127.0.0.1:6379> PFADD foo 1 (integer) 1 127.0.0.1:6379> PFADD foo 2 (integer) 1 127.0.0.1:6379> PFCOUNT foo (integer) 2 127.0.0.1:6379> PFMERGE targ foo test OK 127.0.0.1:6379> PFCOUNT targ (integer) 3 127.0.0.1:6379> PFADD bar 6 (integer) 1 127.0.0.1:6379> PFMERGE targ foo bar OK 127.0.0.1:6379> PFCOUNT targ (integer) 4 ``` --- commands/pfmerge.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/commands/pfmerge.md b/commands/pfmerge.md index 38500b55d7..e90329e58f 100644 --- a/commands/pfmerge.md +++ b/commands/pfmerge.md @@ -5,6 +5,10 @@ structures. The computed merged HyperLogLog is set to the destination variable, which is created if does not exist (defaulting to an empty HyperLogLog). +If the destination variable exists, it is treated as one of the source sets +and its cardinality will be included in the cardinality of the computed +HyperLogLog. + @return @simple-string-reply: The command just returns `OK`. From b9e3c8fd2df4bab0cc2bd12332358cb23bf76844 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Wed, 30 Oct 2019 22:21:27 +0200 Subject: [PATCH 0259/1457] Sorts commands.json alphabetically (as redis.io doesn't) (#1204) --- commands.json | 493 ++++++++++++++++++++++++++++++++++++++++---------- 1 file changed, 394 insertions(+), 99 deletions(-) diff --git a/commands.json b/commands.json index 175b53e692..16d2845b72 100644 --- a/commands.json +++ b/commands.json @@ -45,8 +45,14 @@ "type": "key" }, { - "name": ["start", "end"], - "type": ["integer", "integer"], + "name": [ + "start", + "end" + ], + "type": [ + "integer", + "integer" + ], "optional": true } ], @@ -63,26 +69,52 @@ }, { "command": "GET", - "name": ["type", "offset"], - "type": ["type", "integer"], + "name": [ + "type", + "offset" + ], + "type": [ + "type", + "integer" + ], "optional": true }, { "command": "SET", - "name": ["type", "offset", "value"], - "type": ["type", "integer", "integer"], + "name": [ + "type", + "offset", + "value" + ], + "type": [ + "type", + "integer", + "integer" + ], "optional": true }, { "command": "INCRBY", - "name": ["type", "offset", "increment"], - "type": ["type", "integer", "integer"], + "name": [ + "type", + "offset", + "increment" + ], + "type": [ + "type", + "integer", + "integer" + ], "optional": true }, { "command": "OVERFLOW", "type": "enum", - "enum": ["WRAP", "SAT", "FAIL"], + "enum": [ + "WRAP", + "SAT", + "FAIL" + ], "optional": true } ], @@ -248,7 +280,12 @@ { "command": "TYPE", "type": "enum", - "enum": ["normal", "master", "slave", "pubsub"], + "enum": [ + "normal", + "master", + "slave", + "pubsub" + ], "optional": true }, { @@ -274,7 +311,12 @@ { "command": "TYPE", "type": "enum", - "enum": ["normal", "master", "replica", "pubsub"], + "enum": [ + "normal", + "master", + "replica", + "pubsub" + 
], "optional": true } ], @@ -306,7 +348,11 @@ { "name": "reply-mode", "type": "enum", - "enum": ["ON", "OFF", "SKIP"] + "enum": [ + "ON", + "OFF", + "SKIP" + ] } ], "since": "3.2", @@ -335,7 +381,10 @@ { "name": "unblock-type", "type": "enum", - "enum": ["TIMEOUT", "ERROR"], + "enum": [ + "TIMEOUT", + "ERROR" + ], "optional": true } ], @@ -405,7 +454,10 @@ { "name": "options", "type": "enum", - "enum": ["FORCE","TAKEOVER"], + "enum": [ + "FORCE", + "TAKEOVER" + ], "optional": true } ], @@ -511,7 +563,10 @@ { "name": "reset-type", "type": "enum", - "enum": ["HARD", "SOFT"], + "enum": [ + "HARD", + "SOFT" + ], "optional": true } ], @@ -547,7 +602,12 @@ { "name": "subcommand", "type": "enum", - "enum": ["IMPORTING", "MIGRATING", "STABLE", "NODE"] + "enum": [ + "IMPORTING", + "MIGRATING", + "STABLE", + "NODE" + ] }, { "name": "node-id", @@ -854,7 +914,9 @@ { "name": "async", "type": "enum", - "enum": ["ASYNC"], + "enum": [ + "ASYNC" + ], "optional": true } ], @@ -867,7 +929,9 @@ { "name": "async", "type": "enum", - "enum": ["ASYNC"], + "enum": [ + "ASYNC" + ], "optional": true } ], @@ -883,8 +947,16 @@ "type": "key" }, { - "name": ["longitude", "latitude", "member"], - "type": ["double", "double", "string"], + "name": [ + "longitude", + "latitude", + "member" + ], + "type": [ + "double", + "double", + "string" + ], "multiple": true } ], @@ -973,24 +1045,35 @@ { "name": "unit", "type": "enum", - "enum": ["m", "km", "ft", "mi"] + "enum": [ + "m", + "km", + "ft", + "mi" + ] }, { "name": "withcoord", "type": "enum", - "enum": ["WITHCOORD"], + "enum": [ + "WITHCOORD" + ], "optional": true }, { "name": "withdist", "type": "enum", - "enum": ["WITHDIST"], + "enum": [ + "WITHDIST" + ], "optional": true }, { "name": "withhash", "type": "enum", - "enum": ["WITHHASH"], + "enum": [ + "WITHHASH" + ], "optional": true }, { @@ -1002,7 +1085,10 @@ { "name": "order", "type": "enum", - "enum": ["ASC", "DESC"], + "enum": [ + "ASC", + "DESC" + ], "optional": true }, { @@ -1040,24 +1126,35 @@ { "name": "unit", "type": "enum", - "enum": ["m", "km", "ft", "mi"] + "enum": [ + "m", + "km", + "ft", + "mi" + ] }, { "name": "withcoord", "type": "enum", - "enum": ["WITHCOORD"], + "enum": [ + "WITHCOORD" + ], "optional": true }, { "name": "withdist", "type": "enum", - "enum": ["WITHDIST"], + "enum": [ + "WITHDIST" + ], "optional": true }, { "name": "withhash", "type": "enum", - "enum": ["WITHHASH"], + "enum": [ + "WITHHASH" + ], "optional": true }, { @@ -1069,7 +1166,10 @@ { "name": "order", "type": "enum", - "enum": ["ASC", "DESC"], + "enum": [ + "ASC", + "DESC" + ], "optional": true }, { @@ -1303,8 +1403,14 @@ "type": "key" }, { - "name": ["field", "value"], - "type": ["string", "string"], + "name": [ + "field", + "value" + ], + "type": [ + "string", + "string" + ], "multiple": true } ], @@ -1320,8 +1426,14 @@ "type": "key" }, { - "name": ["field", "value"], - "type": ["string", "string"], + "name": [ + "field", + "value" + ], + "type": [ + "string", + "string" + ], "multiple": true } ], @@ -1489,7 +1601,10 @@ { "name": "where", "type": "enum", - "enum": ["BEFORE", "AFTER"] + "enum": [ + "BEFORE", + "AFTER" + ] }, { "name": "pivot", @@ -1712,7 +1827,10 @@ { "name": "key", "type": "enum", - "enum": ["key", "\"\""] + "enum": [ + "key", + "\"\"" + ] }, { "name": "destination-db", @@ -1725,13 +1843,17 @@ { "name": "copy", "type": "enum", - "enum": ["COPY"], + "enum": [ + "COPY" + ], "optional": true }, { "name": "replace", "type": "enum", - "enum": ["REPLACE"], + "enum": [ + "REPLACE" + ], "optional": true }, { @@ -1813,8 
+1935,14 @@ "complexity": "O(N) where N is the number of keys to set.", "arguments": [ { - "name": ["key", "value"], - "type": ["key", "string"], + "name": [ + "key", + "value" + ], + "type": [ + "key", + "string" + ], "multiple": true } ], @@ -1826,8 +1954,14 @@ "complexity": "O(N) where N is the number of keys to set.", "arguments": [ { - "name": ["key", "value"], - "type": ["key", "string"], + "name": [ + "key", + "value" + ], + "type": [ + "key", + "string" + ], "multiple": true } ], @@ -1985,8 +2119,12 @@ "complexity": "O(N) where N is the number of patterns the client is already subscribed to.", "arguments": [ { - "name": ["pattern"], - "type": ["pattern"], + "name": [ + "pattern" + ], + "type": [ + "pattern" + ], "multiple": true } ], @@ -2127,13 +2265,17 @@ { "name": "replace", "type": "enum", - "enum": ["REPLACE"], + "enum": [ + "REPLACE" + ], "optional": true }, { "name": "absttl", "type": "enum", - "enum": ["ABSTTL"], + "enum": [ + "ABSTTL" + ], "optional": true }, { @@ -2260,7 +2402,11 @@ { "name": "mode", "type": "enum", - "enum": ["YES", "SYNC", "NO"] + "enum": [ + "YES", + "SYNC", + "NO" + ] } ], "since": "3.2.0", @@ -2359,13 +2505,19 @@ { "name": "expiration", "type": "enum", - "enum": ["EX seconds", "PX milliseconds"], + "enum": [ + "EX seconds", + "PX milliseconds" + ], "optional": true }, { "name": "condition", "type": "enum", - "enum": ["NX", "XX"], + "enum": [ + "NX", + "XX" + ], "optional": true } ], @@ -2454,7 +2606,10 @@ { "name": "save-mode", "type": "enum", - "enum": ["NOSAVE", "SAVE"], + "enum": [ + "NOSAVE", + "SAVE" + ], "optional": true } ], @@ -2601,8 +2756,14 @@ }, { "command": "LIMIT", - "name": ["offset", "count"], - "type": ["integer", "integer"], + "name": [ + "offset", + "count" + ], + "type": [ + "integer", + "integer" + ], "optional": true }, { @@ -2615,13 +2776,18 @@ { "name": "order", "type": "enum", - "enum": ["ASC", "DESC"], + "enum": [ + "ASC", + "DESC" + ], "optional": true }, { "name": "sorting", "type": "enum", - "enum": ["ALPHA"], + "enum": [ + "ALPHA" + ], "optional": true }, { @@ -2702,8 +2868,12 @@ "complexity": "O(N) where N is the number of channels to subscribe to.", "arguments": [ { - "name": ["channel"], - "type": ["string"], + "name": [ + "channel" + ], + "type": [ + "string" + ], "multiple": true } ], @@ -2891,24 +3061,37 @@ { "name": "condition", "type": "enum", - "enum": ["NX","XX"], + "enum": [ + "NX", + "XX" + ], "optional": true }, { "name": "change", "type": "enum", - "enum": ["CH"], + "enum": [ + "CH" + ], "optional": true }, { "name": "increment", "type": "enum", - "enum": ["INCR"], + "enum": [ + "INCR" + ], "optional": true }, { - "name": ["score", "member"], - "type": ["double", "string"], + "name": [ + "score", + "member" + ], + "type": [ + "double", + "string" + ], "multiple": true } ], @@ -2995,7 +3178,11 @@ "command": "AGGREGATE", "name": "aggregate", "type": "enum", - "enum": ["SUM", "MIN", "MAX"], + "enum": [ + "SUM", + "MIN", + "MAX" + ], "optional": true } ], @@ -3075,7 +3262,9 @@ { "name": "withscores", "type": "enum", - "enum": ["WITHSCORES"], + "enum": [ + "WITHSCORES" + ], "optional": true } ], @@ -3100,8 +3289,14 @@ }, { "command": "LIMIT", - "name": ["offset", "count"], - "type": ["integer", "integer"], + "name": [ + "offset", + "count" + ], + "type": [ + "integer", + "integer" + ], "optional": true } ], @@ -3126,8 +3321,14 @@ }, { "command": "LIMIT", - "name": ["offset", "count"], - "type": ["integer", "integer"], + "name": [ + "offset", + "count" + ], + "type": [ + "integer", + "integer" + ], "optional": 
true } ], @@ -3153,13 +3354,21 @@ { "name": "withscores", "type": "enum", - "enum": ["WITHSCORES"], + "enum": [ + "WITHSCORES" + ], "optional": true }, { "command": "LIMIT", - "name": ["offset", "count"], - "type": ["integer", "integer"], + "name": [ + "offset", + "count" + ], + "type": [ + "integer", + "integer" + ], "optional": true } ], @@ -3278,7 +3487,9 @@ { "name": "withscores", "type": "enum", - "enum": ["WITHSCORES"], + "enum": [ + "WITHSCORES" + ], "optional": true } ], @@ -3304,13 +3515,21 @@ { "name": "withscores", "type": "enum", - "enum": ["WITHSCORES"], + "enum": [ + "WITHSCORES" + ], "optional": true }, { "command": "LIMIT", - "name": ["offset", "count"], - "type": ["integer", "integer"], + "name": [ + "offset", + "count" + ], + "type": [ + "integer", + "integer" + ], "optional": true } ], @@ -3377,7 +3596,11 @@ "command": "AGGREGATE", "name": "aggregate", "type": "enum", - "enum": ["SUM", "MIN", "MAX"], + "enum": [ + "SUM", + "MIN", + "MAX" + ], "optional": true } ], @@ -3504,8 +3727,14 @@ "arguments": [ { "command": "CONSUMERS", - "name": ["key", "groupname"], - "type": ["key", "string"], + "name": [ + "key", + "groupname" + ], + "type": [ + "key", + "string" + ], "optional": true }, { @@ -3523,7 +3752,9 @@ { "name": "help", "type": "enum", - "enum": ["HELP"], + "enum": [ + "HELP" + ], "optional": true } ], @@ -3543,8 +3774,14 @@ "type": "string" }, { - "name": ["field", "value"], - "type": ["string", "string"], + "name": [ + "field", + "value" + ], + "type": [ + "string", + "string" + ], "multiple": true } ], @@ -3562,12 +3799,16 @@ { "name": "strategy", "type": "enum", - "enum": ["MAXLEN"] + "enum": [ + "MAXLEN" + ] }, { "name": "approx", "type": "enum", - "enum": ["~"], + "enum": [ + "~" + ], "optional": true }, { @@ -3678,7 +3919,9 @@ { "name": "streams", "type": "enum", - "enum": ["STREAMS"] + "enum": [ + "STREAMS" + ] }, { "name": "key", @@ -3700,26 +3943,56 @@ "arguments": [ { "command": "CREATE", - "name": ["key", "groupname", "id-or-$"], - "type": ["key", "string", "string"], + "name": [ + "key", + "groupname", + "id-or-$" + ], + "type": [ + "key", + "string", + "string" + ], "optional": true }, { "command": "SETID", - "name": ["key", "groupname", "id-or-$"], - "type": ["key", "string", "string"], + "name": [ + "key", + "groupname", + "id-or-$" + ], + "type": [ + "key", + "string", + "string" + ], "optional": true }, { "command": "DESTROY", - "name": ["key", "groupname"], - "type": ["key", "string"], + "name": [ + "key", + "groupname" + ], + "type": [ + "key", + "string" + ], "optional": true }, { "command": "DELCONSUMER", - "name": ["key", "groupname", "consumername"], - "type": ["key", "string", "string"], + "name": [ + "key", + "groupname", + "consumername" + ], + "type": [ + "key", + "string", + "string" + ], "optional": true } ], @@ -3732,8 +4005,14 @@ "arguments": [ { "command": "GROUP", - "name": ["group", "consumer"], - "type": ["string", "string"] + "name": [ + "group", + "consumer" + ], + "type": [ + "string", + "string" + ] }, { "command": "COUNT", @@ -3750,13 +4029,17 @@ { "name": "noack", "type": "enum", - "enum": ["NOACK"], + "enum": [ + "NOACK" + ], "optional": true }, { "name": "streams", "type": "enum", - "enum": ["STREAMS"] + "enum": [ + "STREAMS" + ] }, { "name": "key", @@ -3838,12 +4121,16 @@ }, { "name": "force", - "enum": ["FORCE"], + "enum": [ + "FORCE" + ], "optional": true }, { "name": "justid", - "enum": ["JUSTID"], + "enum": [ + "JUSTID" + ], "optional": true } ], @@ -3863,8 +4150,16 @@ "type": "string" }, { - "name": ["start", "end", 
"count"], - "type": ["string", "string", "integer"], + "name": [ + "start", + "end", + "count" + ], + "type": [ + "string", + "string", + "integer" + ], "optional": true }, { From f14e1bbd5fd6765f3ea38780e3c891fea69e83f7 Mon Sep 17 00:00:00 2001 From: stefanoarnone <57237824+stefanoarnone@users.noreply.github.com> Date: Fri, 1 Nov 2019 11:15:41 +0100 Subject: [PATCH 0260/1457] added Redily to tools (#1205) --- tools.json | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/tools.json b/tools.json index e707e3c356..617456b560 100644 --- a/tools.json +++ b/tools.json @@ -77,6 +77,13 @@ "description": "Parse Redis dump.rdb files, Analyze Memory, and Export Data to JSON.", "authors": ["srithedabbler"] }, + { + "name": "Redily", + "language": "Javascript", + "url": "https://www.redily.app", + "description": "An intuitive, cross-platform Redis GUI Client built in Electron.", + "authors": ["stefano_arnone"] + }, { "name": "Rdb-parser", "language": "Javascript", From dd07d66e58e70314bfebebd54a30d1fa2dc45bc6 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Tue, 5 Nov 2019 15:14:28 +0200 Subject: [PATCH 0261/1457] Minor typo fix --- commands/setbit.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/setbit.md b/commands/setbit.md index 770d4abb41..e0b440b56c 100644 --- a/commands/setbit.md +++ b/commands/setbit.md @@ -43,7 +43,7 @@ the entire bitmap. Bitmaps are not an actual data type, but a set of bit-oriented operations defined on the String type (for more information refer to the [Bitmaps section of the Data Types Introduction page][ti]). This means that -bitmaps can be used with in string command, and most importantly with `SET` and +bitmaps can be used with string commands, and most importantly with `SET` and `GET`. Because Redis' strings are binary-safe, a bitmap is trivially encoded as a bytes From 0b453467b3a7790bb0ba61e2561a6d2e500a2ece Mon Sep 17 00:00:00 2001 From: Stefan Miller Date: Wed, 6 Nov 2019 23:26:13 +0100 Subject: [PATCH 0262/1457] update commands.json: unique index parameters SWAPDB (#1206) --- commands.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands.json b/commands.json index 16d2845b72..a9b3750645 100644 --- a/commands.json +++ b/commands.json @@ -2914,11 +2914,11 @@ "summary": "Swaps two Redis databases", "arguments": [ { - "name": "index", + "name": "index1", "type": "integer" }, { - "name": "index", + "name": "index2", "type": "integer" } ], From 7651f66b0d05c8528e70cd7db5fa546138d038f9 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Tue, 12 Nov 2019 18:28:43 +0200 Subject: [PATCH 0263/1457] Removes claim of equality to last specified ID --- commands/xreadgroup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/xreadgroup.md b/commands/xreadgroup.md index 78e7284d53..21fff4f720 100644 --- a/commands/xreadgroup.md +++ b/commands/xreadgroup.md @@ -53,7 +53,7 @@ The ID to specify in the **STREAMS** option when using `XREADGROUP` can be one of the following two: * The special `>` ID, which means that the consumer want to receive only messages that were *never delivered to any other consumer*. It just means, give me new messages. -* Any other ID, that is, 0 or any other valid ID or incomplete ID (just the millisecond time part), will have the effect of returning entries that are pending for the consumer sending the command with IDs equal or greater to the one provided. 
So basically if the ID is not `>`, then the command will just let the client access its pending entries: messages delivered to it, but not yet acknowledged. Note that in this case, both `BLOCK` and `NOACK` are ignored.

Like `XREAD` the `XREADGROUP` command can be used in a blocking way. There are no differences in this regard.

From b8a4524abee396eac54235f971f48879de520924 Mon Sep 17 00:00:00 2001
From: Dylan Thacker-Smith
Date: Thu, 14 Nov 2019 08:57:31 -0500
Subject: [PATCH 0264/1457] Reference module documentation pages rather than files from intro page (#1207)

* Reference module documentation pages rather than files from intro page

The filename references were confusing, since it wasn't clear what repo those files were in and those filenames weren't accurate anymore. Linking to pages makes it clearer.

* Use the same order for module links as on the main Documentation page.

This makes it clearer that it is referring to the same thing when coming from the main documentation page.
---
 topics/modules-intro.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/topics/modules-intro.md b/topics/modules-intro.md
index 9fb08c4dd1..c708d9d0ca 100644
--- a/topics/modules-intro.md
+++ b/topics/modules-intro.md
@@ -1,12 +1,12 @@
 Redis Modules: an introduction to the API
 ===
 
-The modules documentation is composed of the following files:
+The modules documentation is composed of the following pages:
 
-* `INTRO.md` (this file). An overview about Redis Modules system and API. It's a good idea to start your reading here.
-* `API.md` is generated from module.c top comments of RedisModule functions. It is a good reference in order to understand how each function works.
-* `TYPES.md` covers the implementation of native data types into modules.
-* `BLOCK.md` shows how to write blocking commands that will not reply immediately, but will block the client, without blocking the Redis server, and will provide a reply whenever will be possible.
+* Introduction to Redis modules (this file). An overview about Redis Modules system and API. It's a good idea to start your reading here.
+* [Implementing native data types](/topics/modules-native-types) covers the implementation of native data types into modules.
+* [Blocking operations](/topics/modules-blocking-ops) shows how to write blocking commands that will not reply immediately, but will block the client, without blocking the Redis server, and will provide a reply whenever possible.
+* [Redis modules API reference](/topics/modules-api-ref) is generated from module.c top comments of RedisModule functions. It is a good reference in order to understand how each function works.
 
 Redis modules make it possible to extend Redis functionality using external
 modules, implementing new Redis commands at a speed and with features

From 150f806b9109fdbf69d10023313703d16a03090f Mon Sep 17 00:00:00 2001
From: "Helmut K. C.
Tessarek" Date: Thu, 14 Nov 2019 15:17:17 -0500 Subject: [PATCH 0265/1457] fix minor error in eval doc (#1208) The section 'Using Lua scripting in RESP3 mode' includes the following sentence: The RESP3 protocol is semantically more powerful, however most scripts are ok with using just RESP3. However, the sentence makes no sense. I suspect that it was a typo, which changes the meaning though. The sentence is probably supposed to go like this: The RESP3 protocol is semantically more powerful, however most scripts are ok with using just RESP2. --- commands/eval.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/eval.md b/commands/eval.md index 20bc4d1de8..faa84fa7f5 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -609,7 +609,7 @@ new protocol using the `HELLO` command: this way the connection is put in RESP3 mode. In this mode certain commands, like for instance `HGETALL`, reply with a new data type (the Map data type in this specific case). The RESP3 protocol is semantically more powerful, however most scripts are ok -with using just RESP3. +with using just RESP2. The Lua engine always assumes to run in RESP2 mode when talking with Redis, so whatever the connection that is invoking the `EVAL` or `EVALSHA` command From 6f558e3b8feae4882d819d4053110a87efab6d2b Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Mon, 18 Nov 2019 20:03:12 +0200 Subject: [PATCH 0266/1457] Adds a couple of hyphens (#1210) Fixes #1209 --- topics/sentinel.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/sentinel.md b/topics/sentinel.md index 123f309013..6bdec35116 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -39,7 +39,7 @@ Obtaining Sentinel --- The current version of Sentinel is called **Sentinel 2**. It is a rewrite of -the initial Sentinel implementation using stronger and simpler to predict +the initial Sentinel implementation using stronger and simpler-to-predict algorithms (that are explained in this documentation). A stable release of Redis Sentinel is shipped since Redis 2.8. From 4cd19bb1c3e3e00a8ff62a1dec5c2c6bcf9bc4bf Mon Sep 17 00:00:00 2001 From: Wenzel Lowe Date: Tue, 19 Nov 2019 08:10:05 -0800 Subject: [PATCH 0267/1457] Update docs to include MKSTREAM usage (#1211) * Update streams-intro.md to include MKSTREAM usage See #1007 * Update xgroup.md * Minor editing * Minor edits --- commands/xgroup.md | 7 +++++++ topics/streams-intro.md | 9 +++++++-- 2 files changed, 14 insertions(+), 2 deletions(-) diff --git a/commands/xgroup.md b/commands/xgroup.md index 15fb235d46..5738303dc1 100644 --- a/commands/xgroup.md +++ b/commands/xgroup.md @@ -25,6 +25,13 @@ consumer group already exists, the command returns a `-BUSYGROUP` error. Otherwise the operation is performed and OK is returned. There are no hard limits to the number of consumer groups you can associate to a given stream. +If the specified stream doesn't exist when creating a group, an error will be +returned. You can use the optional `MKSTREAM` subcommand as the last argument +after the `ID` to automatically create the stream, if it doesn't exist. 
Note +that if the stream is created in this way it will have a length of 0: + + XGROUP CREATE mystream consumer-group-name $ MKSTREAM + A consumer can be destroyed completely by using the following form: XGROUP DESTROY mystream consumer-group-name diff --git a/topics/streams-intro.md b/topics/streams-intro.md index b89e1e44d0..86ee3ab663 100644 --- a/topics/streams-intro.md +++ b/topics/streams-intro.md @@ -231,10 +231,15 @@ Assuming I have a key `mystream` of type stream already existing, in order to cr OK ``` -Note: _Currently it is not possible to create consumer groups for non-existing streams, however it is possible that in the short future we'll add an option to the **XGROUP** command in order to create an empty stream in such cases._ - As you can see in the command above when creating the consumer group we have to specify an ID, which in the example is just `$`. This is needed because the consumer group, among the other states, must have an idea about what message to serve next at the first consumer connecting, that is, what is the current *last message ID* when the group was just created? If we provide `$` as we did, then only new messages arriving in the stream from now on will be provided to the consumers in the group. If we specify `0` instead the consumer group will consume *all* the messages in the stream history to start with. Of course, you can specify any other valid ID. What you know is that the consumer group will start delivering messages that are greater than the ID you specify. Because `$` means the current greatest ID in the stream, specifying `$` will have the effect of consuming only new messages. +`XGROUP CREATE` also supports creating the stream automatically, if it doesn't exist, using the optional `MKSTREAM` subcommand as the last argument: + +``` +> XGROUP CREATE newstream mygroup $ MKSTREAM +OK +``` + Now that the consumer group is created we can immediately start trying to read messages via the consumer group, by using the **XREADGROUP** command. We'll read from the consumers, that we will call Alice and Bob, to see how the system will return different messages to Alice and Bob. **XREADGROUP** is very similar to **XREAD** and provides the same **BLOCK** option, otherwise it is a synchronous command. However there is a *mandatory* option that must be always specified, which is **GROUP** and has two arguments: the name of the consumer group, and the name of the consumer that is attempting to read. The option **COUNT** is also supported and is identical to the one in **XREAD**. From a3fbe6383c84124987e7839f84869ae04ff61d5f Mon Sep 17 00:00:00 2001 From: Angus Pearson Date: Wed, 20 Nov 2019 19:26:33 +0000 Subject: [PATCH 0268/1457] Add Cluster note to `RENAME` to indicate that it has reduced functionality in Redis Cluster (#1212) * Add Cluster note to to indicate that it has reduced functionality in Redis Cluster * Update RENAME, RENAMENX use @history (#1182) --- commands/rename.md | 6 +++++- commands/renamenx.md | 7 ++++++- 2 files changed, 11 insertions(+), 2 deletions(-) diff --git a/commands/rename.md b/commands/rename.md index a86f001c03..999c1a3bf5 100644 --- a/commands/rename.md +++ b/commands/rename.md @@ -2,7 +2,11 @@ Renames `key` to `newkey`. It returns an error when `key` does not exist. If `newkey` already exists it is overwritten, when this happens `RENAME` executes an implicit `DEL` operation, so if the deleted key contains a very big value it may cause high latency even if `RENAME` itself is usually a constant-time operation. 
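As a hypothetical illustration of both points above — the overwrite semantics and the hash-slot requirement — note that two keys sharing a hash tag always map to the same slot, so a rename between them works in cluster mode as well:

```
127.0.0.1:6379> SET {user:1}:name "Alice"
OK
127.0.0.1:6379> RENAME {user:1}:name {user:1}:nickname
OK
127.0.0.1:6379> GET {user:1}:nickname
"Alice"
```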
-**Note:** Before Redis 3.2.0, an error is returned if source and destination names are the same. +In Cluster mode, both `key` and `newkey` must be in the same **hash slot**, meaning that in practice only keys that have the same hash tag can be reliably renamed in cluster. + +@history + +* `<= 3.2.0`: Before Redis 3.2.0, an error is returned if source and destination names are the same. @return diff --git a/commands/renamenx.md b/commands/renamenx.md index 8fa6395b96..e60c208a37 100644 --- a/commands/renamenx.md +++ b/commands/renamenx.md @@ -1,7 +1,12 @@ Renames `key` to `newkey` if `newkey` does not yet exist. It returns an error when `key` does not exist. -**Note:** Before Redis 3.2.0, an error is returned if source and destination names are the same. +In Cluster mode, both `key` and `newkey` must be in the same **hash slot**, meaning that in practice only keys that have the same hash tag can be reliably renamed in cluster. + + +@history + +* `<= 3.2.0`: Before Redis 3.2.0, an error is returned if source and destination names are the same. @return From c2c75dd782fa55e501ce0bfa90d7c70ca8513564 Mon Sep 17 00:00:00 2001 From: Stefan Miller Date: Mon, 25 Nov 2019 17:33:38 +0100 Subject: [PATCH 0269/1457] Update clients.json (#1213) Go Client based on RESP3. --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index 2b33d815b5..457fa4a26d 100644 --- a/clients.json +++ b/clients.json @@ -193,6 +193,15 @@ "description": "A Redis client focused on streaming, with support for a print-like API, pipelining, Pub/Sub, and connection pooling.", "authors": ["stephensearles"] }, + + { + "name": "go-resp3", + "language": "Go", + "repository": "https://github.com/d024441/go-resp3", + "description": "A Redis Go client implementation based on the Redis RESP3 protocol.", + "authors": [], + "active": true + }, { "name": "hedis", From 66e7fe4631c4c8af3c1072d7200533a2364a00a2 Mon Sep 17 00:00:00 2001 From: Origin Date: Wed, 27 Nov 2019 23:10:25 +0800 Subject: [PATCH 0270/1457] Update clients.json (#1214) update `tedis` meta info --- clients.json | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/clients.json b/clients.json index 457fa4a26d..7613a46412 100644 --- a/clients.json +++ b/clients.json @@ -812,8 +812,9 @@ { "name": "tedis", "language": "Node.js", - "repository": "https://github.com/myour-cc/tedis", - "description": "Tedis is a redis client developed for nodejs platform. Its name was inspired by the Java platform jedis and the development language was typescript. Therefore, Tedis is named as Tedis", + "repository": "https://github.com/silkjs/tedis", + "url": "https://tedis.silkjs.org", + "description": "Tedis is a redis client developed for Node.js . 
Its name was inspired by the Jedis and TypeScript.",
     "authors": ["dasoncheng"],
     "recommended": true,
     "active": true

From bfeca4ccbcf6a6d8bcfb56f64b3983f11b5aadb8 Mon Sep 17 00:00:00 2001
From: Itamar Haber
Date: Wed, 4 Dec 2019 02:13:33 +0200
Subject: [PATCH 0271/1457] Documents streams keyspace events (#1215)

---
 topics/notifications.md | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/topics/notifications.md b/topics/notifications.md
index e82a8ba637..369e83287b 100644
--- a/topics/notifications.md
+++ b/topics/notifications.md
@@ -77,9 +77,10 @@ following table:
     s     Set commands
     h     Hash commands
     z     Sorted set commands
+    t     Stream commands
     x     Expired events (events generated every time a key expires)
     e     Evicted events (events generated when a key is evicted for maxmemory)
-    A     Alias for g$lshzxe, so that the "AKE" string means all the events.
+    A     Alias for g$lshztxe, so that the "AKE" string means all the events.
 
 At least `K` or `E` should be present in the string, otherwise no event
 will be delivered regardless of the rest of the string.
@@ -128,6 +129,14 @@ Different commands generate different kind of events according to the following
 * `ZREMBYSCORE` generates a single `zrembyscore` event. When the resulting sorted set is empty and the key is generated, an additional `del` event is generated.
 * `ZREMBYRANK` generates a single `zrembyrank` event. When the resulting sorted set is empty and the key is generated, an additional `del` event is generated.
 * `ZINTERSTORE` and `ZUNIONSTORE` respectively generate `zinterstore` and `zunionstore` events. In the special case the resulting sorted set is empty, and the key where the result is stored already exists, a `del` event is generated since the key is removed.
+* `XADD` generates an `xadd` event, possibly followed by an `xtrim` event when used with the `MAXLEN` subcommand.
+* `XDEL` generates a single `xdel` event even when multiple entries are deleted.
+* `XGROUP CREATE` generates an `xgroup-create` event.
+* `XGROUP DELCONSUMER` generates an `xgroup-delconsumer` event.
+* `XGROUP DESTROY` generates an `xgroup-destroy` event.
+* `XGROUP SETID` generates an `xgroup-setid` event.
+* `XSETID` generates an `xsetid` event.
+* `XTRIM` generates an `xtrim` event.
 * Every time a key with a time to live associated is removed from the data set because it expired, an `expired` event is generated.
 * Every time a key is evicted from the data set in order to free memory as a result of the `maxmemory` policy, an `evicted` event is generated.

From 846dd58d777f82c5086e8fb9f45fede6b6275f5f Mon Sep 17 00:00:00 2001
From: "Helmut K. C. Tessarek"
Date: Thu, 5 Dec 2019 19:19:29 -0500
Subject: [PATCH 0272/1457] add redis-stats to tools.json (#1217)

---
 tools.json | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/tools.json b/tools.json
index 617456b560..8ce09c651f 100644
--- a/tools.json
+++ b/tools.json
@@ -693,5 +693,13 @@
     "description": "Start a REST-like API service for your Redis database, without writing a single line of code.",
     "url": "https://github.com/XD-DENG/rediseen",
     "authors": ["XiaodongDENG1"]
+  },
+
+  {
+    "name": "redis-stats",
+    "language": "PHP",
+    "repository": "https://github.com/tessus/redis-stats",
+    "description": "A lightweight dashboard to show statistics about your Redis server.
Flushing databases is available when set in config.", + "authors": [] } ] From 7a4998c5faf7c778aa2328c9bd09e4d2bdb6c4ad Mon Sep 17 00:00:00 2001 From: Stefan Miller Date: Thu, 12 Dec 2019 00:24:10 +0100 Subject: [PATCH 0273/1457] update commands.json GEODIST: change unit type to enum (#1221) --- commands.json | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/commands.json b/commands.json index a9b3750645..7812b0e64c 100644 --- a/commands.json +++ b/commands.json @@ -1015,7 +1015,13 @@ }, { "name": "unit", - "type": "string", + "type": "enum", + "enum": [ + "m", + "km", + "ft", + "mi" + ], "optional": true } ], From 83c73f2841fadffbbbf7d73b2b8dc82608939618 Mon Sep 17 00:00:00 2001 From: Kyle Banker Date: Thu, 12 Dec 2019 00:12:27 -0700 Subject: [PATCH 0274/1457] Multiple list encoding options no longer supported. (#1220) * Multiple list encoding options no longer supported. See https://github.com/antirez/redis/commit/02bb515a094c081fcbc3e33c60a5dbff440eb447 * Update hash encoding directives --- topics/memory-optimization.md | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/topics/memory-optimization.md b/topics/memory-optimization.md index eadf1bfc55..35b3b8fefd 100644 --- a/topics/memory-optimization.md +++ b/topics/memory-optimization.md @@ -8,10 +8,8 @@ Since Redis 2.2 many data types are optimized to use less space up to a certain This is completely transparent from the point of view of the user and API. Since this is a CPU / memory trade off it is possible to tune the maximum number of elements and maximum element size for special encoded types using the following redis.conf directives. - hash-max-zipmap-entries 512 (hash-max-ziplist-entries for Redis >= 2.6) - hash-max-zipmap-value 64 (hash-max-ziplist-value for Redis >= 2.6) - list-max-ziplist-entries 512 - list-max-ziplist-value 64 + hash-max-ziplist-entries 512 + hash-max-ziplist-value 64 zset-max-ziplist-entries 128 zset-max-ziplist-value 64 set-max-intset-entries 512 From d2d593e121ea75c4f26c371907fe5b95d2a7ee24 Mon Sep 17 00:00:00 2001 From: Stefan Miller Date: Sat, 14 Dec 2019 15:46:46 +0100 Subject: [PATCH 0275/1457] update commands.json SETBIT: change value type to integer (#1222) * update commands.json GEODIST: change unit type to enum * update commands.json SETBIT: change value type to integer --- commands.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands.json b/commands.json index 7812b0e64c..05fea94bf9 100644 --- a/commands.json +++ b/commands.json @@ -2544,7 +2544,7 @@ }, { "name": "value", - "type": "string" + "type": "integer" } ], "since": "2.2.0", From 207e09f2098b8e2a8fcefbcb0847b2241e528c69 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Sat, 14 Dec 2019 20:14:39 +0200 Subject: [PATCH 0276/1457] Moves some commands to a new 'bit' category This moves the 6 bit-related commands from the 'string' to a new category. The motivation is giving more exposure for these lesser- known capabilities. 
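For readers unfamiliar with the commands this new category groups, a short hypothetical session sketches the core ones:

```
127.0.0.1:6379> SETBIT mykey 7 1
(integer) 0
127.0.0.1:6379> GETBIT mykey 7
(integer) 1
127.0.0.1:6379> BITCOUNT mykey
(integer) 1
127.0.0.1:6379> BITPOS mykey 1
(integer) 7
```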
--- commands.json | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/commands.json b/commands.json index 05fea94bf9..0102c805fb 100644 --- a/commands.json +++ b/commands.json @@ -57,7 +57,7 @@ } ], "since": "2.6.0", - "group": "string" + "group": "bit" }, "BITFIELD": { "summary": "Perform arbitrary bitfield integer operations on strings", @@ -119,7 +119,7 @@ } ], "since": "3.2.0", - "group": "string" + "group": "bit" }, "BITOP": { "summary": "Perform bitwise operations between strings", @@ -140,7 +140,7 @@ } ], "since": "2.6.0", - "group": "string" + "group": "bit" }, "BITPOS": { "summary": "Find first bit set or clear in a string", @@ -166,7 +166,7 @@ } ], "since": "2.8.7", - "group": "string" + "group": "bit" }, "BLPOP": { "summary": "Remove and get the first element in a list, or block until one is available", @@ -1220,7 +1220,7 @@ } ], "since": "2.2.0", - "group": "string" + "group": "bit" }, "GETRANGE": { "summary": "Get a substring of the string stored at a key", @@ -2548,7 +2548,7 @@ } ], "since": "2.2.0", - "group": "string" + "group": "bit" }, "SETEX": { "summary": "Set the value and expiration of a key", From 36bece70803b4b69800b5c83df018d0cee6f0279 Mon Sep 17 00:00:00 2001 From: Simon Prickett Date: Mon, 16 Dec 2019 11:13:52 -0800 Subject: [PATCH 0277/1457] A few improvements to the phrasing in the FAQ. (#1224) --- topics/faq.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/topics/faq.md b/topics/faq.md index 589e3b07f0..c44a7c64a4 100644 --- a/topics/faq.md +++ b/topics/faq.md @@ -25,12 +25,12 @@ To give you a few examples (all obtained using 64-bit instances): * 1 Million small Keys -> String Value pairs use ~ 85MB of memory. * 1 Million Keys -> Hash value, representing an object with 5 fields, use ~ 160 MB of memory. -To test your use case is trivial using the `redis-benchmark` utility to generate random data sets and check with the `INFO memory` command the space used. +Testing your use case is trivial. Use the `redis-benchmark` utility to generate random data sets then check the space used with the `INFO memory` command. 64-bit systems will use considerably more memory than 32-bit systems to store the same keys, especially if the keys and values are small. This is because pointers take 8 bytes in 64-bit systems. But of course the advantage is that you can have a lot of memory in 64-bit systems, so in order to run large Redis servers a 64-bit system is more or less required. The alternative is sharding. -## I like Redis's high level operations and features, but I don't like that it takes everything in memory and I can't have a dataset larger the memory. Plans to change this? +## I like Redis's high level operations and features, but I don't like that it keeps everything in memory and I can't have a dataset larger than memory. Are there any plans to change this? In the past the Redis developers experimented with Virtual Memory and other systems in order to allow larger than RAM datasets, but after all we are very happy if we can do one thing well: data served from memory, disk used for storage. So for now there are no plans to create an on disk backend for Redis. Most of what Redis is, after all, is a direct result of its current design. @@ -77,11 +77,11 @@ usage, using the `maxmemory` option in the configuration file to put a limit to the memory Redis can use. 
If this limit is reached Redis will start to reply with an error to write
commands (but will continue to accept read-only commands), or you can configure
it to evict keys when the max memory limit
-is reached in the case you are using Redis for caching.
+is reached in the case where you are using Redis for caching.
 
 We have detailed documentation in case you plan to use [Redis as an LRU
 cache](/topics/lru-cache).
 
-The INFO command will report the amount of memory Redis is using so you can
+The `INFO` command reports the amount of memory Redis is using so you can
 write scripts that monitor your Redis servers checking for critical conditions
 before they are reached.
 
@@ -119,7 +119,7 @@ available values.
 
 ## Are Redis on-disk-snapshots atomic?
 
-Yes, redis background saving process is always forked when the server is
+Yes, Redis background saving process is always forked when the server is
 outside of the execution of a command, so every command reported to be atomic
 in RAM is also atomic from the point of view of the disk snapshot.
 
@@ -139,10 +139,10 @@ You can find more information about using multiple Redis instances in the [Parti
 
 However with Redis 4.0 we started to make Redis more threaded. For now this is
 limited to deleting objects in the background, and to blocking commands
-implemented via Redis modules. For the next releases, the plan is to make Redis
+implemented via Redis modules. For future releases, the plan is to make Redis
 more and more threaded.
 
-## What is the maximum number of keys a single Redis instance can hold? and what the max number of elements in a Hash, List, Set, Sorted Set?
+## What is the maximum number of keys a single Redis instance can hold? and what is the max number of elements in a Hash, List, Set, Sorted Set?
 
 Redis can handle up to 2^32 keys, and was tested in practice to handle at
 least 250 million keys per instance.

From fd8b6d9f6138754a65f9e49c056ac43291d3541e Mon Sep 17 00:00:00 2001
From: antirez
Date: Wed, 18 Dec 2019 12:59:07 +0100
Subject: [PATCH 0278/1457] Revert "Update GEOHASH man page."

This reverts commit 9d3f8dce000c5d21b38eaf4020d0bc1bba5ee761.
---
 commands/geohash.md | 11 ++++------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/commands/geohash.md b/commands/geohash.md
index 6bdbe003f5..2517c3f28d 100644
--- a/commands/geohash.md
+++ b/commands/geohash.md
@@ -10,17 +10,14 @@ described in the [Wikipedia article](https://en.wikipedia.org/wiki/Geohash) and
 Geohash string properties
 ---
 
-The command returns 10 characters Geohash strings, so only two bits of
-precision are lost compared to the Redis internal 52 bit representation, but
-this loss doesn't affect the precision in a sensible way: normally geohashes
-are cut to up to 8 characters, giving anyway a precision of +/- 0.019 km.
+The command returns 11 characters Geohash strings, so no precision is lost
+compared to the Redis internal 52 bit representation. The returned Geohashes
+have the following properties:
 
-1. Geo hashes can be shortened removing characters from the right. It will lose precision but will still point to the same area.
+1. They can be shortened removing characters from the right. It will lose precision but will still point to the same area.
 2. It is possible to use them in `geohash.org` URLs such as `http://geohash.org/`. This is an [example of such URL](http://geohash.org/sqdtr74hyu0).
 3. Strings with a similar prefix are nearby, but the contrary is not true, it is possible that strings with different prefixes are nearby too.
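A brief session illustrating the 11-character hashes and property 1 above. The coordinates are the usual Sicily example, and the exact hash shown assumes those inputs:

```
127.0.0.1:6379> GEOADD Sicily 13.361389 38.115556 "Palermo"
(integer) 1
127.0.0.1:6379> GEOHASH Sicily Palermo
1) "sqc8b49rny0"
```

Dropping trailing characters from `"sqc8b49rny0"` still identifies a (progressively larger) area containing Palermo.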
-Note: older versions of Redis used to return 11 characters instead of 10, however because of a bug the last character was not correct and was not helping in having a better precision. - @return @array-reply, specifically: From 418c5e4d169b637133a3d8d077973a38ae2a882a Mon Sep 17 00:00:00 2001 From: Marcelo Date: Fri, 20 Dec 2019 15:34:14 +0100 Subject: [PATCH 0279/1457] Adds aedis to the list of C++ redis clients. (#1225) --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index 7613a46412..8ada627124 100644 --- a/clients.json +++ b/clients.json @@ -939,6 +939,15 @@ "authors": [] }, + { + "name": "aedis", + "language": "C++", + "repository": "https://github.com/mzimbres/aedis", + "description": "An async redis client designed for simplicity and reliability.", + "authors": ["mzimbres"], + "active": true + }, + { "name": "libredis", "language": "C", From c882f1d3a767a096453489b00d7ec5bcc2eceaa7 Mon Sep 17 00:00:00 2001 From: Poga Po Date: Sat, 21 Dec 2019 00:17:17 +0800 Subject: [PATCH 0280/1457] add redis-percentile module (#1226) --- modules.json | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/modules.json b/modules.json index a9ebfef692..540f33cbca 100644 --- a/modules.json +++ b/modules.json @@ -228,7 +228,7 @@ ], "stars": 2 }, - + { "name": "RedisPushIptables", "license": "GPL-3.0", @@ -247,5 +247,16 @@ "xxlabaza" ], "stars": 7 + }, + + { + "name": "redis-percentile", + "license": "MIT", + "repository": "https://github.com/poga/redis-percentile", + "description": "Redis module for efficient percentile estimation of streaming or distributed data with t-digest algorithm.", + "authors": [ + "devpoga" + ], + "stars": 2 } ] From 7d688bf7c4231e563961ff1a7e1ffaddc9e6428e Mon Sep 17 00:00:00 2001 From: Stefan Miller Date: Tue, 24 Dec 2019 18:51:33 +0100 Subject: [PATCH 0281/1457] update commands.json CLIENT UNBLOCK: change client-id type to integer (#1230) --- commands.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands.json b/commands.json index 05fea94bf9..0db9af628d 100644 --- a/commands.json +++ b/commands.json @@ -376,7 +376,7 @@ "arguments": [ { "name": "client-id", - "type": "string" + "type": "integer" }, { "name": "unblock-type", From 6fb67f09ac8d2de4ed17962bc0fce8eb88b663ab Mon Sep 17 00:00:00 2001 From: Misha Kaletsky <15040698+mmkal@users.noreply.github.com> Date: Thu, 26 Dec 2019 16:18:57 +0000 Subject: [PATCH 0282/1457] fix: subscribe should be of type string (#1229) arguments for SUBSCRIBE were in the wrong format, which caused this bug: https://github.com/mmkal/handy-redis/issues/46 this change makes them match unsubscribe: https://github.com/antirez/redis-doc/blob/c882f1d3a767a096453489b00d7ec5bcc2eceaa7/commands.json#L2997-L3010 --- commands.json | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/commands.json b/commands.json index 0db9af628d..0950502615 100644 --- a/commands.json +++ b/commands.json @@ -2874,12 +2874,8 @@ "complexity": "O(N) where N is the number of channels to subscribe to.", "arguments": [ { - "name": [ - "channel" - ], - "type": [ - "string" - ], + "name": "channel", + "type": "string", "multiple": true } ], From 643b2917dfdaa283923d3752069c68ab60d33d3d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E7=8E=8B=E8=80=80?= Date: Wed, 1 Jan 2020 04:51:08 +0800 Subject: [PATCH 0283/1457] fix(vm): fix typo error in vm-intro and beautify styles (#1233) --- 
topics/internals-vm.md | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/topics/internals-vm.md b/topics/internals-vm.md index 05afed7e73..025128fc6d 100644 --- a/topics/internals-vm.md +++ b/topics/internals-vm.md @@ -49,12 +49,12 @@ This is how the Redis Object structure _robj_ looks like: As you can see there are a few fields about VM. The most important one is _storage_, that can be one of this values: - * REDIS_VM_MEMORY: the associated value is in memory. - * REDIS_VM_SWAPPED: the associated values is swapped, and the value entry of the hash table is just set to NULL. - * REDIS_VM_LOADING: the value is swapped on disk, the entry is NULL, but there is a job to load the object from the swap to the memory (this field is only used when threaded VM is active). - * REDIS_VM_SWAPPING: the value is in memory, the entry is a pointer to the actual Redis Object, but there is an I/O job in order to transfer this value to the swap file. + * `REDIS_VM_MEMORY`: the associated value is in memory. + * `REDIS_VM_SWAPPED`: the associated values is swapped, and the value entry of the hash table is just set to NULL. + * `REDIS_VM_LOADING`: the value is swapped on disk, the entry is NULL, but there is a job to load the object from the swap to the memory (this field is only used when threaded VM is active). + * `REDIS_VM_SWAPPING`: the value is in memory, the entry is a pointer to the actual Redis Object, but there is an I/O job in order to transfer this value to the swap file. -If an object is swapped on disk (REDIS_VM_SWAPPED or REDIS_VM_LOADING), how do we know where it is stored, what type it is, and so forth? That's simple: the _vtype_ field is set to the original type of the Redis object swapped, while the _vm_ field (that is a _redisObjectVM_ structure) holds information about the location of the object. This is the definition of this additional structure: +If an object is swapped on disk (`REDIS_VM_SWAPPED` or `REDIS_VM_LOADING`), how do we know where it is stored, what type it is, and so forth? That's simple: the _vtype_ field is set to the original type of the Redis object swapped, while the _vm_ field (that is a _redisObjectVM_ structure) holds information about the location of the object. This is the definition of this additional structure: /* The VM object structure */ struct redisObjectVM { @@ -101,14 +101,14 @@ In order to transfer an object from memory to disk we need to perform the follow * Now that we know how many pages are required in the swap file, we need to find this number of contiguous free pages inside the swap file. This task is accomplished by the `vmFindContiguousPages` function. As you can guess this function may fail if the swap is full, or so fragmented that we can't easily find the required number of contiguous free pages. When this happens we just abort the swapping of the object, that will continue to live in memory. * Finally we can write the object on disk, at the specified position, just calling the function `vmWriteObjectOnSwap`. -As you can guess once the object was correctly written in the swap file, it is freed from memory, the storage field in the associated key is set to REDIS_VM_SWAPPED, and the used pages are marked as used in the page table. +As you can guess once the object was correctly written in the swap file, it is freed from memory, the storage field in the associated key is set to `REDIS_VM_SWAPPED`, and the used pages are marked as used in the page table. 
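As a minimal sketch of the page-count computation implied above — rounding an object's serialized size up to whole swap-file pages. The constant and function names here are illustrative, not the actual identifiers in the Redis source:

    #include <stddef.h>

    /* Hypothetical example: the real page size comes from the
     * vm-page-size configuration directive; 32 bytes is only a
     * plausible value for illustration. */
    #define EXAMPLE_VM_PAGE_SIZE 32

    /* Number of whole pages needed to store `bytes` bytes. */
    static size_t pages_needed(size_t bytes) {
        return (bytes + EXAMPLE_VM_PAGE_SIZE - 1) / EXAMPLE_VM_PAGE_SIZE;
    }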
Loading objects back in memory --- Loading an object from swap to memory is simpler, as we already know where the object is located and how many pages it is using. We also know the type of the object (the loading functions are required to know this information, as there is no header or any other information about the object type on disk), but this is stored in the _vtype_ field of the associated key as already seen above. -Calling the function `vmLoadObject` passing the key object associated to the value object we want to load back is enough. The function will also take care of fixing the storage type of the key (that will be REDIS_VM_MEMORY), marking the pages as freed in the page table, and so forth. +Calling the function `vmLoadObject` passing the key object associated to the value object we want to load back is enough. The function will also take care of fixing the storage type of the key (that will be `REDIS_VM_MEMORY`), marking the pages as freed in the page table, and so forth. The return value of the function is the loaded Redis Object itself, that we'll have to set again as value in the main hash table (instead of the NULL value we put in place of the object pointer when the value was originally swapped out). @@ -130,7 +130,7 @@ vmSwapOneObject acts performing the following steps: * The key space in inspected in order to find a good candidate for swapping (we'll see later what a good candidate for swapping is). * The associated value is transferred to disk, in a blocking way. - * The key storage field is set to REDIS_VM_SWAPPED, while the _vm_ fields of the object are set to the right values (the page index where the object was swapped, and the number of pages used to swap it). + * The key storage field is set to `REDIS_VM_SWAPPED`, while the _vm_ fields of the object are set to the right values (the page index where the object was swapped, and the number of pages used to swap it). * Finally the value object is freed and the value entry of the hash table is set to NULL. The function is called again and again until one of the following happens: there is no way to swap more objects because either the swap file is full or nearly all the objects are already transferred on disk, or simply the memory usage is already under the vm-max-memory parameter. @@ -223,11 +223,11 @@ This is how the `iojob` structure looks like: There are just three type of jobs that an I/O thread can perform (the type is specified by the `type` field of the structure): -* REDIS_IOJOB_LOAD: load the value associated to a given key from swap to memory. The object offset inside the swap file is `page`, the object type is `key->vtype`. The result of this operation will populate the `val` field of the structure. -* REDIS_IOJOB_PREPARE_SWAP: compute the number of pages needed in order to save the object pointed by `val` into the swap. The result of this operation will populate the `pages` field. -* REDIS_IOJOB_DO_SWAP: Transfer the object pointed by `val` to the swap file, at page offset `page`. +* `REDIS_IOJOB_LOAD`: load the value associated to a given key from swap to memory. The object offset inside the swap file is `page`, the object type is `key->vtype`. The result of this operation will populate the `val` field of the structure. +* `REDIS_IOJOB_PREPARE_SWAP`: compute the number of pages needed in order to save the object pointed by `val` into the swap. The result of this operation will populate the `pages` field. 
+* `REDIS_IOJOB_DO_SWAP`: Transfer the object pointed by `val` to the swap file, at page offset `page`. -The main thread delegates just the above three tasks. All the rest is handled by the main thread itself, for instance finding a suitable range of free pages in the swap file page table (that is a fast operation), deciding what object to swap, altering the storage field of a Redis object to reflect the current state of a value. +The main thread delegates just the above three tasks. All the rest is handled by the I/O thread itself, for instance finding a suitable range of free pages in the swap file page table (that is a fast operation), deciding what object to swap, altering the storage field of a Redis object to reflect the current state of a value. Non blocking VM as probabilistic enhancement of blocking VM --- @@ -260,7 +260,7 @@ There is something hard to solve about the interactions between our blocking and For instance while SORT BY is executed, a few keys are being loaded in a blocking manner by the sort command. At the same time, another client may request the same keys with a simple _GET key_ command, that will trigger the creation of an I/O job to load the key in background. -The only simple way to deal with this problem is to be able to kill I/O jobs in the main thread, so that if a key that we want to load or swap in a blocking way is in the REDIS_VM_LOADING or REDIS_VM_SWAPPING state (that is, there is an I/O job about this key), we can just kill the I/O job about this key, and go ahead with the blocking operation we want to perform. +The only simple way to deal with this problem is to be able to kill I/O jobs in the main thread, so that if a key that we want to load or swap in a blocking way is in the `REDIS_VM_LOADING` or `REDIS_VM_SWAPPING` state (that is, there is an I/O job about this key), we can just kill the I/O job about this key, and go ahead with the blocking operation we want to perform. This is not as trivial as it is. In a given moment an I/O job can be in one of the following three queues: From d7503e6d9708a0ceb66d65cbcda2af9cbeb6c7d1 Mon Sep 17 00:00:00 2001 From: Madelyn Olson <34459052+madolson@users.noreply.github.com> Date: Sat, 4 Jan 2020 08:17:09 -0800 Subject: [PATCH 0284/1457] Add documentation for cluster down read config (#1180) --- topics/cluster-tutorial.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index 98ec829faf..71ff6ddca1 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -224,6 +224,8 @@ as you continue reading. * **cluster-slave-validity-factor ``**: If set to zero, a slave will always try to failover a master, regardless of the amount of time the link between the master and the slave remained disconnected. If the value is positive, a maximum disconnection time is calculated as the *node timeout* value multiplied by the factor provided with this option, and if the node is a slave, it will not try to start a failover if the master link was disconnected for more than the specified amount of time. For example if the node timeout is set to 5 seconds, and the validity factor is set to 10, a slave disconnected from the master for more than 50 seconds will not try to failover its master. Note that any value different than zero may result in Redis Cluster to be unavailable after a master failure if there is no slave able to failover it. In that case the cluster will return back available only when the original master rejoins the cluster. 
* **cluster-migration-barrier ``**: Minimum number of slaves a master will remain connected with, for another slave to migrate to a master which is no longer covered by any slave. See the appropriate section about replica migration in this tutorial for more information.
* **cluster-require-full-coverage ``**: If this is set to yes, as it is by default, the cluster stops accepting writes if some percentage of the key space is not covered by any node. If the option is set to no, the cluster will still serve queries even if only requests about a subset of keys can be processed.
+* **cluster-allow-reads-when-down ``**: If this is set to no, as it is by default, a node in a Redis Cluster will stop serving all traffic when the cluster is marked as fail, either when a node can't reach a quorum of masters or full coverage is not met. This prevents reading potentially inconsistent data from a node that is unaware of changes in the cluster. This option can be set to yes to allow reads from a node during the fail state, which is useful for applications that want to prioritize read availability but still want to prevent inconsistent writes. It can also be used when using Redis Cluster with only one or two shards, as it allows the nodes to continue serving writes when a master fails but automatic failover is impossible.
+
 Creating and using a Redis Cluster
 ===

From 9a24f05246234e01f299703d5a39068cc9ea1195 Mon Sep 17 00:00:00 2001
From: Itamar Haber
Date: Sat, 4 Jan 2020 22:19:02 +0200
Subject: [PATCH 0285/1457] Documents the `KEEPTTL` option for `SET` (#1235)

---
 commands.json   | 8 ++++++++
 commands/set.md | 9 +++++++--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/commands.json b/commands.json
index 0950502615..3a2e0375f4 100644
--- a/commands.json
+++ b/commands.json
@@ -2525,6 +2525,14 @@
                     "XX"
                 ],
                 "optional": true
+            },
+            {
+                "name": "keepttl",
+                "type": "enum",
+                "enum": [
+                    "KEEPTTL"
+                ],
+                "optional": true
             }
         ],
         "since": "1.0.0",
diff --git a/commands/set.md b/commands/set.md
index d691e9e7f4..a4d2f411e1 100644
--- a/commands/set.md
+++ b/commands/set.md
@@ -4,13 +4,13 @@ Any previous time to live associated with the key is discarded on successful `SE
 
 ## Options
 
-Starting with Redis 2.6.12 `SET` supports a set of options that modify its
-behavior:
+The `SET` command supports a set of options that modify its behavior:
 
 * `EX` *seconds* -- Set the specified expire time, in seconds.
 * `PX` *milliseconds* -- Set the specified expire time, in milliseconds.
 * `NX` -- Only set the key if it does not already exist.
 * `XX` -- Only set the key if it already exists.
+* `KEEPTTL` -- Retain the time to live associated with the key.
 
 Note: Since the `SET` command options can replace `SETNX`, `SETEX`, `PSETEX`, it is possible that in future versions of Redis these three commands will be deprecated and finally removed.
 
@@ -19,6 +19,11 @@ Note: Since the `SET` command options can replace `SETNX`, `SETEX`, `PSETEX`, it
 
 @simple-string-reply: `OK` if `SET` was executed correctly.
 @nil-reply: a Null Bulk Reply is returned if the `SET` operation was not performed because the user specified the `NX` or `XX` option but the condition was not met.
 
+@history
+
+* `>= 2.6.12`: Added the `EX`, `PX`, `NX` and `XX` options.
+* `>= 6.0`: Added the `KEEPTTL` option.
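To ground the new option, a short hypothetical session follows (the remaining TTL shown is illustrative):

```
127.0.0.1:6379> SET mykey "Hello" EX 100
OK
127.0.0.1:6379> SET mykey "World" KEEPTTL
OK
127.0.0.1:6379> TTL mykey
(integer) 98
```

Without `KEEPTTL`, the second `SET` would discard the expiry and `TTL` would return -1.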
+ @examples ```cli From ad405818dae36e25a43269a01e0332f7b5c53146 Mon Sep 17 00:00:00 2001 From: Adam Date: Wed, 8 Jan 2020 23:36:57 +0800 Subject: [PATCH 0286/1457] add redlock module (#1237) --- modules.json | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/modules.json b/modules.json index 540f33cbca..5b24640400 100644 --- a/modules.json +++ b/modules.json @@ -258,5 +258,16 @@ "devpoga" ], "stars": 2 + }, + + { + "name": "redlock", + "license": "MIT", + "repository": "https://github.com/wujunwei/redlock", + "description": "Redis module for distributed lock without using LUA script,safe unlock for different redis client.", + "authors": [ + "wujunwei" + ], + "stars": 4 } ] From ec38114d3668c3d4587d045c9792ea5dac1ac069 Mon Sep 17 00:00:00 2001 From: Martin Forstner Date: Fri, 10 Jan 2020 15:52:40 +0100 Subject: [PATCH 0287/1457] add missing word (#1238) --- commands/xgroup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/xgroup.md b/commands/xgroup.md index 5738303dc1..f503c12267 100644 --- a/commands/xgroup.md +++ b/commands/xgroup.md @@ -32,7 +32,7 @@ that if the stream is created in this way it will have a length of 0: XGROUP CREATE mystream consumer-group-name $ MKSTREAM -A consumer can be destroyed completely by using the following form: +A consumer group can be destroyed completely by using the following form: XGROUP DESTROY mystream consumer-group-name From e20f1746efdbb653d997d76e74e68fdf7ee98800 Mon Sep 17 00:00:00 2001 From: Martin Forstner Date: Sun, 12 Jan 2020 16:12:35 +0100 Subject: [PATCH 0288/1457] fix typo (#1241) --- commands/xpending.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/xpending.md b/commands/xpending.md index aa8ee3734b..7fe4fdad19 100644 --- a/commands/xpending.md +++ b/commands/xpending.md @@ -22,7 +22,7 @@ explained in the [streams intro](/topics/streams-intro) and in the When `XPENDING` is called with just a key name and a consumer group name, it just outputs a summary about the pending messages in a given -consumer group. In the following example, we create a consumed group and +consumer group. In the following example, we create a consumer group and immediately create a pending message by reading from the group with `XREADGROUP`. From 2d3eb4ffbe2ada6e33297e25039583958515c45b Mon Sep 17 00:00:00 2001 From: Martin Forstner Date: Sun, 12 Jan 2020 16:13:20 +0100 Subject: [PATCH 0289/1457] fix typo (#1240) --- commands/xreadgroup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/xreadgroup.md b/commands/xreadgroup.md index 21fff4f720..e458ebf0bb 100644 --- a/commands/xreadgroup.md +++ b/commands/xreadgroup.md @@ -95,7 +95,7 @@ not complete, because it does not handle recovering after a crash. What will happen if we crash in the middle of processing messages, is that our messages will remain in the pending entries list, so we can access our history by giving `XREADGROUP` initially an ID of 0, and performing the same -loop. Once providing and ID of 0 the reply is an empty set of messages, we +loop. Once providing an ID of 0 the reply is an empty set of messages, we know that we processed and acknowledged all the pending messages: we can start to use `>` as ID, in order to get the new messages and rejoin the consumers that are processing new things. From 615b40e82f412f143e792b917eeb3ee3ae9f2571 Mon Sep 17 00:00:00 2001 From: filipe oliveira Date: Sun, 12 Jan 2020 17:14:13 +0200 Subject: [PATCH 0290/1457] [add] added Disque and RedisGears module info. 
Updated module stars. (#1239) --- modules.json | 100 ++++++++++++++++++++++++++++++--------------------- 1 file changed, 59 insertions(+), 41 deletions(-) diff --git a/modules.json b/modules.json index 5b24640400..c32b6356a2 100644 --- a/modules.json +++ b/modules.json @@ -1,4 +1,25 @@ [ + { + "name": "Disque", + "license": "AGPL-3.0", + "repository": "https://github.com/antirez/disque-module", + "description": "Disque, an in-memory, distributed job queue, ported as Redis module.", + "authors": [ + "antirez" + ], + "stars": 332 + }, + { + "name": "RedisGears", + "license": "Redis Source Available License", + "repository": "https://github.com/RedisGears/RedisGears", + "description": "Dynamic execution framework for your Redis data.", + "authors": [ + "MeirShpilraien", + "RedisLabs" + ], + "stars": 67 + }, { "name": "redis-roaring", "license": "MIT", @@ -7,7 +28,7 @@ "authors": [ "aviggiano" ], - "stars": 60 + "stars": 130 }, { "name": "redis-cell", @@ -17,18 +38,18 @@ "authors": [ "brandur" ], - "stars": 403 + "stars": 651 }, { "name": "RedisGraph", "license": "Redis Source Available License", - "repository": "https://github.com/RedisLabsModules/redis-graph", + "repository": "https://github.com/RedisGraph/RedisGraph", "description": "A graph database with a Cypher-based querying language using sparse adjacency matrices", "authors": [ "swilly22", "RedisLabs" ], - "stars": 401 + "stars": 884 }, { "name": "redis-tdigest", @@ -38,18 +59,18 @@ "authors": [ "usmanm" ], - "stars": 45 + "stars": 54 }, { "name": "RedisJSON", "license": "Redis Source Available License", - "repository": "https://github.com/RedisLabsModules/redisjson", + "repository": "https://github.com/RedisJSON/RedisJSON", "description": "A JSON data type for Redis", "authors": [ "itamarhaber", "RedisLabs" ], - "stars": 641 + "stars": 939 }, { "name": "RedisML", @@ -60,18 +81,18 @@ "shaynativ", "RedisLabs" ], - "stars": 212 + "stars": 286 }, { "name": "RediSearch", "license": "Redis Source Available License", - "repository": "https://github.com/RedisLabsModules/RediSearch", + "repository": "https://github.com/RediSearch/RediSearch", "description": "Full-Text search over Redis", "authors": [ "dvirsky", "RedisLabs" ], - "stars": 1323 + "stars": 1936 }, { "name": "topk", @@ -82,7 +103,7 @@ "itamarhaber", "RedisLabs" ], - "stars": 21 + "stars": 32 }, { "name": "countminsketch", @@ -93,18 +114,18 @@ "itamarhaber", "RedisLabs" ], - "stars": 31 + "stars": 39 }, { "name": "RedisBloom", "license": "Redis Source Available License", - "repository": "https://github.com/RedisLabsModules/redisbloom", + "repository": "https://github.com/RedisBloom/RedisBloom", "description": "Scalable Bloom filters", "authors": [ "mnunberg", "RedisLabs" ], - "stars": 136 + "stars": 473 }, { "name": "neural-redis", @@ -114,17 +135,17 @@ "authors": [ "antirez" ], - "stars": 2073 + "stars": 2160 }, { "name": "RedisTimeSeries", "license": "Redis Source Available License", - "repository": "https://github.com/RedisLabsModules/RedisTimeSeries", + "repository": "https://github.com/RedisTimeSeries/RedisTimeSeries", "description": "Time-series data structure for redis", "authors": [ "danni-m" ], - "stars": 186 + "stars": 310 }, { "name": "RedisAI", @@ -134,7 +155,7 @@ "authors": [ "lantiga" ], - "stars": 61 + "stars": 289 }, { "name": "ReDe", @@ -144,7 +165,7 @@ "authors": [ "daTokenizer" ], - "stars": 23 + 
"stars": 36 }, { "name": "commentDis", @@ -154,7 +175,7 @@ "authors": [ "daTokenizer" ], - "stars": 6 + "stars": 9 }, { "name": "redis-cuckoofilter", @@ -164,7 +185,7 @@ "authors": [ "kristoff-it" ], - "stars": 66 + "stars": 123 }, { "name": "cthulhu", @@ -174,7 +195,7 @@ "authors": [ "sklivvz" ], - "stars": 117 + "stars": 139 }, { "name": "Session Gate", @@ -184,7 +205,7 @@ "authors": [ "f0rmiga" ], - "stars": 32 + "stars": 45 }, { "name": "rediSQL", @@ -195,9 +216,8 @@ "siscia", "RedBeardLab" ], - "stars": 613 + "stars": 1125 }, - { "name": "lqrm", "license": "BSD", @@ -205,19 +225,19 @@ "description": "A Laravel compatible queue driver for Redis that supports reliable blocking pop from FIFO and scheduled queues.", "authors": [ "halaei" -], + ], "stars": 4 }, - { "name": "redis-rating", - "license" : "MIT", + "license": "MIT", "repository": "https://github.com/poga/redis-rating", "description": "Estimate actual rating from postive/negative ratings", - "authors": ["devpoga"], + "authors": [ + "devpoga" + ], "stars": 14 }, - { "name": "smartcache", "license": "AGPL-3.0", @@ -226,18 +246,18 @@ "authors": [ "fcerbell" ], - "stars": 2 + "stars": 5 }, - { "name": "RedisPushIptables", "license": "GPL-3.0", "repository": "https://github.com/limithit/RedisPushIptables", "description": "RedisPushIptables is used to update firewall rules to reject the IP addresses for a specified amount of time or forever reject.", - "authors": ["Gandalf"], - "stars": 16 + "authors": [ + "Gandalf" + ], + "stars": 19 }, - { "name": "redis-fpn", "license": "Apache 2.0", @@ -246,9 +266,8 @@ "authors": [ "xxlabaza" ], - "stars": 7 + "stars": 8 }, - { "name": "redis-percentile", "license": "MIT", @@ -257,9 +276,8 @@ "authors": [ "devpoga" ], - "stars": 2 + "stars": 8 }, - { "name": "redlock", "license": "MIT", @@ -268,6 +286,6 @@ "authors": [ "wujunwei" ], - "stars": 4 + "stars": 28 } -] +] \ No newline at end of file From 4404c30f4f121c27c85c87a09706e2f87c1545d6 Mon Sep 17 00:00:00 2001 From: lifeblood Date: Sat, 18 Jan 2020 01:33:00 +0800 Subject: [PATCH 0291/1457] Update tools.json (#1243) In order to add my Redis tool "Lua Redis Admin" --- tools.json | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/tools.json b/tools.json index 8ce09c651f..91919734d4 100644 --- a/tools.json +++ b/tools.json @@ -701,5 +701,12 @@ "repository": "https://github.com/tessus/redis-stats", "description": "A lightweight dashboard to show statistics about your Redis server. 
Flushing databases is available when set in config.", "authors": [] + }, + { + "name": "Lua Redis Admin", + "language": "Lua", + "repository": "https://github.com/lifeblood/lua-redis-admin", + "description": "Redis client tool, Redis web client, Redis web UI, openresty lor Lua ", + "authors": ["nbdanny"] } ] From 8a8299709164baf727dd948fa7432f9a52b220a7 Mon Sep 17 00:00:00 2001 From: sewenew Date: Sat, 18 Jan 2020 23:16:11 +0800 Subject: [PATCH 0292/1457] Update modules.json: add redis-protobuf module (#1244) --- modules.json | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/modules.json b/modules.json index c32b6356a2..582481beb6 100644 --- a/modules.json +++ b/modules.json @@ -287,5 +287,15 @@ "wujunwei" ], "stars": 28 + }, + { + "name": "redis-protobuf", + "license": "Apache-2.0", + "repository": "https://github.com/sewenew/redis-protobuf", + "description": "Redis module for reading and writing Protobuf messages", + "authors": [ + "sewenew" + ], + "stars": 30 } -] \ No newline at end of file +] From 5e673817f376b93bda6d55b92dd72b0e5f18a7b2 Mon Sep 17 00:00:00 2001 From: Stefanescu Marian Date: Wed, 22 Jan 2020 16:32:32 +0200 Subject: [PATCH 0293/1457] Fixing reference of "MULTI" to "EXEC" (#1245) --- topics/transactions.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/transactions.md b/topics/transactions.md index 2bbc8e6bba..5d263ef2c6 100644 --- a/topics/transactions.md +++ b/topics/transactions.md @@ -15,7 +15,7 @@ isolated operation. transaction is also atomic. The `EXEC` command triggers the execution of all the commands in the transaction, so if a client loses the connection to the server in the context of a -transaction before calling the `MULTI` command none of the operations +transaction before calling the `EXEC` command none of the operations are performed, instead if the `EXEC` command is called, all the operations are performed. 
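As a sketch of the behavior described above (key names are illustrative), a typical transaction looks like the following session; had the connection dropped before the final `EXEC`, neither `INCR` would have been performed:

    > MULTI
    OK
    > INCR foo
    QUEUED
    > INCR bar
    QUEUED
    > EXEC
    1) (integer) 1
    2) (integer) 1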
When using the [append-only file](/topics/persistence#append-only-file) Redis makes sure From fcb8fdbe8fcc91d2fb796ae7f1737c561400a839 Mon Sep 17 00:00:00 2001 From: virendradhankar <60283810+virendradhankar@users.noreply.github.com> Date: Sat, 25 Jan 2020 20:36:42 +0530 Subject: [PATCH 0294/1457] Adds viredis to the java clients (#1246) --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index 8ada627124..7ce89d64d6 100644 --- a/clients.json +++ b/clients.json @@ -1772,6 +1772,15 @@ "description": "A tiny and fast redis client for script boys.", "authors": [], "active": true + }, + + { + "name": "viredis", + "language": "Java", + "repository": "https://github.com/virendradhankar/viredis", + "description": "A simple and small redis client for java.", + "authors": [], + "active": true } ] From 52c976a600e75945fe14ab52608e77a9e7491c55 Mon Sep 17 00:00:00 2001 From: D G Starkweather Date: Sun, 2 Feb 2020 09:21:03 -0500 Subject: [PATCH 0295/1457] add reventis entry to modules.json file (#1247) * add reventis entry to modules.json file * Update stars Co-authored-by: Itamar Haber --- modules.json | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/modules.json b/modules.json index 582481beb6..c0a7ace5f0 100644 --- a/modules.json +++ b/modules.json @@ -297,5 +297,15 @@ "sewenew" ], "stars": 30 - } + }, + { + "name": "Reventis", + "license": "Redis Source Available License", + "repository": "https://github.com/starkdg/reventis", + "description": "Redis module for storing and querying spatio-temporal event data", + "authors": [ + "starkdg" + ], + "stars": 2 + } ] From 718970e9edad6de8b5e223f1953cd1537d737013 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Sun, 2 Feb 2020 16:55:48 +0200 Subject: [PATCH 0296/1457] Fixes the example of simplest module to compile (#1249) Fixes #1248 --- topics/modules-intro.md | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/topics/modules-intro.md b/topics/modules-intro.md index c708d9d0ca..8f874dd3ce 100644 --- a/topics/modules-intro.md +++ b/topics/modules-intro.md @@ -71,7 +71,8 @@ simple module that implements a command that outputs a random number. == REDISMODULE_ERR) return REDISMODULE_ERR; if (RedisModule_CreateCommand(ctx,"helloworld.rand", - HelloworldRand_RedisCommand) == REDISMODULE_ERR) + HelloworldRand_RedisCommand, "fast random", + 0, 0, 0) == REDISMODULE_ERR) return REDISMODULE_ERR; return REDISMODULE_OK; @@ -117,17 +118,19 @@ otherwise the module will segfault and the Redis instance will crash. The second function called, `RedisModule_CreateCommand`, is used in order to register commands into the Redis core. The following is the prototype: - int RedisModule_CreateCommand(RedisModuleCtx *ctx, const char *cmdname, - RedisModuleCmdFunc cmdfunc); + int RedisModule_CreateCommand(RedisModuleCtx *ctx, const char *name, + RedisModuleCmdFunc cmdfunc, const char *strflags, + int firstkey, int lastkey, int keystep); As you can see, most Redis modules API calls all take as first argument the `context` of the module, so that they have a reference to the module calling it, to the command and client executing a given command, and so forth. 
-To create a new command, the above function needs the context, the command -name, and the function pointer of the function implementing the command, -which must have the following prototype: +To create a new command, the above function needs the context, the command's +name, a pointer to the function implementing the command, the command's flags +and the positions of key names in the command's arguments. +The function that implements the command must have the following prototype: int mycommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc); From e0fb54518b634230eeeed1721a6cc5c7f8838ebc Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Wed, 5 Feb 2020 19:15:38 +0200 Subject: [PATCH 0297/1457] Renames bit group to bitmap to conform to internals --- commands.json | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/commands.json b/commands.json index 0102c805fb..efd2a2ce76 100644 --- a/commands.json +++ b/commands.json @@ -57,7 +57,7 @@ } ], "since": "2.6.0", - "group": "bit" + "group": "bitmap" }, "BITFIELD": { "summary": "Perform arbitrary bitfield integer operations on strings", @@ -119,7 +119,7 @@ } ], "since": "3.2.0", - "group": "bit" + "group": "bitmap" }, "BITOP": { "summary": "Perform bitwise operations between strings", @@ -140,7 +140,7 @@ } ], "since": "2.6.0", - "group": "bit" + "group": "bitmap" }, "BITPOS": { "summary": "Find first bit set or clear in a string", @@ -166,7 +166,7 @@ } ], "since": "2.8.7", - "group": "bit" + "group": "bitmap" }, "BLPOP": { "summary": "Remove and get the first element in a list, or block until one is available", @@ -1220,7 +1220,7 @@ } ], "since": "2.2.0", - "group": "bit" + "group": "bitmap" }, "GETRANGE": { "summary": "Get a substring of the string stored at a key", @@ -2548,7 +2548,7 @@ } ], "since": "2.2.0", - "group": "bit" + "group": "bitmap" }, "SETEX": { "summary": "Set the value and expiration of a key", From b8e928f200531cbf6f66282f20bcc45e534b8a98 Mon Sep 17 00:00:00 2001 From: Boris Date: Wed, 12 Feb 2020 15:18:35 +0100 Subject: [PATCH 0298/1457] Add RedisWebManager (Ruby) (#1251) * Add RedisWebManager (Ruby) * Remove authors --- tools.json | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/tools.json b/tools.json index 91919734d4..454f774c00 100644 --- a/tools.json +++ b/tools.json @@ -708,5 +708,12 @@ "repository": "https://github.com/lifeblood/lua-redis-admin", "description": "Redis client tool, Redis web client, Redis web UI, openresty lor Lua ", "authors": ["nbdanny"] + }, + { + "name": "RedisWebManager", + "language": "Ruby", + "repository": "https://github.com/OpenGems/redis_web_manager", + "description": "Web interface that allows you to manage easily your Redis instance (see keys, memory used, connected client, etc...).", + "authors": [] } ] From 5bde61698262d98f84c488cd1ad115f981f1dc46 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 14 Feb 2020 18:15:59 +0100 Subject: [PATCH 0299/1457] Client side caching, update the information to match the new implementation. --- topics/client-side-caching.md | 105 +++++++++++++++++++++++----------- 1 file changed, 73 insertions(+), 32 deletions(-) diff --git a/topics/client-side-caching.md b/topics/client-side-caching.md index e96d6eb6e6..b9774eb4c0 100644 --- a/topics/client-side-caching.md +++ b/topics/client-side-caching.md @@ -6,7 +6,7 @@ usually are distinct computers compared to the database nodes, in order to store some subset of the database information directly in the application side. 
Normally when some data is required, the application servers will ask the -database about such information, like in the following picture: +database about such information, like in the following diagram: +-------------+ +----------+ @@ -38,46 +38,62 @@ Since often the same small percentage of data are accessed very frequently this pattern can greatly reduce the latency for the application to get data and, at the same time, the load in the database side. +Moreover there are many datasets where items change very infrequently. +For instance most user posts in a social network are either immutable or +rarely edited by the user. Adding this to the fact that usually a small +percentage of the posts are very popular, either because a small set of users +have a lot of follower and/or because recent posts have a lot more +visibility, it is clear why such pattern can be very useful. + +Usually the two key advantages of client side caching are: + +1. Data is available with a very small latency. +2. The database system receives less queries, allowing to serve the same dataset with a smaller number of nodes. + ## There are only two big problems in computer science... A problem with the above pattern is how to invalidate the information that -the application is holding, in order to avoid presenting to the user stale -data. For example after the application above locally cached the user:1234 +the application is holding, in order to avoid presenting stale data to the +user. For example after the application above locally cached the user:1234 information, Alice may update her username to Flora. Yet the application may continue to serve the old username for user 1234. -Sometimes this problem is not a big deal, so the client will just use a +Sometimes, depending on the exact application we are modeling, this problem +is not a big deal, so the client will just use a fixed maximum "time to live" for the cached information. Once a given amount of time has elapsed, the information will no longer be considered valid. More complex -patterns, when using Redis, leverage Pub/Sub messages in order to +patterns, when using Redis, leverage the Pub/Sub system in order to send invalidation messages to clients listening. This can be made to work but is tricky and costly from the point of view of the bandwidth used, because often such patterns involve sending the invalidation messages to every client in the application, even if certain clients may not have any copy of the -invalidated data. +invalidated data. Moreover every application query altering the data +requires to use the `PUBLISH` command, costing the database more CPU time +to process this command. -Regardless of what schema is used, there is however a simple fact: many +Regardless of what schema is used, there is a simple fact: many very large applications implement some form of client side caching, because it is the next logical step to having a fast store or a fast cache server. -Once clients can retrieve an important amount of information without even -asking a networked server at all, but just accessing their local memory, -then it is possible to fetch more data per second (since many queries will -not hit the database or the cache at all) with much smaller latency. For this reason Redis 6 implements direct support for client side caching, in order to make this pattern much simpler to implement, more accessible, reliable and efficient. ## The Redis implementation of client side caching -The Redis client side caching support is called _Tracking_. 
It basically -consist in a few very simple ideas: +The Redis client side caching support is called _Tracking_, and has two modes: + +* In the default mode, the server remembers what keys a given client accessed, and send invalidation messages when the same keys are modified. This costs memory in the server side, but sends invalidation messages only for the set of keys that the client could have in memory. +* In the _broadcasting_ mode instead the server does not attempt to remember what keys a given client accessed, so this mode does not cost any memory at all in the server side. Instead clients subscribe to key prefixes such as `object:` or `user:`, and will receive a notification message every time a key matching such prefix is touched. + +To recap, for now let's forget for a moment about the broadcasting mode, to +focus on the first mode. We'll describe broadcasting later more in details. 1. Clients can enable tracking if they want. Connections start without tracking enabled. 2. When tracking is enabled, the server remembers what keys each client requested during the connection lifetime (by sending read commands about such keys). 3. When a key is modified by some client, or is evicted because it has an associated expire time, or evicted because of a _maxmemory_ policy, all the clients with tracking enabled that may have the key cached, are notified with an _invalidation message_. 4. When clients receive invalidation messages, they are required to remove the corresponding keys, in order to avoid serving stale data. -This is an example of the protocol (the actual details are very different as you'll discover reading this document till the end): +This is an example of the protocol: * Client 1 `->` Server: CLIENT TRACKING ON * Client 1 `->` Server: GET foo @@ -86,20 +102,15 @@ This is an example of the protocol (the actual details are very different as you * Client 2 `->` Server: SET foo SomeOtherValue * Server `->` Client 1: INVALIDATE "foo" -While this is the general idea, the actual implementation and the details are very different, because the vanilla implementation of what exposed above would be extremely inefficient. For instance a Redis instance may have 10k clients all caching 1 million keys each. In such situation Redis would be required to remember 10 billions distinct informations, including the key name itself, which could be quite expensive. Moreover once a client disconnects, there is to garbage collect all the no longer useful information associated with it. - -In order to make client side caching more viable the Redis actual -implementation uses the following ideas: - -* The keyspace is divided into a bit more than 16 millions caching slots. Given a key, the caching slot is obtained by taking the CRC64(key) modulo 16777216 (this basically means that just the lower 24 bits of the result are taken). -* The server remembers which client may have cached keys about a given caching slots. To do so we just need a table with 16 millions of entries (one for each caching slot), associated with a dictionary of all the clients that may have keys about it. This table is called the **Invalidation Table**. -* Inside the invalidation table we don't really need to store pointers to clients structures and do any garbage collection when the client disconnects: instead what we do is just storing client IDs (each Redis client has an unique numerical ID). If a client disconnects, the information will be incrementally garbage collected as caching slots are invalidated. 
- -This means that clients also have to organize their local cache according to the caching slots, so that when they receive an invalidation message about a given caching slot, such group of keys are no longer considered valid. - -Another advantage of caching slots, other than being more space efficient, is that, once the user memory in the server side in order to track client side information become too big, it is very simple to release some memory, just picking a random caching slot and evicting it, even if there was no actual modification hitting any key of such caching slot. +This looks great superficially, but if you think at 10k connected clients all +asking for millions of keys in the story of each long living connection, the +server would end storing too much information. For this reason Redis uses two +key ideas in order to limit the amount of memory used server side, and the +CPU cost of handling the data structures implementing the feature: -Note that by using 16 millions of caching slots, it is still possible to have plenty of keys per instance, with just a few keys hashing to the same caching slot: this means that invalidation messages will expire just a couple of keys in the average case, even if the instance has tens of millions of keys. +* The server remembers the list of clients that may have cached a given key in a single global table. This table is called the **Invalidation Table**. Such invalidation table can contain a maximum number of entries, if a new key is inserted, the server may evict an older entry by pretending that such key was modified (even if it was not), and sending an invalidation message to the clients. Doing so, it can reclaim the memory used for this key, even if this will force the clients having a local copy of the key to evict it. +* Inside the invalidation table we don't really need to store pointers to clients structures, that would force a garbage collection procedure when the client disconnects: instead what we do is just storing client IDs (each Redis client has an unique numerical ID). If a client disconnects, the information will be incrementally garbage collected as caching slots are invalidated. +* There is a single keys namespace, not divided by database numbers. So if a client is caching the key `foo` in database 2, and some other client changes the value of the key `foo` in database 3, an invalidation message will still be sent. This way we can ignore database numbers reducing both the memory usage and the implementation complexity. ## Two connections mode @@ -126,7 +137,7 @@ Now we can enable tracking from the data connection: ``` (Connection 2 -- data connection) -CLIENT TRACKING ON redirect 4 +CLIENT TRACKING on REDIRECT 4 +OK GET foo @@ -144,7 +155,7 @@ SET foo bar +OK ``` -As a result, the invalidations connection will receive a message that invalidates caching slot 1872974. That number is obtained by doing the CRC64("foo") taking the least 24 significant bits. +As a result, the invalidations connection will receive a message that invalidates the specified key. ``` (Connection 1 -- used for invalidations) @@ -153,12 +164,28 @@ $7 message $20 __redis__:invalidate -$7 -1872974 +*1 +$3 +foo ``` - The client will check if there are cached keys in such caching slot, and will evict the information that is no longer valid. +Note that the third element of the Pub/Sub message is not a single key but +is a Redis array with just a single element. 
Since we send an array, if there +are groups of keys to invalidate, we can do that in a single message. + +A very important thing to understand about client side caching used with +RESP2, and a Pub/Sub connection in order to read the invalidation messages, +is that using Pub/Sub is entirely a trick **in order to reuse old client +implementations**, but actually the message is not really sent a channel +and received by all the clients subscribed to it. Only the connection we +specified in the `REDIRECT` argument of the `CLIENT` command will actually +receive the Pub/Sub message, making the feature a lot more scalable. + +When RESP3 is used instead, invalidation messages are sent (either in the +same connection, or in the secondary connection when redirection is used) +as `push` messages (read the RESP3 specification for more information). + ## What tracking tracks As you can see clients do not need, by default, to tell the server what keys @@ -216,6 +243,20 @@ however in case the next command is `MULTI`, all the commands in the transaction will be tracked. Similarly in case of Lua scripts, all the commands executed by the script will be tracked. +## Broadcasting mode + +So far we described the first client side caching model that Redis implements. +There is another one, called broadcasting, that sees the problem from the +point of view of a different tradeoff, does not consume any memory on the +server side, but instead sends more invalidation messages to clients. +In this mode we have the following main behaviors: + +* Clients enable client side caching using the `BCAST` option, specifying one or more prefixes using the `PREFIX` option. For instance: `CLIENT TRACKING on REDIRECT 10 BCAST PREFIX object: PREFIX user:`. If no prefix is specified at all, the prefix is assumed to be the empty string, so the client will receive invalidation messages for every key that gets modified. Instead if one or more prefixes are used, only keys matching the one of the specified prefixes will be send in the invalidation messages. +* The server does not store anything in the invalidation table. Instead it only uses a different **Prefixes Table**, where each prefix is associated to a list of clients. +* Every time a key matching any of the prefixes is modified, all the clients subscribed to such prefix, will receive the invalidation message. +* The server will consume a CPU proportional to the number of registered prefixes. If you have just a few, it is hard to see any difference. With a big number of prefixes the CPU cost can become quite large. +* In this mode the server can perform the optimization of creating a single reply for all the clients subscribed to a given prefix, and send the same reply to all. This helps to lower the CPU usage. + ## When client side caching makes sense # Implementing client side caching in client libraries From 4ab012b4450a492645400db3d7d3e11cb2ee9cde Mon Sep 17 00:00:00 2001 From: Yossi Gottlieb Date: Mon, 17 Feb 2020 17:25:35 +0200 Subject: [PATCH 0300/1457] Add SSL/TLS support. 
(#1253) --- topics/encryption.md | 125 +++++++++++++++++++++++++++++++++++++++---- topics/rediscli.md | 10 ++++ topics/security.md | 8 ++- 3 files changed, 129 insertions(+), 14 deletions(-) diff --git a/topics/encryption.md b/topics/encryption.md index b1707df83c..d2a77af564 100644 --- a/topics/encryption.md +++ b/topics/encryption.md @@ -1,13 +1,120 @@ -Redis Encryption +TLS Support === -The idea of adding SSL support to Redis was proposed many times, however -currently we believe that given the small percentage of users requiring -SSL support, and the fact that each scenario tends to be different, using -a different "tunneling" strategy can be better. We may change the idea in the -future, but currently a good solution that may be suitable for many use cases -is to use the following project: +SSL/TLS is supported by Redis starting with version 6 as an optional feature +that needs to be enabled at compile time. -* [Spiped](http://www.tarsnap.com/spiped.html) is a utility for creating symmetrically encrypted and authenticated pipes between socket addresses, so that one may connect to one address (e.g., a UNIX socket on localhost) and transparently have a connection established to another address (e.g., a UNIX socket on a different system). +Getting Started +--- -The software is written in a similar spirit to Redis itself, it is a self-contained 4000 lines of C code utility that does a single thing well. +### Building + +To build with TLS support you'll need OpenSSL development libraries (e.g. +libssl-dev on Debian/Ubuntu). + +Run `make BUILD_TLS=yes`. + +### Tests + +To run Redis test suite with TLS, you'll need TLS support for TCL (i.e. +`tcl-tls` package on Debian/Ubuntu). + +1. Run `./utils/gen-test-certs.sh` to generate a root CA and a server + certificate. + +2. Run `./runtest --tls` or `./runtest-cluster --tls` to run Redis and Redis + Cluster tests in TLS mode. + +### Running manually + +To manually run a Redis server with TLS mode (assuming `gen-test-certs.sh` was +invoked so sample certificates/keys are available): + + ./src/redis-server --tls-port 6379 --port 0 \ + --tls-cert-file ./tests/tls/redis.crt \ + --tls-key-file ./tests/tls/redis.key \ + --tls-ca-cert-file ./tests/tls/ca.crt + +To connect to this Redis server with `redis-cli`: + + ./src/redis-cli --tls \ + --cert ./tests/tls/redis.crt \ + --key ./tests/tls/redis.key \ + --cacert ./tests/tls/ca.crt + +Certificate Configuration +--- + +In order to support TLS, Redis must be configured with a X.509 certificate and a +private key. In addition, it is necessary to specify a CA certificate bundle +file or path to be used as a trusted root when validating certificates. To +support DH based ciphers, a DH params file can also be configured. For example: + +``` +tls-cert-file /path/to/redis.crt +tls-key-file /path/to/redis.key +tls-ca-cert-file /path/to/ca.crt +tls-dh-params-file /path/to/redis.dh +``` + +TLS Listening Port +--- + +The `tls-port` configuration directive enables accepting SSL/TLS connections on +the specified port. This is **in addition** to listening on `port` for TCP +connections, so it is possible to access Redis on different ports using TLS and +non-TLS connections simultaneously. + +You may specify `port 0` to disable the non-TLS port completely. 
To enable only +TLS on the default Redis port, use: + +``` +port 0 +tls-port 6379 +``` + +Client Certificate Authentication +--- + +By default, Redis uses mutual TLS and requires clients to authenticate with a +valid certificate (authenticated against trusted root CAs specified by +`ca-cert-file` or `ca-cert-dir`). + +You may use `tls-auth-clients no` to disable client authentication. + +Replication +--- + +A Redis master server handles connecting clients and replica servers in the same +way, so the above `tls-port` and `tls-auth-clients` directives apply to +replication links as well. + +On the replica server side, it is necessary to specify `tls-replication yes` to +use TLS for outgoing connections to the master. + +Cluster +--- + +When Redis Cluster is used, use `tls-cluster yes` in order to enable TLS for the +cluster bus and cross-node connections. + +Sentinel +--- + +Sentinel inherits its networking configuration from the common Redis +configuration, so all of the above applies to Sentinel as well. + +When connecting to master servers, Sentinel will use the `tls-replication` +directive to determine if a TLS or non-TLS connection is required. + +Additional Configuration +--- + +Additional TLS configuration is available to control the choice of TLS protocol +versions, ciphers and cipher suites, etc. Please consult the self documented +`redis.conf` for more information. + +Limitations +--- + +I/O threading is currently not supported with TLS. diff --git a/topics/rediscli.md b/topics/rediscli.md index 06bca46caa..e42800f6b4 100644 --- a/topics/rediscli.md +++ b/topics/rediscli.md @@ -99,6 +99,16 @@ option and a valid URI: $ redis-cli -u redis://p%40ssw0rd@redis-16379.hosted.com:16379/0 ping PONG +## SSL/TLS + +By default, `redis-cli` uses a plain TCP connection to connect to Redis. +You may enable SSL/TLS using the `--tls` option, along with `--cacert` or +`--cacertdir` to configure a trusted root certificate bundle or directory. + +If the target server requires authentication using a client side certificate, +you can specify a certificate and a corresponding private key using `--cert` and +`--key`. + ## Getting input from other programs There are two ways you can use `redis-cli` in order to get the input from other diff --git a/topics/security.md b/topics/security.md index 4fb03cae0d..0b71fe30f0 100644 --- a/topics/security.md +++ b/topics/security.md @@ -98,13 +98,11 @@ The AUTH command, like every other Redis command, is sent unencrypted, so it does not protect against an attacker that has enough access to the network to perform eavesdropping. -Data encryption support +TLS support --- -Redis does not support encryption. In order to implement setups where -trusted parties can access a Redis instance over the internet or other -untrusted networks, an additional layer of protection should be implemented, -such as an SSL proxy. We recommend [spiped](http://www.tarsnap.com/spiped.html). +Redis has optional support for TLS on all communication channels, including +client connections, replication links and the Redis Cluster bus protocol. 
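For instance, assuming a server configured with the certificate files shown earlier, a client could verify the encrypted channel with something like the following (paths and host name are illustrative; the flags are the ones documented above):

    $ redis-cli --tls \
        --cert /path/to/redis.crt \
        --key /path/to/redis.key \
        --cacert /path/to/ca.crt \
        -h redis.example.internal ping
    PONG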
Disabling of specific commands --- From 961c32dbd2612cb0b485f162030fd892bc2e03ef Mon Sep 17 00:00:00 2001 From: Mark <40328786+MarkShen1992@users.noreply.github.com> Date: Tue, 18 Feb 2020 20:08:42 +0800 Subject: [PATCH 0301/1457] remove duplicated word: when (#1254) remove duplicated word: when --- topics/cluster-tutorial.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index 71ff6ddca1..31e1a7aa25 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -224,7 +224,7 @@ as you continue reading. * **cluster-slave-validity-factor ``**: If set to zero, a slave will always try to failover a master, regardless of the amount of time the link between the master and the slave remained disconnected. If the value is positive, a maximum disconnection time is calculated as the *node timeout* value multiplied by the factor provided with this option, and if the node is a slave, it will not try to start a failover if the master link was disconnected for more than the specified amount of time. For example if the node timeout is set to 5 seconds, and the validity factor is set to 10, a slave disconnected from the master for more than 50 seconds will not try to failover its master. Note that any value different than zero may result in Redis Cluster to be unavailable after a master failure if there is no slave able to failover it. In that case the cluster will return back available only when the original master rejoins the cluster. * **cluster-migration-barrier ``**: Minimum number of slaves a master will remain connected with, for another slave to migrate to a master which is no longer covered by any slave. See the appropriate section about replica migration in this tutorial for more information. * **cluster-require-full-coverage ``**: If this is set to yes, as it is by default, the cluster stops accepting writes if some percentage of the key space is not covered by any node. If the option is set to no, the cluster will still serve queries even if only requests about a subset of keys can be processed. -* **cluster-allow-reads-when-down ``**: If this is set to no, as it is by default, a node in a Redis Cluster will stop serving all traffic when when the cluster is marked as fail, either when a node can't reach a quorum of masters or full coverage is not met. This prevents reading potentially inconsistent data from a node that is unaware of changes in the cluster. This option can be set to yes to allow reads from a node during the fail state, which is useful for applications that want to prioritize read availability but still want to prevent inconsistent writes. It can also be used for when using Redis Cluster with only one or two shards, as it allows the nodes to continue serving writes when a master fails but automatic failover is impossible. +* **cluster-allow-reads-when-down ``**: If this is set to no, as it is by default, a node in a Redis Cluster will stop serving all traffic when the cluster is marked as fail, either when a node can't reach a quorum of masters or full coverage is not met. This prevents reading potentially inconsistent data from a node that is unaware of changes in the cluster. This option can be set to yes to allow reads from a node during the fail state, which is useful for applications that want to prioritize read availability but still want to prevent inconsistent writes. 
It can also be used for when using Redis Cluster with only one or two shards, as it allows the nodes to continue serving writes when a master fails but automatic failover is impossible. Creating and using a Redis Cluster From 27e90a93f2a2400d5911e8eca866031026045a1d Mon Sep 17 00:00:00 2001 From: Suo Lu Date: Tue, 18 Feb 2020 20:09:01 +0800 Subject: [PATCH 0302/1457] Listing other not logged commands by MONITOR (#1255) * Other not logged commands by MONITOR Other than admin commands ([1][2]), there are several not logged commands. [1] https://github.com/antirez/redis/commit/d6410ed19a4930895f2591eed224d5ec5449393a [2] https://github.com/antirez/redis-doc/commit/80129ef3a0fc0e48aeda73f9c9b790cf7bdb2636 * Update monitor.md Edits Co-authored-by: Itamar Haber --- commands/monitor.md | 17 ++++++++++++++--- 1 file changed, 14 insertions(+), 3 deletions(-) diff --git a/commands/monitor.md b/commands/monitor.md index 6f10a78a7e..5de6175e60 100644 --- a/commands/monitor.md +++ b/commands/monitor.md @@ -42,10 +42,17 @@ Manually issue the `QUIT` command to stop a `MONITOR` stream running via ## Commands not logged by MONITOR -For security concerns, certain special administration commands like `CONFIG` -are not logged into the `MONITOR` output. +Because of security concerns, all administrative commands are not logged +by `MONITOR`'s output. -## Cost of running `MONITOR` +Furthermore, the following commands are also not logged: + + * `AUTH` + * `EXEC` + * `HELLO` + * `QUIT` + +## Cost of running MONITOR Because `MONITOR` streams back **all** commands, its use comes at a cost. The following (totally unscientific) benchmark numbers illustrate what the cost @@ -81,3 +88,7 @@ Running more `MONITOR` clients will reduce throughput even more. **Non standard return value**, just dumps the received commands in an infinite flow. + +@history + +* `>=6.0`: `AUTH`, `EXEC`, `HELLO` and `QUIT` excluded from output. From 5c50700e79d530a59985684520fb433825328471 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 19 Feb 2020 19:23:12 +0100 Subject: [PATCH 0303/1457] More client side caching doc. --- topics/client-side-caching.md | 58 ++++++++++++++++++++++++++++++++--- 1 file changed, 54 insertions(+), 4 deletions(-) diff --git a/topics/client-side-caching.md b/topics/client-side-caching.md index b9774eb4c0..4361869677 100644 --- a/topics/client-side-caching.md +++ b/topics/client-side-caching.md @@ -257,14 +257,64 @@ In this mode we have the following main behaviors: * The server will consume a CPU proportional to the number of registered prefixes. If you have just a few, it is hard to see any difference. With a big number of prefixes the CPU cost can become quite large. * In this mode the server can perform the optimization of creating a single reply for all the clients subscribed to a given prefix, and send the same reply to all. This helps to lower the CPU usage. -## When client side caching makes sense +## Avoiding race conditions + +When implementing client side caching redirecting the invalidation messages +to a different connection, you should be aware that there is a possible +race condition. 
See the following example interaction, where we'll call +the data connection "D" and the invalidation connection "I": + + [D] client -> server: GET foo + [I] server <- client: Invalidate foo (somebody else touched it) + [D] server <- client: "bar" (the reply of "GET foo") + +As you can see, because the reply to the GET was slower to reach the +client, we received the invalidation message before the actual data that +is already no longer valid. So we'll keep serving a stale version of the +foo key. To avoid this problem, it is a good idea to populate the cache +when we send the command with a placeholder: + + Client cache: set the local copy of "foo" to "caching-in-progress" + [D] client-> server: GET foo. + [I] server <- client: Invalidate foo (somebody else touched it) + Client cahce: delete "foo" from the local cache. + [D] server <- client: "bar" (the reply of "GET foo") + Client cache: don't set "bar" since the entry for "foo" is missing. + +Such race condition is not possible when using a single connection for both +data and invalidation messages, since the order of the messages is always known +in that case. + +## What to do when losing connection with the server -# Implementing client side caching in client libraries +Similarly, if we lost the connection with the socket we use in order to +get the invalidation messages, we may end with stale data. In order to avoid +this problem, we need to do the following things: + +1. Make sure that if the connection is lost, the local cache is flushed. +2. Both when using RESP2 with Pub/Sub, or RESP3, ping the invalidation channel periodically (you can send PING commands even when the connection is in Pub/Sub mode!). If the connection looks broken and we are not able to receive ping backs, after a maximum amount of time, close the connection and flush the cache. ## What to cache -## Avoiding race conditions +Clients may want to run an internal statistics about the amount of times +a given cached key was actually served in a request, to understand in the +future what is good to cache. In general: + +* We don't want to cache much keys that change continuously. +* We don't want to cache much keys that are requested very rarely. +* We want to cache keys that are requested often and change at a reasonable rate. For an example of key not changing at a reasonable rate, think at a global counter that is continuously `INCR`emented. + +However simpler clients may just evict data using some random sampling just +remembering the last time a given cached value was served, trying to evict +keys that were not served recently. -## Limiting the amount of memory used by clients +## Other hitns about client libraries implementation + +* Handling TTLs: make sure you request also the key TTL and set the TTL in the local cache if you want to support caching keys with a TTL. +* Putting a max TTL in every key is a good idea, even if it had no TTL. This is a good protection against bugs or connection issues that would make the client having old data in the local copy. +* Limiting the amount of memory used by clients is absolutely needed. There must be a way to evict old keys when new ones are added. ## Limiting the amount of memory used by Redis + +Just make sure to configure a suitable value for the maxmimum number of keys remembered by Redis, or alternatively use the BCAST mode that consumes no memory at all in the Redis side. 
Note that the memory consumed by Redis when BCAST is not used, is proportional both to the number of keys tracked, and the number of clients requested such keys. + From 4c0fefdc42341196cee85935fc73e5c422a4edce Mon Sep 17 00:00:00 2001 From: Tom Christie Date: Thu, 20 Feb 2020 13:51:52 +0000 Subject: [PATCH 0304/1457] Toni Morrison (#1256) --- commands/xrange.md | 4 ++-- commands/xread.md | 2 +- commands/xrevrange.md | 4 ++-- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/commands/xrange.md b/commands/xrange.md index 231edf5e6a..34a831ebcf 100644 --- a/commands/xrange.md +++ b/commands/xrange.md @@ -122,7 +122,7 @@ sequence to obtain `1526985685298-1`, and continue our iteration: 2) 1) "name" 2) "Toni" 3) "surname" - 4) "Morris" + 4) "Morrison" 2) 1) 1526985712947-0 2) 1) "name" 2) "Agatha" @@ -177,7 +177,7 @@ their fields and values in the exact same order as `XADD` added them. ```cli XADD writers * name Virginia surname Woolf XADD writers * name Jane surname Austen -XADD writers * name Toni surname Morris +XADD writers * name Toni surname Morrison XADD writers * name Agatha surname Christie XADD writers * name Ngozi surname Adichie XLEN writers diff --git a/commands/xread.md b/commands/xread.md index 484ca3871f..ea0f311ecc 100644 --- a/commands/xread.md +++ b/commands/xread.md @@ -89,7 +89,7 @@ To continue iterating the two streams I'll call: 2) 1) "name" 2) "Toni" 3) "surname" - 4) "Morris" + 4) "Morrison" 2) 1) 1526985712947-0 2) 1) "name" 2) "Agatha" diff --git a/commands/xrevrange.md b/commands/xrevrange.md index 757693f6b5..2518c43335 100644 --- a/commands/xrevrange.md +++ b/commands/xrevrange.md @@ -51,7 +51,7 @@ be `1526985712946-18446744073709551615`, or just `18446744073709551615`: 2) 1) "name" 2) "Toni" 3) "surname" - 4) "Morris" + 4) "Morrison" 2) 1) 1526985685298-0 2) 1) "name" 2) "Jane" @@ -77,7 +77,7 @@ their fields and values in the exact same order as `XADD` added them. ```cli XADD writers * name Virginia surname Woolf XADD writers * name Jane surname Austen -XADD writers * name Toni surname Morris +XADD writers * name Toni surname Morrison XADD writers * name Agatha surname Christie XADD writers * name Ngozi surname Adichie XLEN writers From 0bc9e22fb3ccf8daf2d47427d658f2caf22854f0 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Thu, 20 Feb 2020 16:25:35 +0200 Subject: [PATCH 0305/1457] Update monitor.md --- commands/monitor.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/monitor.md b/commands/monitor.md index 5de6175e60..2cd62f23e8 100644 --- a/commands/monitor.md +++ b/commands/monitor.md @@ -91,4 +91,4 @@ flow. @history -* `>=6.0`: `AUTH`, `EXEC`, `HELLO` and `QUIT` excluded from output. +* `>=6.0`: `AUTH` excluded from the command's output. From 6734d1adff3cbb266c9b827ed2cd65023e2f5cdb Mon Sep 17 00:00:00 2001 From: Wen Hui Date: Tue, 25 Feb 2020 07:12:23 -0500 Subject: [PATCH 0306/1457] fixx typo in client side caching docs (#1258) --- topics/client-side-caching.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/client-side-caching.md b/topics/client-side-caching.md index 4361869677..4a7c3d9f59 100644 --- a/topics/client-side-caching.md +++ b/topics/client-side-caching.md @@ -277,7 +277,7 @@ when we send the command with a placeholder: Client cache: set the local copy of "foo" to "caching-in-progress" [D] client-> server: GET foo. [I] server <- client: Invalidate foo (somebody else touched it) - Client cahce: delete "foo" from the local cache. + Client cache: delete "foo" from the local cache. 
[D] server <- client: "bar" (the reply of "GET foo") Client cache: don't set "bar" since the entry for "foo" is missing. From 6927ef0c638f2fb516c9a8c61be33b83494fabb8 Mon Sep 17 00:00:00 2001 From: Nick Kirby Date: Sun, 1 Mar 2020 10:11:45 +0000 Subject: [PATCH 0307/1457] Improve grammar (#1259) --- topics/data-types-intro.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md index dbf3f23678..13b55cb591 100644 --- a/topics/data-types-intro.md +++ b/topics/data-types-intro.md @@ -2,7 +2,7 @@ An introduction to Redis data types and abstractions === Redis is not a *plain* key-value store, it is actually a *data structures server*, supporting different kinds of values. What this means is that, while in -traditional key-value stores you associated string keys to string values, in +traditional key-value stores you associate string keys to string values, in Redis the value is not limited to a simple string, but can also hold more complex data structures. The following is the list of all the data structures supported by Redis, which will be covered separately in this tutorial: @@ -30,7 +30,7 @@ by Redis, which will be covered separately in this tutorial: It's not always trivial to grasp how these data types work and what to use in order to solve a given problem from the [command reference](/commands), so this -document is a crash course to Redis data types and their most common patterns. +document is a crash course in Redis data types and their most common patterns. For all the examples we'll use the `redis-cli` utility, a simple but handy command-line utility, to issue commands against the Redis server. From f39ba2a6ec3f5e64bae43c444f48f231e9e60f57 Mon Sep 17 00:00:00 2001 From: Samuel Dion-Girardeau Date: Tue, 3 Mar 2020 05:34:19 -0500 Subject: [PATCH 0308/1457] Fix typos in Sorted set section (#1261) * s/leader board/leaderboard/ * s/an user/a user/ --- topics/data-types.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/data-types.md b/topics/data-types.md index 7b96dcb7e9..a44583079a 100644 --- a/topics/data-types.md +++ b/topics/data-types.md @@ -129,9 +129,9 @@ that are really hard to model in other kind of databases. With Sorted Sets you can: -* Take a leader board in a massive online game, where every time a new score +* Take a leaderboard in a massive online game, where every time a new score is submitted you update it using [ZADD](/commands/zadd). You can easily -take the top users using [ZRANGE](/commands/zrange), you can also, given an +take the top users using [ZRANGE](/commands/zrange), you can also, given a user name, return its rank in the listing using [ZRANK](/commands/zrank). Using ZRANK and ZRANGE together you can show users with a score similar to a given user. All very *quickly*. 
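A minimal sketch of the leaderboard pattern described in the patch above, using only the commands it mentions (scores and member names are invented):

    > ZADD leaderboard 1000 "alice"
    (integer) 1
    > ZADD leaderboard 2500 "bob"
    (integer) 1
    > ZRANGE leaderboard 0 -1 WITHSCORES
    1) "alice"
    2) "1000"
    3) "bob"
    4) "2500"
    > ZRANK leaderboard "bob"
    (integer) 1

For a top-N board one would typically read the range from the highest score down, e.g. with `ZREVRANGE`.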
From 233c4cdd96e48273bff51bf30d23b7ee75b39b53 Mon Sep 17 00:00:00 2001 From: Petr Kozelek <187445+footcha@users.noreply.github.com> Date: Wed, 4 Mar 2020 15:04:47 +0100 Subject: [PATCH 0309/1457] ++tools: Reliable Message Delivery (#1262) --- tools.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/tools.json b/tools.json index 454f774c00..e56a730539 100644 --- a/tools.json +++ b/tools.json @@ -715,5 +715,13 @@ "repository": "https://github.com/OpenGems/redis_web_manager", "description": "Web interface that allows you to manage easily your Redis instance (see keys, memory used, connected client, etc...).", "authors": [] + }, + + { + "name": "Reliable Message Delivery", + "language": "C#", + "repository": "https://github.com/Oriflame/RedisMessaging.ReliableDelivery", + "description": "This library provides reliability to delivering messages via Redis. By design Redis pub/sub message delivery is not reliable so it can happen that some messages can be lost due to network issues or they can be delivered more than once in case of Redis replication failure.", + "authors": ["PetrKozelek" , "OriflameSoftware"] } ] From 3a41bc371a894cf63af99046f61052614379f7b0 Mon Sep 17 00:00:00 2001 From: Stefan Miller <832146+stfnmllr@users.noreply.github.com> Date: Mon, 9 Mar 2020 13:34:43 +0100 Subject: [PATCH 0310/1457] Update repository link for go-resp3 client (#1264) --- clients.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 7ce89d64d6..3a122655a0 100644 --- a/clients.json +++ b/clients.json @@ -197,7 +197,7 @@ { "name": "go-resp3", "language": "Go", - "repository": "https://github.com/d024441/go-resp3", + "repository": "https://github.com/stfnmllr/go-resp3", "description": "A Redis Go client implementation based on the Redis RESP3 protocol.", "authors": [], "active": true From ed3e8bb4a66f1f8b12a02c5d1f3f52dffe7896f1 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 16 Mar 2020 13:39:09 +0100 Subject: [PATCH 0311/1457] ACL: document the Sentinel and replicas command set. --- topics/acl.md | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/topics/acl.md b/topics/acl.md index 985cb9aaad..c91ddc77dd 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -431,6 +431,30 @@ The external ACL file however is more powerful. You can do the following: Note that `CONFIG REWRITE` does not also trigger `ACL SAVE`: when you use an ACL file the configuration and the ACLs are handled separately. +## ACL rules for Sentinel and Replicas + +In case you don't want to provide Redis replicas and Redis Sentinel instances +full access to your Redis instances, the following is the set of commands +that must be allowed in order for everything to work correctly. + +For Sentinel, allow the user to access the following commands both in the master and replica instances: + +* AUTH, CLIENT, SUBSCRIBE, SCRIPT, PUBLISH, PING, INFO, MULTI, SLAVEOF, CONFIG, CLIENT, EXEC. 
+
+Sentinel does not need to access any key in the database, so the ACL rule would be the following (note: AUTH is not needed since it is always allowed):
+
+    ACL setuser sentinel-user >somepassword +client +subscribe +publish +ping +info +multi +slaveof +config +exec on
+
+Redis replicas require the following commands to be whitelisted on the master instance:
+
+* PSYNC, REPLCONF, PING
+
+No keys need to be accessed, so this translates to the following rules:
+
+    ACL setuser replica-user >somepassword +psync +replconf +ping on
+
+Note that you don't need to configure the replicas to allow the master to execute any set of commands: the master is always authenticated as the root user from the point of view of replicas.
+
 ## TODO list for this document
 
 * Make sure to specify that modules commands are ignored when adding/removing categories.
From 03c4ad501966606c0b1883486a876c15102279ae Mon Sep 17 00:00:00 2001
From: antirez
Date: Mon, 16 Mar 2020 13:45:04 +0100
Subject: [PATCH 0312/1457] Remove the old Sentinel doc.

---
 topics/sentinel-old.md | 587 -----------------------------------------
 1 file changed, 587 deletions(-)
 delete mode 100644 topics/sentinel-old.md

diff --git a/topics/sentinel-old.md b/topics/sentinel-old.md
deleted file mode 100644
index 46c8178c91..0000000000
--- a/topics/sentinel-old.md
+++ /dev/null
@@ -1,587 +0,0 @@
-Redis Sentinel Documentation
-===
-
-Redis Sentinel is a system designed to help managing Redis instances.
-It performs the following three tasks:
-
-* **Monitoring**. Sentinel constantly check if your master and slave instances are working as expected.
-* **Notification**. Sentinel can notify the system administrator, or another computer program, via an API, that something is wrong with one of the monitored Redis instances.
-* **Automatic failover**. If a master is not working as expected, Sentinel can start a failover process where a slave is promoted to master, the other additional slaves are reconfigured to use the new master, and the applications using the Redis server informed about the new address to use when connecting.
-
-Redis Sentinel is a distributed system, this means that usually you want to run
-multiple Sentinel processes across your infrastructure, and this processes
-will use agreement protocols in order to understand if a master is down and
-to perform the failover.
-
-Redis Sentinel is shipped as a stand-alone executable called `redis-sentinel`
-but actually it is a special execution mode of the Redis server itself, and
-can be also invoked using the `--sentinel` option of the normal `redis-sever`
-executable.
-
-**WARNING:** Redis Sentinel is currently a work in progress. This document
-describes how to use what we is already implemented, and may change as the
-Sentinel implementation evolves.
-
-Redis Sentinel is compatible with Redis 2.4.16 or greater, and redis 2.6.0-rc6 or greater.
-
-Obtaining Sentinel
----
-
-Currently Sentinel is part of the Redis *unstable* branch at GitHub.
-To compile it you need to clone the *unstable* branch and compile Redis.
-You'll see a `redis-sentinel` executable in your `src` directory.
-
-Alternatively you can use directly the `redis-server` executable itself,
-starting it in Sentinel mode as specified in the next paragraph.
- -Running Sentinel ---- - -If you are using the `redis-sentinel` executable (or if you have a symbolic -link with that name to the `redis-server` executable) you can run Sentinel -with the following command line: - - redis-sentinel /path/to/sentinel.conf - -Otherwise you can use directly the `redis-server` executable starting it in -Sentinel mode: - - redis-server /path/to/sentinel.conf --sentinel - -Both ways work the same. - -Configuring Sentinel ---- - -The Redis source distribution contains a file called `sentinel.conf` -that is a self-documented example configuration file you can use to -configure Sentinel, however a typical minimal configuration file looks like the -following: - - sentinel monitor mymaster 127.0.0.1 6379 2 - sentinel down-after-milliseconds mymaster 60000 - sentinel failover-timeout mymaster 900000 - sentinel can-failover mymaster yes - sentinel parallel-syncs mymaster 1 - - sentinel monitor resque 192.168.1.3 6380 4 - sentinel down-after-milliseconds resque 10000 - sentinel failover-timeout resque 900000 - sentinel can-failover resque yes - sentinel parallel-syncs resque 5 - -The first line is used to tell Redis to monitor a master called *mymaster*, -that is at address 127.0.0.1 and port 6379, with a level of agreement needed -to detect this master as failing of 2 sentinels (if the agreement is not reached -the automatic failover does not start). - -The other options are almost always in the form: - - sentinel - -And are used for the following purposes: - -* `down-after-milliseconds` is the time in milliseconds an instance should not be reachable (either does not reply to our PINGs or it is replying with an error) for a Sentinel starting to think it is down. After this time has elapsed the Sentinel will mark an instance as **subjectively down** (also known as -`SDOWN`), that is not enough to -start the automatic failover. However if enough instances will think that there -is a subjectively down condition, then the instance is marked as -**objectively down**. The number of sentinels that needs to agree depends on -the configured agreement for this master. -* `can-failover` tells this Sentinel if it should start a failover when an -instance is detected as objectively down (also called `ODOWN` for simplicity). -You may configure all the Sentinels to perform the failover if needed, or you -may have a few Sentinels used only to reach the agreement, and a few more -that are actually in charge to perform the failover. -* `parallel-syncs` sets the number of slaves that can be reconfigured to use -the new master after a failover at the same time. The lower the number, the -more time it will take for the failover process to complete, however if the -slaves are configured to serve old data, you may not want all the slaves to -resync at the same time with the new master, as while the replication process -is mostly non blocking for a slave, there is a moment when it stops to load -the bulk data from the master during a resync. You may make sure only one -slave at a time is not reachable by setting this option to the value of 1. - -The other options are described in the rest of this document and -documented in the example sentinel.conf file shipped with the Redis -distribution. - -SDOWN and ODOWN ---- - -As already briefly mentioned in this document Redis Sentinel has two different -concepts of *being down*, one is called a *Subjectively Down* condition -(SDOWN) and is a down condition that is local to a given Sentinel instance. 
-Another is called *Objectively Down* condition (ODOWN) and is reached when -enough Sentinels (at least the number configured as the `quorum` parameter -of the monitored master) have an SDOWN condition, and get feedback from -other Sentinels using the `SENTINEL is-master-down-by-addr` command. - -From the point of view of a Sentinel an SDOWN condition is reached if we -don't receive a valid reply to PING requests for the number of seconds -specified in the configuration as `is-master-down-after-milliseconds` -parameter. - -An acceptable reply to PING is one of the following: - -* PING replied with +PONG. -* PING replied with -LOADING error. -* PING replied with -MASTERDOWN error. - -Any other reply (or no reply) is considered non valid. - -Note that SDOWN requires that no acceptable reply is received for the whole -interval configured, so for instance if the interval is 30000 milliseconds -(30 seconds) and we receive an acceptable ping reply every 29 seconds, the -instance is considered to be working. - -The ODOWN condition **only applies to masters**. For other kind of instances -Sentinel don't require any agreement, so the ODOWN state is never reached -for slaves and other sentinels. - -The behavior of Redis Sentinel can be described by a set of rules that every -Sentinel follows. The complete behavior of Sentinel as a distributed system -composed of multiple Sentinels only results from this rules followed by -every single Sentinel instance. The following is the first set of rules. -In the course of this document more rules will be added in the appropriate -sections. - -**Sentinel Rule #1**: Every Sentinel sends a **PING** request to every known master, slave, and sentinel instance, every second. - -**Sentinel Rule #2**: An instance is Subjectively Down (**SDOWN**) if the latest valid reply to **PING** was received more than `down-after-milliseconds` milliseconds ago. Acceptable PING replies are: +PONG, -LOADING, -MASTERDOWN. - -**Sentinel Rule #3**: Every Sentinel is able to reply to the command **SENTINEL is-master-down-by-addr ` `**. This command replies true if the specified address is the one of a master instance, and the master is in **SDOWN** state. - -**Sentinel Rule #4**: If a master is in **SDOWN** condition, every other Sentinel also monitoring this master, is queried for confirmation of this state, every second, using the **SENTINEL is-master-down-by-addr** command. - -**Sentinel Rule #5**: If a master is in **SDOWN** condition, and enough other Sentinels (to reach the configured quorum) agree about the condition, with a reply to **SENTINEL is-master-down-by-addr** that is no older than five seconds, then the master is marked as Objectively Down (**ODOWN**). - -**Sentinel Rule #6**: Every Sentinel sends an **INFO** request to every known master and slave instance, one time every 10 seconds. If a master is in **ODOWN** condition, its slaves are asked for **INFO** every second instead of being asked every 10 seconds. - -**Sentinel Rule #7**: If the **first** INFO reply a Sentinel receives about a master shows that it is actually a slave, Sentinel will update the configuration to actually monitor the master reported by the INFO output instead. So it is safe to start Sentinel against slaves. 
- -Sentinels and Slaves auto discovery ---- - -While Sentinels stay connected with other Sentinels in order to reciprocally -check the availability of each other, and to exchange messages, you don't -need to configure the other Sentinel addresses in every Sentinel instance you -run, as Sentinel uses the Redis master Pub/Sub capabilities in order to -discover the other Sentinels that are monitoring the same master. - -This is obtained by sending *Hello Messages* into the channel named -`__sentinel__:hello`. - -Similarly you don't need to configure what is the list of the slaves attached -to a master, as Sentinel will auto discover this list querying Redis. - -**Sentinel Rule #8**: Every Sentinel publishes a message to every monitored master Pub/Sub channel `__sentinel__:hello`, every five seconds, announcing its presence with ip, port, runid, and ability to failover (accordingly to `can-failover` configuration directive in `sentinel.conf`). - -**Sentinel Rule #9**: Every Sentinel is subscribed to the Pub/Sub channel `__sentinel__:hello` of every master, looking for unknown sentinels. When new sentinels are detected, we add them as sentinels of this master. - -**Sentinel Rule #10**: Before adding a new sentinel to a master a Sentinel always checks if there is already a sentinel with the same runid or the same address (ip and port pair). In that case all the matching sentinels are removed, and the new added. - -Sentinel API -=== - -By default Sentinel runs using TCP port 26379 (note that 6379 is the normal -Redis port). Sentinels accept commands using the Redis protocol, so you can -use `redis-cli` or any other unmodified Redis client in order to talk with -Sentinel. - -There are two ways to talk with Sentinel: it is possible to directly query -it to check what is the state of the monitored Redis instances from its point -of view, to see what other Sentinels it knows, and so forth. - -An alternative is to use Pub/Sub to receive *push style* notifications from -Sentinels, every time some event happens, like a failover, or an instance -entering an error condition, and so forth. - -Sentinel commands ---- - -The following is a list of accepted commands: - -* **PING** this command simply returns PONG. -* **SENTINEL masters** show a list of monitored masters and their state. -* **SENTINEL slaves ``** show a list of slaves for this master, and their state. -* **SENTINEL is-master-down-by-addr ` `** return a two elements multi bulk reply where the first is 0 or 1 (0 if the master with that address is known and is in `SDOWN` state, 1 otherwise). The second element of the reply is the -*subjective leader* for this master, that is, the `runid` of the Redis -Sentinel instance that should perform the failover accordingly to the queried -instance. -* **SENTINEL get-master-addr-by-name ``** return the ip and port number of the master with that name. If a failover is in progress or terminated successfully for this master it returns the address and port of the promoted slave. -* **SENTINEL reset ``** this command will reset all the masters with matching name. The pattern argument is a glob-style pattern. The reset process clears any previous state in a master (including a failover in progress), and removes every slave and sentinel already discovered and associated with the master. - -Pub/Sub Messages ---- - -A client can use a Sentinel as it was a Redis compatible Pub/Sub server -(but you can't use `PUBLISH`) in order to `SUBSCRIBE` or `PSUBSCRIBE` to -channels and get notified about specific events. 
- -The channel name is the same as the name of the event. For instance the -channel named `+sdown` will receive all the notifications related to instances -entering an `SDOWN` condition. - -To get all the messages simply subscribe using `PSUBSCRIBE *`. - -The following is a list of channels and message formats you can receive using -this API. The first word is the channel / event name, the rest is the format of the data. - -Note: where *instance details* is specified it means that the following arguments are provided to identify the target instance: - - @ - -The part identifying the master (from the @ argument to the end) is optional -and is only specified if the instance is not a master itself. - -* **+reset-master** `` -- The master was reset. -* **+slave** `` -- A new slave was detected and attached. -* **+failover-state-reconf-slaves** `` -- Failover state changed to `reconf-slaves` state. -* **+failover-detected** `` -- A failover started by another Sentinel or any other external entity was detected (An attached slave turned into a master). -* **+slave-reconf-sent** `` -- The leader sentinel sent the `SLAVEOF` command to this instance in order to reconfigure it for the new slave. -* **+slave-reconf-inprog** `` -- The slave being reconfigured showed to be a slave of the new master ip:port pair, but the synchronization process is not yet complete. -* **+slave-reconf-done** `` -- The slave is now synchronized with the new master. -* **-dup-sentinel** `` -- One or more sentinels for the specified master were removed as duplicated (this happens for instance when a Sentinel instance is restarted). -* **+sentinel** `` -- A new sentinel for this master was detected and attached. -* **+sdown** `` -- The specified instance is now in Subjectively Down state. -* **-sdown** `` -- The specified instance is no longer in Subjectively Down state. -* **+odown** `` -- The specified instance is now in Objectively Down state. -* **-odown** `` -- The specified instance is no longer in Objectively Down state. -* **+failover-takedown** `` -- 25% of the configured failover timeout has elapsed, but this sentinel can't see any progress, and is the new leader. It starts to act as the new leader reconfiguring the remaining slaves to replicate with the new master. -* **+failover-triggered** `` -- We are starting a new failover as a the leader sentinel. -* **+failover-state-wait-start** `` -- New failover state is `wait-start`: we are waiting a fixed number of seconds, plus a random number of seconds before starting the failover. -* **+failover-state-select-slave** `` -- New failover state is `select-slave`: we are trying to find a suitable slave for promotion. -* **no-good-slave** `` -- There is no good slave to promote. Currently we'll try after some time, but probably this will change and the state machine will abort the failover at all in this case. -* **selected-slave** `` -- We found the specified good slave to promote. -* **failover-state-send-slaveof-noone** `` -- We are trying to reconfigure the promoted slave as master, waiting for it to switch. -* **failover-end-for-timeout** `` -- The failover terminated for timeout. If we are the failover leader, we sent a *best effort* `SLAVEOF` command to all the slaves yet to reconfigure. -* **failover-end** `` -- The failover terminated with success. All the slaves appears to be reconfigured to replicate with the new master. -* **switch-master** ` ` -- We are starting to monitor the new master, using the same name of the old one. 
The old master will be completely removed from our tables. -* **failover-abort-x-sdown** `` -- The failover was undone (aborted) because the promoted slave appears to be in extended SDOWN state. -* **-slave-reconf-undo** `` -- The failover aborted so we sent a `SLAVEOF` command to the specified instance to reconfigure it back to the original master instance. -* **+tilt** -- Tilt mode entered. -* **-tilt** -- Tilt mode exited. - -Sentinel failover -=== - -The failover process consists on the following steps: - -* Recognize that the master is in ODOWN state. -* Understand who is the Sentinel that should start the failover, called **The Leader**. All the other Sentinels will be **The Observers**. -* The leader selects a slave to promote to master. -* The promoted slave is turned into a master with the command **SLAVEOF NO ONE**. -* The observers see that a slave was turned into a master, so they know the failover started. **Note:** this means that any event that turns one of the slaves of a monitored master into a master (`SLAVEOF NO ONE` command) will be sensed as the start of a failover process. -* All the other slaves attached to the original master are configured with the **SLAVEOF** command in order to start the replication process with the new master. -* The leader terminates the failover process when all the slaves are reconfigured. It removes the old master from the table of monitored masters and adds the new master, *under the same name* of the original master. -* The observers detect the end of the failover process when all the slaves are reconfigured. They remove the old master from the table and start monitoring the new master, exactly as the leader does. - -The election of the Leader is performed using the same mechanism used to reach -the ODOWN state, that is, the **SENTINEL is-master-down-by-addr** command. -It returns the leader from the point of view of the queried Sentinel, we call -it the **Subjective Leader**, and is selected using the following rule: - -* We remove all the Sentinels that can't failover for configuration (this information is propagated using the Hello Channel to all the Sentinels). -* We remove all the Sentinels in SDOWN, disconnected, or with the last ping reply received more than `SENTINEL_INFO_VALIDITY_TIME` milliseconds ago (currently defined as 5 seconds). -* Of all the remaining instances, we get the one with the lowest `runid`, lexicographically (every Redis instance has a Run ID, that is an identifier of every single execution). - -For a Sentinel to sense to be the **Objective Leader**, that is, the Sentinel that should start the failover process, the following conditions are needed. - -* It thinks it is the subjective leader itself. -* It receives acknowledges from other Sentinels about the fact it is the leader: at least 50% plus one of all the Sentinels that were able to reply to the `SENTINEL is-master-down-by-addr` request should agree it is the leader, and additionally we need a total level of agreement at least equal to the configured quorum of the master instance that we are going to failover. - -Once a Sentinel things it is the Leader, the failover starts, but there is always a delay of five seconds plus an additional random delay. This is an additional layer of protection because if during this period we see another instance turning a slave into a master, we detect it as another instance staring the failover and turn ourselves into an observer instead. This is just a redundancy layer and should in theory never happen. 
- -**Sentinel Rule #11**: A **Good Slave** is a slave with the following requirements: -* It is not in SDOWN nor in ODOWN condition. -* We have a valid connection to it currently (not in DISCONNECTED state). -* Latest PING reply we received from it is not older than five seconds. -* Latest INFO reply we received from it is not older than five seconds. -* The latest INFO reply reported that the link with the master is down for no more than the time elapsed since we saw the master entering SDOWN state, plus ten times the configured `down_after_milliseconds` parameter. So for instance if a Sentinel is configured to sense the SDOWN condition after 10 seconds, and the master is down since 50 seconds, we accept a slave as a Good Slave only if the replication link was disconnected less than `50+(10*10)` seconds (two minutes and half more or less). -* It is not flagged as DEMOTE (see the section about resurrecting masters). - -**Sentinel Rule #12**: A **Subjective Leader** from the point of view of a Sentinel, is the Sentinel (including itself) with the lower runid monitoring a given master, that also replied to PING less than 5 seconds ago, reported to be able to do the failover via Pub/Sub hello channel, and is not in DISCONNECTED state. - -**Sentinel Rule #12**: If a master is down we ask `SENTINEL is-master-down-by-addr` to every other connected Sentinel as explained in Sentinel Rule #4. This command will also reply with the runid of the **Subjective Leader** from the point of view of the asked Sentinel. A given Sentinel believes to be the **Objective Leader** of a master if it is reported to be the subjective leader by N Sentinels (including itself), where: -* N must be equal or greater to the configured quorum for this master. -* N mast be equal or greater to the majority of the voters (`num_votres/2+1`), considering only the Sentinels that also reported the master to be down. - -**Sentinel Rule #13**: A Sentinel starts the failover as a **Leader** (that is, the Sentinel actually sending the commands to reconfigure the Redis servers) if the following conditions are true at the same time: -* The master is in ODOWN condition. -* The Sentinel is configured to perform the failover with `can-failover` set to yes. -* There is at least a Good Slave from the point of view of the Sentinel. -* The Sentinel believes to be the Objective Leader. -* There is no failover in progress already detected for this master. - -**Sentinel Rule #14**: A Sentinel detects a failover as an **Observer** (that is, the Sentinel just follows the failover generating the appropriate events in the log file and Pub/Sub interface, but without actively reconfiguring instances) if the following conditions are true at the same time: -* There is no failover already in progress. -* A slave instance of the monitored master turned into a master. -However the failover **will NOT be sensed as started if the slave instance turns into a master and at the same time the runid has changed** from the previous one. This means the instance turned into a master because of a restart, and is not a valid condition to consider it a slave election. - -**Sentinel Rule #15**: A Sentinel starting a failover as leader does not immediately starts it. It enters a state called **wait-start**, that lasts a random amount of time between 5 seconds and 15 seconds. During this time **Sentinel Rule #14** still applies: if a valid slave promotion is detected the failover as leader is aborted and the failover as observer is detected. 
- -End of failover ---- - -The failover process is considered terminated from the point of view of a -single Sentinel if: - -* The promoted slave is not in SDOWN condition. -* A slave was promoted as new master. -* All the other slaves are configured to use the new master. - -Note: Slaves that are in SDOWN state are ignored. - -Also the failover state is considered terminate if: - -* The promoted slave is not in SDOWN condition. -* A slave was promoted as new master. -* At least `failover-timeout` milliseconds elapsed since the last progress. - -The `failover-timeout` value can be configured in sentinel.conf for every -different slave. - -Note that when a leader terminates a failover for timeout, it sends a -`SLAVEOF` command in a best-effort way to all the slaves yet to be -configured, in the hope that they'll receive the command and replicate -with the new master eventually. - -**Sentinel Rule #16** A failover is considered complete if for a leader or observer if: -* One slave was promoted to master (and the Sentinel can detect that this actually happened via INFO output), and all the additional slaves are all configured to replicate with the new slave (again, the sentinel needs to sense it using the INFO output). -* There is already a correctly promoted slave, but the configured `failover-timeout` time has already elapsed without any progress in the reconfiguration of the additional slaves. In this case a leader sends a best effort `SLAVEOF` command is sent to all the not yet configured slaves. -In both the two above conditions the promoted slave **must be reachable** (not in SDOWN state), otherwise a failover is never considered to be complete. - -Leader failing during failover ---- - -If the leader fails when it has yet to promote the slave into a master, and it -fails in a way that makes it in SDOWN state from the point of view of the other -Sentinels, if enough Sentinels remained to reach the quorum the failover -will automatically continue using a new leader (the subjective leader of -all the remaining Sentinels will change because of the SDOWN state of the -previous leader). - -If the failover was already in progress and the slave -was already promoted, and possibly a few other slaves were already reconfigured, -an observer that is the new objective leader will continue the failover in -case no progresses are made for more than 25% of the time specified by the -`failover-timeout` configuration option. - -Note that this is safe as multiple Sentinels trying to reconfigure slaves -with duplicated SLAVEOF commands do not create any race condition, but at the -same time we want to be sure that all the slaves are reconfigured in the -case the original leader is no longer working. - -**Sentinel Rule #17** A Sentinel that is an observer for a failover in progress -will turn itself into a failover leader, continuing the configuration of the -additional slaves, if all the following conditions are true: -* A failover is in progress, and this Sentinel is an observer. -* It detects to be an objective leader (so likely the previous leader is no longer reachable by other sentinels). -* At least 25% of the configured `failover-timeout` has elapsed without any progress in the observed failover process. - -Promoted slave failing during failover ---- - -If the promoted slave has an active SDOWN condition, a Sentinel will never -sense the failover as terminated. 
- -Additionally if there is an *extended SDOWN condition* (that is an SDOWN that -lasts for more than ten times `down-after-milliseconds` milliseconds) the -failover is aborted (this happens for leaders and observers), and the master -starts to be monitored again as usually, so that a new failover can start with -a different slave in case the master is still failing. - -Note that when this happens it is possible that there are a few slaves already -configured to replicate from the (now failing) promoted slave, so when the -leader sentinel aborts a failover it sends a `SLAVEOF` command to all the -slaves already reconfigured or in the process of being reconfigured to switch -the configuration back to the original master. - -**Sentinel Rule #18** A Sentinel will consider the failover process aborted, both when acting as leader and when acting as an observer, in the following conditions are true: -* A failover is in progress and a slave to promote was already selected (or in the case of the observer was already detected as master). -* The promoted slave is in **Extended SDOWN** condition (continually in SDOWN condition for at least ten times the configured `down-after-milliseconds`). - -Resurrecting master ---- - -After the failover, at some point the old master may return back online. Starting with Redis 2.6.13 Sentinel is able to handle this condition by automatically reconfiguring the old master as a slave of the new master. - -This happens in the following way: - -* After the failover has started from the point of view of a Sentinel, either as a leader, or as an observer that detected the promotion of a slave, the old master is put in the list of slaves of the new master, but with a special `DEMOTE` flag (the flag can be seen in the `SENTINEL SLAVES` command output). -* Once the master is back online and it is possible to contact it again, if it still claims to be a master (from INFO output) Sentinels will send a `SLAVEOF` command trying to reconfigure it. Once the instance claims to be a slave, the `DEMOTE` flag is cleared. - -There is no single Sentinel in charge of turning the old master into a slave, so the process is resistant against failing sentinels. At the same time instances with the `DEMOTE` flag set are never selected as promotable slaves. - -In this specific case the `+slave` event is only generated only when the old master will report to be actually a slave again in its `INFO` output. - -**Sentinel Rule #19**: Once the failover starts (either as observer or leader), the old master is added as a slave of the new master, flagged as `DEMOTE`. - -**Sentinel Rule #20**: A slave instance claiming to be a master, and flagged as `DEMOTE`, is reconfigured via `SLAVEOF` every time a Sentinel receives an `INFO` output where the wrong role is detected. - -**Sentinel Rule #21**: The `DEMOTE` flag is cleared as soon as an `INFO` output shows the instance to report itself as a slave. - -Manual interactions ---- - -* TODO: Manually triggering a failover with SENTINEL FAILOVER. -* TODO: Pausing Sentinels with SENTINEL PAUSE, RESUME. - -The failback process ---- - -* TODO: Sentinel does not perform automatic Failback. -* TODO: Document correct steps for the failback. - -Clients configuration update ---- - -Work in progress. 
- -TILT mode ---- - -Redis Sentinel is heavily dependent on the computer time: for instance in -order to understand if an instance is available it remembers the time of the -latest successful reply to the PING command, and compares it with the current -time to understand how old it is. - -However if the computer time changes in an unexpected way, or if the computer -is very busy, or the process blocked for some reason, Sentinel may start to -behave in an unexpected way. - -The TILT mode is a special "protection" mode that a Sentinel can enter when -something odd is detected that can lower the reliability of the system. -The Sentinel timer interrupt is normally called 10 times per second, so we -expect that more or less 100 milliseconds will elapse between two calls -to the timer interrupt. - -What a Sentinel does is to register the previous time the timer interrupt -was called, and compare it with the current call: if the time difference -is negative or unexpectedly big (2 seconds or more) the TILT mode is entered -(or if it was already entered the exit from the TILT mode postponed). - -When in TILT mode the Sentinel will continue to monitor everything, but: - -* It stops acting at all. -* It starts to reply negatively to `SENTINEL is-master-down-by-addr` requests as the ability to detect a failure is no longer trusted. - -If everything appears to be normal for 30 second, the TILT mode is exited. - -Handling of -BUSY state ---- - -(Warning: Yet not implemented) - -The -BUSY error is returned when a script is running for more time than the -configured script time limit. When this happens before triggering a fail over -Redis Sentinel will try to send a "SCRIPT KILL" command, that will only -succeed if the script was read-only. - -Notifications via user script ---- - -Work in progress. - -Suggested setup ---- - -Work in progress. - -APPENDIX A - Implementation and algorithms -=== - -Duplicate Sentinels removal ---- - -In order to reach the configured quorum we absolutely want to make sure that -the quorum is reached by different physical Sentinel instances. Under -no circumstance we should get agreement from the same instance that for some -reason appears to be two or multiple distinct Sentinel instances. - -This is enforced by an aggressive removal of duplicated Sentinels: every time -a Sentinel sends a message in the Hello Pub/Sub channel with its address -and runid, if we can't find a perfect match (same runid and address) inside -the Sentinels table for that master, we remove any other Sentinel with the same -runid OR the same address. And later add the new Sentinel. - -For instance if a Sentinel instance is restarted, the Run ID will be different, -and the old Sentinel with the same IP address and port pair will be removed. - -Selection of the Slave to promote ---- - -If a master has multiple slaves, the slave to promote to master is selected -checking the slave priority (a new configuration option of Redis instances -that is propagated via INFO output, still not implemented), and picking the -one with lower priority value (it is an integer similar to the one of the -MX field of the DNS system). - -All the slaves that appears to be disconnected from the master for a long -time are discarded. - -If slaves with the same priority exist, the one with the lexicographically -smaller Run ID is selected. - -Note: because currently slave priority is not implemented, the selection is -performed only discarding unreachable slaves and picking the one with the -lower Run ID. 
- -**Sentinel Rule #22**: A Sentinel performing the failover as leader will select the slave to promote, among the existing **Good Slaves** (See rule #11), taking the one with the lower slave priority. When priority is the same the slave with lexicographically lower runid is preferred. - -APPENDIX B - Get started with Sentinel in five minutes -=== - -If you want to try Redis Sentinel, please follow this steps: - -* Clone the *unstable* branch of the Redis repository at github (it is the default branch). -* Compile it with "make". -* Start a few normal Redis instances, using the `redis-server` compiled in the *unstable* branch. One master and one slave is enough. -* Use the `redis-sentinel` executable to start three instances of Sentinel, with `redis-sentinel /path/to/config`. - -To create the three configurations just create three files where you put something like that: - - port 26379 - sentinel monitor mymaster 127.0.0.1 6379 2 - sentinel down-after-milliseconds mymaster 5000 - sentinel failover-timeout mymaster 900000 - sentinel can-failover mymaster yes - sentinel parallel-syncs mymaster 1 - -Note: where you see `port 26379`, use 26380 for the second Sentinel, and 26381 for the third Sentinel (any other different non colliding port will do of course). Also note that the `down-after-milliseconds` configuration option is set to just five seconds, that is a good value to play with Sentinel, but not good for production environments. - -At this point you should see something like the following in every Sentinel you are running: - - [4747] 23 Jul 14:49:15.883 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379 - [4747] 23 Jul 14:49:19.645 * +sentinel sentinel 127.0.0.1:26379 127.0.0.1 26379 @ mymaster 127.0.0.1 6379 - [4747] 23 Jul 14:49:21.659 * +sentinel sentinel 127.0.0.1:26381 127.0.0.1 26381 @ mymaster 127.0.0.1 6379 - - redis-cli -p 26379 sentinel masters - 1) 1) "name" - 2) "mymaster" - 3) "ip" - 4) "127.0.0.1" - 5) "port" - 6) "6379" - 7) "runid" - 8) "66215809eede5c0fdd20680cfb3dbd3bdf70a6f8" - 9) "flags" - 10) "master" - 11) "pending-commands" - 12) "0" - 13) "last-ok-ping-reply" - 14) "515" - 15) "last-ping-reply" - 16) "515" - 17) "info-refresh" - 18) "5116" - 19) "num-slaves" - 20) "1" - 21) "num-other-sentinels" - 22) "2" - 23) "quorum" - 24) "2" - -To see how the failover works, just put down your slave (for instance sending `DEBUG SEGFAULT` to crash it) and see what happens. - -This HOWTO is a work in progress, more information will be added in the near future. From 21e11e3eb3bff9d17c6870140b44e163081c0256 Mon Sep 17 00:00:00 2001 From: qii404 Date: Sun, 22 Mar 2020 19:13:27 +0800 Subject: [PATCH 0313/1457] add AnotherRedisDesktopManager to tools.json (#1271) --- tools.json | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/tools.json b/tools.json index e56a730539..8a897c100b 100644 --- a/tools.json +++ b/tools.json @@ -84,6 +84,13 @@ "description": "An intuitive, cross-platform Redis GUI Client built in Electron.", "authors": ["stefano_arnone"] }, + { + "name": "AnotherRedisDesktopManager", + "language": "Javascript", + "repository": "https://github.com/qishibo/AnotherRedisDesktopManager", + "description": "🚀🚀🚀A faster, better and more stable redis desktop manager. 
What's more, it won't crash when loading a large number of keys.", + "authors": ["qii404"] + }, { "name": "Rdb-parser", "language": "Javascript", From ffbf9ca2fc344fb9d978f9d4767f01de97d9ee6e Mon Sep 17 00:00:00 2001 From: tdv Date: Sat, 28 Mar 2020 15:55:09 +0300 Subject: [PATCH 0314/1457] Update clients.json (#1274) redis-cpp is a library in C++17 for executing Redis commands with support of the pipelines and publish / subscribe pattern --- clients.json | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 3a122655a0..66f44b3c64 100644 --- a/clients.json +++ b/clients.json @@ -1781,6 +1781,13 @@ "description": "A simple and small redis client for java.", "authors": [], "active": true - } + }, + { + "name": "redis-cpp", + "language": "C++", + "repository": "https://github.com/tdv/redis-cpp", + "description": "redis-cpp is a library in C++17 for executing Redis commands with support of the pipelines and publish / subscribe pattern", + "active": true + } ] From b1b0166938ef80f1abf2a677f8b6f5034981162b Mon Sep 17 00:00:00 2001 From: Guy Korland Date: Mon, 30 Mar 2020 11:17:08 +0300 Subject: [PATCH 0315/1457] fix typo (#1276) --- topics/acl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/acl.md b/topics/acl.md index c91ddc77dd..195a4df527 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -250,7 +250,7 @@ the exception of +@all. If you say +@all all the commands can be executed by the user, even future commands loaded via the modules system. However if you use the ACL rule +@readonly or any other, the modules commands are always excluded. This is very important because you should just trust the Redis -internal command table for sanity. Modules my expose dangerous things and in +internal command table for sanity. Modules may expose dangerous things and in the case of an ACL that is just additive, that is, in the form of `+@all -...` You should be absolutely sure that you'll never include what you did not mean to. From 940c65e86aa07bae032ead26de48f7f10ea17d55 Mon Sep 17 00:00:00 2001 From: Rafael Paulo Date: Mon, 30 Mar 2020 05:26:09 -0300 Subject: [PATCH 0316/1457] Fix: Resque repo moved to https://github.com/resque/resque (#1275) --- topics/data-types.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/data-types.md b/topics/data-types.md index a44583079a..9ed2eaa50f 100644 --- a/topics/data-types.md +++ b/topics/data-types.md @@ -53,7 +53,7 @@ You can do many interesting things with Redis Lists, for instance you can: * Model a timeline in a social network, using [LPUSH](/commands/lpush) in order to add new elements in the user time line, and using [LRANGE](/commands/lrange) in order to retrieve a few of recently inserted items. * You can use [LPUSH](/commands/lpush) together with [LTRIM](/commands/ltrim) to create a list that never exceeds a given number of elements, but just remembers the latest N elements. -* Lists can be used as a message passing primitive, See for instance the well known [Resque](https://github.com/defunkt/resque) Ruby library for creating background jobs. +* Lists can be used as a message passing primitive, See for instance the well known [Resque](https://github.com/resque/resque) Ruby library for creating background jobs. * You can do a lot more with lists, this data type supports a number of commands, including blocking commands like [BLPOP](/commands/blpop). 
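The capped-list pattern from the `LPUSH` plus `LTRIM` bullet above can be sketched with a few `redis-cli` commands; the key name `mytimeline` and the cap of two elements are illustrative only:

    > LPUSH mytimeline "post 1"
    (integer) 1
    > LPUSH mytimeline "post 2"
    (integer) 2
    > LPUSH mytimeline "post 3"
    (integer) 3
    > LTRIM mytimeline 0 1
    OK
    > LRANGE mytimeline 0 -1
    1) "post 3"
    2) "post 2"

Each `LPUSH` prepends to the list, and `LTRIM 0 1` keeps only the two most recently inserted elements, so the list never grows past the cap.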
Please check all the [available commands operating on lists](/commands#list) for more information, or read the [introduction to Redis data types](/topics/data-types-intro). From 9b5924244519f5de25f49369f2f70fbd83d6d8cb Mon Sep 17 00:00:00 2001 From: ronkrl <58705049+ronkrl@users.noreply.github.com> Date: Mon, 30 Mar 2020 13:07:35 -0500 Subject: [PATCH 0317/1457] Update active status of predis in clients.json (#1273) * Update active status of predis in clients.json Removing "Active" status for predis client, as it has no commits in almost 3 years * Update clients.json --- clients.json | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/clients.json b/clients.json index 66f44b3c64..6cb63dc77e 100644 --- a/clients.json +++ b/clients.json @@ -436,8 +436,7 @@ "repository": "https://github.com/nrk/predis", "description": "Mature and supported", "authors": ["JoL1hAHN"], - "recommended": true, - "active": true + "recommended": true }, { From 10ca7242a10a5c460cba5bdcc3a61bb80e5ac744 Mon Sep 17 00:00:00 2001 From: brian p o'rourke Date: Mon, 30 Mar 2020 11:21:16 -0700 Subject: [PATCH 0318/1457] Document BGSAVE SCHEDULE (#1277) * Document BGSAVE SCHEDULE * Add details about situations where BGSAVE returns errors * Describe the SCHEDULE option introduced in 3.2.2 * Minor edits. Co-authored-by: Itamar Haber --- commands.json | 10 ++++++++++ commands/bgsave.md | 18 ++++++++++++++++-- 2 files changed, 26 insertions(+), 2 deletions(-) diff --git a/commands.json b/commands.json index 3a2e0375f4..fca5b88d10 100644 --- a/commands.json +++ b/commands.json @@ -33,6 +33,16 @@ }, "BGSAVE": { "summary": "Asynchronously save the dataset to disk", + "arguments": [ + { + "name": "schedule", + "type": "enum", + "enum": [ + "SCHEDULE" + ], + "optional": true + } + ], "since": "1.0.0", "group": "server" }, diff --git a/commands/bgsave.md b/commands/bgsave.md index 71101b12bd..cf4c5bd662 100644 --- a/commands/bgsave.md +++ b/commands/bgsave.md @@ -1,7 +1,17 @@ Save the DB in background. -The OK code is immediately returned. + +Normally the OK code is immediately returned. Redis forks, the parent continues to serve the clients, the child saves the DB on disk then exits. + +An error is returned if there is already a background save running or if there +is another non-background-save process running, specifically an in-progress AOF +rewrite. + +If `BGSAVE SCHEDULE` is used, the command will immediately return `OK` when an +AOF rewrite is in progress and schedule the background save to run at the next +opportunity. + A client may be able to check if the operation succeeded using the `LASTSAVE` command. @@ -11,4 +21,8 @@ Please refer to the [persistence documentation][tp] for detailed information. @return -@simple-string-reply +@simple-string-reply: `OK` if `BGSAVE` started correctly. + +@history + +* `>= 3.2.2`: Added the `SCHEDULE` option. From f25893246da213d3d4869fcc8a5316e39a96a6ad Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Tue, 31 Mar 2020 19:44:54 +0300 Subject: [PATCH 0319/1457] Adds migrate and restore to keyspace notifications (#1266) --- topics/notifications.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/topics/notifications.md b/topics/notifications.md index 369e83287b..32c17fae0b 100644 --- a/topics/notifications.md +++ b/topics/notifications.md @@ -97,10 +97,12 @@ Different commands generate different kind of events according to the following * `DEL` generates a `del` event for every deleted key. 
 * `RENAME` generates two events, a `rename_from` event for the source key, and a `rename_to` event for the destination key.
+* `MIGRATE` generates a `del` event if the source key is removed.
+* `RESTORE` generates a `restore` event for the key.
 * `EXPIRE` generates an `expire` event when an expire is set to the key, or an `expired` event every time a positive timeout set on a key results in the key being deleted (see `EXPIRE` documentation for more info).
 * `SORT` generates a `sortstore` event when `STORE` is used to set a new key. If the resulting list is empty, and the `STORE` option is used, and there was already an existing key with that name, the result is that the key is deleted, so a `del` event is generated in this condition.
 * `SET` and all its variants (`SETEX`, `SETNX`, `GETSET`) generate `set` events. However `SETEX` will also generate an `expire` event.
-* `MSET` generates a separated `set` event for every key.
+* `MSET` generates a separate `set` event for every key.
 * `SETRANGE` generates a `setrange` event.
 * `INCR`, `DECR`, `INCRBY`, `DECRBY` commands all generate `incrby` events.
 * `INCRBYFLOAT` generates an `incrbyfloat` event.
From 112e8a10ca985bf89b448201d85fa546a43fff58 Mon Sep 17 00:00:00 2001
From: Itamar Haber
Date: Wed, 1 Apr 2020 19:05:07 +0300
Subject: [PATCH 0320/1457] Edits to opening sentence

---
 topics/cluster-tutorial.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md
index 31e1a7aa25..b42f8fc0e1 100644
--- a/topics/cluster-tutorial.md
+++ b/topics/cluster-tutorial.md
@@ -2,8 +2,8 @@ Redis cluster tutorial
 ===
 
 This document is a gentle introduction to Redis Cluster, that does not use
-complex to understand distributed systems concepts. It provides instructions
-about how to setup a cluster, test, and operate it, without
+difficult to understand concepts of distributed systems. It provides
+instructions about how to set up a cluster, test, and operate it, without
 going into the details that are covered in the
 [Redis Cluster specification](/topics/cluster-spec) but just describing
 how the system behaves from the point of view of the user.
From 1f0b5b98f815674ba9c185abc4eeda1c78a2e26f Mon Sep 17 00:00:00 2001
From: Wen Hui
Date: Mon, 6 Apr 2020 09:52:58 -0400
Subject: [PATCH 0321/1457] add missing move command notification (#1278)

---
 topics/notifications.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/topics/notifications.md b/topics/notifications.md
index 32c17fae0b..f40237e725 100644
--- a/topics/notifications.md
+++ b/topics/notifications.md
@@ -97,6 +97,7 @@ Different commands generate different kind of events according to the following
 
 * `DEL` generates a `del` event for every deleted key.
 * `RENAME` generates two events, a `rename_from` event for the source key, and a `rename_to` event for the destination key.
+* `MOVE` generates two events, a `move_from` event for the source key, and a `move_to` event for the destination key.
 * `MIGRATE` generates a `del` event if the source key is removed.
 * `RESTORE` generates a `restore` event for the key.
 * `EXPIRE` generates an `expire` event when an expire is set to the key, or an `expired` event every time a positive timeout set on a key results in the key being deleted (see `EXPIRE` documentation for more info).
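A quick way to observe the events mapped in the list above is to enable keyspace notifications and watch the event channel from a second connection; the `KEA` flags (all event classes on both channel types) and database 0 below are just one possible setup:

    > CONFIG SET notify-keyspace-events KEA
    OK
    > SUBSCRIBE __keyevent@0__:set

Running `SET foo bar` from another client then makes the subscriber receive:

    1) "message"
    2) "__keyevent@0__:set"
    3) "foo"

The channel name carries the event, and the message payload is the key name, as described in the list above.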
From 5debb979ae1036c32755d12cb903f7fb13d05ebe Mon Sep 17 00:00:00 2001 From: Oliver Wickham Date: Tue, 7 Apr 2020 18:49:25 +0100 Subject: [PATCH 0322/1457] Fix URL link to Hexastore paper (#1279) --- topics/indexes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/indexes.md b/topics/indexes.md index f15a285573..410cbdb597 100644 --- a/topics/indexes.md +++ b/topics/indexes.md @@ -470,7 +470,7 @@ Representing and querying graphs using an hexastore One cool thing about composite indexes is that they are handy in order to represent graphs, using a data structure which is called -[Hexastore](http://www.vldb.org/pvldb/1/1453965.pdf). +[Hexastore](http://www.vldb.org/pvldb/vol1/1453965.pdf). The hexastore provides a representation for relations between objects, formed by a *subject*, a *predicate* and an *object*. From b0285951ae4691ce73f5344d2389390387a5cdd7 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 8 Apr 2020 11:11:34 +0200 Subject: [PATCH 0323/1457] Add ACL commands in commands.json. --- commands.json | 88 +++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 88 insertions(+) diff --git a/commands.json b/commands.json index 3a2e0375f4..a711fee1fc 100644 --- a/commands.json +++ b/commands.json @@ -1,4 +1,92 @@ { + "ACL LOAD": { + "summary": "Reload the ACLs from the configured ACL file", + "complexity": "O(N). Where N is the number of configured users.", + "since": "6.0.0", + "group": "server" + }, + "ACL SAVE": { + "summary": "Save the current ACL rules in the configured ACL file", + "complexity": "O(N). Where N is the number of configured users.", + "since": "6.0.0", + "group": "server" + }, + "ACL LIST": { + "summary": "List the current ACL rules in ACL config file format", + "complexity": "O(N). Where N is the number of configured users.", + "since": "6.0.0", + "group": "server" + }, + "ACL USERS": { + "summary": "List the username of all the configured ACL rules", + "complexity": "O(N). Where N is the number of configured users.", + "since": "6.0.0", + "group": "server" + }, + "ACL SETUSER": { + "summary": "Modify or create the rules for a specific ACL user", + "complexity": "O(N). 
Where N is the number of rules provided.", + "arguments": [ + { + "name": "rule", + "type": "string", + "multiple": true + } + ], + "since": "6.0.0", + "group": "server" + }, + "ACL DELUSER": { + "summary": "Remove the specified ACL users and the associated rules", + "complexity": "O(1) amortized time considering the typical user.", + "arguments": [ + { + "name": "username", + "type": "string", + "multiple": true + } + ], + "since": "6.0.0", + "group": "server" + }, + "ACL CAT": { + "summary": "List the ACL categories or the commands inside a category", + "complexity": "O(1) since the categories and commands are a fixed set.", + "arguments": [ + { + "name": "categoryname", + "type": "string", + "optional": true + } + ], + "since": "6.0.0", + "group": "server" + }, + "ACL GENPASS": { + "summary": "Generate a pseudorandom secure password to use for ACL users", + "complexity": "O(1)", + "since": "6.0.0", + "group": "server" + }, + "ACL WHOAMI": { + "summary": "Return the name of the user associated to the current connection", + "complexity": "O(1)", + "since": "6.0.0", + "group": "server" + }, + "ACL LOG": { + "summary": "List latest events denied because of ACLs in place", + "complexity": "O(N) with N being the number of entries shown.", + "arguments": [ + { + "name": "count or RESET", + "type": "string", + "optional": true + } + ], + "since": "6.0.0", + "group": "server" + }, "APPEND": { "summary": "Append a value to a key", "complexity": "O(1). The amortized time complexity is O(1) assuming the appended value is small and the already present value is of any size, since the dynamic string library used by Redis will double the free space available on every reallocation.", From e069e055a9e1754a91e6e70f0d1ee838503be502 Mon Sep 17 00:00:00 2001 From: linfangrong Date: Wed, 22 Apr 2020 20:01:28 +0800 Subject: [PATCH 0324/1457] [ADD] redismodule-ratelimit (#1281) --- modules.json | 30 ++++++++++++++++++++---------- 1 file changed, 20 insertions(+), 10 deletions(-) diff --git a/modules.json b/modules.json index c0a7ace5f0..c571137c32 100644 --- a/modules.json +++ b/modules.json @@ -298,14 +298,24 @@ ], "stars": 30 }, - { - "name": "Reventis", - "license": "Redis Source Available License", - "repository": "https://github.com/starkdg/reventis", - "description": "Redis module for storing and querying spatio-temporal event data", - "authors": [ - "starkdg" - ], - "stars": 2 - } + { + "name": "Reventis", + "license": "Redis Source Available License", + "repository": "https://github.com/starkdg/reventis", + "description": "Redis module for storing and querying spatio-temporal event data", + "authors": [ + "starkdg" + ], + "stars": 2 + }, + { + "name": "redismodule-ratelimit", + "license": "MIT", + "repository": "https://github.com/linfangrong/redismodule-ratelimit", + "description": "Redis module that provides ratelimit", + "authors": [ + "linfangrong" + ], + "stars": 0 + } ] From 3454437bafa44b33d4cab431f21c5ec220bc4663 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 22 Apr 2020 17:41:31 +0200 Subject: [PATCH 0325/1457] Add some ACL commands manual pages. 
---
 commands/acl-list.md    | 17 +++++++++
 commands/acl-load.md    | 23 ++++++++++++
 commands/acl-save.md    | 18 ++++++++++
 commands/acl-setuser.md | 78 +++++++++++++++++++++++++++++++++++++++++
 commands/acl-users.md   | 15 ++++++++
 5 files changed, 151 insertions(+)
 create mode 100644 commands/acl-list.md
 create mode 100644 commands/acl-load.md
 create mode 100644 commands/acl-save.md
 create mode 100644 commands/acl-setuser.md
 create mode 100644 commands/acl-users.md

diff --git a/commands/acl-list.md b/commands/acl-list.md
new file mode 100644
index 0000000000..44b6c3c96f
--- /dev/null
+++ b/commands/acl-list.md
@@ -0,0 +1,17 @@
+The command shows the currently active ACL rules in the Redis server. Each
+line in the returned array defines a different user, and the format is the
+same used in the redis.conf file or the external ACL file, so you can
+cut and paste what is returned by the ACL LIST command directly inside a
+configuration file if you wish (but make sure to check `ACL SAVE`).
+
+@return
+
+An array of strings.
+
+@examples
+
+```
+> ACL LIST
+1) "user antirez on #9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 ~objects:* +@all -@admin -@dangerous"
+2) "user default on nopass ~* +@all"
+```
diff --git a/commands/acl-load.md b/commands/acl-load.md
new file mode 100644
index 0000000000..5178e4398b
--- /dev/null
+++ b/commands/acl-load.md
@@ -0,0 +1,23 @@
+When Redis is configured to use an ACL file (with the `aclfile` configuration
+option), this command will reload the ACLs from the file, replacing all
+the current ACL rules with the ones defined in the file. The command makes
+sure to have an *all or nothing* behavior, that is:
+
+* If every line in the file is valid, all the ACLs are loaded.
+* If one or more lines in the file are not valid, nothing is loaded, and the old ACL rules defined in the server memory continue to be used.
+
+@return
+
+@simple-string-reply: `OK` on success.
+
+The command may fail with an error for several reasons: if the file is not readable, or if there is an error inside the file; in both cases the error is reported to the user. Finally the command will fail if the server is not configured to use an external ACL file.
+
+@examples
+
+```
+> ACL LOAD
++OK
+
+> ACL LOAD
+-ERR /tmp/foo:1: Unknown command or category name in ACL...
+```
diff --git a/commands/acl-save.md b/commands/acl-save.md
new file mode 100644
index 0000000000..57badc8b78
--- /dev/null
+++ b/commands/acl-save.md
@@ -0,0 +1,18 @@
+When Redis is configured to use an ACL file (with the `aclfile` configuration
+option), this command will save the currently defined ACLs from the server memory to the ACL file.
+
+@return
+
+@simple-string-reply: `OK` on success.
+
+The command may fail with an error for several reasons: if the file cannot be written or if the server is not configured to use an external ACL file.
+
+@examples
+
+```
+> ACL SAVE
++OK
+
+> ACL SAVE
+-ERR There was an error trying to save the ACLs. Please check the server logs for more information
+```
diff --git a/commands/acl-setuser.md b/commands/acl-setuser.md
new file mode 100644
index 0000000000..741ea654b5
--- /dev/null
+++ b/commands/acl-setuser.md
@@ -0,0 +1,78 @@
+Create an ACL user with the specified rules or modify the rules of an
+existing user. This is the main interface in order to manipulate Redis ACL
+users interactively: if the username does not exist, the command creates
+the username without any privilege, then reads from left to right all the
+rules provided as successive arguments, setting the user ACL rules as specified.
+
+If the user already exists, the provided ACL rules are simply applied
+*in addition* to the rules already set. For example:
+
+    ACL SETUSER virginia on allkeys +set
+
+The above command will create a user called `virginia` that is active
+(the on rule), can access any key (allkeys rule), and can call the
+set command (+set rule). Then another SETUSER call can modify the user rules:
+
+    ACL SETUSER virginia +get
+
+The above command will not reset the rules of the user virginia: it will just add the new rule, so other than `SET`, the user virginia will now be able to also use the `GET` command.
+
+When we want to be sure to define a user from scratch, without caring if
+it previously had any rules associated, we can use the special rule
+`reset` as first rule, in order to flush all the other existing rules:
+
+    ACL SETUSER antirez reset [... other rules ...]
+
+After resetting a user, it returns back to the status it had when it
+was just created: not active (off rule), can't execute any command, can't
+access any key:
+
+    > ACL SETUSER antirez reset
+    +OK
+    > ACL LIST
+    1) "user antirez off -@all"
+
+ACL rules are either words like "on", "off", "reset", "allkeys", or are
+special rules that start with a special character, and are followed by
+another string (without any space in between), like "+SET".
+
+The following documentation is a reference manual about the capabilities of this command, however our [ACL tutorial](/topics/acl) may be a more gentle introduction to how the ACL system works in general.
+
+## List of rules
+
+This is a list of all the supported Redis ACL rules:
+
+* **`on`**: set the user as active, it will be possible to authenticate as this user using `AUTH <username> <password>`.
+* **`off`**: set user as not active, it will be impossible to log in as this user. Please note that if a user gets disabled (set to off) after there are connections already authenticated with such a user, the connections will continue to work as expected. To also kill the old connections you can use `CLIENT KILL` with the user option. An alternative is to delete the user with `ACL DELUSER`, that will result in all the connections authenticated as the deleted user to be disconnected.
+* **`~<pattern>`**: add the specified key pattern (glob style pattern, like in the `KEYS` command), to the list of key patterns accessible by the user. You can add as many key patterns you want to the same user. Example: `~objects:*`
+* **`allkeys`**: alias for `~*`, it allows the user to access all the keys.
+* **`resetkeys`**: removes all the key patterns from the list of key patterns the user can access.
+* **`+<command>`**: add this command to the list of the commands the user can call. Example: `+zadd`.
+* **`+@<category>`**: add all the commands in the specified category to the list of commands the user is able to execute. Example: `+@string` (adds all the string commands). For a list of categories check the `ACL CAT` command.
+* **`+<command>|<subcommand>`**: add the specified command to the list of the commands the user can execute, but only for the specified subcommand. Example: `+config|get`. Generates an error if the specified command is already allowed in its full version for the specified user. Note: there is no symmetrical command to remove subcommands, you need to remove the whole command and re-add the subcommands you want to allow. This is much safer than removing subcommands, since in the future Redis may add new dangerous subcommands, so configuring by subtraction is not good.
+* **`allcommands`**: alias of `+@all`. Adds all the commands there are in the server, including *future commands* loaded via the modules system, to be executed by this user.
+* **`-<command>`**: Like **`+<command>`** but removes the command instead of adding it.
+* **`-@<category>`**: Like **`+@<category>`** but removes all the commands in the category instead of adding them.
+* **`nocommands`**: alias for `-@all`. Removes all the commands, the user will no longer be able to execute anything.
+* **`nopass`**: the user is set as a "no password" user. It means that it will be possible to authenticate as such user with any password. By default, the `default` special user is set as "nopass". The `nopass` rule will also reset all the configured passwords for the user.
+* **`>password`**: Add the specified clear text password as a hashed password in the list of the user's passwords. Every user can have many active passwords, so that password rotation will be simpler. The specified password is not stored in cleartext inside the server. Example: `>mypassword`.
+* **`#<hash>`**: Add the specified hashed password to the list of user passwords. A Redis hashed password is hashed with SHA256 and translated into a hexadecimal string. Example: `#c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2`.
+* **`<password`**: Like **`>password`** but removes the password instead of adding it.
+* **`!<hash>`**: Like **`#<hash>`** but removes the password instead of adding it.
+* **`reset`**: Remove any capability from the user. It is set to off, without passwords, unable to execute any command, unable to access any key.
+
+@return
+
+@simple-string-reply: `OK` on success.
+
+If the rules contain errors, the error is returned.
+
+@examples
+
+```
+> ACL SETUSER alan allkeys +@string +@set -SADD >alanpassword
++OK
+
+> ACL SETUSER antirez heeyyyy
+(error) ERR Error in ACL SETUSER modifier 'heeyyyy': Syntax error
+```
diff --git a/commands/acl-users.md b/commands/acl-users.md
new file mode 100644
index 0000000000..9b0fe1bf38
--- /dev/null
+++ b/commands/acl-users.md
@@ -0,0 +1,15 @@
+The command shows a list of all the usernames of the currently configured
+users in the Redis ACL system.
+
+@return
+
+An array of strings.
+
+@examples
+
+```
+> ACL USERS
+1) "anna"
+2) "antirez"
+3) "default"
+```
From 5003f56e3b2a98b8b99ac000f46f342e81b0a7c5 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 23 Apr 2020 10:14:05 +0200
Subject: [PATCH 0326/1457] ACL SETUSER markdown fixes.

---
 commands/acl-setuser.md | 36 ++++++++++++++++++------------------
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/commands/acl-setuser.md b/commands/acl-setuser.md
index 741ea654b5..2df035bbb9 100644
--- a/commands/acl-setuser.md
+++ b/commands/acl-setuser.md
@@ -42,24 +42,24 @@ The following documentation is a reference manual about the capabilities of this

 This is a list of all the supported Redis ACL rules:

* `on`: set the user as active, it will be possible to authenticate as this user using `AUTH <username> <password>`.
* `off`: set user as not active, it will be impossible to log in as this user. Please note that if a user gets disabled (set to off) after there are connections already authenticated with such a user, the connections will continue to work as expected.
To also kill the old connections you can use `CLIENT KILL` with the user option. An alternative is to delete the user with `ACL DELUSER`, that will result in all the connections authenticated as the deleted user to be disconnected. -* **`~`**: add the specified key pattern (glob style pattern, like in the `KEYS` command), to the list of key patterns accessible by the user. You can add as many key patterns you want to the same user. Example: `~objects:*` -* **`allkeys`**: alias for `~*`, it allows the user to access all the keys. -* **`resetkey`**: removes all the key patterns from the list of key patterns the user can access. -* **`+`**: add this command to the list of the commands the user can call. Example: `+zadd`. -* **`+@`**: add all the commands in the specified categoty to the list of commands the user is able to execute. Example: `+@string` (adds all the string commands). For a list of categories check the `ACL CAT` command. -* **`+|`**: add the specified command to the list of the commands the user can execute, but only for the specified subcommand. Example: `+config|get`. Generates an error if the specified command is already allowed in its full version for the specified user. Note: there is no symmetrical command to remove subcommands, you need to remove the whole command and re-add the subcommands you want to allow. This is much safer than removing subcommands, in the future Redis may add new dangerous subcommands, so configuring by subtraction is not good. -* **`allcommands`**: alias of `+@all`. Adds all the commands there are in the server, including *future commands* loaded via module, to be executed by this user. -* **`-`**. Like **`+`** but removes the command instead of adding it. -* **`-@`**: Like **`-@`** but removes all the commands in the category instead of adding them. -* **`nocommands`**: alias for `-@all`. Removes all the commands, the user will no longer be able to execute anything. -* **`nopass`**: the user is set as a "no password" user. It means that it will be possible to authenticate as such user with any password. By default, the `default` special user is set as "nopass". The `nopass` rule will also reset all the configured passwords for the user. -* **`>password`**: Add the specified clear text password as an hashed password in the list of the users passwords. Every user can have many active passwords, so that password rotation will be simpler. The specified password is not stored in cleartext inside the server. Example: `>mypassword`. -* **`#`**: Add the specified hashed password to the list of user passwords. A Redis hashed password is hashed with SHA256 and translated into a hexadecimal string. Example: `#c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2`. -* **`password`** but removes the password instead of adding it. -* **!`**: Like **`#`** but removes the password instead of adding it. -* **reset**: Remove any capability from the user. It is set to off, without passwords, unable to execute any command, unable to access any key. +* `on`: set the user as active, it will be possible to authenticate as this user using `AUTH `. +* `off`: set user as not active, it will be impossible to log as this user. Please note that if a user gets disabled (set to off) after there are connections already authenticated with such a user, the connections will continue to work as expected. To also kill the old connections you can use `CLIENT KILL` with the user option. 
An alternative is to delete the user with `ACL DELUSER`, that will result in all the connections authenticated as the deleted user to be disconnected. +* `~`: add the specified key pattern (glob style pattern, like in the `KEYS` command), to the list of key patterns accessible by the user. You can add as many key patterns you want to the same user. Example: `~objects:*` +* `allkeys`: alias for `~*`, it allows the user to access all the keys. +* `resetkey`: removes all the key patterns from the list of key patterns the user can access. +* `+`: add this command to the list of the commands the user can call. Example: `+zadd`. +* `+@`: add all the commands in the specified categoty to the list of commands the user is able to execute. Example: `+@string` (adds all the string commands). For a list of categories check the `ACL CAT` command. +* `+|`: add the specified command to the list of the commands the user can execute, but only for the specified subcommand. Example: `+config|get`. Generates an error if the specified command is already allowed in its full version for the specified user. Note: there is no symmetrical command to remove subcommands, you need to remove the whole command and re-add the subcommands you want to allow. This is much safer than removing subcommands, in the future Redis may add new dangerous subcommands, so configuring by subtraction is not good. +* `allcommands`: alias of `+@all`. Adds all the commands there are in the server, including *future commands* loaded via module, to be executed by this user. +* `-`. Like `+` but removes the command instead of adding it. +* `-@`: Like `-@` but removes all the commands in the category instead of adding them. +* `nocommands`: alias for `-@all`. Removes all the commands, the user will no longer be able to execute anything. +* `nopass`: the user is set as a "no password" user. It means that it will be possible to authenticate as such user with any password. By default, the `default` special user is set as "nopass". The `nopass` rule will also reset all the configured passwords for the user. +* `>password`: Add the specified clear text password as an hashed password in the list of the users passwords. Every user can have many active passwords, so that password rotation will be simpler. The specified password is not stored in cleartext inside the server. Example: `>mypassword`. +* `#`: Add the specified hashed password to the list of user passwords. A Redis hashed password is hashed with SHA256 and translated into a hexadecimal string. Example: `#c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2`. +* `password` but removes the password instead of adding it. +* !`: Like `#` but removes the password instead of adding it. +* reset: Remove any capability from the user. It is set to off, without passwords, unable to execute any command, unable to access any key. @return From 8246e4355bcf6d2fc3cbb51a93bb5e73af2eb4af Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 23 Apr 2020 10:19:29 +0200 Subject: [PATCH 0327/1457] ACL DELUSER manual page added. --- commands/acl-deluser.md | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) create mode 100644 commands/acl-deluser.md diff --git a/commands/acl-deluser.md b/commands/acl-deluser.md new file mode 100644 index 0000000000..e3f443e4d7 --- /dev/null +++ b/commands/acl-deluser.md @@ -0,0 +1,16 @@ +Delete all the specified ACL users and terminate all the connections that are +authenticated with such users. 
+Note: the special `default` user cannot be removed from the system; this is
+the default user that every new connection is authenticated with. The list
+of users may include usernames that do not exist, in which case no operation
+is performed for the non-existing users.
+
+@return
+
+@integer-reply: The number of users that were deleted. This number will not always match the number of arguments since certain users may not exist.
+
+@examples
+
+```
+> ACL DELUSER antirez
+1
+```

From 4660d0f1537dcfab43a4c995623c5040e0442e34 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 23 Apr 2020 10:20:47 +0200
Subject: [PATCH 0328/1457] More ACL SETUSER doc fixes.

---
 commands/acl-setuser.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/commands/acl-setuser.md b/commands/acl-setuser.md
index 2df035bbb9..ec6b65bc42 100644
--- a/commands/acl-setuser.md
+++ b/commands/acl-setuser.md
@@ -58,7 +58,7 @@ This is a list of all the supported Redis ACL rules:
 * `>password`: Add the specified clear text password as an hashed password in the list of the users passwords. Every user can have many active passwords, so that password rotation will be simpler. The specified password is not stored in cleartext inside the server. Example: `>mypassword`.
 * `#`: Add the specified hashed password to the list of user passwords. A Redis hashed password is hashed with SHA256 and translated into a hexadecimal string. Example: `#c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2`.
 * `<password`: Like `>password` but removes the password instead of adding it.
-* !`: Like `#` but removes the password instead of adding it.
+* `!`: Like `#` but removes the password instead of adding it.
 * reset: Remove any capability from the user. It is set to off, without passwords, unable to execute any command, unable to access any key.

From 22c0891f1484a7cda00e8be80eee563d9e50b135 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 23 Apr 2020 10:28:32 +0200
Subject: [PATCH 0329/1457] ACL CAT documented.

---
 commands/acl-cat.md | 82 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)
 create mode 100644 commands/acl-cat.md

diff --git a/commands/acl-cat.md b/commands/acl-cat.md
new file mode 100644
index 0000000000..56616e789f
--- /dev/null
+++ b/commands/acl-cat.md
@@ -0,0 +1,82 @@
+The command shows the available ACL categories if called without arguments.
+If a category name is given, the command shows all the Redis commands in
+the specified category.
+
+ACL categories are very useful in order to create ACL rules that include or
+exclude a large set of commands at once, without specifying every single
+command. For instance the following rule will let the user `karin` perform
+everything but the most dangerous operations that may affect the server
+stability:
+
+    ACL SETUSER karin on +@all -@dangerous
+
+We first add all the commands to the set of commands that `karin` is able
+to execute, but then we remove all the dangerous commands.
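+
+The same rule could also be built the other way around, starting from an
+empty set of commands and adding back just what is needed. For instance the
+following illustrative rule (using the `read` category listed below, but any
+category returned by `ACL CAT` works the same way) only allows read commands:
+
+    ACL SETUSER karin on -@all +@read
+
+In both cases the order in which the rules are given matters, since rules
+are always applied from left to right.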
+
+Checking for all the available categories is as simple as:
+
+```
+> ACL CAT
+ 1) "keyspace"
+ 2) "read"
+ 3) "write"
+ 4) "set"
+ 5) "sortedset"
+ 6) "list"
+ 7) "hash"
+ 8) "string"
+ 9) "bitmap"
+10) "hyperloglog"
+11) "geo"
+12) "stream"
+13) "pubsub"
+14) "admin"
+15) "fast"
+16) "slow"
+17) "blocking"
+18) "dangerous"
+19) "connection"
+20) "transaction"
+21) "scripting"
+```
+
+Then we may want to know what commands are part of a given category:
+
+```
+> ACL CAT dangerous
+ 1) "flushdb"
+ 2) "acl"
+ 3) "slowlog"
+ 4) "debug"
+ 5) "role"
+ 6) "keys"
+ 7) "pfselftest"
+ 8) "client"
+ 9) "bgrewriteaof"
+10) "replicaof"
+11) "monitor"
+12) "restore-asking"
+13) "latency"
+14) "replconf"
+15) "pfdebug"
+16) "bgsave"
+17) "sync"
+18) "config"
+19) "flushall"
+20) "cluster"
+21) "info"
+22) "lastsave"
+23) "slaveof"
+24) "swapdb"
+25) "module"
+26) "restore"
+27) "migrate"
+28) "save"
+29) "shutdown"
+30) "psync"
+31) "sort"
+```
+
+@return
+
+@array-reply: a list of ACL categories or a list of commands inside a given category. The command may return an error if an invalid category name is given as argument.

From 34bfe7f93017a41faf6b42ad8cffd13e1355a312 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 23 Apr 2020 11:30:38 +0200
Subject: [PATCH 0330/1457] ACL GENPASS documented.

---
 commands/acl-genpass.md | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)
 create mode 100644 commands/acl-genpass.md

diff --git a/commands/acl-genpass.md b/commands/acl-genpass.md
new file mode 100644
index 0000000000..532527e69d
--- /dev/null
+++ b/commands/acl-genpass.md
@@ -0,0 +1,30 @@
+ACL users need a solid password in order to authenticate to the server without
+security risks. Such password does not need to be remembered by humans, but
+only by computers, so it can be very long and strong (unguessable by an
+external attacker). The `ACL GENPASS` command generates a password starting
+from /dev/urandom if available, otherwise (in systems without /dev/urandom) it
+uses a weaker system that is likely still better than picking a weak password
+by hand.
+
+By default (if /dev/urandom is availalbe) the password is strong and
+can be used for other uses in the context of a Redis application, for
+instance in order to create unique session identifiers or other kind of
+unguessable and not colliding IDs. The password generation is also very cheap
+because we don't really ask /de/urandom for bits at every execution. At
+startup Redis creates a seed using /dev/urandom, then it will use SHA256
+in counter mode, with HMAC-SHA256(seed,counter) as primitive, in order to
+create more random bytes as needed. This means that the application developer
+should be feel free to abuse `ACL GENPASS` to create as many secure
+pseudorandom strings as needed.
+
+The command outout is an hexadecimal representation of a binary string.
+By default it emits 256 bits (so 64 hex characters). The user can provide
+an argument in form of number of bits to emit from 1 to 1024 to change
+the output length. Note that the number of bits provided is always
+rounded to the next multiple of 4. So for instance asking for just 1
+bit password will result in 4 bits to be emitted, in the form of a single
+hex character.
+
+@return
+
+@bulk-string-reply: by default 64 bytes string representing 256 bits of pseudorandom data. Otherwise if an argument is given, the output string length is the number of specified bits (rounded to the next multiple of 4) divided by 4.
From 54024d30f728af979b1032bd27a75b5a01adff47 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 23 Apr 2020 11:36:34 +0200
Subject: [PATCH 0331/1457] Add examples to ACL GENPASS.

---
 commands/acl-genpass.md | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/commands/acl-genpass.md b/commands/acl-genpass.md
index 532527e69d..4573fee60b 100644
--- a/commands/acl-genpass.md
+++ b/commands/acl-genpass.md
@@ -28,3 +28,16 @@ hex character.
 @return
 
 @bulk-string-reply: by default 64 bytes string representing 256 bits of pseudorandom data. Otherwise if an argument is given, the output string length is the number of specified bits (rounded to the next multiple of 4) divided by 4.
+
+@examples
+
+```
+> ACL GENPASS
+"dd721260bfe1b3d9601e7fbab36de6d04e2e67b0ef1c53de59d45950db0dd3cc"
+
+> ACL GENPASS 32
+"355ef3dd"
+
+> ACL GENPASS 5
+"90"
+```

From 535212700467e6518103f20d17523a23a112fb02 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 23 Apr 2020 11:37:37 +0200
Subject: [PATCH 0332/1457] Update ACL GENPASS arguments in JSON.

---
 commands.json | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/commands.json b/commands.json
index 7d016eccd6..222348507f 100644
--- a/commands.json
+++ b/commands.json
@@ -65,6 +65,13 @@
     "ACL GENPASS": {
       "summary": "Generate a pseudorandom secure password to use for ACL users",
       "complexity": "O(1)",
+      "arguments": [
+        {
+          "name": "bits",
+          "type": "integer",
+          "optional": true
+        }
+      ],
       "since": "6.0.0",
       "group": "server"
     },

From f2c4ae703d0c65f79576db24597c846552505857 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 23 Apr 2020 11:39:21 +0200
Subject: [PATCH 0333/1457] Fix spelling in ACL GENPASS.

---
 commands/acl-genpass.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/commands/acl-genpass.md b/commands/acl-genpass.md
index 4573fee60b..5e7626d053 100644
--- a/commands/acl-genpass.md
+++ b/commands/acl-genpass.md
@@ -6,7 +6,7 @@ from /dev/urandom if available, otherwise (in systems without /dev/urandom) it
 uses a weaker system that is likely still better than picking a weak password
 by hand.
 
-By default (if /dev/urandom is availalbe) the password is strong and
+By default (if /dev/urandom is available) the password is strong and
 can be used for other uses in the context of a Redis application, for
 instance in order to create unique session identifiers or other kind of
 unguessable and not colliding IDs. The password generation is also very cheap
@@ -17,7 +17,7 @@ create more random bytes as needed. This means that the application developer
 should be feel free to abuse `ACL GENPASS` to create as many secure
 pseudorandom strings as needed.
 
-The command outout is an hexadecimal representation of a binary string.
+The command output is an hexadecimal representation of a binary string.
 By default it emits 256 bits (so 64 hex characters). The user can provide
 an argument in form of number of bits to emit from 1 to 1024 to change
 the output length.

From e7553883402826e4d55f6c98848731379bc652bf Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 23 Apr 2020 16:50:05 +0200
Subject: [PATCH 0334/1457] AUTH doc updated for ACL.
--- commands/acl-whoami.md | 14 ++++++++++++++
 commands/auth.md | 36 ++++++++++++++++++++++++++++--------
 2 files changed, 42 insertions(+), 8 deletions(-)
 create mode 100644 commands/acl-whoami.md

diff --git a/commands/acl-whoami.md b/commands/acl-whoami.md
new file mode 100644
index 0000000000..5ec7b8485b
--- /dev/null
+++ b/commands/acl-whoami.md
@@ -0,0 +1,14 @@
+Return the username the current connection is authenticated with.
+New connections are authenticated with the "default" user. They
+can change user using `AUTH`.
+
+@return
+
+@bulk-string-reply: the username of the current connection.
+
+@examples
+
+```
+> ACL WHOAMI
+"default"
+```
diff --git a/commands/auth.md b/commands/auth.md
index 6bb411fd38..c9bf5b7fab 100644
--- a/commands/auth.md
+++ b/commands/auth.md
@@ -1,16 +1,36 @@
-Request for authentication in a password-protected Redis server.
-Redis can be instructed to require a password before allowing clients to execute
-commands.
-This is done using the `requirepass` directive in the configuration file.
+The AUTH command authenticates the current connection in two cases:
 
-If `password` matches the password in the configuration file, the server replies
-with the `OK` status code and starts accepting commands.
+1. If the Redis server is password protected via the `requirepass` option.
+2. If a Redis 6.0 instance, or greater, is using the [Redis ACL system](/topics/acl).
+
+Redis versions prior to Redis 6 were only able to understand the one argument
+version of the command:
+
+    AUTH <password>
+
+This form just authenticates against the password set with `requirepass`.
+In this configuration Redis will deny any command executed by the just
+connected clients, unless the connection gets authenticated via `AUTH`.
+
+If the password provided via AUTH matches the password in the configuration file, the server replies with the `OK` status code and starts accepting commands.
 Otherwise, an error is returned and the client needs to try a new password.
 
-**Note**: because of the high performance nature of Redis, it is possible to try
+When Redis ACLs are used, the command should be given in an extended way:
+
+    AUTH <username> <password>
+
+In order to authenticate the current connection with one of the users
+defined in the ACL list (see `ACL SETUSER`) and the official [ACL guide](/topics/acl) for more information.
+
+When ACLs are used, the single argument form of the command, where only the password is specified, assumes that the implicit username is "default".
+
+## Security notice
+
+Because of the high performance nature of Redis, it is possible to try
 a lot of passwords in parallel in very short time, so make sure to generate
 a strong and very long password so that this attack is infeasible.
+A good way to generate strong passwords is via the `ACL GENPASS` command.
 
 @return
 
-@simple-string-reply
+@simple-string-reply or an error if the password, or username/password pair, is invalid.

From d225f1a49434a062bbd9452b2cd3d9fc8eaa342d Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 23 Apr 2020 16:56:14 +0200
Subject: [PATCH 0335/1457] ACL LOG documented.

---
 commands/acl-log.md | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)
 create mode 100644 commands/acl-log.md

diff --git a/commands/acl-log.md b/commands/acl-log.md
new file mode 100644
index 0000000000..f22ddb65ef
--- /dev/null
+++ b/commands/acl-log.md
@@ -0,0 +1,35 @@
+The command shows a list of recent ACL security events:
+
+1. Failures to authenticate their connections with `AUTH` or `HELLO`.
+2.
Commands denied because against the current ACL rules. +3. Commands denied because accessing keys not allowed in the current ACL rules. + +The optional argument specifies how many entries to show. By default +up to ten failures are returned. The special `RESET` argument clears the log. +Entries are displayed starting from the most recent. + +@return + +@array-reply: a list of ACL security events. + +@examples + +``` +> AUTH someuser wrongpassword +(error) WRONGPASS invalid username-password pair +> ACL LOG 1 +1) 1) "count" + 2) (integer) 1 + 3) "reason" + 4) "auth" + 5) "context" + 6) "toplevel" + 7) "object" + 8) "AUTH" + 9) "username" + 10) "someuser" + 11) "age-seconds" + 12) "4.0960000000000001" + 13) "client-info" + 14) "id=6 addr=127.0.0.1:63026 fd=8 name= age=9 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=48 qbuf-free=32720 obl=0 oll=0 omem=0 events=r cmd=auth user=default" +``` From df9b73bf9ae33d1f9fc92029662e98a97d8564ed Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 24 Apr 2020 19:55:52 +0200 Subject: [PATCH 0336/1457] STRALGO documented. --- commands.json | 20 +++++++++ commands/stralgo.md | 104 ++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 124 insertions(+) create mode 100644 commands/stralgo.md diff --git a/commands.json b/commands.json index 222348507f..33f8618f84 100644 --- a/commands.json +++ b/commands.json @@ -2970,6 +2970,26 @@ "since": "1.0.0", "group": "set" }, + "STRALGO": { + "summary": "Run algorithms (currently LCS) against strings", + "complexity": "For LCS O(strlen(s1)*strlen(s2))", + "arguments": [ + { + "name": "algorithm", + "type": "enum", + "enum": [ + "LCS" + ] + }, + { + "name": "algo-specific-argument", + "type": "string", + "multiple": true + } + ], + "since": "6.0.0", + "group": "string" + }, "STRLEN": { "summary": "Get the length of the value stored in a key", "complexity": "O(1)", diff --git a/commands/stralgo.md b/commands/stralgo.md new file mode 100644 index 0000000000..377ec5ed83 --- /dev/null +++ b/commands/stralgo.md @@ -0,0 +1,104 @@ +The STRALGO implements complex algorithsm that operate on strings. +Right now the only algorithm implemented is the LCS algorithm (longest common +substring). However new algorithms could be implemented in the future. +The goal of this command is to provide to Redis users algorithms that need +fast implementations and are normally not provided in the standard library of +most programming languages. + +The first argument of the command selects the algorithm to use, right now +the argument must be "LCS", since this is the only implemented one. + +## LCS algorithm + +``` +STRALGO LCS [KEYS ...] [STRINGS ...] [LEN] [IDX] [MINRANGELEN] [WITHRANGELEN] +``` + +The LCS subcommand implements the longest common subsequence algorithm. Note that this is different than the longest common string algorithm, since matching characters in the string does not need to be contiguous. + +For instane the LCS between "foo" and "fao" is "fo", since scanning the two strings from left to right, the longest common set of characters is composed of the first "f" and then the "o". + +LCS is very useful in order to evaluate how similar two strings are. Strings can represent many things. For instance if two strings are DNA sequences, the LCS will provide a measure of similarity between the two DNA sequences. If the strings represent some text edited by some user, the LCS could represent how different the new text is compared to the old one, and so forth. 
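+
+As a further illustration of the fact that matching characters do not need
+to be contiguous, a session like the following (the two strings are made up
+just for this example) shows characters being matched across gaps:
+
+```
+> STRALGO LCS STRINGS abcd axbycz
+"abc"
+```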
+
+Note that this algorithm runs in `O(N*M)` time, where N is the length of the first string and M is the length of the second string. So either spin a different Redis instance in order to run this algorithm, or make sure to run it against very small strings.
+
+The basic usage is the following:
+
+```
+> STRALGO LCS STRINGS ohmytext mynewtext
+"mytext"
+```
+
+It is possible to also compute the LCS between the contet of two keys:
+
+```
+> MSET key1 ohmytext key2 mynewtext
+OK
+> STRALGO LCS KEYS key1 key2
+"mytext"
+```
+
+Sometimes we need just the length of the match:
+
+```
+> STRALGO LCS STRINGS ohmytext mynewtext LEN
+"mytext"
+```
+
+However what is often very useful, is to know the match position in each string:
+
+```
+> STRALGO LCS KEYS key1 key2 IDX
+1) "matches"
+2) 1) 1) 1) (integer) 4
+         2) (integer) 7
+      2) 1) (integer) 5
+         2) (integer) 8
+   2) 1) 1) (integer) 2
+         2) (integer) 3
+      2) 1) (integer) 0
+         2) (integer) 1
+3) "len"
+4) (integer) 6
+```
+
+Matches are produced from the last one to the first one, since this is how
+the algorithm works, and it is more efficient to emit things in the same order.
+The above array means that the first match (second element of the array)
+is between positions 2-3 of the first string and 0-1 of the second.
+Then there is another match between 4-7 and 5-8.
+
+To restrict the list of matches to the ones of a given minimal length:
+
+```
+> STRALGO LCS KEYS key1 key2 IDX MINMATCHLEN 4
+1) "matches"
+2) 1) 1) 1) (integer) 4
+         2) (integer) 7
+      2) 1) (integer) 5
+         2) (integer) 8
+3) "len"
+4) (integer) 6
+```
+
+Finally to also have the match length:
+
+```
+> STRALGO LCS KEYS key1 key2 IDX MINMATCHLEN 4 WITHMATCHLEN
+1) "matches"
+2) 1) 1) 1) (integer) 4
+         2) (integer) 7
+      2) 1) (integer) 5
+         2) (integer) 8
+      3) (integer) 4
+3) "len"
+4) (integer) 6
+```
+
+@return
+
+For the LCS algorithm:
+
+* Without modifiers the string representing the longest common substring is returned.
+* When LEN is given the command returns the length of the longest common substring.
+* When IDX is given the command returns an array with the LCS length and all the ranges in both the strings, start and end offset for each string, where there are matches. When WITHMATCHLEN is given each array representing a match will also have the length of the match (see examples).

From f2ab177600e8b8f36af6dce508fb6ea3c2e0e5ae Mon Sep 17 00:00:00 2001
From: Wen Hui
Date: Tue, 28 Apr 2020 11:30:27 -0400
Subject: [PATCH 0337/1457] fix typo in client side caching docs (#1283)

---
 topics/client-side-caching.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/topics/client-side-caching.md b/topics/client-side-caching.md
index 4a7c3d9f59..dc09182511 100644
--- a/topics/client-side-caching.md
+++ b/topics/client-side-caching.md
@@ -308,7 +308,7 @@ However simpler clients may just evict data using some random sampling just
 remembering the last time a given cached value was served, trying to evict
 keys that were not served recently.
 
-## Other hitns about client libraries implementation
+## Other hints about client libraries implementation
 
 * Handling TTLs: make sure you request also the key TTL and set the TTL in the local cache if you want to support caching keys with a TTL.
 * Putting a max TTL in every key is a good idea, even if it had no TTL. This is a good protection against bugs or connection issues that would make the client having old data in the local copy.
From ac4b1bb34cd13d673c56e7931607697218dcc1b2 Mon Sep 17 00:00:00 2001
From: Wen Hui
Date: Tue, 28 Apr 2020 11:48:45 -0400
Subject: [PATCH 0338/1457] correct the arrow direction in client-side-caching (#1260)

* correct the arrow direction

* fix typo

---
 topics/client-side-caching.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/topics/client-side-caching.md b/topics/client-side-caching.md
index dc09182511..cca69b9684 100644
--- a/topics/client-side-caching.md
+++ b/topics/client-side-caching.md
@@ -265,8 +265,8 @@ race condition. See the following example interaction, where we'll call
 the data connection "D" and the invalidation connection "I":
 
     [D] client -> server: GET foo
-    [I] server <- client: Invalidate foo (somebody else touched it)
-    [D] server <- client: "bar" (the reply of "GET foo")
+    [I] server -> client: Invalidate foo (somebody else touched it)
+    [D] server -> client: "bar" (the reply of "GET foo")
 
 As you can see, because the reply to the GET was slower to reach the
 client, we received the invalidation message before the actual data that
@@ -276,9 +276,9 @@ when we send the command with a placeholder:
 
     Client cache: set the local copy of "foo" to "caching-in-progress"
     [D] client-> server: GET foo.
-    [I] server <- client: Invalidate foo (somebody else touched it)
+    [I] server -> client: Invalidate foo (somebody else touched it)
     Client cache: delete "foo" from the local cache.
-    [D] server <- client: "bar" (the reply of "GET foo")
+    [D] server -> client: "bar" (the reply of "GET foo")
     Client cache: don't set "bar" since the entry for "foo" is missing.
 
 Such race condition is not possible when using a single connection for both

From 8b13059c2aeb19db2fbbd0c8134052aaadc4c65d Mon Sep 17 00:00:00 2001
From: antirez
Date: Wed, 29 Apr 2020 19:02:24 +0200
Subject: [PATCH 0339/1457] Document CSC NOLOOP option.

---
 topics/client-side-caching.md | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/topics/client-side-caching.md b/topics/client-side-caching.md
index 4a7c3d9f59..364b587fbf 100644
--- a/topics/client-side-caching.md
+++ b/topics/client-side-caching.md
@@ -257,6 +257,21 @@ In this mode we have the following main behaviors:
 * The server will consume a CPU proportional to the number of registered prefixes. If you have just a few, it is hard to see any difference. With a big number of prefixes the CPU cost can become quite large.
 * In this mode the server can perform the optimization of creating a single reply for all the clients subscribed to a given prefix, and send the same reply to all. This helps to lower the CPU usage.
 
+## The NOLOOP option
+
+By default client side tracking will send invalidation messages even to
+the client that modified the key. Sometimes clients want this, since they
+implement a very basic logic that does not involve automatically caching
+writes locally. However more advanced clients may want to cache even the
+writes they are doing in the local in-memory table. In such case receiving
+an invalidation message immediately after the write is a problem, since it
+will force the client to evict the value it just cached.
+
+In this case it is possible to use the `NOLOOP` option: it works both
+in normal and broadcasting mode. Using this option, clients are able to
+tell the server they don't want to receive invalidation messages for keys
+that are modified by themselves.
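+
+For example, a client using the redirection mode may combine the two
+features like this (an illustrative invocation, where the client ID 10 is
+arbitrary):
+
+    CLIENT TRACKING on REDIRECT 10 NOLOOP
+
+With the above setup, invalidation messages for the keys read by this
+connection are delivered to the connection with ID 10, but no invalidation
+message is generated when this same connection is the one modifying a key.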
+
 ## Avoiding race conditions

From a8db7219b21f8c8d0c084e160cb6e8e6061307b1 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 30 Apr 2020 10:15:59 +0200
Subject: [PATCH 0340/1457] Document MIGRATE AUTH2.

---
 commands/migrate.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/commands/migrate.md b/commands/migrate.md
index 928c953cfc..1635605eb2 100644
--- a/commands/migrate.md
+++ b/commands/migrate.md
@@ -64,10 +64,12 @@ just a single key exists.
 * `REPLACE` -- Replace existing key on the remote instance.
 * `KEYS` -- If the key argument is an empty string, the command will instead migrate all the keys that follow the `KEYS` option (see the above section for more info).
 * `AUTH` -- Authenticate with the given password to the remote instance.
+* `AUTH2` -- Authenticate with the given username and password pair (Redis 6 or greater ACL auth style).
 
 `COPY` and `REPLACE` are available only in 3.0 and above.
 `KEYS` is available starting with Redis 3.0.6.
 `AUTH` is available starting with Redis 4.0.7.
+`AUTH2` is available starting with Redis 6.0.0.
 
 @return

From e49a6dec3f2573342613d5fb20e3790c2cf80358 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 30 Apr 2020 10:23:37 +0200
Subject: [PATCH 0341/1457] Document CLIENT KILL USER.

---
 commands/client-kill.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/commands/client-kill.md b/commands/client-kill.md
index 092d07d897..e692df8d99 100644
--- a/commands/client-kill.md
+++ b/commands/client-kill.md
@@ -15,6 +15,7 @@ instead of killing just by address. The following filters are available:
 * `CLIENT KILL ADDR ip:port`. This is exactly the same as the old three-arguments behavior.
 * `CLIENT KILL ID client-id`. Allows to kill a client by its unique `ID` field, which was introduced in the `CLIENT LIST` command starting from Redis 2.8.12.
 * `CLIENT KILL TYPE type`, where *type* is one of `normal`, `master`, `slave` and `pubsub` (the `master` type is available from v3.2). This closes the connections of **all the clients** in the specified class. Note that clients blocked into the `MONITOR` command are considered to belong to the `normal` class.
+* `CLIENT KILL USER username`. Closes all the connections that are authenticated with the specified [ACL](/topics/acl) username, however it returns an error if the username does not map to an existing ACL user.
 * `CLIENT KILL SKIPME yes/no`. By default this option is set to `yes`, that is, the client calling the command will not get killed, however setting this option to `no` will have the effect of also killing the client calling the command.
 
 **Note: starting with Redis 5 the project is no longer using the slave word. You can use `TYPE replica` instead, however the old form is still supported for backward compatibility.**

From ced480f542389af01a92e50a45d89ab5bee14992 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 30 Apr 2020 11:27:49 +0200
Subject: [PATCH 0342/1457] CLIENT TRACKING documented.
--- commands.json | 60 +++++++++++++++++++++++++++++++++++++
 commands/client-tracking.md | 33 ++++++++++++++++++++
 2 files changed, 93 insertions(+)
 create mode 100644 commands/client-tracking.md

diff --git a/commands.json b/commands.json
index 33f8618f84..df24c2445f 100644
--- a/commands.json
+++ b/commands.json
@@ -475,6 +475,66 @@
     ],
     "group": "server"
   },
+  "CLIENT TRACKING": {
+    "summary": "Enable or disable server assisted client side caching support",
+    "complexity": "O(1)",
+    "arguments": [
+      {
+        "name": "status",
+        "type": "enum",
+        "enum": [
+          "ON",
+          "OFF"
+        ]
+      },
+      {
+        "command": "REDIRECT",
+        "name": "client-id",
+        "type": "integer",
+        "optional": true
+      },
+      {
+        "command": "PREFIX",
+        "name": "prefix",
+        "type": "string",
+        "optional": true
+      },
+      {
+        "name": "BCAST",
+        "type": "enum",
+        "enum": [
+          "BCAST"
+        ],
+        "optional": true
+      },
+      {
+        "name": "OPTIN",
+        "type": "enum",
+        "enum": [
+          "OPTIN"
+        ],
+        "optional": true
+      },
+      {
+        "name": "OPTOUT",
+        "type": "enum",
+        "enum": [
+          "OPTOUT"
+        ],
+        "optional": true
+      },
+      {
+        "name": "NOLOOP",
+        "type": "enum",
+        "enum": [
+          "NOLOOP"
+        ],
+        "optional": true
+      }
+    ],
+    "since": "6.0.0",
+    "group": "server"
+  },
   "CLIENT UNBLOCK": {
     "summary": "Unblock a client blocked in a blocking command from a different connection",
     "complexity": "O(log N) where N is the number of client connections",

diff --git a/commands/client-tracking.md b/commands/client-tracking.md
new file mode 100644
index 0000000000..cb2e7ef5e8
--- /dev/null
+++ b/commands/client-tracking.md
@@ -0,0 +1,33 @@
+This command enables the tracking feature of the Redis server, that is used
+for [server assisted client side caching](/topics/client-side-caching).
+
+When tracking is enabled, Redis remembers the keys that the connection
+requested, in order to send later invalidation messages when such keys are
+modified. Invalidation messages are sent in the same connection (only available
+when the RESP3 protocol is used) or redirected in a different connection
+(available also with RESP2 and Pub/Sub). A special *broadcasting* mode is
+available where clients participating in this protocol receive every
+notification just subscribing to given key prefixes, regardless of the
+keys that they requested. Given the complexity of the argument please
+refer to [the main client side caching documentation](/topics/client-side-caching) for the details. This manual page is only a reference for the options of this subcommand.
+
+In order to enable tracking, use:
+
+    CLIENT TRACKING on ... options ...
+
+The feature will remain active in the current connection for all its life,
+unless tracking is turned off with `CLIENT TRACKING off` at some point.
+
+The following is the list of options that modify the behavior of the
+command when enabling tracking:
+
+* `REDIRECT <id>`: send redirection messages to the connection with the specified ID. The connection must exist, you can get the ID of such connection using `CLIENT ID`. If the connection we are redirecting to is terminated, when in RESP3 mode the connection with tracking enabled will receive `tracking-redir-broken` push messages in order to signal the condition.
+* `BCAST`: enable tracking in broadcasting mode. In this mode invalidation messages are reported for all the prefixes specified, regardless of the keys requested by the connection. Instead when the broadcasting mode is not enabled, Redis will track which keys are fetched using read-only commands, and will report invalidation messages only for such keys.
+* `PREFIX <prefix>`: for broadcasting, register a given key prefix, so that notifications will be provided only for keys starting with this string. This option can be given multiple times to register multiple prefixes. If broadcasting is enabled without this option, Redis will send notifications for every key.
+* `OPTIN`: when broadcasting is NOT active, normally don't track keys in read only commands, unless they are called immediately after a `CLIENT CACHING yes` command.
+* `OPTOUT`: when broadcasting is NOT active, normally track keys in read only commands, unless they are called immediately after a `CLIENT CACHING off` command.
+* `NOLOOP`: don't send notifications about keys modified by this connection itself.
+
+@return
+
+@simple-string-reply: `OK` if the connection was successfully put in tracking mode or if the tracking mode was successfully disabled. Otherwise an error is returned.

From 4269446dad56492f83d03b02b00c33ebe7b04705 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 30 Apr 2020 11:48:28 +0200
Subject: [PATCH 0343/1457] CLIENT GETREDIR documented.

---
 commands.json               |  6 ++++++
 commands/client-getredir.md | 11 +++++++++++
 2 files changed, 17 insertions(+)
 create mode 100644 commands/client-getredir.md

diff --git a/commands.json b/commands.json
index df24c2445f..79eaaa648f 100644
--- a/commands.json
+++ b/commands.json
@@ -434,6 +434,12 @@
     "since": "2.6.9",
     "group": "server"
   },
+  "CLIENT GETREDIR": {
+    "summary": "Get tracking notifications redirection client ID if any",
+    "complexity": "O(1)",
+    "since": "6.0.0",
+    "group": "server"
+  },
   "CLIENT PAUSE": {
     "summary": "Stop processing commands from clients for some time",
     "complexity": "O(1)",

diff --git a/commands/client-getredir.md b/commands/client-getredir.md
new file mode 100644
index 0000000000..f8c218a239
--- /dev/null
+++ b/commands/client-getredir.md
@@ -0,0 +1,11 @@
+This command returns the client ID we are redirecting our
+[tracking](/topics/client-side-caching) notifications to. We set a client
+to redirect to when using `CLIENT TRACKING` to enable tracking. However in
+order to avoid forcing client libraries implementations to remember the
+ID notifications are redirected to, this command exists in order to improve
+introspection and allow clients to check later if redirection is active
+and towards which client ID.
+
+@return
+
+@integer-reply: the ID of the client we are redirecting the notifications to. The command returns `-1` if client tracking is not enabled, or `0` if client tracking is enabled but we are not redirecting the notifications to any client.

From 6415f3a1dedafe3a9f9da0a899487ca50bdc96d2 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 30 Apr 2020 11:57:15 +0200
Subject: [PATCH 0344/1457] CLIENT CACHING documented.
--- commands.json | 16 ++++++++++++++++
 commands/client-caching.md | 22 ++++++++++++++++++++++
 2 files changed, 38 insertions(+)
 create mode 100644 commands/client-caching.md

diff --git a/commands.json b/commands.json
index 79eaaa648f..a2cea351ad 100644
--- a/commands.json
+++ b/commands.json
@@ -361,6 +361,22 @@
     "since": "5.0.0",
     "group": "sorted_set"
   },
+  "CLIENT CACHING": {
+    "summary": "Instruct the server about tracking or not keys in the next request",
+    "complexity": "O(1)",
+    "arguments": [
+      {
+        "name": "mode",
+        "type": "enum",
+        "enum": [
+          "YES",
+          "NO"
+        ]
+      }
+    ],
+    "since": "6.0.0",
+    "group": "server"
+  },
   "CLIENT ID": {
     "summary": "Returns the client ID for the current connection",
     "complexity": "O(1)",

diff --git a/commands/client-caching.md b/commands/client-caching.md
new file mode 100644
index 0000000000..d857811329
--- /dev/null
+++ b/commands/client-caching.md
@@ -0,0 +1,22 @@
+This command controls the tracking of the keys in the next command executed
+by the connection, when tracking is enabled in `OPTIN` or `OPTOUT` mode.
+Please check the
+[client side caching documentation](/topics/client-side-caching) for
+background information.
+
+When tracking is enabled in Redis, using the `CLIENT TRACKING` command, it is
+possible to specify the `OPTIN` or `OPTOUT` options, so that keys
+in read only commands are not automatically remembered by the server to
+be invalidated later. When we are in `OPTIN` mode, we can enable the
+tracking of the keys in the next command by calling `CLIENT CACHING yes`
+immediately before it. Similarly when we are in `OPTOUT` mode, and keys
+are normally tracked, we can avoid having the keys in the next command
+tracked using `CLIENT CACHING no`.
+
+Basically the command sets a state in the connection, that is valid only
+for the next command execution, that will modify the behavior of client
+tracking.
+
+@return
+
+@simple-string-reply: `OK` or an error if the argument is not yes or no.

From e5564ad9e2208e77bf0b98dd8a77bce1b7b07877 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 30 Apr 2020 13:15:53 +0200
Subject: [PATCH 0345/1457] Add HELLO to commands.json and fix groups.
--- commands.json | 54 +++++++++++++++++++++++++++++++++++++++------------ 1 file changed, 42 insertions(+), 12 deletions(-) diff --git a/commands.json b/commands.json index a2cea351ad..0cbe91131c 100644 --- a/commands.json +++ b/commands.json @@ -375,13 +375,13 @@ } ], "since": "6.0.0", - "group": "server" + "group": "connectin" }, "CLIENT ID": { "summary": "Returns the client ID for the current connection", "complexity": "O(1)", "since": "5.0.0", - "group": "server" + "group": "connection" }, "CLIENT KILL": { "summary": "Kill the connection of a client", @@ -423,7 +423,7 @@ } ], "since": "2.4.0", - "group": "server" + "group": "connection" }, "CLIENT LIST": { "summary": "Get the list of client connections", @@ -442,19 +442,19 @@ } ], "since": "2.4.0", - "group": "server" + "group": "connection" }, "CLIENT GETNAME": { "summary": "Get the current connection name", "complexity": "O(1)", "since": "2.6.9", - "group": "server" + "group": "connection" }, "CLIENT GETREDIR": { "summary": "Get tracking notifications redirection client ID if any", "complexity": "O(1)", "since": "6.0.0", - "group": "server" + "group": "connection" }, "CLIENT PAUSE": { "summary": "Stop processing commands from clients for some time", @@ -466,7 +466,7 @@ } ], "since": "2.9.50", - "group": "server" + "group": "connection" }, "CLIENT REPLY": { "summary": "Instruct the server whether to reply to commands", @@ -483,7 +483,7 @@ } ], "since": "3.2", - "group": "server" + "group": "connection" }, "CLIENT SETNAME": { "summary": "Set the current connection name", @@ -495,7 +495,7 @@ "type": "string" } ], - "group": "server" + "group": "connection" }, "CLIENT TRACKING": { "summary": "Enable or disable server assisted client side caching support", @@ -555,7 +555,7 @@ } ], "since": "6.0.0", - "group": "server" + "group": "connection" }, "CLIENT UNBLOCK": { "summary": "Unblock a client blocked in a blocking command from a different connection", @@ -576,7 +576,7 @@ } ], "since": "5.0.0", - "group": "server" + "group": "connection" }, "CLUSTER ADDSLOTS": { "summary": "Assign new hash slots to receiving node", @@ -1462,6 +1462,36 @@ "since": "2.0.0", "group": "hash" }, + "HELLO": { + "summary": "switch Redis protocol", + "complexity": "O(1)", + "arguments": [ + { + "name": "protover", + "type": "integer" + }, + { + "command": "AUTH", + "name": [ + "username", + "password" + ], + "type": [ + "string", + "string" + ], + "optional": true + }, + { + "command": "SETNAME", + "name": "clientname", + "type": "string", + "optional": true + } + ], + "since": "6.0.0", + "group": "connection" + }, "HEXISTS": { "summary": "Determine if a hash field exists", "complexity": "O(1)", @@ -3140,7 +3170,7 @@ } ], "since": "4.0.0", - "group": "connection" + "group": "server" }, "SYNC": { "summary": "Internal command used for replication", From c66b1e7042842435a1145f271fb4076932a8f2ef Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 30 Apr 2020 13:29:28 +0200 Subject: [PATCH 0346/1457] HELLO command documented. --- commands/hello.md | 40 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 40 insertions(+) create mode 100644 commands/hello.md diff --git a/commands/hello.md b/commands/hello.md new file mode 100644 index 0000000000..9ad3107980 --- /dev/null +++ b/commands/hello.md @@ -0,0 +1,40 @@ +Switch the connection to a different protocol. Redis version 6 or greater +are able to support two protocols, the old protocol, RESP2, and a new +one introduced with Redis 6, RESP3. 
RESP3 has certain advantages since when +the connection is in this mode, Redis is able to reply with more semantical +replies: for instance `HGETALL` will return a *map type*, so a client +library implementation no longer requires to know in advance to translate the +array into a hash before returning it to the caller. For a full +coverage of RESP3 please [check this repository](https://github.com/antirez/resp3). + +Redis 6 connections starts in RESP2 mode, so clients implementing RESP2 do +not need to change (nor there are short term plans to drop support for +RESP2). Clients that want to handshake the RESP3 mode need to call the +`HELLO` command, using "3" as first argument. + + > HELLO 3 + 1# "server" => "redis" + 2# "version" => "6.0.0" + 3# "proto" => (integer) 3 + 4# "id" => (integer) 10 + 5# "mode" => "standalone" + 6# "role" => "master" + 7# "modules" => (empty array) + +The `HELLO` command has a useful reply that will state a number of facts +about the server: the exact version, the set of modules loaded, the client +ID, the replication role and so forth. Because of that, and given that +the `HELLO` command also works with "2" as argument, both in order to +downgrade the protocol back to version 2, or just to get the reply from +the server without switching the protocol, client library authors may +consider using this command instead of the canonical `PING` when setting +up the connection. + +This command accepts two non mandatory options: + +* `AUTH `: directly authenticate the connection other than switching to the specified protocol. In this way there is no need to call `AUTH` before `HELLO` when setting up new connections. Note that the username can be set to "default" in order to authenticate against a server that does not use ACLs, but the simpler `requirepass` machanism of Redis before version 6. +* `SETNAME `: this is equivalent to also call `CLIENT SETNAME`. + +@return + +@array-reply: a list of server properties. The reply is a map instead of an array when RESP3 is selected. The command returns an error if the protocol requested does not exist. From e6d396c2b4d21039988fbc163c1b688fe398ee55 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 30 Apr 2020 13:30:45 +0200 Subject: [PATCH 0347/1457] Fix typo in commands.json. --- commands.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands.json b/commands.json index 0cbe91131c..ae05b80b5f 100644 --- a/commands.json +++ b/commands.json @@ -375,7 +375,7 @@ } ], "since": "6.0.0", - "group": "connectin" + "group": "connection" }, "CLIENT ID": { "summary": "Returns the client ID for the current connection", From 19cfb21ec8ed45f6680535d0d18b3910165998ec Mon Sep 17 00:00:00 2001 From: vstath Date: Thu, 30 Apr 2020 23:12:15 +0200 Subject: [PATCH 0348/1457] Couple typo fix (#1285) --- commands/stralgo.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/commands/stralgo.md b/commands/stralgo.md index 377ec5ed83..5b0c4190bb 100644 --- a/commands/stralgo.md +++ b/commands/stralgo.md @@ -1,4 +1,4 @@ -The STRALGO implements complex algorithsm that operate on strings. +The STRALGO implements complex algorithms that operate on strings. Right now the only algorithm implemented is the LCS algorithm (longest common substring). However new algorithms could be implemented in the future. The goal of this command is to provide to Redis users algorithms that need @@ -16,7 +16,7 @@ STRALGO LCS [KEYS ...] [STRINGS ...] 
[LEN] [IDX] [MINRANGELEN] [WITHRANGELEN] The LCS subcommand implements the longest common subsequence algorithm. Note that this is different than the longest common string algorithm, since matching characters in the string does not need to be contiguous. -For instane the LCS between "foo" and "fao" is "fo", since scanning the two strings from left to right, the longest common set of characters is composed of the first "f" and then the "o". +For instance the LCS between "foo" and "fao" is "fo", since scanning the two strings from left to right, the longest common set of characters is composed of the first "f" and then the "o". LCS is very useful in order to evaluate how similar two strings are. Strings can represent many things. For instance if two strings are DNA sequences, the LCS will provide a measure of similarity between the two DNA sequences. If the strings represent some text edited by some user, the LCS could represent how different the new text is compared to the old one, and so forth. @@ -29,7 +29,7 @@ The basic usage is the following: "mytext" ``` -It is possible to also compute the LCS between the contet of two keys: +It is possible to also compute the LCS between the content of two keys: ``` > MSET key1 ohmytext key2 mynewtext From e3b6aadc6a4b8c422738d70e3160dcd7cffbb5aa Mon Sep 17 00:00:00 2001 From: Spencer Sellers Date: Thu, 30 Apr 2020 16:15:47 -0500 Subject: [PATCH 0349/1457] Fix typo in acl-genpass (#1286) --- commands/acl-genpass.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/acl-genpass.md b/commands/acl-genpass.md index 5e7626d053..025f03649e 100644 --- a/commands/acl-genpass.md +++ b/commands/acl-genpass.md @@ -10,7 +10,7 @@ By default (if /dev/urandom is available) the password is strong and can be used for other uses in the context of a Redis application, for instance in order to create unique session identifiers or other kind of unguessable and not colliding IDs. The password generation is also very cheap -because we don't really ask /de/urandom for bits at every execution. At +because we don't really ask /dev/urandom for bits at every execution. At startup Redis creates a seed using /dev/urandom, then it will use SHA256 in counter mode, with HMAC-SHA256(seed,counter) as primitive, in order to create more random bytes as needed. 
This means that the application developer From ee32a19c1a8ae372c60aea6ed7025f76a93a68b1 Mon Sep 17 00:00:00 2001 From: Simon Willison Date: Thu, 30 Apr 2020 14:16:03 -0700 Subject: [PATCH 0350/1457] Typo fix + grammar tweak (#1287) Co-authored-by: Itamar Haber --- commands/stralgo.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/stralgo.md b/commands/stralgo.md index 5b0c4190bb..638c668c08 100644 --- a/commands/stralgo.md +++ b/commands/stralgo.md @@ -29,7 +29,7 @@ The basic usage is the following: "mytext" ``` -It is possible to also compute the LCS between the content of two keys: +It is also possible to compute the LCS between the content of two keys: ``` > MSET key1 ohmytext key2 mynewtext From 4d8a9a67cecf38ad4bd1edc4f712df98e644a306 Mon Sep 17 00:00:00 2001 From: guybe7 Date: Fri, 1 May 2020 15:42:51 +0300 Subject: [PATCH 0351/1457] XINFO STREAM FULL docs (#1284) --- commands/xinfo.md | 71 +++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 71 insertions(+) diff --git a/commands/xinfo.md b/commands/xinfo.md index e795f00254..2aafbd508e 100644 --- a/commands/xinfo.md +++ b/commands/xinfo.md @@ -36,6 +36,73 @@ not be the same as the last entry ID in case some entry was deleted. Finally the full first and last entry in the stream are shown, in order to give some sense about what is the stream content. +* `XINFO STREAM FULL [COUNT ]` + +In this form the command returns the entire state of the stream, including +entries, groups, consumers and PELs. This form is available since Redis 6.0. + +``` +> XADD mystream * foo bar +"1588152471065-0" +> XADD mystream * foo bar2 +"1588152473531-0" +> XGROUP CREATE mystream mygroup 0-0 +OK +> XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS mystream > +1) 1) "mystream" + 2) 1) 1) "1588152471065-0" + 2) 1) "foo" + 2) "bar" +> XINFO STREAM mystream FULL + 1) "length" + 2) (integer) 2 + 3) "radix-tree-keys" + 4) (integer) 1 + 5) "radix-tree-nodes" + 6) (integer) 2 + 7) "last-generated-id" + 8) "1588152473531-0" + 9) "entries" +10) 1) 1) "1588152471065-0" + 2) 1) "foo" + 2) "bar" + 2) 1) "1588152473531-0" + 2) 1) "foo" + 2) "bar2" +11) "groups" +12) 1) 1) "name" + 2) "mygroup" + 3) "last-delivered-id" + 4) "1588152471065-0" + 5) "pel-count" + 6) (integer) 1 + 7) "pending" + 8) 1) 1) "1588152471065-0" + 2) "Alice" + 3) (integer) 1588152520299 + 4) (integer) 1 + 9) "consumers" + 10) 1) 1) "name" + 2) "Alice" + 3) "seen-time" + 4) (integer) 1588152520299 + 5) "pel-count" + 6) (integer) 1 + 7) "pending" + 8) 1) 1) "1588152471065-0" + 2) (integer) 1588152520299 + 3) (integer) 1 +``` + +The reported information contains all of the fields reported by the simple +form of `XINFO STREAM`, with some additional information: +1. Stream entries are returned, including fields and values. +2. Groups, consumers and PELs are returned. +The `COUNT` option is used to limit the amount of stream/PEL entries that are +returned (The first entries are returned). The default `COUNT` is 10 and +a `COUNT` of 0 means that all entries will be returned (Execution time may be +long if the stream has a lot of entries) + * `XINFO GROUPS ` In this form we just get as output all the consumer groups associated with the @@ -104,3 +171,7 @@ remember the exact syntax, by using the `HELP` subcommand: 4) STREAM -- Show information about the stream. 5) HELP ``` + +@history + +* `>= 6.0.0`: Added the `FULL` option to `XINFO STREAM`. 
From bd2ff539dfa81d8cfce920b87d282c7ecf87939a Mon Sep 17 00:00:00 2001 From: Wen Hui Date: Fri, 1 May 2020 08:47:24 -0400 Subject: [PATCH 0352/1457] STRALGO LCS STRINGS LEN should return length (#1290) --- commands/stralgo.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/stralgo.md b/commands/stralgo.md index 638c668c08..cb74d29f5b 100644 --- a/commands/stralgo.md +++ b/commands/stralgo.md @@ -42,7 +42,7 @@ Sometimes we need just the length of the match: ``` > STRALGO LCS STRINGS ohmytext mynewtext LEN -"mytext" +6 ``` However what is often very useful, is to know the match position in each strings: From 62bb487caff5daee8d872a9a351d5e205d39860b Mon Sep 17 00:00:00 2001 From: Wen Hui Date: Fri, 1 May 2020 08:47:33 -0400 Subject: [PATCH 0353/1457] add persist event notification (#1289) * add persist event notification * Update notifications.md Co-authored-by: Itamar Haber --- topics/notifications.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/topics/notifications.md b/topics/notifications.md index f40237e725..0c774d5429 100644 --- a/topics/notifications.md +++ b/topics/notifications.md @@ -133,13 +133,14 @@ Different commands generate different kind of events according to the following * `ZREMBYRANK` generates a single `zrembyrank` event. When the resulting sorted set is empty and the key is generated, an additional `del` event is generated. * `ZINTERSTORE` and `ZUNIONSTORE` respectively generate `zinterstore` and `zunionstore` events. In the special case the resulting sorted set is empty, and the key where the result is stored already exists, a `del` event is generated since the key is removed. * `XADD` generates an `xadd` event, possibly followed an `xtrim` event when used with the `MAXLEN` subcommand. -* `XDEL` generates a single `xdel` event even when multiple entries are are deleted. +* `XDEL` generates a single `xdel` event even when multiple entries are deleted. * `XGROUP CREATE` generates an `xgroup-create` event. * `XGROUP DELCONSUMER` generates an `xgroup-delconsumer` event. * `XGROUP DESTROY` generates an `xgroup-destroy` event. * `XGROUP SETID` generates an `xgroup-setid` event. * `XSETID` generates an `xsetid` event. * `XTRIM` generates an `xtrim` event. +* `PERSIST` generates a `persist` event if the expiry time associated with key has been successfully deleted. * Every time a key with a time to live associated is removed from the data set because it expired, an `expired` event is generated. * Every time a key is evicted from the data set in order to free memory as a result of the `maxmemory` policy, an `evicted` event is generated. From 0faa788445f667ba41adb7fcc85dd5612f3631a4 Mon Sep 17 00:00:00 2001 From: Jamie Scott Date: Fri, 1 May 2020 06:12:00 -0700 Subject: [PATCH 0354/1457] typo (#1288) --- commands/acl-load.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/acl-load.md b/commands/acl-load.md index 5178e4398b..521c1a6594 100644 --- a/commands/acl-load.md +++ b/commands/acl-load.md @@ -4,7 +4,7 @@ the current ACL rules with the ones defined in the file. The command makes sure to have an *all or nothing* behavior, that is: * If every line in the file is valid, all the ACLs are loaded. -* If oen or more line in the file is not valid, nothing is loaded, and the old ACL rules defined in the server memory continue to be used. +* If one or more line in the file is not valid, nothing is loaded, and the old ACL rules defined in the server memory continue to be used. 
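In practice the *all or nothing* behavior means a reload can be attempted unconditionally, since a rejected file leaves the previous rules in force. A minimal sketch, assuming the redis-py client and a server started with an `aclfile` directive:

```python
# Attempt an ACL reload; if any line of the file is invalid the server
# refuses the whole file and keeps serving with the old in-memory rules.
import redis

r = redis.Redis(decode_responses=True)
try:
    r.execute_command("ACL", "LOAD")
    print("ACL file loaded, new rules are active")
except redis.exceptions.ResponseError as err:
    print("ACL LOAD refused, previous rules still in force:", err)
```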
@return From 96b8379e722c2e2e0996dc5577007b3a994128f3 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Fri, 1 May 2020 16:53:08 +0300 Subject: [PATCH 0355/1457] Formatting --- commands/xinfo.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/commands/xinfo.md b/commands/xinfo.md index 2aafbd508e..e7c8689424 100644 --- a/commands/xinfo.md +++ b/commands/xinfo.md @@ -96,8 +96,10 @@ OK The reported information contains all of the fields reported by the simple form of `XINFO STREAM`, with some additional information: + 1. Stream entries are returned, including fields and values. 2. Groups, consumers and PELs are returned. + The `COUNT` option is used to limit the amount of stream/PEL entries that are returned (The first entries are returned). The default `COUNT` is 10 and a `COUNT` of 0 means that all entries will be returned (Execution time may be From 26d1cb0b4b0971a4e845b6272abef8b3b0d948d3 Mon Sep 17 00:00:00 2001 From: dengliming <7796156+dengliming@users.noreply.github.com> Date: Sat, 2 May 2020 23:12:25 +0800 Subject: [PATCH 0356/1457] Correct STRALGO command instructions (#1293) --- commands/stralgo.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/stralgo.md b/commands/stralgo.md index cb74d29f5b..b45760c75b 100644 --- a/commands/stralgo.md +++ b/commands/stralgo.md @@ -11,7 +11,7 @@ the argument must be "LCS", since this is the only implemented one. ## LCS algorithm ``` -STRALGO LCS [KEYS ...] [STRINGS ...] [LEN] [IDX] [MINRANGELEN] [WITHRANGELEN] +STRALGO LCS [KEYS ...] [STRINGS ...] [LEN] [IDX] [MINMATCHLEN ] [WITHMATCHLEN] ``` The LCS subcommand implements the longest common subsequence algorithm. Note that this is different than the longest common string algorithm, since matching characters in the string does not need to be contiguous. From d806cfcb800107e6ed096205ac4b0ca85fc1b522 Mon Sep 17 00:00:00 2001 From: Thiania Date: Sun, 3 May 2020 02:13:48 +0800 Subject: [PATCH 0357/1457] Add dbx module in list (#1292) Co-authored-by: Kenneth Cheng --- modules.json | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/modules.json b/modules.json index c571137c32..92f6e3d32c 100644 --- a/modules.json +++ b/modules.json @@ -317,5 +317,15 @@ "linfangrong" ], "stars": 0 + }, + { + "name": "dbx", + "license": "MIT", + "repository": "https://github.com/cscan/dbx", + "description": "Redis module for maintaining hash by simple SQL", + "authors": [ + "cscan" + ], + "stars": 0 } ] From dd4159397f115d53423c21337eedb04d3258d291 Mon Sep 17 00:00:00 2001 From: patrikx3 Date: Tue, 5 May 2020 14:23:32 +0200 Subject: [PATCH 0358/1457] Updated p3x-redis-ui link (#1294) --- tools.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools.json b/tools.json index 8a897c100b..212f3f8d6c 100644 --- a/tools.json +++ b/tools.json @@ -662,7 +662,7 @@ "name": "p3x-redis-ui", "language": "javascript", "repository": "https://github.com/patrikx3/redis-ui/", - "url":"https://pages.corifeus.com/redis-ui/", + "url":"https://www.corifeus.com/redis-ui/", "description": "📡 P3X Redis UI that uses Socket.IO, AngularJs Material and IORedis with statistics, console - terminal, tree, dark mode, internationalization, multiple connections, web and desktop by Electron. Works as an app without Node.JS GUI or with the latest Node.Js version. 
Can test it at https://p3x.redis.patrikx3.com/.", "authors": ["patrikx3"] }, From 02b3d1a345093c1794fd86273e9d516fffd3b819 Mon Sep 17 00:00:00 2001 From: laixintao Date: Tue, 5 May 2020 23:28:29 +0800 Subject: [PATCH 0359/1457] bugfix: is a html tag, should be `` (#1297) --- commands/xinfo.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/xinfo.md b/commands/xinfo.md index e7c8689424..1d28553cbe 100644 --- a/commands/xinfo.md +++ b/commands/xinfo.md @@ -101,7 +101,7 @@ form of `XINFO STREAM`, with some additional information: 2. Groups, consumers and PELs are returned. The `COUNT` option is used to limit the amount of stream/PEL entries that are -returned (The first entries are returned). The default `COUNT` is 10 and +returned (The first `` entries are returned). The default `COUNT` is 10 and a `COUNT` of 0 means that all entries will be returned (Execution time may be long if the stream has a lot of entries) From d1129b6ce0c0d81e8e801e47aa1496e4aea4723b Mon Sep 17 00:00:00 2001 From: laixintao Date: Sun, 10 May 2020 18:26:25 +0800 Subject: [PATCH 0360/1457] bugfix: fix reply message for bgsave. (#1303) ``` 127.0.0.1:6379> bgsave Background saving started ``` --- commands/bgsave.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/bgsave.md b/commands/bgsave.md index cf4c5bd662..f670502e7f 100644 --- a/commands/bgsave.md +++ b/commands/bgsave.md @@ -21,7 +21,7 @@ Please refer to the [persistence documentation][tp] for detailed information. @return -@simple-string-reply: `OK` if `BGSAVE` started correctly. +@simple-string-reply: `Background saving started` if `BGSAVE` started correctly. @history From 0c8ed1c44d734974c7de1e65316ea8d52e48ed97 Mon Sep 17 00:00:00 2001 From: Stephen Mitchell Date: Sun, 10 May 2020 06:31:10 -0400 Subject: [PATCH 0361/1457] Minor typo corrections (#1302) * Update auth.md * Additional typo corrections --- commands/acl-cat.md | 2 +- commands/auth.md | 2 +- commands/hello.md | 2 +- commands/stralgo.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/commands/acl-cat.md b/commands/acl-cat.md index 56616e789f..0eb256fab8 100644 --- a/commands/acl-cat.md +++ b/commands/acl-cat.md @@ -4,7 +4,7 @@ the specified category. ACL categories are very useful in order to create ACL rules that include or exclude a large set of commands at once, without specifying every single -command. For instance the following rule will let the user `karin` perform +command. For instance, the following rule will let the user `karin` perform everything but the most dangerous operations that may affect the server stability: diff --git a/commands/auth.md b/commands/auth.md index c9bf5b7fab..7c1e02a800 100644 --- a/commands/auth.md +++ b/commands/auth.md @@ -20,7 +20,7 @@ When Redis ACLs are used, the command should be given in an extended way: AUTH In order to authenticate the current connection with one of the connections -defined in the ACL list (see `ACL SETUSER`) and the offical [ACL guide](/topics/acl) for more information. +defined in the ACL list (see `ACL SETUSER`) and the official [ACL guide](/topics/acl) for more information. When ACLs are used, the single argument form of the command, where only the password is specified, assumes that the implicit username is "default". diff --git a/commands/hello.md b/commands/hello.md index 9ad3107980..0d389450a7 100644 --- a/commands/hello.md +++ b/commands/hello.md @@ -32,7 +32,7 @@ up the connection. 
This command accepts two non mandatory options: -* `AUTH `: directly authenticate the connection other than switching to the specified protocol. In this way there is no need to call `AUTH` before `HELLO` when setting up new connections. Note that the username can be set to "default" in order to authenticate against a server that does not use ACLs, but the simpler `requirepass` machanism of Redis before version 6. +* `AUTH `: directly authenticate the connection other than switching to the specified protocol. In this way there is no need to call `AUTH` before `HELLO` when setting up new connections. Note that the username can be set to "default" in order to authenticate against a server that does not use ACLs, but the simpler `requirepass` mechanism of Redis before version 6. * `SETNAME `: this is equivalent to also call `CLIENT SETNAME`. @return diff --git a/commands/stralgo.md b/commands/stralgo.md index b45760c75b..73d06bf38d 100644 --- a/commands/stralgo.md +++ b/commands/stralgo.md @@ -97,7 +97,7 @@ Finally to also have the match len: @return -For the LCS algorith: +For the LCS algorithm: * Without modifiers the string representing the longest common substring is returned. * When LEN is given the command returns the length of the longest common substring. From c0853c162defc400e3fba311dbde2622a29653a4 Mon Sep 17 00:00:00 2001 From: brian p o'rourke Date: Sun, 10 May 2020 03:34:03 -0700 Subject: [PATCH 0362/1457] GENPASS uses 256 bits by default, not 128 (#1301) Update docs to reflect change in antirez/redis: 639c8a1 --- commands/acl-genpass.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/acl-genpass.md b/commands/acl-genpass.md index 025f03649e..79b5e8d82b 100644 --- a/commands/acl-genpass.md +++ b/commands/acl-genpass.md @@ -27,7 +27,7 @@ hex character. @return -@bulk-string-reply: by default 64 bytes string representing 128 bits of pseudorandom data. Otherwise if an argument if needed, the output string length is the number of specified bits (rounded to the next multiple of 4) divided by 4. +@bulk-string-reply: by default 64 bytes string representing 256 bits of pseudorandom data. Otherwise if an argument if needed, the output string length is the number of specified bits (rounded to the next multiple of 4) divided by 4. @examples From a2006e069199b52f156e2b48a58b0797ebf954bc Mon Sep 17 00:00:00 2001 From: Kyle Banker Date: Mon, 18 May 2020 10:19:56 -0600 Subject: [PATCH 0363/1457] Update security.md This sentence makes less sense with all of the new security features in Redis 6.0. --- topics/security.md | 3 --- 1 file changed, 3 deletions(-) diff --git a/topics/security.md b/topics/security.md index 0b71fe30f0..ecc6f7103b 100644 --- a/topics/security.md +++ b/topics/security.md @@ -30,9 +30,6 @@ This is a specific example, but, in general, untrusted access to Redis should always be mediated by a layer implementing ACLs, validating user input, and deciding what operations to perform against the Redis instance. -In general, Redis is not optimized for maximum security but for maximum -performance and simplicity. 
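One building block for such a mediating layer is to mint credentials with `ACL GENPASS` and bind them to a narrowly scoped user. A minimal sketch, assuming the redis-py client and Redis 6.0+; the `app` username and the `cache:*` key pattern are illustrative assumptions:

```python
# Generate an unguessable password (256 bits by default, i.e. 64 hex
# characters) and create a user limited to GET/SET on cache:* keys.
import redis

r = redis.Redis(decode_responses=True)
password = r.execute_command("ACL", "GENPASS")
assert len(password) == 64
r.execute_command("ACL", "SETUSER", "app", "on",
                  ">" + password, "~cache:*", "+get", "+set")
```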
- Network security --- From 788615c1dadaef61ba72006c5f233bc7e4abcdc8 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Tue, 19 May 2020 16:18:49 +0300 Subject: [PATCH 0364/1457] Updates with 6.0 config file --- topics/config.md | 1 + 1 file changed, 1 insertion(+) diff --git a/topics/config.md b/topics/config.md index d1fe0b68f3..69c8301d15 100644 --- a/topics/config.md +++ b/topics/config.md @@ -26,6 +26,7 @@ The list of configuration directives, and their meaning and intended usage is available in the self documented example redis.conf shipped into the Redis distribution. +* The self documented [redis.conf for Redis 6.0](https://raw.githubusercontent.com/antirez/redis/6.0/redis.conf). * The self documented [redis.conf for Redis 5.0](https://raw.githubusercontent.com/antirez/redis/5.0/redis.conf). * The self documented [redis.conf for Redis 4.0](https://raw.githubusercontent.com/antirez/redis/4.0/redis.conf). * The self documented [redis.conf for Redis 3.2](https://raw.githubusercontent.com/antirez/redis/3.2/redis.conf). From 3240277971cea466e4e2caaa6aaeeca1f7a3f69a Mon Sep 17 00:00:00 2001 From: Gabriel Volpe Date: Tue, 19 May 2020 20:49:26 +0200 Subject: [PATCH 0365/1457] Add Redis4Cats Scala client (#1309) * Add Redis4Cats Scala client * Update clients.json Co-authored-by: Itamar Haber --- clients.json | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/clients.json b/clients.json index 6cb63dc77e..98b84810b1 100644 --- a/clients.json +++ b/clients.json @@ -193,7 +193,7 @@ "description": "A Redis client focused on streaming, with support for a print-like API, pipelining, Pub/Sub, and connection pooling.", "authors": ["stephensearles"] }, - + { "name": "go-resp3", "language": "Go", @@ -596,6 +596,16 @@ "active": true }, + { + "name": "Redis4Cats", + "language": "Scala", + "url": "https://redis4cats.profunktor.dev/", + "repository": "https://github.com/profunktor/redis4cats", + "description": "Purely functional Redis client for Cats Effect & Fs2", + "authors": ["volpegabriel87"], + "active": true + }, + { "name": "scala-redis", "language": "Scala", @@ -1643,7 +1653,7 @@ "authors": [], "active": true }, - + { "name": "Noderis", "language": "Node.js", From ee4ec471ac19fc293415fe14dd70aefb68ba26a6 Mon Sep 17 00:00:00 2001 From: Wen Hui Date: Wed, 20 May 2020 11:43:33 -0400 Subject: [PATCH 0366/1457] fix typo in CLIENT REDIR command (#1311) --- commands/client-getredir.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/client-getredir.md b/commands/client-getredir.md index f8c218a239..2cc326957f 100644 --- a/commands/client-getredir.md +++ b/commands/client-getredir.md @@ -4,7 +4,7 @@ to redirect to when using `CLIENT TRACKING` to enable tracking. However in order to avoid forcing client libraries implementations to remember the ID notifications are redirected to, this command exists in order to improve introspection and allow clients to check later if redirection is active -ad towards which client ID. +and towards which client ID. 
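A minimal sketch of this introspection, assuming the redis-py client with single-connection instances (tracking and redirection are per-connection state, so a pooled client could spread these commands over different sockets):

```python
# One connection receives invalidations; the other enables tracking
# redirected to it, then checks the redirection with CLIENT GETREDIR.
import redis

listener = redis.Redis(single_connection_client=True, decode_responses=True)
worker = redis.Redis(single_connection_client=True, decode_responses=True)

target_id = listener.client_id()
# A real listener would also SUBSCRIBE to __redis__:invalidate in order to
# consume the invalidation messages; this sketch only checks the wiring.
worker.execute_command("CLIENT", "TRACKING", "on", "REDIRECT", target_id)
print(worker.execute_command("CLIENT", "GETREDIR"))  # same value as target_id
```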
@return From 08d0e2c4a3c86ca92e9f6e175fb20c74cc2ebdaf Mon Sep 17 00:00:00 2001 From: Wen Hui Date: Wed, 20 May 2020 11:44:48 -0400 Subject: [PATCH 0367/1457] change to correct command for disabling caching in OPTOUT mode (#1310) --- commands/client-tracking.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/client-tracking.md b/commands/client-tracking.md index cb2e7ef5e8..8300b5114e 100644 --- a/commands/client-tracking.md +++ b/commands/client-tracking.md @@ -25,7 +25,7 @@ command when enabling tracking: * `BCAST`: enable tracking in broadcasting mode. In this mode invalidation messages are reported for all the prefixes specified, regardless of the keys requested by the connection. Instead when the broadcasting mode is not enabled, Redis will track which keys are fetched using read-only commands, and will report invalidation messages only for such keys. * `PREFIX `: for broadcasting, register a given key prefix, so that notifications will be provided only for keys starting with this string. This option can be given multiple times to register multiple prefixes. If broadcasting is enabled without this option, Redis will send notifications for every key. * `OPTIN`: when broadcasting is NOT active, normally don't track keys in read only commands, unless they are called immediately after a `CLIENT CACHING yes` command. -* `OPTOUT`: when broadcasting is NOT active, normally track keys in read only commands, unless they are called immediately after a `CLIENT CACHING off` command. +* `OPTOUT`: when broadcasting is NOT active, normally track keys in read only commands, unless they are called immediately after a `CLIENT CACHING no` command. * `NOLOOP`: don't send notifications about keys modified by this connection itself. @return From 2049001c09531d106f1a4ea374d31f4a20813358 Mon Sep 17 00:00:00 2001 From: OrdinaryYZH Date: Thu, 21 May 2020 22:05:36 +0800 Subject: [PATCH 0368/1457] Add other reply in the bgsave.md (#1307) * Update bgsave.md Add other reply in the doc. reference from https://github.com/antirez/redis/blob/fe9acb3469508b1af721d15320b89e2bb2abdd0c/src/rdb.c ```c /* BGSAVE [SCHEDULE] */ void bgsaveCommand(client *c) { int schedule = 0; /* The SCHEDULE option changes the behavior of BGSAVE when an AOF rewrite * is in progress. Instead of returning an error a BGSAVE gets scheduled. */ if (c->argc > 1) { if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,"schedule")) { schedule = 1; } else { addReply(c,shared.syntaxerr); return; } } rdbSaveInfo rsi, *rsiptr; rsiptr = rdbPopulateSaveInfo(&rsi); if (server.rdb_child_pid != -1) { addReplyError(c,"Background save already in progress"); } else if (hasActiveChildProcess()) { if (schedule) { server.rdb_bgsave_scheduled = 1; addReplyStatus(c,"Background saving scheduled"); } else { addReplyError(c, "Another child process is active (AOF?): can't BGSAVE right now. " "Use BGSAVE SCHEDULE in order to schedule a BGSAVE whenever " "possible."); } } else if (rdbSaveBackground(server.rdb_filename,rsiptr) == C_OK) { addReplyStatus(c,"Background saving started"); } else { addReply(c,shared.err); } } ``` * Edits and removed error text Co-authored-by: Itamar Haber --- commands/bgsave.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/bgsave.md b/commands/bgsave.md index f670502e7f..96e185a13c 100644 --- a/commands/bgsave.md +++ b/commands/bgsave.md @@ -21,7 +21,7 @@ Please refer to the [persistence documentation][tp] for detailed information. 
@return -@simple-string-reply: `Background saving started` if `BGSAVE` started correctly. +@simple-string-reply: `Background saving started` if `BGSAVE` started correctly or `Background saving scheduled` when used with the `SCHEDULE` subcommand. @history From 43783b13aa394e37c83bf87901413dda21893871 Mon Sep 17 00:00:00 2001 From: Uzlopak Date: Thu, 21 May 2020 16:12:17 +0200 Subject: [PATCH 0369/1457] fix semver version in commands.json (#1306) --- commands.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands.json b/commands.json index ae05b80b5f..d41355ec51 100644 --- a/commands.json +++ b/commands.json @@ -482,7 +482,7 @@ ] } ], - "since": "3.2", + "since": "3.2.0", "group": "connection" }, "CLIENT SETNAME": { From e580220c0ed034302271a1a253e4adbeef55ad0f Mon Sep 17 00:00:00 2001 From: Mickey Pashov Date: Thu, 21 May 2020 15:22:58 +0100 Subject: [PATCH 0370/1457] Based on: https://github.com/antirez/redis/commit/9ae8254e20c151bb153519a868933cbd13d4c164 (#1305) The number of alphanumeric bytes that are being output by GENPASS in Redis 6.0.0 GA is now 64. --- topics/acl.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/acl.md b/topics/acl.md index 195a4df527..941faf6fb1 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -385,10 +385,10 @@ command that generates passwords using the system cryptographic pseudorandom generator: > ACL GENPASS - "0e8ad12c1962355a3eb35e0ca686343b" + "dd721260bfe1b3d9601e7fbab36de6d04e2e67b0ef1c53de59d45950db0dd3cc" -The command outputs a 16 bytes (128 bit) pseudorandom string converted to a -32 byte alphanumerical string. This is long enough to avoid attacks and short +The command outputs a 32 bytes (256 bit) pseudorandom string converted to a +64 byte alphanumerical string. This is long enough to avoid attacks and short enough to be easy to manage, cut & paste, store and so forth. This is what you should use in order to generate Redis passwords. From 97cdd4b9e36b6e56f7a119912ff9698a230e6e0f Mon Sep 17 00:00:00 2001 From: Piraniks Date: Thu, 21 May 2020 16:24:32 +0200 Subject: [PATCH 0371/1457] Remove typos in ACL SETUSER. (#1304) Remove category typos from ACL SETUSER description. --- commands/acl-setuser.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands/acl-setuser.md b/commands/acl-setuser.md index ec6b65bc42..fb13514632 100644 --- a/commands/acl-setuser.md +++ b/commands/acl-setuser.md @@ -48,11 +48,11 @@ This is a list of all the supported Redis ACL rules: * `allkeys`: alias for `~*`, it allows the user to access all the keys. * `resetkey`: removes all the key patterns from the list of key patterns the user can access. * `+`: add this command to the list of the commands the user can call. Example: `+zadd`. -* `+@`: add all the commands in the specified categoty to the list of commands the user is able to execute. Example: `+@string` (adds all the string commands). For a list of categories check the `ACL CAT` command. +* `+@`: add all the commands in the specified category to the list of commands the user is able to execute. Example: `+@string` (adds all the string commands). For a list of categories check the `ACL CAT` command. * `+|`: add the specified command to the list of the commands the user can execute, but only for the specified subcommand. Example: `+config|get`. Generates an error if the specified command is already allowed in its full version for the specified user. 
Note: there is no symmetrical command to remove subcommands, you need to remove the whole command and re-add the subcommands you want to allow. This is much safer than removing subcommands, in the future Redis may add new dangerous subcommands, so configuring by subtraction is not good. * `allcommands`: alias of `+@all`. Adds all the commands there are in the server, including *future commands* loaded via module, to be executed by this user. * `-`. Like `+` but removes the command instead of adding it. -* `-@`: Like `-@` but removes all the commands in the category instead of adding them. +* `-@`: Like `+@` but removes all the commands in the category instead of adding them. * `nocommands`: alias for `-@all`. Removes all the commands, the user will no longer be able to execute anything. * `nopass`: the user is set as a "no password" user. It means that it will be possible to authenticate as such user with any password. By default, the `default` special user is set as "nopass". The `nopass` rule will also reset all the configured passwords for the user. * `>password`: Add the specified clear text password as an hashed password in the list of the users passwords. Every user can have many active passwords, so that password rotation will be simpler. The specified password is not stored in cleartext inside the server. Example: `>mypassword`. From 6c044c7527e77426b1fb81a741edbf7e0775ff60 Mon Sep 17 00:00:00 2001 From: Brad Dunbar Date: Thu, 21 May 2020 10:28:37 -0400 Subject: [PATCH 0372/1457] Correct BITFIELD syntax. (#1299) This snippet is missing (I think) a `SET` argument. 127.0.0.1:6379> BITFIELD mystring SET i8 #0 100 i8 #1 200 (error) ERR syntax error 127.0.0.1:6379> BITFIELD mystring SET i8 #0 100 SET i8 #1 200 1) (integer) 0 2) (integer) 0 127.0.0.1:6379> get mystring "d\xc8" --- commands/bitfield.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/bitfield.md b/commands/bitfield.md index 2f3e7b661a..8746870765 100644 --- a/commands/bitfield.md +++ b/commands/bitfield.md @@ -43,7 +43,7 @@ bit offset inside the string. However if the offset is prefixed with a `#` character, the specified offset is multiplied by the integer type width, so for example: - BITFIELD mystring SET i8 #0 100 i8 #1 200 + BITFIELD mystring SET i8 #0 100 SET i8 #1 200 Will set the first i8 integer at offset 0 and the second at offset 8. This way you don't have to do the math yourself inside your client if what From 7b94311ace2773ebc582c97c784cafdf48932e7a Mon Sep 17 00:00:00 2001 From: Stefan Miller <832146+stfnmllr@users.noreply.github.com> Date: Sun, 24 May 2020 17:35:58 +0200 Subject: [PATCH 0373/1457] ACL SETUSER: add username and optional to rules in commands.json (#1295) Co-authored-by: sm --- commands.json | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/commands.json b/commands.json index d41355ec51..c73feb7f11 100644 --- a/commands.json +++ b/commands.json @@ -27,10 +27,15 @@ "summary": "Modify or create the rules for a specific ACL user", "complexity": "O(N). 
Where N is the number of rules provided.", "arguments": [ + { + "name": "username", + "type": "string" + }, { "name": "rule", "type": "string", - "multiple": true + "multiple": true, + "optional": true } ], "since": "6.0.0", From 47bbcb158a9a9b3c4e466e932f56efdfc395277e Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Sun, 24 May 2020 18:40:12 +0300 Subject: [PATCH 0374/1457] Adds ACL's `GETUSER` and `HELP` (#1312) --- commands.json | 18 ++++++++++++++++++ commands/acl-getuser.md | 28 ++++++++++++++++++++++++++++ commands/acl-help.md | 6 ++++++ 3 files changed, 52 insertions(+) create mode 100644 commands/acl-getuser.md create mode 100644 commands/acl-help.md diff --git a/commands.json b/commands.json index c73feb7f11..14871e527d 100644 --- a/commands.json +++ b/commands.json @@ -23,6 +23,18 @@ "since": "6.0.0", "group": "server" }, + "ACL GETUSER": { + "summary": "Get the rules for a specific ACL user", + "complexity": "O(N). Where N is the number of password, command and pattern rules that the user has.", + "arguments": [ + { + "name": "username", + "type": "string" + } + ], + "since": "6.0.0", + "group": "server" + }, "ACL SETUSER": { "summary": "Modify or create the rules for a specific ACL user", "complexity": "O(N). Where N is the number of rules provided.", @@ -99,6 +111,12 @@ "since": "6.0.0", "group": "server" }, + "ACL HELP": { + "summary": "Show helpful text about the different subcommands", + "complexity": "O(1)", + "since": "6.0.0", + "group": "server" + }, "APPEND": { "summary": "Append a value to a key", "complexity": "O(1). The amortized time complexity is O(1) assuming the appended value is small and the already present value is of any size, since the dynamic string library used by Redis will double the free space available on every reallocation.", diff --git a/commands/acl-getuser.md b/commands/acl-getuser.md new file mode 100644 index 0000000000..c2e051df58 --- /dev/null +++ b/commands/acl-getuser.md @@ -0,0 +1,28 @@ +The command returns all the rules defined for an existing ACL user. + +Specifically, it lists the user's ACL flags, password hashes and key name +patterns. Note that command rules are returned as a string in the same +format used with the `ACL SETUSER` command. This description of command rules +reflects the user's effective permissions, so while it may not be identical to +the set of rules used to configure the user, it is still functionally identical. + +@array-reply: a list of ACL rule definitions for the user. + +@examples + +Here's the default configuration for the default user: + +``` +> ACL GETUSER default +1) "flags" +2) 1) "on" + 2) "allkeys" + 3) "allcommands" + 4) "nopass" +3) "passwords" +4) (empty array) +5) "commands" +6) "+@all" +7) "keys" +8) 1) "*" +``` diff --git a/commands/acl-help.md b/commands/acl-help.md new file mode 100644 index 0000000000..3ec1ffbbb0 --- /dev/null +++ b/commands/acl-help.md @@ -0,0 +1,6 @@ +The `ACL HELP` command returns a helpful text describing the different +subcommands. + +@return + +@array-reply: a list of subcommands and their descriptions From 9010df48d05e8963aba1394a66307d16accc6af5 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Sun, 24 May 2020 18:40:55 +0300 Subject: [PATCH 0375/1457] Fixes #1313 (#1314) --- commands/client-kill.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/client-kill.md b/commands/client-kill.md index e692df8d99..38af71cac0 100644 --- a/commands/client-kill.md +++ b/commands/client-kill.md @@ -15,7 +15,7 @@ instead of killing just by address. 
The following filters are available: * `CLIENT KILL ADDR ip:port`. This is exactly the same as the old three-arguments behavior. * `CLIENT KILL ID client-id`. Allows to kill a client by its unique `ID` field, which was introduced in the `CLIENT LIST` command starting from Redis 2.8.12. * `CLIENT KILL TYPE type`, where *type* is one of `normal`, `master`, `slave` and `pubsub` (the `master` type is available from v3.2). This closes the connections of **all the clients** in the specified class. Note that clients blocked into the `MONITOR` command are considered to belong to the `normal` class. -* `CLIENT KILL USER `username`. Closes all the connections that are authenticated with the specified [ACL](/topics/acl) username, however it returns an error if the username does not map to an existing ACL user. +* `CLIENT KILL USER username`. Closes all the connections that are authenticated with the specified [ACL](/topics/acl) username, however it returns an error if the username does not map to an existing ACL user. * `CLIENT KILL SKIPME yes/no`. By default this option is set to `yes`, that is, the client calling the command will not get killed, however setting this option to `no` will have the effect of also killing the client calling the command. **Note: starting with Redis 5 the project is no longer using the slave word. You can use `TYPE replica` instead, however the old form is still supported for backward compatibility.** From c0a398593b6c766168b676d9c6731a60f265ec34 Mon Sep 17 00:00:00 2001 From: Loris Cro Date: Tue, 26 May 2020 17:27:58 +0200 Subject: [PATCH 0376/1457] added zig-okredis to clients.json (#1316) --- clients.json | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/clients.json b/clients.json index 98b84810b1..1b1eabdb99 100644 --- a/clients.json +++ b/clients.json @@ -1798,5 +1798,16 @@ "repository": "https://github.com/tdv/redis-cpp", "description": "redis-cpp is a library in C++17 for executing Redis commands with support of the pipelines and publish / subscribe pattern", "active": true + }, + + { + "name": "OkRedis", + "language": "Zig", + "url": "https://github.com/kristoff-it/zig-okredis", + "repository": "https://github.com/kristoff-it/zig-okredis", + "description": "OkRedis is a zero-allocation client for Redis 6+ ", + "authors": ["kristoff-it"], + "recommended": true, + "active": true } ] From af693578bcbedf5ddc09901c4fd1e8b948c2f7ae Mon Sep 17 00:00:00 2001 From: Stefan Miller <832146+stfnmllr@users.noreply.github.com> Date: Tue, 26 May 2020 23:17:21 +0200 Subject: [PATCH 0377/1457] CLIENT TRACKING: fix type typo and add multiple to PREFIX command in commands.json (#1315) * ACL SETUSER: add username and optional to rules in commands.json * CLIENT TRACKING: fix type typo and add multiple to PREFIX command in commands.json Co-authored-by: sm --- commands.json | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/commands.json b/commands.json index 14871e527d..97e8334e6b 100644 --- a/commands.json +++ b/commands.json @@ -541,8 +541,9 @@ { "command": "PREFIX", "name": "prefix", - "type": "srting", - "optional": true + "type": "string", + "optional": true, + "multiple": true }, { "name": "BCAST", From f092dd3227cc74978853e379c0a7731bdaa324af Mon Sep 17 00:00:00 2001 From: Andy Pook Date: Thu, 28 May 2020 10:40:15 +0100 Subject: [PATCH 0378/1457] add "last-delivered-id" to XINFO GROUPS (#1318) Co-authored-by: Andy Pook --- commands/xinfo.md | 4 ++++ topics/streams-intro.md | 4 ++++ 2 files 
changed, 8 insertions(+) diff --git a/commands/xinfo.md b/commands/xinfo.md index 1d28553cbe..7176ea9412 100644 --- a/commands/xinfo.md +++ b/commands/xinfo.md @@ -118,12 +118,16 @@ stream: 4) (integer) 2 5) pending 6) (integer) 2 + 7) last-delivered-id + 8) "1588152489012-0" 2) 1) name 2) "some-other-group" 3) consumers 4) (integer) 1 5) pending 6) (integer) 0 + 7) last-delivered-id + 8) "1588152498034-0" ``` For each consumer group listed the command also shows the number of consumers diff --git a/topics/streams-intro.md b/topics/streams-intro.md index 86ee3ab663..e0a5334d31 100644 --- a/topics/streams-intro.md +++ b/topics/streams-intro.md @@ -512,12 +512,16 @@ The output shows information about how the stream is encoded internally, and als 4) (integer) 2 5) pending 6) (integer) 2 + 7) last-delivered-id + 8) "1588152489012-0" 2) 1) name 2) "some-other-group" 3) consumers 4) (integer) 1 5) pending 6) (integer) 0 + 7) last-delivered-id + 8) "1588152498034-0" ``` As you can see in this and in the previous output, the **XINFO** command outputs a sequence of field-value items. Because it is an observability command this allows the human user to immediately understand what information is reported, and allows the command to report more information in the future by adding more fields without breaking the compatibility with older clients. Other commands that must be more bandwidth efficient instead, like **XPENDING**, just report the information without the field names. From 4f11642e272ef5f2c9989a122c25c0c3c4b260d9 Mon Sep 17 00:00:00 2001 From: laixintao Date: Sat, 30 May 2020 21:15:50 +0800 Subject: [PATCH 0379/1457] bugfix: fix the return value of ACL LOG command. (#1319) --- commands/acl-log.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/commands/acl-log.md b/commands/acl-log.md index f22ddb65ef..adeaf8d540 100644 --- a/commands/acl-log.md +++ b/commands/acl-log.md @@ -10,8 +10,14 @@ Entries are displayed starting from the most recent. @return +When called to show security events: + @array-reply: a list of ACL security events. +When called with `RESET`: + +@simple-string-reply: `OK` if the security log was cleared. + @examples ``` From 5c33b2075e287ad73569dbc90d4f4c7e4be91d35 Mon Sep 17 00:00:00 2001 From: Paul Botros Date: Tue, 2 Jun 2020 09:23:57 -0700 Subject: [PATCH 0380/1457] Adding river to Redis tools (#1321) River is a library built on top of Redis Streams written in C++ and with Python bindings. It's a framework that people can use for a more tailored interface to Redis Streams for both streaming and persistence. --- tools.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/tools.json b/tools.json index 212f3f8d6c..664edb372c 100644 --- a/tools.json +++ b/tools.json @@ -730,5 +730,14 @@ "repository": "https://github.com/Oriflame/RedisMessaging.ReliableDelivery", "description": "This library provides reliability to delivering messages via Redis. 
By design Redis pub/sub message delivery is not reliable so it can happen that some messages can be lost due to network issues or they can be delivered more than once in case of Redis replication failure.", "authors": ["PetrKozelek" , "OriflameSoftware"] + }, + + { + "name": "River", + "language": "C++, Python", + "repository": "https://github.com/pbotros/river", + "description": "A structured streaming framework built atop Redis Streams with built-in support for persistence and indefinitely long streams.", + "authors": [] } + ] From 0d50977930e01bb790a8716739542021a1dc4975 Mon Sep 17 00:00:00 2001 From: Wen Hui Date: Tue, 2 Jun 2020 12:43:23 -0400 Subject: [PATCH 0381/1457] add tracking related metrics in INFO command (#1320) * add tracking related metrics in INFO command * Update info.md Co-authored-by: Itamar Haber --- commands/info.md | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/commands/info.md b/commands/info.md index fef8a28d82..efafc40955 100644 --- a/commands/info.md +++ b/commands/info.md @@ -75,6 +75,8 @@ Here is the meaning of all fields in the **clients** section: connections * `blocked_clients`: Number of clients pending on a blocking call (BLPOP, BRPOP, BRPOPLPUSH) +* `tracking_clients`: Number of clients with tracking enabled +* `clients_in_timeout_table`: Number of clients in the clients timeout table Here is the meaning of all fields in the **memory** section: @@ -223,6 +225,13 @@ Here is the meaning of all fields in the **stats** section: * `active_defrag_key_hits`: Number of keys that were actively defragmented * `active_defrag_key_misses`: Number of keys that were skipped by the active defragmentation process +* `tracking_total_keys`: Number of keys being tracked by the server +* `tracking_total_items`: Number of items, that is the sum of clients number for + each key, that are being tracked +* `tracking_total_prefixes`: Number of tracked prefixes in server's prefix table + (only applicable for broadcast mode) +* `unexpected_error_replies`: Number of unexpected error replies, that are types + of errors from an AOF load or replication Here is the meaning of all fields in the **replication** section: From 65057e5c33e1bee77277fb2e9f9eb1bb20d9c7f7 Mon Sep 17 00:00:00 2001 From: Alessandro Dal Grande Date: Sun, 7 Jun 2020 05:10:20 -0700 Subject: [PATCH 0382/1457] Fix typo in latency docs (#1325) --- topics/latency.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/latency.md b/topics/latency.md index 664ef124ba..42021b74c1 100644 --- a/topics/latency.md +++ b/topics/latency.md @@ -558,7 +558,7 @@ However the algorithm is adaptive and will loop if it finds more than 25% of key Basically this means that **if the database has many many keys expiring in the same second, and these make up at least 25% of the current population of keys with an expire set**, Redis can block in order to get the percentage of keys already expired below 25%. -This approach is needed in order to avoid using too much memory for keys that area already expired, and usually is absolutely harmless since it's strange that a big number of keys are going to expire in the same exact second, but it is not impossible that the user used `EXPIREAT` extensively with the same Unix time. 
+This approach is needed in order to avoid using too much memory for keys that are already expired, and usually is absolutely harmless since it's strange that a big number of keys are going to expire in the same exact second, but it is not impossible that the user used `EXPIREAT` extensively with the same Unix time. In short: be aware that many keys expiring at the same moment can be a source of latency. From fd65331e471b6b8a02a6bed174ea6de202b5a74b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?K=C3=A9vin=20Dunglas?= Date: Mon, 8 Jun 2020 12:32:29 +0200 Subject: [PATCH 0383/1457] fix: remove faulty cli annotation in data-types.md (#1326) --- topics/data-types.md | 1 - 1 file changed, 1 deletion(-) diff --git a/topics/data-types.md b/topics/data-types.md index 9ed2eaa50f..7643e74f8c 100644 --- a/topics/data-types.md +++ b/topics/data-types.md @@ -89,7 +89,6 @@ Hashes Redis Hashes are maps between string fields and string values, so they are the perfect data type to represent objects (e.g. A User with a number of fields like name, surname, age, and so forth): - @cli HMSET user:1000 username antirez password P1pp0 age 34 HGETALL user:1000 HSET user:1000 password 12345 From b317010ab73a405cef0773adb6b76257858d7bb6 Mon Sep 17 00:00:00 2001 From: laixintao Date: Tue, 9 Jun 2020 01:03:28 +0800 Subject: [PATCH 0384/1457] latency reset's args cloud be multiple. (#1263) LATENCY HELP commands says: ``` 127.0.0.1:6379> latency help 6) RESET [event ...] -- Resets latency data of one or more event classes. 7) (default: reset all data for all event classes) ``` --- commands.json | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/commands.json b/commands.json index 97e8334e6b..0bd9c6023a 100644 --- a/commands.json +++ b/commands.json @@ -4480,7 +4480,8 @@ { "name": "event", "type": "string", - "optional": true + "optional": true, + "multiple": true } ], "since": "2.8.13", From 7f8699f39ba751ada570f0c3d50df09677667d21 Mon Sep 17 00:00:00 2001 From: laixintao Date: Tue, 9 Jun 2020 01:04:33 +0800 Subject: [PATCH 0385/1457] bugfix: add USER option for CLIENT KILL command. (#1327) --- commands.json | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/commands.json b/commands.json index 0bd9c6023a..ff2b550c84 100644 --- a/commands.json +++ b/commands.json @@ -432,6 +432,12 @@ ], "optional": true }, + { + "command": "USER", + "name": "username", + "type": "string", + "optional": true + }, { "command": "ADDR", "name": "ip:port", From f15cd9a4faf120beedb746d0e58d90ae7f95b33e Mon Sep 17 00:00:00 2001 From: Wen Hui Date: Wed, 10 Jun 2020 10:04:38 -0400 Subject: [PATCH 0386/1457] fix spelling in acl doc (#1329) --- topics/acl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/acl.md b/topics/acl.md index 941faf6fb1..8aed9ca7a8 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -176,7 +176,7 @@ Now the user can do something, but will refuse to do other things: > GET cached:1234 (nil) > SET cached:1234 zap - (error) NOPERM this user has no permissions to run the 'set' command or its subcommnad + (error) NOPERM this user has no permissions to run the 'set' command or its subcommand Things are working as expected. In order to inspect the configuration of the user alice (remember that user names are case sensitive), it is possible to From 54017867e31dbb433750236bc64632b93097541c Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 12 Jun 2020 12:07:16 +0200 Subject: [PATCH 0387/1457] LPOS documented. 
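For quick reference, the semantics documented below can be exercised from a client as in this minimal sketch, assuming the redis-py library; `execute_command` is used because dedicated LPOS helpers may not exist in older client releases, and released servers ended up naming the rank option `RANK` rather than `FIRST`, so the option spelling may need adjusting:

```python
# Exercise LPOS as described in commands/lpos.md.
import redis

r = redis.Redis(decode_responses=True)
r.delete("mylist")
r.rpush("mylist", "a", "b", "c", 1, 2, 3, "c", "c")

print(r.execute_command("LPOS", "mylist", "c"))               # 2
print(r.execute_command("LPOS", "mylist", "c", "FIRST", 2))   # 6
print(r.execute_command("LPOS", "mylist", "c", "FIRST", -1))  # 7
print(r.execute_command("LPOS", "mylist", "c", "COUNT", 0))   # [2, 6, 7]
```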
---
 commands.json    | 34 ++++++++++++++++++++++++
 commands/lpos.md | 68 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 102 insertions(+)
 create mode 100644 commands/lpos.md

diff --git a/commands.json b/commands.json
index ff2b550c84..b22710d98e 100644
--- a/commands.json
+++ b/commands.json
@@ -1895,6 +1895,40 @@
     "since": "1.0.0",
     "group": "list"
   },
+  "LPOS": {
+    "summary": "Return the index of matching elements on a list",
+    "complexity": "O(N) where N is the number of elements in the list, for the average case. When searching for elements near the head or the tail of the list, or when the MAXLEN option is provided, the command may run in constant time.",
+    "arguments": [
+      {
+        "name": "key",
+        "type": "key"
+      },
+      {
+        "name": "element",
+        "type": "string"
+      },
+      {
+        "command": "FIRST",
+        "name": "rank",
+        "type": "string",
+        "optional": true
+      },
+      {
+        "command": "COUNT",
+        "name": "num-matches",
+        "type": "integer",
+        "optional": true
+      },
+      {
+        "command": "MAXLEN",
+        "name": "len",
+        "type": "integer",
+        "optional": true
+      }
+    ],
+    "since": "6.0.6",
+    "group": "list"
+  },
   "LPUSH": {
     "summary": "Prepend one or multiple elements to a list",
     "complexity": "O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.",
diff --git a/commands/lpos.md b/commands/lpos.md
new file mode 100644
index 0000000000..7e6cd508ea
--- /dev/null
+++ b/commands/lpos.md
@@ -0,0 +1,68 @@
+The command returns the index of matching elements inside a Redis list.
+By default, when no options are given, it will scan the list from head to tail,
+looking for the first match of "element". If the element is found, its index (the zero-based position in the list) is returned, otherwise if no match is found, NULL is returned.
+
+```
+> RPUSH mylist a b c 1 2 3 c c
+> LPOS mylist c
+2
+```
+
+The optional arguments and options are able to modify the command behavior.
+The `FIRST` option specifies the "rank" of the first element to return, in case there are multiple matches. A rank of 1 means to return the first match, 2 to return the second match, and so forth.
+
+For instance in the above example the element "c" is present multiple times, if I want the index of the second match, I'll write:
+
+```
+> LPOS mylist c FIRST 2
+6
+```
+
+That is, the second occurrence of "c" is at position 6.
+A negative "rank" as the `FIRST` argument tells `LPOS` to invert the search direction, starting from the tail to the head.
+
+So, we want to say, give me the first element starting from the tail of the list:
+
+```
+> LPOS mylist c FIRST -1
+7
+```
+
+Note that the indexes are still reported in the "natural" way, that is, considering the first element starting from the head of the list at index 0, the next element at index 1, and so forth. This basically means that the returned indexes are stable whatever the rank is positive or negative.
+
+Sometimes we want to return not just the Nth matching element, but the position of all the first N matching elements. This can be achieved using the `COUNT` option.
+
+```
+> LPOS mylist c COUNT 2
+[2,6]
+```
+
+We can combine `COUNT` and `FIRST`, so that `COUNT` will try to return up to the specified number of matches, but starting from the Nth match, as specified by the `FIRST` option.
+
+```
+> LPOS mylist c FIRST -1 COUNT 2
+[7,6]
+```
+
+When `COUNT` is used, it is possible to specify 0 as number of matches, as a way to tell the command we want all the matches found returned as an array of indexes.
This is better than giving a very large `COUNT` option because it is more general.
+
+```
+> LPOS mylist c COUNT 0
+[2,6,7]
+```
+
+When `COUNT` is used and no match is found, an empty array is returned. However when `COUNT` is not used and there are no matches, the command returns NULL.
+
+Finally, the `MAXLEN` option tells the command to compare the provided element only with a given maximum number of list items. So for instance specifying `MAXLEN 1000` will make sure that the command performs only 1000 comparison, effectively running the algorithm on a subset of the list (the first part or the last part depending on the fact we use a positive or negative rank). This is useful to limit the maximum complexity of the command. It is also useful when we expect the match to be found very early, but want to be sure that in case this is not true, the command does not take too much time to run.
+
+@return
+
+The command returns the integer representing the matching element, or null if there is no match. However if the `COUNT` option is given the command returns an array (empty if there are no matches).
+
+@examples
+
+```cli
+RPUSH mylist a b c d 1 2 3 4 3 3 3
+LPOS mylist 3
+LPOS mylist 3 COUNT 0 FIRST 2
+```

From 15772f54474d60e36b9c5ab4d50c42c03be8f9ec Mon Sep 17 00:00:00 2001
From: Itamar Haber
Date: Fri, 12 Jun 2020 16:05:00 +0300
Subject: [PATCH 0388/1457] Minor edits for LPOS (#1330)

---
 commands/lpos.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/commands/lpos.md b/commands/lpos.md
index 7e6cd508ea..e3452ca528 100644
--- a/commands/lpos.md
+++ b/commands/lpos.md
@@ -1,6 +1,6 @@
 The command returns the index of matching elements inside a Redis list.
 By default, when no options are given, it will scan the list from head to tail,
-looking for the first match of "element". If the element is found, its index (the zero-based position in the list) is returned, otherwise if no match is found, NULL is returned.
+looking for the first match of "element". If the element is found, its index (the zero-based position in the list) is returned. Otherwise, if no match is found, NULL is returned.
@@ -8,10 +8,10 @@ looking for the first match of "element". If the element is found, its index (th
 2
 ```
-The optional arguments and options are able to modify the command behavior.
+The optional arguments and options can modify the command's behavior.
 The `FIRST` option specifies the "rank" of the first element to return, in case there are multiple matches. A rank of 1 means to return the first match, 2 to return the second match, and so forth.
-For instance in the above example the element "c" is present multiple times, if I want the index of the second match, I'll write:
+For instance, in the above example the element "c" is present multiple times, if I want the index of the second match, I'll write:
 ```
 > LPOS mylist c FIRST 2
 6
 ```
-When `COUNT` is used, it is possible to specify 0 as number of matches, as a way to tell the command we want all the matches found returned as an array of indexes. This is better than giving a very large `COUNT` option because it is more general.
``` > LPOS mylist COUNT 0 @@ -53,11 +53,11 @@ When `COUNT` is used, it is possible to specify 0 as number of matches, as a way When `COUNT` is used and no match is found, an empty array is returned. However when `COUNT` is not used and there are no matches, the command returns NULL. -Finally, the `MAXLEN` option tells the command to compare the provided element only with a given maximum number of list items. So for instance specifying `MAXLEN 1000` will make sure that the command performs only 1000 comparison, effectively running the algorithm on a subset of the list (the first part or the last part depending on the fact we use a positive or negative rank). This is useful to limit the maximum complexity of the command. It is also useful when we expect the match to be found very early, but want to be sure that in case this is not true, the command does not take too much time to run. +Finally, the `MAXLEN` option tells the command to compare the provided element only with a given maximum number of list items. So for instance specifying `MAXLEN 1000` will make sure that the command performs only 1000 comparisons, effectively running the algorithm on a subset of the list (the first part or the last part depending on the fact we use a positive or negative rank). This is useful to limit the maximum complexity of the command. It is also useful when we expect the match to be found very early, but want to be sure that in case this is not true, the command does not take too much time to run. @return -The command returns the integer representing the matching element, or null if there is no match. However if the `COUNT` option is given the command returns an array (empty if there are no matches). +The command returns the integer representing the matching element, or null if there is no match. However, if the `COUNT` option is given the command returns an array (empty if there are no matches). @examples From c460d010a60df8dc37ef8715748361d0cb5b7fd7 Mon Sep 17 00:00:00 2001 From: Marc Gravell Date: Tue, 16 Jun 2020 15:44:34 +0100 Subject: [PATCH 0389/1457] sentinel docs (#1331) 1. mention that `SENTINEL replicas` even *exists* 2. add standard footnote 3. remove the obsolete sentinel-spec? --- topics/sentinel-spec.md | 477 ---------------------------------------- topics/sentinel.md | 10 +- 2 files changed, 6 insertions(+), 481 deletions(-) delete mode 100644 topics/sentinel-spec.md diff --git a/topics/sentinel-spec.md b/topics/sentinel-spec.md deleted file mode 100644 index d260562d9d..0000000000 --- a/topics/sentinel-spec.md +++ /dev/null @@ -1,477 +0,0 @@ -**WARNING:** this document is no longer in sync with the implementation of Redis Sentinel and will be removed in the next weeks. - -Redis Sentinel design draft 1.3 -=== - -Changelog: - -* 1.0 first version. -* 1.1 fail over steps modified: slaves are pointed to new master one after the other and not simultaneously. New section about monitoring slaves to ensure they are replicating correctly. -* 1.2 Fixed a typo in the fail over section about: critical error is in step 5 and not 6. Added TODO section. -* 1.3 Document updated to reflect the actual implementation of the monitoring and leader election. - -Introduction -=== - -Redis Sentinel is the name of the Redis high availability solution that's -currently under development. It has nothing to do with Redis Cluster and -is intended to be used by people that don't need Redis Cluster, but simply -a way to perform automatic fail over when a master instance is not functioning -correctly. 
- -The plan is to provide a usable beta implementation of Redis Sentinel in a -short time, preferably in mid July 2012. - -In short this is what Redis Sentinel will be able to do: - -* Monitor master and slave instances to see if they are available. -* Promote a slave to master when the master fails. -* Modify clients configurations when a slave is elected. -* Inform the system administrator about incidents using notifications. - -So the three different roles of Redis Sentinel can be summarized in the following three big aspects: - -* Monitoring. -* Notification. -* Automatic failover. - -The following document explains what is the design of Redis Sentinel in order -to accomplish this goals. - -Redis Sentinel idea -=== - -The idea of Redis Sentinel is to have multiple "monitoring devices" in -different places of your network, monitoring the Redis master instance. - -However this independent devices can't act without agreement with other -sentinels. - -Once a Redis master instance is detected as failing, for the failover process -to start, the sentinel must verify that there is a given level of agreement. - -The amount of sentinels, their location in the network, and the -configured quorum, select the desired behavior among many possibilities. - -Redis Sentinel does not use any proxy: clients reconfiguration is performed -running user-provided executables (for instance a shell script or a -Python program) in a user setup specific way. - -In what form it will be shipped -=== - -Redis Sentinel is just a special mode of the redis-server executable. - -If the redis-server is called with "redis-sentinel" as `argv[0]` (for instance -using a symbolic link or copying the file), or if --sentinel option is passed, -the Redis instance starts in sentinel mode and will only understand sentinel -related commands. All the other commands will be refused. - -The whole implementation of sentinel will live in a separated file sentinel.c -with minimal impact on the rest of the code base. However this solution allows -to use all the facilities already implemented inside Redis without any need -to reimplement them or to maintain a separated code base for Redis Sentinel. - -Sentinels networking -=== - -All the sentinels take persistent connections with: - -* The monitored masters. -* All its slaves, that are discovered using the master's INFO output. -* All the other Sentinels connected to this master, discovered via Pub/Sub. - -Sentinels use the Redis protocol to talk with each other, and to reply to -external clients. - -Redis Sentinels export a SENTINEL command. Subcommands of the SENTINEL -command are used in order to perform different actions. - -For instance the `SENTINEL masters` command enumerates all the monitored -masters and their states. However Sentinels can also reply to the PING command -as a normal Redis instance, so that it is possible to monitor a Sentinel -considering it a normal Redis instance. - -The list of networking tasks performed by every sentinel is the following: - -* A Sentinel PUBLISH its presence using the master Pub/Sub multiple times every five seconds. -* A Sentinel accepts commands using a TCP port. By default the port is 26379. -* A Sentinel constantly monitors masters, slaves, other sentinels sending PING commands. -* A Sentinel sends INFO commands to the masters and slaves every ten seconds in order to take a fresh list of connected slaves, the state of the master, and so forth. 
-* A Sentinel monitors the sentinel Pub/Sub "hello" channel in order to discover newly connected Sentinels, or to detect no longer connected Sentinels. The channel used is `__sentinel__:hello`. - -Sentinels discovering -=== - -To make the configuration of sentinels as simple as possible every sentinel -broadcasts its presence using the Redis master Pub/Sub functionality. - -Every sentinel is subscribed to the same channel, and broadcast information -about its existence to the same channel, including the Run ID of the Sentinel, -and the IP address and port where it is listening for commands. - -Every sentinel maintains a list of other sentinels Run ID, IP and port. -A sentinel that does no longer announce its presence using Pub/Sub for too -long time is removed from the list, assuming the Master appears to be working well. In that case a notification is delivered to the system administrator. - -Detection of failing masters -=== - -An instance is not available from the point of view of Redis Sentinel when -it is no longer able to reply to the PING command correctly for longer than -the specified number of seconds, consecutively. - -For a PING reply to be considered valid, one of the following conditions -should be true: - -* PING replied with +PONG. -* PING replied with -LOADING error. -* PING replied with -MASTERDOWN error. - -What is not considered an acceptable reply: - -* PING replied with -BUSY error. -* PING replied with -MISCONF error. -* PING reply not received after more than a specified number of milliseconds. - -PING should never reply with a different error code than the ones listed above -but any other error code is considered an acceptable reply by Redis Sentinel. - -Handling of -BUSY state -=== - -The -BUSY error is returned when a script is running for more time than the -configured script time limit. When this happens before triggering a fail over -Redis Sentinel will try to send a "SCRIPT KILL" command, that will only -succeed if the script was read-only. - -Subjectively down and Objectively down -=== - -From the point of view of a Sentinel there are two different error conditions for a master: - -* *Subjectively Down* (aka `S_DOWN`) means that a master is down from the point of view of a Sentinel. -* *Objectively Down* (aka `O_DOWN`) means that a master is subjectively down from the point of view of enough Sentinels to reach the configured quorum for that master. - -How Sentinels agree to mark a master `O_DOWN`. -=== - -Once a Sentinel detects that a master is in `S_DOWN` condition it starts to -send other sentinels a `SENTINEL is-master-down-by-addr` request every second. -The reply is stored inside the state that every Sentinel takes in memory. - -Ten times every second a Sentinel scans the state and checks if there are -enough Sentinels thinking that a master is down (this is not specific for -this operation, most state checks are performed with this frequency). - -If this Sentinel has already an `S_DOWN` condition for this master, and there -are enough other sentinels that recently reported this condition -(the validity time is currently set to 5 seconds), then the master is marked -as `O_DOWN` (Objectively Down). - -Note that the `O_DOWN` state is not propagated among Sentinels. Every single -Sentinel can reach independently this state. - -The SENTINEL is-master-down-by-addr command -=== - -Sentinels ask other Sentinels for the state of a master from their local point -of view using the `SENTINEL is-master-down-by-addr` command. 
This command -replies with a boolean value (in the form of a 0 or 1 integer reply, as -a first element of a multi bulk reply). - -However in order to avoid false positives, the command acts in the following -way: - -* If the specified ip and port is not known, 0 is returned. -* If the specified ip and port are found but don't belong to a Master instance, 0 is returned. -* If the Sentinel is in TILT mode (see later in this document) 0 is returned. -* The value of 1 is returned only if the instance is known, is a master, is flagged `S_DOWN` and the Sentinel is in TILT mode. - -Duplicate Sentinels removal -=== - -In order to reach the configured quorum we absolutely want to make sure that -the quorum is reached by different physical Sentinel instances. Under -no circumstance we should get agreement from the same instance that for some -reason appears to be two or multiple distinct Sentinel instances. - -This is enforced by an aggressive removal of duplicated Sentinels: every time -a Sentinel sends a message in the Hello Pub/Sub channel with its address -and runid, if we can't find a perfect match (same runid and address) inside -the Sentinels table for that master, we remove any other Sentinel with the same -runid OR the same address. And later add the new Sentinel. - -For instance if a Sentinel instance is restarted, the Run ID will be different, -and the old Sentinel with the same IP address and port pair will be removed. - -Starting the failover: Leaders and Observers -=== - -The fact that a master is marked as `O_DOWN` is not enough to star the -failover process. What Sentinel should start the failover is also to be -decided. - -Also Sentinels can be configured in two ways: only as monitors that can't -perform the fail over, or as Sentinels that can start the failover. - -What is desirable is that only a Sentinel will start the failover process, -and this Sentinel should be selected among the Sentinels that are allowed -to perform the failover. - -In Sentinel there are two roles during a fail over: - -* The Leader Sentinel is the one selected to perform the failover. -* The Observers Sentinels are the other sentinels just following the failover process without doing active operations. - -So the condition to start the failover is: - -* A Master in `O_DOWN` condition. -* A Sentinel that is elected Leader. - -Leader Sentinel election -=== - -The election process works as follows: - -* Every Sentinel with a master in `O_DOWN` condition updates its internal state with frequency of 10 HZ to refresh what is the *Subjective Leader* from its point of view. - -A Subjective Leader is selected in this way by every sentinel. - -* Every Sentinel we know about a given master, that is reachable (no `S_DOWN` state), that is allowed to perform the failover (this Sentinel-specific configuration is propagated using the Hello channel), is a possible candidate. -* Among all the possible candidates, the one with lexicographically smaller Run ID is selected. - -Every time a Sentinel replies with to the `MASTER is-sentinel-down-by-addr` command it also replies with the Run ID of its Subjective Leader. - -Every Sentinel with a failing master (`O_DOWN`) checks its subjective leader -and the subjective leaders of all the other Sentinels with a frequency of -10 HZ, and will flag itself as the Leader if the following conditions happen: - -* It is the Subjective Leader of itself. -* At least N-1 other Sentinels that see the master as down, and are reachable, also think that it is the Leader. 
With N being the quorum configured for this master. -* At least 50% + 1 of all the Sentinels involved in the voting process (that are reachable and that also see the master as failing) should agree on the Leader. - -So for instance if there are a total of three sentinels, the master is failing, -and all the three sentinels are able to communicate (no Sentinel is failing) -and the configured quorum for this master is 2, a Sentinel will feel itself -an Objective Leader if at least it and another Sentinel is agreeing that -it is the subjective leader. - -Once a Sentinel detects that it is the objective leader, it flags the master -with `FAILOVER_IN_PROGRESS` and `IM_THE_LEADER` flags, and starts the failover -process in `SENTINEL_FAILOVER_DELAY` (5 seconds currently) plus a random -additional time between 0 milliseconds and 10000 milliseconds. - -During that time we ask INFO to all the slaves with an increased frequency -of one time per second (usually the period is 10 seconds). If a slave is -turned into a master in the meantime the failover is suspended and the -Leader clears the `IM_THE_LEADER` flag to turn itself into an observer. - -Guarantees of the Leader election process -=== - -As you can see for a Sentinel to become a leader the majority is not strictly -required. A user can force the majority to be needed just setting the master -quorum to, for instance, the value of 5 if there are a total of 9 sentinels. - -However it is also possible to set the quorum to the value of 2 with 9 -sentinels in order to improve the resistance to netsplits or failing Sentinels -or other error conditions. In such a case the protection against race -conditions (multiple Sentinels starting to perform the fail over at the same -time) is given by the random delay used to start the fail over, and the -continuous monitor of the slave instances to detect if another Sentinel -(or a human) started the failover process. - -Moreover the slave to promote is selected using a deterministic process to -minimize the chance that two different Sentinels with full vision of the -working slaves may pick two different slaves to promote. - -However it is possible to easily imagine netsplits and specific configurations -where two Sentinels may start to act as a leader at the same time, electing two -different slaves as masters, in two different parts of the net that can't -communicate. The Redis Sentinel user should evaluate the network topology and -select an appropriate quorum considering his or her goals and the different -trade offs. - -How observers understand that the failover started -=== - -An observer is just a Sentinel that does not believe to be the Leader, but -still sees a master in `O_DOWN` condition. - -The observer is still able to follow and update the internal state based on -what is happening with the failover, but does not directly rely on the -Leader to communicate with it to be informed by progresses. It simply observes -the state of the slaves to understand what is happening. - -Specifically the observers flags the master as `FAILOVER_IN_PROGRESS` if a slave -attached to a master turns into a master (observers can see it in the INFO output). An observer will also consider the failover complete once all the other -reachable slaves appear to be slaves of this slave that was turned into a -master. 
- -If a Slave is in `FAILOVER_IN_PROGRESS` and the failover is not progressing for -too much time, and at the same time the other Sentinels start claiming that -this Sentinel is the objective leader (because for example the old leader -is no longer reachable), the Sentinel will flag itself as `IM_THE_LEADER` and -will proceed with the failover. - -Note: all the Sentinel state, including the subjective and objective leadership -is a dynamic process that is continuously refreshed with period of 10 HZ. -There is no "one time decision" step in Sentinel. - -Selection of the Slave to promote -=== - -If a master has multiple slaves, the slave to promote to master is selected -checking the slave priority (a new configuration option of Redis instances -that is propagated via INFO output), and picking the one with lower priority -value (it is an integer similar to the one of the MX field of the DNS system). -All the slaves that appears to be disconnected from the master for a long -time are discarded (stale data). - -If slaves with the same priority exist, the one with the lexicographically -smaller Run ID is selected. - -If there is no Slave to select because all the salves are failing the failover -is not started at all. Instead if there is no Slave to select because the -master *never* used to have slaves in the monitoring session, then the -failover is performed nonetheless just calling the user scripts. -However for this to happen a special configuration option must be set for -that master (force-failover-without-slaves). - -This is useful because there are configurations where a new Instance can be -provisioned at IP protocol level by the script, but there are no attached -slaves. - -Fail over process -=== - -The fail over process consists of the following steps: - -* 1) Turn the selected slave into a master using the SLAVEOF NO ONE command. -* 2) Turn all the remaining slaves, if any, to slaves of the new master. This is done incrementally, one slave after the other, waiting for the previous slave to complete the synchronization process before starting with the next one. -* 3) Call a user script to inform the clients that the configuration changed. -* 4) Completely remove the old failing master from the table, and add the new master with the same name. - -If Steps "1" fails, the fail over is aborted. - -All the other errors are considered to be non-fatal. - -TILT mode -=== - -Redis Sentinel is heavily dependent on the computer time: for instance in -order to understand if an instance is available it remembers the time of the -latest successful reply to the PING command, and compares it with the current -time to understand how old it is. - -However if the computer time changes in an unexpected way, or if the computer -is very busy, or the process blocked for some reason, Sentinel may start to -behave in an unexpected way. - -The TILT mode is a special "protection" mode that a Sentinel can enter when -something odd is detected that can lower the reliability of the system. -The Sentinel timer interrupt is normally called 10 times per second, so we -expect that more or less 100 milliseconds will elapse between two calls -to the timer interrupt. - -What a Sentinel does is to register the previous time the timer interrupt -was called, and compare it with the current call: if the time difference -is negative or unexpectedly big (2 seconds or more) the TILT mode is entered -(or if it was already entered the exit from the TILT mode postponed). 
- -When in TILT mode the Sentinel will continue to monitor everything, but: - -* It stops acting at all. -* It starts to reply negatively to `SENTINEL is-master-down-by-addr` requests as the ability to detect a failure is no longer trusted. - -If everything appears to be normal for 30 second, the TILT mode is exited. - -Sentinels monitoring other sentinels -=== - -When a sentinel no longer advertises itself using the Pub/Sub channel for too -much time (30 minutes more the configured timeout for the master), but at the -same time the master appears to work correctly, the Sentinel is removed from -the table of Sentinels for this master, and a notification is sent to the -system administrator. - -User provided scripts -=== - -Sentinels can optionally call user-provided scripts to perform two tasks: - -* Inform clients that the configuration changed. -* Notify the system administrator of problems. - -The script to inform clients of a configuration change has the following parameters: - -* ip:port of the calling Sentinel. -* old master ip:port. -* new master ip:port. - -The script to send notifications is called with the following parameters: - -* ip:port of the calling Sentinel. -* The message to deliver to the system administrator is passed writing to the standard input. - -Using the ip:port of the calling sentinel, scripts may call SENTINEL subcommands -to get more info if needed. - -Concrete implementations of notification scripts will likely use the "mail" -command or some other command to deliver SMS messages, emails, tweets. - -Implementations of the script to modify the configuration in web applications -are likely to use HTTP GET requests to force clients to update the -configuration, or any other sensible mechanism for the specific setup in use. - -Setup examples -=== - -Imaginary setup: - - computer A runs the Redis master. - computer B runs the Redis slave and the client software. - -In this naive configuration it is possible to place a single sentinel, with -"minimal agreement" set to the value of one (no acknowledge from other -sentinels needed), running on "B". - -If "A" will fail the fail over process will start, the slave will be elected -to master, and the client software will be reconfigured. - -Imaginary setup: - - computer A runs the Redis master - computer B runs the Redis slave - computer C,D,E,F,G are web servers acting as clients - -In this setup it is possible to run five sentinels placed at C,D,E,F,G with -"minimal agreement" set to 3. - -In real production environments there is to evaluate how the different -computers are networked together, and to check what happens during net splits -in order to select where to place the sentinels, and the level of minimal -agreement, so that a single arm of the network failing will not trigger a -fail over. - -In general if a complex network topology is present, the minimal agreement -should be set to the max number of sentinels existing at the same time in -the same network arm, plus one. - -SENTINEL SUBCOMMANDS -=== - -* `SENTINEL masters`, provides a list of configured masters. -* `SENTINEL slaves `, provides a list of slaves for the master with the specified name. -* `SENTINEL sentinels `, provides a list of sentinels for the master with the specified name. -* `SENTINEL is-master-down-by-addr `, returns a two elements multi bulk reply where the first element is :0 or :1, and the second is the Subjective Leader for the failover. 
- -TODO -=== - -* More detailed specification of user script error handling, including what return codes may mean, like 0: try again. 1: fatal error. 2: try again, and so forth. -* More detailed specification of what happens when a user script does not return in a given amount of time. -* Add a "push" notification system for configuration changes. -* Document that for every master monitored the configuration specifies a name for the master that is reported by all the SENTINEL commands. -* Make clear that we handle a single Sentinel monitoring multiple masters. diff --git a/topics/sentinel.md b/topics/sentinel.md index 6bdec35116..70458a7532 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -513,7 +513,7 @@ a few that are of particular interest for us: In order to explore more about this instance, you may want to try the following two commands: - SENTINEL slaves mymaster + SENTINEL replicas mymaster SENTINEL sentinels mymaster The first will provide similar information about the replicas connected to the @@ -589,7 +589,7 @@ order to modify the Sentinel configuration, which are covered later. * **PING** This command simply returns PONG. * **SENTINEL masters** Show a list of monitored masters and their state. * **SENTINEL master ``** Show the state and info of the specified master. -* **SENTINEL slaves ``** Show a list of replicas for this master, and their state. +* **SENTINEL replicas ``** Show a list of replicas for this master, and their state. * **SENTINEL sentinels ``** Show a list of sentinel instances for this master, and their state. * **SENTINEL get-master-addr-by-name ``** Return the ip and port number of the master with that name. If a failover is in progress or terminated successfully for this master it returns the address and port of the promoted replica. * **SENTINEL reset ``** This command will reset all the masters with matching name. The pattern argument is a glob-style pattern. The reset process clears any previous state in a master (including a failover in progress), and removes every replica and sentinel already discovered and associated with the master. @@ -697,7 +697,7 @@ and is only specified if the instance is not a master itself. * **+slave** `` -- A new replica was detected and attached. * **+failover-state-reconf-slaves** `` -- Failover state changed to `reconf-slaves` state. * **+failover-detected** `` -- A failover started by another Sentinel or any other external entity was detected (An attached replica turned into a master). -* **+slave-reconf-sent** `` -- The leader sentinel sent the `SLAVEOF` command to this instance in order to reconfigure it for the new replica. +* **+slave-reconf-sent** `` -- The leader sentinel sent the `REPLICAOF` command to this instance in order to reconfigure it for the new replica. * **+slave-reconf-inprog** `` -- The replica being reconfigured showed to be a replica of the new master ip:port pair, but the synchronization process is not yet complete. * **+slave-reconf-done** `` -- The replica is now synchronized with the new master. * **-dup-sentinel** `` -- One or more sentinels for the specified master were removed as duplicated (this happens for instance when a Sentinel instance is restarted). @@ -1007,7 +1007,7 @@ Configuration propagation Once a Sentinel is able to failover a master successfully, it will start to broadcast the new configuration so that the other Sentinels will update their information about a given master. 
-For a failover to be considered successful, it requires that the Sentinel was able to send the `SLAVEOF NO ONE` command to the selected replica, and that the switch to master was later observed in the `INFO` output of the master. +For a failover to be considered successful, it requires that the Sentinel was able to send the `REPLICAOF NO ONE` command to the selected replica, and that the switch to master was later observed in the `INFO` output of the master. At this point, even if the reconfiguration of the replicas is in progress, the failover is considered to be successful, and all the Sentinels are required to start reporting the new configuration. @@ -1142,3 +1142,5 @@ Note that in some way TILT mode could be replaced using the monotonic clock API that many kernels offer. However it is not still clear if this is a good solution since the current system avoids issues in case the process is just suspended or not executed by the scheduler for a long time. + +**A note about the word slave used in this man page**: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated. \ No newline at end of file From 03b0599dcc22fdfbadb7a7f80462f0a904c48fd9 Mon Sep 17 00:00:00 2001 From: Riccardo Magliocchetti Date: Wed, 17 Jun 2020 20:05:22 +0200 Subject: [PATCH 0390/1457] Use replica in stream documentation (#1333) Following the replication documentation jargon --- topics/streams-intro.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/topics/streams-intro.md b/topics/streams-intro.md index e0a5334d31..e0bdc03673 100644 --- a/topics/streams-intro.md +++ b/topics/streams-intro.md @@ -82,7 +82,7 @@ To query the stream by range we are only required to specify two IDs, *start* an 4) "18.2" ``` -Each entry returned is an array of two items: the ID and the list of field-value pairs. We already said that the entry IDs have a relation with the time, because the part at the left of the `-` character is the Unix time in milliseconds of the local node that created the stream entry, in the moment the entry was created (however note that Streams are replicated with fully specified **XADD** commands, so the slaves will have identical IDs to the master). This means that I could query a range of time using **XRANGE**. In order to do so, however, I may want to omit the sequence part of the ID: if omitted, in the start of the range it will be assumed to be 0, while in the end part it will be assumed to be the maximum sequence number available. This way, querying using just two milliseconds Unix times, we get all the entries that were generated in that range of time, in an inclusive way. For instance, I may want to query a two milliseconds period I could use: +Each entry returned is an array of two items: the ID and the list of field-value pairs. We already said that the entry IDs have a relation with the time, because the part at the left of the `-` character is the Unix time in milliseconds of the local node that created the stream entry, in the moment the entry was created (however note that Streams are replicated with fully specified **XADD** commands, so the replicas will have identical IDs to the master). This means that I could query a range of time using **XRANGE**. 
In order to do so, however, I may want to omit the sequence part of the ID: if omitted, in the start of the range it will be assumed to be 0, while in the end part it will be assumed to be the maximum sequence number available. This way, querying using just two milliseconds Unix times, we get all the entries that were generated in that range of time, in an inclusive way. For instance, I may want to query a two milliseconds period I could use: ``` > XRANGE mystream 1518951480106 1518951480107 @@ -173,7 +173,7 @@ Similarly to blocking list operations, blocking stream reads are *fair* from the ## Consumer groups -When the task at hand is to consume the same stream from different clients, then **XREAD** already offers a way to *fan-out* to N clients, potentially also using slaves in order to provide more read scalability. However in certain problems what we want to do is not to provide the same stream of messages to many clients, but to provide a *different subset* of messages from the same stream to many clients. An obvious case where this is useful is the case of slow to process messages: the ability to have N different workers that will receive different parts of the stream allows us to scale message processing, by routing different messages to different workers that are ready to do more work. +When the task at hand is to consume the same stream from different clients, then **XREAD** already offers a way to *fan-out* to N clients, potentially also using replicas in order to provide more read scalability. However in certain problems what we want to do is not to provide the same stream of messages to many clients, but to provide a *different subset* of messages from the same stream to many clients. An obvious case where this is useful is the case of slow to process messages: the ability to have N different workers that will receive different parts of the stream allows us to scale message processing, by routing different messages to different workers that are ready to do more work. In practical terms, if we imagine having three consumers C1, C2, C3, and a stream that contains the messages 1, 2, 3, 4, 5, 6, 7 then what we want is to serve the messages like in the following diagram: @@ -640,13 +640,13 @@ So we have `-`, `+`, `$`, `>` and `*`, and all have a different meaning, and mos ## Persistence, replication and message safety -A Stream, like any other Redis data structure, is asynchronously replicated to slaves and persisted into AOF and RDB files. However what may not be so obvious is that also consumer groups full state is propagated to AOF, RDB and slaves, so if a message is pending in the master, also the slave will have the same information. Similarly, after a restart, the AOF will restore the consumer groups state. +A Stream, like any other Redis data structure, is asynchronously replicated to replicas and persisted into AOF and RDB files. However what may not be so obvious is that also consumer groups full state is propagated to AOF, RDB and replicas, so if a message is pending in the master, also the replica will have the same information. Similarly, after a restart, the AOF will restore the consumer groups state. However note that Redis streams and consumer groups are persisted and replicated using the Redis default replication, so: * AOF must be used with a strong fsync policy if persistence of messages is important in your application. 
-* By default the asynchronous replication will not guarantee that **XADD** commands or consumer groups state changes are replicated: after a failover something can be missing depending on the ability of slaves to receive the data from the master.
-* The **WAIT** command may be used in order to force the propagation of the changes to a set of slaves. However note that while this makes it very unlikely that data is lost, the Redis failover process as operated by Sentinel or Redis Cluster performs only a *best effort* check to failover to the slave which is the most updated, and under certain specific failures may promote a slave that lacks some data.
+* By default the asynchronous replication will not guarantee that **XADD** commands or consumer groups state changes are replicated: after a failover something can be missing depending on the ability of replicas to receive the data from the master.
+* The **WAIT** command may be used in order to force the propagation of the changes to a set of replicas. However note that while this makes it very unlikely that data is lost, the Redis failover process as operated by Sentinel or Redis Cluster performs only a *best effort* check to failover to the replica which is the most updated, and under certain specific failures may promote a replica that lacks some data.
 
 So when designing an application using Redis streams and consumer groups, make sure to understand the semantical properties your application should have during failures, and configure things accordingly, evaluating if it is safe enough for your use case.
 
From c8c40e51cee2d81b29795a3e668cd0bcc8a9bdfa Mon Sep 17 00:00:00 2001
From: Xiang Wang
Date: Fri, 19 Jun 2020 09:51:29 +0800
Subject: [PATCH 0391/1457] correct the documents for object idletime

more information can be found at
https://github.com/antirez/redis/issues/5182
---
 commands/object.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/commands/object.md b/commands/object.md
index 4561d54990..67f313f79d 100644
--- a/commands/object.md
+++ b/commands/object.md
@@ -17,7 +17,8 @@ The `OBJECT` command supports multiple sub commands:
   at the specified key is idle (not requested by read or write operations).
   While the value is returned in seconds the actual resolution of this timer is
   10 seconds, but may vary in future implementations. This subcommand is
-  available when `maxmemory-policy` is set to an LRU policy or `noeviction`.
+  available when `maxmemory-policy` is set to an LRU policy or `noeviction`
+  and `maxmemory` is set.
 * `OBJECT FREQ <key>` returns the logarithmic access frequency counter of the
   object stored at the specified key. This subcommand is available when
   `maxmemory-policy` is set to an LFU policy.
From b87f53433581546ef2ab82dec51e527bcb480dd4 Mon Sep 17 00:00:00 2001
From: Alexander Cheprasov
Date: Sat, 20 Jun 2020 20:37:59 +0100
Subject: [PATCH 0392/1457] Update cheprasov/php-redis-client description
 (#1335)

---
 clients.json | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/clients.json b/clients.json
index 1b1eabdb99..74d6cfdd31 100644
--- a/clients.json
+++ b/clients.json
@@ -1505,7 +1505,7 @@
     "name": "cheprasov/php-redis-client",
     "language": "PHP",
     "repository": "https://github.com/cheprasov/php-redis-client",
-    "description": "Supported PHP client for Redis. PHP ver 5.5 - 7.3 / REDIS ver 2.6 - 5.0",
+    "description": "Supported PHP client for Redis.
PHP ver 5.5 - 7.4 / REDIS ver 2.6 - 6.0", "authors": ["cheprasov84"], "active": true }, From ad35b2c38a214caa1f9b973e51c999ea91631df0 Mon Sep 17 00:00:00 2001 From: D G Starkweather Date: Tue, 23 Jun 2020 08:16:30 -0400 Subject: [PATCH 0393/1457] add redis-audioscout and bump up no. stars on reventis (#1336) Co-authored-by: David Starkweather --- modules.json | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/modules.json b/modules.json index 92f6e3d32c..15dadde258 100644 --- a/modules.json +++ b/modules.json @@ -306,7 +306,7 @@ "authors": [ "starkdg" ], - "stars": 2 + "stars": 13 }, { "name": "redismodule-ratelimit", @@ -327,5 +327,15 @@ "cscan" ], "stars": 0 - } + }, + + { "name": "Redis-AudioScout", + "license": "pHash Source Available License", + "repository": "https://github.com/starkdg/Redis-AudioScout", + "description": "Redis module for Audio Track Recognition", + "authors": [ + "starkdg" + ], + "stars":1 + } ] From bbf0bc3e1bd9345ad32dc2eededddfd0e2a442ef Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 24 Jun 2020 09:12:02 +0200 Subject: [PATCH 0394/1457] LPOS: FIRST -> RANK. --- commands/lpos.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/commands/lpos.md b/commands/lpos.md index 7e6cd508ea..cdbc73c8d4 100644 --- a/commands/lpos.md +++ b/commands/lpos.md @@ -9,22 +9,22 @@ looking for the first match of "element". If the element is found, its index (th ``` The optional arguments and options are able to modify the command behavior. -The `FIRST` option specifies the "rank" of the first element to return, in case there are multiple matches. A rank of 1 means to return the first match, 2 to return the second match, and so forth. +The `RANK` option specifies the "rank" of the first element to return, in case there are multiple matches. A rank of 1 means to return the first match, 2 to return the second match, and so forth. For instance in the above example the element "c" is present multiple times, if I want the index of the second match, I'll write: ``` -> LPOS mylist c FIRST 2 +> LPOS mylist c RANK 2 6 ``` That is, the second occurrence of "c" is at position 6. -A negative "rank" as the `FIRST` argument tells `LPOS` to invert the search direction, starting from the tail to the head. +A negative "rank" as the `RANK` argument tells `LPOS` to invert the search direction, starting from the tail to the head. So, we want to say, give me the first element starting from the tail of the list: ``` -> LPOS mylist c -1 +> LPOS mylist c RANK -1 7 ``` @@ -37,10 +37,10 @@ Sometimes we want to return not just the Nth matching element, but the position [2,6] ``` -We can combine `COUNT` and `FIRST`, so that `COUNT` will try to return up to the specified number of matches, but starting from the Nth match, as specified by the `FIRST` option. +We can combine `COUNT` and `RANK`, so that `COUNT` will try to return up to the specified number of matches, but starting from the Nth match, as specified by the `RANK` option. 
 ```
-> LPOS mylist c FIRST -1 COUNT 2
+> LPOS mylist c RANK -1 COUNT 2
 [7,6]
 ```
@@ -64,5 +64,5 @@ The command returns the integer representing the matching element, or null if th
 ```cli
 RPUSH mylist a b c d 1 2 3 4 3 3 3
 LPOS mylist 3
-LPOS mylist 3 COUNT 0 FIRST 2
+LPOS mylist 3 COUNT 0 RANK 2
 ```
From d0496a05a1446583e0dd21a7c24cfc40ce321d5a Mon Sep 17 00:00:00 2001
From: Wen Hui
Date: Wed, 24 Jun 2020 12:17:11 -0400
Subject: [PATCH 0395/1457] update tracking doc to make it consistent with
 implementation (#1337)

---
 topics/client-side-caching.md | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/topics/client-side-caching.md b/topics/client-side-caching.md
index 633854f8bf..735a13f397 100644
--- a/topics/client-side-caching.md
+++ b/topics/client-side-caching.md
@@ -226,18 +226,11 @@ In order to do so, tracking must be enabled using the OPTIN option:
 In this mode, by default keys mentioned in read queries *are not supposed to
 be cached*, instead when a client wants to cache something, it must send a
 special command immediately before the actual command to retrieve the data:
 
-    CACHING
+    CLIENT CACHING YES
     +OK
     GET foo
     "bar"
 
-To make the protocol more efficient, the `CACHING` command can be sent with the
-`NOREPLY` option: in this case it will be totally silent:
-
-    CACHING NOREPLY
-    GET foo
-    "bar"
-
 The `CACHING` command affects the command executed immediately after it,
 however in case the next command is `MULTI`, all the commands in the
 transaction will be tracked. Similarly in case of Lua scripts, all the
From 029017563c734cf11956bb366f45290a79e51c6b Mon Sep 17 00:00:00 2001
From: Wen Hui
Date: Sat, 27 Jun 2020 12:06:32 -0400
Subject: [PATCH 0396/1457] remove Work in progress note for optin tracking
 (#1338)

---
 topics/client-side-caching.md | 2 --
 1 file changed, 2 deletions(-)

diff --git a/topics/client-side-caching.md b/topics/client-side-caching.md
index 735a13f397..b4e65224e3 100644
--- a/topics/client-side-caching.md
+++ b/topics/client-side-caching.md
@@ -212,8 +212,6 @@ So there is an alternative described in the next section.
 
 ## Opt-in caching
 
-(Note: this part is a work in progress and is yet not implemented inside Redis)
-
 Clients implementations may want to cache only selected keys, and communicate
 explicitly to the server what they'll cache and what not: this will require
 more bandwidth when caching new objects, but at the same time will reduce
From 4f1da37c03daa943f2cf3cd5fda8dbd1bf1f22d7 Mon Sep 17 00:00:00 2001
From: Itamar Haber
Date: Tue, 30 Jun 2020 16:02:41 +0300
Subject: [PATCH 0397/1457] Adds project governance (#1339)

---
 topics/governance.md | 86 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 86 insertions(+)
 create mode 100644 topics/governance.md

diff --git a/topics/governance.md b/topics/governance.md
new file mode 100644
index 0000000000..33442fee5e
--- /dev/null
+++ b/topics/governance.md
@@ -0,0 +1,86 @@
+# Redis Open Source Governance
+
+## Introduction
+
+Since 2009, the Redis open source project has become very successful and extremely popular.
+
+During this time, Salvatore Sanfilippo has led, managed, and maintained the project. While contributors from Redis Labs and others have made significant contributions, the project never adopted a formal governance structure and de facto was operating as a [BDFL](https://en.wikipedia.org/wiki/Benevolent_dictator_for_life)-style project.
+ +As Redis grows, matures, and continues to expand its user base, it becomes increasingly important to form a sustainable structure for the ongoing development and maintenance of Redis. We want to ensure the project’s continuity and reflect its larger community. + +## The new governance structure, applicable from June 30, 2020 + +Redis has adopted a _light governance_ model that matches the current size of the project and minimizes the changes from its earlier model. The governance model is intended to be a meritocracy, aiming to empower individuals who demonstrate a long-term commitment and make significant contributions. + +## The Redis core team + +Salvatore Sanfilippo has stepped down as head of the project and named two successors to take over and lead the Redis project: Yossi Gottlieb ([yossigo](https://github.com/yossigo)) and Oran Agra ([oranagra](https://github.com/oranagra)) + +With the backing and blessing of Redis Labs, we wish to use this opportunity and create a more open, scalable, and community-driven “core team” structure to run the project. The core team will consist of members selected based on demonstrated, long-term personal involvement and contributions. + +The current core team comprises: + +* Three members from Redis Labs : + * Project leads: Yossi Gottlieb ([yossigo](https://github.com/yossigo)) and Oran Agra ([oranagra](https://github.com/oranagra)) + * Community lead: Itamar Haber ([itamarhaber](https://github.com/itamarhaber)) + + The work of these three core team members will be funded by Redis Labs. + +* Redis community members. During the next few days, we plan to approach community members known for their contribution and involvement with Redis open-source. Their names will be shared here as soon as they have accepted this new role. + +The Redis core team members serve the Redis open source project and community. They are expected to set a good example of behavior, culture, and tone in accordance with the adopted [Code of Conduct](https://www.contributor-covenant.org/). They should also consider and act upon the best interests of the project and the community in a way that is free from foreign or conflicting interests. + +The core team will be responsible for the Redis core project, which is the part of Redis that is hosted in the main Redis repository and is BSD licensed. It will also aim to maintain coordination and collaboration with other projects that make up the Redis ecosystem, including Redis clients, satellite projects, major middleware that relies on Redis, etc. + +#### Roles and responsibilities of the core team + +* Managing the core Redis code and documentation +* Managing new Redis releases +* Maintaining a high-level technical direction/roadmap +* Providing a fast response, including fixes/patches, to address security vulnerabilities and other major issues +* Project governance decisions and changes +* Coordination of Redis core with the rest of the Redis ecosystem +* Managing the membership of the core team + +The core team will aim to form and empower a community of contributors by further delegating tasks to individuals who demonstrate commitment, know-how, and skills. 
In particular, we hope to see greater community involvement in the following areas: + +* Support, troubleshooting, and bug fixes of reported issues +* Triage of contributions/pull requests + +#### Decision making + +* **Normal decisions** will be made by core team members based on a lazy consensus approach: each member may vote +1 (positive) or -1 (negative). A negative vote must include thorough reasoning and better yet, an alternative proposal. The core team will always attempt to reach a full consensus rather than a majority. Examples of normal decisions: + * Day-to-day approval of pull requests and closing issues + * Opening new issues for discussion +* **Major decisions** that have a significant impact on the Redis architecture, design, or philosophy as well as core-team structure or membership changes should preferably be determined by full consensus. If the team is not able to achieve a full consensus, a majority vote is required. Examples of major decisions: + * Fundamental changes to the Redis core + * Adding a new data structure + * The new version of RESP (Redis Serialization Protocol) + * Changes that affect backward compatibility + * Adding or changing core team members +* Project leads have a right to veto major decisions + +#### Core team membership + +* The core team is not expected to serve for life, however, long-term participation is desired to provide stability and consistency in the Redis programming style and the community. +* If a core-team member whose work is funded by Redis Labs must be replaced, the replacement will be designated by Redis Labs after consultation with the remaining core-team members. +* If a core-team member not funded by Redis Labs will no longer participate, for whatever reason, the other team members will select a replacement. + +## Community forums and communications + +We want the Redis community to be as welcoming and inclusive as possible. To that end, we have adopted a [Code of Conduct](https://www.contributor-covenant.org/) that we ask all community members to read and observe. + + +We encourage that all significant communications will be public, asynchronous, archived, and open for the community to actively participate in using the channels described [here](https://redis.io/community). The exception to that is sensitive security issues that require resolution prior to public disclosure. + +## New Redis repository and commits approval process + +Initially, the Redis core source repository will be hosted under [https://github.com/redis-io/redis](https://github.com/redis-io/redis). Our target is to eventually host everything (the Redis core source and other ecosystem projects) under the Redis GitHub organization ([https://github.com/redis](https://github.com/redis)). Commits to the Redis source repository will require code review, approval of at least one core-team member who is not the author of the commit, and no objections. + +## Project and development updates + +Stay connected to the project and the community! For project and community updates, follow the project [channels](https://redis.io/community). Development announcements will be made via [the Redis mailing list](https://groups.google.com/forum/#!forum/redis-db). + +## Updates to these governance rules + +Any substantial changes to these rules will be treated as a major decision. Minor changes or ministerial corrections will be treated as normal decisions. 
From 97fb26824636f3281a96a348c0bdbe9088167562 Mon Sep 17 00:00:00 2001 From: Wen Hui Date: Fri, 3 Jul 2020 10:51:19 -0400 Subject: [PATCH 0398/1457] update acl docs (#1270) --- topics/acl.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/acl.md b/topics/acl.md index 8aed9ca7a8..b06684cd75 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -248,7 +248,7 @@ all the commands that are tagged as dangerous inside the Redis command table. Please note that command categories **never include modules commands** with the exception of +@all. If you say +@all all the commands can be executed by the user, even future commands loaded via the modules system. However if you -use the ACL rule +@readonly or any other, the modules commands are always +use the ACL rule +@read or any other, the modules commands are always excluded. This is very important because you should just trust the Redis internal command table for sanity. Modules may expose dangerous things and in the case of an ACL that is just additive, that is, in the form of `+@all -...` @@ -301,7 +301,7 @@ command is part of the *geo* category: 8) "georadius" Note that commands may be part of multiple categories, so for instance an -ACL rule like `+@geo -@readonly` will result in certain geo commands to be +ACL rule like `+@geo -@read` will result in certain geo commands to be excluded because they are read-only commands. ## Adding subcommands From ab0bb646e4928491773bc9a4a4751eb526177b96 Mon Sep 17 00:00:00 2001 From: Chen Su Date: Wed, 8 Jul 2020 07:18:26 +0800 Subject: [PATCH 0399/1457] Rename LPOS option FIRST to RANK. (#1343) --- commands.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands.json b/commands.json index b22710d98e..a01cff7271 100644 --- a/commands.json +++ b/commands.json @@ -1908,7 +1908,7 @@ "type": "string" }, { - "command": "FIRST", + "command": "RANK", "name": "rank", "type": "string", "optional": true From 886ebe1a802661042985faf22f8079ccde1f5ac3 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Wed, 8 Jul 2020 13:11:43 +0300 Subject: [PATCH 0400/1457] Moves CI/CD to GitHub actions (#1344) --- .github/workflows/deploy.yml | 16 ++++++++++++++++ .github/workflows/pull_request.yml | 18 ++++++++++++++++++ .travis.yml | 21 --------------------- 3 files changed, 34 insertions(+), 21 deletions(-) create mode 100644 .github/workflows/deploy.yml create mode 100644 .github/workflows/pull_request.yml delete mode 100644 .travis.yml diff --git a/.github/workflows/deploy.yml b/.github/workflows/deploy.yml new file mode 100644 index 0000000000..34794764b0 --- /dev/null +++ b/.github/workflows/deploy.yml @@ -0,0 +1,16 @@ +name: Deploy website +on: + push: + branches: + - master + +jobs: + deploy: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v2.1.1 + - shell: bash + env: + DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }} + run: | + curl -G https://redis.io/deploy --data-urlencode token=$DEPLOY_TOKEN \ No newline at end of file diff --git a/.github/workflows/pull_request.yml b/.github/workflows/pull_request.yml new file mode 100644 index 0000000000..ff7f60664f --- /dev/null +++ b/.github/workflows/pull_request.yml @@ -0,0 +1,18 @@ +name: Check pull request +on: + pull_request: + branches: + - master + +jobs: + check: + runs-on: ubuntu-latest + steps: + - uses: ruby/setup-ruby@v1 + with: + ruby-version: 2.6 + - uses: actions/checkout@v2.1.1 + - name: Install dependencies + run: gem install $(sed -e 's/ -v /:/' .gems) + - name: Run tests + run: make -s diff --git a/.travis.yml 
b/.travis.yml deleted file mode 100644 index b5acaf3de0..0000000000 --- a/.travis.yml +++ /dev/null @@ -1,21 +0,0 @@ -language: ruby -sudo: false -addons: - apt: - packages: - - aspell - - aspell-en - -rvm: - - 2.2 - -install: - - gem install $(sed -e 's/ -v /:/' .gems) - -script: make -s - -deploy: - provider: script - script: curl -G https://redis.io/deploy --data-urlencode token=$DEPLOY_TOKEN - on: - branch: master From 41c5a104731e74f9bb47cd8a115a4799e5bb07ea Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Wed, 8 Jul 2020 13:12:49 +0300 Subject: [PATCH 0401/1457] Changes repository links (#1340) --- clients.json | 2 +- commands/config-get.md | 2 +- commands/config-set.md | 2 +- topics/config.md | 16 ++++++++-------- topics/problems.md | 6 +++--- topics/rdd-2.md | 2 +- topics/releases.md | 2 +- topics/transactions.md | 2 +- topics/virtual-memory.md | 2 +- 9 files changed, 18 insertions(+), 18 deletions(-) diff --git a/clients.json b/clients.json index 74d6cfdd31..903aaf8492 100644 --- a/clients.json +++ b/clients.json @@ -667,7 +667,7 @@ { "name": "Tcl Client", "language": "Tcl", - "repository": "https://github.com/antirez/redis/blob/unstable/tests/support/redis.tcl", + "repository": "https://github.com/redis/redis/blob/unstable/tests/support/redis.tcl", "description": "The client used in the Redis test suite. Not really full featured nor designed to be used in the real world.", "authors": ["antirez"] }, diff --git a/commands/config-get.md b/commands/config-get.md index c498e00e09..472a3da914 100644 --- a/commands/config-get.md +++ b/commands/config-get.md @@ -28,7 +28,7 @@ All the supported parameters have the same meaning of the equivalent configuration parameter used in the [redis.conf][hgcarr22rc] file, with the following important differences: -[hgcarr22rc]: http://github.com/antirez/redis/raw/2.8/redis.conf +[hgcarr22rc]: http://github.com/redis/redis/raw/2.8/redis.conf * Where bytes or other quantities are specified, it is not possible to use the `redis.conf` abbreviated form (`10k`, `2gb` ... and so forth), everything diff --git a/commands/config-set.md b/commands/config-set.md index 8fd1e98d08..a3c3769a10 100644 --- a/commands/config-set.md +++ b/commands/config-set.md @@ -14,7 +14,7 @@ All the supported parameters have the same meaning of the equivalent configuration parameter used in the [redis.conf][hgcarr22rc] file, with the following important differences: -[hgcarr22rc]: http://github.com/antirez/redis/raw/2.8/redis.conf +[hgcarr22rc]: http://github.com/redis/redis/raw/2.8/redis.conf * In options where bytes or other quantities are specified, it is not possible to use the `redis.conf` abbreviated form (`10k`, `2gb` ... and so forth), diff --git a/topics/config.md b/topics/config.md index 69c8301d15..65d190dc92 100644 --- a/topics/config.md +++ b/topics/config.md @@ -26,14 +26,14 @@ The list of configuration directives, and their meaning and intended usage is available in the self documented example redis.conf shipped into the Redis distribution. -* The self documented [redis.conf for Redis 6.0](https://raw.githubusercontent.com/antirez/redis/6.0/redis.conf). -* The self documented [redis.conf for Redis 5.0](https://raw.githubusercontent.com/antirez/redis/5.0/redis.conf). -* The self documented [redis.conf for Redis 4.0](https://raw.githubusercontent.com/antirez/redis/4.0/redis.conf). 
-* The self documented [redis.conf for Redis 3.2](https://raw.githubusercontent.com/antirez/redis/3.2/redis.conf). -* The self documented [redis.conf for Redis 3.0](https://raw.githubusercontent.com/antirez/redis/3.0/redis.conf). -* The self documented [redis.conf for Redis 2.8](https://raw.githubusercontent.com/antirez/redis/2.8/redis.conf). -* The self documented [redis.conf for Redis 2.6](https://raw.githubusercontent.com/antirez/redis/2.6/redis.conf). -* The self documented [redis.conf for Redis 2.4](https://raw.githubusercontent.com/antirez/redis/2.4/redis.conf). +* The self documented [redis.conf for Redis 6.0](https://raw.githubusercontent.com/redis/redis/6.0/redis.conf). +* The self documented [redis.conf for Redis 5.0](https://raw.githubusercontent.com/redis/redis/5.0/redis.conf). +* The self documented [redis.conf for Redis 4.0](https://raw.githubusercontent.com/redis/redis/4.0/redis.conf). +* The self documented [redis.conf for Redis 3.2](https://raw.githubusercontent.com/redis/redis/3.2/redis.conf). +* The self documented [redis.conf for Redis 3.0](https://raw.githubusercontent.com/redis/redis/3.0/redis.conf). +* The self documented [redis.conf for Redis 2.8](https://raw.githubusercontent.com/redis/redis/2.8/redis.conf). +* The self documented [redis.conf for Redis 2.6](https://raw.githubusercontent.com/redis/redis/2.6/redis.conf). +* The self documented [redis.conf for Redis 2.4](https://raw.githubusercontent.com/redis/redis/2.4/redis.conf). Passing arguments via the command line --- diff --git a/topics/problems.md b/topics/problems.md index 50ec957f7f..1512281c45 100644 --- a/topics/problems.md +++ b/topics/problems.md @@ -14,9 +14,9 @@ List of known critical bugs in Redis 3.0.x, 2.8.x and 2.6.x To find a list of critical bugs please refer to the changelogs: -* [Redis 3.0 Changelog](https://raw.githubusercontent.com/antirez/redis/3.0/00-RELEASENOTES). -* [Redis 2.8 Changelog](https://raw.githubusercontent.com/antirez/redis/2.8/00-RELEASENOTES). -* [Redis 2.6 Changelog](https://raw.githubusercontent.com/antirez/redis/2.6/00-RELEASENOTES). +* [Redis 3.0 Changelog](https://raw.githubusercontent.com/redis/redis/3.0/00-RELEASENOTES). +* [Redis 2.8 Changelog](https://raw.githubusercontent.com/redis/redis/2.8/00-RELEASENOTES). +* [Redis 2.6 Changelog](https://raw.githubusercontent.com/redis/redis/2.6/00-RELEASENOTES). Check the *upgrade urgency* level in each patch release to more easily spot releases that included important fixes. diff --git a/topics/rdd-2.md b/topics/rdd-2.md index 919d0bc508..fdc2429ccf 100644 --- a/topics/rdd-2.md +++ b/topics/rdd-2.md @@ -1,7 +1,7 @@ # Redis Design Draft 2 -- RDB version 7 info fields * Author: Salvatore Sanfilippo `antirez@gmail.com` -* GitHub issue [#1048](https://github.com/antirez/redis/issues/1048) +* GitHub issue [#1048](https://github.com/redis/redis/issues/1048) ## History of revisions diff --git a/topics/releases.md b/topics/releases.md index 7962b782ac..c709145cc1 100644 --- a/topics/releases.md +++ b/topics/releases.md @@ -20,7 +20,7 @@ Unstable tree === The unstable version of Redis is always located in the `unstable` branch in -the [Redis GitHub Repository](http://github.com/antirez/redis). +the [Redis GitHub Repository](http://github.com/redis/redis). 
This is the source tree where most of the new features are developed and is not considered to be production ready: it may contain critical bugs, diff --git a/topics/transactions.md b/topics/transactions.md index 5d263ef2c6..ee5366c3e8 100644 --- a/topics/transactions.md +++ b/topics/transactions.md @@ -196,7 +196,7 @@ So what is `WATCH` really about? It is a command that will make the `EXEC` conditional: we are asking Redis to perform the transaction only if none of the `WATCH`ed keys were modified. (But they might be changed by the same client inside the transaction -without aborting it. [More on this](https://github.com/antirez/redis-doc/issues/734).) +without aborting it. [More on this](https://github.com/redis/redis-doc/issues/734).) Otherwise the transaction is not entered at all. (Note that if you `WATCH` a volatile key and Redis expires the key after you `WATCH`ed it, `EXEC` will still work. [More on diff --git a/topics/virtual-memory.md b/topics/virtual-memory.md index 6a3fd54d89..620d329dd8 100644 --- a/topics/virtual-memory.md +++ b/topics/virtual-memory.md @@ -8,7 +8,7 @@ stable Redis distribution in Redis 2.0. However Virtual Memory (called VM starting from now) is already available and stable enough to be tests in the unstable branch of Redis available [on Git][redissrc]. -[redissrc]: http://github.com/antirez/redis +[redissrc]: http://github.com/redis/redis ## Virtual Memory explained in simple words From 80a3b97cfba910ec9e38b8287467f98220ea6853 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Wed, 8 Jul 2020 14:27:12 +0300 Subject: [PATCH 0402/1457] Updates repository link --- topics/governance.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/governance.md b/topics/governance.md index 33442fee5e..a75c854704 100644 --- a/topics/governance.md +++ b/topics/governance.md @@ -75,7 +75,7 @@ We encourage that all significant communications will be public, asynchronous, a ## New Redis repository and commits approval process -Initially, the Redis core source repository will be hosted under [https://github.com/redis-io/redis](https://github.com/redis-io/redis). Our target is to eventually host everything (the Redis core source and other ecosystem projects) under the Redis GitHub organization ([https://github.com/redis](https://github.com/redis)). Commits to the Redis source repository will require code review, approval of at least one core-team member who is not the author of the commit, and no objections. +Initially, the Redis core source repository will be hosted under [https://github.com/redis/redis](https://github.com/redis/redis). Our target is to eventually host everything (the Redis core source and other ecosystem projects) under the Redis GitHub organization ([https://github.com/redis](https://github.com/redis)). Commits to the Redis source repository will require code review, approval of at least one core-team member who is not the author of the commit, and no objections. 
## Project and development updates From 80a1d3bdd3b2e6d3b150f2e7fbd07f882b843fcf Mon Sep 17 00:00:00 2001 From: Ben Mansheim Date: Wed, 8 Jul 2020 15:22:54 +0300 Subject: [PATCH 0403/1457] Fix small typos and grammar in replication (#1341) --- topics/replication.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/topics/replication.md b/topics/replication.md index d4901ffd3b..0acf84296b 100644 --- a/topics/replication.md +++ b/topics/replication.md @@ -11,10 +11,10 @@ This system works using three main mechanisms: Redis uses by default asynchronous replication, which being low latency and high performance, is the natural replication mode for the vast majority of Redis -use cases. However Redis replicas asynchronously acknowledge the amount of data +use cases. However, Redis replicas asynchronously acknowledge the amount of data they received periodically with the master. So the master does not wait every time for a command to be processed by the replicas, however it knows, if needed, what -replica already processed what command. This allows to have optional synchronous replication. +replica already processed what command. This allows having optional synchronous replication. Synchronous replication of certain data can be requested by the clients using the `WAIT` command. However `WAIT` is only able to ensure that there are the @@ -36,7 +36,7 @@ The following are some very important facts about Redis replication: * Redis replication is non-blocking on the master side. This means that the master will continue to handle queries when one or more replicas perform the initial synchronization or a partial resynchronization. * Replication is also largely non-blocking on the replica side. While the replica is performing the initial synchronization, it can handle queries using the old version of the dataset, assuming you configured Redis to do so in redis.conf. Otherwise, you can configure Redis replicas to return an error to clients if the replication stream is down. However, after the initial sync, the old dataset must be deleted and the new one must be loaded. The replica will block incoming connections during this brief window (that can be as long as many seconds for very large datasets). Since Redis 4.0 it is possible to configure Redis so that the deletion of the old data set happens in a different thread, however loading the new initial dataset will still happen in the main thread and block the replica. * Replication can be used both for scalability, in order to have multiple replicas for read-only queries (for example, slow O(N) operations can be offloaded to replicas), or simply for improving data safety and high availability. -* It is possible to use replication to avoid the cost of having the master writing the full dataset to disk: a typical technique involves configuring your master `redis.conf` to avoid persisting to disk at all, then connect a replica configured to save from time to time, or with AOF enabled. However this setup must be handled with care, since a restarting master will start with an empty dataset: if the replica tries to synchronized with it, the replica will be emptied as well. +* It is possible to use replication to avoid the cost of having the master writing the full dataset to disk: a typical technique involves configuring your master `redis.conf` to avoid persisting to disk at all, then connect a replica configured to save from time to time, or with AOF enabled. 
However this setup must be handled with care, since a restarting master will start with an empty dataset: if the replica tries to synchronize with it, the replica will be emptied as well. Safety of replication when master has persistence turned off --- @@ -110,7 +110,7 @@ potentially at a different time. It is the offset that works as a logical time to understand, for a given history (replication ID) who holds the most updated data set. -For instance if two instances A and B have the same replication ID, but one +For instance, if two instances A and B have the same replication ID, but one with offset 1000 and one with offset 1023, it means that the first lacks certain commands applied to the data set. It also means that A, by applying just a few commands, may reach exactly the same state of B. @@ -126,7 +126,7 @@ was the offset when this ID switch happened. Later it will select a new random replication ID, because a new history begins. When handling the new replicas connecting, the master will match their IDs and offsets both with the current ID and the secondary ID (up to a given offset, for safety). In short this means -that after a failover, replicas connecting to the new promoted master don't have +that after a failover, replicas connecting to the newly promoted master don't have to perform a full sync. In case you wonder why a replica promoted to master needs to change its @@ -244,7 +244,7 @@ Redis source distribution. How Redis replication deals with expires on keys --- -Redis expires allow keys to have a limited time to live. Such a feature depends +Redis expires allow keys to have a limited time to live (TTL). Such a feature depends on the ability of an instance to count the time, however Redis replicas correctly replicate keys with expires, even when such keys are altered using Lua scripts. @@ -264,7 +264,7 @@ Once a replica is promoted to a master it will start to expire keys independentl Configuring replication in Docker and NAT --- -When Docker, or other types of containers using port forwarding, or Network Address Translation is used, Redis replication needs some extra care, especially when using Redis Sentinel or other systems where the master `INFO` or `ROLE` commands output are scanned in order to discover replicas' addresses. +When Docker, or other types of containers using port forwarding, or Network Address Translation is used, Redis replication needs some extra care, especially when using Redis Sentinel or other systems where the master `INFO` or `ROLE` commands output is scanned in order to discover replicas' addresses. The problem is that the `ROLE` command, and the replication section of the `INFO` output, when issued into a master instance, will show replicas @@ -273,7 +273,7 @@ environments using NAT may be different compared to the logical address of the replica instance (the one that clients should use to connect to replicas). Similarly the replicas will be listed with the listening port configured -into `redis.conf`, that may be different than the forwarded port in case +into `redis.conf`, that may be different from the forwarded port in case the port is remapped. 
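As an illustration, a hypothetical `ROLE` reply from a master whose replica runs inside a container might list the replica with its private address and internal port (the addresses below are invented), rather than the address and port clients can actually reach:

    1) "master"
    2) (integer) 3129659
    3) 1) 1) "172.17.0.2"
          2) "6379"
          3) "3129242"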
In order to fix both issues, it is possible, since Redis 3.2.2, to force From a79e884286804466987512220ec47acc4d752788 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Wed, 8 Jul 2020 16:43:11 +0300 Subject: [PATCH 0404/1457] Update governance.md --- topics/governance.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/governance.md b/topics/governance.md index a75c854704..dc351334a3 100644 --- a/topics/governance.md +++ b/topics/governance.md @@ -75,7 +75,7 @@ We encourage that all significant communications will be public, asynchronous, a ## New Redis repository and commits approval process -Initially, the Redis core source repository will be hosted under [https://github.com/redis/redis](https://github.com/redis/redis). Our target is to eventually host everything (the Redis core source and other ecosystem projects) under the Redis GitHub organization ([https://github.com/redis](https://github.com/redis)). Commits to the Redis source repository will require code review, approval of at least one core-team member who is not the author of the commit, and no objections. +The Redis core source repository is hosted under [https://github.com/redis/redis](https://github.com/redis/redis). Our target is to eventually host everything (the Redis core source and other ecosystem projects) under the Redis GitHub organization ([https://github.com/redis](https://github.com/redis)). Commits to the Redis source repository will require code review, approval of at least one core-team member who is not the author of the commit, and no objections. ## Project and development updates From bd84dafe687f98fe92830442cfabcff9cd087aa3 Mon Sep 17 00:00:00 2001 From: Stefan Miller <832146+stfnmllr@users.noreply.github.com> Date: Wed, 8 Jul 2020 16:03:26 +0200 Subject: [PATCH 0405/1457] LPOS: change RANK type from string to integer in commands.json (#1345) --- commands.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands.json b/commands.json index a01cff7271..46de13849e 100644 --- a/commands.json +++ b/commands.json @@ -1910,7 +1910,7 @@ { "command": "RANK", "name": "rank", - "type": "string", + "type": "integer", "optional": true }, { From 6cc58753c145b326d7e3ee8d1b11c9e473f1ba1e Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Thu, 9 Jul 2020 19:30:57 +0300 Subject: [PATCH 0406/1457] Adds new core team members --- topics/governance.md | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-) diff --git a/topics/governance.md b/topics/governance.md index dc351334a3..eb421934ab 100644 --- a/topics/governance.md +++ b/topics/governance.md @@ -18,15 +18,13 @@ Salvatore Sanfilippo has stepped down as head of the project and named two succe With the backing and blessing of Redis Labs, we wish to use this opportunity and create a more open, scalable, and community-driven “core team” structure to run the project. The core team will consist of members selected based on demonstrated, long-term personal involvement and contributions. 
-The current core team comprises:
+The core team comprises:

-* Three members from Redis Labs :
-    * Project leads: Yossi Gottlieb ([yossigo](https://github.com/yossigo)) and Oran Agra ([oranagra](https://github.com/oranagra))
-    * Community lead: Itamar Haber ([itamarhaber](https://github.com/itamarhaber))
-
-    The work of these three core team members will be funded by Redis Labs.
-
-* Redis community members. During the next few days, we plan to approach community members known for their contribution and involvement with Redis open-source. Their names will be shared here as soon as they have accepted this new role.
+* Project Lead: Yossi Gottlieb ([yossigo](https://github.com/yossigo)) from Redis Labs
+* Project Lead: Oran Agra ([oranagra](https://github.com/oranagra)) from Redis Labs
+* Community Lead: Itamar Haber ([itamarhaber](https://github.com/itamarhaber)) from Redis Labs
+* Member: Zhao Zhao ([soloestoy](https://github.com/soloestoy)) from Alibaba
+* Member: Madelyn Olson ([madolson](https://github.com/madolson)) from Amazon Web Services

The Redis core team members serve the Redis open source project and community. They are expected to set a good example of behavior, culture, and tone in accordance with the adopted [Code of Conduct](https://www.contributor-covenant.org/). They should also consider and act upon the best interests of the project and the community in a way that is free from foreign or conflicting interests.

@@ -70,7 +68,6 @@ The core team will aim to form and empower a community of contributors by furthe
We want the Redis community to be as welcoming and inclusive as possible. To that end, we have adopted a [Code of Conduct](https://www.contributor-covenant.org/) that we ask all community members to read and observe.

-
We encourage that all significant communications will be public, asynchronous, archived, and open for the community to actively participate in using the channels described [here](https://redis.io/community). The exception to that is sensitive security issues that require resolution prior to public disclosure.

## New Redis repository and commits approval process

From cb687d98a3f2ceea6099f651d7cde5ab7b8a44bb Mon Sep 17 00:00:00 2001
From: Itamar Haber
Date: Fri, 10 Jul 2020 00:07:12 +0300
Subject: [PATCH 0407/1457] Fixes #1267 (#1268)

---
 commands/georadius.md | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/commands/georadius.md b/commands/georadius.md
index 6ab77716a2..e05000272d 100644
--- a/commands/georadius.md
+++ b/commands/georadius.md
@@ -24,6 +24,11 @@ The command default is to return unsorted items. Two different sorting methods c
 By default all the matching items are returned. It is possible to limit the results to the first N matching items by using the **COUNT ``** option. However note that internally the command needs to perform an effort proportional to the number of items matching the specified area, so to query very large areas with a very small `COUNT` option may be slow even if just a few results are returned. On the other hand `COUNT` can be a very effective way to reduce bandwidth usage if normally just the first results are used.
 
+By default the command returns the items to the client. It is possible to store the results with one of these options:
+
+* `!STORE`: Store the items in a sorted set populated with their geospatial information. 
+* `!STOREDIST`: Store the items in a sorted set populated with their distance from the center as a floating point number, in the same unit specified in the radius. + @return @array-reply, specifically: From ef621293224d60a09551d97fa437fa9deba837dd Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Fri, 10 Jul 2020 00:17:45 +0300 Subject: [PATCH 0408/1457] Adds key miss events --- topics/notifications.md | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/topics/notifications.md b/topics/notifications.md index 0c774d5429..c019ca5cd0 100644 --- a/topics/notifications.md +++ b/topics/notifications.md @@ -80,7 +80,8 @@ following table: t Stream commands x Expired events (events generated every time a key expires) e Evicted events (events generated when a key is evicted for maxmemory) - A Alias for g$lshztxe, so that the "AKE" string means all the events. + m Key miss events (events generated when a key that doesn't exist is accessed) + A Alias for "g$lshztxe", so that the "AKE" string means all the events except "m". At least `K` or `E` should be present in the string, otherwise no event will be delivered regardless of the rest of the string. @@ -174,3 +175,8 @@ The `expired` events are generated when a key is accessed and is found to be exp If no command targets the key constantly, and there are many keys with a TTL associated, there can be a significant delay between the time the key time to live drops to zero, and the time the `expired` event is generated. Basically `expired` events **are generated when the Redis server deletes the key** and not when the time to live theoretically reaches the value of zero. + +@history + +* `>= 6.0`: Key miss events were added. + From 9b506c662a9a6f2407599a7bfab48e824199afcf Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Tue, 14 Jul 2020 16:42:58 +0300 Subject: [PATCH 0409/1457] Modules retouch (#1348) * Adds email for core team * Removes redisml due to being archived * Updates RedisAI's license and authorships --- modules.json | 19 +++++-------------- topics/governance.md | 2 ++ 2 files changed, 7 insertions(+), 14 deletions(-) diff --git a/modules.json b/modules.json index 15dadde258..7a7022da72 100644 --- a/modules.json +++ b/modules.json @@ -72,17 +72,6 @@ ], "stars": 939 }, - { - "name": "RedisML", - "license": "Redis Source Available License", - "repository": "https://github.com/RedisLabsModules/redisml", - "description": "Machine Learning Model Server", - "authors": [ - "shaynativ", - "RedisLabs" - ], - "stars": 286 - }, { "name": "RediSearch", "license": "Redis Source Available License", @@ -143,17 +132,19 @@ "repository": "https://github.com/RedisTimeSeries/RedisTimeSeries", "description": "Time-series data structure for redis", "authors": [ - "danni-m" + "danni-m", + "RedisLabs" ], "stars": 310 }, { "name": "RedisAI", - "license": "AGPL", + "license": "Redis Source Available License", "repository": "https://github.com/RedisAI/RedisAI", "description": "A Redis module for serving tensors and executing deep learning graphs", "authors": [ - "lantiga" + "lantiga", + "RedisLabs" ], "stars": 289 }, diff --git a/topics/governance.md b/topics/governance.md index eb421934ab..b7484fe1d7 100644 --- a/topics/governance.md +++ b/topics/governance.md @@ -70,6 +70,8 @@ We want the Redis community to be as welcoming and inclusive as possible. 
To tha We encourage that all significant communications will be public, asynchronous, archived, and open for the community to actively participate in using the channels described [here](https://redis.io/community). The exception to that is sensitive security issues that require resolution prior to public disclosure. +For contacting the core team on sensitive matters, such as misconduct or security issues, please email [redis@redis.io](mailto:redis@redis.io). + ## New Redis repository and commits approval process The Redis core source repository is hosted under [https://github.com/redis/redis](https://github.com/redis/redis). Our target is to eventually host everything (the Redis core source and other ecosystem projects) under the Redis GitHub organization ([https://github.com/redis](https://github.com/redis)). Commits to the Redis source repository will require code review, approval of at least one core-team member who is not the author of the commit, and no objections. From 2baab2cd8b0d71c7dce0797cd7261a1cd8b6546f Mon Sep 17 00:00:00 2001 From: Umair Khan Date: Thu, 16 Jul 2020 22:55:03 +0500 Subject: [PATCH 0410/1457] Outdated Redis PHP Clients (#1346) https://github.com/redis/redis-io/issues/203 --- clients.json | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/clients.json b/clients.json index 903aaf8492..81f1994c75 100644 --- a/clients.json +++ b/clients.json @@ -455,7 +455,8 @@ "url": "http://rediska.geometria-lab.net", "repository": "https://github.com/Shumkov/Rediska", "description": "", - "authors": ["shumkov"] + "authors": ["shumkov"], + "active": false }, { @@ -463,7 +464,8 @@ "language": "PHP", "repository": "https://github.com/jdp/redisent", "description": "", - "authors": ["justinpoliey"] + "authors": ["justinpoliey"], + "active": false }, { @@ -489,7 +491,8 @@ "language": "PHP", "repository": "https://github.com/swoole/redis-async", "description": "Asynchronous redis client library for PHP.", - "authors": ["matyhtf"] + "authors": ["matyhtf"], + "active": false }, { @@ -497,7 +500,8 @@ "language": "PHP", "repository": "https://github.com/yampee/Redis", "description": "A full-featured Redis client for PHP 5.2. Easy to use and to extend.", - "authors": ["tgalopin"] + "authors": ["tgalopin"], + "active": false }, { From 9a6e1b959ccc7bde4707fbd149e4e13ada5b7106 Mon Sep 17 00:00:00 2001 From: Yossi Gottlieb Date: Thu, 16 Jul 2020 19:18:11 +0300 Subject: [PATCH 0411/1457] Update Module API reference. (Auto-generated by gendoc.rb) --- topics/modules-api-ref.md | 1173 ++++++++++++++++++++++++++++++++++++- 1 file changed, 1162 insertions(+), 11 deletions(-) diff --git a/topics/modules-api-ref.md b/topics/modules-api-ref.md index 8f3b8e203c..69352ff691 100644 --- a/topics/modules-api-ref.md +++ b/topics/modules-api-ref.md @@ -137,6 +137,8 @@ example "write deny-oom". The set of flags are: this means. * **"no-monitor"**: Don't propagate the command on monitor. Use this if the command has sensible data among the arguments. +* **"no-slowlog"**: Don't log this command in the slowlog. Use this if + the command has sensible data among the arguments. * **"fast"**: The command time complexity is not greater than O(log(N)) where N is the size of the collection or anything else representing the normal scalability @@ -149,6 +151,9 @@ example "write deny-oom". 
The set of flags are: example, is unable to report the position of the keys, programmatically creates key names, or any other reason. +* **"no-auth"**: This command can be run by an un-authenticated client. + Normally this is used by a command that is used + to authenticate a client. ## `RedisModule_SetModuleAttribs` @@ -172,6 +177,27 @@ Otherwise zero is returned. Return the current UNIX time in milliseconds. +## `RedisModule_SetModuleOptions` + + void RedisModule_SetModuleOptions(RedisModuleCtx *ctx, int options); + +Set flags defining capabilities or behavior bit flags. + +`REDISMODULE_OPTIONS_HANDLE_IO_ERRORS`: +Generally, modules don't need to bother with this, as the process will just +terminate if a read error happens, however, setting this flag would allow +repl-diskless-load to work if enabled. +The module should use `RedisModule_IsIOError` after reads, before using the +data that was read, and in case of error, propagate it upwards, and also be +able to release the partially populated value and all it's allocations. + +## `RedisModule_SignalModifiedKey` + + int RedisModule_SignalModifiedKey(RedisModuleCtx *ctx, RedisModuleString *keyname); + +Signals that the key is modified from user's perspective (i.e. invalidate WATCH +and client side caching). + ## `RedisModule_AutoMemory` void RedisModule_AutoMemory(RedisModuleCtx *ctx); @@ -222,6 +248,29 @@ enabling automatic memory management. The passed context 'ctx' may be NULL if necessary, see the `RedisModule_CreateString()` documentation for more info. +## `RedisModule_CreateStringFromDouble` + + RedisModuleString *RedisModule_CreateStringFromDouble(RedisModuleCtx *ctx, double d); + +Like `RedisModule_CreatString()`, but creates a string starting from a double +integer instead of taking a buffer and its length. + +The returned string must be released with `RedisModule_FreeString()` or by +enabling automatic memory management. + +## `RedisModule_CreateStringFromLongDouble` + + RedisModuleString *RedisModule_CreateStringFromLongDouble(RedisModuleCtx *ctx, long double ld, int humanfriendly); + +Like `RedisModule_CreatString()`, but creates a string starting from a long +double. + +The returned string must be released with `RedisModule_FreeString()` or by +enabling automatic memory management. + +The passed context 'ctx' may be NULL if necessary, see the +`RedisModule_CreateString()` documentation for more info. + ## `RedisModule_CreateStringFromString` RedisModuleString *RedisModule_CreateStringFromString(RedisModuleCtx *ctx, const RedisModuleString *str); @@ -306,6 +355,14 @@ Convert the string into a double, storing it at `*d`. Returns `REDISMODULE_OK` on success or `REDISMODULE_ERR` if the string is not a valid string representation of a double value. +## `RedisModule_StringToLongDouble` + + int RedisModule_StringToLongDouble(const RedisModuleString *str, long double *ld); + +Convert the string into a long double, storing it at `*ld`. +Returns `REDISMODULE_OK` on success or `REDISMODULE_ERR` if the string is +not a valid string representation of a double value. + ## `RedisModule_StringCompare` int RedisModule_StringCompare(RedisModuleString *a, RedisModuleString *b); @@ -384,6 +441,23 @@ latest "open" count if there are multiple ones). The function always returns `REDISMODULE_OK`. +## `RedisModule_ReplyWithNullArray` + + int RedisModule_ReplyWithNullArray(RedisModuleCtx *ctx); + +Reply to the client with a null array, simply null in RESP3 +null array in RESP2. + +The function always returns `REDISMODULE_OK`. 
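To give a feel for how the reply functions above compose inside a command implementation, here is a minimal sketch; the command and function names are invented for illustration:

    int HelloReply_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        REDISMODULE_NOT_USED(argv);
        REDISMODULE_NOT_USED(argc);
        /* Emit a two element array: a status string followed by a number. */
        RedisModule_ReplyWithArray(ctx, 2);
        RedisModule_ReplyWithSimpleString(ctx, "OK");
        RedisModule_ReplyWithLongLong(ctx, 42);
        return REDISMODULE_OK;
    }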
+ +## `RedisModule_ReplyWithEmptyArray` + + int RedisModule_ReplyWithEmptyArray(RedisModuleCtx *ctx); + +Reply to the client with an empty array. + +The function always returns `REDISMODULE_OK`. + ## `RedisModule_ReplySetArrayLength` void RedisModule_ReplySetArrayLength(RedisModuleCtx *ctx, long len); @@ -422,6 +496,15 @@ Reply with a bulk string, taking in input a C buffer pointer and length. The function always returns `REDISMODULE_OK`. +## `RedisModule_ReplyWithCString` + + int RedisModule_ReplyWithCString(RedisModuleCtx *ctx, const char *buf); + +Reply with a bulk string, taking in input a C buffer pointer that is +assumed to be null-terminated. + +The function always returns `REDISMODULE_OK`. + ## `RedisModule_ReplyWithString` int RedisModule_ReplyWithString(RedisModuleCtx *ctx, RedisModuleString *str); @@ -430,12 +513,28 @@ Reply with a bulk string, taking in input a RedisModuleString object. The function always returns `REDISMODULE_OK`. +## `RedisModule_ReplyWithEmptyString` + + int RedisModule_ReplyWithEmptyString(RedisModuleCtx *ctx); + +Reply with an empty string. + +The function always returns `REDISMODULE_OK`. + +## `RedisModule_ReplyWithVerbatimString` + + int RedisModule_ReplyWithVerbatimString(RedisModuleCtx *ctx, const char *buf, size_t len); + +Reply with a binary safe string, which should not be escaped or filtered +taking in input a C buffer pointer and length. + +The function always returns `REDISMODULE_OK`. + ## `RedisModule_ReplyWithNull` int RedisModule_ReplyWithNull(RedisModuleCtx *ctx); -Reply to the client with a NULL. In the RESP protocol a NULL is encoded -as the string "$-1\r\n". +Reply to the client with a NULL. The function always returns `REDISMODULE_OK`. @@ -461,6 +560,19 @@ a string into a C buffer, and then calling the function The function always returns `REDISMODULE_OK`. +## `RedisModule_ReplyWithLongDouble` + + int RedisModule_ReplyWithLongDouble(RedisModuleCtx *ctx, long double ld); + +Send a string reply obtained converting the long double 'ld' into a bulk +string. This function is basically equivalent to converting a long double +into a string into a C buffer, and then calling the function +`RedisModule_ReplyWithStringBuffer()` with the buffer and length. +The double string uses human readable formatting (see +`addReplyHumanLongDouble` in networking.c). + +The function always returns `REDISMODULE_OK`. + ## `RedisModule_Replicate` int RedisModule_Replicate(RedisModuleCtx *ctx, const char *cmdname, const char *fmt, ...); @@ -482,6 +594,24 @@ matching the provided format specifiers. Please refer to `RedisModule_Call()` for more information. +Using the special "A" and "R" modifiers, the caller can exclude either +the AOF or the replicas from the propagation of the specified command. +Otherwise, by default, the command will be propagated in both channels. + +## Note about calling this function from a thread safe context: + +Normally when you call this function from the callback implementing a +module command, or any other callback provided by the Redis Module API, +Redis will accumulate all the calls to this function in the context of +the callback, and will propagate all the commands wrapped in a MULTI/EXEC +transaction. However when calling this function from a threaded safe context +that can live an undefined amount of time, and can be locked/unlocked in +at will, the behavior is different: MULTI/EXEC wrapper is not emitted +and the command specified is inserted in the AOF and replication stream +immediately. 
+ +## Return value + The command returns `REDISMODULE_ERR` if the format specifiers are invalid or the command name does not belong to a known command. @@ -517,6 +647,65 @@ command. The returned ID has a few guarantees: Valid IDs are from 1 to 2^64-1. If 0 is returned it means there is no way to fetch the ID in the context the function was currently called. +After obtaining the ID, it is possible to check if the command execution +is actually happening in the context of AOF loading, using this macro: + + if (RedisModule_IsAOFClient(RedisModule_GetClientId(ctx)) { + // Handle it differently. + } + +## `RedisModule_GetClientInfoById` + + int RedisModule_GetClientInfoById(void *ci, uint64_t id); + +Return information about the client with the specified ID (that was +previously obtained via the `RedisModule_GetClientId()` API). If the +client exists, `REDISMODULE_OK` is returned, otherwise `REDISMODULE_ERR` +is returned. + +When the client exist and the `ci` pointer is not NULL, but points to +a structure of type RedisModuleClientInfo, previously initialized with +the correct `REDISMODULE_CLIENTINFO_INITIALIZER`, the structure is populated +with the following fields: + + uint64_t flags; // REDISMODULE_CLIENTINFO_FLAG_* + uint64_t id; // Client ID + char addr[46]; // IPv4 or IPv6 address. + uint16_t port; // TCP port. + uint16_t db; // Selected DB. + +Note: the client ID is useless in the context of this call, since we + already know, however the same structure could be used in other + contexts where we don't know the client ID, yet the same structure + is returned. + +With flags having the following meaning: + + REDISMODULE_CLIENTINFO_FLAG_SSL Client using SSL connection. + REDISMODULE_CLIENTINFO_FLAG_PUBSUB Client in Pub/Sub mode. + REDISMODULE_CLIENTINFO_FLAG_BLOCKED Client blocked in command. + REDISMODULE_CLIENTINFO_FLAG_TRACKING Client with keys tracking on. + REDISMODULE_CLIENTINFO_FLAG_UNIXSOCKET Client using unix domain socket. + REDISMODULE_CLIENTINFO_FLAG_MULTI Client in MULTI state. + +However passing NULL is a way to just check if the client exists in case +we are not interested in any additional information. + +This is the correct usage when we want the client info structure +returned: + + RedisModuleClientInfo ci = REDISMODULE_CLIENTINFO_INITIALIZER; + int retval = RedisModule_GetClientInfoById(&ci,client_id); + if (retval == REDISMODULE_OK) { + printf("Address: %s\n", ci.addr); + } + +## `RedisModule_PublishMessage` + + int RedisModule_PublishMessage(RedisModuleCtx *ctx, RedisModuleString *channel, RedisModuleString *message); + +Publish a message to subscribers (see PUBLISH command). + ## `RedisModule_GetSelectedDb` int RedisModule_GetSelectedDb(RedisModuleCtx *ctx); @@ -531,12 +720,20 @@ Return the current context's flags. The flags provide information on the current request context (whether the client is a Lua script or in a MULTI), and about the Redis instance in general, i.e replication and persistence. -The available flags are: +It is possible to call this function even with a NULL context, however +in this case the following flags will not be reported: + + * LUA, MULTI, REPLICATED, DIRTY (see below for more info). 
+ +Available flags and their meaning: * REDISMODULE_CTX_FLAGS_LUA: The command is running in a Lua script * REDISMODULE_CTX_FLAGS_MULTI: The command is running inside a transaction + * REDISMODULE_CTX_FLAGS_REPLICATED: The command was sent over the replication + link by the MASTER + * REDISMODULE_CTX_FLAGS_MASTER: The Redis instance is a master * REDISMODULE_CTX_FLAGS_SLAVE: The Redis instance is a slave @@ -560,6 +757,47 @@ The available flags are: * REDISMODULE_CTX_FLAGS_OOM_WARNING: Less than 25% of memory remains before reaching the maxmemory level. + * REDISMODULE_CTX_FLAGS_LOADING: Server is loading RDB/AOF + + * REDISMODULE_CTX_FLAGS_REPLICA_IS_STALE: No active link with the master. + + * REDISMODULE_CTX_FLAGS_REPLICA_IS_CONNECTING: The replica is trying to + connect with the master. + + * REDISMODULE_CTX_FLAGS_REPLICA_IS_TRANSFERRING: Master -> Replica RDB + transfer is in progress. + + * REDISMODULE_CTX_FLAGS_REPLICA_IS_ONLINE: The replica has an active link + with its master. This is the + contrary of STALE state. + + * REDISMODULE_CTX_FLAGS_ACTIVE_CHILD: There is currently some background + process active (RDB, AUX or module). + +## `RedisModule_AvoidReplicaTraffic` + + int RedisModule_AvoidReplicaTraffic(); + +Returns true if some client sent the CLIENT PAUSE command to the server or +if Redis Cluster is doing a manual failover, and paused tue clients. +This is needed when we have a master with replicas, and want to write, +without adding further data to the replication channel, that the replicas +replication offset, match the one of the master. When this happens, it is +safe to failover the master without data loss. + +However modules may generate traffic by calling `RedisModule_Call()` with +the "!" flag, or by calling `RedisModule_Replicate()`, in a context outside +commands execution, for instance in timeout callbacks, threads safe +contexts, and so forth. When modules will generate too much traffic, it +will be hard for the master and replicas offset to match, because there +is more data to send in the replication channel. + +So modules may want to try to avoid very heavy background work that has +the effect of creating data to the replication channel, when this function +returns true. This is mostly useful for modules that have background +garbage collection tasks, or that do writes and replicate such writes +periodically in timer callbacks or other periodic callbacks. + ## `RedisModule_SelectDb` int RedisModule_SelectDb(RedisModuleCtx *ctx, int newid); @@ -630,7 +868,7 @@ writing `REDISMODULE_ERR` is returned. int RedisModule_UnlinkKey(RedisModuleKey *key); -If the key is open for writing, unlink it (that is delete it in a +If the key is open for writing, unlink it (that is delete it in a non-blocking way, not reclaiming memory immediately) and setup the key to accept new writes as an empty key (that will be created on demand). On success `REDISMODULE_OK` is returned. If the key is not open for @@ -658,6 +896,27 @@ the number of milliseconds of TTL the key should have. The function returns `REDISMODULE_OK` on success or `REDISMODULE_ERR` if the key was not open for writing or is an empty key. +## `RedisModule_ResetDataset` + + void RedisModule_ResetDataset(int restart_aof, int async); + +Performs similar operation to FLUSHALL, and optionally start a new AOF file (if enabled) +If restart_aof is true, you must make sure the command that triggered this call is not +propagated to the AOF file. +When async is set to true, db contents will be freed by a background thread. 
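Tying the key handling calls above together, a minimal sketch that opens a key and, if it exists, gives it a ten second TTL (error handling omitted):

    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1],
        REDISMODULE_READ | REDISMODULE_WRITE);
    if (RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_EMPTY) {
        /* RedisModule_SetExpire() takes the TTL in milliseconds. */
        RedisModule_SetExpire(key, 10000);
    }
    RedisModule_CloseKey(key);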
+ +## `RedisModule_DbSize` + + unsigned long long RedisModule_DbSize(RedisModuleCtx *ctx); + +Returns the number of keys in the current db. + +## `RedisModule_RandomKey` + + RedisModuleString *RedisModule_RandomKey(RedisModuleCtx *ctx); + +Returns a name of a random key, or NULL if current db is empty. + ## `RedisModule_StringSet` int RedisModule_StringSet(RedisModuleKey *key, RedisModuleString *str); @@ -1003,7 +1262,7 @@ passing flags different than `REDISMODULE_HASH_NONE`: `REDISMODULE_HASH_EXISTS`: instead of setting the value of the field expecting a RedisModuleString pointer to pointer, the function just -reports if the field esists or not and expects an integer pointer +reports if the field exists or not and expects an integer pointer as the second element of each pair. Example of `REDISMODULE_HASH_CFIELD`: @@ -1085,8 +1344,15 @@ Exported API to call any Redis command from modules. On success a RedisModuleCallReply object is returned, otherwise NULL is returned and errno is set to the following values: -EINVAL: command non existing, wrong arity, wrong format specifier. +EBADF: wrong format specifier. +EINVAL: wrong command arity. +ENOENT: command does not exist. EPERM: operation in Cluster instance with key in non local slot. +EROFS: operation in Cluster instance when a write command is sent + in a readonly state. +ENETDOWN: operation in Cluster instance when cluster is down. + +This API is documented here: https://redis.io/topics/modules-intro ## `RedisModule_CallReplyProto` @@ -1134,6 +1400,8 @@ documentation, especially the TYPES.md file. // Optional fields .digest = myType_DigestCallBack, .mem_usage = myType_MemUsageCallBack, + .aux_load = myType_AuxRDBLoadCallBack, + .aux_save = myType_AuxRDBSaveCallBack, } * **rdb_load**: A callback function pointer that loads data from RDB files. @@ -1141,6 +1409,10 @@ documentation, especially the TYPES.md file. * **aof_rewrite**: A callback function pointer that rewrites data as commands. * **digest**: A callback function pointer that is used for `DEBUG DIGEST`. * **free**: A callback function pointer that can free a type value. +* **aux_save**: A callback function pointer that saves out of keyspace data to RDB files. + 'when' argument is either REDISMODULE_AUX_BEFORE_RDB or REDISMODULE_AUX_AFTER_RDB. +* **aux_load**: A callback function pointer that loads out of keyspace data from RDB files. + Similar to aux_save, returns REDISMODULE_OK on success, and ERR otherwise. The **digest* and **mem_usage** methods should currently be omitted since they are not yet implemented inside the Redis modules core. @@ -1193,6 +1465,14 @@ it was set by the user via `RedisModule_ModuleTypeSet()`. If the key is NULL, is not associated with a module type, or is empty, then NULL is returned instead. +## `RedisModule_IsIOError` + + int RedisModule_IsIOError(RedisModuleIO *io); + +Returns true if any previous IO API failed. +for Load* APIs the `REDISMODULE_OPTIONS_HANDLE_IO_ERRORS` flag must be set with +RediModule_SetModuleOptions first. + ## `RedisModule_SaveUnsigned` void RedisModule_SaveUnsigned(RedisModuleIO *io, uint64_t value); @@ -1262,7 +1542,7 @@ was allocated with `RedisModule_Alloc()`, and can be resized or freed with `RedisModule_Realloc()` or `RedisModule_Free()`. The size of the string is stored at '*lenptr' if not NULL. -The returned string is not automatically NULL termianted, it is loaded +The returned string is not automatically NULL terminated, it is loaded exactly as it was stored inisde the RDB file. 
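As a sketch of how the load and save halves mirror each other, hypothetical rdb_save and rdb_load callbacks for a type holding a counter and a name could look like this:

    /* struct MyType and its fields are invented for this example. */
    void MyType_RdbSave(RedisModuleIO *rdb, void *value) {
        struct MyType *o = value;
        RedisModule_SaveUnsigned(rdb, o->counter);
        RedisModule_SaveStringBuffer(rdb, o->name, o->namelen);
    }

    void *MyType_RdbLoad(RedisModuleIO *rdb, int encver) {
        REDISMODULE_NOT_USED(encver);
        struct MyType *o = RedisModule_Alloc(sizeof(*o));
        o->counter = RedisModule_LoadUnsigned(rdb);
        o->name = RedisModule_LoadStringBuffer(rdb, &o->namelen);
        return o;
    }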
## `RedisModule_SaveDouble` @@ -1295,6 +1575,21 @@ It is possible to load back the value with `RedisModule_LoadFloat()`. In the context of the rdb_save method of a module data type, loads back the float value saved by `RedisModule_SaveFloat()`. +## `RedisModule_SaveLongDouble` + + void RedisModule_SaveLongDouble(RedisModuleIO *io, long double value); + +In the context of the rdb_save method of a module data type, saves a long double +value to the RDB file. The double can be a valid number, a NaN or infinity. +It is possible to load back the value with `RedisModule_LoadLongDouble()`. + +## `RedisModule_LoadLongDouble` + + long double RedisModule_LoadLongDouble(RedisModuleIO *io); + +In the context of the rdb_save method of a module data type, loads back the +long double value saved by `RedisModule_SaveLongDouble()`. + ## `RedisModule_DigestAddStringBuffer` void RedisModule_DigestAddStringBuffer(RedisModuleDigest *md, unsigned char *ele, size_t len); @@ -1359,6 +1654,20 @@ by a module. The command works exactly like `RedisModule_Call()` in the way the parameters are passed, but it does not return anything as the error handling is performed by Redis itself. +## `RedisModule_GetKeyNameFromIO` + + const RedisModuleString *RedisModule_GetKeyNameFromIO(RedisModuleIO *io); + +Returns a RedisModuleString with the name of the key currently saving or +loading, when an IO data type callback is called. There is no guarantee +that the key name is always available, so this may return NULL. + +## `RedisModule_GetKeyNameFromModuleKey` + + const RedisModuleString *RedisModule_GetKeyNameFromModuleKey(RedisModuleKey *key); + +Returns a RedisModuleString with the name of the key from RedisModuleKey + ## `RedisModule_LogRaw` void RedisModule_LogRaw(RedisModule *module, const char *levelstr, const char *fmt, va_list ap); @@ -1386,6 +1695,10 @@ There is a fixed limit to the length of the log line this function is able to emit, this limit is not specified but is guaranteed to be more than a few lines of text. +The ctx argument may be NULL if cannot be provided in the context of the +caller for instance threads or callbacks, in which case a generic "module" +will be used instead of the module name. + ## `RedisModule_LogIOError` void RedisModule_LogIOError(RedisModuleIO *io, const char *levelstr, const char *fmt, ...); @@ -1396,6 +1709,23 @@ This function should be used when a callback is returning a critical error to the caller since cannot load or save the data for some critical reason. +## `RedisModule__Assert` + + void RedisModule__Assert(const char *estr, const char *file, int line); + +Redis-like assert function. + +A failed assertion will shut down the server and produce logging information +that looks identical to information generated by Redis itself. + +## `RedisModule_LatencyAddSample` + + void RedisModule_LatencyAddSample(const char *event, mstime_t latency); + +Allows adding event to the latency monitor to be observed by the LATENCY +command. The call is skipped if the latency is smaller than the configured +latency-monitor-threshold. + ## `RedisModule_BlockClient` RedisModuleBlockedClient *RedisModule_BlockClient(RedisModuleCtx *ctx, RedisModuleCmdFunc reply_callback, RedisModuleCmdFunc timeout_callback, void (*free_privdata)(RedisModuleCtx*,void*), long long timeout_ms); @@ -1416,6 +1746,81 @@ The callbacks are called in the following contexts: free_privdata: called in order to free the private data that is passed by RedisModule_UnblockClient() call. 
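A rough sketch of the blocking flow, with the callback and thread plumbing left out; every name other than the module API calls is hypothetical:

    RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx,
        MyCmd_Reply, MyCmd_Timeout, MyCmd_FreePrivdata, 1000);
    /* Hand 'bc' to a worker thread; when the result is ready the
     * thread replies and unblocks the client with: */
    RedisModule_UnblockClient(bc, result);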
+Note: `RedisModule_UnblockClient` should be called for every blocked client, + even if client was killed, timed-out or disconnected. Failing to do so + will result in memory leaks. + +## `RedisModule_BlockClientOnKeys` + + RedisModuleBlockedClient *RedisModule_BlockClientOnKeys(RedisModuleCtx *ctx, RedisModuleCmdFunc reply_callback, RedisModuleCmdFunc timeout_callback, void (*free_privdata)(RedisModuleCtx*,void*), long long timeout_ms, RedisModuleString **keys, int numkeys, void *privdata); + +This call is similar to `RedisModule_BlockClient()`, however in this case we +don't just block the client, but also ask Redis to unblock it automatically +once certain keys become "ready", that is, contain more data. + +Basically this is similar to what a typical Redis command usually does, +like BLPOP or ZPOPMAX: the client blocks if it cannot be served ASAP, +and later when the key receives new data (a list push for instance), the +client is unblocked and served. + +However in the case of this module API, when the client is unblocked? + +1. If you block ok a key of a type that has blocking operations associated, + like a list, a sorted set, a stream, and so forth, the client may be + unblocked once the relevant key is targeted by an operation that normally + unblocks the native blocking operations for that type. So if we block + on a list key, an RPUSH command may unblock our client and so forth. +2. If you are implementing your native data type, or if you want to add new + unblocking conditions in addition to "1", you can call the modules API + RedisModule_SignalKeyAsReady(). + +Anyway we can't be sure if the client should be unblocked just because the +key is signaled as ready: for instance a successive operation may change the +key, or a client in queue before this one can be served, modifying the key +as well and making it empty again. So when a client is blocked with +`RedisModule_BlockClientOnKeys()` the reply callback is not called after +`RM_UnblockCLient()` is called, but every time a key is signaled as ready: +if the reply callback can serve the client, it returns `REDISMODULE_OK` +and the client is unblocked, otherwise it will return `REDISMODULE_ERR` +and we'll try again later. + +The reply callback can access the key that was signaled as ready by +calling the API `RedisModule_GetBlockedClientReadyKey()`, that returns +just the string name of the key as a RedisModuleString object. + +Thanks to this system we can setup complex blocking scenarios, like +unblocking a client only if a list contains at least 5 items or other +more fancy logics. + +Note that another difference with `RedisModule_BlockClient()`, is that here +we pass the private data directly when blocking the client: it will +be accessible later in the reply callback. Normally when blocking with +`RedisModule_BlockClient()` the private data to reply to the client is +passed when calling `RedisModule_UnblockClient()` but here the unblocking +is performed by Redis itself, so we need to have some private data before +hand. The private data is used to store any information about the specific +unblocking operation that you are implementing. Such information will be +freed using the free_privdata callback provided by the user. + +However the reply callback will be able to access the argument vector of +the command, so the private data is often not needed. 
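A compact sketch of blocking on keys; the command and callback names here are invented:

    int MyBlocking_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
        if (argc != 2) return RedisModule_WrongArity(ctx);
        /* Block for up to 1000 ms until argv[1] is signaled as ready. */
        RedisModule_BlockClientOnKeys(ctx, MyKeys_Reply, MyKeys_Timeout,
                                      NULL, 1000, &argv[1], 1, NULL);
        return REDISMODULE_OK;
    }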
+ +Note: Under normal circumstances `RedisModule_UnblockClient` should not be + called for clients that are blocked on keys (Either the key will + become ready or a timeout will occur). If for some reason you do want + to call RedisModule_UnblockClient it is possible: Client will be + handled as if it were timed-out (You must implement the timeout + callback in that case). + +## `RedisModule_SignalKeyAsReady` + + void RedisModule_SignalKeyAsReady(RedisModuleCtx *ctx, RedisModuleString *key); + +This function is used in order to potentially unblock a client blocked +on keys with `RedisModule_BlockClientOnKeys()`. When this function is called, +all the clients blocked for this key will get their reply callback called, +and if the callback returns `REDISMODULE_OK` the client will be unblocked. + ## `RedisModule_UnblockClient` int RedisModule_UnblockClient(RedisModuleBlockedClient *bc, void *privdata); @@ -1430,7 +1835,16 @@ A common usage for 'privdata' is a thread that computes something that needs to be passed to the client, included but not limited some slow to compute reply or some reply obtained via networking. -Note: this function can be called from threads spawned by the module. +Note 1: this function can be called from threads spawned by the module. + +Note 2: when we unblock a client that is blocked for keys using +the API `RedisModule_BlockClientOnKeys()`, the privdata argument here is +not used, and the reply callback is called with the privdata pointer that +was passed when blocking the client. + +Unblocking a client that was blocked for keys using this API will still +require the client to get some reply, so the function will use the +"timeout" handler in order to do so. ## `RedisModule_AbortBlock` @@ -1479,6 +1893,13 @@ reply for a blocked client that timed out. Get the private data set by `RedisModule_UnblockClient()` +## `RedisModule_GetBlockedClientReadyKey` + + RedisModuleString *RedisModule_GetBlockedClientReadyKey(RedisModuleCtx *ctx); + +Get the key that is ready when the reply callback is called in the context +of a client blocked by `RedisModule_BlockClientOnKeys()`. + ## `RedisModule_GetBlockedClientHandle` RedisModuleBlockedClient *RedisModule_GetBlockedClientHandle(RedisModuleCtx *ctx); @@ -1509,9 +1930,9 @@ detached by a specific client. To call non-reply APIs, the thread safe context must be prepared with: - RedisModule_ThreadSafeCallStart(ctx); + RedisModule_ThreadSafeContextLock(ctx); ... make your call here ... - RedisModule_ThreadSafeCallStop(ctx); + RedisModule_ThreadSafeContextUnlock(ctx); This is not needed when using ``RedisModule_Reply`*` functions, assuming that a blocked client was used when the context was created, otherwise @@ -1565,7 +1986,8 @@ is interested in. This can be an ORed mask of any of the following flags: - REDISMODULE_NOTIFY_EXPIRED: Expiration events - REDISMODULE_NOTIFY_EVICTED: Eviction events - REDISMODULE_NOTIFY_STREAM: Stream events - - REDISMODULE_NOTIFY_ALL: All events + - REDISMODULE_NOTIFY_KEYMISS: Key-miss events + - REDISMODULE_NOTIFY_ALL: All events (Excluding REDISMODULE_NOTIFY_KEYMISS) We do not distinguish between key events and keyspace events, and it is up to the module to filter the actions taken based on the key. @@ -1593,6 +2015,19 @@ If you need to take long actions, use threads to offload them. See https://redis.io/topics/notifications for more information. 
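As a minimal sketch of the subscription mechanism described above, a module could log every expired key like this (the callback name is illustrative):

    int OnKeyExpired(RedisModuleCtx *ctx, int type, const char *event,
                     RedisModuleString *key) {
        REDISMODULE_NOT_USED(type);
        REDISMODULE_NOT_USED(event);
        RedisModule_Log(ctx, "notice", "expired: %s",
                        RedisModule_StringPtrLen(key, NULL));
        return REDISMODULE_OK;
    }

    /* In RedisModule_OnLoad(): */
    RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_EXPIRED, OnKeyExpired);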
+## `RedisModule_GetNotifyKeyspaceEvents` + + int RedisModule_GetNotifyKeyspaceEvents(); + +Get the configured bitmap of notify-keyspace-events (Could be used +for additional filtering in RedisModuleNotificationFunc) + +## `RedisModule_NotifyKeyspaceEvent` + + int RedisModule_NotifyKeyspaceEvent(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key); + +Expose notifyKeyspaceEvent to modules + ## `RedisModule_RegisterClusterMessageReceiver` void RedisModule_RegisterClusterMessageReceiver(RedisModuleCtx *ctx, uint8_t type, RedisModuleClusterMessageReceiver callback); @@ -1714,6 +2149,87 @@ no information is returned and the function returns `REDISMODULE_ERR`, otherwise `REDISMODULE_OK` is returned. The arguments remaining or data can be NULL if the caller does not need certain information. +## `RedisModule_CreateModuleUser` + + RedisModuleUser *RedisModule_CreateModuleUser(const char *name); + +Creates a Redis ACL user that the module can use to authenticate a client. +After obtaining the user, the module should set what such user can do +using the `RM_SetUserACL()` function. Once configured, the user +can be used in order to authenticate a connection, with the specified +ACL rules, using the `RedisModule_AuthClientWithUser()` function. + +Note that: + +* Users created here are not listed by the ACL command. +* Users created here are not checked for duplicated name, so it's up to + the module calling this function to take care of not creating users + with the same name. +* The created user can be used to authenticate multiple Redis connections. + +The caller can later free the user using the function +`RM_FreeModuleUser()`. When this function is called, if there are +still clients authenticated with this user, they are disconnected. +The function to free the user should only be used when the caller really +wants to invalidate the user to define a new one with different +capabilities. + +## `RedisModule_FreeModuleUser` + + int RedisModule_FreeModuleUser(RedisModuleUser *user); + +Frees a given user and disconnects all of the clients that have been +authenticated with it. See `RM_CreateModuleUser` for detailed usage. + +## `RedisModule_SetModuleUserACL` + + int RedisModule_SetModuleUserACL(RedisModuleUser *user, const char* acl); + +Sets the permissions of a user created through the redis module +interface. The syntax is the same as ACL SETUSER, so refer to the +documentation in acl.c for more information. See `RM_CreateModuleUser` +for detailed usage. + +Returns `REDISMODULE_OK` on success and `REDISMODULE_ERR` on failure +and will set an errno describing why the operation failed. + +## `RedisModule_AuthenticateClientWithUser` + + int RedisModule_AuthenticateClientWithUser(RedisModuleCtx *ctx, RedisModuleUser *module_user, RedisModuleUserChangedFunc callback, void *privdata, uint64_t *client_id); + +Authenticate the current context's user with the provided redis acl user. +Returns `REDISMODULE_ERR` if the user is disabled. + +See authenticateClientWithUser for information about callback, client_id, +and general usage for authentication. + +## `RedisModule_AuthenticateClientWithACLUser` + + int RedisModule_AuthenticateClientWithACLUser(RedisModuleCtx *ctx, const char *name, size_t len, RedisModuleUserChangedFunc callback, void *privdata, uint64_t *client_id); + +Authenticate the current context's user with the provided redis acl user. +Returns `REDISMODULE_ERR` if the user is disabled or the user does not exist. 
+ +See authenticateClientWithUser for information about callback, client_id, +and general usage for authentication. + +## `RedisModule_DeauthenticateAndCloseClient` + + int RedisModule_DeauthenticateAndCloseClient(RedisModuleCtx *ctx, uint64_t client_id); + +Deauthenticate and close the client. The client resources will not be +be immediately freed, but will be cleaned up in a background job. This is +the recommended way to deauthenicate a client since most clients can't +handle users becomming deauthenticated. Returns `REDISMODULE_ERR` when the +client doesn't exist and `REDISMODULE_OK` when the operation was successful. + +The client ID is returned from the `RM_AuthenticateClientWithUser` and +`RM_AuthenticateClientWithACLUser` APIs, but can be obtained through +the CLIENT api or through server events. + +This function is not thread safe, and must be executed within the context +of a command or thread safe context. + ## `RedisModule_CreateDict` RedisModuleDict *RedisModule_CreateDict(RedisModuleCtx *ctx); @@ -1922,6 +2438,119 @@ Like `RedisModule_DictNext()` but after returning the currently selected element in the iterator, it selects the previous element (laxicographically smaller) instead of the next one. +## `RedisModule_DictCompareC` + + int RedisModule_DictCompareC(RedisModuleDictIter *di, const char *op, void *key, size_t keylen); + +Compare the element currently pointed by the iterator to the specified +element given by key/keylen, according to the operator 'op' (the set of +valid operators are the same valid for `RedisModule_DictIteratorStart)`. +If the comparision is successful the command returns `REDISMODULE_OK` +otherwise `REDISMODULE_ERR` is returned. + +This is useful when we want to just emit a lexicographical range, so +in the loop, as we iterate elements, we can also check if we are still +on range. + +The function returne `REDISMODULE_ERR` if the iterator reached the +end of elements condition as well. + +## `RedisModule_DictCompare` + + int RedisModule_DictCompare(RedisModuleDictIter *di, const char *op, RedisModuleString *key); + +Like `RedisModule_DictCompareC` but gets the key to compare with the current +iterator key as a RedisModuleString. + +## `RedisModule_InfoAddSection` + + int RedisModule_InfoAddSection(RedisModuleInfoCtx *ctx, char *name); + +Used to start a new section, before adding any fields. the section name will +be prefixed by "_" and must only include A-Z,a-z,0-9. +NULL or empty string indicates the default section (only ) is used. +When return value is `REDISMODULE_ERR`, the section should and will be skipped. + +## `RedisModule_InfoBeginDictField` + + int RedisModule_InfoBeginDictField(RedisModuleInfoCtx *ctx, char *name); + +Starts a dict field, similar to the ones in INFO KEYSPACE. Use normal +`RedisModule_InfoAddField`* functions to add the items to this field, and +terminate with `RedisModule_InfoEndDictField`. + +## `RedisModule_InfoEndDictField` + + int RedisModule_InfoEndDictField(RedisModuleInfoCtx *ctx); + +Ends a dict field, see `RedisModule_InfoBeginDictField` + +## `RedisModule_InfoAddFieldString` + + int RedisModule_InfoAddFieldString(RedisModuleInfoCtx *ctx, char *field, RedisModuleString *value); + +Used by RedisModuleInfoFunc to add info fields. +Each field will be automatically prefixed by "_". 
+Field names or values must not include \r\n of ":" + +## `RedisModule_GetServerInfo` + + RedisModuleServerInfoData *RedisModule_GetServerInfo(RedisModuleCtx *ctx, const char *section); + +Get information about the server similar to the one that returns from the +INFO command. This function takes an optional 'section' argument that may +be NULL. The return value holds the output and can be used with +`RedisModule_ServerInfoGetField` and alike to get the individual fields. +When done, it needs to be freed with `RedisModule_FreeServerInfo` or with the +automatic memory management mechanism if enabled. + +## `RedisModule_FreeServerInfo` + + void RedisModule_FreeServerInfo(RedisModuleCtx *ctx, RedisModuleServerInfoData *data); + +Free data created with `RM_GetServerInfo()`. You need to pass the +context pointer 'ctx' only if the dictionary was created using the +context instead of passing NULL. + +## `RedisModule_ServerInfoGetField` + + RedisModuleString *RedisModule_ServerInfoGetField(RedisModuleCtx *ctx, RedisModuleServerInfoData *data, const char* field); + +Get the value of a field from data collected with `RM_GetServerInfo()`. You +need to pass the context pointer 'ctx' only if you want to use auto memory +mechanism to release the returned string. Return value will be NULL if the +field was not found. + +## `RedisModule_ServerInfoGetFieldC` + + const char *RedisModule_ServerInfoGetFieldC(RedisModuleServerInfoData *data, const char* field); + +Similar to `RM_ServerInfoGetField`, but returns a char* which should not be freed but the caller. + +## `RedisModule_ServerInfoGetFieldSigned` + + long long RedisModule_ServerInfoGetFieldSigned(RedisModuleServerInfoData *data, const char* field, int *out_err); + +Get the value of a field from data collected with `RM_GetServerInfo()`. If the +field is not found, or is not numerical or out of range, return value will be +0, and the optional out_err argument will be set to `REDISMODULE_ERR`. + +## `RedisModule_ServerInfoGetFieldUnsigned` + + unsigned long long RedisModule_ServerInfoGetFieldUnsigned(RedisModuleServerInfoData *data, const char* field, int *out_err); + +Get the value of a field from data collected with `RM_GetServerInfo()`. If the +field is not found, or is not numerical or out of range, return value will be +0, and the optional out_err argument will be set to `REDISMODULE_ERR`. + +## `RedisModule_ServerInfoGetFieldDouble` + + double RedisModule_ServerInfoGetFieldDouble(RedisModuleServerInfoData *data, const char* field, int *out_err); + +Get the value of a field from data collected with `RM_GetServerInfo()`. If the +field is not found, or is not a double, return value will be 0, and the +optional out_err argument will be set to `REDISMODULE_ERR`. + ## `RedisModule_GetRandomBytes` void RedisModule_GetRandomBytes(unsigned char *dst, size_t len); @@ -1939,3 +2568,525 @@ Like `RedisModule_GetRandomBytes()` but instead of setting the string to random bytes the string is set to random characters in the in the hex charset [0-9a-f]. +## `RedisModule_ExportSharedAPI` + + int RedisModule_ExportSharedAPI(RedisModuleCtx *ctx, const char *apiname, void *func); + +This function is called by a module in order to export some API with a +given name. Other modules will be able to use this API by calling the +symmetrical function `RM_GetSharedAPI()` and casting the return value to +the right function pointer. 
+
 ## `RedisModule_GetRandomBytes`

     void RedisModule_GetRandomBytes(unsigned char *dst, size_t len);

@@ -1939,3 +2568,525 @@ Like `RedisModule_GetRandomBytes()` but instead of setting the string to
 random bytes the string is set to random characters in the hex
 charset [0-9a-f].

+## `RedisModule_ExportSharedAPI`
+
+    int RedisModule_ExportSharedAPI(RedisModuleCtx *ctx, const char *apiname, void *func);
+
+This function is called by a module in order to export some API with a
+given name. Other modules will be able to use this API by calling the
+symmetrical function `RM_GetSharedAPI()` and casting the return value to
+the right function pointer.
+
+The function will return `REDISMODULE_OK` if the name is not already taken,
+otherwise `REDISMODULE_ERR` will be returned and no operation will be
+performed.
+
+IMPORTANT: the apiname argument should be a string literal with static
+lifetime. The API relies on the fact that it will always be valid in
+the future.
+
+## `RedisModule_GetSharedAPI`
+
+    void *RedisModule_GetSharedAPI(RedisModuleCtx *ctx, const char *apiname);
+
+Request an exported API pointer. The return value is just a void pointer
+that the caller of this function will be required to cast to the right
+function pointer, so this is a private contract between modules.
+
+If the requested API is not available then NULL is returned. Because
+modules can be loaded at different times and in different orders, calls to
+this function should be put inside some generic API-registering step that
+is executed every time a module attempts to execute a command requiring
+external APIs: if some API cannot be resolved, the command should return
+an error.
+
+Here is an example:
+
+    int ... myCommandImplementation() {
+        if (getExternalAPIs(ctx) == 0) {
+            reply with an error here if we cannot have the APIs
+        }
+        // Use the API:
+        myFunctionPointer(foo);
+    }
+
+And the getExternalAPIs() function is:
+
+    int getExternalAPIs(RedisModuleCtx *ctx) {
+        static int api_loaded = 0;
+        if (api_loaded != 0) return 1; // APIs already resolved.
+
+        myFunctionPointer = RedisModule_GetSharedAPI(ctx, "...");
+        if (myFunctionPointer == NULL) return 0;
+
+        api_loaded = 1;
+        return 1;
+    }
+
+## `RedisModule_UnregisterCommandFilter`
+
+    int RedisModule_UnregisterCommandFilter(RedisModuleCtx *ctx, RedisModuleCommandFilter *filter);
+
+Unregister a command filter.
+
+## `RedisModule_CommandFilterArgsCount`
+
+    int RedisModule_CommandFilterArgsCount(RedisModuleCommandFilterCtx *fctx);
+
+Return the number of arguments a filtered command has. The number of
+arguments includes the command itself.
+
+## `RedisModule_CommandFilterArgGet`
+
+    const RedisModuleString *RedisModule_CommandFilterArgGet(RedisModuleCommandFilterCtx *fctx, int pos);
+
+Return the specified command argument. The first argument (position 0) is
+the command itself, and the rest are user-provided args.
+
+## `RedisModule_CommandFilterArgDelete`
+
+    int RedisModule_CommandFilterArgDelete(RedisModuleCommandFilterCtx *fctx, int pos);
+
+Modify the filtered command by deleting an argument at the specified
+position.
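+
+For instance, a filter that counts SET invocations could look like the
+following sketch (registration through `RedisModule_RegisterCommandFilter`
+is assumed, `set_calls` is a hypothetical module-global, and `strncasecmp`
+needs `strings.h`):
+
+    static long long set_calls = 0;
+
+    void CountSetFilter(RedisModuleCommandFilterCtx *fctx) {
+        if (RedisModule_CommandFilterArgsCount(fctx) < 1) return;
+        /* Argument 0 is the command name itself. */
+        const RedisModuleString *cmd = RedisModule_CommandFilterArgGet(fctx, 0);
+        size_t len;
+        const char *name = RedisModule_StringPtrLen(cmd, &len);
+        if (len == 3 && strncasecmp(name, "set", 3) == 0) set_calls++;
+    }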
+
+## `RedisModule_MallocSize`
+
+    size_t RedisModule_MallocSize(void* ptr);
+
+For a given pointer allocated via `RedisModule_Alloc()` or
+`RedisModule_Realloc()`, return the amount of memory allocated for it.
+Note that this may be different (larger) than the memory we allocated
+with the allocation calls, since sometimes the underlying allocator
+will allocate more memory.
+
+## `RedisModule_GetUsedMemoryRatio`
+
+    float RedisModule_GetUsedMemoryRatio();
+
+Return a number between 0 and 1 indicating the amount of memory
+currently used, relative to the Redis "maxmemory" configuration.
+
+0 - No memory limit configured.
+Between 0 and 1 - The percentage of the memory used normalized in the 0-1 range.
+Exactly 1 - Memory limit reached.
+Greater than 1 - More memory used than the configured limit.
+
+## `RedisModule_ScanCursorCreate`
+
+    RedisModuleScanCursor *RedisModule_ScanCursorCreate();
+
+Create a new cursor to be used with `RedisModule_Scan`.
+
+## `RedisModule_ScanCursorRestart`
+
+    void RedisModule_ScanCursorRestart(RedisModuleScanCursor *cursor);
+
+Restart an existing cursor. The keys will be rescanned.
+
+## `RedisModule_ScanCursorDestroy`
+
+    void RedisModule_ScanCursorDestroy(RedisModuleScanCursor *cursor);
+
+Destroy the cursor struct.
+
+## `RedisModule_Scan`
+
+    int RedisModule_Scan(RedisModuleCtx *ctx, RedisModuleScanCursor *cursor, RedisModuleScanCB fn, void *privdata);
+
+Scan API that allows a module to scan all the keys and values in
+the selected db.
+
+The callback for the scan implementation has this signature:
+
+    void scan_callback(RedisModuleCtx *ctx, RedisModuleString *keyname,
+                       RedisModuleKey *key, void *privdata);
+
+- ctx - the redis module context provided for the scan.
+- keyname - owned by the caller and needs to be retained if used after this
+  function.
+- key - holds info on the key and value. It is provided as best effort; in
+  some cases it might be NULL, in which case the user should (can) use
+  `RedisModule_OpenKey` (and CloseKey too). When it is provided, it is owned
+  by the caller and will be freed when the callback returns.
+- privdata - the user data provided to `RedisModule_Scan`.
+
+The way it should be used:
+
+    RedisModuleScanCursor *c = RedisModule_ScanCursorCreate();
+    while(RedisModule_Scan(ctx, c, callback, privateData));
+    RedisModule_ScanCursorDestroy(c);
+
+It is also possible to use this API from another thread while the lock
+is acquired during the actual call to `RM_Scan`:
+
+    RedisModuleScanCursor *c = RedisModule_ScanCursorCreate();
+    RedisModule_ThreadSafeContextLock(ctx);
+    while(RedisModule_Scan(ctx, c, callback, privateData)){
+        RedisModule_ThreadSafeContextUnlock(ctx);
+        // do some background job
+        RedisModule_ThreadSafeContextLock(ctx);
+    }
+    RedisModule_ScanCursorDestroy(c);
+
+The function will return 1 if there are more elements to scan and
+0 otherwise, possibly setting errno if the call failed.
+
+It is also possible to restart an existing cursor using `RM_ScanCursorRestart`.
+
+IMPORTANT: This API is very similar to the Redis SCAN command from the
+point of view of the guarantees it provides. This means that the API
+may report duplicated keys, but guarantees to report at least one time
+every key that was there from the start to the end of the scanning process.
+
+NOTE: If you do database changes within the callback, you should be aware
+that the internal state of the database may change. For instance it is safe
+to delete or modify the current key, but may not be safe to delete any
+other key.
+Moreover, playing with the Redis keyspace while iterating may have the
+effect of returning more duplicates. A safe pattern is to store the key
+names you want to modify elsewhere, and perform the actions on the keys
+later when the iteration is complete. However this can cost a lot of
+memory, so it may make sense to just operate on the current key when
+possible during the iteration, given that this is safe.
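+
+A concrete callback for the loops above could be as simple as this sketch,
+which counts the keys in the keyspace through the privdata pointer:
+
+    void CountKeysCallback(RedisModuleCtx *ctx, RedisModuleString *keyname,
+                           RedisModuleKey *key, void *privdata) {
+        REDISMODULE_NOT_USED(ctx);
+        REDISMODULE_NOT_USED(keyname);
+        REDISMODULE_NOT_USED(key);
+        long long *count = privdata;
+        (*count)++;
+    }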
+
+## `RedisModule_ScanKey`
+
+    int RedisModule_ScanKey(RedisModuleKey *key, RedisModuleScanCursor *cursor, RedisModuleScanKeyCB fn, void *privdata);
+
+Scan API that allows a module to scan the elements in a hash, set or sorted
+set key.
+
+The callback for the scan implementation has this signature:
+
+    void scan_callback(RedisModuleKey *key, RedisModuleString *field,
+                       RedisModuleString *value, void *privdata);
+
+- key - the redis key context provided for the scan.
+- field - field name, owned by the caller and needs to be retained if used
+  after this function.
+- value - value string, or NULL for set type; owned by the caller and needs
+  to be retained if used after this function.
+- privdata - the user data provided to `RedisModule_ScanKey`.
+
+The way it should be used:
+
+    RedisModuleScanCursor *c = RedisModule_ScanCursorCreate();
+    RedisModuleKey *key = RedisModule_OpenKey(...);
+    while(RedisModule_ScanKey(key, c, callback, privateData));
+    RedisModule_CloseKey(key);
+    RedisModule_ScanCursorDestroy(c);
+
+It is also possible to use this API from another thread while the lock is
+acquired during the actual call to `RM_ScanKey`, re-opening the key each time:
+
+    RedisModuleScanCursor *c = RedisModule_ScanCursorCreate();
+    RedisModule_ThreadSafeContextLock(ctx);
+    RedisModuleKey *key = RedisModule_OpenKey(...);
+    while(RedisModule_ScanKey(key, c, callback, privateData)){
+        RedisModule_CloseKey(key);
+        RedisModule_ThreadSafeContextUnlock(ctx);
+        // do some background job
+        RedisModule_ThreadSafeContextLock(ctx);
+        key = RedisModule_OpenKey(...);
+    }
+    RedisModule_CloseKey(key);
+    RedisModule_ScanCursorDestroy(c);
+
+The function will return 1 if there are more elements to scan and 0 otherwise,
+possibly setting errno if the call failed.
+It is also possible to restart an existing cursor using `RM_ScanCursorRestart`.
+
+NOTE: Certain operations are unsafe while iterating the object. For instance,
+while the API guarantees to return at least one time all the elements that
+are present in the data structure consistently from the start to the end
+of the iteration (see HSCAN and similar commands documentation), the more
+you play with the elements, the more duplicates you may get. In general
+deleting the current element of the data structure is safe, while removing
+the key you are iterating is not safe.
+
+## `RedisModule_Fork`
+
+    int RedisModule_Fork(RedisModuleForkDoneHandler cb, void *user_data);
+
+Create a background child process with the current frozen snapshot of the
+main process, where you can do some processing in the background without
+affecting / freezing the traffic, and with no need for threads and GIL
+locking. Note that Redis allows for only one concurrent fork.
+When the child wants to exit, it should call `RedisModule_ExitFromChild`.
+If the parent wants to kill the child it should call `RedisModule_KillForkChild`.
+The done handler callback will be executed on the parent process when the
+child exits (but not when it is killed).
+Return: -1 on failure; on success the parent process will get a positive PID
+of the child, and the child process will get 0.
+
+## `RedisModule_ExitFromChild`
+
+    int RedisModule_ExitFromChild(int retcode);
+
+Call from the child process when you want to terminate it.
+retcode will be provided to the done handler executed on the parent process.
+
+## `RedisModule_KillForkChild`
+
+    int RedisModule_KillForkChild(int child_pid);
+
+Can be used to kill the forked child process from the parent process.
+child_pid should be the return value of `RedisModule_Fork`.
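+
+A minimal fork sketch combining the three calls above (what the child
+computes is illustrative, and the names are made up):
+
+    void ForkDone(int exitcode, int bysignal, void *user_data) {
+        REDISMODULE_NOT_USED(user_data);
+        /* Runs in the parent when the child exits on its own. */
+        RedisModule_Log(NULL, "notice",
+            "child done: exitcode=%d bysignal=%d", exitcode, bysignal);
+    }
+
+    /* ... somewhere in a command or timer handler: */
+    int pid = RedisModule_Fork(ForkDone, NULL);
+    if (pid == 0) {
+        /* Child: process the frozen snapshot, then exit. */
+        RedisModule_ExitFromChild(0);
+    } else if (pid == -1) {
+        /* Fork failed; handle the error. */
+    }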
+
+## `RedisModule_SubscribeToServerEvent`
+
+    int RedisModule_SubscribeToServerEvent(RedisModuleCtx *ctx, RedisModuleEvent event, RedisModuleEventCallback callback);
+
+Register to be notified, via a callback, when the specified server event
+happens. The callback is called with the event as argument, and an additional
+argument which is a void pointer that should be cast to a specific type
+that is event-specific (but many events will just use NULL since they do not
+have additional information to pass to the callback).
+
+If the callback is NULL and there was a previous subscription, the module
+will be unsubscribed. If there was a previous subscription and the callback
+is not null, the old callback will be replaced with the new one.
+
+The callback must be of this type:
+
+    int (*RedisModuleEventCallback)(RedisModuleCtx *ctx,
+                                    RedisModuleEvent eid,
+                                    uint64_t subevent,
+                                    void *data);
+
+The 'ctx' is a normal Redis module context that the callback can use in
+order to call other module APIs. The 'eid' is the event itself; this
+is only useful in the case the module subscribed to multiple events: using
+the 'id' field of this structure it is possible to check if the event
+is one of the events we registered with this callback. The 'subevent' field
+depends on the event that fired.
+
+Finally the 'data' pointer may be populated, only for certain events, with
+more relevant data.
+
+Here is a list of events you can use as 'eid' and related sub events:
+
+    RedisModuleEvent_ReplicationRoleChanged
+
+        This event is called when the instance switches from master
+        to replica or the other way around; however the event is
+        also called when the replica remains a replica but starts to
+        replicate with a different master.
+
+        The following sub events are available:
+
+            REDISMODULE_SUBEVENT_REPLROLECHANGED_NOW_MASTER
+            REDISMODULE_SUBEVENT_REPLROLECHANGED_NOW_REPLICA
+
+        The 'data' field can be cast by the callback to a
+        RedisModuleReplicationInfo structure with the following fields:
+
+            int master;       // true if master, false if replica
+            char *masterhost; // master instance hostname for NOW_REPLICA
+            int masterport;   // master instance port for NOW_REPLICA
+            char *replid1;    // Main replication ID
+            char *replid2;    // Secondary replication ID
+            uint64_t repl1_offset; // Main replication offset
+            uint64_t repl2_offset; // Offset of replid2 validity
+
+    RedisModuleEvent_Persistence
+
+        This event is called when RDB saving or AOF rewriting starts
+        and ends. The following sub events are available:
+
+            REDISMODULE_SUBEVENT_PERSISTENCE_RDB_START
+            REDISMODULE_SUBEVENT_PERSISTENCE_AOF_START
+            REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_RDB_START
+            REDISMODULE_SUBEVENT_PERSISTENCE_ENDED
+            REDISMODULE_SUBEVENT_PERSISTENCE_FAILED
+
+        The above events are triggered not just when the user calls the
+        relevant commands like BGSAVE, but also when a saving operation
+        or AOF rewriting occurs because of internal server triggers.
+        The SYNC_RDB_START sub event happens in the foreground due to the
+        SAVE command, FLUSHALL, or server shutdown, while the other RDB and
+        AOF sub events are executed in a background fork child, so any
+        action the module takes can only affect the generated AOF or RDB,
+        but will not be reflected in the parent process and affect connected
+        clients and commands. Also note that the AOF_START sub event may end
+        up saving RDB content in case of an AOF with rdb-preamble.
+
+    RedisModuleEvent_FlushDB
+
+        The FLUSHALL, FLUSHDB or an internal flush (for instance
+        because of replication, after the replica synchronization)
+        happened. The following sub events are available:
+
+            REDISMODULE_SUBEVENT_FLUSHDB_START
+            REDISMODULE_SUBEVENT_FLUSHDB_END
+
+        The data pointer can be cast to a RedisModuleFlushInfo
+        structure with the following fields:
+
+            int32_t async; // True if the flush is done in a thread.
+                           // See for instance FLUSHALL ASYNC.
+                           // In this case the END callback is invoked
+                           // immediately after the database is put
+                           // in the free list of the thread.
+            int32_t dbnum; // Flushed database number, -1 for all the DBs
+                           // in the case of the FLUSHALL operation.
+
+        The start event is called *before* the operation is initiated, thus
+        allowing the callback to call DBSIZE or other operations on the
+        yet-to-be-freed keyspace.
+
+    RedisModuleEvent_Loading
+
+        Called on loading operations: at startup when the server is
+        started, but also after a first synchronization when the
+        replica is loading the RDB file from the master.
+        The following sub events are available:
+
+            REDISMODULE_SUBEVENT_LOADING_RDB_START
+            REDISMODULE_SUBEVENT_LOADING_AOF_START
+            REDISMODULE_SUBEVENT_LOADING_REPL_START
+            REDISMODULE_SUBEVENT_LOADING_ENDED
+            REDISMODULE_SUBEVENT_LOADING_FAILED
+
+        Note that AOF loading may start with RDB data in case of
+        rdb-preamble, in which case you'll only receive an AOF_START event.
+
+    RedisModuleEvent_ClientChange
+
+        Called when a client connects or disconnects.
+        The data pointer can be cast to a RedisModuleClientInfo
+        structure, documented in RedisModule_GetClientInfoById().
+        The following sub events are available:
+
+            REDISMODULE_SUBEVENT_CLIENT_CHANGE_CONNECTED
+            REDISMODULE_SUBEVENT_CLIENT_CHANGE_DISCONNECTED
+
+    RedisModuleEvent_Shutdown
+
+        The server is shutting down. No sub events are available.
+
+    RedisModuleEvent_ReplicaChange
+
+        This event is called when the instance (that can be either a
+        master or a replica) gains a new online replica, or loses a
+        replica because it gets disconnected.
+        The following sub events are available:
+
+            REDISMODULE_SUBEVENT_REPLICA_CHANGE_ONLINE
+            REDISMODULE_SUBEVENT_REPLICA_CHANGE_OFFLINE
+
+        No additional information is available so far: future versions
+        of Redis will have an API in order to enumerate the replicas
+        connected and their state.
+
+    RedisModuleEvent_CronLoop
+
+        This event is called every time Redis calls the serverCron()
+        function in order to do certain bookkeeping. Modules that are
+        required to do operations from time to time may use this callback.
+        Normally Redis calls this function 10 times per second, but
+        this changes depending on the "hz" configuration.
+        No sub events are available.
+
+        The data pointer can be cast to a RedisModuleCronLoop
+        structure with the following fields:
+
+            int32_t hz; // Approximate number of events per second.
+
+    RedisModuleEvent_MasterLinkChange
+
+        This is called for replicas in order to notify when the
+        replication link becomes functional (up) with our master,
+        or when it goes down. Note that the link is not considered
+        up when we just connected to the master, but only if the
+        replication is happening correctly.
+        The following sub events are available:
+
+            REDISMODULE_SUBEVENT_MASTER_LINK_UP
+            REDISMODULE_SUBEVENT_MASTER_LINK_DOWN
+
+    RedisModuleEvent_ModuleChange
+
+        This event is called when a new module is loaded or one is unloaded.
+        The following sub events are available:
+
+            REDISMODULE_SUBEVENT_MODULE_LOADED
+            REDISMODULE_SUBEVENT_MODULE_UNLOADED
+
+        The data pointer can be cast to a RedisModuleModuleChange
+        structure with the following fields:
+
+            const char* module_name; // Name of module loaded or unloaded.
+            int32_t module_version;  // Module version.
+
+    RedisModuleEvent_LoadingProgress
+
+        This event is called repeatedly while an RDB or AOF file
+        is being loaded.
+        The following sub events are available:
+
+            REDISMODULE_SUBEVENT_LOADING_PROGRESS_RDB
+            REDISMODULE_SUBEVENT_LOADING_PROGRESS_AOF
+
+        The data pointer can be cast to a RedisModuleLoadingProgress
+        structure with the following fields:
+
+            int32_t hz;       // Approximate number of events per second.
+            int32_t progress; // Approximate progress between 0 and 1024,
+                              // or -1 if unknown.
+
+The function returns `REDISMODULE_OK` if the module was successfully subscribed
+for the specified event. If the API is called from a wrong context then
+`REDISMODULE_ERR` is returned.
+
+## `RedisModule_SetLRU`
+
+    int RedisModule_SetLRU(RedisModuleKey *key, mstime_t lru_idle);
+
+Set the key last access time for LRU based eviction. Not relevant if the
+server's maxmemory policy is LFU based. Value is idle time in milliseconds.
+Returns `REDISMODULE_OK` if the LRU was updated, `REDISMODULE_ERR` otherwise.
+
+## `RedisModule_GetLRU`
+
+    int RedisModule_GetLRU(RedisModuleKey *key, mstime_t *lru_idle);
+
+Gets the key last access time.
+Value is idle time in milliseconds or -1 if the server's eviction policy is
+LFU based.
+Returns `REDISMODULE_OK` when the key is valid.
+
+## `RedisModule_SetLFU`
+
+    int RedisModule_SetLFU(RedisModuleKey *key, long long lfu_freq);
+
+Set the key access frequency. Only relevant if the server's maxmemory policy
+is LFU based.
+The frequency is a logarithmic counter that provides an indication of
+the access frequency only (must be <= 255).
+Returns `REDISMODULE_OK` if the LFU was updated, `REDISMODULE_ERR` otherwise.
+
+## `RedisModule_GetLFU`
+
+    int RedisModule_GetLFU(RedisModuleKey *key, long long *lfu_freq);
+
+Gets the key access frequency or -1 if the server's eviction policy is not
+LFU based.
+Returns `REDISMODULE_OK` when the key is valid.
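+
+As an illustration, a command handler could refresh the idle time of keys it
+inspects, roughly as follows (the 60-second threshold is arbitrary, and error
+handling is minimal):
+
+    RedisModuleKey *key = RedisModule_OpenKey(ctx, keyname, REDISMODULE_READ);
+    mstime_t idle;
+    if (key && RedisModule_GetLRU(key, &idle) == REDISMODULE_OK && idle > 60000) {
+        /* Mark the key as just accessed so LRU eviction skips it for now. */
+        RedisModule_SetLRU(key, 0);
+    }
+    if (key) RedisModule_CloseKey(key);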
+
+## `RedisModule_ModuleTypeReplaceValue`
+
+    int RedisModule_ModuleTypeReplaceValue(RedisModuleKey *key, moduleType *mt, void *new_value, void **old_value);
+
+Replace the value assigned to a module type.
+
+The key must be open for writing, have an existing value, and have a moduleType
+that matches the one specified by the caller.
+
+Unlike `RM_ModuleTypeSetValue()` which will free the old value, this function
+simply swaps the old value with the new value.
+
+The function returns `REDISMODULE_OK` on success, `REDISMODULE_ERR` on errors
+such as:
+
+1. Key is not opened for writing.
+2. Key is not a module data type key.
+3. Key is a module data type other than 'mt'.
+
+If old_value is non-NULL, the old value is returned by reference.
+

From ee1d5e13a606695549f35678a868d27bd1480d00 Mon Sep 17 00:00:00 2001
From: ricosuave73
Date: Fri, 17 Jul 2020 00:12:36 +0200
Subject: [PATCH 0412/1457] Added Redis-COM-client to redis.io/clients (#1349)

* Added Redis-COM-client to redis.io/clients

* Removed name from authors field. Authors field should contain a Twitter
handle.
Co-authored-by: Erik Oosterwaal --- clients.json | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/clients.json b/clients.json index 81f1994c75..ba78a2ecaf 100644 --- a/clients.json +++ b/clients.json @@ -1813,5 +1813,16 @@ "authors": ["kristoff-it"], "recommended": true, "active": true + }, + + { + "name": "Redis COM client", + "language": "ActiveX/COM+", + "url": "https://gitlab.com/erik4/redis-com-client", + "repository": "https://gitlab.com/erik4/redis-com-client", + "description": "A COM wrapper for StackExchange.Redis that allows using Redis from a COM environment like Classic ASP (ASP 3.0) using vbscript, jscript or any other COM capable language.", + "authors": [], + "recommended": true, + "active": true } ] From 72d8284b75da00443aee2a9deac60af110d70744 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pau=20Alarc=C3=B3n?= <33580722+paualarco@users.noreply.github.com> Date: Sun, 19 Jul 2020 13:45:10 +0200 Subject: [PATCH 0413/1457] Adds monix redis repo (#1352) * Adds monix redis repo * Update clients.json * Update clients.json --- clients.json | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/clients.json b/clients.json index ba78a2ecaf..beef3890b5 100644 --- a/clients.json +++ b/clients.json @@ -591,6 +591,16 @@ "authors": ["vitaliykhamin"] }, + { + "name": "monix-connect", + "language": "Scala", + "url": "https://monix.github.io/monix-connect/docs/redis", + "repository": "https://github.com/monix/monix-connect", + "description": "Monix integration with Redis", + "authors": ["paualarco"], + "active": true + }, + { "name": "laserdisc", "language": "Scala", From 82b844481c4cb4433696f0f949145909eb67d4f0 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Sun, 19 Jul 2020 16:07:32 +0300 Subject: [PATCH 0414/1457] Updates `INFO` with v6 (#1236) --- commands/info.md | 30 ++++++++++++++++++++---------- 1 file changed, 20 insertions(+), 10 deletions(-) diff --git a/commands/info.md b/commands/info.md index efafc40955..d31c5a3705 100644 --- a/commands/info.md +++ b/commands/info.md @@ -12,6 +12,7 @@ The optional parameter can be used to select a specific section of information: * `cpu`: CPU consumption statistics * `commandstats`: Redis command statistics * `cluster`: Redis Cluster section +* `modules`: Modules section * `keyspace`: Database related statistics It can also take the following values: @@ -60,7 +61,8 @@ Here is the meaning of all fields in the **server** section: * `tcp_port`: TCP/IP listen port * `uptime_in_seconds`: Number of seconds since Redis server start * `uptime_in_days`: Same value expressed in days -* `hz`: The server's frequency setting +* `hz`: The server's current frequency setting +* `configured_hz`: The server's configured frequency setting * `lru_clock`: Clock incrementing every minute, for LRU management * `executable`: The path to the server's executable * `config_file`: The path to the config file @@ -69,13 +71,13 @@ Here is the meaning of all fields in the **clients** section: * `connected_clients`: Number of client connections (excluding connections from replicas) -* `client_longest_output_list`: longest output list among current client +* `client_longest_output_list`: Longest output list among current client connections -* `client_biggest_input_buf`: biggest input buffer among current client +* `client_biggest_input_buf`: Biggest input buffer among current client connections -* `blocked_clients`: Number of clients pending on a blocking call (BLPOP, - BRPOP, BRPOPLPUSH) -* `tracking_clients`: Number of clients 
with tracking enabled +* `blocked_clients`: Number of clients pending on a blocking call (`BLPOP`, + `BRPOP`, `BRPOPLPUSH`, `BZPOPMIN`, `BZPOPMAX`) +* `tracking_clients`: Number of clients being tracked (`CLIENT TRACKING`) * `clients_in_timeout_table`: Number of clients in the clients timeout table Here is the meaning of all fields in the **memory** section: @@ -133,7 +135,7 @@ When Redis frees memory, the memory is given back to the allocator, and the allocator may or may not give the memory back to the system. There may be a discrepancy between the `used_memory` value and memory consumption as reported by the operating system. It may be due to the fact memory has been -used and released by Redis, but not given back to the system. The +used and released by Redis, but not given back to the system. The `used_memory_peak` value is generally useful to check this point. Additional introspective information about the server's memory can be obtained @@ -165,8 +167,11 @@ Here is the meaning of all fields in the **persistence** section: * `aof_last_write_status`: Status of the last write operation to the AOF * `aof_last_cow_size`: The size in bytes of copy-on-write allocations during the last AOF rewrite operation +* `module_fork_in_progress`: Flag indicating a module fork is on-going +* `module_fork_last_cow_size`: The size in bytes of copy-on-write allocations + during the last module fork operation -`changes_since_last_save` refers to the number of operations that produced +`rdb_changes_since_last_save` refers to the number of operations that produced some kind of changes in the dataset since the last time either `SAVE` or `BGSAVE` was called. @@ -207,6 +212,9 @@ Here is the meaning of all fields in the **stats** section: * `sync_partial_ok`: The number of accepted partial resync requests * `sync_partial_err`: The number of denied partial resync requests * `expired_keys`: Total number of key expiration events +* `expired_stale_perc`: The percentage of keys probably expired +* `expired_time_cap_reached_count`: The count of times that active expiry cycles have stopped early +* `expire_cycle_cpu_milliseconds`: The cumulative amount of time spend on active expiry cycles * `evicted_keys`: Number of evicted keys due to `maxmemory` limit * `keyspace_hits`: Number of successful lookup of keys in the main dictionary * `keyspace_misses`: Number of failed lookup of keys in the main dictionary @@ -255,7 +263,7 @@ If the instance is a replica, these additional fields are provided: * `master_link_status`: Status of the link (up/down) * `master_last_io_seconds_ago`: Number of seconds since the last interaction with master -* `master_sync_in_progress`: Indicate the master is syncing to the replica +* `master_sync_in_progress`: Indicate the master is syncing to the replica * `slave_repl_offset`: The replication offset of the replica instance * `slave_priority`: The priority of the instance as a candidate for failover * `slave_read_only`: Flag indicating if the replica is read-only @@ -276,7 +284,7 @@ The following field is always provided: If the server is configured with the `min-slaves-to-write` (or starting with Redis 5 with the `min-replicas-to-write`) directive, an additional field is provided: -* `min_slaves_good_slaves`: Number of replicas currently considered good +* `min_slaves_good_slaves`: Number of replicas currently considered good For each replica, the following line is added: @@ -301,6 +309,8 @@ The **cluster** section currently only contains a unique field: * `cluster_enabled`: Indicate Redis 
cluster is enabled +The **modules** section contains additional information about loaded modules if the modules provide it. The field part of properties lines in this section is always prefixed with the module's name. + The **keyspace** section provides statistics on the main dictionary of each database. The statistics are the number of keys, and the number of keys with an expiration. From 02423fd2f5603ae300654613a51eaee13bc5cb80 Mon Sep 17 00:00:00 2001 From: Oran Agra Date: Mon, 20 Jul 2020 21:55:02 +0300 Subject: [PATCH 0415/1457] Add INFO MODULES and EVERYTHING (#1250) Co-authored-by: Itamar Haber --- commands/info.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/commands/info.md b/commands/info.md index d31c5a3705..0919ccdc4e 100644 --- a/commands/info.md +++ b/commands/info.md @@ -14,11 +14,13 @@ The optional parameter can be used to select a specific section of information: * `cluster`: Redis Cluster section * `modules`: Modules section * `keyspace`: Database related statistics +* `modules`: Module related sections It can also take the following values: -* `all`: Return all sections +* `all`: Return all sections (excluding module generated ones) * `default`: Return only the default set of sections +* `everything`: Includes `all` and `modules` When no parameter is provided, the `default` option is assumed. @@ -322,3 +324,5 @@ For each database, the following line is added: [hcgcpgp]: http://code.google.com/p/google-perftools/ **A note about the word slave used in this man page**: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated. + +**Modules generated sections**: Starting with Redis 6, modules can inject their info into the `INFO` command, these are excluded by default even when the `all` argument is provided (it will include a list of loaded modules but not their generated info fields). To get these you must use either the `modules` argument or `everything`., From c748e72a9eef88fefc2a37574399bc31cbce8f3e Mon Sep 17 00:00:00 2001 From: Raza ellahi <48081394+razaellahi@users.noreply.github.com> Date: Tue, 21 Jul 2020 19:01:25 +0500 Subject: [PATCH 0416/1457] added a new redis client for node (#1354) * add a new redis client for node * Update clients.json Co-authored-by: Itamar Haber Co-authored-by: Itamar Haber --- clients.json | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/clients.json b/clients.json index beef3890b5..2423a8376f 100644 --- a/clients.json +++ b/clients.json @@ -775,7 +775,17 @@ "authors": [], "active": true }, - + + { + "name": "xredis", + "language": "Node.js", + "repository": "https://github.com/razaellahi/xredis", + "description": "Redis client with redis ACL features", + "authors": [], + "recommended": true, + "active": true + }, + { "name": "node_redis", "language": "Node.js", From 9cc7e624bfa89925764ecbd6ad30dbe122f71d69 Mon Sep 17 00:00:00 2001 From: Luke Volpatti Date: Tue, 21 Jul 2020 13:38:14 -0400 Subject: [PATCH 0417/1457] Typo fix (#1356) --- topics/indexes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/indexes.md b/topics/indexes.md index 410cbdb597..f2ef46b332 100644 --- a/topics/indexes.md +++ b/topics/indexes.md @@ -101,7 +101,7 @@ Updating simple sorted set indexes Often we index things which change over time. 
In the above example, the age of the user changes every year. In such a case it would make sense to use the birth date as index instead of the age itself, -but there are other cases where we simple want some field to change from +but there are other cases where we simply want some field to change from time to time, and the index to reflect this change. The `ZADD` command makes updating simple indexes a very trivial operation From 74509fecc9fa0721e2bca95db065a211d3d1eef7 Mon Sep 17 00:00:00 2001 From: Wen Hui Date: Wed, 22 Jul 2020 07:02:33 -0400 Subject: [PATCH 0418/1457] fix expire command in keyspace event doc (#1355) --- topics/notifications.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/notifications.md b/topics/notifications.md index c019ca5cd0..ae17b0973b 100644 --- a/topics/notifications.md +++ b/topics/notifications.md @@ -101,7 +101,7 @@ Different commands generate different kind of events according to the following * `MOVE` generates two events, a `move_from` event for the source key, and a `move_to` event for the destination key. * `MIGRATE` generates a `del` event if the source key is removed. * `RESTORE` generates a `restore` event for the key. -* `EXPIRE` generates an `expire` event when an expire is set to the key, or an `expired` event every time a positive timeout set on a key results into the key being deleted (see `EXPIRE` documentation for more info). +* `EXPIRE` and all its variants (`PEXPIRE`, `EXPIREAT`, `PEXPIREAT`) generate an `expire` event when called with a positive timeout (or a future timestamp). Note that when these commands are called with a negative timeout value or timestamp in the past, the key is deleted and only a `del` event is generated instead. * `SORT` generates a `sortstore` event when `STORE` is used to set a new key. If the resulting list is empty, and the `STORE` option is used, and there was already an existing key with that name, the result is that the key is deleted, so a `del` event is generated in this condition. * `SET` and all its variants (`SETEX`, `SETNX`,`GETSET`) generate `set` events. However `SETEX` will also generate an `expire` events. * `MSET` generates a separate `set` event for every key. From 32ae8d3dc3921f90d31de3714a220180a212b6d4 Mon Sep 17 00:00:00 2001 From: Raza ellahi <48081394+razaellahi@users.noreply.github.com> Date: Wed, 22 Jul 2020 17:19:09 +0500 Subject: [PATCH 0419/1457] added my twitter account (#1357) --- clients.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 2423a8376f..d625602f9a 100644 --- a/clients.json +++ b/clients.json @@ -781,7 +781,7 @@ "language": "Node.js", "repository": "https://github.com/razaellahi/xredis", "description": "Redis client with redis ACL features", - "authors": [], + "authors": ["razaellahi531"], "recommended": true, "active": true }, From 74b605145d042970d30c30da72f61a1ab5c5823c Mon Sep 17 00:00:00 2001 From: laixintao Date: Wed, 29 Jul 2020 00:25:14 +0800 Subject: [PATCH 0420/1457] bugfix: syntax error in lpos' doc. element is missing. 
(#1359)
---
 commands/lpos.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/commands/lpos.md b/commands/lpos.md
index da5cd47b84..bb6bc163b4 100644
--- a/commands/lpos.md
+++ b/commands/lpos.md
@@ -47,7 +47,7 @@ We can combine `COUNT` and `RANK`, so that `COUNT` will try to return up to the
 When `COUNT` is used, it is possible to specify 0 as the number of matches, as a way to tell the command we want all the matches found returned as an array of indexes. This is better than giving a very large `COUNT` option because it is more general.
 
 ```
-> LPOS mylist COUNT 0
+> LPOS mylist c COUNT 0
 [2,6,7]
 ```
 
From fa77e399cdf3fdfd4484fcbee23499ddb2877b26 Mon Sep 17 00:00:00 2001
From: Itamar Haber
Date: Wed, 29 Jul 2020 15:40:40 +0300
Subject: [PATCH 0421/1457] Adds redis-cli preferences (#1358)

* Adds redis-cli preferences

* Update rediscli.md
---
 topics/rediscli.md | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/topics/rediscli.md b/topics/rediscli.md
index e42800f6b4..26b2535e64 100644
--- a/topics/rediscli.md
+++ b/topics/rediscli.md
@@ -323,7 +323,7 @@ were in the middle of it:
 This is usually not an issue when using the CLI in interactive mode for
 testing, but you should be aware of this limitation.
 
-## Editing, history and completion
+## Editing, history, completion and hints
 
 Because `redis-cli` uses the [linenoise line editing library](http://github.com/antirez/linenoise), it
@@ -345,6 +345,23 @@ key, like in the following example:
 
     127.0.0.1:6379> ZADD
     127.0.0.1:6379> ZCARD
 
+Once you've typed a Redis command name at the prompt, the CLI will display
+syntax hints. This behavior can be turned on and off via the CLI preferences.
+
+## Preferences
+
+There are two ways to customize the CLI's behavior. The file `.redisclirc`
+in your home directory is loaded by the CLI on startup. Preferences can also
+be set during a CLI session, in which case they will last only for the
+duration of the session.
+
+To set preferences, use the special `:set` command.
The following preferences +can be set, either by typing the command in the CLI or adding it to the +`.redisclirc` file: + +* `:set hints` - enables syntax hints +* `:set nohints` - disables syntax hints + ## Running the same command N times It's possible to run the same command multiple times by prefixing the command From 789e9374a14d13a976e6dae196d1280b5169a323 Mon Sep 17 00:00:00 2001 From: Arun Ranganathan Date: Wed, 29 Jul 2020 08:42:18 -0400 Subject: [PATCH 0422/1457] Show theading configuration in INFO output - https://github.com/redis/redis/pull/7446 (#1353) --- commands/info.md | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/commands/info.md b/commands/info.md index 0919ccdc4e..18202face6 100644 --- a/commands/info.md +++ b/commands/info.md @@ -81,6 +81,7 @@ Here is the meaning of all fields in the **clients** section: `BRPOP`, `BRPOPLPUSH`, `BZPOPMIN`, `BZPOPMAX`) * `tracking_clients`: Number of clients being tracked (`CLIENT TRACKING`) * `clients_in_timeout_table`: Number of clients in the clients timeout table +* `io_threads_active`: Flag indicating if I/O threads are active Here is the meaning of all fields in the **memory** section: @@ -241,7 +242,11 @@ Here is the meaning of all fields in the **stats** section: * `tracking_total_prefixes`: Number of tracked prefixes in server's prefix table (only applicable for broadcast mode) * `unexpected_error_replies`: Number of unexpected error replies, that are types - of errors from an AOF load or replication + of errors from an AOF load or replication +* `total_reads_processed`: Total number of read events processed +* `total_writes_processed`: Total number of write events processed +* `io_threaded_reads_processed`: Number of read events processed by the main and I/O threads +* `io_threaded_writes_processed`: Number of write events processed by the main and I/O threads Here is the meaning of all fields in the **replication** section: From fd5e73fd3f084a2ab331bbd254ee138c274d3152 Mon Sep 17 00:00:00 2001 From: Pepelev Alexey Date: Thu, 30 Jul 2020 22:49:38 +0500 Subject: [PATCH 0423/1457] adds Rediska to clients.json (#1360) * adds Rediska to clients.json Co-authored-by: pepelev Co-authored-by: Itamar Haber --- clients.json | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/clients.json b/clients.json index d625602f9a..2bb65dc7b6 100644 --- a/clients.json +++ b/clients.json @@ -722,6 +722,16 @@ "authors": ["chakrit"] }, + { + "name": "Rediska", + "language": "C#", + "url": "https://github.com/pepelev/Rediska", + "description": "Rediska is a Redis client for .NET with a focus on flexibility and extensibility.", + "authors": [], + "recommended": false, + "active": true + }, + { "name": "DartRedisClient", "language": "Dart", From 1287ba0c3fc7b4da9c2189583f4892803b310a4e Mon Sep 17 00:00:00 2001 From: johndelcastillo <49706593+johndelcastillo@users.noreply.github.com> Date: Sat, 1 Aug 2020 23:41:29 +1000 Subject: [PATCH 0424/1457] Fix missing directive (#1269) --- topics/cluster-tutorial.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index b42f8fc0e1..a48973259b 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -610,7 +610,7 @@ Resharding can be performed automatically without the need to manually enter the parameters in an interactive way. 
This is possible using a command line like the following: - redis-cli reshard : --cluster-from --cluster-to --cluster-slots --cluster-yes + redis-cli --cluster reshard : --cluster-from --cluster-to --cluster-slots --cluster-yes This allows to build some automatism if you are likely to reshard often, however currently there is no way for `redis-cli` to automatically From 7b6a8ca794aad95d751111b909f60cf7c99ea483 Mon Sep 17 00:00:00 2001 From: Jim Brunner Date: Sat, 1 Aug 2020 06:46:08 -0700 Subject: [PATCH 0425/1457] Documentation for Module OnUnload hook (#1096) --- topics/modules-intro.md | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/topics/modules-intro.md b/topics/modules-intro.md index 8f874dd3ce..fdc8586fea 100644 --- a/topics/modules-intro.md +++ b/topics/modules-intro.md @@ -149,6 +149,24 @@ Zooming into the example command implementation, we can find another call: This function returns an integer to the client that invoked the command, exactly like other Redis commands do, like for example `INCR` or `SCARD`. +# Module cleanup + +In most cases, there is no need for special cleanup. +When a module is unloaded, Redis will automatically unregister commands and +unsubscribe from notifications. +However in the case where a module contains some persistent memory or +configuration, a module may include an optional `RedisModule_OnUnload` +function. +If a module provides this function, it will be invoked during the module unload +process. +The following is the function prototype: + + int RedisModule_OnUnload(RedisModuleCtx *ctx); + +The `OnUnload` function may prevent module unloading by returning +`REDISMODULE_ERR`. +Otherwise, `REDISMODULE_OK` should be returned. + # Setup and dependencies of a Redis module Redis modules don't depend on Redis or some other library, nor they From c2ed3ef0639d2abe296c1b1096ec8e58fd3470e5 Mon Sep 17 00:00:00 2001 From: Zhao Lang Date: Sun, 2 Aug 2020 09:58:34 -0400 Subject: [PATCH 0426/1457] Update modules.json (#1362) add redis_hnsw --- modules.json | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/modules.json b/modules.json index 7a7022da72..cca9ce7669 100644 --- a/modules.json +++ b/modules.json @@ -328,5 +328,15 @@ "starkdg" ], "stars":1 - } + }, + { + "name": "redis_hnsw", + "license": "MIT", + "repository": "https://github.com/zhao-lang/redis_hnsw", + "description": "Redis module for Hierarchical Navigable Small World (HNSW) approxmiate nearest neighbor search", + "authors": [ + "zhao-lang" + ], + "stars":0 + } ] From 3939c55755f0c771b8260ff945db5dbdf756dab5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Andr=C3=A9=20Srinivasan?= Date: Tue, 4 Aug 2020 14:30:33 -0700 Subject: [PATCH 0427/1457] Corrected reference to 6.0 redis.conf (#1363) --- commands/config-set.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/config-set.md b/commands/config-set.md index a3c3769a10..fbce876976 100644 --- a/commands/config-set.md +++ b/commands/config-set.md @@ -14,7 +14,7 @@ All the supported parameters have the same meaning of the equivalent configuration parameter used in the [redis.conf][hgcarr22rc] file, with the following important differences: -[hgcarr22rc]: http://github.com/redis/redis/raw/2.8/redis.conf +[hgcarr22rc]: http://github.com/redis/redis/raw/6.0/redis.conf * In options where bytes or other quantities are specified, it is not possible to use the `redis.conf` abbreviated form (`10k`, `2gb` ... 
and so forth), From 3e750a58450ec1f79079897ccf7561d800299cc6 Mon Sep 17 00:00:00 2001 From: sewenew Date: Sat, 8 Aug 2020 21:07:45 +0800 Subject: [PATCH 0428/1457] update description of redis-plus-plus and mark it as recommended (#1365) --- clients.json | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 2bb65dc7b6..39fb76babd 100644 --- a/clients.json +++ b/clients.json @@ -1775,8 +1775,9 @@ "name": "redis-plus-plus", "language": "C++", "repository": "https://github.com/sewenew/redis-plus-plus", - "description": "This is a Redis client, based on hiredis and written in C++11. It supports scritpting, pub/sub, pipeline, transaction, Redis Cluster, connection pool and thread safety.", + "description": "This is a Redis client, based on hiredis and written in C++11. It supports scritpting, pub/sub, pipeline, transaction, Redis Cluster, Redis Sentinel, connection pool, ACL, SSL and thread safety.", "authors": ["sewenew"], + "recommended": true, "active": true }, From 26738286bc0a762532ad58364bcb50e48e50fa5f Mon Sep 17 00:00:00 2001 From: Romuald Brunet Date: Wed, 12 Aug 2020 13:52:54 +0200 Subject: [PATCH 0429/1457] Fix typo/spelling in DUMP documentation (#1366) "to serialized it" -> "to serialize it" --- commands.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands.json b/commands.json index 46de13849e..3a57b9ecd2 100644 --- a/commands.json +++ b/commands.json @@ -1002,7 +1002,7 @@ }, "DUMP": { "summary": "Return a serialized version of the value stored at the specified key.", - "complexity": "O(1) to access the key and additional O(N*M) to serialized it, where N is the number of Redis objects composing the value and M their average size. For small string values the time complexity is thus O(1)+O(1*M) where M is small, so simply O(1).", + "complexity": "O(1) to access the key and additional O(N*M) to serialize it, where N is the number of Redis objects composing the value and M their average size. For small string values the time complexity is thus O(1)+O(1*M) where M is small, so simply O(1).", "arguments": [ { "name": "key", From c7757ea8df67e4c632d842d95e08392ceacaafa8 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Wed, 12 Aug 2020 17:02:11 +0300 Subject: [PATCH 0430/1457] Adds a note about cluster keyspace notifications (#1367) And some minor edits --- topics/notifications.md | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/topics/notifications.md b/topics/notifications.md index ae17b0973b..b84a774d11 100644 --- a/topics/notifications.md +++ b/topics/notifications.md @@ -9,7 +9,7 @@ Feature overview Keyspace notifications allow clients to subscribe to Pub/Sub channels in order to receive events affecting the Redis data set in some way. -Examples of the events that are possible to receive are the following: +Examples of events that can be received are: * All the commands affecting a given key. * All the keys receiving an LPUSH operation. @@ -168,7 +168,7 @@ Timing of expired events Keys with a time to live associated are expired by Redis in two ways: * When the key is accessed by a command and is found to be expired. -* Via a background system that looks for expired keys in background, incrementally, in order to be able to also collect keys that are never accessed. +* Via a background system that looks for expired keys in the background, incrementally, in order to be able to also collect keys that are never accessed. 
The `expired` events are generated when a key is accessed and is found to be expired by one of the above systems, as a result there are no guarantees that the Redis server will be able to generate the `expired` event at the time the key time to live reaches the value of zero. @@ -176,6 +176,11 @@ If no command targets the key constantly, and there are many keys with a TTL ass Basically `expired` events **are generated when the Redis server deletes the key** and not when the time to live theoretically reaches the value of zero. +Events in a cluster +--- + +Every node of a Redis cluster generates events about its own subset of the keyspace as described above. However, unlike regular Pub/Sub communication in a cluster, events' notifications **are not** broadcasted to all nodes. Put differently, keyspace events are node-specific. This means that to receive all keyspace events of a cluster, clients need to subscribe to each of the nodes. + @history * `>= 6.0`: Key miss events were added. From 86cd1b3dc4e6e0a1514e198859a3af3645180ece Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Thu, 20 Aug 2020 17:15:10 +0300 Subject: [PATCH 0431/1457] Makes SET's KEEPTTL an enum with EX/PX Fixes #1368 --- commands.json | 11 ++--------- 1 file changed, 2 insertions(+), 9 deletions(-) diff --git a/commands.json b/commands.json index 3a57b9ecd2..040bb9f77f 100644 --- a/commands.json +++ b/commands.json @@ -2794,7 +2794,8 @@ "type": "enum", "enum": [ "EX seconds", - "PX milliseconds" + "PX milliseconds", + "KEEPTTL" ], "optional": true }, @@ -2806,14 +2807,6 @@ "XX" ], "optional": true - }, - { - "name": "keepttl", - "type": "enum", - "enum": [ - "KEEPTTL" - ], - "optional": true } ], "since": "1.0.0", From 5c5e7169c5380bad9f65d9563bd1716097f1aeda Mon Sep 17 00:00:00 2001 From: MichalCho <26363814+MichalCho@users.noreply.github.com> Date: Fri, 21 Aug 2020 13:54:19 +0200 Subject: [PATCH 0432/1457] Correct discard policy description (#1370) Discarding the oldest item is first-in first-out policy. First in last-out discards the newest entries first. Co-authored-by: Michal Cholewa --- topics/client-side-caching.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/client-side-caching.md b/topics/client-side-caching.md index b4e65224e3..8bc821fe24 100644 --- a/topics/client-side-caching.md +++ b/topics/client-side-caching.md @@ -195,7 +195,7 @@ command is tracked by the server, because it *could be cached*. This has the obvious advantage of not requiring the client to tell the server what it is caching. Moreover in many clients implementations, this is what you want, because a good solution could be to just cache everything that is not -already cached, using a first-in last-out approach: we may want to cache a +already cached, using a first-in first-out approach: we may want to cache a fixed number of objects, every new data we retrieve, we could cache it, discarding the oldest cached object. More advanced implementations may instead drop the least used object or alike. 
From 80d22b22408063d5b0dd02347ac03c2a7e70a0b3 Mon Sep 17 00:00:00 2001 From: filipe oliveira Date: Sat, 22 Aug 2020 15:20:20 +0100 Subject: [PATCH 0433/1457] Added performance considerations on TLS support (#1372) * Added performance considerations on TLS support --- topics/encryption.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/topics/encryption.md b/topics/encryption.md index d2a77af564..81102d5ab9 100644 --- a/topics/encryption.md +++ b/topics/encryption.md @@ -114,6 +114,11 @@ Additional TLS configuration is available to control the choice of TLS protocol versions, ciphers and cipher suites, etc. Please consult the self documented `redis.conf` for more information. +Performance Considerations +--- + +TLS adds a layer to the communication stack with overheads due to writing/reading to/from an SSL connection, encryption/decryption and integrity checks. Consequently, using TLS results in a decrease of the achievable throughput per Redis instance (for more information refer to this [discussion](https://github.com/redis/redis/issues/7595)). + Limitations --- From 644a04d988eff2c6f2a54a7f9780e65993fcddcb Mon Sep 17 00:00:00 2001 From: Zach <64266233+zachary-samsel@users.noreply.github.com> Date: Sat, 22 Aug 2020 13:02:18 -0400 Subject: [PATCH 0434/1457] Added Redis Connector for Dell Boomi client to clients.json (#1371) Co-authored-by: zmsl --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index 39fb76babd..569ad2556b 100644 --- a/clients.json +++ b/clients.json @@ -1855,5 +1855,14 @@ "authors": [], "recommended": true, "active": true + }, + + { + "name": "Redis Connector for Dell Boomi", + "language": "Boomi", + "repository": "https://github.com/zachary-samsel/boomi-redis-connector", + "description": "A custom connector for Dell Boomi that utilizes the lettuce.io Java client to add Redis client support to the Dell Boomi iPaaS.", + "authors": [], + "active": true } ] From 421eba2f174eced230e35559839a14b5de043539 Mon Sep 17 00:00:00 2001 From: D G Starkweather Date: Tue, 25 Aug 2020 09:00:26 -0400 Subject: [PATCH 0435/1457] add Redis-ImageScout module (#1375) Co-authored-by: David Starkweather --- modules.json | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/modules.json b/modules.json index cca9ce7669..6ffff9adfd 100644 --- a/modules.json +++ b/modules.json @@ -297,7 +297,7 @@ "authors": [ "starkdg" ], - "stars": 13 + "stars": 14 }, { "name": "redismodule-ratelimit", @@ -338,5 +338,14 @@ "zhao-lang" ], "stars":0 - } + }, + { "name":"Redis-ImageScout", + "license": "pHash Redis Source Available License", + "repository": "https://github.com/starkdg/Redis-ImageScout.git", + "description": "Redis module for Indexing of pHash Image fingerprints for Near-Duplicate Detection", + "authors": [ + "starkdg" + ], + "stars":2 + } ] From a1eeeab9c22b23d3c28706aeb246f8b63be67815 Mon Sep 17 00:00:00 2001 From: Matt Westcott Date: Wed, 26 Aug 2020 16:02:55 +0100 Subject: [PATCH 0436/1457] Add Runnel to tools.json (#1376) --- tools.json | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/tools.json b/tools.json index 664edb372c..c5c47a0bb5 100644 --- a/tools.json +++ b/tools.json @@ -738,6 +738,12 @@ "repository": "https://github.com/pbotros/river", "description": "A structured streaming framework built atop Redis Streams with built-in support for persistence and indefinitely long streams.", "authors": [] + }, + { + 
"name": "Runnel", + "language": "Python", + "repository": "https://github.com/mjwestcott/runnel", + "description": "Distributed event processing for Python based on Redis Streams", + "authors": ["mjwestcott"] } - ] From 27498c90a22487fccc1171d934950aa686ef5889 Mon Sep 17 00:00:00 2001 From: zenyu Date: Sat, 29 Aug 2020 21:10:14 +0800 Subject: [PATCH 0437/1457] Fix a few typos in client-side-caching.md (#1379) --- topics/client-side-caching.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/topics/client-side-caching.md b/topics/client-side-caching.md index 8bc821fe24..5dd98cb56e 100644 --- a/topics/client-side-caching.md +++ b/topics/client-side-caching.md @@ -177,7 +177,7 @@ are groups of keys to invalidate, we can do that in a single message. A very important thing to understand about client side caching used with RESP2, and a Pub/Sub connection in order to read the invalidation messages, is that using Pub/Sub is entirely a trick **in order to reuse old client -implementations**, but actually the message is not really sent a channel +implementations**, but actually the message is not really sent to a channel and received by all the clients subscribed to it. Only the connection we specified in the `REDIRECT` argument of the `CLIENT` command will actually receive the Pub/Sub message, making the feature a lot more scalable. @@ -242,7 +242,7 @@ point of view of a different tradeoff, does not consume any memory on the server side, but instead sends more invalidation messages to clients. In this mode we have the following main behaviors: -* Clients enable client side caching using the `BCAST` option, specifying one or more prefixes using the `PREFIX` option. For instance: `CLIENT TRACKING on REDIRECT 10 BCAST PREFIX object: PREFIX user:`. If no prefix is specified at all, the prefix is assumed to be the empty string, so the client will receive invalidation messages for every key that gets modified. Instead if one or more prefixes are used, only keys matching the one of the specified prefixes will be send in the invalidation messages. +* Clients enable client side caching using the `BCAST` option, specifying one or more prefixes using the `PREFIX` option. For instance: `CLIENT TRACKING on REDIRECT 10 BCAST PREFIX object: PREFIX user:`. If no prefix is specified at all, the prefix is assumed to be the empty string, so the client will receive invalidation messages for every key that gets modified. Instead if one or more prefixes are used, only keys matching the one of the specified prefixes will be sent in the invalidation messages. * The server does not store anything in the invalidation table. Instead it only uses a different **Prefixes Table**, where each prefix is associated to a list of clients. * Every time a key matching any of the prefixes is modified, all the clients subscribed to such prefix, will receive the invalidation message. * The server will consume a CPU proportional to the number of registered prefixes. If you have just a few, it is hard to see any difference. With a big number of prefixes the CPU cost can become quite large. @@ -306,8 +306,8 @@ Clients may want to run an internal statistics about the amount of times a given cached key was actually served in a request, to understand in the future what is good to cache. In general: -* We don't want to cache much keys that change continuously. -* We don't want to cache much keys that are requested very rarely. +* We don't want to cache many keys that change continuously. 
+* We don't want to cache many keys that are requested very rarely.
 * We want to cache keys that are requested often and change at a reasonable
 rate. For an example of key not changing at a reasonable rate, think at
 a global counter that is continuously `INCR`emented.
 
 However simpler clients may just evict data using some random sampling just
@@ -322,5 +322,5 @@ keys that were not served recently.
 
 ## Limiting the amount of memory used by Redis
 
-Just make sure to configure a suitable value for the maxmimum number of keys remembered by Redis, or alternatively use the BCAST mode that consumes no memory at all in the Redis side. Note that the memory consumed by Redis when BCAST is not used, is proportional both to the number of keys tracked, and the number of clients requested such keys.
+Just make sure to configure a suitable value for the maximum number of keys remembered by Redis, or alternatively use the BCAST mode that consumes no memory at all on the Redis side. Note that the memory consumed by Redis when BCAST is not used, is proportional both to the number of keys tracked, and the number of clients requesting such keys.

From 0a14f1277fc723d6d06057b5b1c29764983dd800 Mon Sep 17 00:00:00 2001
From: haadj
Date: Mon, 31 Aug 2020 14:54:31 +0200
Subject: [PATCH 0438/1457] @haadj Expand OOM abbreviation (#1380)

---
 commands/command.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/commands/command.md b/commands/command.md
index 88c0ef5a7a..d6d2152691 100644
--- a/commands/command.md
+++ b/commands/command.md
@@ -70,7 +70,7 @@ Command flags is @array-reply containing one or more status replies:
 
   - *write* - command may result in modifications
   - *readonly* - command will never modify keys
-  - *denyoom* - reject command if currently OOM
+  - *denyoom* - reject command if currently out of memory
   - *admin* - server admin command
   - *pubsub* - pubsub-related command
   - *noscript* - deny this command from scripts

From 2c335a8226789afae2b5a2ebbd689933b0a5e01a Mon Sep 17 00:00:00 2001
From: Oran Agra
Date: Mon, 31 Aug 2020 21:08:48 +0300
Subject: [PATCH 0439/1457] Redis 6.0.7 - RedisModule_HoldString. (#1378)

---
 topics/modules-api-ref.md | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/topics/modules-api-ref.md b/topics/modules-api-ref.md
index 69352ff691..8089710067 100644
--- a/topics/modules-api-ref.md
+++ b/topics/modules-api-ref.md
@@ -330,6 +330,31 @@ no FreeString() call is performed. It is possible to call this
 function with a NULL context.
 
+## `RedisModule_HoldString`
+
+    RedisModuleString* RedisModule_HoldString(RedisModuleCtx *ctx, RedisModuleString *str);
+
+/**
+* This function can be used instead of `RedisModule_RetainString()`.
+* The main difference between the two is that this function will always
+* succeed, whereas `RedisModule_RetainString()` may fail because of an
+* assertion.
+*
+* The function returns a pointer to RedisModuleString, which is owned
+* by the caller. It requires a call to `RedisModule_FreeString()` to free
+* the string when automatic memory management is disabled for the context.
+* When automatic memory management is enabled, you can either call
+* `RedisModule_FreeString()` or let the automation free it.
+*
+* This function is more efficient than `RedisModule_CreateStringFromString()`
+* because whenever possible, it avoids copying the underlying
+* RedisModuleString. The disadvantage of using this function is that it
+* might not be possible to use `RedisModule_StringAppendBuffer()` on the
+* returned RedisModuleString.
+*
+* It is possible to call this function with a NULL context.
+*/
+
 ## `RedisModule_StringPtrLen`
 
     const char *RedisModule_StringPtrLen(const RedisModuleString *str, size_t *len);
 
@@ -1750,6 +1775,14 @@ Note: `RedisModule_UnblockClient` should be called for every blocked client,
 even if client was killed, timed-out or disconnected. Failing to do so
 will result in memory leaks.
 
+There are some cases where `RedisModule_BlockClient()` cannot be used:
+
+1. If the client is a Lua script.
+2. If the client is executing a MULTI block.
+
+In these cases, a call to `RedisModule_BlockClient()` will **not** block the
+client, but instead produce a specific error reply.
+
 ## `RedisModule_BlockClientOnKeys`
 
     RedisModuleBlockedClient *RedisModule_BlockClientOnKeys(RedisModuleCtx *ctx, RedisModuleCmdFunc reply_callback, RedisModuleCmdFunc timeout_callback, void (*free_privdata)(RedisModuleCtx*,void*), long long timeout_ms, RedisModuleString **keys, int numkeys, void *privdata);
 
@@ -1988,6 +2021,11 @@ is interested in. This can be an ORed mask of any of the following flags:
 
  - REDISMODULE_NOTIFY_STREAM: Stream events
  - REDISMODULE_NOTIFY_KEYMISS: Key-miss events
  - REDISMODULE_NOTIFY_ALL: All events (Excluding REDISMODULE_NOTIFY_KEYMISS)
+ - REDISMODULE_NOTIFY_LOADED: A special notification available only for modules,
+                              indicates that the key was loaded from persistence.
+                              Notice, when this event fires, the given key
+                              can not be retained, use RM_CreateStringFromString
+                              instead.
 
 We do not distinguish between key events and keyspace events, and it is up to
 the module to filter the actions taken based on the key.

From 25555fe05a571454fa0f11dca28cb5796e04112f Mon Sep 17 00:00:00 2001
From: Oran Agra
Date: Wed, 9 Sep 2020 21:09:15 +0300
Subject: [PATCH 0440/1457] Missing array index in module example (#1386)

---
 topics/modules-native-types.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/topics/modules-native-types.md b/topics/modules-native-types.md
index 342e1fb3cc..28cfb520b1 100644
--- a/topics/modules-native-types.md
+++ b/topics/modules-native-types.md
@@ -325,7 +325,7 @@ method we'll do something like this:
 
         da->count = RedisModule_LoadUnsigned(io);
         da->values = RedisModule_Alloc(da->count * sizeof(double));
         for (size_t j = 0; j < da->count; j++)
-            da->values = RedisModule_LoadDouble(io);
+            da->values[j] = RedisModule_LoadDouble(io);
         return da;
     }

From 28292cc3c01e8f471fbd4c394c57cf3125ed3d65 Mon Sep 17 00:00:00 2001
From: Itamar Haber
Date: Fri, 11 Sep 2020 22:12:18 +0300
Subject: [PATCH 0441/1457] Fixes #1387 (#1389)

---
 topics/acl.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/topics/acl.md b/topics/acl.md
index b06684cd75..da0adae94b 100644
--- a/topics/acl.md
+++ b/topics/acl.md
@@ -105,7 +105,7 @@ Allow and disallow commands:
 
 Allow and disallow certain keys:
 
-`~`: Add a pattern of keys that can be mentioned as part of commands. For instance `~*` allows all the keys. The pattern is a glob-style pattern like the one of KEYS. It is possible to specify multiple patterns.
+* `~`: Add a pattern of keys that can be mentioned as part of commands. For instance `~*` allows all the keys. The pattern is a glob-style pattern like the one of KEYS. It is possible to specify multiple patterns.
 * `allkeys`: Alias for `~*`.
 * `resetkeys`: Flush the list of allowed keys patterns.
For instance the ACL `~foo:* ~bar:* resetkeys ~objects:*`, will result in the client only be able to access keys matching the pattern `objects:*`. From b10b18c6bb7d28f98eb3d5538e0301ce34fb7471 Mon Sep 17 00:00:00 2001 From: Justin Castilla <59704472+justincastilla@users.noreply.github.com> Date: Fri, 11 Sep 2020 12:27:00 -0700 Subject: [PATCH 0442/1457] fix typo in client-side-caching article (#1390) Co-authored-by: Justin Castilla --- topics/client-side-caching.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/client-side-caching.md b/topics/client-side-caching.md index 5dd98cb56e..1e00343caa 100644 --- a/topics/client-side-caching.md +++ b/topics/client-side-caching.md @@ -104,7 +104,7 @@ This is an example of the protocol: This looks great superficially, but if you think at 10k connected clients all asking for millions of keys in the story of each long living connection, the -server would end storing too much information. For this reason Redis uses two +server would end up storing too much information. For this reason Redis uses two key ideas in order to limit the amount of memory used server side, and the CPU cost of handling the data structures implementing the feature: From 823e5466bb7c75aa33c60638eb3c81dff109a417 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Mon, 14 Sep 2020 16:42:39 +0300 Subject: [PATCH 0443/1457] Adds AUTH2 form to `MIGRATE` (#1391) --- commands.json | 6 ++++++ commands/migrate.md | 10 ++++++---- 2 files changed, 12 insertions(+), 4 deletions(-) diff --git a/commands.json b/commands.json index 040bb9f77f..12b16d5230 100644 --- a/commands.json +++ b/commands.json @@ -2149,6 +2149,12 @@ "type": "string", "optional": true }, + { + "command": "AUTH2", + "name": "username password", + "type": "string", + "optional": true + }, { "name": "key", "command": "KEYS", diff --git a/commands/migrate.md b/commands/migrate.md index 1635605eb2..6559b1b441 100644 --- a/commands/migrate.md +++ b/commands/migrate.md @@ -66,10 +66,12 @@ just a single key exists. * `AUTH` -- Authenticate with the given password to the remote instance. * `AUTH2` -- Authenticate with the given username and password pair (Redis 6 or greater ACL auth style). -`COPY` and `REPLACE` are available only in 3.0 and above. -`KEYS` is available starting with Redis 3.0.6. -`AUTH` is available starting with Redis 4.0.7. -`AUTH2` is available starting with Redis 6.0.0. +@history + +* `>= 3.0.0`: Added the `COPY` and `REPLACE` options. +* `>= 3.0.6`: Added the `KEYS` option. +* `>= 4.0.7`: Added the `AUTH` option. +* `>= 6.0.0`: Added the `AUTH2` option. 
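As a concrete sketch of the `AUTH2` form (not part of the patch itself; the host, credentials and key names below are placeholders), a migration of three keys to an ACL-protected instance could look like this, with the empty string given as the key argument because the keys are listed after `KEYS`:

    MIGRATE 192.168.1.34 6379 "" 0 5000 AUTH2 myuser mypassword KEYS key1 key2 key3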
@return From dc8140a91e47913c46086c9377cb369d234bfe9e Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Mon, 14 Sep 2020 18:47:04 +0300 Subject: [PATCH 0444/1457] Adds ACL `AUTH` form syntax (#1392) --- commands.json | 5 +++++ commands/auth.md | 4 ++++ 2 files changed, 9 insertions(+) diff --git a/commands.json b/commands.json index 12b16d5230..8f606997f3 100644 --- a/commands.json +++ b/commands.json @@ -136,6 +136,11 @@ "AUTH": { "summary": "Authenticate to the server", "arguments": [ + { + "name": "username", + "type": "string", + "optional": true + }, { "name": "password", "type": "string" diff --git a/commands/auth.md b/commands/auth.md index 7c1e02a800..2af6dc5084 100644 --- a/commands/auth.md +++ b/commands/auth.md @@ -24,6 +24,10 @@ defined in the ACL list (see `ACL SETUSER`) and the official [ACL guide](/topics When ACLs are used, the single argument form of the command, where only the password is specified, assumes that the implicit username is "default". +@history + +* `>= 6.0.0`: Added ACL style (username and password). + ## Security notice Because of the high performance nature of Redis, it is possible to try From 4fcfcf25dfab6ab4a6571662eb98f06cf9f2dc22 Mon Sep 17 00:00:00 2001 From: Nathan Harris Date: Sun, 20 Sep 2020 12:59:11 -0700 Subject: [PATCH 0445/1457] Update clients.json for Swift language (#1393) Also marks any Swift client that hasn't received an update in over two years as inactive. --- clients.json | 17 ++++++++++++++--- 1 file changed, 14 insertions(+), 3 deletions(-) diff --git a/clients.json b/clients.json index 569ad2556b..0c51dca3b0 100644 --- a/clients.json +++ b/clients.json @@ -1416,7 +1416,7 @@ "repository": "https://github.com/ronp001/SwiftRedis", "description": "Basic async client for Redis in Swift (iOS)", "authors": ["ronp001"], - "active": true + "active": false }, { @@ -1425,7 +1425,7 @@ "repository": "https://github.com/Zewo/Redis", "description": "Redis client for Swift. OpenSwift C7 Compliant, OS X and Linux compatible.", "authors": ["rabc"], - "active": true + "active": false }, { @@ -1434,7 +1434,7 @@ "repository": "https://github.com/czechboy0/Redbird", "description": "Pure-Swift implementation of a Redis client from the original protocol spec (OS X + Linux compatible)", "authors": ["czechboy0"], - "active": true + "active": false }, { @@ -1455,6 +1455,17 @@ "active": true }, + { + "name": "RediStack", + "language": "Swift", + "repository": "https://gitlab.com/Mordil/RediStack", + "description": "Non-blocking, event-driven Swift client for Redis built with SwiftNIO for all official Swift deployment environments.", + "authors": ["mordil"], + "active": true, + "recommended": true, + "url": "https://docs.redistack.info" + }, + { "name": "Rackdis", "language": "Racket", From 4c59ba5e8654ea8baf106086833d7753c7b960ac Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Sun, 20 Sep 2020 23:06:38 +0300 Subject: [PATCH 0446/1457] Update sponsors.md --- topics/sponsors.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/sponsors.md b/topics/sponsors.md index 07db25e291..7bd07462aa 100644 --- a/topics/sponsors.md +++ b/topics/sponsors.md @@ -6,7 +6,7 @@ Starting from June 2015 the work [Salvatore Sanfilippo](http://twitter.com/antir Past sponsorships: * The [Shuttleworth Foundation](http://www.shuttleworthfoundation.org) donated 5000 USD to the Redis project in form of a flash grant. The details will be posted soon on a blog post documenting how the money was used. 
-![Shuttleworth Foundation](http://redis.io/images/shuttleworth.png) +![Shuttleworth Foundation](/images/shuttleworth.png) * From May 2013 to June 2015 the work [Salvatore Sanfilippo](http://twitter.com/antirez) did in order to develop Redis was sponsored by [Pivotal](http://gopivotal.com). * Before May 2013 the project was sponsored by VMware with the work of [Salvatore Sanfilippo](http://twitter.com/antirez) and [Pieter Noordhuis](http://twitter.com/pnoordhuis). * [VMware](http://vmware.com) and later [Pivotal](http://pivotal.io) provided a 24 GB RAM workstation for me to run the [Redis CI test](http://ci.redis.io) and other long running tests. Later I (Salvatore) equipped the server with an SSD drive in order to test in the same hardware with rotating and flash drives. From 8cbf12d953f55c49acdd69d0b2e4706c136f19b8 Mon Sep 17 00:00:00 2001 From: Guy Korland Date: Tue, 22 Sep 2020 16:48:15 +0300 Subject: [PATCH 0447/1457] update modules stars counters (#1395) --- modules.json | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/modules.json b/modules.json index 6ffff9adfd..c7e9915ea6 100644 --- a/modules.json +++ b/modules.json @@ -18,7 +18,7 @@ "MeirShpilraien", "RedisLabs" ], - "stars": 67 + "stars": 122 }, { "name": "redis-roaring", @@ -49,7 +49,7 @@ "swilly22", "RedisLabs" ], - "stars": 884 + "stars": 1144 }, { "name": "redis-tdigest", @@ -70,7 +70,7 @@ "itamarhaber", "RedisLabs" ], - "stars": 939 + "stars": 1119 }, { "name": "RediSearch", @@ -81,7 +81,7 @@ "dvirsky", "RedisLabs" ], - "stars": 1936 + "stars": 2585 }, { "name": "topk", @@ -114,7 +114,7 @@ "mnunberg", "RedisLabs" ], - "stars": 473 + "stars": 691 }, { "name": "neural-redis", @@ -135,7 +135,7 @@ "danni-m", "RedisLabs" ], - "stars": 310 + "stars": 452 }, { "name": "RedisAI", @@ -146,7 +146,7 @@ "lantiga", "RedisLabs" ], - "stars": 289 + "stars": 435 }, { "name": "ReDe", From d53cf30708a2fb5f8969ff252c3f69caf2884e36 Mon Sep 17 00:00:00 2001 From: raphaelauv Date: Tue, 22 Sep 2020 15:49:39 +0200 Subject: [PATCH 0448/1457] Update client Lettuce Info (#1394) --- clients.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/clients.json b/clients.json index 0c51dca3b0..1800d9c095 100644 --- a/clients.json +++ b/clients.json @@ -1028,8 +1028,8 @@ { "name": "lettuce", "language": "Java", - "url": "http://redis.paluch.biz", - "repository": "https://github.com/mp911de/lettuce", + "url": "https://lettuce.io/", + "repository": "https://github.com/lettuce-io/lettuce-core", "description": "Advanced Redis client for thread-safe sync, async, and reactive usage. Supports Cluster, Sentinel, Pipelining, and codecs.", "authors": ["ar3te", "mp911de"], "active": true, From 5922c3c3e1f0578734d6e2603bb917f2efcfc906 Mon Sep 17 00:00:00 2001 From: Oran Agra Date: Mon, 28 Sep 2020 17:19:52 +0300 Subject: [PATCH 0449/1457] clarification about AOF fsync-always (#1402) --- topics/persistence.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/persistence.md b/topics/persistence.md index 967807110a..eb5207b995 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -134,7 +134,7 @@ You can configure how many times Redis will [`fsync`](http://linux.die.net/man/2/fsync) data on disk. There are three options: -* `appendfsync always`: `fsync` every time a new command is appended to the AOF. Very very slow, very safe. +* `appendfsync always`: `fsync` every time new commands are appended to the AOF. Very very slow, very safe. 
Note that the commands are apended to the AOF after a batch of commands from multiple clients or a pipeline are executed, so it means a single write and a single fsync (before sending the replies). * `appendfsync everysec`: `fsync` every second. Fast enough (in 2.4 likely to be as fast as snapshotting), and you can lose 1 second of data if there is a disaster. * `appendfsync no`: Never `fsync`, just put your data in the hands of the Operating System. The faster and less safe method. Normally Linux will flush data every 30 seconds with this configuration, but it's up to the kernel exact tuning. From 8d10326a60f2107525cf99c0b692e5e12bd12741 Mon Sep 17 00:00:00 2001 From: Pieter du Preez Date: Mon, 28 Sep 2020 14:26:38 +0000 Subject: [PATCH 0450/1457] Removed two duplicate words from ./wordlist. (#1400) CJSON was a duplicate inside ./wordlist. PSYNC is a command inside ./commands.json and therefore should not be listed in the ./wordlist file again. --- wordlist | 2 -- 1 file changed, 2 deletions(-) diff --git a/wordlist b/wordlist index 37ce95f651..ed67ea5bc5 100644 --- a/wordlist +++ b/wordlist @@ -8,7 +8,6 @@ BitOp Bitfields CAS CJSON -CJSON CLI CP CPUs @@ -94,7 +93,6 @@ PHP PINGs POSIX PRNG -PSYNC PostgreSQL Presharding RDB From cb361501d5cee82a64811fa4f9de96f78f0fe64f Mon Sep 17 00:00:00 2001 From: Pieter du Preez Date: Mon, 28 Sep 2020 14:37:23 +0000 Subject: [PATCH 0451/1457] Fixed an awk regexp escape sequence warning in the makefile. (#1398) The following warning was printed by awk, when running 'make spell': awk: cmd. line:1: warning: regexp escape sequence `\&' is not a known regexp operator This patch removes the escape in front of the '&' in the awk regexp and fixes this issue. --- makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/makefile b/makefile index 97646e5f04..c53ed8da14 100644 --- a/makefile +++ b/makefile @@ -27,7 +27,7 @@ $(TEXT_FILES): tmp/%.txt: %.md $(SPELL_FILES): %.spell: %.txt tmp/dict aspell -a --extra-dicts=./tmp/dict 2>/dev/null < $< | \ - awk -v FILE=$(patsubst tmp/%.spell,%.md,$@) '/^\&/ { print FILE, $$2 }' | \ + awk -v FILE=$(patsubst tmp/%.spell,%.md,$@) '/^&/ { print FILE, $$2 }' | \ sort -f | uniq > $@ tmp/commands: From ac366b2cd58d91ef519ab7dc7ac8c88516ef53ff Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Mon, 28 Sep 2020 18:20:59 +0300 Subject: [PATCH 0452/1457] Re-establishes the use of `make spell` (#1403) * Fixes errors found by `make spell` * Adds spell check to PR github action --- .github/workflows/pull_request.yml | 4 +++- commands/acl-setuser.md | 2 +- commands/client-caching.md | 2 +- commands/eval.md | 6 +++--- commands/module-load.md | 2 +- commands/stralgo.md | 4 ++-- commands/xreadgroup.md | 2 +- wordlist | 10 ++++++++++ 8 files changed, 22 insertions(+), 10 deletions(-) diff --git a/.github/workflows/pull_request.yml b/.github/workflows/pull_request.yml index ff7f60664f..5388d4d398 100644 --- a/.github/workflows/pull_request.yml +++ b/.github/workflows/pull_request.yml @@ -14,5 +14,7 @@ jobs: - uses: actions/checkout@v2.1.1 - name: Install dependencies run: gem install $(sed -e 's/ -v /:/' .gems) - - name: Run tests + - name: Sanity parse test run: make -s + - name: Spelling check + run: make spell diff --git a/commands/acl-setuser.md b/commands/acl-setuser.md index fb13514632..a71e6ccb38 100644 --- a/commands/acl-setuser.md +++ b/commands/acl-setuser.md @@ -55,7 +55,7 @@ This is a list of all the supported Redis ACL rules: * `-@`: Like `+@` but removes all the commands in the category instead of adding them. 
* `nocommands`: alias for `-@all`. Removes all the commands, the user will no longer be able to execute anything. * `nopass`: the user is set as a "no password" user. It means that it will be possible to authenticate as such user with any password. By default, the `default` special user is set as "nopass". The `nopass` rule will also reset all the configured passwords for the user. -* `>password`: Add the specified clear text password as an hashed password in the list of the users passwords. Every user can have many active passwords, so that password rotation will be simpler. The specified password is not stored in cleartext inside the server. Example: `>mypassword`. +* `>password`: Add the specified clear text password as an hashed password in the list of the users passwords. Every user can have many active passwords, so that password rotation will be simpler. The specified password is not stored as clear text inside the server. Example: `>mypassword`. * `#`: Add the specified hashed password to the list of user passwords. A Redis hashed password is hashed with SHA256 and translated into a hexadecimal string. Example: `#c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2`. * `password` but removes the password instead of adding it. * `!`: Like `#` but removes the password instead of adding it. diff --git a/commands/client-caching.md b/commands/client-caching.md index d857811329..1f4b8b8a3e 100644 --- a/commands/client-caching.md +++ b/commands/client-caching.md @@ -2,7 +2,7 @@ This command controls the tracking of the keys in the next command executed by the connection, when tracking is enabled in `OPTIN` or `OPTOUT` mode. Please check the [client side caching documentation](/topics/client-side-caching) for -background informations. +background information. When tracking is enabled Redis, using the `CLIENT TRACKING` command, it is possible to specify the `OPTIN` or `OPTOUT` options, so that keys diff --git a/commands/eval.md b/commands/eval.md index faa84fa7f5..ba02ce7f13 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -602,13 +602,13 @@ was the cause of bugs. ## Using Lua scripting in RESP3 mode -Starting with Redis version 6, the server supports two differnent protocols. +Starting with Redis version 6, the server supports two different protocols. One is called RESP2, and is the old protocol: all the new connections to the server start in this mode. However clients are able to negotiate the new protocol using the `HELLO` command: this way the connection is put in RESP3 mode. In this mode certain commands, like for instance `HGETALL`, reply with a new data type (the Map data type in this specific case). The -RESP3 protocol is semantically more powerful, however most scripts are ok +RESP3 protocol is semantically more powerful, however most scripts are OK with using just RESP2. The Lua engine always assumes to run in RESP2 mode when talking with Redis, @@ -647,7 +647,7 @@ At this point the new conversions are available, specifically: * Lua boolean -> Redis boolean true or false. **Note that this is a change compared to the RESP2 mode**, where returning true from Lua returned the number 1 to the Redis client, and returning false used to return NULL. * Lua table with a single `map` field set to a field-value Lua table -> Redis map reply. -* Lua table with a single `set` field set to a field-value Lua table -> Redis set reply, the values are discared and can be anything. 
+* Lua table with a single `set` field set to a field-value Lua table -> Redis set reply, the values are discarded and can be anything. * Lua table with a single `double` field set to a field-value Lua table -> Redis double reply. * Lua null -> Redis RESP3 new null reply (protocol `"_\r\n"`). * All the RESP2 old conversions still apply unless specified above. diff --git a/commands/module-load.md b/commands/module-load.md index c5919c0077..99777c3886 100644 --- a/commands/module-load.md +++ b/commands/module-load.md @@ -5,7 +5,7 @@ specified by the `path` argument. The `path` should be the absolute path of the library, including the full filename. Any additional arguments are passed unmodified to the module. -**Note**: modules can also be loaded at server startup with 'loadmodule' +**Note**: modules can also be loaded at server startup with `loadmodule` configuration directive in `redis.conf`. @return diff --git a/commands/stralgo.md b/commands/stralgo.md index 73d06bf38d..05df3a4bbf 100644 --- a/commands/stralgo.md +++ b/commands/stralgo.md @@ -100,5 +100,5 @@ Finally to also have the match len: For the LCS algorithm: * Without modifiers the string representing the longest common substring is returned. -* When LEN is given the command returns the length of the longest common substring. -* When IDX is given the command returns an array with the LCS length and all the ranges in both the strings, start and end offset for each string, where there are matches. When WITHMATCHLEN is given each array representing a match will also have the length of the match (see examples). +* When `LEN` is given the command returns the length of the longest common substring. +* When `IDX` is given the command returns an array with the LCS length and all the ranges in both the strings, start and end offset for each string, where there are matches. When `WITHMATCHLEN` is given each array representing a match will also have the length of the match (see examples). diff --git a/commands/xreadgroup.md b/commands/xreadgroup.md index e458ebf0bb..237646859b 100644 --- a/commands/xreadgroup.md +++ b/commands/xreadgroup.md @@ -16,7 +16,7 @@ Without consumer groups, just using `XREAD`, all the clients are served with all Within a consumer group, a given consumer (that is, just a client consuming messages from the stream), has to identify with an unique *consumer name*. Which is just a string. -One of the guarantees of consumer groups is that a given consumer can only see the history of messages that were delivered to it, so a message has just a single owner. However there is a special feature called *message claiming* that allows other consumers to claim messages in case there is a non recoverable failure of some consumer. In order to implement such semantics, consumer groups require explicit acknowledgement of the messages successfully processed by the consumer, via the `XACK` command. This is needed because the stream will track, for each consumer group, who is processing what message. +One of the guarantees of consumer groups is that a given consumer can only see the history of messages that were delivered to it, so a message has just a single owner. However there is a special feature called *message claiming* that allows other consumers to claim messages in case there is a non recoverable failure of some consumer. In order to implement such semantics, consumer groups require explicit acknowledgment of the messages successfully processed by the consumer, via the `XACK` command. 
This is needed because the stream will track, for each consumer group, who is processing what message. This is how to understand if you want to use a consumer group or not: diff --git a/wordlist b/wordlist index ed67ea5bc5..002d6d382a 100644 --- a/wordlist +++ b/wordlist @@ -40,6 +40,7 @@ Geohashes Geospatial GitHub Google +HMAC HLL HLLs HOWTO @@ -55,6 +56,7 @@ IRC Inline JPEG JSON +LCS LDB LF LFU @@ -89,6 +91,7 @@ OOM OSGEO Opteron PEL +PELs PHP PINGs POSIX @@ -219,10 +222,12 @@ failback failover failovers fanout +fao fdatasync filesystem firewalled firewalling +fo freenode fsync functionalities @@ -292,6 +297,7 @@ netsplits newjobs nils noeviction +nopass noscript numactl online @@ -311,6 +317,7 @@ probabilistically proc programmatically prstat +pseudorandom pubsub qsort queueing @@ -356,12 +363,14 @@ sismember slowlog smaps snapshotting +somekey startup strace struct subcommand subcommands suboptimal +subsequence substring swappability syscall @@ -403,6 +412,7 @@ variadic versa versioned versioning +virginia virtualization virtualized vmstat From 86170e4e48eaba79ea09a8405bbe2bc04387c566 Mon Sep 17 00:00:00 2001 From: Pieter du Preez Date: Mon, 28 Sep 2020 15:28:36 +0000 Subject: [PATCH 0453/1457] Added make-check-targets for duplication inside the ./wordlist file. (#1399) Spell checking by the 'spell' makefile target is done by using white-list words originating from commands in the ./commands.json file as well as words in the ./wordlist file. Two ways of word duplication in these two files are possible: 1. A new command gets added to the ./commands.json file, while the command already exists in the ./wordlist file. 2. The same word simply exist twice inside the ./wordlist file. This patch adds two makefile targets that check for these two cases, when running 'make spell'. --- makefile | 25 ++++++++++++++++++++++++- 1 file changed, 24 insertions(+), 1 deletion(-) diff --git a/makefile b/makefile index c53ed8da14..5ac7ef4d9c 100644 --- a/makefile +++ b/makefile @@ -16,8 +16,31 @@ clients: tools: ruby utils/clients.rb tools.json +check_duplicate_wordlist: wordlist + @cat wordlist |sort |uniq -c |sort -n \ + |awk '{ if ($$1 > 1) print "grep -nw "$$2" wordlist"}' \ + |sh >tmp/duplicates_in_wordlist.txt || true + @test -s tmp/duplicates_in_wordlist.txt \ + && echo "ERROR: The following word(s) appear more than once in the './wordlist' file:" \ + && echo "line:word" \ + && echo "---------" \ + && cat tmp/duplicates_in_wordlist.txt \ + || true + @test ! -s tmp/duplicates_in_wordlist.txt -spell: tmp/commands tmp/topics $(SPELL_FILES) +check_command_wordlist: wordlist tmp/commands.txt check_duplicate_wordlist + @cat wordlist tmp/commands.txt |sort |uniq -c |sort -n \ + |awk '{ if ($$1 > 1) print "grep -nw "$$2" wordlist"}' \ + |sh >tmp/commands_in_wordlist.txt || true + @test -s tmp/commands_in_wordlist.txt \ + && echo "ERROR: The following command(s) should be removed from in the './wordlist' file:" \ + && echo "line:command" \ + && echo "------------" \ + && cat tmp/commands_in_wordlist.txt \ + || true + @test ! -s tmp/commands_in_wordlist.txt + +spell: tmp/commands tmp/topics $(SPELL_FILES) check_command_wordlist find tmp -name '*.spell' | xargs cat > tmp/spelling-errors cat tmp/spelling-errors test ! 
-s tmp/spelling-errors From c72a016de2a65647972633c2b615fd33bf04c718 Mon Sep 17 00:00:00 2001 From: Tyson Andre Date: Wed, 30 Sep 2020 01:51:08 -0400 Subject: [PATCH 0454/1457] Add documentation for zmscore command (#1361) Co-authored-by: Tyson Andre --- commands.json | 17 +++++++++++++++++ commands/zmscore.md | 16 ++++++++++++++++ 2 files changed, 33 insertions(+) create mode 100644 commands/zmscore.md diff --git a/commands.json b/commands.json index 8f606997f3..207447f4ec 100644 --- a/commands.json +++ b/commands.json @@ -3883,6 +3883,23 @@ "since": "1.2.0", "group": "sorted_set" }, + "ZMSCORE": { + "summary": "Get the score associated with the given members in a sorted set", + "complexity": "O(N) where N is the number of members being requested.", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "name": "member", + "type": "string", + "multiple": true + } + ], + "since": "6.2.0", + "group": "sorted_set" + }, "ZUNIONSTORE": { "summary": "Add multiple sorted sets and store the resulting sorted set in a new key", "complexity": "O(N)+O(M log(M)) with N being the sum of the sizes of the input sorted sets, and M being the number of elements in the resulting sorted set.", diff --git a/commands/zmscore.md b/commands/zmscore.md new file mode 100644 index 0000000000..c2317e90b8 --- /dev/null +++ b/commands/zmscore.md @@ -0,0 +1,16 @@ +Returns the scores associated with the specified `members` in the sorted set stored at `key`. + +For every `member` that does not exist in the sorted set, a `nil` value is returned. + +@return + +@array-reply: list of scores or `nil` associated with the specified `member` values (a double precision floating point number), +represented as strings. + +@examples + +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZMSCORE myzset "one" "two" "nofield" +``` From 6608023641d860f24734220aa8263071b2f41b01 Mon Sep 17 00:00:00 2001 From: valentinogeron Date: Wed, 30 Sep 2020 08:58:39 +0300 Subject: [PATCH 0455/1457] Add docs for XGROUP CREATECONSUMER subcommand (#1385) Co-authored-by: Itamar Haber --- commands.json | 14 ++++++++++++++ commands/xgroup.md | 17 +++++++++++++---- topics/notifications.md | 1 + 3 files changed, 28 insertions(+), 4 deletions(-) diff --git a/commands.json b/commands.json index 207447f4ec..4619dc1ad5 100644 --- a/commands.json +++ b/commands.json @@ -4313,6 +4313,20 @@ ], "optional": true }, + { + "command": "CREATECONSUMER", + "name": [ + "key", + "groupname", + "consumername" + ], + "type": [ + "key", + "string", + "string" + ], + "optional": true + }, { "command": "DELCONSUMER", "name": [ diff --git a/commands/xgroup.md b/commands/xgroup.md index f503c12267..95dc9fb550 100644 --- a/commands/xgroup.md +++ b/commands/xgroup.md @@ -40,15 +40,20 @@ The consumer group will be destroyed even if there are active consumers and pending messages, so make sure to call this command only when really needed. +Consumers in a consumer group are auto-created every time a new consumer +name is mentioned by some command. They can also be explicitly created +by using the following form: + + XGROUP CREATECONSUMER mystream consumer-group-name myconsumer123 + To just remove a given consumer from a consumer group, the following form is used: XGROUP DELCONSUMER mystream consumer-group-name myconsumer123 -Consumers in a consumer group are auto-created every time a new consumer -name is mentioned by some command. However sometimes it may be useful to -remove old consumers since they are no longer used. 
This form returns -the number of pending messages that the consumer had before it was deleted. +Sometimes it may be useful to remove old consumers since they are no longer +used. This form returns the number of pending messages that the consumer +had before it was deleted. Finally it possible to set the next message to deliver using the `SETID` subcommand. Normally the next ID is set when the consumer is @@ -64,3 +69,7 @@ Finally to get some help if you don't remember the syntax, use the HELP subcommand: XGROUP HELP + +@history + + * `>= 6.2.0`: Supports the `CREATECONSUMER` subcommand. diff --git a/topics/notifications.md b/topics/notifications.md index b84a774d11..786e348a34 100644 --- a/topics/notifications.md +++ b/topics/notifications.md @@ -136,6 +136,7 @@ Different commands generate different kind of events according to the following * `XADD` generates an `xadd` event, possibly followed an `xtrim` event when used with the `MAXLEN` subcommand. * `XDEL` generates a single `xdel` event even when multiple entries are deleted. * `XGROUP CREATE` generates an `xgroup-create` event. +* `XGROUP CREATECONSUMER` generates an `xgroup-createconsumer` event. * `XGROUP DELCONSUMER` generates an `xgroup-delconsumer` event. * `XGROUP DESTROY` generates an `xgroup-destroy` event. * `XGROUP SETID` generates an `xgroup-setid` event. From 7ad24ee79785b26e2dfe39ab2f1a5ff9c03b0a38 Mon Sep 17 00:00:00 2001 From: Tyson Andre Date: Wed, 30 Sep 2020 01:59:56 -0400 Subject: [PATCH 0456/1457] Document proposed SMISMEMBER KEY MEMBER ... (#1364) Documentation PR for https://github.com/redis/redis/pull/7615 --- commands.json | 17 +++++++++++++++++ commands/smismember.md | 16 ++++++++++++++++ 2 files changed, 33 insertions(+) create mode 100644 commands/smismember.md diff --git a/commands.json b/commands.json index 4619dc1ad5..84f9b032f6 100644 --- a/commands.json +++ b/commands.json @@ -2961,6 +2961,23 @@ "since": "1.0.0", "group": "set" }, + "SMISMEMBER": { + "summary": "Returns the membership associated with the given elements for a set", + "complexity": "O(N) where N is the number of elements being checked for membership", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "name": "member", + "type": "string", + "multiple": true + } + ], + "since": "6.2.0", + "group": "set" + }, "SLAVEOF": { "summary": "Make the server a replica of another instance, or promote it as master. Deprecated starting with Redis 5. Use REPLICAOF instead.", "arguments": [ diff --git a/commands/smismember.md b/commands/smismember.md new file mode 100644 index 0000000000..c4cec64a6b --- /dev/null +++ b/commands/smismember.md @@ -0,0 +1,16 @@ +Returns whether each `member` is a member of the set stored at `key`. + +For every `member`, `1` is returned if the value is a member of the set, or `0` if the element is not a member of the set or if `key` does not exist. + +@return + +@array-reply: list representing the membership of the given elements, in the same +order as they are requested. 
+ +@examples + +```cli +SADD myset "one" +SADD myset "one" +SMISMEMBER myset "one" "notamember" +``` From e676d450cd16df6f142b876caead503c5781e589 Mon Sep 17 00:00:00 2001 From: alexronke-channeladvisor <41968397+alexronke-channeladvisor@users.noreply.github.com> Date: Wed, 30 Sep 2020 02:03:16 -0400 Subject: [PATCH 0457/1457] Documentation update for new ZADD GT and LT options (#1397) Co-authored-by: Itamar Haber Co-authored-by: Oran Agra --- commands.json | 9 +++++++++ commands/zadd.md | 8 +++++++- topics/modules-api-ref.md | 2 ++ 3 files changed, 18 insertions(+), 1 deletion(-) diff --git a/commands.json b/commands.json index 84f9b032f6..e4ebb0ae04 100644 --- a/commands.json +++ b/commands.json @@ -3399,6 +3399,15 @@ ], "optional": true }, + { + "name": "comparison", + "type": "enum", + "enum": [ + "GT", + "LT" + ], + "optional": true + }, { "name": "change", "type": "enum", diff --git a/commands/zadd.md b/commands/zadd.md index e112a19118..8656ce9c22 100644 --- a/commands/zadd.md +++ b/commands/zadd.md @@ -10,7 +10,7 @@ members is created, like if the sorted set was empty. If the key exists but does The score values should be the string representation of a double precision floating point number. `+inf` and `-inf` values are valid values as well. -ZADD options (Redis 3.0.2 or greater) +ZADD options --- ZADD supports a list of options, specified after the name of the key and before @@ -18,9 +18,13 @@ the first score argument. Options are: * **XX**: Only update elements that already exist. Never add elements. * **NX**: Don't update already existing elements. Always add new elements. +* **LT**: Only update existing elements if the new score is **less than** the current score. This flag doesn't prevent adding new elements. +* **GT**: Only update existing elements if the new score is **greater than** the current score. This flag doesn't prevent adding new elements. * **CH**: Modify the return value from the number of new elements added, to the total number of elements changed (CH is an abbreviation of *changed*). Changed elements are **new elements added** and elements already existing for which **the score was updated**. So elements specified in the command line having the same score as they had in the past are not counted. Note: normally the return value of `ZADD` only counts the number of new elements added. * **INCR**: When this option is specified `ZADD` acts like `ZINCRBY`. Only one score-element pair can be specified in this mode. +Note: The **GT**, **LT** and **NX** options are mutually exclusive. + Range of integer scores that can be expressed precisely --- @@ -70,6 +74,8 @@ If the `INCR` option is specified, the return value will be @bulk-string-reply: * `>= 2.4`: Accepts multiple elements. In Redis versions older than 2.4 it was possible to add or update a single member per call. +* `>= 3.0.2`: Added the `XX`, `NX`, `CH` and `INCR` options. +* `>=6.2`: Added the `GT` and `LT` options. @examples diff --git a/topics/modules-api-ref.md b/topics/modules-api-ref.md index 8089710067..9ed02e2472 100644 --- a/topics/modules-api-ref.md +++ b/topics/modules-api-ref.md @@ -1053,6 +1053,8 @@ The input flags are: REDISMODULE_ZADD_XX: Element must already exist. Do nothing otherwise. REDISMODULE_ZADD_NX: Element must not exist. Do nothing otherwise. + REDISMODULE_ZADD_LT: For existing element, only update if the new score is less than the current score. + REDISMODULE_ZADD_GT: For existing element, only update if the new score is greater than the current score. 
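As a hedged companion sketch (not from the patch; the function and variable names are illustrative), a module could use the new `REDISMODULE_ZADD_GT` flag to keep only the highest score seen for a member, mirroring `ZADD GT`:

    #include "redismodule.h"

    /* Add or update a member, but never lower an existing score. */
    static int HighScoreSet(RedisModuleCtx *ctx, RedisModuleString *keyname,
                            RedisModuleString *member, double score) {
        int flags = REDISMODULE_ZADD_GT;
        RedisModuleKey *key = RedisModule_OpenKey(ctx, keyname,
                                                  REDISMODULE_READ | REDISMODULE_WRITE);
        int rc = RedisModule_ZsetAdd(key, score, member, &flags);
        /* On return, `flags` holds the output flags listed below. */
        RedisModule_CloseKey(key);
        return rc;
    }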
The output flags are: From 7ac6599a8d30919116c0838d6324da66e5671b19 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E6=9D=A8=E5=8D=9A=E4=B8=9C?= Date: Wed, 30 Sep 2020 14:05:02 +0800 Subject: [PATCH 0458/1457] Add `ZINTER/ZUNION` command (#1396) Co-authored-by: Itamar Haber Co-authored-by: Oran Agra --- commands.json | 86 ++++++++++++++++++++++++++++++++++++++++++++++ commands/zinter.md | 21 +++++++++++ commands/zunion.md | 21 +++++++++++ 3 files changed, 128 insertions(+) create mode 100644 commands/zinter.md create mode 100644 commands/zunion.md diff --git a/commands.json b/commands.json index e4ebb0ae04..f4c5f3379b 100644 --- a/commands.json +++ b/commands.json @@ -3491,6 +3491,49 @@ "since": "1.2.0", "group": "sorted_set" }, + "ZINTER": { + "summary": "Intersect multiple sorted sets", + "complexity": "O(N*K)+O(M*log(M)) worst case with N being the smallest input sorted set, K being the number of input sorted sets and M being the number of elements in the resulting sorted set.", + "arguments": [ + { + "name": "numkeys", + "type": "integer" + }, + { + "name": "key", + "type": "key", + "multiple": true + }, + { + "command": "WEIGHTS", + "name": "weight", + "type": "integer", + "variadic": true, + "optional": true + }, + { + "command": "AGGREGATE", + "name": "aggregate", + "type": "enum", + "enum": [ + "SUM", + "MIN", + "MAX" + ], + "optional": true + }, + { + "name": "withscores", + "type": "enum", + "enum": [ + "WITHSCORES" + ], + "optional": true + } + ], + "since": "6.2.0", + "group": "sorted_set" + }, "ZINTERSTORE": { "summary": "Intersect multiple sorted sets and store the resulting sorted set in a new key", "complexity": "O(N*K)+O(M*log(M)) worst case with N being the smallest input sorted set, K being the number of input sorted sets and M being the number of elements in the resulting sorted set.", @@ -3909,6 +3952,49 @@ "since": "1.2.0", "group": "sorted_set" }, + "ZUNION": { + "summary": "Add multiple sorted sets", + "complexity": "O(N)+O(M*log(M)) with N being the sum of the sizes of the input sorted sets, and M being the number of elements in the resulting sorted set.", + "arguments": [ + { + "name": "numkeys", + "type": "integer" + }, + { + "name": "key", + "type": "key", + "multiple": true + }, + { + "command": "WEIGHTS", + "name": "weight", + "type": "integer", + "variadic": true, + "optional": true + }, + { + "command": "AGGREGATE", + "name": "aggregate", + "type": "enum", + "enum": [ + "SUM", + "MIN", + "MAX" + ], + "optional": true + }, + { + "name": "withscores", + "type": "enum", + "enum": [ + "WITHSCORES" + ], + "optional": true + } + ], + "since": "6.2.0", + "group": "sorted_set" + }, "ZMSCORE": { "summary": "Get the score associated with the given members in a sorted set", "complexity": "O(N) where N is the number of members being requested.", diff --git a/commands/zinter.md b/commands/zinter.md new file mode 100644 index 0000000000..5a7adccd79 --- /dev/null +++ b/commands/zinter.md @@ -0,0 +1,21 @@ +This command is similar to `ZINTERSTORE`, but instead of storing the resulting +sorted set, it is returned to the client. + +For a description of the `WEIGHTS` and `AGGREGATE` options, see `ZUNIONSTORE`. + +@return + +@array-reply: the result of intersection (optionally with their scores, in case +the `WITHSCORES` option is given). 
+ +@examples + +```cli +ZADD zset1 1 "one" +ZADD zset1 2 "two" +ZADD zset2 1 "one" +ZADD zset2 2 "two" +ZADD zset2 3 "three" +ZINTER 2 zset1 zset2 +ZINTER 2 zset1 zset2 WITHSCORES +``` diff --git a/commands/zunion.md b/commands/zunion.md new file mode 100644 index 0000000000..d77d81f47c --- /dev/null +++ b/commands/zunion.md @@ -0,0 +1,21 @@ +This command is similar to `ZUNIONSTORE`, but instead of storing the resulting +sorted set, it is returned to the client. + +For a description of the `WEIGHTS` and `AGGREGATE` options, see `ZUNIONSTORE`. + +@return + +@array-reply: the result of union (optionally with their scores, in case +the `WITHSCORES` option is given). + +@examples + +```cli +ZADD zset1 1 "one" +ZADD zset1 2 "two" +ZADD zset2 1 "one" +ZADD zset2 2 "two" +ZADD zset2 3 "three" +ZUNION 2 zset1 zset2 +ZUNION 2 zset1 zset2 WITHSCORES +``` From 950401c7e66cc2079121760395f1607e99f2b9a4 Mon Sep 17 00:00:00 2001 From: Gavrie Philipson Date: Thu, 1 Oct 2020 01:51:31 +0300 Subject: [PATCH 0459/1457] Fix some typos, punctuation and style errors (#1406) --- topics/cluster-tutorial.md | 42 +++++++++++++++++++------------------- 1 file changed, 21 insertions(+), 21 deletions(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index a48973259b..d004f60cde 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -83,7 +83,7 @@ Redis Cluster data sharding --- Redis Cluster does not use consistent hashing, but a different form of sharding -where every key is conceptually part of what we call an **hash slot**. +where every key is conceptually part of what we call a **hash slot**. There are 16384 hash slots in Redis Cluster, and to compute what is the hash slot of a given key, we simply take the CRC16 of the key modulo @@ -131,13 +131,13 @@ range 5501-11000. However when the cluster is created (or at a later time) we add a slave node to every master, so that the final cluster is composed of A, B, C -that are masters nodes, and A1, B1, C1 that are slaves nodes, the system is -able to continue if node B fails. +that are master nodes, and A1, B1, C1 that are slave nodes. +This way, the system is able to continue if node B fails. Node B1 replicates B, and B fails, the cluster will promote node B1 as the new master and will continue to operate correctly. -However note that if nodes B and B1 fail at the same time Redis Cluster is not +However, note that if nodes B and B1 fail at the same time, Redis Cluster is not able to continue to operate. Redis Cluster consistency guarantees @@ -166,19 +166,19 @@ This is **very similar to what happens** with most databases that are configured to flush data to disk every second, so it is a scenario you are already able to reason about because of past experiences with traditional database systems not involving distributed systems. Similarly you can -improve consistency by forcing the database to flush data on disk before -replying to the client, but this usually results into prohibitively low +improve consistency by forcing the database to flush data to disk before +replying to the client, but this usually results in prohibitively low performance. That would be the equivalent of synchronous replication in the case of Redis Cluster. -Basically there is a trade-off to take between performance and consistency. +Basically, there is a trade-off to be made between performance and consistency. 
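For the consistency-leaning end of that trade-off, a client can follow a write with the `WAIT` command, as in the minimal sketch below (the key, value and numbers are illustrative only):

    SET foo bar
    WAIT 1 100

Here `WAIT 1 100` blocks until at least 1 replica has acknowledged the write, or 100 milliseconds have elapsed, and returns the number of replicas that acknowledged it.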
Redis Cluster has support for synchronous writes when absolutely needed, -implemented via the `WAIT` command, this makes losing writes a lot less -likely, however note that Redis Cluster does not implement strong consistency -even when synchronous replication is used: it is always possible under more -complex failure scenarios that a slave that was not able to receive the write -is elected as master. +implemented via the `WAIT` command. This makes losing writes a lot less +likely. However, note that Redis Cluster does not implement strong consistency +even when synchronous replication is used: it is always possible, under more +complex failure scenarios, that a slave that was not able to receive the write +will be elected as master. There is another notable scenario where Redis Cluster will lose writes, that happens during a network partition where a client is isolated with a minority @@ -190,23 +190,23 @@ with 3 masters and 3 slaves. There is also a client, that we will call Z1. After a partition occurs, it is possible that in one side of the partition we have A, C, A1, B1, C1, and in the other side we have B and Z1. -Z1 is still able to write to B, that will accept its writes. If the +Z1 is still able to write to B, which will accept its writes. If the partition heals in a very short time, the cluster will continue normally. -However if the partition lasts enough time for B1 to be promoted to master -in the majority side of the partition, the writes that Z1 is sending to B -will be lost. +However, if the partition lasts enough time for B1 to be promoted to master +on the majority side of the partition, the writes that Z1 has sent to B +in the mean time will be lost. Note that there is a **maximum window** to the amount of writes Z1 will be able to send to B: if enough time has elapsed for the majority side of the partition to elect a slave as master, every master node in the minority -side stops accepting writes. +side will have stopped accepting writes. This amount of time is a very important configuration directive of Redis Cluster, and is called the **node timeout**. After node timeout has elapsed, a master node is considered to be failing, and can be replaced by one of its replicas. -Similarly after node timeout has elapsed without a master node to be able +Similarly, after node timeout has elapsed without a master node to be able to sense the majority of the other master nodes, it enters an error state and stops accepting writes. @@ -221,10 +221,10 @@ as you continue reading. * **cluster-enabled ``**: If yes, enables Redis Cluster support in a specific Redis instance. Otherwise the instance starts as a stand alone instance as usual. * **cluster-config-file ``**: Note that despite the name of this option, this is not a user editable configuration file, but the file where a Redis Cluster node automatically persists the cluster configuration (the state, basically) every time there is a change, in order to be able to re-read it at startup. The file lists things like the other nodes in the cluster, their state, persistent variables, and so forth. Often this file is rewritten and flushed on disk as a result of some message reception. * **cluster-node-timeout ``**: The maximum amount of time a Redis Cluster node can be unavailable, without it being considered as failing. If a master node is not reachable for more than the specified amount of time, it will be failed over by its slaves. This parameter controls other important things in Redis Cluster. 
Notably, every node that can't reach the majority of master nodes for the specified amount of time, will stop accepting queries. -* **cluster-slave-validity-factor ``**: If set to zero, a slave will always try to failover a master, regardless of the amount of time the link between the master and the slave remained disconnected. If the value is positive, a maximum disconnection time is calculated as the *node timeout* value multiplied by the factor provided with this option, and if the node is a slave, it will not try to start a failover if the master link was disconnected for more than the specified amount of time. For example if the node timeout is set to 5 seconds, and the validity factor is set to 10, a slave disconnected from the master for more than 50 seconds will not try to failover its master. Note that any value different than zero may result in Redis Cluster to be unavailable after a master failure if there is no slave able to failover it. In that case the cluster will return back available only when the original master rejoins the cluster. +* **cluster-slave-validity-factor ``**: If set to zero, a slave will always consider itself valid, and will therefore always try to failover a master, regardless of the amount of time the link between the master and the slave remained disconnected. If the value is positive, a maximum disconnection time is calculated as the *node timeout* value multiplied by the factor provided with this option, and if the node is a slave, it will not try to start a failover if the master link was disconnected for more than the specified amount of time. For example, if the node timeout is set to 5 seconds and the validity factor is set to 10, a slave disconnected from the master for more than 50 seconds will not try to failover its master. Note that any value different than zero may result in Redis Cluster being unavailable after a master failure if there is no slave that is able to failover it. In that case the cluster will return to being available only when the original master rejoins the cluster. * **cluster-migration-barrier ``**: Minimum number of slaves a master will remain connected with, for another slave to migrate to a master which is no longer covered by any slave. See the appropriate section about replica migration in this tutorial for more information. * **cluster-require-full-coverage ``**: If this is set to yes, as it is by default, the cluster stops accepting writes if some percentage of the key space is not covered by any node. If the option is set to no, the cluster will still serve queries even if only requests about a subset of keys can be processed. -* **cluster-allow-reads-when-down ``**: If this is set to no, as it is by default, a node in a Redis Cluster will stop serving all traffic when the cluster is marked as fail, either when a node can't reach a quorum of masters or full coverage is not met. This prevents reading potentially inconsistent data from a node that is unaware of changes in the cluster. This option can be set to yes to allow reads from a node during the fail state, which is useful for applications that want to prioritize read availability but still want to prevent inconsistent writes. It can also be used for when using Redis Cluster with only one or two shards, as it allows the nodes to continue serving writes when a master fails but automatic failover is impossible. 
+* **cluster-allow-reads-when-down ``**: If this is set to no, as it is by default, a node in a Redis Cluster will stop serving all traffic when the cluster is marked as failed, either when a node can't reach a quorum of masters or when full coverage is not met. This prevents reading potentially inconsistent data from a node that is unaware of changes in the cluster. This option can be set to yes to allow reads from a node during the fail state, which is useful for applications that want to prioritize read availability but still want to prevent inconsistent writes. It can also be used for when using Redis Cluster with only one or two shards, as it allows the nodes to continue serving writes when a master fails but automatic failover is impossible. Creating and using a Redis Cluster @@ -302,7 +302,7 @@ Creating the cluster Now that we have a number of instances running, we need to create our cluster by writing some meaningful configuration to the nodes. -If you are using Redis 5, this is very easy to accomplish as we are helped by the Redis Cluster command line utility embedded into `redis-cli`, that can be used to create new clusters, check or reshard an existing cluster, and so forth. +If you are using Redis 5 or higher, this is very easy to accomplish as we are helped by the Redis Cluster command line utility embedded into `redis-cli`, that can be used to create new clusters, check or reshard an existing cluster, and so forth. For Redis version 3 or 4, there is the older tool called `redis-trib.rb` which is very similar. You can find it in the `src` directory of the Redis source code distribution. You need to install `redis` gem to be able to run `redis-trib`. From 4a6aaa3a2a974f8c1dc36dc0c8b02711b6be58e4 Mon Sep 17 00:00:00 2001 From: Nykolas Laurentino de Lima Date: Fri, 2 Oct 2020 02:07:36 -0300 Subject: [PATCH 0460/1457] Add GET parameter to SET command (#1405) Add optional GET parameter to SET command in order to set a new value to a key and retrieve the old key value. With this change we can deprecate `GETSET` command and use only the SET command with the GET parameter. Equivalent of adding an EX to GETSET. --- commands.json | 8 ++++++++ commands/getset.md | 2 ++ commands/set.md | 7 +++++-- 3 files changed, 15 insertions(+), 2 deletions(-) diff --git a/commands.json b/commands.json index f4c5f3379b..f6bb4016f3 100644 --- a/commands.json +++ b/commands.json @@ -2818,6 +2818,14 @@ "XX" ], "optional": true + }, + { + "name": "get", + "type": "enum", + "enum": [ + "GET" + ], + "optional": true } ], "since": "1.0.0", diff --git a/commands/getset.md b/commands/getset.md index ea68401543..b3674f6881 100644 --- a/commands/getset.md +++ b/commands/getset.md @@ -15,6 +15,8 @@ GETSET mycounter "0" GET mycounter ``` +As per Redis 6.2, GETSET is considered deprecated. Please use `SET` with `GET` parameter in new code. + @return @bulk-string-reply: the old value stored at `key`, or `nil` when `key` did not exist. diff --git a/commands/set.md b/commands/set.md index a4d2f411e1..7d9ec38476 100644 --- a/commands/set.md +++ b/commands/set.md @@ -11,18 +11,21 @@ The `SET` command supports a set of options that modify its behavior: * `NX` -- Only set the key if it does not already exist. * `XX` -- Only set the key if it already exist. * `KEEPTTL` -- Retain the time to live associated with the key. +* `GET` -- Return the old value stored at key, or nil when key did not exist. 
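A minimal sketch of the new `GET` modifier, using a throwaway key, shows the old value being returned while the new one is stored:

    SET mykey "Hello"
    SET mykey "World" GET => "Hello"
    GET mykey => "World"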
-Note: Since the `SET` command options can replace `SETNX`, `SETEX`, `PSETEX`, it is possible that in future versions of Redis these three commands will be deprecated and finally removed.
+Note: Since the `SET` command options can replace `SETNX`, `SETEX`, `PSETEX`, `GETSET`, it is possible that in future versions of Redis these commands will be deprecated and finally removed.
 
 @return
 
 @simple-string-reply: `OK` if `SET` was executed correctly.
-@nil-reply: a Null Bulk Reply is returned if the `SET` operation was not performed because the user specified the `NX` or `XX` option but the condition was not met.
+@bulk-string-reply: when the `GET` option is set, the old value stored at key, or nil when key did not exist.
+@nil-reply: a Null Bulk Reply is returned if the `SET` operation was not performed because the user specified the `NX` or `XX` option but the condition was not met, or if the user specified the `NX` and `GET` options and the condition was not met.
 
 @history
 
 * `>= 2.6.12`: Added the `EX`, `PX`, `NX` and `XX` options.
 * `>= 6.0`: Added the `KEEPTTL` option.
+* `>= 6.2`: Added the `GET` option.
 
 @examples

From be74d8adbbe34f0575587b596f7982c7e2d9d09d Mon Sep 17 00:00:00 2001
From: 1Jack2 <37974403+1Jack2@users.noreply.github.com>
Date: Mon, 5 Oct 2020 23:24:36 +0800
Subject: [PATCH 0461/1457] change 'configEpoch' to 'currentEpoch'. (#1410)

---
 topics/cluster-spec.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md
index b8eb801420..2a81b52f24 100644
--- a/topics/cluster-spec.md
+++ b/topics/cluster-spec.md
@@ -777,7 +777,7 @@ At node creation every Redis Cluster node, both slaves and master nodes, set the
 
 Every time a packet is received from another node, if the epoch of the sender (part of the cluster bus messages header) is greater than the local node epoch, the `currentEpoch` is updated to the sender epoch.
 
-Because of these semantics, eventually all the nodes will agree to the greatest `configEpoch` in the cluster.
+Because of these semantics, eventually all the nodes will agree to the greatest `currentEpoch` in the cluster.
 
 This information is used when the state of the cluster is changed and a node seeks agreement in order to perform some action.
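One simple way to observe this agreement on a live deployment, assuming a running cluster on the usual example ports, is to compare the `cluster_current_epoch` field that every node reports:

    redis-cli -p 7000 CLUSTER INFO | grep cluster_current_epoch
    redis-cli -p 7001 CLUSTER INFO | grep cluster_current_epoch

Once gossip has propagated, every node should print the same, greatest epoch value.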
From ea5eaeb6503ee5d085c617961b41f00d34aea693 Mon Sep 17 00:00:00 2001 From: Oran Agra Date: Tue, 6 Oct 2020 12:27:49 +0300 Subject: [PATCH 0462/1457] update client list fields: tracking flags and argv-mem / tot-mem (#1409) --- commands/client-list.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/commands/client-list.md b/commands/client-list.md index 3843689be1..1ed45be669 100644 --- a/commands/client-list.md +++ b/commands/client-list.md @@ -31,6 +31,8 @@ Here is the meaning of the fields: * `omem`: output buffer memory usage * `events`: file descriptor events (see below) * `cmd`: last command played +* `argv-mem`: incomplete arguments for the next command (already extracted from query buffer) +* `tot-mem`: total memory consumed by this client in its various buffers The client flags can be a combination of: @@ -49,6 +51,8 @@ S: the client is a replica node connection to this instance u: the client is unblocked U: the client is connected via a Unix domain socket x: the client is in a MULTI/EXEC context +t: the client enabled keys tracking in order to perform client side caching +R: the client tracking target client is invalid ``` The file descriptor events can be: From 1963925f7cae5741d2b64b3a3296e0671c874e57 Mon Sep 17 00:00:00 2001 From: Felipe Machado <462154+felipou@users.noreply.github.com> Date: Thu, 8 Oct 2020 03:04:19 -0300 Subject: [PATCH 0463/1457] Add new LMOVE and BLMOVE commands deprecating [B]RPOPLPUSH (#1347) --- commands.json | 68 ++++++++++++++++++++++++++++++++ commands/blmove.md | 24 +++++++++++ commands/brpoplpush.md | 3 ++ commands/info.md | 2 +- commands/lmove.md | 81 ++++++++++++++++++++++++++++++++++++++ commands/rpoplpush.md | 3 ++ topics/data-types-intro.md | 4 +- topics/notifications.md | 1 + 8 files changed, 183 insertions(+), 3 deletions(-) create mode 100644 commands/blmove.md create mode 100644 commands/lmove.md diff --git a/commands.json b/commands.json index f6bb4016f3..1748905bbf 100644 --- a/commands.json +++ b/commands.json @@ -355,6 +355,42 @@ "since": "2.2.0", "group": "list" }, + "BLMOVE": { + "summary": "Pop an element from a list, push it to another list and return it; or block until one is available", + "complexity": "O(1)", + "arguments": [ + { + "name": "source", + "type": "key" + }, + { + "name": "destination", + "type": "key" + }, + { + "name": "wherefrom", + "type": "enum", + "enum": [ + "LEFT", + "RIGHT" + ] + }, + { + "name": "whereto", + "type": "enum", + "enum": [ + "LEFT", + "RIGHT" + ] + }, + { + "name": "timeout", + "type": "integer" + } + ], + "since": "6.2.0", + "group": "list" + }, "BZPOPMIN": { "summary": "Remove and return the member with the lowest score from one or more sorted sets, or block until one is available", "complexity": "O(log(N)) with N being the number of elements in the sorted set.", @@ -2625,6 +2661,38 @@ "since": "1.2.0", "group": "list" }, + "LMOVE": { + "summary": "Pop an element from a list, push it to another list and return it", + "complexity": "O(1)", + "arguments": [ + { + "name": "source", + "type": "key" + }, + { + "name": "destination", + "type": "key" + }, + { + "name": "wherefrom", + "type": "enum", + "enum": [ + "LEFT", + "RIGHT" + ] + }, + { + "name": "whereto", + "type": "enum", + "enum": [ + "LEFT", + "RIGHT" + ] + } + ], + "since": "6.2.0", + "group": "list" + }, "RPUSH": { "summary": "Append one or multiple elements to a list", "complexity": "O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.", diff --git 
a/commands/blmove.md b/commands/blmove.md
new file mode 100644
index 0000000000..e1d5be924b
--- /dev/null
+++ b/commands/blmove.md
@@ -0,0 +1,24 @@
+`BLMOVE` is the blocking variant of `LMOVE`.
+When `source` contains elements, this command behaves exactly like `LMOVE`.
+When used inside a `MULTI`/`EXEC` block, this command behaves exactly like `LMOVE`.
+When `source` is empty, Redis will block the connection until another client
+pushes to it or until `timeout` is reached.
+A `timeout` of zero can be used to block indefinitely.
+
+This command comes in place of the now deprecated `BRPOPLPUSH`. Calling it
+with the `RIGHT` and `LEFT` arguments is equivalent to calling `BRPOPLPUSH`.
+
+See `LMOVE` for more information.
+
+@return
+
+@bulk-string-reply: the element being popped from `source` and pushed to `destination`.
+If `timeout` is reached, a @nil-reply is returned.
+
+## Pattern: Reliable queue
+
+Please see the pattern description in the `LMOVE` documentation.
+
+## Pattern: Circular list
+
+Please see the pattern description in the `LMOVE` documentation.
diff --git a/commands/brpoplpush.md b/commands/brpoplpush.md
index 9a6fe376d9..7f260d8f6c 100644
--- a/commands/brpoplpush.md
+++ b/commands/brpoplpush.md
@@ -5,6 +5,9 @@ When `source` is empty, Redis will block the connection until another client
 pushes to it or until `timeout` is reached.
 A `timeout` of zero can be used to block indefinitely.
 
+As of Redis 6.2.0, BRPOPLPUSH is considered deprecated. Please use `BLMOVE` in
+new code.
+
 See `RPOPLPUSH` for more information.
 
 @return
diff --git a/commands/info.md b/commands/info.md
index 18202face6..7d18df11cd 100644
--- a/commands/info.md
+++ b/commands/info.md
@@ -78,7 +78,7 @@ Here is the meaning of all fields in the **clients** section:
 * `client_biggest_input_buf`: Biggest input buffer among current client connections
 * `blocked_clients`: Number of clients pending on a blocking call (`BLPOP`,
-  `BRPOP`, `BRPOPLPUSH`, `BZPOPMIN`, `BZPOPMAX`)
+  `BRPOP`, `BRPOPLPUSH`, `BLMOVE`, `BZPOPMIN`, `BZPOPMAX`)
 * `tracking_clients`: Number of clients being tracked (`CLIENT TRACKING`)
 * `clients_in_timeout_table`: Number of clients in the clients timeout table
 * `io_threads_active`: Flag indicating if I/O threads are active
diff --git a/commands/lmove.md b/commands/lmove.md
new file mode 100644
index 0000000000..ec62ced7ab
--- /dev/null
+++ b/commands/lmove.md
@@ -0,0 +1,81 @@
+Atomically returns and removes the first/last element (head/tail depending on
+the `wherefrom` argument) of the list stored at `source`, and pushes the
+element as the first/last element (head/tail depending on the `whereto`
+argument) of the list stored at `destination`.
+
+For example: consider `source` holding the list `a,b,c`, and `destination`
+holding the list `x,y,z`.
+Executing `LMOVE source destination RIGHT LEFT` results in `source` holding
+`a,b` and `destination` holding `c,x,y,z`.
+
+If `source` does not exist, the value `nil` is returned and no operation is
+performed.
+If `source` and `destination` are the same, the operation is equivalent to
+removing the first/last element from the list and pushing it as first/last
+element of the list, so it can be considered as a list rotation command (or a
+no-op if `wherefrom` is the same as `whereto`).
+
+This command comes in place of the now deprecated `RPOPLPUSH`. Calling it
+with the `RIGHT` and `LEFT` arguments is equivalent to calling `RPOPLPUSH`.
+
+@return
+
+@bulk-string-reply: the element being popped and pushed.
+
+@examples
+
+```cli
+RPUSH mylist "one"
+RPUSH mylist "two"
+RPUSH mylist "three"
+LMOVE mylist myotherlist RIGHT LEFT
+LMOVE mylist myotherlist LEFT RIGHT
+LRANGE mylist 0 -1
+LRANGE myotherlist 0 -1
+```
+
+## Pattern: Reliable queue
+
+Redis is often used as a messaging server to implement processing of background
+jobs or other kinds of messaging tasks.
+A simple form of queue is often obtained by pushing values into a list on the
+producer side, and waiting for these values on the consumer side using `RPOP`
+(using polling), or `BRPOP` if the client is better served by a blocking
+operation.
+
+However in this context the obtained queue is not _reliable_ as messages can
+be lost, for example when there is a network problem or if the consumer
+crashes just after the message is received but before it has been processed.
+
+`LMOVE` (or `BLMOVE` for the blocking variant) offers a way to avoid
+this problem: the consumer fetches the message and at the same time pushes it
+into a _processing_ list.
+It will use the `LREM` command in order to remove the message from the
+_processing_ list once the message has been processed.
+
+An additional client may monitor the _processing_ list for items that remain
+there for too long, and will push those timed-out items into the queue
+again if needed.
+
+## Pattern: Circular list
+
+Using `LMOVE` with the same source and destination key, a client can visit
+all the elements of an N-element list, one after the other, in O(N) without
+transferring the full list from the server to the client using a single `LRANGE`
+operation.
+
+The above pattern works even under the following two conditions:
+
+* There are multiple clients rotating the list: they'll fetch different
+  elements, until all the elements of the list are visited, and the process
+  restarts.
+* Other clients are actively pushing new items at the end of the list.
+
+The above makes it very simple to implement a system where a set of items must
+be processed by N workers continuously as fast as possible.
+An example is a monitoring system that must check that a set of web sites are
+reachable, with the smallest delay possible, using a number of parallel workers.
+
+Note that this implementation of workers is trivially scalable and reliable,
+because even if a message is lost the item is still in the queue and will be
+processed at the next iteration.
diff --git a/commands/rpoplpush.md b/commands/rpoplpush.md
index 49aa8fa798..121c05090f 100644
--- a/commands/rpoplpush.md
+++ b/commands/rpoplpush.md
@@ -13,6 +13,9 @@ If `source` and `destination` are the same, the operation is equivalent to
 removing the last element from the list and pushing it as first element of the
 list, so it can be considered as a list rotation command.
 
+As of Redis 6.2.0, RPOPLPUSH is considered deprecated. Please use `LMOVE` in
+new code.
+
 @return
 
 @bulk-string-reply: the element being popped and pushed.
diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md
index 13b55cb591..e1b6515fb3 100644
--- a/topics/data-types-intro.md
+++ b/topics/data-types-intro.md
@@ -445,8 +445,8 @@ A few things to note about `BRPOP`:
 There are more things you should know about lists and blocking ops. We
 suggest that you read more on the following:
 
-* It is possible to build safer queues or rotating queues using `RPOPLPUSH`.
-* There is also a blocking variant of the command, called `BRPOPLPUSH`.
+* It is possible to build safer queues or rotating queues using `LMOVE`.
+* There is also a blocking variant of the command, called `BLMOVE`.
 
 Automatic creation and removal of keys
 ---
diff --git a/topics/notifications.md b/topics/notifications.md
index 786e348a34..bf2c0f3b26 100644
--- a/topics/notifications.md
+++ b/topics/notifications.md
@@ -118,6 +118,7 @@ Different commands generate different kind of events according to the following
 * `LREM` generates an `lrem` event, and additionally a `del` event if the resulting list is empty and the key is removed.
 * `LTRIM` generates an `ltrim` event, and additionally a `del` event if the resulting list is empty and the key is removed.
 * `RPOPLPUSH` and `BRPOPLPUSH` generate an `rpop` event and an `lpush` event. In both cases the order is guaranteed (the `lpush` event will always be delivered after the `rpop` event). Additionally a `del` event will be generated if the resulting list is zero length and the key is removed.
+* `LMOVE` and `BLMOVE` generate an `lpop`/`rpop` event (depending on the `wherefrom` argument) and an `lpush`/`rpush` event (depending on the `whereto` argument). In both cases the order is guaranteed (the `lpush`/`rpush` event will always be delivered after the `lpop`/`rpop` event). Additionally a `del` event will be generated if the resulting list is zero length and the key is removed.
 * `HSET`, `HSETNX` and `HMSET` all generate a single `hset` event.
 * `HINCRBY` generates an `hincrby` event.
 * `HINCRBYFLOAT` generates an `hincrbyfloat` event.
From 4d64e538e115e0940e8e41719898540f32acd70f Mon Sep 17 00:00:00 2001
From: Lin Taylor
Date: Fri, 9 Oct 2020 10:56:14 -0400
Subject: [PATCH 0464/1457] Fix minor typos and line lengths in mass-insert.md
 (#921)

* Fix typo

* A couple more minor typos

* Wrap line lengths to 80 characters

Co-authored-by: Itamar Haber
---
 topics/mass-insert.md | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/topics/mass-insert.md b/topics/mass-insert.md
index ea81e09094..cfa8e305a7 100644
--- a/topics/mass-insert.md
+++ b/topics/mass-insert.md
@@ -122,7 +122,7 @@ first mass import session.
     errors: 0, replies: 1000
 
 How the pipe mode works under the hood
----------------------------------------
+--------------------------------------
 
 The magic needed inside the pipe mode of redis-cli is to be as fast as netcat
 and still be able to understand when the last reply was sent by the server
@@ -132,9 +132,17 @@ This is obtained in the following way:
 
 + redis-cli --pipe tries to send data as fast as possible to the server.
 + At the same time it reads data when available, trying to parse it.
-+ Once there is no more data to read from stdin, it sends a special **ECHO** command with a random 20 bytes string: we are sure this is the latest command sent, and we are sure we can match the reply checking if we receive the same 20 bytes as a bulk reply.
-+ Once this special final command is sent, the code receiving replies starts to match replies with these 20 bytes. When the matching reply is reached it can exit with success.
-
-Using this trick we don't need to parse the protocol we send to the server in order to understand how many commands we are sending, but just the replies.
-
-However while parsing the replies we take a counter of all the replies parsed so that at the end we are able to tell the user the amount of commands transferred to the server by the mass insert session.
++ Once there is no more data to read from stdin, it sends a special **ECHO**
+command with a random 20-byte string: we are sure this is the latest command
+sent, and we are sure we can match the reply checking if we receive the same
+20 bytes as a bulk reply.
++ Once this special final command is sent, the code receiving replies starts
+to match replies with these 20 bytes. When the matching reply is reached it
+can exit with success.
+
+Using this trick we don't need to parse the protocol we send to the server
+in order to understand how many commands we are sending, but just the replies.
+
+However while parsing the replies we keep a count of all the replies parsed
+so that at the end we are able to tell the user the amount of commands
+transferred to the server by the mass insert session.
From e0cdd4054825e4435cf0eb29f14b3b92f77d0d25 Mon Sep 17 00:00:00 2001
From: Stefan Merettig
Date: Fri, 9 Oct 2020 17:18:04 +0200
Subject: [PATCH 0465/1457] Fix typo in streams-intro (#1064)

* streams-intro: Grammar fixes

Co-authored-by: Itamar Haber
---
 topics/streams-intro.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/topics/streams-intro.md b/topics/streams-intro.md
index e0bdc03673..fb6bae323a 100644
--- a/topics/streams-intro.md
+++ b/topics/streams-intro.md
@@ -596,7 +596,7 @@ Many applications do not want to collect data into a stream forever. Sometimes i
 
 Using **MAXLEN** the old entries are automatically evicted when the specified length is reached, so that the stream is taken at a constant size. There is currently no option to tell the stream to just retain items that are not older than a given amount, because such command, in order to run consistently, would have to potentially block for a lot of time in order to evict items. Imagine for example what happens if there is an insertion spike, then a long pause, and another insertion, all with the same maximum time. The stream would block to evict the data that became too old during the pause. So it is up to the user to do some planning and understand what is the maximum stream length desired. Moreover, while the length of the stream is proportional to the memory used, trimming by time is less simple to control and anticipate: it depends on the insertion rate that is a variable often changing over time (and when it does not change, then to just trim by size is trivial).
 
-However trimming with **MAXLEN** can be expensive: streams are represented by macro nodes into a radix tree, in order to be very memory efficient. Altering the single macro node, consisting of a few tens of elements, is not optimal. So it is possible to give the command in the following special form:
+However trimming with **MAXLEN** can be expensive: streams are represented by macro nodes into a radix tree, in order to be very memory efficient. Altering the single macro node, consisting of a few tens of elements, is not optimal. So it's possible to use the command in the following special form:
 
 ```
 XADD mystream MAXLEN ~ 1000 * ... entry fields here ...
 ```
 
 The `~` argument between the **MAXLEN** option and the actual count means, I don't really need this to be exactly 1000 items. It can be 1000 or 1010 or 1030, just make sure to save at least 1000 items. With this argument, the trimming is performed only when we can remove a whole node. This makes it much more efficient, and it is usually what you want.
-There is also the **XTRIM** command available, which performs something very similar to what the **MAXLEN** option does above, but this command does not need to add anything, it can be run against any stream in a standalone way.
+There is also the **XTRIM** command, which performs something very similar to what the **MAXLEN** option does above, except that it can be run by itself:
 
 ```
 > XTRIM mystream MAXLEN 10
 ```
 
 Or, as for the **XADD** option:
 
 ```
 > XTRIM mystream MAXLEN ~ 10
 ```
 
-However, **XTRIM** is designed to accept different trimming strategies, even if currently only **MAXLEN** is implemented. Given that this is an explicit command, it is possible that in the future it will allow to specify trimming by time, because the user calling this command in a stand-alone way is supposed to know what she or he is doing.
+However, **XTRIM** is designed to accept different trimming strategies, even if only **MAXLEN** is currently implemented.
 
-One useful eviction strategy that **XTRIM** should have is probably the ability to remove by a range of IDs. This is currently not possible, but will be likely implemented in the future in order to more easily use **XRANGE** and **XTRIM** together to move data from Redis to other storage systems if needed.
+As **XTRIM** is an explicit command, the user is expected to know about the possible shortcomings of different trimming strategies. As such, it's possible that trimming by time will be implemented in the future.
+
+Another useful eviction strategy that **XTRIM** may support later is removal by a range of IDs, making it easier to use **XRANGE** and **XTRIM** together to move data from Redis to other storage systems if needed.
 
 ## Special IDs in the streams API
 
-You may have noticed that there are several special IDs that can be
-used in the Redis streams API. Here is a short recap, so that they can make more
-sense in the future.
+You may have noticed that there are several special IDs that can be used in the Redis streams API. Here is a short recap, so that they can make more sense in the future.
 
 The first two special IDs are `-` and `+`, and are used in range queries with the `XRANGE` command. Those two IDs respectively mean the smallest ID possible (that is basically `0-1`) and the greatest ID possible (that is `18446744073709551615-18446744073709551615`). As you can see it is a lot cleaner to write `-` and `+` instead of those numbers.
From 27b34645086e3fa3f00b134dcf2ed063044819a0 Mon Sep 17 00:00:00 2001
From: piglig <1197437384@qq.com>
Date: Fri, 9 Oct 2020 23:20:35 +0800
Subject: [PATCH 0466/1457] Fix document spelling errors (#1413)

Co-authored-by: root
---
 topics/rediscli.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/topics/rediscli.md b/topics/rediscli.md
index 26b2535e64..6b6c7d0643 100644
--- a/topics/rediscli.md
+++ b/topics/rediscli.md
@@ -350,7 +350,7 @@ syntax hints. This behavior can be turned on and off via the CLI preferences.
 
 ## Preferences
 
-TThere are two ways to customize the CLI's behavior. The file `.redisclirc`
+There are two ways to customize the CLI's behavior. The file `.redisclirc`
 in your home directory is loaded by the CLI on startup. Preferences can
 also be set during a CLI session, in which case they will last only
 the duration of the session.
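To make the exact-versus-approximate trimming trade-off above concrete, here is an illustrative `redis-cli` session (the counts are invented; with `~` the server only evicts whole macro nodes, so the resulting length may slightly exceed the requested maximum, while the exact form always trims down to it). `XTRIM` replies with the number of entries evicted:

```
> XLEN mystream
(integer) 1205
> XTRIM mystream MAXLEN ~ 1000
(integer) 100
> XLEN mystream
(integer) 1105
> XTRIM mystream MAXLEN 1000
(integer) 105
> XLEN mystream
(integer) 1000
```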
From e978cf5875a257ba4a3389abfa844a23df28c0f0 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Sat, 10 Oct 2020 14:35:03 +0300 Subject: [PATCH 0467/1457] Uses double timeout value in blocking commands (#1411) --- commands.json | 12 ++++++------ commands/blpop.md | 6 +++++- commands/brpop.md | 4 ++++ commands/brpoplpush.md | 4 ++++ commands/bzpopmax.md | 6 +++++- commands/bzpopmin.md | 6 +++++- 6 files changed, 29 insertions(+), 9 deletions(-) diff --git a/commands.json b/commands.json index 1748905bbf..b65afe1a8b 100644 --- a/commands.json +++ b/commands.json @@ -312,7 +312,7 @@ }, { "name": "timeout", - "type": "integer" + "type": "double" } ], "since": "2.0.0", @@ -329,7 +329,7 @@ }, { "name": "timeout", - "type": "integer" + "type": "double" } ], "since": "2.0.0", @@ -349,7 +349,7 @@ }, { "name": "timeout", - "type": "integer" + "type": "double" } ], "since": "2.2.0", @@ -385,7 +385,7 @@ }, { "name": "timeout", - "type": "integer" + "type": "double" } ], "since": "6.2.0", @@ -402,7 +402,7 @@ }, { "name": "timeout", - "type": "integer" + "type": "double" } ], "since": "5.0.0", @@ -419,7 +419,7 @@ }, { "name": "timeout", - "type": "integer" + "type": "double" } ], "since": "5.0.0", diff --git a/commands/blpop.md b/commands/blpop.md index b0777f9e2f..3a0abf6f9c 100644 --- a/commands/blpop.md +++ b/commands/blpop.md @@ -36,7 +36,7 @@ the client will unblock returning a `nil` multi-bulk value when the specified timeout has expired without a push operation against at least one of the specified keys. -**The timeout argument is interpreted as an integer value specifying the maximum number of seconds to block**. A timeout of zero can be used to block indefinitely. +**The timeout argument is interpreted as a double value specifying the maximum number of seconds to block**. A timeout of zero can be used to block indefinitely. ## What key is served first? What client? What element? Priority ordering details. @@ -91,6 +91,10 @@ If you like science fiction, think of time flowing at infinite speed inside a where an element was popped and the second element being the value of the popped element. +@history + +* `>= 6.0`: `timeout` is interpreted as a double instead of an integer. + @examples ``` diff --git a/commands/brpop.md b/commands/brpop.md index dfa2b91cac..1e0f22a0a0 100644 --- a/commands/brpop.md +++ b/commands/brpop.md @@ -19,6 +19,10 @@ the tail of a list instead of popping from the head. where an element was popped and the second element being the value of the popped element. +@history + +* `>= 6.0`: `timeout` is interpreted as a double instead of an integer. + @examples ``` diff --git a/commands/brpoplpush.md b/commands/brpoplpush.md index 7f260d8f6c..3547ff49ee 100644 --- a/commands/brpoplpush.md +++ b/commands/brpoplpush.md @@ -15,6 +15,10 @@ See `RPOPLPUSH` for more information. @bulk-string-reply: the element being popped from `source` and pushed to `destination`. If `timeout` is reached, a @nil-reply is returned. +@history + +* `>= 6.0`: `timeout` is interpreted as a double instead of an integer. + ## Pattern: Reliable queue Please see the pattern description in the `RPOPLPUSH` documentation. diff --git a/commands/bzpopmax.md b/commands/bzpopmax.md index 1c99c247a4..d3a38cd07c 100644 --- a/commands/bzpopmax.md +++ b/commands/bzpopmax.md @@ -5,7 +5,7 @@ members to pop from any of the given sorted sets. A member with the highest score is popped from first sorted set that is non-empty, with the given keys being checked in the order that they are given. 
-The `timeout` argument is interpreted as an integer value specifying the maximum
+The `timeout` argument is interpreted as a double value specifying the maximum
 number of seconds to block. A timeout of zero can be used to block indefinitely.
 
 See the [BZPOPMIN documentation][cb] for the exact semantics, since `BZPOPMAX`
@@ -23,6 +23,10 @@ with the highest scores instead of popping the ones with the lowest scores.
   where a member was popped, the second element is the popped member itself,
   and the third element is the score of the popped element.
 
+@history
+
+* `>= 6.0`: `timeout` is interpreted as a double instead of an integer.
+
 @examples
 
 ```
diff --git a/commands/bzpopmin.md b/commands/bzpopmin.md
index 1b75aa4f4a..651e229301 100644
--- a/commands/bzpopmin.md
+++ b/commands/bzpopmin.md
@@ -5,7 +5,7 @@ members to pop from any of the given sorted sets.
 A member with the lowest score is popped from first sorted set that is
 non-empty, with the given keys being checked in the order that they are given.
 
-The `timeout` argument is interpreted as an integer value specifying the maximum
+The `timeout` argument is interpreted as a double value specifying the maximum
 number of seconds to block. A timeout of zero can be used to block indefinitely.
 
 See the [BLPOP documentation][cl] for the exact semantics, since `BZPOPMIN` is
@@ -23,6 +23,10 @@ popped from.
   where a member was popped, the second element is the popped member itself,
   and the third element is the score of the popped element.
 
+@history
+
+* `>= 6.0`: `timeout` is interpreted as a double instead of an integer.
+
 @examples
 
 ```
From 8fa5ba577f3f9a90f55027c5a0d83bec8271ad27 Mon Sep 17 00:00:00 2001
From: Cristian Greco
Date: Sat, 10 Oct 2020 12:37:35 +0100
Subject: [PATCH 0468/1457] Rewrite all links to redis.io with https scheme.
 (#1412)

On some pages, Firefox shows a warning in the url bar when visiting the
website over https, caused by images being loaded over http.

For the sake of completeness, here I've moved all internal links to use
https scheme.
---
 commands/set.md               |  2 +-
 commands/setnx.md             |  2 +-
 topics/clients.md             |  4 ++--
 topics/cluster-spec.md        |  2 +-
 topics/data-types-intro.md    |  2 +-
 topics/indexes.md             |  6 +++---
 topics/lru-cache.md           |  2 +-
 topics/memory-optimization.md |  2 +-
 topics/partitioning.md        |  2 +-
 topics/pipelining.md          |  2 +-
 topics/quickstart.md          | 10 +++++-----
 topics/twitter-clone.md       |  2 +-
 12 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/commands/set.md b/commands/set.md
index 7d9ec38476..4697f33652 100644
--- a/commands/set.md
+++ b/commands/set.md
@@ -38,7 +38,7 @@ SET anotherkey "will expire in a minute" EX 60
 
 ## Patterns
 
-**Note:** The following pattern is discouraged in favor of [the Redlock algorithm](http://redis.io/topics/distlock) which is only a bit more complex to implement, but offers better guarantees and is fault tolerant.
+**Note:** The following pattern is discouraged in favor of [the Redlock algorithm](https://redis.io/topics/distlock) which is only a bit more complex to implement, but offers better guarantees and is fault tolerant.
 
 The command `SET resource-name anystring NX EX max-lock-time` is a simple way
 to implement a locking system with Redis.
diff --git a/commands/setnx.md b/commands/setnx.md
index 94d0b517b9..833573c45e 100644
--- a/commands/setnx.md
+++ b/commands/setnx.md
@@ -22,7 +22,7 @@ GET mykey
 
 **Please note that:**
 
-1. 
The following pattern is discouraged in favor of [the Redlock algorithm](http://redis.io/topics/distlock) which is only a bit more complex to implement, but offers better guarantees and is fault tolerant. +1. The following pattern is discouraged in favor of [the Redlock algorithm](https://redis.io/topics/distlock) which is only a bit more complex to implement, but offers better guarantees and is fault tolerant. 2. We document the old pattern anyway because certain existing implementations link to this page as a reference. Moreover it is an interesting example of how Redis commands can be used in order to mount programming primitives. 3. Anyway even assuming a single-instance locking primitive, starting with 2.6.12 it is possible to create a much simpler locking primitive, equivalent to the one discussed here, using the `SET` command to acquire the lock, and a simple Lua script to release the lock. The pattern is documented in the `SET` command page. diff --git a/topics/clients.md b/topics/clients.md index f621d0f653..da8e6d75ff 100644 --- a/topics/clients.md +++ b/topics/clients.md @@ -153,11 +153,11 @@ In the above example session two clients are connected to the Redis server. The * **name**: The client name as set by `CLIENT SETNAME`. * **age**: The number of seconds the connection existed for. * **idle**: The number of seconds the connection is idle. -* **flags**: The kind of client (N means normal client, check the [full list of flags](http://redis.io/commands/client-list)). +* **flags**: The kind of client (N means normal client, check the [full list of flags](https://redis.io/commands/client-list)). * **omem**: The amount of memory used by the client for the output buffer. * **cmd**: The last executed command. -See the [CLIENT LIST](http://redis.io/commands/client-list) documentation for the full list of fields and their meaning. +See the [CLIENT LIST](https://redis.io/commands/client-list) documentation for the full list of fields and their meaning. Once you have the list of clients, you can easily close the connection with a client using the `CLIENT KILL` command specifying the client address as argument. diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md index 2a81b52f24..aba8e73588 100644 --- a/topics/cluster-spec.md +++ b/topics/cluster-spec.md @@ -273,7 +273,7 @@ the node was pinged and the last time the pong was received, the current *configuration epoch* of the node (explained later in this specification), the link state and finally the set of hash slots served. -A detailed [explanation of all the node fields](http://redis.io/commands/cluster-nodes) is described in the `CLUSTER NODES` documentation. +A detailed [explanation of all the node fields](https://redis.io/commands/cluster-nodes) is described in the `CLUSTER NODES` documentation. The `CLUSTER NODES` command can be sent to any node in the cluster and provides the state of the cluster and the information for each node according to the local view the queried node has of the cluster. diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md index e1b6515fb3..f7c00ef576 100644 --- a/topics/data-types-intro.md +++ b/topics/data-types-intro.md @@ -548,7 +548,7 @@ as well, like `HINCRBY`: > hincrby user:1000 birthyear 10 (integer) 1997 -You can find the [full list of hash commands in the documentation](http://redis.io/commands#hash). +You can find the [full list of hash commands in the documentation](https://redis.io/commands#hash). 
It is worth noting that small hashes (i.e., a few elements with small values) are encoded in special way in memory that make them very memory efficient. diff --git a/topics/indexes.md b/topics/indexes.md index f2ef46b332..1460022e61 100644 --- a/topics/indexes.md +++ b/topics/indexes.md @@ -547,7 +547,7 @@ our coordinates. Both variables max value is 400. The blue box in the picture represents our query. We want all the points where `x` is between 50 and 100, and where `y` is between 100 and 300. -![Points in the space](http://redis.io/images/redisdoc/2idx_0.png) +![Points in the space](https://redis.io/images/redisdoc/2idx_0.png) In order to represent data that makes these kinds of queries fast to perform, we start by padding our numbers with 0. So for example imagine we want to @@ -586,7 +586,7 @@ variable is between 70 and 79, and the `y` variable is between 200 and 209. We can write random points in this interval, in order to identify this specific area: -![Small area](http://redis.io/images/redisdoc/2idx_1.png) +![Small area](https://redis.io/images/redisdoc/2idx_1.png) So the above lexicographic query allows us to easily query for points in a specific square in the picture. However the square may be too small for @@ -601,7 +601,7 @@ This time the range represents all the points where `x` is between 0 and 99 and `y` is between 200 and 299. Drawing random points in this interval shows us this larger area: -![Large area](http://redis.io/images/redisdoc/2idx_2.png) +![Large area](https://redis.io/images/redisdoc/2idx_2.png) Oops now our area is ways too big for our query, and still our search box is not completely included. We need more granularity, but we can easily obtain diff --git a/topics/lru-cache.md b/topics/lru-cache.md index 20d4f355f2..91a7184f0e 100644 --- a/topics/lru-cache.md +++ b/topics/lru-cache.md @@ -105,7 +105,7 @@ costs more memory. However the approximation is virtually equivalent for the application using Redis. The following is a graphical comparison of how the LRU approximation used by Redis compares with true LRU. -![LRU comparison](http://redis.io/images/redisdoc/lru_comparison.png) +![LRU comparison](https://redis.io/images/redisdoc/lru_comparison.png) The test to generate the above graphs filled a Redis server with a given number of keys. The keys were accessed from the first to the last, so that the first keys are the best candidates for eviction using an LRU algorithm. Later more 50% of keys are added, in order to force half of the old keys to be evicted. diff --git a/topics/memory-optimization.md b/topics/memory-optimization.md index 35b3b8fefd..3b09a9e0fb 100644 --- a/topics/memory-optimization.md +++ b/topics/memory-optimization.md @@ -187,7 +187,7 @@ To store user keys, Redis allocates at most as much memory as the `maxmemory` setting enables (however there are small extra allocations possible). The exact value can be set in the configuration file or set later via -`CONFIG SET` (see [Using memory as an LRU cache for more info](http://redis.io/topics/lru-cache)). There are a few things that should be noted about how +`CONFIG SET` (see [Using memory as an LRU cache for more info](https://redis.io/topics/lru-cache)). There are a few things that should be noted about how Redis manages memory: * Redis will not always free up (return) memory to the OS when keys are removed. 
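As a companion note to the `maxmemory` discussion above: the limit can be read and changed at runtime with the `CONFIG` command, and `CONFIG GET` reports it in bytes. A minimal sketch (the `100mb` figure is an arbitrary example value):

```
> CONFIG SET maxmemory 100mb
OK
> CONFIG GET maxmemory
1) "maxmemory"
2) "104857600"
```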
diff --git a/topics/partitioning.md b/topics/partitioning.md index 2989ae4727..bba6ac2535 100644 --- a/topics/partitioning.md +++ b/topics/partitioning.md @@ -116,4 +116,4 @@ Clients supporting consistent hashing An alternative to Twemproxy is to use a client that implements client side partitioning via consistent hashing or other similar algorithms. There are multiple Redis clients with support for consistent hashing, notably [Redis-rb](https://github.com/redis/redis-rb) and [Predis](https://github.com/nrk/predis). -Please check the [full list of Redis clients](http://redis.io/clients) to check if there is a mature client with consistent hashing implementation for your language. +Please check the [full list of Redis clients](https://redis.io/clients) to check if there is a mature client with consistent hashing implementation for your language. diff --git a/topics/pipelining.md b/topics/pipelining.md index 20c30a2a71..3554eaf4f8 100644 --- a/topics/pipelining.md +++ b/topics/pipelining.md @@ -129,7 +129,7 @@ Pipelining VS Scripting Using [Redis scripting](/commands/eval) (available in Redis version 2.6 or greater) a number of use cases for pipelining can be addressed more efficiently using scripts that perform a lot of the work needed at the server side. A big advantage of scripting is that it is able to both read and write data with minimal latency, making operations like *read, compute, write* very fast (pipelining can't help in this scenario since the client needs the reply of the read command before it can call the write command). -Sometimes the application may also want to send `EVAL` or `EVALSHA` commands in a pipeline. This is entirely possible and Redis explicitly supports it with the [SCRIPT LOAD](http://redis.io/commands/script-load) command (it guarantees that `EVALSHA` can be called without the risk of failing). +Sometimes the application may also want to send `EVAL` or `EVALSHA` commands in a pipeline. This is entirely possible and Redis explicitly supports it with the [SCRIPT LOAD](https://redis.io/commands/script-load) command (it guarantees that `EVALSHA` can be called without the risk of failing). Appendix: Why are busy loops slow even on the loopback interface? --- diff --git a/topics/quickstart.md b/topics/quickstart.md index 733023884a..b1addb4f2a 100644 --- a/topics/quickstart.md +++ b/topics/quickstart.md @@ -19,7 +19,7 @@ Redis has no dependencies other than a working GCC compiler and libc. Installing it using the package manager of your Linux distribution is somewhat discouraged as usually the available version is not the latest. -You can either download the latest Redis tar ball from the [redis.io](http://redis.io) web site, or you can alternatively use this special URL that always points to the latest stable Redis version, that is, [http://download.redis.io/redis-stable.tar.gz](http://download.redis.io/redis-stable.tar.gz). +You can either download the latest Redis tar ball from the [redis.io](https://redis.io) web site, or you can alternatively use this special URL that always points to the latest stable Redis version, that is, [http://download.redis.io/redis-stable.tar.gz](http://download.redis.io/redis-stable.tar.gz). In order to compile Redis follow these simple steps: @@ -83,7 +83,7 @@ Another interesting way to run redis-cli is without arguments: the program will redis 127.0.0.1:6379> get mykey "somevalue" -At this point you are able to talk with Redis. 
It is the right time to pause a bit with this tutorial and start the [fifteen minutes introduction to Redis data types](http://redis.io/topics/data-types-intro) in order to learn a few Redis commands. Otherwise if you already know a few basic Redis commands you can keep reading. +At this point you are able to talk with Redis. It is the right time to pause a bit with this tutorial and start the [fifteen minutes introduction to Redis data types](https://redis.io/topics/data-types-intro) in order to learn a few Redis commands. Otherwise if you already know a few basic Redis commands you can keep reading. Securing Redis === @@ -109,7 +109,7 @@ Using Redis from your application Of course using Redis just from the command line interface is not enough as the goal is to use it from your application. In order to do so you need to download and install a Redis client library for your programming language. -You'll find a [full list of clients for different languages in this page](http://redis.io/clients). +You'll find a [full list of clients for different languages in this page](https://redis.io/clients). For instance if you happen to use the Ruby programming language our best advice is to use the [Redis-rb](https://github.com/redis/redis-rb) client. @@ -135,12 +135,12 @@ commands calling methods. A short interactive example using Ruby: Redis persistence ================= -You can learn [how Redis persistence works on this page](http://redis.io/topics/persistence), however what is important to understand for a quick start is that by default, if you start Redis with the default configuration, Redis will spontaneously save the dataset only from time to time (for instance after at least five minutes if you have at least 100 changes in your data), so if you want your database to persist and be reloaded after a restart make sure to call the **SAVE** command manually every time you want to force a data set snapshot. Otherwise make sure to shutdown the database using the **SHUTDOWN** command: +You can learn [how Redis persistence works on this page](https://redis.io/topics/persistence), however what is important to understand for a quick start is that by default, if you start Redis with the default configuration, Redis will spontaneously save the dataset only from time to time (for instance after at least five minutes if you have at least 100 changes in your data), so if you want your database to persist and be reloaded after a restart make sure to call the **SAVE** command manually every time you want to force a data set snapshot. Otherwise make sure to shutdown the database using the **SHUTDOWN** command: $ redis-cli shutdown This way Redis will make sure to save the data on disk before quitting. -Reading the [persistence page](http://redis.io/topics/persistence) is strongly suggested in order to better understand how Redis persistence works. +Reading the [persistence page](https://redis.io/topics/persistence) is strongly suggested in order to better understand how Redis persistence works. Installing Redis more properly ============================== diff --git a/topics/twitter-clone.md b/topics/twitter-clone.md index d52a079191..2603eee868 100644 --- a/topics/twitter-clone.md +++ b/topics/twitter-clone.md @@ -143,7 +143,7 @@ also to retrieve its score if it exists, we use the `ZSCORE` command: Sorted Sets are a very powerful data structure, you can query elements by score range, lexicographically, in reverse order, and so forth. 
-To know more [please check the Sorted Set sections in the official Redis commands documentation](http://redis.io/commands/#sorted_set). +To know more [please check the Sorted Set sections in the official Redis commands documentation](https://redis.io/commands/#sorted_set). The Hash data type --- From 99a0aa5242ce7fb1f4b2293a3ef8b142e46f8dcb Mon Sep 17 00:00:00 2001 From: Guy Korland Date: Sun, 11 Oct 2020 20:52:13 +0300 Subject: [PATCH 0469/1457] Add redex (#1415) * Add redex * Update modules.json --- modules.json | 41 +++++++++++++++-------------------------- 1 file changed, 15 insertions(+), 26 deletions(-) diff --git a/modules.json b/modules.json index c7e9915ea6..61d1d4b333 100644 --- a/modules.json +++ b/modules.json @@ -7,7 +7,7 @@ "authors": [ "antirez" ], - "stars": 332 + "stars": 457 }, { "name": "RedisGears", @@ -81,29 +81,7 @@ "dvirsky", "RedisLabs" ], - "stars": 2585 - }, - { - "name": "topk", - "license": "Redis Source Available License", - "repository": "https://github.com/RedisLabsModules/topk", - "description": "An almost deterministic top k elements counter", - "authors": [ - "itamarhaber", - "RedisLabs" - ], - "stars": 32 - }, - { - "name": "countminsketch", - "license": "Redis Source Available License", - "repository": "https://github.com/RedisLabsModules/countminsketch", - "description": "An apporximate frequency counter", - "authors": [ - "itamarhaber", - "RedisLabs" - ], - "stars": 39 + "stars": 2616 }, { "name": "RedisBloom", @@ -135,7 +113,7 @@ "danni-m", "RedisLabs" ], - "stars": 452 + "stars": 455 }, { "name": "RedisAI", @@ -339,7 +317,8 @@ ], "stars":0 }, - { "name":"Redis-ImageScout", + { + "name":"Redis-ImageScout", "license": "pHash Redis Source Available License", "repository": "https://github.com/starkdg/Redis-ImageScout.git", "description": "Redis module for Indexing of pHash Image fingerprints for Near-Duplicate Detection", @@ -347,5 +326,15 @@ "starkdg" ], "stars":2 + }, + { + "name":"redex", + "license": "AGPL-3.0", + "repository": "https://github.com/RedisLabsModules/redex.git", + "description": "Extension modules to Redis' native data types and commands", + "authors": [ + "itamarhaber" + ], + "stars":52 } ] From c387a8f0c330f2af8c96cf46919e2f1248316a37 Mon Sep 17 00:00:00 2001 From: Wen Hui Date: Sun, 25 Oct 2020 08:45:13 -0400 Subject: [PATCH 0470/1457] Updates to Sentinel doc (#1417) * Adds ACL authentication for Redis and Sentinel * Documents missing commands and minimal versions * Generic edits Co-authored-by: Oran Agra Co-authored-by: Itamar Haber --- topics/sentinel.md | 114 ++++++++++++++++++++++++++++++++++----------- 1 file changed, 87 insertions(+), 27 deletions(-) diff --git a/topics/sentinel.md b/topics/sentinel.md index 70458a7532..f82c1d097f 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -583,30 +583,50 @@ so forth. Sentinel commands --- -The following is a list of accepted commands, not covering commands used in -order to modify the Sentinel configuration, which are covered later. - +The `SENTINEL` command, as of Redis 2.8, is the main API for Sentinel. The following is the list of its subcommands (minimal version is noted for where applicable): + +* **SENTINEL CKQUORUM ``** Check if the current Sentinel configuration is able to reach the quorum needed to failover a master, and the majority needed to authorize the failover. This command should be used in monitoring systems to check if a Sentinel deployment is ok. 
+* **SENTINEL FLUSHCONFIG** Force Sentinel to rewrite its configuration on disk, including the current Sentinel state. Normally Sentinel rewrites the configuration every time something changes in its state (in the context of the subset of the state which is persisted on disk across restart). However sometimes it is possible that the configuration file is lost because of operation errors, disk failures, package upgrade scripts or configuration managers. In those cases a way to force Sentinel to rewrite the configuration file is handy. This command works even if the previous configuration file is completely missing.
+* **SENTINEL FAILOVER ``** Force a failover as if the master was not reachable, and without asking for agreement to other Sentinels (however a new version of the configuration will be published so that the other Sentinels will update their configurations).
+* **SENTINEL GET-MASTER-ADDR-BY-NAME ``** Return the ip and port number of the master with that name. If a failover is in progress or terminated successfully for this master it returns the address and port of the promoted replica.
+* **SENTINEL INFO-CACHE** (`>= 3.2`) Return cached `INFO` output from masters and replicas.
+* **SENTINEL IS-MASTER-DOWN-BY-ADDR ** Check if the master specified by ip:port is down from current Sentinel's point of view. This command is mostly for internal use.
+* **SENTINEL MASTER ``** Show the state and info of the specified master.
+* **SENTINEL MASTERS** Show a list of monitored masters and their state.
+* **SENTINEL MONITOR** Start Sentinel's monitoring. Refer to the [_Reconfiguring Sentinel at Runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.
+* **SENTINEL MYID** (`>= 6.2`) Return the ID of the Sentinel instance.
+* **SENTINEL PENDING-SCRIPTS** This command returns information about pending scripts.
+* **SENTINEL REMOVE** Stop Sentinel's monitoring. Refer to the [_Reconfiguring Sentinel at Runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.
+* **SENTINEL REPLICAS ``** (`>= 5.0`) Show a list of replicas for this master, and their state.
+* **SENTINEL SENTINELS ``** Show a list of sentinel instances for this master, and their state.
+* **SENTINEL SET** Set Sentinel's monitoring configuration. Refer to the [_Reconfiguring Sentinel at Runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.
+* **SENTINEL SIMULATE-FAILURE (crash-after-election|crash-after-promotion|help)** (`>= 3.2`) This command simulates different Sentinel crash scenarios.
+* **SENTINEL RESET ``** This command will reset all the masters with matching name. The pattern argument is a glob-style pattern. The reset process clears any previous state in a master (including a failover in progress), and removes every replica and sentinel already discovered and associated with the master.
+
+For connection management and administration purposes, Sentinel supports the following subset of Redis' commands:
+
+* **ACL** (`>= 6.2`) This command manages the Sentinel Access Control List. For more information refer to the [ACL](/topics/acl) documentation page and the [_Sentinel Access Control List authentication_](#sentinel-access-control-list-authentication).
+* **AUTH** (`>= 5.0.1`) Authenticate a client connection. For more information refer to the `AUTH` command and the [_Configuring Sentinel instances with authentication_ section](#configuring-sentinel-instances-with-authentication).
+* **CLIENT** This command manages client connections. For more information refer to its subcommands' pages.
+* **COMMAND** (`>= 6.2`) This command returns information about commands. For more information refer to the `COMMAND` command and its various subcommands.
+* **HELLO** (`>= 6.0`) Switch the connection's protocol. For more information refer to the `HELLO` command.
+* **INFO** Return information and statistics about the Sentinel server. For more information see the `INFO` command.
 * **PING** This command simply returns PONG.
-* **SENTINEL masters** Show a list of monitored masters and their state.
-* **SENTINEL master ``** Show the state and info of the specified master.
-* **SENTINEL replicas ``** Show a list of replicas for this master, and their state.
-* **SENTINEL sentinels ``** Show a list of sentinel instances for this master, and their state.
-* **SENTINEL get-master-addr-by-name ``** Return the ip and port number of the master with that name. If a failover is in progress or terminated successfully for this master it returns the address and port of the promoted replica.
-* **SENTINEL reset ``** This command will reset all the masters with matching name. The pattern argument is a glob-style pattern. The reset process clears any previous state in a master (including a failover in progress), and removes every replica and sentinel already discovered and associated with the master.
-* **SENTINEL failover ``** Force a failover as if the master was not reachable, and without asking for agreement to other Sentinels (however a new version of the configuration will be published so that the other Sentinels will update their configurations).
-* **SENTINEL ckquorum ``** Check if the current Sentinel configuration is able to reach the quorum needed to failover a master, and the majority needed to authorize the failover. This command should be used in monitoring systems to check if a Sentinel deployment is ok.
-* **SENTINEL flushconfig** Force Sentinel to rewrite its configuration on disk, including the current Sentinel state. Normally Sentinel rewrites the configuration every time something changes in its state (in the context of the subset of the state which is persisted on disk across restart). However sometimes it is possible that the configuration file is lost because of operation errors, disk failures, package upgrade scripts or configuration managers. In those cases a way to to force Sentinel to rewrite the configuration file is handy. This command works even if the previous configuration file is completely missing.
+* **ROLE** This command returns the string "sentinel" and a list of monitored masters. For more information refer to the `ROLE` command.
+* **SHUTDOWN** Shut down the Sentinel instance.
+
+Lastly, Sentinel also supports the `SUBSCRIBE`, `UNSUBSCRIBE`, `PSUBSCRIBE` and `PUNSUBSCRIBE` commands. Refer to the [_Pub/Sub Messages_ section](#pubsub-messages) for more details.
 
 Reconfiguring Sentinel at Runtime
 ---
 
 Starting with Redis version 2.8.4, Sentinel provides an API in order to add, remove, or change the configuration of a given master. Note that if you have multiple sentinels you should apply the changes to all of your instances for Redis Sentinel to work properly. This means that changing the configuration of a single Sentinel does not automatically propagate the changes to the other Sentinels in the network.
 
-The following is a list of `SENTINEL` sub commands used in order to update the configuration of a Sentinel instance.
+The following is a list of `SENTINEL` subcommands used in order to update the configuration of a Sentinel instance.
 
 * **SENTINEL MONITOR `` `` `` ``** This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. It is identical to the `sentinel monitor` configuration directive in the `sentinel.conf` configuration file, with the difference that you can't use a hostname as `ip`, but you need to provide an IPv4 or IPv6 address.
 * **SENTINEL REMOVE ``** is used in order to remove the specified master: the master will no longer be monitored, and will totally be removed from the internal state of the Sentinel, so it will no longer be listed by `SENTINEL masters` and so forth.
 * **SENTINEL SET `` `