From 7282fadc4b1e1a2c738b0790c292b288d2fe6b68 Mon Sep 17 00:00:00 2001 From: monkey92t Date: Thu, 15 Jul 2021 15:48:04 +0800 Subject: [PATCH 001/813] GEOSEARCH, GEOSEARCHSTORE more detailed examples (#1597) --- commands/geosearch.md | 2 +- commands/geosearchstore.md | 15 +++++++++++++++ 2 files changed, 16 insertions(+), 1 deletion(-) diff --git a/commands/geosearch.md b/commands/geosearch.md index 8049ea2d9e..972c1c984d 100644 --- a/commands/geosearch.md +++ b/commands/geosearch.md @@ -47,5 +47,5 @@ When additional information is returned as an array of arrays for each item, the GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania" GEOADD Sicily 12.758489 38.788135 "edge1" 17.241510 38.788135 "edge2" GEOSEARCH Sicily FROMLONLAT 15 37 BYRADIUS 200 km ASC -GEOSEARCH Sicily FROMLONLAT 15 37 BYBOX 400 400 km ASC +GEOSEARCH Sicily FROMLONLAT 15 37 BYBOX 400 400 km ASC WITHCOORD WITHDIST ``` diff --git a/commands/geosearchstore.md b/commands/geosearchstore.md index fc47a3f05e..2a4fc38d15 100644 --- a/commands/geosearchstore.md +++ b/commands/geosearchstore.md @@ -5,3 +5,18 @@ This command comes in place of the now deprecated `GEORADIUS` and `GEORADIUSBYME By default, it stores the results in the `destination` sorted set with their geospatial information. When using the `STOREDIST` option, the command stores the items in a sorted set populated with their distance from the center of the circle or box, as a floating-point number, in the same unit specified for that shape. + +@return + +@integer-reply: the number of elements in the resulting set. 
+ +@examples + +```cli +GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania" +GEOADD Sicily 12.758489 38.788135 "edge1" 17.241510 38.788135 "edge2" +GEOSEARCHSTORE key1 Sicily FROMLONLAT 15 37 BYBOX 400 400 km ASC COUNT 3 +GEOSEARCH key1 FROMLONLAT 15 37 BYBOX 400 400 km ASC WITHCOORD WITHDIST WITHHASH +GEOSEARCHSTORE key2 Sicily FROMLONLAT 15 37 BYBOX 400 400 km ASC COUNT 3 STOREDIST +ZRANGE key2 0 -1 WITHSCORES +``` \ No newline at end of file From e722a583bffd7337a7507101202f54e4344ee905 Mon Sep 17 00:00:00 2001 From: Wen Hui Date: Thu, 15 Jul 2021 19:39:05 -0400 Subject: [PATCH 002/813] Document the INFO field "sentinel_tilt_since_seconds" for Sentinel (#1594) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Adds description of the INFO field "sentinel_tilt_since_seconds" for Sentinel, with example output of the INFO command. The new INFO field for Sentinel was added in the Redis pull request https://github.com/redis/redis/pull/9000. Co-authored-by: Viktor Söderqvist --- topics/sentinel.md | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/topics/sentinel.md b/topics/sentinel.md index 7e13e440f1..c821e20f63 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -1239,6 +1239,24 @@ When in TILT mode the Sentinel will continue to monitor everything, but: * It starts to reply negatively to `SENTINEL is-master-down-by-addr` requests as the ability to detect a failure is no longer trusted. If everything appears to be normal for 30 second, the TILT mode is exited. + +In the Sentinel TILT mode, if we send the INFO command, we could get the following response: + + $ redis-cli -p 26379 + 127.0.0.1:26379> info + (Other information from Sentinel server skipped.) 
+ + # Sentinel + sentinel_masters:1 + sentinel_tilt:0 + sentinel_tilt_since_seconds:-1 + sentinel_running_scripts:0 + sentinel_scripts_queue_length:0 + sentinel_simulate_failure_flags:0 + master0:name=mymaster,status=ok,address=127.0.0.1:6379,slaves=0,sentinels=1 + +The field "sentinel_tilt_since_seconds" indicates how many seconds the Sentinel already is in the TILT mode. +If it is not in TILT mode, the value will be -1. Note that in some way TILT mode could be replaced using the monotonic clock API that many kernels offer. However it is not still clear if this is a good From bfc5e2cb756df60820e9987f3e3bda7b1dc8fc9c Mon Sep 17 00:00:00 2001 From: yoav-steinberg Date: Tue, 20 Jul 2021 09:44:22 +0300 Subject: [PATCH 003/813] Add unit to TTL summary (be consistent with PTTL) (#1600) Co-authored-by: 0xmohit --- commands.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands.json b/commands.json index 1b0ae22436..394f9eb36f 100644 --- a/commands.json +++ b/commands.json @@ -3979,7 +3979,7 @@ "group": "generic" }, "TTL": { - "summary": "Get the time to live for a key", + "summary": "Get the time to live for a key in seconds", "complexity": "O(1)", "arguments": [ { From 6a9c6310b7291d802cf2fbc45b742e97e2804413 Mon Sep 17 00:00:00 2001 From: Simon Prickett Date: Wed, 21 Jul 2021 15:39:53 +0100 Subject: [PATCH 004/813] RPOPLPUSH minor phrasing updates --- commands/rpoplpush.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/commands/rpoplpush.md b/commands/rpoplpush.md index 9224343860..37019aca7a 100644 --- a/commands/rpoplpush.md +++ b/commands/rpoplpush.md @@ -42,7 +42,7 @@ operation. However in this context the obtained queue is not _reliable_ as messages can be lost, for example in the case there is a network problem or if the consumer -crashes just after the message is received but it is still to process. +crashes just after the message is received but before it can be processed. 
`RPOPLPUSH` (or `BRPOPLPUSH` for the blocking variant) offers a way to avoid this problem: the consumer fetches the message and at the same time pushes it @@ -51,7 +51,7 @@ It will use the `LREM` command in order to remove the message from the _processing_ list once the message has been processed. An additional client may monitor the _processing_ list for items that remain -there for too much time, and will push those timed out items into the queue +there for too much time, pushing timed out items into the queue again if needed. ## Pattern: Circular list @@ -61,12 +61,12 @@ all the elements of an N-elements list, one after the other, in O(N) without transferring the full list from the server to the client using a single `LRANGE` operation. -The above pattern works even if the following two conditions: +The above pattern works even if one or both of the following conditions occur: * There are multiple clients rotating the list: they'll fetch different elements, until all the elements of the list are visited, and the process restarts. -* Even if other clients are actively pushing new items at the end of the list. +* Other clients are actively pushing new items at the end of the list. The above makes it very simple to implement a system where a set of items must be processed by N workers continuously as fast as possible. 
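The reliable-queue pattern described in the patch above can be illustrated outside of Redis as well. The following is a minimal Python sketch that simulates `RPOPLPUSH` plus the `LREM` acknowledgment using plain in-process lists (no Redis server or client library is involved); the `rpoplpush` and `lrem` helpers are hypothetical stand-ins for the real commands, written only to show the flow:

```python
# In-memory simulation of the RPOPLPUSH reliable-queue pattern.
# Plain Python lists stand in for Redis lists: index 0 is the head
# (left side), the last index is the tail (right side).

def rpoplpush(source, destination):
    """Pop from the tail of `source`, push onto the head of
    `destination`, and return the moved element (None when `source`
    is empty), mirroring what the Redis RPOPLPUSH command does
    atomically on the server."""
    if not source:
        return None
    element = source.pop()          # RPOP: take from the tail
    destination.insert(0, element)  # LPUSH: put at the head
    return element

def lrem(lst, value):
    """Remove one occurrence of `value`, like LREM key 1 value."""
    lst.remove(value)

queue = ["msg3", "msg2", "msg1"]    # "msg1" sits at the tail, so it is next out
processing = []

# The consumer fetches a message and, in the same step, parks it in
# the processing list.
msg = rpoplpush(queue, processing)  # -> "msg1"

# ... the message is processed here ...

# Acknowledge: remove the message from the processing list only once
# it has been fully processed.
lrem(processing, msg)
```

If the consumer crashed between the fetch and the acknowledgment, the message would remain parked in the processing list, where a monitoring client could find it and push it back onto the queue, which is exactly the recovery property the pattern provides.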
From 5ddb3b26b83252c1c595207d8a7e907e32d581f1 Mon Sep 17 00:00:00 2001 From: Guy Korland Date: Sun, 25 Jul 2021 22:17:52 +0300 Subject: [PATCH 005/813] update stars (#1611) --- modules.json | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/modules.json b/modules.json index 3fc535216c..1e62d62b28 100644 --- a/modules.json +++ b/modules.json @@ -18,7 +18,7 @@ "MeirShpilraien", "RedisLabs" ], - "stars": 122 + "stars": 185 }, { "name": "redis-roaring", @@ -49,7 +49,7 @@ "swilly22", "RedisLabs" ], - "stars": 1144 + "stars": 1409 }, { "name": "redis-tdigest", @@ -70,7 +70,7 @@ "itamarhaber", "RedisLabs" ], - "stars": 1119 + "stars": 1358 }, { "name": "RediSearch", @@ -81,7 +81,7 @@ "dvirsky", "RedisLabs" ], - "stars": 2616 + "stars": 3051 }, { "name": "RedisBloom", @@ -92,7 +92,7 @@ "mnunberg", "RedisLabs" ], - "stars": 691 + "stars": 960 }, { "name": "neural-redis", @@ -113,7 +113,7 @@ "danni-m", "RedisLabs" ], - "stars": 455 + "stars": 593 }, { "name": "RedisAI", @@ -124,7 +124,7 @@ "lantiga", "RedisLabs" ], - "stars": 435 + "stars": 604 }, { "name": "ReDe", From 788189c65964f57e019294b3139e86e96efee22a Mon Sep 17 00:00:00 2001 From: Huang Zhw Date: Tue, 27 Jul 2021 17:06:58 +0800 Subject: [PATCH 006/813] Add info metrics total_eviction_exceeded_time and current_eviction_exceeded_time, redis#9031 --- commands/info.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/commands/info.md b/commands/info.md index caaccbbec0..bce2adda68 100644 --- a/commands/info.md +++ b/commands/info.md @@ -233,6 +233,8 @@ Here is the meaning of all fields in the **stats** section: * `expired_time_cap_reached_count`: The count of times that active expiry cycles have stopped early * `expire_cycle_cpu_milliseconds`: The cumulative amount of time spend on active expiry cycles * `evicted_keys`: Number of evicted keys due to `maxmemory` limit +* `total_eviction_exceeded_time`: Total time `used_memory` was greater than `maxmemory` since server startup, in 
milliseconds +* `current_eviction_exceeded_time`: The time passed since `used_memory` last rose above `maxmemory`, in milliseconds * `keyspace_hits`: Number of successful lookup of keys in the main dictionary * `keyspace_misses`: Number of failed lookup of keys in the main dictionary * `pubsub_channels`: Global number of pub/sub channels with client From b00eac4cf18995c3fd1a1c27edc97542c52585dc Mon Sep 17 00:00:00 2001 From: Spring_MT Date: Tue, 27 Jul 2021 19:27:35 +0900 Subject: [PATCH 007/813] Add `process_supervised` description at INFO (#1612) --- commands/info.md | 1 + 1 file changed, 1 insertion(+) diff --git a/commands/info.md b/commands/info.md index bce2adda68..e3a1fa5d27 100644 --- a/commands/info.md +++ b/commands/info.md @@ -59,6 +59,7 @@ Here is the meaning of all fields in the **server** section: * `atomicvar_api`: Atomicvar API used by Redis * `gcc_version`: Version of the GCC compiler used to compile the Redis server * `process_id`: PID of the server process +* `process_supervised`: Supervised system ("upstart", "systemd", "unknown" or "no") * `run_id`: Random value identifying the Redis server (to be used by Sentinel and Cluster) * `tcp_port`: TCP/IP listen port From 2320aaeaff8ae462b8ee11f6459023beb9e702d3 Mon Sep 17 00:00:00 2001 From: Leokuma Date: Tue, 27 Jul 2021 09:22:27 -0300 Subject: [PATCH 008/813] Better phrasing and fix typos in streams (#1607) --- topics/streams-intro.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/topics/streams-intro.md b/topics/streams-intro.md index 21745299d9..f18c97ecf7 100644 --- a/topics/streams-intro.md +++ b/topics/streams-intro.md @@ -187,14 +187,14 @@ In practical terms, if we imagine having three consumers C1, C2, C3, and a strea 7 -> C1 ``` -In order to achieve this, Redis uses a concept called *consumer groups*. It is very important to understand that Redis consumer groups have nothing to do, from an implementation standpoint, with Kafka (TM) consumer groups. 
Yet they are similar in functionality, so I decided to keep Kafka's (TM) terminology, as it originaly popularized this idea. +In order to achieve this, Redis uses a concept called *consumer groups*. It is very important to understand that Redis consumer groups have nothing to do, from an implementation standpoint, with Kafka (TM) consumer groups. Yet they are similar in functionality, so I decided to keep Kafka's (TM) terminology, as it originally popularized this idea. A consumer group is like a *pseudo consumer* that gets data from a stream, and actually serves multiple consumers, providing certain guarantees: 1. Each message is served to a different consumer so that it is not possible that the same message will be delivered to multiple consumers. 2. Consumers are identified, within a consumer group, by a name, which is a case-sensitive string that the clients implementing consumers must choose. This means that even after a disconnect, the stream consumer group retains all the state, since the client will claim again to be the same consumer. However, this also means that it is up to the client to provide a unique identifier. 3. Each consumer group has the concept of the *first ID never consumed* so that, when a consumer asks for new messages, it can provide just messages that were not previously delivered. -4. Consuming a message, however, requires an explicit acknowledgment using a specific command. Redis interperts the acknowledgment as: this message was correctly processed so it can be evicted from the consumer group. +4. Consuming a message, however, requires an explicit acknowledgment using a specific command. Redis interprets the acknowledgment as: this message was correctly processed so it can be evicted from the consumer group. 5. A consumer group tracks all the messages that are currently pending, that is, messages that were delivered to some consumer of the consumer group, but are yet to be acknowledged as processed. 
Thanks to this feature, when accessing the message history of a stream, each consumer *will only see messages that were delivered to it*. In a way, a consumer group can be imagined as some *amount of state* about a stream: @@ -405,7 +405,7 @@ In its simplest form, the command is called with two arguments, which are the na 2) "2" ``` -When called in this way the command outputs the total number of pending messages in the consumer group, only two messages in this case, the lower and higher message ID among the pending messages, and finally a list of consumers and the number of pending messages they have. +When called in this way, the command outputs the total number of pending messages in the consumer group (two in this case), the lower and higher message ID among the pending messages, and finally a list of consumers and the number of pending messages they have. We have only Bob with two pending messages because the single message that Alice requested was acknowledged using **XACK**. We can ask for more information by giving more arguments to **XPENDING**, because the full command signature is the following: @@ -455,7 +455,7 @@ Client 1: XCLAIM mystream mygroup Alice 3600000 1526569498055-0 Client 2: XCLAIM mystream mygroup Lora 3600000 1526569498055-0 ``` -However claiming a message, as a side effect will reset its idle time! And will increment its number of deliveries counter, so the second client will fail claiming it. In this way we avoid trivial re-processing of messages (even if in the general case you cannot obtain exactly once processing). +However, as a side effect, claiming a message will reset its idle time and will increment its number of deliveries counter, so the second client will fail claiming it. In this way we avoid trivial re-processing of messages (even if in the general case you cannot obtain exactly once processing). 
This is the result of the command execution: @@ -466,7 +466,7 @@ This is the result of the command execution: 2) "orange" ``` -The message was successfully claimed by Alice, that can now process the message and acknowledge it, and move things forward even if the original consumer is not recovering. +The message was successfully claimed by Alice, who can now process the message and acknowledge it, and move things forward even if the original consumer is not recovering. It is clear from the example above that as a side effect of successfully claiming a given message, the **XCLAIM** command also returns it. However this is not mandatory. The **JUSTID** option can be used in order to return just the IDs of the message successfully claimed. This is useful if you want to reduce the bandwidth used between the client and the server (and also the performance of the command) and you are not interested in the message because your consumer is implemented in a way that it will rescan the history of pending messages from time to time. 
From a7a331b9d544b8ac7615f11a93e18bd210ca3e2f Mon Sep 17 00:00:00 2001 From: yoav-steinberg Date: Tue, 27 Jul 2021 15:35:53 +0300 Subject: [PATCH 009/813] Add 'systemd' to wordlist (#1614) --- wordlist | 1 + 1 file changed, 1 insertion(+) diff --git a/wordlist b/wordlist index 5e2c71df60..6cbdad1233 100644 --- a/wordlist +++ b/wordlist @@ -434,3 +434,4 @@ deauthenticates reauthenticate async lazyfree +systemd From 5b46fada02c787cc854b2b2facaf1d735a1d1672 Mon Sep 17 00:00:00 2001 From: Oran Agra Date: Tue, 27 Jul 2021 15:53:58 +0300 Subject: [PATCH 010/813] add description of the ACL categories (#1609) --- topics/acl.md | 38 +++++++++++++++++++++++++++++++++++--- 1 file changed, 35 insertions(+), 3 deletions(-) diff --git a/topics/acl.md b/topics/acl.md index 6d3a8a39e7..d2730fa01e 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -268,9 +268,41 @@ the case of an ACL that is just additive, that is, in the form of `+@all -...` You should be absolutely sure that you'll never include what you did not mean to. -However to remember that categories are defined, and what commands each -category exactly includes, is impossible and would be super boring, so the -Redis `ACL` command exports the `CAT` subcommand that can be used in two forms: +The following is a list of command categories and their meanings: +* keyspace - Writing or reading from keys, databases, or their metadata + in a type agnostic way. Includes `DEL`, `RESTORE`, `DUMP`, `RENAME`, `EXISTS`, `DBSIZE`, + `KEYS`, `EXPIRE`, `TTL`, `FLUSHALL`, etc. Commands that may modify the keyspace, + key or metadata will also have `write` category. Commands that only read + the keyspace, key or metadata will have the `read` category. +* read - Reading from keys (values or metadata). Note that commands that don't + interact with keys, will not have either `read` or `write`. +* write - Writing to keys (values or metadata). +* admin - Administrative commands. Normal applications will never need to use + these. 
Includes `REPLICAOF`, `CONFIG`, `DEBUG`, `SAVE`, `MONITOR`, `ACL`, `SHUTDOWN`, etc.
+* dangerous - Potentially dangerous commands (each should be considered with care for
+  various reasons). This includes `FLUSHALL`, `MIGRATE`, `RESTORE`, `SORT`, `KEYS`,
+  `CLIENT`, `DEBUG`, `INFO`, `CONFIG`, `SAVE`, `REPLICAOF`, etc.
+* connection - Commands affecting the connection or other connections.
+  This includes `AUTH`, `SELECT`, `COMMAND`, `CLIENT`, `ECHO`, `PING`, etc.
+* blocking - Potentially blocking the connection until released by another
+  command.
+* fast - Fast O(1) commands. May loop on the number of arguments, but not the
+  number of elements in the key.
+* slow - All commands that are not `fast`.
+* pubsub - PubSub-related commands.
+* transaction - `WATCH` / `MULTI` / `EXEC` related commands.
+* scripting - Scripting related.
+* set - Data type: sets related.
+* sortedset - Data type: sorted sets related.
+* list - Data type: lists related.
+* hash - Data type: hashes related.
+* string - Data type: strings related.
+* bitmap - Data type: bitmaps related.
+* hyperloglog - Data type: hyperloglog related.
+* geo - Data type: geospatial indexes related.
+* stream - Data type: streams related.
+
+Redis can also show you a list of all categories, and the exact commands each category includes, using the Redis `ACL` command's `CAT` subcommand, which can be used in two forms:

    ACL CAT -- Will just list all the categories available
    ACL CAT -- Will list all the commands inside the category

From 9877b67407317581b4381d82bd3572a095639936 Mon Sep 17 00:00:00 2001
From: Yoav Steinberg 
Date: Tue, 27 Jul 2021 16:12:21 +0300
Subject: [PATCH 011/813] Fix line feed in acl cats

---
 topics/acl.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/topics/acl.md b/topics/acl.md
index d2730fa01e..1a62b512b0 100644
--- a/topics/acl.md
+++ b/topics/acl.md
@@ -269,6 +269,7 @@ You should be absolutely sure that you'll never include what you did not
mean to.
The following is a list of command categories and their meanings:
+
* keyspace - Writing or reading from keys, databases, or their metadata
  in a type agnostic way. Includes `DEL`, `RESTORE`, `DUMP`, `RENAME`, `EXISTS`, `DBSIZE`,

From 6ca15626910a1404f4738ec44e16f84bdcf62dfb Mon Sep 17 00:00:00 2001
From: Oran Agra 
Date: Tue, 27 Jul 2021 16:40:59 +0300
Subject: [PATCH 012/813] INFO details about fragmentation (#1608)

INFO details about fragmentation

Co-authored-by: Huang Zhw 
Co-authored-by: yoav-steinberg 
---
 commands/info.md | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/commands/info.md b/commands/info.md
index e3a1fa5d27..a855aeec66 100644
--- a/commands/info.md
+++ b/commands/info.md
@@ -123,18 +123,29 @@ Here is the meaning of all fields in the **memory** section:
* `maxmemory_human`: Human readable representation of previous value
* `maxmemory_policy`: The value of the `maxmemory-policy` configuration
  directive
-* `mem_fragmentation_ratio`: Ratio between `used_memory_rss` and `used_memory`
-* `mem_allocator`: Memory allocator, chosen at compile time
-* `active_defrag_running`: Flag indicating if active defragmentation is active
+* `mem_fragmentation_ratio`: Ratio between `used_memory_rss` and `used_memory`.
+  Note that this doesn't only include fragmentation, but also other process overheads (see the `allocator_*` metrics), and also overheads like code, shared libraries, stack, etc.
+* `mem_fragmentation_bytes`: Delta between `used_memory_rss` and `used_memory`.
+  Note that when the total fragmentation bytes is low (a few megabytes), a high ratio (e.g. 1.5 and above) is not an indication of an issue.
+* `allocator_frag_ratio`: Ratio between `allocator_active` and `allocator_allocated`. This is the true (external) fragmentation metric (not `mem_fragmentation_ratio`).
+* `allocator_frag_bytes`: Delta between `allocator_active` and `allocator_allocated`. See note about `mem_fragmentation_bytes`.
+* `allocator_rss_ratio`: Ratio between `allocator_resident` and `allocator_active`. This usually indicates pages that the allocator can and probably will soon release back to the OS.
+* `allocator_rss_bytes`: Delta between `allocator_resident` and `allocator_active`
+* `rss_overhead_ratio`: Ratio between `used_memory_rss` (the process RSS) and `allocator_resident`. This includes RSS overheads that are not allocator or heap related.
+* `rss_overhead_bytes`: Delta between `used_memory_rss` (the process RSS) and `allocator_resident`
+* `allocator_allocated`: Total bytes allocated from the allocator, including internal fragmentation. Normally the same as `used_memory`.
+* `allocator_active`: Total bytes in the allocator active pages; this includes external fragmentation.
+* `allocator_resident`: Total bytes resident (RSS) in the allocator; this includes pages that can be released to the OS (by `MEMORY PURGE`, or just waiting).
+* `mem_allocator`: Memory allocator, chosen at compile time.
+* `active_defrag_running`: When `activedefrag` is enabled, this indicates whether defragmentation is currently active, and the CPU percentage it intends to utilize.
* `lazyfree_pending_objects`: The number of objects waiting to be freed (as a
  result of calling `UNLINK`, or `FLUSHDB` and
  `FLUSHALL` with the **ASYNC** option)

Ideally, the `used_memory_rss` value should be only slightly higher than
`used_memory`.
-When rss >> used, a large difference means there is memory fragmentation
-(internal or external), which can be evaluated by checking
-`mem_fragmentation_ratio`.
+When rss >> used, a large difference may mean there is (external) memory fragmentation, which can be evaluated by checking
+`allocator_frag_ratio`, `allocator_frag_bytes`.
When used >> rss, it means part of Redis memory has been swapped off by the operating system: expect some significant latencies. From 10a0136531240c72e87f7241c85f398952a8ed00 Mon Sep 17 00:00:00 2001 From: Simon Prickett Date: Tue, 27 Jul 2021 18:04:28 +0100 Subject: [PATCH 013/813] Clarifies that timeout is in seconds. (#1604) * Clarifies that timeout is in seconds. * Updated specifying that the timeout is a double in line with BLPOP docs. --- commands/blmove.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/blmove.md b/commands/blmove.md index e1d5be924b..463a2dca28 100644 --- a/commands/blmove.md +++ b/commands/blmove.md @@ -2,7 +2,7 @@ When `source` contains elements, this command behaves exactly like `LMOVE`. When used inside a `MULTI`/`EXEC` block, this command behaves exactly like `LMOVE`. When `source` is empty, Redis will block the connection until another client -pushes to it or until `timeout` is reached. +pushes to it or until `timeout` (a double value specifying the maximum number of seconds to block) is reached. A `timeout` of zero can be used to block indefinitely. This command comes in place of the now deprecated `BRPOPLPUSH`. Doing From 228f8c6923f849854914b53fa43637d8f40d195e Mon Sep 17 00:00:00 2001 From: Michael Grunder Date: Tue, 27 Jul 2021 10:13:36 -0700 Subject: [PATCH 014/813] Fix typo (#1606) --- commands/client-tracking.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/client-tracking.md b/commands/client-tracking.md index 571557d74a..12bb2e626f 100644 --- a/commands/client-tracking.md +++ b/commands/client-tracking.md @@ -21,7 +21,7 @@ unless tracking is turned on with `CLIENT TRACKING off` at some point. The following are the list of options that modify the behavior of the command when enabling tracking: -* `REDIRECT `: send redirection messages to the connection with the specified ID. The connection must exist, you can get the ID of such connection using `CLIENT ID`. 
If the connection we are redirecting to is terminated, when in RESP3 mode the connection with tracking enabled will receive `tracking-redir-broken` push messages in order to signal the condition. +* `REDIRECT `: send invalidation messages to the connection with the specified ID. The connection must exist. You can get the ID of a connection using `CLIENT ID`. If the connection we are redirecting to is terminated, when in RESP3 mode the connection with tracking enabled will receive `tracking-redir-broken` push messages in order to signal the condition. * `BCAST`: enable tracking in broadcasting mode. In this mode invalidation messages are reported for all the prefixes specified, regardless of the keys requested by the connection. Instead when the broadcasting mode is not enabled, Redis will track which keys are fetched using read-only commands, and will report invalidation messages only for such keys. * `PREFIX `: for broadcasting, register a given key prefix, so that notifications will be provided only for keys starting with this string. This option can be given multiple times to register multiple prefixes. If broadcasting is enabled without this option, Redis will send notifications for every key. You can't delete a single prefix, but you can delete all prefixes by disabling and re-enabling tracking. Using this option adds the additional time complexity of O(N^2), where N is the total number of prefixes tracked. * `OPTIN`: when broadcasting is NOT active, normally don't track keys in read only commands, unless they are called immediately after a `CLIENT CACHING yes` command. 
From a25c27b0f0791f018dbb7b7074ad97722c1b477a Mon Sep 17 00:00:00 2001 From: Clement Jean Date: Mon, 2 Aug 2021 02:46:04 +0800 Subject: [PATCH 015/813] Add RedisIMS (If-Modified-Since) module (#1599) --- modules.json | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/modules.json b/modules.json index 1e62d62b28..a25032c1e3 100644 --- a/modules.json +++ b/modules.json @@ -396,5 +396,15 @@ "ogama" ], "stars": 3 - } + }, + { + "name": "redisims", + "license": "MIT", + "repository": "https://github.com/Clement-Jean/RedisIMS", + "description": "A lightweight Redis module following the If Modified Since (IMS) pattern for caching", + "authors": [ + "Clement-Jean" + ], + "stars": 0 + } ] From e3d11f00b9058b4ef46e81e97f99410c0b2576a6 Mon Sep 17 00:00:00 2001 From: Binbin Date: Mon, 2 Aug 2021 02:51:36 +0800 Subject: [PATCH 016/813] Fix inconsistent EVALSHA error output (#1616) Fixes https://github.com/redis/redis/issues/9286 --- commands/eval.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/eval.md b/commands/eval.md index fc97af47a5..74bd7baef6 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -247,7 +247,7 @@ OK > evalsha 6b1bf486c81ceb7edf3c093f4c48582e38c0e791 0 "bar" > evalsha ffffffffffffffffffffffffffffffffffffffff 0 -(error) `NOSCRIPT` No matching script. Please use `EVAL`. +(error) NOSCRIPT No matching script. Please use EVAL. ``` The client library implementation can always optimistically send `EVALSHA` under From 1edd7b68a08ec28bd296888cf7eb353b2a7776fe Mon Sep 17 00:00:00 2001 From: M Sazzadul Hoque <7600764+sazzad16@users.noreply.github.com> Date: Mon, 2 Aug 2021 00:57:11 +0600 Subject: [PATCH 017/813] Render lists properly in CLIENT PAUSE doc (#1601) Without the blank line, all elements in the list is rendered as a paragraph. 
--- commands/client-pause.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/commands/client-pause.md b/commands/client-pause.md index 3a30e9ccc8..224f1a3d0f 100644 --- a/commands/client-pause.md +++ b/commands/client-pause.md @@ -7,10 +7,12 @@ The command performs the following actions: * When the specified amount of time has elapsed, all the clients are unblocked: this will trigger the processing of all the commands accumulated in the query buffer of every client during the pause. Client pause currently supports two modes: + * `ALL`: This is the default mode. All client commands are blocked. * `WRITE`: Clients are only blocked if they attempt to execute a write command. For the `WRITE` mode, some commands have special behavior: + * `EVAL`/`EVALSHA`: Will block client for all scripts. * `PUBLISH`: Will block client. * `PFCOUNT`: Will block client. @@ -40,4 +42,4 @@ to be static not just from the point of view of clients not being able to write, @history * `>= 3.2.10`: Client pause prevents client pause and key eviction as well. -* `>= 6.2`: CLIENT PAUSE WRITE mode added along with the `mode` option. \ No newline at end of file +* `>= 6.2`: CLIENT PAUSE WRITE mode added along with the `mode` option. From af282ed4a7c575e3058adba7eb0cc9afe1437a82 Mon Sep 17 00:00:00 2001 From: Madelyn Olson <34459052+madolson@users.noreply.github.com> Date: Mon, 2 Aug 2021 22:14:37 -0700 Subject: [PATCH 018/813] Fix spelling of acknowledgment (#1618) --- commands/client-pause.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/client-pause.md b/commands/client-pause.md index 224f1a3d0f..2806072a00 100644 --- a/commands/client-pause.md +++ b/commands/client-pause.md @@ -16,7 +16,7 @@ For the `WRITE` mode, some commands have special behavior: * `EVAL`/`EVALSHA`: Will block client for all scripts. * `PUBLISH`: Will block client. * `PFCOUNT`: Will block client. 
-* `WAIT`: Acknowledgements will be delayed, so this command will appear blocked. +* `WAIT`: Acknowledgments will be delayed, so this command will appear blocked. This command is useful as it makes able to switch clients from a Redis instance to another one in a controlled way. For example during an instance upgrade the system administrator could do the following: From 2d26a5a9cc272632173e024aeea680fd217bba21 Mon Sep 17 00:00:00 2001 From: Oran Agra Date: Tue, 3 Aug 2021 11:46:17 +0300 Subject: [PATCH 019/813] adds new SINTERCARD and ZINTERCARD (#1610) Co-authored-by: yoav-steinberg --- commands.json | 30 ++++++++++++++++++++++++++++++ commands/sintercard.md | 21 +++++++++++++++++++++ commands/zintercard.md | 20 ++++++++++++++++++++ 3 files changed, 71 insertions(+) create mode 100644 commands/sintercard.md create mode 100644 commands/zintercard.md diff --git a/commands.json b/commands.json index 394f9eb36f..a3f6b79edd 100644 --- a/commands.json +++ b/commands.json @@ -3609,6 +3609,19 @@ "since": "1.0.0", "group": "set" }, + "SINTERCARD": { + "summary": "Intersect multiple sets and return the cardinality of the result", + "complexity": "O(N*M) worst case where N is the cardinality of the smallest set and M is the number of sets.", + "arguments": [ + { + "name": "key", + "type": "key", + "multiple": true + } + ], + "since": "7.0.0", + "group": "set" + }, "SINTERSTORE": { "summary": "Intersect multiple sets and store the resulting set in a key", "complexity": "O(N*M) worst case where N is the cardinality of the smallest set and M is the number of sets.", @@ -4262,6 +4275,23 @@ "since": "6.2.0", "group": "sorted_set" }, + "ZINTERCARD": { + "summary": "Intersect multiple sorted sets and return the cardinality of the result", + "complexity": "O(N*K) worst case with N being the smallest input sorted set, K being the number of input sorted sets.", + "arguments": [ + { + "name": "numkeys", + "type": "integer" + }, + { + "name": "key", + "type": "key", + "multiple": true + 
} + ], + "since": "7.0.0", + "group": "sorted_set" + }, "ZINTERSTORE": { "summary": "Intersect multiple sorted sets and store the resulting sorted set in a new key", "complexity": "O(N*K)+O(M*log(M)) worst case with N being the smallest input sorted set, K being the number of input sorted sets and M being the number of elements in the resulting sorted set.", diff --git a/commands/sintercard.md b/commands/sintercard.md new file mode 100644 index 0000000000..4b06982d6c --- /dev/null +++ b/commands/sintercard.md @@ -0,0 +1,21 @@ +Returns the cardinality of the set which would result from the intersection of all the given sets. + +Keys that do not exist are considered to be empty sets. +With one of the keys being an empty set, the resulting set is also empty (since set intersection with an empty set always results in an empty set). + +@return + +@integer-reply: the number of elements in the resulting intersection. + +@examples + +```cli +SADD key1 "a" +SADD key1 "b" +SADD key1 "c" +SADD key2 "c" +SADD key2 "d" +SADD key2 "e" +SINTER key1 key2 +SINTERCARD key1 key2 +``` diff --git a/commands/zintercard.md b/commands/zintercard.md new file mode 100644 index 0000000000..84abe27ffd --- /dev/null +++ b/commands/zintercard.md @@ -0,0 +1,20 @@ +This command is similar to `ZINTER`, but instead of returning the result set, it returns just the cardinality of the result. + +Keys that do not exist are considered to be empty sets. +With one of the keys being an empty set, the resulting set is also empty (since set intersection with an empty set always results in an empty set). + +@return + +@integer-reply: the number of elements in the resulting intersection. 
+ +@examples + +```cli +ZADD zset1 1 "one" +ZADD zset1 2 "two" +ZADD zset2 1 "one" +ZADD zset2 2 "two" +ZADD zset2 3 "three" +ZINTER 2 zset1 zset2 +ZINTERCARD 2 zset1 zset2 +``` From f9a75a4b07ebcd32f87e6229156188682414394c Mon Sep 17 00:00:00 2001 From: Ning Sun Date: Tue, 3 Aug 2021 17:03:05 +0800 Subject: [PATCH 020/813] Add doc for new options of expire commands (#1613) Signed-off-by: Ning Sun --- commands.json | 48 +++++++++++++++++++++++++++++++++++++++++-- commands/expire.md | 22 +++++++++++++++++++- commands/expireat.md | 18 +++++++++++++++- commands/pexpire.md | 22 +++++++++++++++++++- commands/pexpireat.md | 18 +++++++++++++++- 5 files changed, 122 insertions(+), 6 deletions(-) diff --git a/commands.json b/commands.json index a3f6b79edd..fa8bf4faf3 100644 --- a/commands.json +++ b/commands.json @@ -1285,6 +1285,17 @@ { "name": "seconds", "type": "integer" + }, + { + "name": "condition", + "type": "enum", + "enum": [ + "NX", + "XX", + "GT", + "LT" + ], + "optional": true } ], "since": "1.0.0", @@ -1301,6 +1312,17 @@ { "name": "timestamp", "type": "posix time" + }, + { + "name": "condition", + "type": "enum", + "enum": [ + "NX", + "XX", + "GT", + "LT" + ], + "optional": true } ], "since": "1.2.0", @@ -2310,7 +2332,7 @@ "WITHVALUES" ], "optional": true - } + } ], "optional": true } @@ -2917,6 +2939,17 @@ { "name": "milliseconds", "type": "integer" + }, + { + "name": "condition", + "type": "enum", + "enum": [ + "NX", + "XX", + "GT", + "LT" + ], + "optional": true } ], "since": "2.6.0", @@ -2933,6 +2966,17 @@ { "name": "milliseconds-timestamp", "type": "posix time" + }, + { + "name": "condition", + "type": "enum", + "enum": [ + "NX", + "XX", + "GT", + "LT" + ], + "optional": true } ], "since": "2.6.0", @@ -4408,7 +4452,7 @@ "WITHSCORES" ], "optional": true - } + } ], "optional": true } diff --git a/commands/expire.md b/commands/expire.md index fbd86172a2..65befa96c4 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -31,6 +31,18 @@ will be `del`, not 
`expired`). [del]: /commands/del [ntf]: /topics/notifications +## Options + +The `EXPIRE` command supports a set of options since Redis 7.0: + +* `NX` -- Set expiry only when the key has no expiry +* `XX` -- Set expiry only when the key has an existing expiry +* `GT` -- Set expiry only when the new expiry is greater than current one +* `LT` -- Set expiry only when the new expiry is less than current one + +A non-volatile key is treated as an infinite TTL for the purpose of `GT` and `LT`. +The `GT`, `LT` and `NX` options are mutually exclusive. + ## Refreshing expires It is possible to call `EXPIRE` using as argument a key that already has an @@ -53,7 +65,7 @@ are now fixed. @integer-reply, specifically: * `1` if the timeout was set. -* `0` if `key` does not exist. +* `0` if the timeout was not set. e.g. key doesn't exist, or operation skipped due to the provided arguments. @examples @@ -63,8 +75,16 @@ EXPIRE mykey 10 TTL mykey SET mykey "Hello World" TTL mykey +EXPIRE mykey 10 XX +TTL mykey +EXPIRE mykey 10 NX +TTL mykey ``` +@history + +* `>= 7.0`: Added options: `NX`, `XX`, `GT` and `LT`. + ## Pattern: Navigation session Imagine you have a web service and you are interested in the latest N pages diff --git a/commands/expireat.md b/commands/expireat.md index a4430bb7c0..7508559464 100644 --- a/commands/expireat.md +++ b/commands/expireat.md @@ -15,12 +15,24 @@ timeouts for the AOF persistence mode. Of course, it can be used directly to specify that a given key should expire at a given time in the future. +## Options + +The `EXPIREAT` command supports a set of options since Redis 7.0: + +* `NX` -- Set expiry only when the key has no expiry +* `XX` -- Set expiry only when the key has an existing expiry +* `GT` -- Set expiry only when the new expiry is greater than current one +* `LT` -- Set expiry only when the new expiry is less than current one + +A non-volatile key is treated as an infinite TTL for the purpose of `GT` and `LT`. 
+The `GT`, `LT` and `NX` options are mutually exclusive. + @return @integer-reply, specifically: * `1` if the timeout was set. -* `0` if `key` does not exist. +* `0` if the timeout was not set. e.g. key doesn't exist, or operation skipped due to the provided arguments. @examples @@ -30,3 +42,7 @@ EXISTS mykey EXPIREAT mykey 1293840000 EXISTS mykey ``` + +@history + +* `>= 7.0`: Added options: `NX`, `XX`, `GT` and `LT`. diff --git a/commands/pexpire.md b/commands/pexpire.md index 33d9f0bc75..6fd90c02f2 100644 --- a/commands/pexpire.md +++ b/commands/pexpire.md @@ -1,12 +1,24 @@ This command works exactly like `EXPIRE` but the time to live of the key is specified in milliseconds instead of seconds. +## Options + +The `PEXPIRE` command supports a set of options since Redis 7.0: + +* `NX` -- Set expiry only when the key has no expiry +* `XX` -- Set expiry only when the key has an existing expiry +* `GT` -- Set expiry only when the new expiry is greater than current one +* `LT` -- Set expiry only when the new expiry is less than current one + +A non-volatile key is treated as an infinite TTL for the purpose of `GT` and `LT`. +The `GT`, `LT` and `NX` options are mutually exclusive. + @return @integer-reply, specifically: * `1` if the timeout was set. -* `0` if `key` does not exist. +* `0` if the timeout was not set. e.g. key doesn't exist, or operation skipped due to the provided arguments. @examples @@ -15,4 +27,12 @@ SET mykey "Hello" PEXPIRE mykey 1500 TTL mykey PTTL mykey +PEXPIRE mykey 1000 XX +TTL mykey +PEXPIRE mykey 1000 NX +TTL mykey ``` + +@history + +* `>= 7.0`: Added options: `NX`, `XX`, `GT` and `LT`. diff --git a/commands/pexpireat.md b/commands/pexpireat.md index a15bb0a9a0..03fe346551 100644 --- a/commands/pexpireat.md +++ b/commands/pexpireat.md @@ -1,12 +1,24 @@ `PEXPIREAT` has the same effect and semantic as `EXPIREAT`, but the Unix time at which the key will expire is specified in milliseconds instead of seconds. 
+## Options + +The `PEXPIREAT` command supports a set of options since Redis 7.0: + +* `NX` -- Set expiry only when the key has no expiry +* `XX` -- Set expiry only when the key has an existing expiry +* `GT` -- Set expiry only when the new expiry is greater than current one +* `LT` -- Set expiry only when the new expiry is less than current one + +A non-volatile key is treated as an infinite TTL for the purpose of `GT` and `LT`. +The `GT`, `LT` and `NX` options are mutually exclusive. + @return @integer-reply, specifically: * `1` if the timeout was set. -* `0` if `key` does not exist. +* `0` if the timeout was not set. e.g. key doesn't exist, or operation skipped due to the provided arguments. @examples @@ -16,3 +28,7 @@ PEXPIREAT mykey 1555555555005 TTL mykey PTTL mykey ``` + +@history + +* `>= 7.0`: Added options: `NX`, `XX`, `GT` and `LT`. From 87ecbfca7519399c0975f38d809fbc44fc036831 Mon Sep 17 00:00:00 2001 From: Wang Yuan Date: Thu, 5 Aug 2021 13:16:12 +0800 Subject: [PATCH 021/813] Add current_cow_peak field in INFO command (#1617) --- commands/info.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/commands/info.md b/commands/info.md index a855aeec66..ea88b234c6 100644 --- a/commands/info.md +++ b/commands/info.md @@ -166,6 +166,8 @@ by referring to the `MEMORY STATS` command and the `MEMORY DOCTOR`. Here is the meaning of all fields in the **persistence** section: * `loading`: Flag indicating if the load of a dump file is on-going +* `current_cow_peak`: The peak size in bytes of copy-on-write memory + while a child fork is running * `current_cow_size`: The size in bytes of copy-on-write memory while a child fork is running * `current_fork_perc`: The percentage of progress of the current fork process. For AOF and RDB forks it is the percentage of `current_save_keys_processed` out of `current_save_keys_total`. 
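The `NX` / `XX` / `GT` / `LT` rules documented for the expire-family commands above can be condensed into one small decision rule. The following is only an illustrative sketch (a hypothetical `should_set_expiry` helper, not Redis source); `None` models a non-volatile key, which these commands treat as having an infinite TTL for the purposes of `GT` and `LT`:

```python
def should_set_expiry(current_ttl, new_ttl, option=None):
    """Decide whether EXPIRE/PEXPIRE/EXPIREAT/PEXPIREAT may replace the TTL.

    current_ttl is None for a non-volatile key (no expiry); for GT and LT
    such a key behaves as if its TTL were infinite.
    """
    if option is None:
        return True                        # no option: always set the new TTL
    if option == "NX":
        return current_ttl is None         # only when the key has no expiry
    if option == "XX":
        return current_ttl is not None     # only when an expiry already exists
    if option == "GT":
        # nothing is greater than an infinite TTL, so GT skips non-volatile keys
        return current_ttl is not None and new_ttl > current_ttl
    if option == "LT":
        # any finite TTL is less than infinite, so LT accepts non-volatile keys
        return current_ttl is None or new_ttl < current_ttl
    raise ValueError("unknown option: %r" % option)
```

For example, `should_set_expiry(None, 10, "GT")` is false while `should_set_expiry(None, 10, "LT")` is true, mirroring the documented infinite-TTL behavior of non-volatile keys.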
From b7d1ecfae882dc5e8bec769e302d8dbd70185ffc Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Viktor=20S=C3=B6derqvist?= Date: Mon, 9 Aug 2021 18:20:40 +0200 Subject: [PATCH 022/813] Avoid confusing use of term "pure function" MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Also cleaned up and clarified other text in the replication section of the EVAL command. Co-authored-by: Viktor Söderqvist Co-authored-by: Kevin Christopher Henry --- commands/eval.md | 82 ++++++++++++++++++++++-------------------------- wordlist | 1 + 2 files changed, 39 insertions(+), 44 deletions(-) diff --git a/commands/eval.md b/commands/eval.md index 74bd7baef6..11026de51c 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -338,18 +338,18 @@ SCRIPT currently accepts three different commands: not violate the scripting engine's guaranteed atomicity). See the next sections for more information about long running scripts. -## Scripts as pure functions +## Scripts with deterministic writes *Note: starting with Redis 5, scripts are always replicated as effects and not sending the script verbatim. So the following section is mostly applicable to Redis version 4 or older.* -A very important part of scripting is writing scripts that are pure functions. +A very important part of scripting is writing scripts that only change the database in a deterministic way. Scripts executed in a Redis instance are, by default, propagated to replicas and to the AOF file by sending the script itself -- not the resulting commands. +Since the script will be re-run on the remote host (or when reloading the AOF file), the changes it makes to the database must be reproducible. 
-The reason is that sending a script to another Redis instance is often much -faster than sending the multiple commands the script generates, so if the -client is sending many scripts to the master, converting the scripts into +The reason for sending the script is that it is often much faster than sending the multiple commands that the script generates. +If the client is sending many scripts to the master, converting the scripts into individual commands for the replica / AOF would result in too much bandwidth for the replication link or the Append Only File (and also too much CPU since dispatching a command received via network is a lot more work for Redis compared @@ -360,12 +360,13 @@ however not in all the cases. So starting with Redis 3.2, the scripting engine is able to, alternatively, replicate the sequence of write commands resulting from the script execution, instead of replicating the script itself. See the next section for more information. + In this section we'll assume that scripts are replicated by sending the whole script. Let's call this replication mode **whole scripts replication**. The main drawback with the *whole scripts replication* approach is that scripts are required to have the following property: -* The script must always evaluates the same Redis _write_ commands with the +* The script must always execute the same Redis _write_ commands with the same arguments given the same input data set. Operations performed by the script cannot depend on any hidden (non-explicit) information or state that may change as script execution proceeds or between different executions of the script, nor can it depend on any external input from I/O devices. Things like using the system time, calling Redis random commands like -`RANDOMKEY`, or using Lua random number generator, could result into scripts +`RANDOMKEY`, or using Lua's random number generator, could result in scripts that will not always evaluate in the same way.
In order to enforce this behavior in scripts Redis does the following: @@ -401,9 +402,8 @@ In order to enforce this behavior in scripts Redis does the following: assume that certain commands in Lua will be ordered, but instead rely on the documentation of the original command you call to see the properties it provides. -* Lua pseudo random number generation functions `math.random` and - `math.randomseed` are modified in order to always have the same seed every - time a new script is executed. +* Lua's pseudo-random number generation function `math.random` is + modified to always use the same seed every time a new script is executed. This means that calling `math.random` will always generate the same sequence of numbers every time a script is executed if `math.randomseed` is not used. @@ -434,7 +434,7 @@ r.del(:mylist) puts r.eval(RandomPushScript,[:mylist],[10,rand(2**32)]) ``` -Every time this script executed the resulting list will have exactly the +Every time this script is executed the resulting list will have exactly the following elements: ``` @@ -451,9 +451,9 @@ following elements: 10) "0.17082803611217" ``` -In order to make it a pure function, but still be sure that every invocation +In order to make it deterministic, but still be sure that every invocation of the script will result in different random elements, we can simply add an -additional argument to the script that will be used in order to seed the Lua +additional argument to the script that will be used to seed the Lua pseudo-random number generator. The new script is as follows: @@ -474,9 +474,8 @@ puts r.eval(RandomPushScript,1,:mylist,10,rand(2**32)) ``` What we are doing here is sending the seed of the PRNG as one of the arguments. -This way the script output will be the same given the same arguments, but we are -changing one of the arguments in every invocation, generating the random seed -client-side. 
+The script output will always be the same given the same arguments (our requirement) +but we are changing one of the arguments at every invocation, generating the random seed client-side. The seed will be propagated as one of the arguments both in the replication link and in the Append Only File, guaranteeing that the same changes will be generated when the AOF is reloaded or when the replica processes the script. @@ -492,7 +491,7 @@ output. *Note: starting with Redis 5, the replication method described in this section (scripts effects replication) is the default and does not need to be explicitly enabled.* Starting with Redis 3.2, it is possible to select an -alternative replication method. Instead of replication whole scripts, we +alternative replication method. Instead of replicating whole scripts, we can just replicate single write commands generated by the script. We call this **script effects replication**. @@ -500,44 +499,41 @@ In this replication mode, while Lua scripts are executed, Redis collects all the commands executed by the Lua scripting engine that actually modify the dataset. When the script execution finishes, the sequence of commands that the script generated are wrapped into a MULTI / EXEC transaction and -are sent to replicas and AOF. +are sent to the replicas and AOF. This is useful in several ways depending on the use case: -* When the script is slow to compute, but the effects can be summarized by -a few write commands, it is a shame to re-compute the script on the replicas -or when reloading the AOF. In this case to replicate just the effect of the -script is much better. -* When script effects replication is enabled, the controls about non -deterministic functions are disabled. You can, for example, use the `TIME` -or `SRANDMEMBER` commands inside your scripts freely at any place. -* The Lua PRNG in this mode is seeded randomly at every call. 
+* When the script is slow to compute, but the effects can be summarized by a few write commands, it is a shame to re-compute the script on the replicas or when reloading the AOF. + In this case it is much better to replicate just the effects of the script. +* When script effects replication is enabled, the restrictions on non-deterministic functions are removed. + You can, for example, use the `TIME` or `SRANDMEMBER` commands inside your scripts freely at any place. +* The Lua PRNG in this mode is seeded randomly on every call. -In order to enable script effects replication, you need to issue the -following Lua command before any write operated by the script: +To enable script effects replication you need to issue the +following Lua command before the script performs a write: redis.replicate_commands() -The function returns true if the script effects replication was enabled, -otherwise if the function was called after the script already called -some write command, it returns false, and normal whole script replication +The function returns true if script effects replication was enabled; +otherwise, if the function was called after the script already called +a write command, it returns false, and normal whole script replication is used. ## Selective replication of commands When script effects replication is selected (see the previous section), it -is possible to have more control in the way commands are replicated to replicas -and AOF. This is a very advanced feature since **a misuse can do damage** by -breaking the contract that the master, replicas, and AOF, all must contain the +is possible to have more control over the way commands are propagated to replicas and the AOF. +This is a very advanced feature since **a misuse can do damage** by breaking the contract that the master, replicas, and AOF must all contain the same logical content. 
However this is a useful feature since, sometimes, we need to execute certain commands only in the master in order to create, for example, intermediate values. -Think at a Lua script where we perform an intersection between two sets. -Pick five random elements, and create a new set with this five random -elements. Finally we delete the temporary key representing the intersection +Think of a Lua script where we perform an intersection between two sets. +We then pick five random elements from the intersection and create a new set +containing them. +Finally, we delete the temporary key representing the intersection between the two original sets. What we want to replicate is only the creation of the new set with the five elements. It's not useful to also replicate the commands creating the temporary key. @@ -549,15 +545,14 @@ an error if called when script effects replication is disabled. The command can be called with four different arguments: - redis.set_repl(redis.REPL_ALL) -- Replicate to AOF and replicas. - redis.set_repl(redis.REPL_AOF) -- Replicate only to AOF. + redis.set_repl(redis.REPL_ALL) -- Replicate to the AOF and replicas. + redis.set_repl(redis.REPL_AOF) -- Replicate only to the AOF. redis.set_repl(redis.REPL_REPLICA) -- Replicate only to replicas (Redis >= 5) redis.set_repl(redis.REPL_SLAVE) -- Used for backward compatibility, the same as REPL_REPLICA. redis.set_repl(redis.REPL_NONE) -- Don't replicate at all. -By default the scripting engine is always set to `REPL_ALL`. By calling -this function the user can switch on/off AOF and or replicas propagation, and -turn them back later at her/his wish. +By default the scripting engine is set to `REPL_ALL`. +By calling this function the user can switch the replication mode on or off at any time. 
A simple example follows: @@ -568,8 +563,7 @@ A simple example follows: redis.set_repl(redis.REPL_ALL) redis.call('set','C','3') -After running the above script, the result is that only keys A and C -will be created on replicas and AOF. +After running the above script, the result is that only the keys A and C will be created on the replicas and AOF. ## Global variables protection diff --git a/wordlist b/wordlist index 6cbdad1233..9faaeee186 100644 --- a/wordlist +++ b/wordlist @@ -65,6 +65,7 @@ LRU Linode Liveness Lua +Lua's MAXLEN MERCHANTABILITY MX From dfa8671d3090d2de027e75d9d66501545f0e8eaa Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Wed, 11 Aug 2021 17:23:09 +0300 Subject: [PATCH 023/813] Sponsor name change (#1624) --- topics/ldb.md | 2 +- topics/sponsors.md | 7 +++---- topics/trademark.md | 4 ++-- 3 files changed, 6 insertions(+), 7 deletions(-) diff --git a/topics/ldb.md b/topics/ldb.md index 0d454228f3..07231ed0a9 100644 --- a/topics/ldb.md +++ b/topics/ldb.md @@ -215,7 +215,7 @@ LDB uses the client-server model where the Redis server acts as a debugging serv 2. The client provides an interface for sending arbitrary commands over RESP. 3. The client allows sending raw messages to the Redis server. -For example, the [Redis plugin](https://redislabs.com/blog/zerobrane-studio-plugin-for-redis-lua-scripts) for [ZeroBrane Studio](http://studio.zerobrane.com/) integrates with LDB using [redis-lua](https://github.com/nrk/redis-lua). The following Lua code is a simplified example of how the plugin achieves that: +For example, the [Redis plugin](https://redis.com/blog/zerobrane-studio-plugin-for-redis-lua-scripts) for [ZeroBrane Studio](http://studio.zerobrane.com/) integrates with LDB using [redis-lua](https://github.com/nrk/redis-lua). 
The following Lua code is a simplified example of how the plugin achieves that: ```Lua local redis = require 'redis' ``` diff --git a/topics/sponsors.md b/topics/sponsors.md index 8a58dce9ba..ac676c40de 100644 --- a/topics/sponsors.md +++ b/topics/sponsors.md @@ -1,7 +1,7 @@ Redis Sponsors === -Starting from June 2015 the work [Salvatore Sanfilippo](http://twitter.com/antirez) is doing in order to develop Redis is sponsored by [Redis Labs](https://redislabs.com). +From 2015 to June 2020, the work Salvatore Sanfilippo was doing in order to develop Redis was sponsored by [Redis Ltd.](https://redis.com) As of June 2020, Redis Ltd. sponsors the Redis project [governance](/topics/governance). Past sponsorships: @@ -22,7 +22,7 @@ Also thanks to the following people or organizations that donated to the Project * [Brad Jasper](http://bradjasper.com/) * [Mrkris](http://www.mrkris.com/) -We are grateful to [Redis Labs](http://redislabs.com), [Pivotal](http://gopivotal.com), [VMware](http://vmware.com) and to the other companies and people that donated to the Redis project. Thank you. +We are grateful to [Redis Ltd.](http://redis.com), [Pivotal](http://gopivotal.com), [VMware](http://vmware.com) and to the other companies and people that donated to the Redis project. Thank you. ## redis.io @@ -32,8 +32,7 @@ transferred its copyright to Salvatore Sanfilippo. They also sponsored the initial implementation of this site by [Damian Janowski](https://twitter.com/djanowski) and [Michel -Martens](https://twitter.com/soveran). Damian and Michel remain the current -maintainers. +Martens](https://twitter.com/soveran). The `redis.io` domain was donated for a few years to the project by [I Want My Name](https://iwantmyname.com). diff --git a/topics/trademark.md b/topics/trademark.md index 24452afdba..ea46574891 100644 --- a/topics/trademark.md +++ b/topics/trademark.md @@ -37,11 +37,11 @@ or Logo other than as expressly described as permitted above, is not permitted b 6.
**GENERAL USE INFORMATION.** * a. Attribution. Any permitted use of the Mark or Logo, as indicated above, should comply with the following provisions: * i. You should add the TM mark (™) and an asterisk (`*`) to the first mention of the word "Redis" as part of or in connection with a product name. - * ii. Whenever "Redis™`*`" is shown - add the following legend (with an asterisk) in a noticeable and readable format: "`*` Redis is a trademark of Redis Labs Ltd. Any rights therein are reserved to Redis Labs Ltd. Any use by `<`company XYZ`>` is for referential purposes only and does not indicate any sponsorship, endorsement or affiliation between Redis and `<`company XYZ`>`";. + * ii. Whenever "Redis™`*`" is shown - add the following legend (with an asterisk) in a noticeable and readable format: "`*` Redis is a trademark of Redis Ltd. Any rights therein are reserved to Redis Ltd. Any use by `<`company XYZ`>` is for referential purposes only and does not indicate any sponsorship, endorsement or affiliation between Redis and `<`company XYZ`>`";. * iii. Sections i. And ii. above apply to any appearance of the word "Redis" in: (a) any web page, gated or un-gated; (b) any marketing collateral, white paper, or other promotional material, whether printed or electronic; and (c) any advertisement, in any format. * b. Capitalization. Always distinguish the Mark from surrounding text with at least initial capital letters or in all capital letters, e.g., as Redis or REDIS. * c. Adjective. Always use the Mark as an adjective modifying a noun, such as “the Redis software.” * d. Do not make any changes to the Logo. This means you may not add decorative elements, change the colors, change the proportions, distort it, add elements or combine it with other logos. 7. **NOTIFY US OF ABUSE.** Do not make any changes to the Logo. This means you may not add decorative elements, change the colors, change the proportions, distort it, add elements or combine it with other logos. -8. 
**MORE QUESTIONS?** If you have questions about this policy, or wish to request a license for any uses that are not specifically authorized in this policy, please contact us at legal@redislabs.com. +8. **MORE QUESTIONS?** If you have questions about this policy, or wish to request a license for any uses that are not specifically authorized in this policy, please contact us at legal@redis.com. From 46db9136724f1a2dc48d5be8ba634a3bf758d5ec Mon Sep 17 00:00:00 2001 From: M Sazzadul Hoque <7600764+sazzad16@users.noreply.github.com> Date: Thu, 12 Aug 2021 05:30:12 +0600 Subject: [PATCH 024/813] Update active Java clients (#1623) --- clients.json | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/clients.json b/clients.json index edf79e6165..3a735b59d4 100644 --- a/clients.json +++ b/clients.json @@ -257,8 +257,7 @@ "url": "https://code.google.com/p/jredis/", "repository": "https://github.com/alphazero/jredis", "description": "", - "authors": ["SunOf27"], - "active": true + "authors": ["SunOf27"] }, { @@ -299,7 +298,8 @@ "language": "Java", "repository": "https://github.com/vert-x3/vertx-redis-client", "description": "The Vert.x Redis client provides an asynchronous API to interact with a Redis data-structure server.", - "authors": ["pmlopes"] + "authors": ["pmlopes"], + "active": true }, { @@ -1139,8 +1139,7 @@ "language": "Java", "repository": "https://github.com/caoxinyu/RedisClient", "description": "redis client GUI tool", - "authors": [], - "active": true + "authors": [] }, { @@ -1890,8 +1889,7 @@ "language": "Java", "repository": "https://github.com/virendradhankar/viredis", "description": "A simple and small redis client for java.", - "authors": [], - "active": true + "authors": [] }, { From 3554566ce11d98ad8595b9d7ac3be3162cd96354 Mon Sep 17 00:00:00 2001 From: Eduardo Semprebon Date: Tue, 17 Aug 2021 10:46:23 +0200 Subject: [PATCH 025/813] Add SORT_RO documentation 
(#1621) --- commands.json | 56 ++++++++++++++++++++++++++++++++++++++++++- commands/georadius.md | 6 ++--- commands/sort.md | 3 +++ commands/sort_ro.md | 17 +++++++++++++ 4 files changed, 78 insertions(+), 4 deletions(-) create mode 100644 commands/sort_ro.md diff --git a/commands.json b/commands.json index fa8bf4faf3..66aa870456 100644 --- a/commands.json +++ b/commands.json @@ -3796,7 +3796,7 @@ }, "SORT": { "summary": "Sort the elements in a list, set or sorted set", - "complexity": "O(N+M*log(M)) where N is the number of elements in the list or set to sort, and M the number of returned elements. When the elements are not sorted, complexity is currently O(N) as there is a copy step that will be avoided in next releases.", + "complexity": "O(N+M*log(M)) where N is the number of elements in the list or set to sort, and M the number of returned elements. When the elements are not sorted, complexity is O(N).", "arguments": [ { "name": "key", @@ -3854,6 +3854,60 @@ "since": "1.0.0", "group": "generic" }, + "SORT_RO": { + "summary": "Sort the elements in a list, set or sorted set. Read-only variant of SORT.", + "complexity": "O(N+M*log(M)) where N is the number of elements in the list or set to sort, and M the number of returned elements. 
When the elements are not sorted, complexity is O(N).", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "command": "BY", + "name": "pattern", + "type": "pattern", + "optional": true + }, + { + "command": "LIMIT", + "name": [ + "offset", + "count" + ], + "type": [ + "integer", + "integer" + ], + "optional": true + }, + { + "command": "GET", + "name": "pattern", + "type": "string", + "optional": true, + "multiple": true + }, + { + "name": "order", + "type": "enum", + "enum": [ + "ASC", + "DESC" + ], + "optional": true + }, + { + "name": "sorting", + "type": "enum", + "enum": [ + "ALPHA" + ], + "optional": true + } + ], + "since": "7.0.0", + "group": "generic" + }, "SPOP": { "summary": "Remove and return one or multiple random members from a set", "complexity": "Without the count argument O(1), otherwise O(N) where N is the value of the passed count.", diff --git a/commands/georadius.md b/commands/georadius.md index cecad20f64..e93737c8e0 100644 --- a/commands/georadius.md +++ b/commands/georadius.md @@ -52,11 +52,11 @@ So for example the command `GEORADIUS Sicily 15 37 200 km WITHCOORD WITHDIST` wi ["Palermo","190.4424",["13.361389338970184","38.115556395496299"]] -## Read only variants +## Read-only variants -Since `GEORADIUS` and `GEORADIUSBYMEMBER` have a `STORE` and `STOREDIST` option they are technically flagged as writing commands in the Redis command table. For this reason read-only replicas will flag them, and Redis Cluster replicas will redirect them to the master instance even if the connection is in read only mode (See the `READONLY` command of Redis Cluster). +Since `GEORADIUS` and `GEORADIUSBYMEMBER` have a `STORE` and `STOREDIST` option they are technically flagged as writing commands in the Redis command table. For this reason read-only replicas will flag them, and Redis Cluster replicas will redirect them to the master instance even if the connection is in read-only mode (see the `READONLY` command of Redis Cluster). 
-Breaking the compatibility with the past was considered but rejected, at least for Redis 4.0, so instead two read only variants of the commands were added. They are exactly like the original commands but refuse the `STORE` and `STOREDIST` options. The two variants are called `GEORADIUS_RO` and `GEORADIUSBYMEMBER_RO`, and can safely be used in replicas. +Breaking the compatibility with the past was considered but rejected, at least for Redis 4.0, so instead two read-only variants of the commands were added. They are exactly like the original commands but refuse the `STORE` and `STOREDIST` options. The two variants are called `GEORADIUS_RO` and `GEORADIUSBYMEMBER_RO`, and can safely be used in replicas. Both commands were introduced in Redis 3.2.10 and Redis 4.0.0 respectively. diff --git a/commands/sort.md b/commands/sort.md index 28e8bc681d..d8994648e6 100644 --- a/commands/sort.md +++ b/commands/sort.md @@ -1,5 +1,8 @@ Returns or stores the elements contained in the [list][tdtl], [set][tdts] or [sorted set][tdtss] at `key`. + +Since Redis 7.0.0, there is also the `SORT_RO` read-only variant of this command. + By default, sorting is numeric and elements are compared by their value interpreted as double precision floating point number. This is `SORT` in its simplest form: diff --git a/commands/sort_ro.md b/commands/sort_ro.md new file mode 100644 index 0000000000..d15303b8db --- /dev/null +++ b/commands/sort_ro.md @@ -0,0 +1,17 @@ +Read-only variant of the `SORT` command. It is exactly like the original `SORT` but refuses the `STORE` option and can safely be used in read-only replicas. + +Since the original `SORT` has a `STORE` option it is technically flagged as a writing command in the Redis command table. For this reason read-only replicas in a Redis Cluster will redirect it to the master instance even if the connection is in read-only mode (see the `READONLY` command of Redis Cluster). 
+ +Since Redis 7.0.0, the `SORT_RO` variant was introduced in order to allow `SORT` behavior in read-only replicas without breaking compatibility on command flags. + +See original `SORT` for more details. + +@examples + +``` +SORT_RO mylist BY weight_*->fieldname GET object_*->fieldname +``` + +@return + +@array-reply: a list of sorted elements. From 94a0e4846bf18a3f6c1909c4b6d116bedaf79c59 Mon Sep 17 00:00:00 2001 From: Karuppiah Natarajan Date: Sat, 14 Aug 2021 19:45:55 +0530 Subject: [PATCH 026/813] Add missing hyperlinks to different sections of the sentinel doc Signed-off-by: Karuppiah Natarajan --- topics/sentinel.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/topics/sentinel.md b/topics/sentinel.md index c821e20f63..4e84fc556d 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -82,7 +82,7 @@ Fundamental things to know about Sentinel before deploying 3. Sentinel + Redis distributed system does not guarantee that acknowledged writes are retained during failures, since Redis uses asynchronous replication. However there are ways to deploy Sentinel that make the window to lose writes limited to certain moments, while there are other less secure ways to deploy it. 4. You need Sentinel support in your clients. Popular client libraries have Sentinel support, but not all. 5. There is no HA setup which is safe if you don't test from time to time in development environments, or even better if you can, in production environments, if they work. You may have a misconfiguration that will become apparent only when it's too late (at 3am when your master stops working). -6. **Sentinel, Docker, or other forms of Network Address Translation or Port Mapping should be mixed with care**: Docker performs port remapping, breaking Sentinel auto discovery of other Sentinel processes and the list of replicas for a master. Check the section about Sentinel and Docker later in this document for more information. +6. 
**Sentinel, Docker, or other forms of Network Address Translation or Port Mapping should be mixed with care**: Docker performs port remapping, breaking Sentinel auto discovery of other Sentinel processes and the list of replicas for a master. Check the [section about _Sentinel and Docker_](#sentinel-docker-nat-and-possible-issues) later in this document for more information. Configuring Sentinel --- @@ -166,7 +166,7 @@ Configuration parameters can be modified at runtime: * Master-specific configuration parameters are modified using `SENTINEL SET`. * Global configuration parameters are modified using `SENTINEL CONFIG SET`. -See the **Reconfiguring Sentinel at runtime** section for more information. +See the [_Reconfiguring Sentinel at runtime_ section](#reconfiguring-sentinel-at-runtime) for more information. Example Sentinel deployments --- @@ -445,7 +445,7 @@ Using host names may be useful when clients use TLS to connect to instances and A quick tutorial === -In the next sections of this document, all the details about Sentinel API, +In the next sections of this document, all the details about [_Sentinel API_](#sentinel-api), configuration and semantics will be covered incrementally. However for people that want to play with the system ASAP, this section is a tutorial that shows how to configure and interact with 3 Sentinel instances. @@ -478,7 +478,7 @@ Once you start the three Sentinels, you'll see a few messages they log, like: +monitor master mymaster 127.0.0.1 6379 quorum 2 This is a Sentinel event, and you can receive this kind of events via Pub/Sub -if you `SUBSCRIBE` to the event name as specified later. +if you `SUBSCRIBE` to the event name as specified later in [_Pub/Sub Messages_ section](#pubsub-messages). Sentinel generates and logs different events during failure detection and failover. 
@@ -808,7 +808,7 @@ master, and another replica S2 in another data center, it is possible to set S1 with a priority of 10 and S2 with a priority of 100, so that if the master fails and both S1 and S2 are available, S1 will be preferred. -For more information about the way replicas are selected, please check the **replica selection and priority** section of this documentation. +For more information about the way replicas are selected, please check the [_Replica selection and priority_ section](#replica-selection-and-priority) of this documentation. Sentinel and Redis authentication --- From 9a53d7038b51319909b03aecd6112f53cdf44faa Mon Sep 17 00:00:00 2001 From: Yossi Gottlieb Date: Mon, 23 Aug 2021 20:09:25 +0300 Subject: [PATCH 027/813] Sentinel: rephrase replica-priority paragraph. (#1630) --- topics/sentinel.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/topics/sentinel.md b/topics/sentinel.md index 4e84fc556d..5a20315c3f 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -1044,11 +1044,11 @@ and sorts it based on the above criteria, in the following order. 2. If the priority is the same, the replication offset processed by the replica is checked, and the replica that received more data from the master is selected. 3. If multiple replicas have the same priority and processed the same data from the master, a further check is performed, selecting the replica with the lexicographically smaller run ID. Having a lower run ID is not a real advantage for a replica, but is useful in order to make the process of replica selection more deterministic, instead of resorting to select a random replica. -Redis masters (that may be turned into replicas after a failover), and replicas, all -must be configured with a `replica-priority` if there are machines to be strongly -preferred. 
Otherwise all the instances can run with the default run ID (which -is the suggested setup, since it is far more interesting to select the replica -by replication offset). +In most cases, `replica-priority` does not need to be set explicitly so all +instances will use the same default value. If there is a particular fail-over +preference, `replica-priority` must be set on all instances, including masters, +as a master may become a replica at some future point in time - and it will then +need the proper `replica-priority` settings. A Redis instance can be configured with a special `replica-priority` of zero in order to be **never selected** by Sentinels as the new master. From 845a28d3eb4701da8f17cce64223ee1a2c2afcc6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Viktor=20S=C3=B6derqvist?= Date: Mon, 23 Aug 2021 21:06:46 +0200 Subject: [PATCH 028/813] Notes on manual failover to a new (just upgraded) replica (#1633) Some notes are added to CLUSTER FAILOVER command and to the Cluster Tutorial document, regarding manual failover to a new, just added, replica. Co-authored-by: Bjorn Svensson Co-authored-by: Itamar Haber --- commands/cluster-failover.md | 13 ++++++++----- topics/cluster-tutorial.md | 9 ++++++++- 2 files changed, 16 insertions(+), 6 deletions(-) diff --git a/commands/cluster-failover.md b/commands/cluster-failover.md index 45c584ba44..911eaea894 100644 --- a/commands/cluster-failover.md +++ b/commands/cluster-failover.md @@ -53,11 +53,14 @@ Because of this the **TAKEOVER** option should be used with care. ## Implementation details and notes -`CLUSTER FAILOVER`, unless the **TAKEOVER** option is specified, does not -execute a failover synchronously, it only *schedules* a manual failover, -bypassing the failure detection stage, so to check if the failover actually -happened, `CLUSTER NODES` or other means should be used in order to verify -that the state of the cluster changes after some time the command was sent. 
+* `CLUSTER FAILOVER`, unless the **TAKEOVER** option is specified, does not execute a failover synchronously. + It only *schedules* a manual failover, bypassing the failure detection stage. +* An `OK` reply is no guarantee that the failover will succeed. +* A replica can only be promoted to a master if it is known as a replica by a majority of the masters in the cluster. + If the replica is a new node that has just been added to the cluster (for example after upgrading it), it may not yet be known to all the masters in the cluster. + To check that the masters are aware of a new replica, you can send `CLUSTER NODES` or `CLUSTER REPLICAS` to each of the master nodes and check that it appears as a replica, before sending `CLUSTER FAILOVER` to the replica. +* To check that the failover has actually happened you can use `ROLE`, `INFO REPLICATION` (which indicates "role:master" after successful failover), or `CLUSTER NODES` to verify that the state of the cluster has changed sometime after the command was sent. +* To check if the failover has failed, check the replica's log for "Manual failover timed out", which is logged if the replica has given up after a few seconds. @return diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index 461fd51ffc..b824fe9dfb 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -804,6 +804,12 @@ the failover starts, and the old master is informed about the configuration switch. When the clients are unblocked on the old master, they are redirected to the new master. +Note: + +* To promote a replica to master, it must first be known as a replica by a majority of the masters in the cluster. + Otherwise, it cannot win the failover election. 
+ If the replica has just been added to the cluster (see [Adding a new node as a replica](#adding-a-new-node-as-a-replica) below), you may need to wait a while before sending the `CLUSTER FAILOVER` command, to make sure the masters in cluster are aware of the new replica. + Adding a new node --- @@ -991,7 +997,8 @@ one is not available. Upgrading masters is a bit more complex, and the suggested procedure is: -1. Use CLUSTER FAILOVER to trigger a manual failover of the master to one of its slaves (see the "Manual failover" section of this documentation). +1. Use `CLUSTER FAILOVER` to trigger a manual failover of the master to one of its replicas. + (See the [Manual failover](#manual-failover) section in this document.) 2. Wait for the master to turn into a slave. 3. Finally upgrade the node as you do for slaves. 4. If you want the master to be the node you just upgraded, trigger a new manual failover in order to turn back the upgraded node into a master. From e94992395fae74f81b02f4ca6989038bfaf6476b Mon Sep 17 00:00:00 2001 From: Madelyn Olson Date: Mon, 30 Aug 2021 23:38:19 -0700 Subject: [PATCH 029/813] Clarify wording around number of pubsub numpat --- commands/pubsub.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/commands/pubsub.md b/commands/pubsub.md index 8a5d5cdfe7..9d86bc9bc6 100644 --- a/commands/pubsub.md +++ b/commands/pubsub.md @@ -39,9 +39,9 @@ will just return an empty list. # `PUBSUB NUMPAT` -Returns the number of subscriptions to patterns (that are performed using the -`PSUBSCRIBE` command). Note that this is not just the count of clients subscribed -to patterns but the total number of patterns all the clients are subscribed to. +Returns the number of unique patterns that are subscribed to by clients (that are performed using the +`PSUBSCRIBE` command). Note that this is not the count of clients subscribed +to patterns but the total number of unique patterns all the clients are subscribed to. 
@return From 1f5754db4ff42ae569aa2e28b52e453f80583b64 Mon Sep 17 00:00:00 2001 From: Rudolf Zundel Date: Tue, 31 Aug 2021 12:29:37 +0200 Subject: [PATCH 030/813] Escape markdown in my_random_value (#1635) --- topics/distlock.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/distlock.md b/topics/distlock.md index 48e6672e78..139a738a04 100644 --- a/topics/distlock.md +++ b/topics/distlock.md @@ -80,7 +80,7 @@ To acquire the lock, the way to go is the following: SET resource_name my_random_value NX PX 30000 The command will set the key only if it does not already exist (NX option), with an expire of 30000 milliseconds (PX option). -The key is set to a value “my_random_value”. This value must be unique across all clients and all lock requests. +The key is set to a value “my\_random\_value”. This value must be unique across all clients and all lock requests. Basically the random value is used in order to release the lock in a safe way, with a script that tells Redis: remove the key only if it exists and the value stored at the key is exactly the one I expect to be. This is accomplished by the following Lua script: From 8dbd962e6b3d35d1e764ec07074284f1220f0584 Mon Sep 17 00:00:00 2001 From: Vijay Prasanna Date: Tue, 31 Aug 2021 16:05:33 +0530 Subject: [PATCH 031/813] update readme: small typo fix (#1634) --- topics/rediscli.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/rediscli.md b/topics/rediscli.md index 86a944bca3..65fce1f958 100644 --- a/topics/rediscli.md +++ b/topics/rediscli.md @@ -114,7 +114,7 @@ you can specify a certificate and a corresponding private key using `--cert` and There are two ways you can use `redis-cli` in order to get the input from other commands (from the standard input, basically). One is to use as last argument the payload we read from *stdin*. 
For example, in order to set a Redis key -to the content of the file `/etc/services` if my computer, I can use the `-x` +to the content of the file `/etc/services` of my computer, I can use the `-x` option: $ redis-cli -x set foo < /etc/services From d9410a6a1f039885b0e6b82cd51c653c00a249c4 Mon Sep 17 00:00:00 2001 From: Leibale Eidelman Date: Thu, 2 Sep 2021 10:50:17 -0400 Subject: [PATCH 032/813] Update clients.json (#1639) --- clients.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 3a735b59d4..fe2e41a99f 100644 --- a/clients.json +++ b/clients.json @@ -804,7 +804,7 @@ "description": "Recommended client for node.", "authors": ["mranney"], "recommended": true, - "active": false + "active": true }, { From 12f57795981199aeb49dc8c1e3e3df52d48c2a92 Mon Sep 17 00:00:00 2001 From: Binbin Date: Thu, 9 Sep 2021 19:43:07 +0800 Subject: [PATCH 033/813] Add new LMPOP and BLMPOP commands. (#1636) Co-authored-by: Oran Agra Co-authored-by: yoav-steinberg --- commands.json | 68 ++++++++++++++++++++++++++++++++++++++++++++++ commands/blmpop.md | 30 ++++++++++++++++++++ commands/lmpop.md | 34 +++++++++++++++++++++++ 3 files changed, 132 insertions(+) create mode 100644 commands/blmpop.md create mode 100644 commands/lmpop.md diff --git a/commands.json b/commands.json index 66aa870456..ec4daecb8c 100644 --- a/commands.json +++ b/commands.json @@ -397,6 +397,74 @@ "since": "6.2.0", "group": "list" }, + "LMPOP": { + "summary": "Pop elements from a list", + "complexity": "O(N+M) where N is the number of provided keys and M is the number of elements returned.", + "arguments": [ + { + "name": "numkeys", + "type": "integer" + }, + { + "name": "key", + "type": "key", + "optional": true, + "multiple": true + }, + { + "name": "where", + "type": "enum", + "enum": [ + "LEFT", + "RIGHT" + ] + }, + { + "command": "COUNT", + "name": "count", + "type": "integer", + "optional": true + } + ], + "since": "7.0.0", + "group": "list" + }, + 
"BLMPOP": { + "summary": "Pop elements from a list, or block until one is available", + "complexity": "O(N+M) where N is the number of provided keys and M is the number of elements returned.", + "arguments": [ + { + "name": "timeout", + "type": "double" + }, + { + "name": "numkeys", + "type": "integer" + }, + { + "name": "key", + "type": "key", + "optional": true, + "multiple": true + }, + { + "name": "where", + "type": "enum", + "enum": [ + "LEFT", + "RIGHT" + ] + }, + { + "command": "COUNT", + "name": "count", + "type": "integer", + "optional": true + } + ], + "since": "7.0.0", + "group": "list" + }, "BZPOPMIN": { "summary": "Remove and return the member with the lowest score from one or more sorted sets, or block until one is available", "complexity": "O(log(N)) with N being the number of elements in the sorted set.", diff --git a/commands/blmpop.md b/commands/blmpop.md new file mode 100644 index 0000000000..fd31eb8ae4 --- /dev/null +++ b/commands/blmpop.md @@ -0,0 +1,30 @@ +`BLMPOP` is the blocking variant of `LMPOP`. + +When any of the lists contains elements, this command behaves exactly like `LMPOP`. +When used inside a `MULTI`/`EXEC` block, this command behaves exactly like `LMPOP`. +When all lists are empty, Redis will block the connection until another client pushes to it or until the `timeout` (a double value specifying the maximum number of seconds to block) elapses. +A `timeout` of zero can be used to block indefinitely. + +See `LMPOP` for more information. + +@return + +@array-reply: specifically: + +* A `nil` when no element could be popped, and timeout is reached. +* A two-element array with the first element being the name of the key from which elements were popped, and the second element is an array of elements. 
+ +@examples + +```cli +DEL mylist mylist2 +LPUSH mylist "one" "two" "three" "four" "five" +BLMPOP 1 1 mylist LEFT COUNT 2 +LRANGE mylist 0 -1 +LPUSH mylist2 "a" "b" "c" "d" "e" +BLMPOP 1 2 mylist mylist2 LEFT COUNT 3 +LRANGE mylist 0 -1 +BLMPOP 1 2 mylist mylist2 RIGHT COUNT 10 +LRANGE mylist2 0 -1 +EXISTS mylist mylist2 +``` diff --git a/commands/lmpop.md b/commands/lmpop.md new file mode 100644 index 0000000000..6caa35f3de --- /dev/null +++ b/commands/lmpop.md @@ -0,0 +1,34 @@ +Pops one or more elements from the first non-empty list key from the list of provided key names. + +LMPOP and BLMPOP are similar to the following, more limited, commands: +- `LPOP` or `RPOP` which take only one key, and can return multiple elements. +- `BLPOP` or `BRPOP` which take multiple keys, but return only one element from just one key. + +See `BLMPOP` for the blocking variant of this command. + +Elements are popped from either the left or right of the first non-empty list based on the passed argument. +The number of returned elements is limited to the lower between the non-empty list's length, and the count argument (which defaults to 1). + +@return + +@array-reply: specifically: + +* A `nil` when no element could be popped. +* A two-element array with the first element being the name of the key from which elements were popped, and the second element is an array of elements. 
+ +@examples + +```cli +LMPOP 2 non1 non2 LEFT COUNT 10 +LPUSH mylist "one" "two" "three" "four" "five" +LMPOP 1 mylist LEFT +LRANGE mylist 0 -1 +LMPOP 1 mylist RIGHT COUNT 10 +LPUSH mylist "one" "two" "three" "four" "five" +LPUSH mylist2 "a" "b" "c" "d" "e" +LMPOP 2 mylist mylist2 right count 3 +LRANGE mylist 0 -1 +LMPOP 2 mylist mylist2 right count 5 +LMPOP 2 mylist mylist2 right count 10 +EXISTS mylist mylist2 +``` From 80cd34afffa2f4880b1368c3226296fecbfab861 Mon Sep 17 00:00:00 2001 From: Aslan Dukaev Date: Fri, 10 Sep 2021 12:05:49 +0300 Subject: [PATCH 034/813] Format Ruby examples (#1136) Formatted Ruby examples on the pages memory-optimization and pipelining according to common Ruby coding style. --- topics/memory-optimization.md | 46 +++++++++++++++++------------------ topics/pipelining.md | 38 ++++++++++++++--------------- 2 files changed, 42 insertions(+), 42 deletions(-) diff --git a/topics/memory-optimization.md b/topics/memory-optimization.md index 3b09a9e0fb..ff252e20fb 100644 --- a/topics/memory-optimization.md +++ b/topics/memory-optimization.md @@ -117,41 +117,41 @@ I used the following Ruby program to test how this works: require 'rubygems' require 'redis' - UseOptimization = true + USE_OPTIMIZATION = true def hash_get_key_field(key) - s = key.split(":") - if s[1].length > 2 - {:key => s[0]+":"+s[1][0..-3], :field => s[1][-2..-1]} - else - {:key => s[0]+":", :field => s[1]} - end + s = key.split(':') + if s[1].length > 2 + { key: s[0] + ':' + s[1][0..-3], field: s[1][-2..-1] } + else + { key: s[0] + ':', field: s[1] } + end end - def hash_set(r,key,value) - kf = hash_get_key_field(key) - r.hset(kf[:key],kf[:field],value) + def hash_set(r, key, value) + kf = hash_get_key_field(key) + r.hset(kf[:key], kf[:field], value) end - def hash_get(r,key,value) - kf = hash_get_key_field(key) - r.hget(kf[:key],kf[:field],value) + def hash_get(r, key, value) + kf = hash_get_key_field(key) + r.hget(kf[:key], kf[:field], value) end r = Redis.new - 
(0..100000).each{|id| - key = "object:#{id}" - if UseOptimization - hash_set(r,key,"val") - else - r.set(key,"val") - end - } + (0..100_000).each do |id| + key = "object:#{id}" + if USE_OPTIMIZATION + hash_set(r, key, 'val') + else + r.set(key, 'val') + end + end This is the result against a 64 bit instance of Redis 2.2: - * UseOptimization set to true: 1.7 MB of used memory - * UseOptimization set to false; 11 MB of used memory + * USE_OPTIMIZATION set to true: 1.7 MB of used memory + * USE_OPTIMIZATION set to false; 11 MB of used memory This is an order of magnitude, I think this makes Redis more or less the most memory efficient plain key value store out there. diff --git a/topics/pipelining.md b/topics/pipelining.md index a18b9bee9d..6b656b9fbf 100644 --- a/topics/pipelining.md +++ b/topics/pipelining.md @@ -89,33 +89,33 @@ In the following benchmark we'll use the Redis Ruby client, supporting pipelinin require 'redis' def bench(descr) - start = Time.now - yield - puts "#{descr} #{Time.now-start} seconds" + start = Time.now + yield + puts "#{descr} #{Time.now - start} seconds" end def without_pipelining - r = Redis.new - 10000.times { - r.ping - } + r = Redis.new + 10_000.times do + r.ping + end end def with_pipelining - r = Redis.new - r.pipelined { - 10000.times { - r.ping - } - } + r = Redis.new + r.pipelined do + 10_000.times do + r.ping + end + end end - bench("without pipelining") { - without_pipelining - } - bench("with pipelining") { - with_pipelining - } + bench('without pipelining') do + without_pipelining + end + bench('with pipelining') do + with_pipelining + end Running the above simple script yields the following figures on my Mac OS X system, running over the loopback interface, where pipelining will provide the smallest improvement as the RTT is already pretty low: From 8ec59f99486316505e36c5deb4664146d1420c7e Mon Sep 17 00:00:00 2001 From: Bhaskar Saraogi Date: Fri, 10 Sep 2021 15:00:18 +0530 Subject: [PATCH 035/813] Fix grammar in pipelining 
doc for easier consumption (#1024) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Viktor Söderqvist --- topics/pipelining.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/pipelining.md b/topics/pipelining.md index 6b656b9fbf..66935d35dd 100644 --- a/topics/pipelining.md +++ b/topics/pipelining.md @@ -144,12 +144,12 @@ in the same physical machine: END After all if both the Redis process and the benchmark are running in the same -box, isn't this just copying messages in memory from one place to another without +box, isn't it just copying messages in memory from one place to another without any actual latency or networking involved? The reason is that processes in a system are not always running, actually it is -the kernel scheduler that let the process run, so what happens is that, for -instance, the benchmark is allowed to run, reads the reply from the Redis server +the kernel scheduler that lets the process run. So, for +instance, when the benchmark is allowed to run, it reads the reply from the Redis server (related to the last command executed), and writes a new command. The command is now in the loopback interface buffer, but in order to be read by the server, the kernel should schedule the server process (currently blocked in a system call) From be38fe6f1c58a6f21dfbdc4398d378884d7c914e Mon Sep 17 00:00:00 2001 From: Huang Zhw Date: Fri, 10 Sep 2021 17:35:37 +0800 Subject: [PATCH 036/813] Add INFO metrics {total,current}_active_defrag_time (#1643) Add info metrics total_active_defrag_time and current_active_defrag_time introduced in redis#9377. 
--- commands/info.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/commands/info.md b/commands/info.md index ea88b234c6..aae7abf035 100644 --- a/commands/info.md +++ b/commands/info.md @@ -267,6 +267,8 @@ Here is the meaning of all fields in the **stats** section: * `active_defrag_key_hits`: Number of keys that were actively defragmented * `active_defrag_key_misses`: Number of keys that were skipped by the active defragmentation process +* `total_active_defrag_time`: Total time memory fragmentation was over the limit, in milliseconds +* `current_active_defrag_time`: The time passed since memory fragmentation last was over the limit, in milliseconds * `tracking_total_keys`: Number of keys being tracked by the server * `tracking_total_items`: Number of items, that is the sum of clients number for each key, that are being tracked From df6e3889af5bd6a220d83527115d8db4dceff686 Mon Sep 17 00:00:00 2001 From: "Meir Shpilraien (Spielrein)" Date: Fri, 10 Sep 2021 16:51:42 +0300 Subject: [PATCH 037/813] Update Lua RESP3 reply format (#1619) Added: * Big number reply format * Verbatim String reply format * Update commands/eval.md Co-authored-by: Itamar Haber --- commands/eval.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/commands/eval.md b/commands/eval.md index 11026de51c..8f12b61d6c 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -647,8 +647,12 @@ At this point the new conversions are available, specifically: * Redis true reply -> Lua true boolean value. * Redis false reply -> Lua false boolean value. * Redis double reply -> Lua table with a single `score` field containing a Lua number representing the double value. +* Redis big number reply -> Lua table with a single `big_number` field containing a Lua string representing the big number value. 
+* Redis verbatim string reply -> Lua table with a single `verbatim_string` field containing a Lua table with two fields, `string` and `format`, representing the verbatim string and verbatim format respectively. * All the RESP2 old conversions still apply. +Note: the big number and verbatim replies are only available in Redis 7 or greater. Also, presently RESP3 attributes are not supported in Lua. + **Lua to Redis** conversion table specific for RESP3. * Lua boolean -> Redis boolean true or false. **Note that this is a change compared to the RESP2 mode**, where returning true from Lua returned the number 1 to the Redis client, and returning false used to return NULL. From de7eeb2ccebe2469e05c548c2782b308e8d8e8ad Mon Sep 17 00:00:00 2001 From: Bjorn Svensson Date: Fri, 10 Sep 2021 15:59:26 +0200 Subject: [PATCH 038/813] Update MONITOR command (#1641) Updating MONITOR docs to match that some commands are now included in the output. Changed since 6.2.4 via redis/redis#8859 EXEC seems to always have been logged. (Verified in 3.0, 5.0, 6.0 and 6.2.) --- commands/monitor.md | 19 ++++++++----------- 1 file changed, 8 insertions(+), 11 deletions(-) diff --git a/commands/monitor.md b/commands/monitor.md index 66dbdb9c59..6b00ee4e8d 100644 --- a/commands/monitor.md +++ b/commands/monitor.md @@ -13,8 +13,9 @@ $ redis-cli monitor 1339518087.877697 [0 127.0.0.1:60866] "dbsize" 1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6" 1339518096.506257 [0 127.0.0.1:60866] "get" "x" -1339518099.363765 [0 127.0.0.1:60866] "del" "x" -1339518100.544926 [0 127.0.0.1:60866] "get" "x" +1339518099.363765 [0 127.0.0.1:60866] "eval" "return redis.call('set','x','7')" "0" +1339518100.363799 [0 lua] "set" "x" "7" +1339518100.544926 [0 127.0.0.1:60866] "del" "x" ``` Use `SIGINT` (Ctrl-C) to stop a `MONITOR` stream running via `redis-cli`. @@ -42,15 +43,10 @@ via `telnet`. 
## Commands not logged by MONITOR -Because of security concerns, all administrative commands are not logged -by `MONITOR`'s output. +Because of security concerns, no administrative commands are logged +by `MONITOR`'s output and sensitive data is redacted in the command `AUTH`. -Furthermore, the following commands are also not logged: - - * `AUTH` - * `EXEC` - * `HELLO` - * `QUIT` +Furthermore, the command `QUIT` is also not logged. ## Cost of running MONITOR @@ -91,5 +87,6 @@ flow. @history -* `>= 6.2`: `RESET` can be called to exit monitor mode. * `>= 6.0`: `AUTH` excluded from the command's output. +* `>= 6.2`: `RESET` can be called to exit monitor mode. +* `>= 6.2.4`: `AUTH`, `HELLO`, `EVAL`, `EVAL_RO`, `EVALSHA` and `EVALSHA_RO` included in the command's output. From b9155866aeb8fec9e1250e3ee433a41de88ab66f Mon Sep 17 00:00:00 2001 From: Huang Zhw Date: Mon, 13 Sep 2021 15:03:51 +0800 Subject: [PATCH 039/813] add bitfield_ro command (#1645) --- commands.json | 24 ++++++++++++++++++++++++ commands/bitfield_ro.md | 19 +++++++++++++++++++ 2 files changed, 43 insertions(+) create mode 100644 commands/bitfield_ro.md diff --git a/commands.json b/commands.json index ec4daecb8c..186f43df23 100644 --- a/commands.json +++ b/commands.json @@ -254,6 +254,30 @@ "since": "3.2.0", "group": "bitmap" }, + "BITFIELD_RO": { + "summary": "Perform arbitrary bitfield integer operations on strings. 
Read-only variant of BITFIELD", + "complexity": "O(1) for each subcommand specified", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "command": "GET", + "name": [ + "type", + "offset" + ], + "type": [ + "type", + "integer" + ], + "multiple": true + } + ], + "since": "6.2.0", + "group": "bitmap" + }, "BITOP": { "summary": "Perform bitwise operations between strings", "complexity": "O(N)", diff --git a/commands/bitfield_ro.md b/commands/bitfield_ro.md new file mode 100644 index 0000000000..94057a1183 --- /dev/null +++ b/commands/bitfield_ro.md @@ -0,0 +1,19 @@ +Read-only variant of the `BITFIELD` command. +It is like the original `BITFIELD` but only accepts `!GET` subcommand and can safely be used in read-only replicas. + +Since the original `BITFIELD` has `!SET` and `!INCRBY` options it is technically flagged as a writing command in the Redis command table. +For this reason read-only replicas in a Redis Cluster will redirect it to the master instance even if the connection is in read-only mode (see the `READONLY` command of Redis Cluster). + +Since Redis 6.2, the `BITFIELD_RO` variant was introduced in order to allow `BITFIELD` behavior in read-only replicas without breaking compatibility on command flags. + +See original `BITFIELD` for more details. + +@examples + +``` +BITFIELD_RO hello GET i8 16 +``` + +@return + +@array-reply: An array with each entry being the corresponding result of the subcommand given at the same position. 
From 2fd52e72b198dfbbf986899399092737fa3575e6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Viktor=20S=C3=B6derqvist?= Date: Mon, 13 Sep 2021 09:34:57 +0200 Subject: [PATCH 040/813] Add ASKING command (#1642) --- commands.json | 7 +++++++ commands/asking.md | 10 ++++++++++ 2 files changed, 17 insertions(+) create mode 100644 commands/asking.md diff --git a/commands.json b/commands.json index 186f43df23..c776a1d4a2 100644 --- a/commands.json +++ b/commands.json @@ -133,6 +133,13 @@ "since": "2.0.0", "group": "string" }, + "ASKING": { + "summary": "Sent by cluster clients after an -ASK redirect", + "complexity": "O(1)", + "arguments": [], + "since": "3.0.0", + "group": "cluster" + }, "AUTH": { "summary": "Authenticate to the server", "arguments": [ diff --git a/commands/asking.md b/commands/asking.md new file mode 100644 index 0000000000..d98643c25c --- /dev/null +++ b/commands/asking.md @@ -0,0 +1,10 @@ +When a cluster client receives an `-ASK` redirect, the `ASKING` command is sent to the target node followed by the command which was redirected. +This is normally done automatically by cluster clients. + +If an `-ASK` redirect is received during a transaction, only one ASKING command needs to be sent to the target node before sending the complete transaction to the target node. + +See [ASK redirection in the Redis Cluster Specification](/topics/cluster-spec#ask-redirection) for details. + +@return + +@simple-string-reply: `OK`. 
From dddae77217aa8d0e0d1d00cd14993032156aa705 Mon Sep 17 00:00:00 2001 From: yoav-steinberg Date: Mon, 13 Sep 2021 10:51:17 +0300 Subject: [PATCH 041/813] Temp workaround for https://github.com/redis/redis-io/issues/251 (#1646) --- commands.json | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/commands.json b/commands.json index c776a1d4a2..4f40282829 100644 --- a/commands.json +++ b/commands.json @@ -278,8 +278,7 @@ "type": [ "type", "integer" - ], - "multiple": true + ] } ], "since": "6.2.0", From 099aae476a57effc3ab55eb0e7a1c4add6f7e581 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Mon, 13 Sep 2021 18:34:06 +0300 Subject: [PATCH 042/813] Escapes keywords that are also commands --- commands/migrate.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/commands/migrate.md b/commands/migrate.md index 6559b1b441..77e1f824b6 100644 --- a/commands/migrate.md +++ b/commands/migrate.md @@ -48,9 +48,9 @@ uses pipelining in order to migrate multiple keys between instances without incurring in the round trip time latency and other overheads that there are when moving each key with a single `MIGRATE` call. -In order to enable this form, the `KEYS` option is used, and the normal *key* +In order to enable this form, the `!KEYS` option is used, and the normal *key* argument is set to an empty string. The actual key names will be provided -after the `KEYS` argument itself, like in the following example: +after the `!KEYS` argument itself, like in the following example: MIGRATE 192.168.1.34 6379 "" 0 5000 KEYS key1 key2 key3 @@ -60,17 +60,17 @@ just a single key exists. ## Options -* `COPY` -- Do not remove the key from the local instance. +* `!COPY` -- Do not remove the key from the local instance. * `REPLACE` -- Replace existing key on the remote instance. 
-* `KEYS` -- If the key argument is an empty string, the command will instead migrate all the keys that follow the `KEYS` option (see the above section for more info). -* `AUTH` -- Authenticate with the given password to the remote instance. +* `!KEYS` -- If the key argument is an empty string, the command will instead migrate all the keys that follow the `KEYS` option (see the above section for more info). +* `!AUTH` -- Authenticate with the given password to the remote instance. * `AUTH2` -- Authenticate with the given username and password pair (Redis 6 or greater ACL auth style). @history -* `>= 3.0.0`: Added the `COPY` and `REPLACE` options. -* `>= 3.0.6`: Added the `KEYS` option. -* `>= 4.0.7`: Added the `AUTH` option. +* `>= 3.0.0`: Added the `!COPY` and `REPLACE` options. +* `>= 3.0.6`: Added the `!KEYS` option. +* `>= 4.0.7`: Added the `!AUTH` option. * `>= 6.0.0`: Added the `AUTH2` option. @return From 93a2afb336b3456699d755a7c3d7b2ab27298a9a Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Wed, 15 Sep 2021 18:40:01 +0300 Subject: [PATCH 043/813] Escapes Redis commands --- commands/set.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/commands/set.md b/commands/set.md index 25611a17cd..280811aed5 100644 --- a/commands/set.md +++ b/commands/set.md @@ -13,7 +13,7 @@ The `SET` command supports a set of options that modify its behavior: * `NX` -- Only set the key if it does not already exist. * `XX` -- Only set the key if it already exist. * `KEEPTTL` -- Retain the time to live associated with the key. -* `GET` -- Return the old string stored at key, or nil if key did not exist. An error is returned and `SET` aborted if the value stored at key is not a string. +* `!GET` -- Return the old string stored at key, or nil if key did not exist. An error is returned and `SET` aborted if the value stored at key is not a string. 
Note: Since the `SET` command options can replace `SETNX`, `SETEX`, `PSETEX`, `GETSET`, it is possible that in future versions of Redis these commands will be deprecated and finally removed. @@ -23,7 +23,7 @@ Note: Since the `SET` command options can replace `SETNX`, `SETEX`, `PSETEX`, `G @nil-reply: `(nil)` if the `SET` operation was not performed because the user specified the `NX` or `XX` option but the condition was not met. -If the command is issued with the `GET` option, the above does not apply. It will instead reply as follows, regardless if the `SET` was actually performed: +If the command is issued with the `!GET` option, the above does not apply. It will instead reply as follows, regardless if the `SET` was actually performed: @bulk-string-reply: the old string value stored at key. @@ -34,8 +34,8 @@ If the command is issued with the `GET` option, the above does not apply. It wil * `>= 2.6.12`: Added the `EX`, `PX`, `NX` and `XX` options. * `>= 6.0`: Added the `KEEPTTL` option. -* `>= 6.2`: Added the `GET`, `EXAT` and `PXAT` option. -* `>= 7.0`: Allowed the `NX` and `GET` options to be used together. +* `>= 6.2`: Added the `!GET`, `EXAT` and `PXAT` option. +* `>= 7.0`: Allowed the `NX` and `!GET` options to be used together. @examples From 849bf62e1d42676b04abddd3078b8c10de356c58 Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Sat, 18 Sep 2021 01:26:00 +0300 Subject: [PATCH 044/813] Renames "Redis Labs" to "Redis Ltd." (#1650) --- topics/benchmarks.md | 2 +- topics/faq.md | 2 +- topics/governance.md | 14 +++++++------- topics/license.md | 2 +- 4 files changed, 10 insertions(+), 10 deletions(-) diff --git a/topics/benchmarks.md b/topics/benchmarks.md index 804a488c8e..25c672e5f3 100644 --- a/topics/benchmarks.md +++ b/topics/benchmarks.md @@ -455,7 +455,7 @@ Another one using a 64-bit box, a Xeon L5420 clocked at 2.5 GHz: There are several third-party tools that can be used for benchmarking Redis. 
Refer to each tool's documentation for more information about its goals and capabilities.

-* [memtier_benchmark](https://github.com/redislabs/memtier_benchmark) from [Redis Labs](https://twitter.com/RedisLabs) is a NoSQL Redis and Memcache traffic generation and benchmarking tool.
+* [memtier_benchmark](https://github.com/redislabs/memtier_benchmark) from [Redis Ltd.](https://twitter.com/RedisInc) is a NoSQL Redis and Memcache traffic generation and benchmarking tool.
* [rpc-perf](https://github.com/twitter/rpc-perf) from [Twitter](https://twitter.com/twitter) is a tool for benchmarking RPC services that supports Redis and Memcache.
* [YCSB](https://github.com/brianfrankcooper/YCSB) from [Yahoo @Yahoo](https://twitter.com/Yahoo) is a benchmarking framework with clients to many databases, including Redis.

diff --git a/topics/faq.md b/topics/faq.md
index c44a7c64a4..c8ea0ad1e5 100644
--- a/topics/faq.md
+++ b/topics/faq.md
@@ -39,7 +39,7 @@ If your real problem is not the total RAM needed, but the fact that you need
to split your data set into multiple Redis instances, please read the
[Partitioning page](/topics/partitioning) in this documentation for more info.

-Recently Redis Labs, the company sponsoring Redis developments, developed a
+Recently Redis Ltd., the company sponsoring Redis developments, developed a
"Redis on flash" solution that is able to use a mixed RAM/flash approach for
larger data sets with a biased access pattern. You may check their offering
for more information, however this feature is not part of the open source Redis
diff --git a/topics/governance.md b/topics/governance.md
index b7484fe1d7..fbb46d8857 100644
--- a/topics/governance.md
+++ b/topics/governance.md
@@ -4,7 +4,7 @@

Since 2009, the Redis open source project has become very successful and extremely popular.

-During this time, Salvatore Sanfilippo has led, managed, and maintained the project.
While contributors from Redis Labs and others have made significant contributions, the project never adopted a formal governance structure and de-facto was operating as a [BDFL](https://en.wikipedia.org/wiki/Benevolent_dictator_for_life)-style project.
+During this time, Salvatore Sanfilippo has led, managed, and maintained the project. While contributors from Redis Ltd. and others have made significant contributions, the project never adopted a formal governance structure and de-facto was operating as a [BDFL](https://en.wikipedia.org/wiki/Benevolent_dictator_for_life)-style project.

As Redis grows, matures, and continues to expand its user base, it becomes increasingly important to form a sustainable structure for the ongoing development and maintenance of Redis. We want to ensure the project’s continuity and reflect its larger community.

@@ -16,13 +16,13 @@ Redis has adopted a _light governance_ model that matches the current size of th

Salvatore Sanfilippo has stepped down as head of the project and named two successors to take over and lead the Redis project: Yossi Gottlieb ([yossigo](https://github.com/yossigo)) and Oran Agra ([oranagra](https://github.com/oranagra))

-With the backing and blessing of Redis Labs, we wish to use this opportunity and create a more open, scalable, and community-driven “core team” structure to run the project. The core team will consist of members selected based on demonstrated, long-term personal involvement and contributions.
+With the backing and blessing of Redis Ltd., we wish to use this opportunity and create a more open, scalable, and community-driven “core team” structure to run the project. The core team will consist of members selected based on demonstrated, long-term personal involvement and contributions.
The core team comprises of: -* Project Lead: Yossi Gottlieb ([yossigo](https://github.com/yossigo)) from Redis Labs -* Project Lead: Oran Agra ([oranagra](https://github.com/oranagra)) from Redis Labs -* Community Lead: Itamar Haber ([itamarhaber](https://github.com/itamarhaber)) from Redis Labs +* Project Lead: Yossi Gottlieb ([yossigo](https://github.com/yossigo)) from Redis Ltd. +* Project Lead: Oran Agra ([oranagra](https://github.com/oranagra)) from Redis Ltd. +* Community Lead: Itamar Haber ([itamarhaber](https://github.com/itamarhaber)) from Redis Ltd. * Member: Zhao Zhao ([soloestoy](https://github.com/soloestoy)) from Alibaba * Member: Madelyn Olson ([madolson](https://github.com/madolson)) from Amazon Web Services @@ -61,8 +61,8 @@ The core team will aim to form and empower a community of contributors by furthe #### Core team membership * The core team is not expected to serve for life, however, long-term participation is desired to provide stability and consistency in the Redis programming style and the community. -* If a core-team member whose work is funded by Redis Labs must be replaced, the replacement will be designated by Redis Labs after consultation with the remaining core-team members. -* If a core-team member not funded by Redis Labs will no longer participate, for whatever reason, the other team members will select a replacement. +* If a core-team member whose work is funded by Redis Ltd. must be replaced, the replacement will be designated by Redis Ltd. after consultation with the remaining core-team members. +* If a core-team member not funded by Redis Ltd. will no longer participate, for whatever reason, the other team members will select a replacement. 
## Community forums and communications

diff --git a/topics/license.md b/topics/license.md
index d7ad7e9296..da229e9a24 100644
--- a/topics/license.md
+++ b/topics/license.md
@@ -2,7 +2,7 @@
Redis is **open source software** released under the terms of the **three clause BSD license**. Most of the Redis source code was written and is copyrighted by Salvatore Sanfilippo and Pieter Noordhuis. A list of other contributors can be found in the git history.

-The Redis trademark and logo are owned by Redis Labs and can be
+The Redis trademark and logo are owned by Redis Ltd. and can be
used in accordance with the [Redis Trademark Guidelines](/topics/trademark).

# Three clause BSD license

From a9e0202abb71150f28690cfe6334f84a0e79fb53 Mon Sep 17 00:00:00 2001
From: Itamar Haber
Date: Mon, 20 Sep 2021 15:08:07 +0300
Subject: [PATCH 045/813] Adds a link

---
 topics/governance.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/topics/governance.md b/topics/governance.md
index fbb46d8857..64d16d6aa7 100644
--- a/topics/governance.md
+++ b/topics/governance.md
@@ -4,7 +4,7 @@

Since 2009, the Redis open source project has become very successful and extremely popular.

-During this time, Salvatore Sanfilippo has led, managed, and maintained the project. While contributors from Redis Ltd. and others have made significant contributions, the project never adopted a formal governance structure and de-facto was operating as a [BDFL](https://en.wikipedia.org/wiki/Benevolent_dictator_for_life)-style project.
+During this time, Salvatore Sanfilippo has led, managed, and maintained the project. While contributors from [Redis Ltd.](https://redis.com) and others have made significant contributions, the project never adopted a formal governance structure and de-facto was operating as a [BDFL](https://en.wikipedia.org/wiki/Benevolent_dictator_for_life)-style project.
As Redis grows, matures, and continues to expand its user base, it becomes increasingly important to form a sustainable structure for the ongoing development and maintenance of Redis. We want to ensure the project’s continuity and reflect its larger community. From 2c1955babf3a59bd7f434986987e67d92452101b Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Thu, 23 Sep 2021 15:41:53 +0300 Subject: [PATCH 046/813] Replicates effort to additional pages (#1655) --- commands/readwrite.md | 4 +- commands/xclaim.md | 2 +- topics/admin.md | 18 +-- topics/clients.md | 2 +- topics/cluster-spec.md | 262 ++++++++++++++++++------------------- topics/cluster-tutorial.md | 95 +++++++------- topics/config.md | 2 +- topics/distlock.md | 6 +- topics/faq.md | 8 +- topics/modules-intro.md | 2 +- topics/partitioning.md | 4 +- topics/rediscli.md | 18 +-- topics/replication.md | 2 +- 13 files changed, 213 insertions(+), 212 deletions(-) diff --git a/commands/readwrite.md b/commands/readwrite.md index 847ca9f301..d6d7089be4 100644 --- a/commands/readwrite.md +++ b/commands/readwrite.md @@ -1,6 +1,6 @@ -Disables read queries for a connection to a Redis Cluster slave node. +Disables read queries for a connection to a Redis Cluster replica node. -Read queries against a Redis Cluster slave node are disabled by default, +Read queries against a Redis Cluster replica node are disabled by default, but you can use the `READONLY` command to change this behavior on a per- connection basis. The `READWRITE` command resets the readonly mode flag of a connection back to readwrite. 
diff --git a/commands/xclaim.md b/commands/xclaim.md index 480ecb6ddd..2ee762dd3d 100644 --- a/commands/xclaim.md +++ b/commands/xclaim.md @@ -18,7 +18,7 @@ Moreover, as a side effect, `XCLAIM` will increment the count of attempted deliv The command has multiple options, however most are mainly for internal use in order to transfer the effects of `XCLAIM` or other commands to the AOF file -and to propagate the same effects to the slaves, and are unlikely to be +and to propagate the same effects to the replicas, and are unlikely to be useful to normal users: 1. `IDLE `: Set the idle time (last time it was delivered) of the message. If IDLE is not specified, an IDLE of 0 is assumed, that is, the time count is reset because the message has now a new owner trying to process it. diff --git a/topics/admin.md b/topics/admin.md index 958a854a08..4be6579710 100644 --- a/topics/admin.md +++ b/topics/admin.md @@ -39,14 +39,14 @@ However from time to time a restart is mandatory, for instance in order to upgra The following steps provide a very commonly used way in order to avoid any downtime. -* Setup your new Redis instance as a slave for your current Redis instance. In order to do so you need a different server, or a server that has enough RAM to keep two instances of Redis running at the same time. -* If you use a single server, make sure that the slave is started in a different port than the master instance, otherwise the slave will not be able to start at all. -* Wait for the replication initial synchronization to complete (check the slave log file). -* Make sure using INFO that there are the same number of keys in the master and in the slave. Check with redis-cli that the slave is working as you wish and is replying to your commands. -* Allow writes to the slave using **CONFIG SET slave-read-only no** -* Configure all your clients in order to use the new instance (that is, the slave). 
Note that you may want to use the `CLIENT PAUSE` command in order to make sure that no client can write to the old master during the switch. -* Once you are sure that the master is no longer receiving any query (you can check this with the [MONITOR command](/commands/monitor)), elect the slave to master using the **SLAVEOF NO ONE** command, and shut down your master. - -If you are using [Redis Sentinel](/topics/sentinel) or [Redis Cluster](/topics/cluster-tutorial), the simplest way in order to upgrade to newer versions, is to upgrade a slave after the other, then perform a manual fail-over in order to promote one of the upgraded replicas as master, and finally promote the last slave. +* Setup your new Redis instance as a replica for your current Redis instance. In order to do so you need a different server, or a server that has enough RAM to keep two instances of Redis running at the same time. +* If you use a single server, make sure that the replica is started in a different port than the master instance, otherwise the replica will not be able to start at all. +* Wait for the replication initial synchronization to complete (check the replica's log file). +* Make sure using INFO that there are the same number of keys in the master and in the replica. Check with redis-cli that the replica is working as you wish and is replying to your commands. +* Allow writes to the replica using **CONFIG SET slave-read-only no** +* Configure all your clients in order to use the new instance (that is, the replica). Note that you may want to use the `CLIENT PAUSE` command in order to make sure that no client can write to the old master during the switch. +* Once you are sure that the master is no longer receiving any query (you can check this with the [MONITOR command](/commands/monitor)), elect the replica to master using the **REPLICAOF NO ONE** command, and shut down your master. 
+ +If you are using [Redis Sentinel](/topics/sentinel) or [Redis Cluster](/topics/cluster-tutorial), the simplest way in order to upgrade to newer versions, is to upgrade a replica after the other, then perform a manual fail-over in order to promote one of the upgraded replicas as master, and finally promote the last replica. Note however that Redis Cluster 4.0 is not compatible with Redis Cluster 3.2 at cluster bus protocol level, so a mass restart is needed in this case. However Redis 5 cluster bus is backward compatible with Redis 4. diff --git a/topics/clients.md b/topics/clients.md index da8e6d75ff..0d1fd4b06b 100644 --- a/topics/clients.md +++ b/topics/clients.md @@ -104,7 +104,7 @@ Different kind of clients have different default limits: * **Normal clients** have a default limit of 0, that means, no limit at all, because most normal clients use blocking implementations sending a single command and waiting for the reply to be completely read before sending the next command, so it is always not desirable to close the connection in case of a normal client. * **Pub/Sub clients** have a default hard limit of 32 megabytes and a soft limit of 8 megabytes per 60 seconds. -* **Slaves** have a default hard limit of 256 megabytes and a soft limit of 64 megabyte per 60 second. +* **Replicas** have a default hard limit of 256 megabytes and a soft limit of 64 megabyte per 60 second. It is possible to change the limit at runtime using the `CONFIG SET` command or in a permanent way using the Redis configuration file `redis.conf`. See the example `redis.conf` in the Redis distribution for more information about how to set the limit. diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md index 9e15f62b84..7ee3addce9 100644 --- a/topics/cluster-spec.md +++ b/topics/cluster-spec.md @@ -16,7 +16,7 @@ Redis Cluster is a distributed implementation of Redis with the following goals, * High performance and linear scalability up to 1000 nodes. 
There are no proxies, asynchronous replication is used, and no merge operations are performed on values. * Acceptable degree of write safety: the system tries (in a best-effort way) to retain all the writes originating from clients connected with the majority of the master nodes. Usually there are small windows where acknowledged writes can be lost. Windows to lose acknowledged writes are larger when clients are in a minority partition. -* Availability: Redis Cluster is able to survive partitions where the majority of the master nodes are reachable and there is at least one reachable slave for every master node that is no longer reachable. Moreover using *replicas migration*, masters no longer replicated by any slave will receive one from a master which is covered by multiple slaves. +* Availability: Redis Cluster is able to survive partitions where the majority of the master nodes are reachable and there is at least one reachable replica for every master node that is no longer reachable. Moreover using *replicas migration*, masters no longer replicated by any replica will receive one from a master which is covered by multiple replicas. What is described in this document is implemented in Redis 3.0 or greater. @@ -42,7 +42,7 @@ Clients and Servers roles in the Redis Cluster protocol In Redis Cluster nodes are responsible for holding the data, and taking the state of the cluster, including mapping keys to the right nodes. Cluster nodes are also able to auto-discover other nodes, detect non-working -nodes, and promote slave nodes to master when needed in order +nodes, and promote replica nodes to master when needed in order to continue to operate when a failure occurs. 
To perform their tasks all the cluster nodes are connected using a @@ -73,14 +73,14 @@ Redis Cluster tries harder to retain writes that are performed by clients connec The following are examples of scenarios that lead to loss of acknowledged writes received in the majority partitions during failures: -1. A write may reach a master, but while the master may be able to reply to the client, the write may not be propagated to slaves via the asynchronous replication used between master and slave nodes. If the master dies without the write reaching the slaves, the write is lost forever if the master is unreachable for a long enough period that one of its slaves is promoted. This is usually hard to observe in the case of a total, sudden failure of a master node since masters try to reply to clients (with the acknowledge of the write) and slaves (propagating the write) at about the same time. However it is a real world failure mode. +1. A write may reach a master, but while the master may be able to reply to the client, the write may not be propagated to replicas via the asynchronous replication used between master and replica nodes. If the master dies without the write reaching the replicas, the write is lost forever if the master is unreachable for a long enough period that one of its replicas is promoted. This is usually hard to observe in the case of a total, sudden failure of a master node since masters try to reply to clients (with the acknowledge of the write) and replicas (propagating the write) at about the same time. However it is a real world failure mode. 2. Another theoretically possible failure mode where writes are lost is the following: * A master is unreachable because of a partition. -* It gets failed over by one of its slaves. +* It gets failed over by one of its replicas. * After some time it may be reachable again. 
-* A client with an out-of-date routing table may write to the old master before it is converted into a slave (of the new master) by the cluster. +* A client with an out-of-date routing table may write to the old master before it is converted into a replica (of the new master) by the cluster. The second failure mode is unlikely to happen because master nodes unable to communicate with the majority of the other masters for enough time to be failed over will no longer accept writes, and when the partition is fixed writes are still refused for a small amount of time to allow other nodes to inform about configuration changes. This failure mode also requires that the client's routing table has not yet been updated. @@ -91,18 +91,18 @@ Specifically, for a master to be failed over it must be unreachable by the major Availability --- -Redis Cluster is not available in the minority side of the partition. In the majority side of the partition assuming that there are at least the majority of masters and a slave for every unreachable master, the cluster becomes available again after `NODE_TIMEOUT` time plus a few more seconds required for a slave to get elected and failover its master (failovers are usually executed in a matter of 1 or 2 seconds). +Redis Cluster is not available in the minority side of the partition. In the majority side of the partition assuming that there are at least the majority of masters and a replica for every unreachable master, the cluster becomes available again after `NODE_TIMEOUT` time plus a few more seconds required for a replica to get elected and failover its master (failovers are usually executed in a matter of 1 or 2 seconds). This means that Redis Cluster is designed to survive failures of a few nodes in the cluster, but it is not a suitable solution for applications that require availability in the event of large net splits. 
-In the example of a cluster composed of N master nodes where every node has a single slave, the majority side of the cluster will remain available as long as a single node is partitioned away, and will remain available with a probability of `1-(1/(N*2-1))` when two nodes are partitioned away (after the first node fails we are left with `N*2-1` nodes in total, and the probability of the only master without a replica to fail is `1/(N*2-1))`. +In the example of a cluster composed of N master nodes where every node has a single replica, the majority side of the cluster will remain available as long as a single node is partitioned away, and will remain available with a probability of `1-(1/(N*2-1))` when two nodes are partitioned away (after the first node fails we are left with `N*2-1` nodes in total, and the probability of the only master without a replica to fail is `1/(N*2-1))`. -For example, in a cluster with 5 nodes and a single slave per node, there is a `1/(5*2-1) = 11.11%` probability that after two nodes are partitioned away from the majority, the cluster will no longer be available. +For example, in a cluster with 5 nodes and a single replica per node, there is a `1/(5*2-1) = 11.11%` probability that after two nodes are partitioned away from the majority, the cluster will no longer be available. Thanks to a Redis Cluster feature called **replicas migration** the Cluster availability is improved in many real world scenarios by the fact that replicas migrate to orphaned masters (masters no longer having replicas). -So at every successful failure event, the cluster may reconfigure the slaves +So at every successful failure event, the cluster may reconfigure the replicas layout in order to better resist the next failure. Performance @@ -147,7 +147,7 @@ Each master node in a cluster handles a subset of the 16384 hash slots. The cluster is **stable** when there is no cluster reconfiguration in progress (i.e. 
where hash slots are being moved from one node to another). When the cluster is stable, a single hash slot will be served by a single node -(however the serving node can have one or more slaves that will replace it in the case of net splits or failures, +(however the serving node can have one or more replicas that will replace it in the case of net splits or failures, and that can be used in order to scale read operations where reading stale data is acceptable). The base algorithm used to map keys to hash slots is the following @@ -268,7 +268,7 @@ a node was pinged, is instead local to each node. Every node maintains the following information about other nodes that it is aware of in the cluster: The node ID, IP and port of the node, a set of -flags, what is the master of the node if it is flagged as `slave`, last time +flags, what is the master of the node if it is flagged as `replica`, last time the node was pinged and the last time the pong was received, the current *configuration epoch* of the node (explained later in this specification), the link state and finally the set of hash slots served. @@ -351,7 +351,7 @@ MOVED Redirection --- A Redis client is free to send queries to every node in the cluster, including -slave nodes. The node will analyze the query, and if it is acceptable +replica nodes. The node will analyze the query, and if it is acceptable (that is, only a single key is mentioned in the query, or the multiple keys mentioned are all to the same hash slot) it will lookup what node is responsible for the hash slot where the key or keys belong. 
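The base key-to-slot mapping referenced above is `HASH_SLOT = CRC16(key) mod 16384`, where CRC16 is the CCITT/XMODEM variant (polynomial 0x1021, zero initial value; its standard check value for the string "123456789" is 0x31C3). A standalone sketch of the mapping, including the hash-tag rule that lets related keys land in the same slot:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM variant) as used by Redis Cluster:
    polynomial 0x1021, initial value 0x0000, no reflection or final XOR."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc


def hash_slot(key: bytes) -> int:
    """HASH_SLOT = CRC16(key) mod 16384.

    If the key contains a non-empty {...} hash tag, only the tag content
    is hashed, so keys sharing a tag map to the same slot."""
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end != -1 and end != start + 1:  # only a non-empty tag is hashed
            key = key[start + 1:end]
    return crc16_xmodem(key) % 16384
```

This is the property that makes multi-key operations possible when all the keys involved hash to the same slot.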
@@ -563,7 +563,7 @@ addresses in two different situations: Note that a client may handle the `MOVED` redirection by updating just the moved slot in its table, however this is usually not efficient since often the configuration of multiple slots is modified at once (for example if a -slave is promoted to master, all the slots served by the old master will +replica is promoted to master, all the slots served by the old master will be remapped). It is much simpler to react to a `MOVED` redirection by fetching the full map of slots to nodes from scratch. @@ -572,7 +572,7 @@ an alternative to the `CLUSTER NODES` command that does not require parsing, and only provides the information strictly needed to clients. The new command is called `CLUSTER SLOTS` and provides an array of slots -ranges, and the associated master and slave nodes serving the specified range. +ranges, and the associated master and replica nodes serving the specified range. The following is an example of output of `CLUSTER SLOTS`: @@ -601,12 +601,12 @@ The following is an example of output of `CLUSTER SLOTS`: The first two sub-elements of every element of the returned array are the start-end slots of the range. The additional elements represent address-port pairs. The first address-port pair is the master serving the slot, and the -additional address-port pairs are all the slaves serving the same slot +additional address-port pairs are all the replicas serving the same slot that are not in an error condition (i.e. the FAIL flag is not set). For example the first element of the output says that slots from 5461 to 10922 (start and end included) are served by 127.0.0.1:7001, and it is possible -to scale read-only load contacting the slave at 127.0.0.1:7004. +to scale read-only load contacting the replica at 127.0.0.1:7004. 
`CLUSTER SLOTS` is not guaranteed to return ranges that cover the full 16384 slots if the cluster is misconfigured, so clients should initialize the @@ -640,22 +640,22 @@ The client can try the operation after some time, or report back the error. As soon as migration of the specified hash slot has terminated, all multi-key operations are available again for that hash slot. -Scaling reads using slave nodes +Scaling reads using replica nodes --- -Normally slave nodes will redirect clients to the authoritative master for -the hash slot involved in a given command, however clients can use slaves +Normally replica nodes will redirect clients to the authoritative master for +the hash slot involved in a given command, however clients can use replicas in order to scale reads using the `READONLY` command. -`READONLY` tells a Redis Cluster slave node that the client is ok reading +`READONLY` tells a Redis Cluster replica node that the client is ok reading possibly stale data and is not interested in running write queries. When the connection is in readonly mode, the cluster will send a redirection to the client only if the operation involves keys not served -by the slave's master node. This may happen because: +by the replica's master node. This may happen because: -1. The client sent a command about hash slots never served by the master of this slave. -2. The cluster was reconfigured (for example resharded) and the slave is no longer able to serve commands for a given hash slot. +1. The client sent a command about hash slots never served by the master of this replica. +2. The cluster was reconfigured (for example resharded) and the replica is no longer able to serve commands for a given hash slot. When this happens the client should update its hashslot map as explained in the previous sections. 
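A client can fold a `CLUSTER SLOTS` reply into a routing table in a single pass. A minimal sketch — the nested lists mirror the reply shape shown above (start slot, end slot, master address, then replica addresses), with node IDs and the linear lookup purely illustrative; a real client would fill an array of 16384 slot entries instead:

```python
def build_slot_map(slots_reply):
    """Turn a CLUSTER SLOTS-style reply into a list of routing entries.

    Each reply element is [start, end, master, replica, ...] where every
    node entry begins with [host, port, ...]."""
    table = []
    for start, end, master, *replicas in slots_reply:
        table.append({
            "range": (start, end),
            "master": (master[0], master[1]),
            "replicas": [(r[0], r[1]) for r in replicas],
        })
    return table


def node_for_slot(table, slot):
    """Return the master serving a slot, or None if the slot is uncovered
    (in which case the spec suggests falling back to a random node)."""
    for entry in table:
        lo, hi = entry["range"]
        if lo <= slot <= hi:
            return entry["master"]
    return None
```

With the example above, a key hashing to slot 6000 would be routed to the master at 127.0.0.1:7001, while a readonly connection could contact the replica at 127.0.0.1:7004.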
@@ -695,12 +695,12 @@ Ping and pong packets contain a header that is common to all types of packets (f The common header has the following information: * Node ID, a 160 bit pseudorandom string that is assigned the first time a node is created and remains the same for all the life of a Redis Cluster node. -* The `currentEpoch` and `configEpoch` fields of the sending node that are used to mount the distributed algorithms used by Redis Cluster (this is explained in detail in the next sections). If the node is a slave the `configEpoch` is the last known `configEpoch` of its master. -* The node flags, indicating if the node is a slave, a master, and other single-bit node information. -* A bitmap of the hash slots served by the sending node, or if the node is a slave, a bitmap of the slots served by its master. +* The `currentEpoch` and `configEpoch` fields of the sending node that are used to mount the distributed algorithms used by Redis Cluster (this is explained in detail in the next sections). If the node is a replica the `configEpoch` is the last known `configEpoch` of its master. +* The node flags, indicating if the node is a replica, a master, and other single-bit node information. +* A bitmap of the hash slots served by the sending node, or if the node is a replica, a bitmap of the slots served by its master. * The sender TCP base port (that is, the port used by Redis to accept client commands; add 10000 to this to obtain the cluster bus port). * The state of the cluster from the point of view of the sender (down or ok). -* The master node ID of the sending node, if it is a slave. +* The master node ID of the sending node, if it is a replica. Ping and pong packets also contain a gossip section. This section offers to the receiver a view of what the sender node thinks about other nodes in the cluster. The gossip section only contains information about a few random nodes among the set of nodes known to the sender. 
The number of nodes mentioned in a gossip section is proportional to the cluster size. @@ -715,19 +715,19 @@ Gossip sections allow receiving nodes to get information about the state of othe Failure detection --- -Redis Cluster failure detection is used to recognize when a master or slave node is no longer reachable by the majority of nodes and then respond by promoting a slave to the role of master. When slave promotion is not possible the cluster is put in an error state to stop receiving queries from clients. +Redis Cluster failure detection is used to recognize when a master or replica node is no longer reachable by the majority of nodes and then respond by promoting a replica to the role of master. When replica promotion is not possible the cluster is put in an error state to stop receiving queries from clients. As already mentioned, every node takes a list of flags associated with other known nodes. There are two flags that are used for failure detection that are called `PFAIL` and `FAIL`. `PFAIL` means *Possible failure*, and is a non-acknowledged failure type. `FAIL` means that a node is failing and that this condition was confirmed by a majority of masters within a fixed amount of time. **PFAIL flag:** -A node flags another node with the `PFAIL` flag when the node is not reachable for more than `NODE_TIMEOUT` time. Both master and slave nodes can flag another node as `PFAIL`, regardless of its type. +A node flags another node with the `PFAIL` flag when the node is not reachable for more than `NODE_TIMEOUT` time. Both master and replica nodes can flag another node as `PFAIL`, regardless of its type. The concept of non-reachability for a Redis Cluster node is that we have an **active ping** (a ping that we sent for which we have yet to get a reply) pending for longer than `NODE_TIMEOUT`. For this mechanism to work the `NODE_TIMEOUT` must be large compared to the network round trip time. 
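The `PFAIL` timing rules above can be sketched as follows; the function names and the 15-second value are illustrative (the real timeout is the configurable `cluster-node-timeout`):

```python
NODE_TIMEOUT = 15.0  # seconds; illustrative stand-in for cluster-node-timeout

def is_pfail(ping_sent_at, pong_received_at, now):
    """Flag a peer PFAIL when an *active ping* (sent but not yet
    answered) has been pending for longer than NODE_TIMEOUT."""
    active_ping = ping_sent_at is not None and (
        pong_received_at is None or pong_received_at < ping_sent_at)
    return active_ping and (now - ping_sent_at) > NODE_TIMEOUT

def should_refresh_link(ping_sent_at, now):
    """Reconnect once half of NODE_TIMEOUT has elapsed without a reply,
    so a broken TCP link alone doesn't yield false failure reports."""
    return (now - ping_sent_at) > NODE_TIMEOUT / 2
```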
In order to add reliability during normal operations, nodes will try to reconnect with other nodes in the cluster as soon as half of the `NODE_TIMEOUT` has elapsed without a reply to a ping. This mechanism ensures that connections are kept alive so broken connections usually won't result in false failure reports between nodes. **FAIL flag:** -The `PFAIL` flag alone is just local information every node has about other nodes, but it is not sufficient to trigger a slave promotion. For a node to be considered down the `PFAIL` condition needs to be escalated to a `FAIL` condition. +The `PFAIL` flag alone is just local information every node has about other nodes, but it is not sufficient to trigger a replica promotion. For a node to be considered down the `PFAIL` condition needs to be escalated to a `FAIL` condition. As outlined in the node heartbeats section of this document, every node sends gossip messages to every other node including the state of a few random known nodes. Every node eventually receives a set of node flags for every other node. This way every node has a mechanism to signal other nodes about failure conditions they have detected. @@ -746,9 +746,9 @@ The `FAIL` message will force every receiving node to mark the node in `FAIL` st Note that *the FAIL flag is mostly one way*. That is, a node can go from `PFAIL` to `FAIL`, but a `FAIL` flag can only be cleared in the following situations: -* The node is already reachable and is a slave. In this case the `FAIL` flag can be cleared as slaves are not failed over. +* The node is already reachable and is a replica. In this case the `FAIL` flag can be cleared as replicas are not failed over. * The node is already reachable and is a master not serving any slot. In this case the `FAIL` flag can be cleared as masters without slots do not really participate in the cluster and are waiting to be configured in order to join the cluster. 
-* The node is already reachable and is a master, but a long time (N times the `NODE_TIMEOUT`) has elapsed without any detectable slave promotion. It's better for it to rejoin the cluster and continue in this case. +* The node is already reachable and is a master, but a long time (N times the `NODE_TIMEOUT`) has elapsed without any detectable replica promotion. It's better for it to rejoin the cluster and continue in this case. It is useful to note that while the `PFAIL` -> `FAIL` transition uses a form of agreement, the agreement used is weak: @@ -759,9 +759,9 @@ However the Redis Cluster failure detection has a liveness requirement: eventual **Case 1**: If a majority of masters have flagged a node as `FAIL`, because of failure detection and the *chain effect* it generates, every other node will eventually flag the master as `FAIL`, since in the specified window of time enough failures will be reported. -**Case 2**: When only a minority of masters have flagged a node as `FAIL`, the slave promotion will not happen (as it uses a more formal algorithm that makes sure everybody knows about the promotion eventually) and every node will clear the `FAIL` state as per the `FAIL` state clearing rules above (i.e. no promotion after N times the `NODE_TIMEOUT` has elapsed). +**Case 2**: When only a minority of masters have flagged a node as `FAIL`, the replica promotion will not happen (as it uses a more formal algorithm that makes sure everybody knows about the promotion eventually) and every node will clear the `FAIL` state as per the `FAIL` state clearing rules above (i.e. no promotion after N times the `NODE_TIMEOUT` has elapsed). -**The `FAIL` flag is only used as a trigger to run the safe part of the algorithm** for the slave promotion. In theory a slave may act independently and start a slave promotion when its master is not reachable, and wait for the masters to refuse to provide the acknowledgment if the master is actually reachable by the majority. 
However the added complexity of the `PFAIL -> FAIL` state, the weak agreement, and the `FAIL` message forcing the propagation of the state in the shortest amount of time in the reachable part of the cluster, have practical advantages. Because of these mechanisms, usually all the nodes will stop accepting writes at about the same time if the cluster is in an error state. This is a desirable feature from the point of view of applications using Redis Cluster. Also erroneous election attempts initiated by slaves that can't reach its master due to local problems (the master is otherwise reachable by the majority of other master nodes) are avoided. +**The `FAIL` flag is only used as a trigger to run the safe part of the algorithm** for the replica promotion. In theory a replica may act independently and start a replica promotion when its master is not reachable, and wait for the masters to refuse to provide the acknowledgment if the master is actually reachable by the majority. However the added complexity of the `PFAIL -> FAIL` state, the weak agreement, and the `FAIL` message forcing the propagation of the state in the shortest amount of time in the reachable part of the cluster, have practical advantages. Because of these mechanisms, usually all the nodes will stop accepting writes at about the same time if the cluster is in an error state. This is a desirable feature from the point of view of applications using Redis Cluster. Also erroneous election attempts initiated by replicas that can't reach its master due to local problems (the master is otherwise reachable by the majority of other master nodes) are avoided. Configuration handling, propagation, and failovers === @@ -773,7 +773,7 @@ Redis Cluster uses a concept similar to the Raft algorithm "term". In Redis Clus The `currentEpoch` is a 64 bit unsigned number. -At node creation every Redis Cluster node, both slaves and master nodes, set the `currentEpoch` to 0. 
+At node creation every Redis Cluster node, both replicas and master nodes, set the `currentEpoch` to 0.

Every time a packet is received from another node, if the epoch of the sender (part of the cluster bus messages header) is greater than the local node epoch, the `currentEpoch` is updated to the sender epoch.

Because of these semantics, eventually all the nodes will agree to the greatest
@@ -781,7 +781,7 @@ Because of these semantics, eventually all the nodes will agree to the greatest

This information is used when the state of the cluster is changed and a node seeks agreement in order to perform some action.

-Currently this happens only during slave promotion, as described in the next section. Basically the epoch is a logical clock for the cluster and dictates that given information wins over one with a smaller epoch.
+Currently this happens only during replica promotion, as described in the next section. Basically the epoch is a logical clock for the cluster and dictates that given information wins over one with a smaller epoch.

Configuration epoch
---
@@ -790,110 +790,110 @@ Every master always advertises its `configEpoch` in ping and pong packets along

The `configEpoch` is set to zero in masters when a new node is created.

-A new `configEpoch` is created during slave election. Slaves trying to replace
+A new `configEpoch` is created during replica election. Replicas trying to replace
failing masters increment their epoch and try to get authorization from
-a majority of masters. When a slave is authorized, a new unique `configEpoch`
-is created and the slave turns into a master using the new `configEpoch`.
+a majority of masters. When a replica is authorized, a new unique `configEpoch`
+is created and the replica turns into a master using the new `configEpoch`.

As explained in the next sections the `configEpoch` helps to resolve conflicts when different nodes claim divergent configurations (a condition that may happen because of network partitions and node failures).
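The `currentEpoch` adoption rule described above is small enough to sketch; a plain dict stands in for node state here, this is not the actual `clusterState` structure:

```python
def on_packet_received(node, sender_current_epoch):
    # If the sender's epoch (carried in every cluster bus header) is
    # greater than ours, adopt it: eventually every node converges on
    # the greatest currentEpoch seen anywhere in the cluster.
    if sender_current_epoch > node["current_epoch"]:
        node["current_epoch"] = sender_current_epoch
```

For example, a node at epoch 3 that receives packets carrying epochs 7 and then 5 ends up at epoch 7: updates are monotonic, never backwards.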
-Slave nodes also advertise the `configEpoch` field in ping and pong packets, but in the case of slaves the field represents the `configEpoch` of its master as of the last time they exchanged packets. This allows other instances to detect when a slave has an old configuration that needs to be updated (master nodes will not grant votes to slaves with an old configuration).
+Replica nodes also advertise the `configEpoch` field in ping and pong packets, but in the case of replicas the field represents the `configEpoch` of its master as of the last time they exchanged packets. This allows other instances to detect when a replica has an old configuration that needs to be updated (master nodes will not grant votes to replicas with an old configuration).

Every time the `configEpoch` changes for some known node, it is permanently stored in the nodes.conf file by all the nodes that receive this information. The same also happens for the `currentEpoch` value. These two variables are guaranteed to be saved and `fsync-ed` to disk when updated before a node continues its operations.

The `configEpoch` values generated using a simple algorithm during failovers are guaranteed to be new, incremental, and unique.

-Slave election and promotion
+Replica election and promotion
---

-Slave election and promotion is handled by slave nodes, with the help of master nodes that vote for the slave to promote.
-A slave election happens when a master is in `FAIL` state from the point of view of at least one of its slaves that has the prerequisites in order to become a master.
+Replica election and promotion is handled by replica nodes, with the help of master nodes that vote for the replica to promote.
+A replica election happens when a master is in `FAIL` state from the point of view of at least one of its replicas that has the prerequisites in order to become a master.

-In order for a slave to promote itself to master, it needs to start an election and win it. 
All the slaves for a given master can start an election if the master is in `FAIL` state, however only one slave will win the election and promote itself to master. +In order for a replica to promote itself to master, it needs to start an election and win it. All the replicas for a given master can start an election if the master is in `FAIL` state, however only one replica will win the election and promote itself to master. -A slave starts an election when the following conditions are met: +A replica starts an election when the following conditions are met: -* The slave's master is in `FAIL` state. +* The replica's master is in `FAIL` state. * The master was serving a non-zero number of slots. -* The slave replication link was disconnected from the master for no longer than a given amount of time, in order to ensure the promoted slave's data is reasonably fresh. This time is user configurable. +* The replica replication link was disconnected from the master for no longer than a given amount of time, in order to ensure the promoted replica's data is reasonably fresh. This time is user configurable. -In order to be elected, the first step for a slave is to increment its `currentEpoch` counter, and request votes from master instances. +In order to be elected, the first step for a replica is to increment its `currentEpoch` counter, and request votes from master instances. -Votes are requested by the slave by broadcasting a `FAILOVER_AUTH_REQUEST` packet to every master node of the cluster. Then it waits for a maximum time of two times the `NODE_TIMEOUT` for replies to arrive (but always for at least 2 seconds). +Votes are requested by the replica by broadcasting a `FAILOVER_AUTH_REQUEST` packet to every master node of the cluster. Then it waits for a maximum time of two times the `NODE_TIMEOUT` for replies to arrive (but always for at least 2 seconds). 
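The three preconditions for starting an election can be collapsed into a single predicate; this is a sketch with illustrative parameter names, where the disconnect bound is the user-configurable limit the spec mentions:

```python
def can_start_election(master_in_fail, master_slot_count,
                       link_down_seconds, max_link_down_seconds):
    # All three conditions from the spec must hold before a replica
    # increments currentEpoch and broadcasts FAILOVER_AUTH_REQUEST:
    # the master is FAIL, it actually served slots, and the replica's
    # data is not too stale.
    return (master_in_fail
            and master_slot_count > 0
            and link_down_seconds <= max_link_down_seconds)
```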
-Once a master has voted for a given slave, replying positively with a `FAILOVER_AUTH_ACK`, it can no longer vote for another slave of the same master for a period of `NODE_TIMEOUT * 2`. In this period it will not be able to reply to other authorization requests for the same master. This is not needed to guarantee safety, but useful for preventing multiple slaves from getting elected (even if with a different `configEpoch`) at around the same time, which is usually not wanted. +Once a master has voted for a given replica, replying positively with a `FAILOVER_AUTH_ACK`, it can no longer vote for another replica of the same master for a period of `NODE_TIMEOUT * 2`. In this period it will not be able to reply to other authorization requests for the same master. This is not needed to guarantee safety, but useful for preventing multiple replicas from getting elected (even if with a different `configEpoch`) at around the same time, which is usually not wanted. -A slave discards any `AUTH_ACK` replies with an epoch that is less than the `currentEpoch` at the time the vote request was sent. This ensures it doesn't count votes intended for a previous election. +A replica discards any `AUTH_ACK` replies with an epoch that is less than the `currentEpoch` at the time the vote request was sent. This ensures it doesn't count votes intended for a previous election. -Once the slave receives ACKs from the majority of masters, it wins the election. +Once the replica receives ACKs from the majority of masters, it wins the election. Otherwise if the majority is not reached within the period of two times `NODE_TIMEOUT` (but always at least 2 seconds), the election is aborted and a new one will be tried again after `NODE_TIMEOUT * 4` (and always at least 4 seconds). -Slave rank +Replica rank --- -As soon as a master is in `FAIL` state, a slave waits a short period of time before trying to get elected. 
That delay is computed as follows: +As soon as a master is in `FAIL` state, a replica waits a short period of time before trying to get elected. That delay is computed as follows: DELAY = 500 milliseconds + random delay between 0 and 500 milliseconds + - SLAVE_RANK * 1000 milliseconds. + REPLICA_RANK * 1000 milliseconds. -The fixed delay ensures that we wait for the `FAIL` state to propagate across the cluster, otherwise the slave may try to get elected while the masters are still unaware of the `FAIL` state, refusing to grant their vote. +The fixed delay ensures that we wait for the `FAIL` state to propagate across the cluster, otherwise the replica may try to get elected while the masters are still unaware of the `FAIL` state, refusing to grant their vote. -The random delay is used to desynchronize slaves so they're unlikely to start an election at the same time. +The random delay is used to desynchronize replicas so they're unlikely to start an election at the same time. -The `SLAVE_RANK` is the rank of this slave regarding the amount of replication data it has processed from the master. -Slaves exchange messages when the master is failing in order to establish a (best effort) rank: -the slave with the most updated replication offset is at rank 0, the second most updated at rank 1, and so forth. -In this way the most updated slaves try to get elected before others. +The `REPLICA_RANK` is the rank of this replica regarding the amount of replication data it has processed from the master. +Replicas exchange messages when the master is failing in order to establish a (best effort) rank: +the replica with the most updated replication offset is at rank 0, the second most updated at rank 1, and so forth. +In this way the most updated replicas try to get elected before others. -Rank order is not strictly enforced; if a slave of higher rank fails to be +Rank order is not strictly enforced; if a replica of higher rank fails to be elected, the others will try shortly. 
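The delay formula above translates directly into code; this is a sketch, and the rank itself is established through the replication-offset exchange the spec describes:

```python
import random

def election_delay_ms(replica_rank):
    # DELAY = 500 ms fixed      (lets the FAIL state propagate)
    #       + 0..500 ms random  (desynchronizes competing replicas)
    #       + RANK * 1000 ms    (most up-to-date replica goes first)
    return 500 + random.randint(0, 500) + replica_rank * 1000
```

So the rank-0 replica attempts an election within 500 to 1000 ms, the rank-1 replica within 1500 to 2000 ms, and so on.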
-Once a slave wins the election, it obtains a new unique and incremental `configEpoch` which is higher than that of any other existing master. It starts advertising itself as master in ping and pong packets, providing the set of served slots with a `configEpoch` that will win over the past ones. +Once a replica wins the election, it obtains a new unique and incremental `configEpoch` which is higher than that of any other existing master. It starts advertising itself as master in ping and pong packets, providing the set of served slots with a `configEpoch` that will win over the past ones. In order to speedup the reconfiguration of other nodes, a pong packet is broadcast to all the nodes of the cluster. Currently unreachable nodes will eventually be reconfigured when they receive a ping or pong packet from another node or will receive an `UPDATE` packet from another node if the information it publishes via heartbeat packets are detected to be out of date. -The other nodes will detect that there is a new master serving the same slots served by the old master but with a greater `configEpoch`, and will upgrade their configuration. Slaves of the old master (or the failed over master if it rejoins the cluster) will not just upgrade the configuration but will also reconfigure to replicate from the new master. How nodes rejoining the cluster are configured is explained in the next sections. +The other nodes will detect that there is a new master serving the same slots served by the old master but with a greater `configEpoch`, and will upgrade their configuration. Replicas of the old master (or the failed over master if it rejoins the cluster) will not just upgrade the configuration but will also reconfigure to replicate from the new master. How nodes rejoining the cluster are configured is explained in the next sections. -Masters reply to slave vote request +Masters reply to replica vote request --- -In the previous section it was discussed how slaves try to get elected. 
This section explains what happens from the point of view of a master that is requested to vote for a given slave.
+In the previous section it was discussed how replicas try to get elected. This section explains what happens from the point of view of a master that is requested to vote for a given replica.

-Masters receive requests for votes in form of `FAILOVER_AUTH_REQUEST` requests from slaves.
+Masters receive requests for votes in the form of `FAILOVER_AUTH_REQUEST` requests from replicas.

For a vote to be granted the following conditions need to be met:

1. A master only votes a single time for a given epoch, and refuses to vote for older epochs: every master has a lastVoteEpoch field and will refuse to vote again as long as the `currentEpoch` in the auth request packet is not greater than the lastVoteEpoch. When a master replies positively to a vote request, the lastVoteEpoch is updated accordingly, and safely stored on disk.
-2. A master votes for a slave only if the slave's master is flagged as `FAIL`.
-3. Auth requests with a `currentEpoch` that is less than the master `currentEpoch` are ignored. Because of this the master reply will always have the same `currentEpoch` as the auth request. If the same slave asks again to be voted, incrementing the `currentEpoch`, it is guaranteed that an old delayed reply from the master can not be accepted for the new vote.
+2. A master votes for a replica only if the replica's master is flagged as `FAIL`.
+3. Auth requests with a `currentEpoch` that is less than the master `currentEpoch` are ignored. Because of this the master reply will always have the same `currentEpoch` as the auth request. If the same replica asks again to be voted for, incrementing the `currentEpoch`, it is guaranteed that an old delayed reply from the master can not be accepted for the new vote.
Example of the issue caused by not using rule number 3: Master `currentEpoch` is 5, lastVoteEpoch is 1 (this may happen after a few failed elections) -* Slave `currentEpoch` is 3. -* Slave tries to be elected with epoch 4 (3+1), master replies with an ok with `currentEpoch` 5, however the reply is delayed. -* Slave will try to be elected again, at a later time, with epoch 5 (4+1), the delayed reply reaches the slave with `currentEpoch` 5, and is accepted as valid. +* Replica `currentEpoch` is 3. +* Replica tries to be elected with epoch 4 (3+1), master replies with an ok with `currentEpoch` 5, however the reply is delayed. +* Replica will try to be elected again, at a later time, with epoch 5 (4+1), the delayed reply reaches the replica with `currentEpoch` 5, and is accepted as valid. -4. Masters don't vote for a slave of the same master before `NODE_TIMEOUT * 2` has elapsed if a slave of that master was already voted for. This is not strictly required as it is not possible for two slaves to win the election in the same epoch. However, in practical terms it ensures that when a slave is elected it has plenty of time to inform the other slaves and avoid the possibility that another slave will win a new election, performing an unnecessary second failover. -5. Masters make no effort to select the best slave in any way. If the slave's master is in `FAIL` state and the master did not vote in the current term, a positive vote is granted. The best slave is the most likely to start an election and win it before the other slaves, since it will usually be able to start the voting process earlier because of its *higher rank* as explained in the previous section. -6. When a master refuses to vote for a given slave there is no negative response, the request is simply ignored. -7. Masters don't vote for slaves sending a `configEpoch` that is less than any `configEpoch` in the master table for the slots claimed by the slave. 
Remember that the slave sends the `configEpoch` of its master, and the bitmap of the slots served by its master. This means that the slave requesting the vote must have a configuration for the slots it wants to failover that is newer or equal the one of the master granting the vote.
+4. Masters don't vote for a replica of the same master before `NODE_TIMEOUT * 2` has elapsed if a replica of that master was already voted for. This is not strictly required as it is not possible for two replicas to win the election in the same epoch. However, in practical terms it ensures that when a replica is elected it has plenty of time to inform the other replicas and avoid the possibility that another replica will win a new election, performing an unnecessary second failover.
+5. Masters make no effort to select the best replica in any way. If the replica's master is in `FAIL` state and the master did not vote in the current term, a positive vote is granted. The best replica is the most likely to start an election and win it before the other replicas, since it will usually be able to start the voting process earlier because of its *higher rank* as explained in the previous section.
+6. When a master refuses to vote for a given replica there is no negative response, the request is simply ignored.
+7. Masters don't vote for replicas sending a `configEpoch` that is less than any `configEpoch` in the master table for the slots claimed by the replica. Remember that the replica sends the `configEpoch` of its master, and the bitmap of the slots served by its master. This means that the replica requesting the vote must have a configuration for the slots it wants to fail over that is newer than or equal to the one of the master granting the vote.

Practical example of configuration epoch usefulness during partitions
---

-This section illustrates how the epoch concept is used to make the slave promotion process more resistant to partitions.
+This section illustrates how the epoch concept is used to make the replica promotion process more resistant to partitions. -* A master is no longer reachable indefinitely. The master has three slaves A, B, C. -* Slave A wins the election and is promoted to master. +* A master is no longer reachable indefinitely. The master has three replicas A, B, C. +* Replica A wins the election and is promoted to master. * A network partition makes A not available for the majority of the cluster. -* Slave B wins the election and is promoted as master. +* Replica B wins the election and is promoted as master. * A partition makes B not available for the majority of the cluster. * The previous partition is fixed, and A is available again. -At this point B is down and A is available again with a role of master (actually `UPDATE` messages would reconfigure it promptly, but here we assume all `UPDATE` messages were lost). At the same time, slave C will try to get elected in order to fail over B. This is what happens: +At this point B is down and A is available again with a role of master (actually `UPDATE` messages would reconfigure it promptly, but here we assume all `UPDATE` messages were lost). At the same time, replica C will try to get elected in order to fail over B. This is what happens: 1. C will try to get elected and will succeed, since for the majority of masters its master is actually down. It will obtain a new incremental `configEpoch`. 2. A will not be able to claim to be the master for its hash slots, because the other nodes already have the same hash slots associated with a higher configuration epoch (the one of B) compared to the one published by A. @@ -907,14 +907,14 @@ has stale information and will send an `UPDATE` message. Hash slots configuration propagation --- -An important part of Redis Cluster is the mechanism used to propagate the information about which cluster node is serving a given set of hash slots. 
This is vital to both the startup of a fresh cluster and the ability to upgrade the configuration after a slave was promoted to serve the slots of its failing master. +An important part of Redis Cluster is the mechanism used to propagate the information about which cluster node is serving a given set of hash slots. This is vital to both the startup of a fresh cluster and the ability to upgrade the configuration after a replica was promoted to serve the slots of its failing master. The same mechanism allows nodes partitioned away for an indefinite amount of time to rejoin the cluster in a sensible way. There are two ways hash slot configurations are propagated: -1. Heartbeat messages. The sender of a ping or pong packet always adds information about the set of hash slots it (or its master, if it is a slave) serves. +1. Heartbeat messages. The sender of a ping or pong packet always adds information about the set of hash slots it (or its master, if it is a replica) serves. 2. `UPDATE` messages. Since in every heartbeat packet there is information about the sender `configEpoch` and set of hash slots served, if a receiver of a heartbeat packet finds the sender information is stale, it will send a packet with new information, forcing the stale node to update its info. The receiver of a heartbeat or `UPDATE` message uses certain simple rules in @@ -947,13 +947,13 @@ When a new cluster is created, a system administrator needs to manually assign ( However this rule is not enough. We know that hash slot mapping can change during two events: -1. A slave replaces its master during a failover. +1. A replica replaces its master during a failover. 2. A slot is resharded from a node to a different one. -For now let's focus on failovers. When a slave fails over its master, it obtains +For now let's focus on failovers. 
When a replica fails over its master, it obtains a configuration epoch which is guaranteed to be greater than the one of its master (and more generally greater than any other configuration epoch -generated previously). For example node B, which is a slave of A, may failover +generated previously). For example node B, which is a replica of A, may failover A with configuration epoch of 4. It will start to send heartbeat packets (the first time mass-broadcasting cluster-wide) and because of the following second rule, receivers will update their hash slot tables: @@ -997,18 +997,18 @@ The same basic mechanism is used when a node rejoins a cluster. Continuing with the example above, node A will be notified that hash slots 1 and 2 are now served by B. Assuming that these two were the only hash slots served by A, the count of hash slots served by A will -drop to 0! So A will **reconfigure to be a slave of the new master**. +drop to 0! So A will **reconfigure to be a replica of the new master**. The actual rule followed is a bit more complex than this. In general it may happen that A rejoins after a lot of time, in the meantime it may happen that hash slots originally served by A are served by multiple nodes, for example hash slot 1 may be served by B, and hash slot 2 by C. -So the actual *Redis Cluster node role switch rule* is: **A master node will change its configuration to replicate (be a slave of) the node that stole its last hash slot**. +So the actual *Redis Cluster node role switch rule* is: **A master node will change its configuration to replicate (be a replica of) the node that stole its last hash slot**. -During reconfiguration, eventually the number of served hash slots will drop to zero, and the node will reconfigure accordingly. Note that in the base case this just means that the old master will be a slave of the slave that replaced it after a failover. However in the general form the rule covers all possible cases. 
+During reconfiguration, eventually the number of served hash slots will drop to zero, and the node will reconfigure accordingly. Note that in the base case this just means that the old master will be a replica of the replica that replaced it after a failover. However in the general form the rule covers all possible cases. -Slaves do exactly the same: they reconfigure to replicate the node that +Replicas do exactly the same: they reconfigure to replicate the node that stole the last hash slot of its former master. Replica migration @@ -1016,37 +1016,37 @@ Replica migration Redis Cluster implements a concept called *replica migration* in order to improve the availability of the system. The idea is that in a cluster with -a master-slave setup, if the map between slaves and masters is fixed +a master-replica setup, if the map between replicas and masters is fixed availability is limited over time if multiple independent failures of single nodes happen. -For example in a cluster where every master has a single slave, the cluster -can continue operations as long as either the master or the slave fail, but not +For example in a cluster where every master has a single replica, the cluster +can continue operations as long as either the master or the replica fail, but not if both fail the same time. However there is a class of failures that are the independent failures of single nodes caused by hardware or software issues that can accumulate over time. For example: -* Master A has a single slave A1. +* Master A has a single replica A1. * Master A fails. A1 is promoted as new master. -* Three hours later A1 fails in an independent manner (unrelated to the failure of A). No other slave is available for promotion since node A is still down. The cluster cannot continue normal operations. +* Three hours later A1 fails in an independent manner (unrelated to the failure of A). No other replica is available for promotion since node A is still down. 
The cluster cannot continue normal operations. -If the map between masters and slaves is fixed, the only way to make the cluster -more resistant to the above scenario is to add slaves to every master, however +If the map between masters and replicas is fixed, the only way to make the cluster +more resistant to the above scenario is to add replicas to every master, however this is costly as it requires more instances of Redis to be executed, more memory, and so forth. An alternative is to create an asymmetry in the cluster, and let the cluster layout automatically change over time. For example the cluster may have three -masters A, B, C. A and B have a single slave each, A1 and B1. However the master -C is different and has two slaves: C1 and C2. +masters A, B, C. A and B have a single replica each, A1 and B1. However the master +C is different and has two replicas: C1 and C2. -Replica migration is the process of automatic reconfiguration of a slave +Replica migration is the process of automatic reconfiguration of a replica in order to *migrate* to a master that has no longer coverage (no working -slaves). With replica migration the scenario mentioned above turns into the +replicas). With replica migration the scenario mentioned above turns into the following: * Master A fails. A1 is promoted. -* C2 migrates as slave of A1, that is otherwise not backed by any slave. +* C2 migrates as replica of A1, that is otherwise not backed by any replica. * Three hours later A1 fails as well. * C2 is promoted as new master to replace A1. * The cluster can continue the operations. @@ -1054,52 +1054,52 @@ following: Replica migration algorithm --- -The migration algorithm does not use any form of agreement since the slave +The migration algorithm does not use any form of agreement since the replica layout in a Redis Cluster is not part of the cluster configuration that needs to be consistent and/or versioned with config epochs. 
Instead it uses an -algorithm to avoid mass-migration of slaves when a master is not backed. +algorithm to avoid mass-migration of replicas when a master is not backed. The algorithm guarantees that eventually (once the cluster configuration is -stable) every master will be backed by at least one slave. +stable) every master will be backed by at least one replica. This is how the algorithm works. To start we need to define what is a -*good slave* in this context: a good slave is a slave not in `FAIL` state +*good replica* in this context: a good replica is a replica not in `FAIL` state from the point of view of a given node. -The execution of the algorithm is triggered in every slave that detects that -there is at least a single master without good slaves. However among all the -slaves detecting this condition, only a subset should act. This subset is -actually often a single slave unless different slaves have in a given moment +The execution of the algorithm is triggered in every replica that detects that +there is at least a single master without good replicas. However among all the +replicas detecting this condition, only a subset should act. This subset is +actually often a single replica unless different replicas have in a given moment a slightly different view of the failure state of other nodes. -The *acting slave* is the slave among the masters with the maximum number -of attached slaves, that is not in FAIL state and has the smallest node ID. +The *acting replica* is the replica among the masters with the maximum number +of attached replicas, that is not in FAIL state and has the smallest node ID. -So for example if there are 10 masters with 1 slave each, and 2 masters with -5 slaves each, the slave that will try to migrate is - among the 2 masters -having 5 slaves - the one with the lowest node ID. 
Given that no agreement +So for example if there are 10 masters with 1 replica each, and 2 masters with +5 replicas each, the replica that will try to migrate is - among the 2 masters +having 5 replicas - the one with the lowest node ID. Given that no agreement is used, it is possible that when the cluster configuration is not stable, -a race condition occurs where multiple slaves believe themselves to be -the non-failing slave with the lower node ID (it is unlikely for this to happen -in practice). If this happens, the result is multiple slaves migrating to the +a race condition occurs where multiple replicas believe themselves to be +the non-failing replica with the lower node ID (it is unlikely for this to happen +in practice). If this happens, the result is multiple replicas migrating to the same master, which is harmless. If the race happens in a way that will leave -the ceding master without slaves, as soon as the cluster is stable again -the algorithm will be re-executed again and will migrate a slave back to +the ceding master without replicas, as soon as the cluster is stable again +the algorithm will be re-executed again and will migrate a replica back to the original master. -Eventually every master will be backed by at least one slave. However, -the normal behavior is that a single slave migrates from a master with -multiple slaves to an orphaned master. +Eventually every master will be backed by at least one replica. However, +the normal behavior is that a single replica migrates from a master with +multiple replicas to an orphaned master. The algorithm is controlled by a user-configurable parameter called -`cluster-migration-barrier`: the number of good slaves a master -must be left with before a slave can migrate away. For example, if this -parameter is set to 2, a slave can try to migrate only if its master remains -with two working slaves. 
+`cluster-migration-barrier`: the number of good replicas a master +must be left with before a replica can migrate away. For example, if this +parameter is set to 2, a replica can try to migrate only if its master remains +with two working replicas. configEpoch conflicts resolution algorithm --- -When new `configEpoch` values are created via slave promotion during +When new `configEpoch` values are created via replica promotion during failovers, they are guaranteed to be unique. However there are two distinct events where new configEpoch values are @@ -1107,7 +1107,7 @@ created in an unsafe way, just incrementing the local `currentEpoch` of the local node and hoping there are no conflicts at the same time. Both the events are system-administrator triggered: -1. `CLUSTER FAILOVER` command with `TAKEOVER` option is able to manually promote a slave node into a master *without the majority of masters being available*. This is useful, for example, in multi data center setups. +1. `CLUSTER FAILOVER` command with `TAKEOVER` option is able to manually promote a replica node into a master *without the majority of masters being available*. This is useful, for example, in multi data center setups. 2. Migration of slots for cluster rebalancing also generates new configuration epochs inside the local node without agreement for performance reasons. Specifically, during manual resharding, when a hash slot is migrated from @@ -1133,7 +1133,7 @@ Moreover, software bugs and filesystem corruptions can also contribute to multiple nodes having the same configuration epoch. When masters serving different hash slots have the same `configEpoch`, there -are no issues. It is more important that slaves failing over a master have +are no issues. It is more important that replicas failing over a master have unique configuration epochs. That said, manual interventions or resharding may change the cluster @@ -1177,7 +1177,7 @@ provided, a soft reset is performed. 
The following is a list of operations performed by a reset: -1. Soft and hard reset: If the node is a slave, it is turned into a master, and its dataset is discarded. If the node is a master and contains keys the reset operation is aborted. +1. Soft and hard reset: If the node is a replica, it is turned into a master, and its dataset is discarded. If the node is a master and contains keys the reset operation is aborted. 2. Soft and hard reset: All the slots are released, and the manual failover state is reset. 3. Soft and hard reset: All the other nodes in the nodes table are removed, so the node no longer knows any other node. 4. Hard reset only: `currentEpoch`, `configEpoch`, and `lastVoteEpoch` are set to 0. diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index b824fe9dfb..1c47e40cb1 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -117,21 +117,21 @@ inside the string is hashed, so for example `this{foo}key` and `another{foo}key` are guaranteed to be in the same hash slot, and can be used together in a command with multiple keys as arguments. -Redis Cluster master-slave model +Redis Cluster master-replica model --- In order to remain available when a subset of master nodes are failing or are not able to communicate with the majority of nodes, Redis Cluster uses a -master-slave model where every hash slot has from 1 (the master itself) to N -replicas (N-1 additional slaves nodes). +master-replica model where every hash slot has from 1 (the master itself) to N +replicas (N-1 additional replica nodes). In our example cluster with nodes A, B, C, if node B fails the cluster is not able to continue, since we no longer have a way to serve hash slots in the range 5501-11000. 
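The keys-to-slots mapping the tutorial describes above — CRC16 of the key modulo 16384, with only the `{...}` hash-tag portion hashed when present — can be sketched in Python. This is an illustrative sketch following the cluster specification's rules, not the server's implementation:

```python
def crc16_xmodem(data: bytes) -> int:
    # Bitwise CRC16, XModem/CCITT variant (poly 0x1021, init 0),
    # the variant Redis Cluster uses for key hashing.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    # If the key contains a non-empty {...} section, only that
    # substring is hashed, so tagged keys share a slot.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# True by construction: both tagged keys hash only "foo".
print(hash_slot("this{foo}key") == hash_slot("another{foo}key") == hash_slot("foo"))
```

Since both tagged keys hash to the same slot, they can safely be used together in multi-key commands, exactly as the text above states.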
-However when the cluster is created (or at a later time) we add a slave +However when the cluster is created (or at a later time) we add a replica node to every master, so that the final cluster is composed of A, B, C -that are master nodes, and A1, B1, C1 that are slave nodes. +that are master nodes, and A1, B1, C1 that are replica nodes. This way, the system is able to continue if node B fails. Node B1 replicates B, and B fails, the cluster will promote node B1 as the new @@ -153,13 +153,13 @@ happens: * Your client writes to the master B. * The master B replies OK to your client. -* The master B propagates the write to its slaves B1, B2 and B3. +* The master B propagates the write to its replicas B1, B2 and B3. As you can see, B does not wait for an acknowledgement from B1, B2, B3 before replying to the client, since this would be a prohibitive latency penalty for Redis, so if your client writes something, B acknowledges the write, -but crashes before being able to send the write to its slaves, one of the -slaves (that did not receive the write) can be promoted to master, losing +but crashes before being able to send the write to its replicas, one of the +replicas (that did not receive the write) can be promoted to master, losing the write forever. This is **very similar to what happens** with most databases that are @@ -177,7 +177,7 @@ Redis Cluster has support for synchronous writes when absolutely needed, implemented via the `WAIT` command. This makes losing writes a lot less likely. However, note that Redis Cluster does not implement strong consistency even when synchronous replication is used: it is always possible, under more -complex failure scenarios, that a slave that was not able to receive the write +complex failure scenarios, that a replica that was not able to receive the write will be elected as master. 
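The write-loss window described above — the master acknowledges the client, then crashes before shipping the write to its replicas, and a stale replica is promoted — can be illustrated with a toy model. The `Node` class and method names here are invented for illustration and are not Redis APIs:

```python
class Node:
    def __init__(self):
        self.data = {}
        self.replicas = []

    def write(self, key, value):
        # Asynchronous replication: the client is acknowledged first...
        self.data[key] = value
        return "OK"  # client sees success at this point

    def propagate(self, key):
        # ...and the write is shipped to replicas afterwards.
        # This is the step a crash can cut off.
        for r in self.replicas:
            r.data[key] = self.data[key]

master, replica = Node(), Node()
master.replicas.append(replica)

ack = master.write("balance", 100)   # client received OK
# master crashes *before* propagate() runs; the replica is promoted
promoted = replica
print(ack, promoted.data.get("balance"))  # OK None -> the acknowledged write is lost
```

This is the small window the text refers to: in the real system `write` and `propagate` happen at nearly the same time, so the loss is unlikely but, without synchronous replication (`WAIT`), never impossible.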
There is another notable scenario where Redis Cluster will lose writes, that @@ -185,7 +185,7 @@ happens during a network partition where a client is isolated with a minority of instances including at least a master. Take as an example our 6 nodes cluster composed of A, B, C, A1, B1, C1, -with 3 masters and 3 slaves. There is also a client, that we will call Z1. +with 3 masters and 3 replicas. There is also a client, that we will call Z1. After a partition occurs, it is possible that in one side of the partition we have A, C, A1, B1, C1, and in the other side we have B and Z1. @@ -198,7 +198,7 @@ in the mean time will be lost. Note that there is a **maximum window** to the amount of writes Z1 will be able to send to B: if enough time has elapsed for the majority side of the -partition to elect a slave as master, every master node in the minority +partition to elect a replica as master, every master node in the minority side will have stopped accepting writes. This amount of time is a very important configuration directive of Redis @@ -220,9 +220,9 @@ as you continue reading. * **cluster-enabled ``**: If yes, enables Redis Cluster support in a specific Redis instance. Otherwise the instance starts as a stand alone instance as usual. * **cluster-config-file ``**: Note that despite the name of this option, this is not a user editable configuration file, but the file where a Redis Cluster node automatically persists the cluster configuration (the state, basically) every time there is a change, in order to be able to re-read it at startup. The file lists things like the other nodes in the cluster, their state, persistent variables, and so forth. Often this file is rewritten and flushed on disk as a result of some message reception. -* **cluster-node-timeout ``**: The maximum amount of time a Redis Cluster node can be unavailable, without it being considered as failing. 
If a master node is not reachable for more than the specified amount of time, it will be failed over by its slaves. This parameter controls other important things in Redis Cluster. Notably, every node that can't reach the majority of master nodes for the specified amount of time, will stop accepting queries. -* **cluster-slave-validity-factor ``**: If set to zero, a slave will always consider itself valid, and will therefore always try to failover a master, regardless of the amount of time the link between the master and the slave remained disconnected. If the value is positive, a maximum disconnection time is calculated as the *node timeout* value multiplied by the factor provided with this option, and if the node is a slave, it will not try to start a failover if the master link was disconnected for more than the specified amount of time. For example, if the node timeout is set to 5 seconds and the validity factor is set to 10, a slave disconnected from the master for more than 50 seconds will not try to failover its master. Note that any value different than zero may result in Redis Cluster being unavailable after a master failure if there is no slave that is able to failover it. In that case the cluster will return to being available only when the original master rejoins the cluster. -* **cluster-migration-barrier ``**: Minimum number of slaves a master will remain connected with, for another slave to migrate to a master which is no longer covered by any slave. See the appropriate section about replica migration in this tutorial for more information. +* **cluster-node-timeout ``**: The maximum amount of time a Redis Cluster node can be unavailable, without it being considered as failing. If a master node is not reachable for more than the specified amount of time, it will be failed over by its replicas. This parameter controls other important things in Redis Cluster. 
Notably, every node that can't reach the majority of master nodes for the specified amount of time, will stop accepting queries. +* **cluster-slave-validity-factor ``**: If set to zero, a replica will always consider itself valid, and will therefore always try to failover a master, regardless of the amount of time the link between the master and the replica remained disconnected. If the value is positive, a maximum disconnection time is calculated as the *node timeout* value multiplied by the factor provided with this option, and if the node is a replica, it will not try to start a failover if the master link was disconnected for more than the specified amount of time. For example, if the node timeout is set to 5 seconds and the validity factor is set to 10, a replica disconnected from the master for more than 50 seconds will not try to failover its master. Note that any value different than zero may result in Redis Cluster being unavailable after a master failure if there is no replica that is able to failover it. In that case the cluster will return to being available only when the original master rejoins the cluster. +* **cluster-migration-barrier ``**: Minimum number of replicas a master will remain connected with, for another replica to migrate to a master which is no longer covered by any replica. See the appropriate section about replica migration in this tutorial for more information. * **cluster-require-full-coverage ``**: If this is set to yes, as it is by default, the cluster stops accepting writes if some percentage of the key space is not covered by any node. If the option is set to no, the cluster will still serve queries even if only requests about a subset of keys can be processed. * **cluster-allow-reads-when-down ``**: If this is set to no, as it is by default, a node in a Redis Cluster will stop serving all traffic when the cluster is marked as failed, either when a node can't reach a quorum of masters or when full coverage is not met. 
This prevents reading potentially inconsistent data from a node that is unaware of changes in the cluster. This option can be set to yes to allow reads from a node during the fail state, which is useful for applications that want to prioritize read availability but still want to prevent inconsistent writes. It can also be used for when using Redis Cluster with only one or two shards, as it allows the nodes to continue serving writes when a master fails but automatic failover is impossible. @@ -258,7 +258,7 @@ by the Redis Cluster instances, and updated every time it is needed. Note that the **minimal cluster** that works as expected requires to contain at least three master nodes. For your first tests it is strongly suggested -to start a six nodes cluster with three masters and three slaves. +to start a six nodes cluster with three masters and three replicas. To do so, enter a new directory, and create the following directories named after the port number of the instance we'll run inside any given directory. @@ -322,12 +322,12 @@ Using `redis-trib.rb` for Redis 4 or 3 type: 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 The command used here is **create**, since we want to create a new cluster. -The option `--cluster-replicas 1` means that we want a slave for every master created. +The option `--cluster-replicas 1` means that we want a replica for every master created. The other arguments are the list of addresses of the instances I want to use to create the new cluster. Obviously the only setup with our requirements is to create a cluster with -3 masters and 3 slaves. +3 masters and 3 replicas. Redis-cli will propose you a configuration. Accept the proposed configuration by typing **yes**. The cluster will be configured and *joined*, which means, instances will be @@ -349,7 +349,7 @@ system (but you'll not learn the same amount of operational details). Just check `utils/create-cluster` directory in the Redis distribution. 
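The per-port directory layout the tutorial asks for can be scripted. The sketch below (Python, for portability) writes a minimal `redis.conf` per node using the cluster directives discussed above; it is a test-setup sketch, not a production configuration:

```python
import os

# Minimal per-node configuration, using the directives this tutorial covers.
CONF_TEMPLATE = """port {port}
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
"""

def make_cluster_dirs(base="cluster-test", ports=range(7000, 7006)):
    # One directory per node, named after its port, each with its own config.
    for port in ports:
        node_dir = os.path.join(base, str(port))
        os.makedirs(node_dir, exist_ok=True)
        with open(os.path.join(node_dir, "redis.conf"), "w") as f:
            f.write(CONF_TEMPLATE.format(port=port))
    return [os.path.join(base, str(p)) for p in ports]

dirs = make_cluster_dirs()
# Each node is then started from inside its directory: redis-server ./redis.conf
```

Note that `cluster-config-file nodes.conf` is the file the node itself rewrites with cluster state, which is why each node needs its own working directory.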
There is a script called `create-cluster` inside (same name as the directory it is contained into), it's a simple bash script. In order to start -a 6 nodes cluster with 3 masters and 3 slaves just type the following +a 6 nodes cluster with 3 masters and 3 replicas just type the following commands: 1. `create-cluster start` @@ -735,14 +735,14 @@ sound unexpected as in the first part of this tutorial we stated that Redis Cluster can lose writes during the failover because it uses asynchronous replication. What we did not say is that this is not very likely to happen because Redis sends the reply to the client, and the commands to replicate -to the slaves, about at the same time, so there is a very small window to +to the replicas, about at the same time, so there is a very small window to lose data. However the fact that it is hard to trigger does not mean that it is impossible, so this does not change the consistency guarantees provided by Redis cluster. We can now check what is the cluster setup after the failover (note that in the meantime I restarted the crashed instance so that it rejoins the -cluster as a slave): +cluster as a replica): ``` $ redis-cli -p 7000 cluster nodes @@ -755,15 +755,15 @@ a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4 ``` Now the masters are running on ports 7000, 7001 and 7005. What was previously -a master, that is the Redis instance running on port 7002, is now a slave of +a master, that is the Redis instance running on port 7002, is now a replica of 7005. The output of the `CLUSTER NODES` command may look intimidating, but it is actually pretty simple, and is composed of the following tokens: * Node ID * ip:port -* flags: master, slave, myself, fail, ... -* if it is a slave, the Node ID of the master +* flags: master, replica, myself, fail, ... +* if it is a replica, the Node ID of the master * Time of the last pending PING still waiting for a reply. * Time of the last PONG received. 
* Configuration epoch for this node (see the Cluster specification). @@ -775,11 +775,11 @@ Manual failover Sometimes it is useful to force a failover without actually causing any problem on a master. For example in order to upgrade the Redis process of one of the -master nodes it is a good idea to failover it in order to turn it into a slave +master nodes it is a good idea to failover it in order to turn it into a replica with minimal impact on availability. Manual failovers are supported by Redis Cluster using the `CLUSTER FAILOVER` -command, that must be executed in one of the **slaves** of the master you want +command, that must be executed in one of the **replicas** of the master you want to failover. Manual failovers are special and are safer compared to failovers resulting from @@ -788,7 +788,7 @@ process, by switching clients from the original master to the new master only when the system is sure that the new master processed all the replication stream from the old one. -This is what you see in the slave log when you perform a manual failover: +This is what you see in the replica log when you perform a manual failover: # Manual failover user request accepted. # Received replication offset for paused master manual failover: 347540 @@ -798,7 +798,7 @@ This is what you see in the slave log when you perform a manual failover: # Failover election won: I'm the new master. Basically clients connected to the master we are failing over are stopped. -At the same time the master sends its replication offset to the slave, that +At the same time the master sends its replication offset to the replica, that waits to reach the offset on its side. When the replication offset is reached, the failover starts, and the old master is informed about the configuration switch. 
When the clients are unblocked on the old master, they are redirected @@ -815,7 +815,7 @@ Adding a new node Adding a new node is basically the process of adding an empty node and then moving some data into it, in case it is a new master, or telling it to -setup as a replica of a known node, in case it is a slave. +setup as a replica of a known node, in case it is a replica. We'll show both, starting with the addition of a new master instance. @@ -867,7 +867,7 @@ able to redirect client queries correctly and is generally speaking part of the cluster. However it has two peculiarities compared to the other masters: * It holds no data as it has no assigned hash slots. -* Because it is a master without assigned slots, it does not participate in the election process when a slave wants to become a master. +* Because it is a master without assigned slots, it does not participate in the election process when a replica wants to become a master. Now it is possible to assign hash slots to this node using the resharding feature of `redis-cli`. It is basically useless to show this as we already @@ -896,7 +896,7 @@ This way we assign the new replica to a specific master. A more manual way to add a replica to a specific master is to add the new node as an empty master, and then turn it into a replica using the -`CLUSTER REPLICATE` command. This also works if the node was added as a slave +`CLUSTER REPLICATE` command. This also works if the node was added as a replica but you want to move it as a replica of a different master. For example in order to add a replica for the node 127.0.0.1:7005 that is @@ -916,12 +916,12 @@ f093c80dde814da99c5cf72a7dd01590792b783b 127.0.0.1:7006 slave 3c3a0c74aae0b56170 2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543617198 3 connected ``` -The node 3c3a0c... now has two slaves, running on ports 7002 (the existing one) and 7006 (the new one). +The node 3c3a0c... 
now has two replicas, running on ports 7002 (the existing one) and 7006 (the new one). Removing a node --- -To remove a slave node just use the `del-node` command of redis-cli: +To remove a replica node just use the `del-node` command of redis-cli: redis-cli --cluster del-node 127.0.0.1:7000 `` @@ -933,14 +933,14 @@ remove a master node it must be empty**. If the master is not empty you need to reshard data away from it to all the other master nodes before. An alternative to remove a master node is to perform a manual failover of it -over one of its slaves and remove the node after it turned into a slave of the +over one of its replicas and remove the node after it turned into a replica of the new master. Obviously this does not help when you want to reduce the actual number of masters in your cluster, in that case, a resharding is needed. Replicas migration --- -In Redis Cluster it is possible to reconfigure a slave to replicate with a +In Redis Cluster it is possible to reconfigure a replica to replicate with a different master at any time just using the following command: CLUSTER REPLICATE @@ -964,21 +964,21 @@ serving. However while net-splits are likely to isolate a number of nodes at the same time, many other kind of failures, like hardware or software failures local to a single node, are a very notable class of failures that are unlikely to happen at the same time, so it is possible that in your cluster where -every master has a slave, the slave is killed at 4am, and the master is killed +every master has a replica, the replica is killed at 4am, and the master is killed at 6am. This still will result in a cluster that can no longer operate. To improve reliability of the system we have the option to add additional replicas to every master, but this is expensive. Replica migration allows to -add more slaves to just a few masters. So you have 10 masters with 1 slave +add more replicas to just a few masters. 
So you have 10 masters with 1 replica each, for a total of 20 instances. However you add, for example, 3 instances -more as slaves of some of your masters, so certain masters will have more -than a single slave. +more as replicas of some of your masters, so certain masters will have more +than a single replica. With replicas migration what happens is that if a master is left without -slaves, a replica from a master that has multiple slaves will migrate to -the *orphaned* master. So after your slave goes down at 4am as in the example -we made above, another slave will take its place, and when the master -will fail as well at 5am, there is still a slave that can be elected so that +replicas, a replica from a master that has multiple replicas will migrate to +the *orphaned* master. So after your replica goes down at 4am as in the example +we made above, another replica will take its place, and when the master +will fail as well at 5am, there is still a replica that can be elected so that the cluster can continue to operate. So what you should know about replicas migration in short? @@ -990,17 +990,17 @@ So what you should know about replicas migration in short? Upgrading nodes in a Redis Cluster --- -Upgrading slave nodes is easy since you just need to stop the node and restart +Upgrading replica nodes is easy since you just need to stop the node and restart it with an updated version of Redis. If there are clients scaling reads using -slave nodes, they should be able to reconnect to a different slave if a given +replica nodes, they should be able to reconnect to a different replica if a given one is not available. Upgrading masters is a bit more complex, and the suggested procedure is: 1. Use `CLUSTER FAILOVER` to trigger a manual failover of the master to one of its replicas. (See the [Manual failover](#manual-failover) section in this document.) -2. Wait for the master to turn into a slave. -3. Finally upgrade the node as you do for slaves. +2. 
Wait for the master to turn into a replica. +3. Finally upgrade the node as you do for replicas. 4. If you want the master to be the node you just upgraded, trigger a new manual failover in order to turn back the upgraded node into a master. Following this procedure you should upgrade one node after the other until @@ -1036,7 +1036,7 @@ in order to migrate your data set to Redis Cluster: 1. Stop your clients. No automatic live-migration to Redis Cluster is currently possible. You may be able to do it orchestrating a live migration in the context of your application / environment. 2. Generate an append only file for all of your N masters using the BGREWRITEAOF command, and waiting for the AOF file to be completely generated. 3. Save your AOF files from aof-1 to aof-N somewhere. At this point you can stop your old instances if you wish (this is useful since in non-virtualized deployments you often need to reuse the same computers). -4. Create a Redis Cluster composed of N masters and zero slaves. You'll add slaves later. Make sure all your nodes are using the append only file for persistence. +4. Create a Redis Cluster composed of N masters and zero replicas. You'll add replicas later. Make sure all your nodes are using the append only file for persistence. 5. Stop all the cluster nodes, substitute their append only file with your pre-existing append only files, aof-1 for the first node, aof-2 for the second node, up to aof-N. 6. Restart your Redis Cluster nodes with the new AOF files. They'll complain that there are keys that should not be there according to their configuration. 7. Use `redis-cli --cluster fix` command in order to fix the cluster so that keys will be migrated according to the hash slots each node is authoritative or not. @@ -1053,3 +1053,4 @@ may be slow since 2.8 does not implement migrate connection caching, so you may want to restart your source instance with a Redis 3.x version before to perform such operation. 
+**A note about the word slave used in this page**: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated. diff --git a/topics/config.md b/topics/config.md index a40c290e54..3bc6c0e2db 100644 --- a/topics/config.md +++ b/topics/config.md @@ -45,7 +45,7 @@ Passing arguments via the command line Since Redis 2.6 it is possible to also pass Redis configuration parameters using the command line directly. This is very useful for testing purposes. The following is an example that starts a new Redis instance using port 6380 -as a slave of the instance running at 127.0.0.1 port 6379. +as a replica of the instance running at 127.0.0.1 port 6379. ./redis-server --port 6380 --slaveof 127.0.0.1 6379 diff --git a/topics/distlock.md b/topics/distlock.md index 139a738a04..f3067bbc01 100644 --- a/topics/distlock.md +++ b/topics/distlock.md @@ -58,13 +58,13 @@ To understand what we want to improve, let’s analyze the current state of affa The simplest way to use Redis to lock a resource is to create a key in an instance. The key is usually created with a limited time to live, using the Redis expires feature, so that eventually it will get released (property 2 in our list). When the client needs to release the resource, it deletes the key. Superficially this works well, but there is a problem: this is a single point of failure in our architecture. What happens if the Redis master goes down? -Well, let’s add a slave! And use it if the master is unavailable. This is unfortunately not viable. By doing so we can’t implement our safety property of mutual exclusion, because Redis replication is asynchronous. +Well, let’s add a replica! And use it if the master is unavailable. This is unfortunately not viable. 
By doing so we can’t implement our safety property of mutual exclusion, because Redis replication is asynchronous. There is an obvious race condition with this model: 1. Client A acquires the lock in the master. -2. The master crashes before the write to the key is transmitted to the slave. -3. The slave gets promoted to master. +2. The master crashes before the write to the key is transmitted to the replica. +3. The replica gets promoted to master. 4. Client B acquires the lock to the same resource A already holds a lock for. **SAFETY VIOLATION!** Sometimes it is perfectly fine that under special circumstances, like during a failure, multiple clients can hold the lock at the same time. diff --git a/topics/faq.md b/topics/faq.md index c8ea0ad1e5..009a57ec9d 100644 --- a/topics/faq.md +++ b/topics/faq.md @@ -151,16 +151,16 @@ Every hash, list, set, and sorted set, can hold 2^32 elements. In other words your limit is likely the available memory in your system. -## My slave claims to have a different number of keys compared to its master, why? +## My replica claims to have a different number of keys compared to its master, why? If you use keys with limited time to live (Redis expires) this is normal behavior. This is what happens: -* The master generates an RDB file on the first synchronization with the slave. +* The master generates an RDB file on the first synchronization with the replica. * The RDB file will not include keys already expired in the master, but that are still in memory. * However these keys are still in the memory of the Redis master, even if logically expired. They'll not be considered as existing, but the memory will be reclaimed later, both incrementally and explicitly on access. However while these keys are not logical part of the dataset, they are advertised in `INFO` output and by the `DBSIZE` command. -* When the slave reads the RDB file generated by the master, this set of keys will not be loaded. 
+* When the replica reads the RDB file generated by the master, this set of keys will not be loaded. -As a result of this, it is common for users with many keys with an expire set to see less keys in the slaves, because of this artifact, but there is no actual logical difference in the instances content. +As a result of this, it is common for users with many keys with an expire set to see less keys in the replicas, because of this artifact, but there is no actual logical difference in the instances content. ## What does Redis actually mean? diff --git a/topics/modules-intro.md b/topics/modules-intro.md index 564252c013..927486d2d0 100644 --- a/topics/modules-intro.md +++ b/topics/modules-intro.md @@ -780,7 +780,7 @@ since they are not guaranteed to mix well. However this is not the only option. It's also possible to exactly tell Redis what commands to replicate as the effect of the command execution, using an API similar to `RedisModule_Call()` but that instead of calling the command -sends it to the AOF / slaves stream. Example: +sends it to the AOF / replicas stream. Example: RedisModule_Replicate(ctx,"INCRBY","cl","foo",my_increment); diff --git a/topics/partitioning.md b/topics/partitioning.md index d7c626d629..f72bec58e4 100644 --- a/topics/partitioning.md +++ b/topics/partitioning.md @@ -75,10 +75,10 @@ In this way as your data storage needs increase and you need more Redis servers, Using Redis replication you will likely be able to do the move with minimal or no downtime for your users: * Start empty instances in your new server. -* Move data configuring these new instances as slaves for your source instances. +* Move data configuring these new instances as replicas for your source instances. * Stop your clients. * Update the configuration of the moved instances with the new server IP address. -* Send the `SLAVEOF NO ONE` command to the slaves in the new server. +* Send the `REPLICAOF NO ONE` command to the replicas in the new server. 
* Restart your clients with the new updated configuration. * Finally shut down the no longer used instances in the old server. diff --git a/topics/rediscli.md b/topics/rediscli.md index 65fce1f958..8b0afcf39a 100644 --- a/topics/rediscli.md +++ b/topics/rediscli.md @@ -15,7 +15,7 @@ a good typing experience. However `redis-cli` is not just that. There are options you can use to launch the program in order to put it into special modes, so that `redis-cli` can -definitely do more complex tasks, like simulate a slave and print the +definitely do more complex tasks, like simulate a replica and print the replication stream it receives from the master, check the latency of a Redis server and show statistics or even an ASCII-art spectrogram of latency samples and frequencies, and many other things. @@ -421,7 +421,7 @@ are explained in the next sections: * Checking the [latency](/topics/latency) of a Redis server in different ways. * Checking the scheduler latency of the local computer. * Transferring RDB backups from a remote Redis server locally. -* Acting as a Redis slave for showing what a slave receives. +* Acting as a Redis replica for showing what a replica receives. * Simulating [LRU](/topics/lru-cache) workloads for showing stats about keys hits. * A client for the Lua debugger. @@ -680,7 +680,7 @@ millisecond from time to time. ## Remote backups of RDB files -During Redis replication's first synchronization, the master and the slave +During Redis replication's first synchronization, the master and the replica exchange the whole data set in form of an RDB file. This feature is exploited by `redis-cli` in order to provide a remote backup facility, that allows to transfer an RDB file from any Redis instance to the local computer running @@ -701,15 +701,15 @@ If it is non zero, an error occurred like in the following example: $ echo $? 
1 -## Slave mode +## Replcia mode -The slave mode of the CLI is an advanced feature useful for +The replica mode of the CLI is an advanced feature useful for Redis developers and for debugging operations. -It allows to inspect what a master sends to its slaves in the replication +It allows to inspect what a master sends to its replicas in the replication stream in order to propagate the writes to its replicas. The option -name is simply `--slave`. This is how it works: +name is simply `--replica`. This is how it works: - $ redis-cli --slave + $ redis-cli --replica SYNC with master, discarding 13256 bytes of bulk transfer... SYNC done. Logging commands from master. "PING" @@ -721,7 +721,7 @@ name is simply `--slave`. This is how it works: The command begins by discarding the RDB file of the first synchronization and then logs each command received as in CSV format. -If you think some of the commands are not replicated correctly in your slaves +If you think some of the commands are not replicated correctly in your replicas this is a good way to check what's happening, and also useful information in order to improve the bug report. diff --git a/topics/replication.md b/topics/replication.md index 306464e3bb..3e0527e574 100644 --- a/topics/replication.md +++ b/topics/replication.md @@ -1,7 +1,7 @@ Replication === -At the base of Redis replication (excluding the high availability features provided as an additional layer by Redis Cluster or Redis Sentinel) there is a very simple to use and configure *leader follower* (master-slave) replication: it allows replica Redis instances to be exact copies of master instances. The replica will automatically reconnect to the master every time the link breaks, and will attempt to be an exact copy of it *regardless* of what happens to the master. 
+At the base of Redis replication (excluding the high availability features provided as an additional layer by Redis Cluster or Redis Sentinel) there is a very simple to use and configure *leader follower* (master-replica) replication: it allows replica Redis instances to be exact copies of master instances. The replica will automatically reconnect to the master every time the link breaks, and will attempt to be an exact copy of it *regardless* of what happens to the master. This system works using three main mechanisms: From 139e8710df457d77c553de8008716071b2401865 Mon Sep 17 00:00:00 2001 From: Leibale Eidelman Date: Thu, 23 Sep 2021 14:39:02 -0400 Subject: [PATCH 047/813] Update cluster-slots.md (#1656) --- commands/cluster-slots.md | 52 ++++++++++----------------------------- 1 file changed, 13 insertions(+), 39 deletions(-) diff --git a/commands/cluster-slots.md b/commands/cluster-slots.md index 39d745b6b7..b081a48820 100644 --- a/commands/cluster-slots.md +++ b/commands/cluster-slots.md @@ -2,7 +2,7 @@ Redis instances. The command is suitable to be used by Redis Cluster client libraries implementations in order to retrieve (or update when a redirection is received) the map associating cluster *hash slots* with actual nodes -network coordinates (composed of an IP address and a TCP port), so that when +network coordinates (composed of an IP address, a TCP port, and the node ID), so that when a command is received, it can be sent to what is likely the right instance for the keys specified in the command. @@ -11,7 +11,7 @@ Each nested result is: - Start slot range - End slot range - - Master for slot range represented as nested IP/Port array + - Master for slot range represented as nested IP/Port/ID array - First replica of master for slot range - Second replica - ...continues until all replicas for this master are returned. @@ -19,54 +19,23 @@ Each nested result is: Each result includes all active replicas of the master instance for the listed slot range. 
Failed replicas are not returned. -The third nested reply is guaranteed to be the IP/Port pair of +The third nested reply is guaranteed to be the IP/Port/ID array of the master instance for the slot range. -All IP/Port pairs after the third nested reply are replicas +All IP/Port/ID arrays after the third nested reply are replicas of the master. If a cluster instance has non-contiguous slots (e.g. 1-400,900,1800-6000) then -master and replica IP/Port results will be duplicated for each top-level +master and replica IP/Port/ID results will be duplicated for each top-level slot range reply. -**Warning:** Newer versions of Redis Cluster will output, for each Redis instance, not just the IP and port, but also the node ID as third element of the array. In future versions there could be more elements describing the node better. In general a client implementation should just rely on the fact that certain parameters are at fixed positions as specified, but more parameters may follow and should be ignored. Similarly a client library should try if possible to cope with the fact that older versions may just have the IP and port parameter. - @return -@array-reply: nested list of slot ranges with IP/Port mappings. - -### Sample Output (old version) -``` -127.0.0.1:7001> cluster slots -1) 1) (integer) 0 - 2) (integer) 4095 - 3) 1) "127.0.0.1" - 2) (integer) 7000 - 4) 1) "127.0.0.1" - 2) (integer) 7004 -2) 1) (integer) 12288 - 2) (integer) 16383 - 3) 1) "127.0.0.1" - 2) (integer) 7003 - 4) 1) "127.0.0.1" - 2) (integer) 7007 -3) 1) (integer) 4096 - 2) (integer) 8191 - 3) 1) "127.0.0.1" - 2) (integer) 7001 - 4) 1) "127.0.0.1" - 2) (integer) 7005 -4) 1) (integer) 8192 - 2) (integer) 12287 - 3) 1) "127.0.0.1" - 2) (integer) 7002 - 4) 1) "127.0.0.1" - 2) (integer) 7006 -``` +@array-reply: nested list of slot ranges with IP/Port/ID mappings. 
+@examples -### Sample Output (new version, includes IDs) ``` -127.0.0.1:30001> cluster slots +> CLUSTER SLOTS 1) 1) (integer) 0 2) (integer) 5460 3) 1) "127.0.0.1" @@ -93,3 +62,8 @@ slot range reply. 3) "58e6e48d41228013e5d9c1c37c5060693925e97e" ``` +@history + +* `>= 4.0`: Added node IDs. + +**Warning:** In future versions there could be more elements describing the node better. In general a client implementation should just rely on the fact that certain parameters are at fixed positions as specified, but more parameters may follow and should be ignored. Similarly a client library should try if possible to cope with the fact that older versions may just have the IP and port parameter. From 399441a618c2f35e6790c533554c5fda8046fa38 Mon Sep 17 00:00:00 2001 From: Simon Prickett Date: Tue, 28 Sep 2021 14:46:26 +0100 Subject: [PATCH 048/813] Minor tidy up and removed work in progress updates soon (#1660) Removed work in progress updates soon section. --- topics/memory-optimization.md | 11 +++-------- 1 file changed, 3 insertions(+), 8 deletions(-) diff --git a/topics/memory-optimization.md b/topics/memory-optimization.md index ff252e20fb..b398d5c33a 100644 --- a/topics/memory-optimization.md +++ b/topics/memory-optimization.md @@ -29,7 +29,7 @@ Redis 2.2 introduced new bit and byte level operations: `GETRANGE`, `SETRANGE`, Use hashes when possible ------------------------ -Small hashes are encoded in a very small space, so you should try representing your data using hashes every time it is possible. For instance if you have objects representing users in a web application, instead of using different keys for name, surname, email, password, use a single hash with all the required fields. +Small hashes are encoded in a very small space, so you should try representing your data using hashes whenever possible. 
For instance if you have objects representing users in a web application, instead of using different keys for name, surname, email, password, use a single hash with all the required fields. If you want to know more about this, read the next section. @@ -42,7 +42,7 @@ Basically it is possible to model a plain key-value store using Redis where values can just be just strings, that is not just more memory efficient than Redis plain keys but also much more memory efficient than memcached. -Let's start with some fact: a few keys use a lot more memory than a single key +Let's start with some facts: a few keys use a lot more memory than a single key containing a hash with a few fields. How is this possible? We use a trick. In theory in order to guarantee that we perform lookups in constant time (also known as O(1) in big O notation) there is the need to use a data structure @@ -212,15 +212,10 @@ the allocations performed by Redis). Because the RSS reflects the peak memory, when the (virtually) used memory is low since a lot of keys / values were freed, but the RSS is high, the ratio `RSS / mem_used` will be very high. -If `maxmemory` is not set Redis will keep allocating memory as it finds +If `maxmemory` is not set Redis will keep allocating memory as it sees fit and thus it can (gradually) eat up all your free memory. Therefore it is generally advisable to configure some limit. You may also want to set `maxmemory-policy` to `noeviction` (which is *not* the default value in some older versions of Redis). It makes Redis return an out of memory error for write commands if and when it reaches the limit - which in turn may result in errors in the application but will not render the whole machine dead because of memory starvation. - -Work in progress ----------------- - -Work in progress... more tips will be added soon. 
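The hash-bucketing "trick" the memory-optimization patch above alludes to can be sketched in a few lines. The following Python sketch is illustrative only (the function name and the two-digit split are assumptions, not part of the patch): a client maps a flat key such as `object:1234` onto a small hash key plus field, so that many logical keys share one Redis hash that stays within the compact-encoding limits.

```python
def bucket(key, field_digits=2):
    """Split a key such as 'object:1234' into a (hash_key, field) pair.

    Keys that share all but their last `field_digits` characters land in
    the same small hash, which Redis can keep in its compact encoding as
    long as the per-hash entry limit is not exceeded.
    """
    prefix, _, item_id = key.rpartition(":")
    if len(item_id) <= field_digits:
        # Too short to split: use a single shared bucket for the prefix.
        return f"{prefix}:", item_id
    return f"{prefix}:{item_id[:-field_digits]}", item_id[-field_digits:]

# Instead of SET object:1234 <value>, a client would do
# HSET <hash_key> <field> <value>:
print(bucket("object:1234"))  # ('object:12', '34')
```

With a two-digit field, up to 100 logical keys share each hash, which keeps every bucket well under a typical per-hash entry limit such as 100.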
From 1e5d8e82f9979d067cecb11e7984e5c47644b128 Mon Sep 17 00:00:00 2001 From: Simon Prickett Date: Tue, 28 Sep 2021 14:47:21 +0100 Subject: [PATCH 049/813] Minor phrasing improvements. (#1659) --- commands/xdel.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/commands/xdel.md b/commands/xdel.md index 6ac1477fd8..a67a9e8e52 100644 --- a/commands/xdel.md +++ b/commands/xdel.md @@ -1,9 +1,9 @@ Removes the specified entries from a stream, and returns the number of entries -deleted, that may be different from the number of IDs passed to the command in -case certain IDs do not exist. +deleted. This number may be less than the number of IDs passed to the command in +the case where some of the specified IDs do not exist in the stream. Normally you may think at a Redis stream as an append-only data structure, -however Redis streams are represented in memory, so we are able to also +however Redis streams are represented in memory, so we are also able to delete entries. This may be useful, for instance, in order to comply with certain privacy policies. @@ -18,8 +18,8 @@ Eventually if all the entries in a macro-node are marked as deleted, the whole node is destroyed and the memory reclaimed. This means that if you delete a large amount of entries from a stream, for instance more than 50% of the entries appended to the stream, the memory usage per entry may increment, since -what happens is that the stream will start to be fragmented. However the stream -performances will remain the same. +what happens is that the stream will become fragmented. However the stream +performance will remain the same. 
In future versions of Redis it is possible that we'll trigger a node garbage collection in case a given macro-node reaches a given amount of deleted From a6a621240432e5f6fffed05125807c0383be6844 Mon Sep 17 00:00:00 2001 From: yoav-steinberg Date: Wed, 29 Sep 2021 14:24:40 +0300 Subject: [PATCH 050/813] Client eviction (#1657) --- commands.json | 16 +++++++++++++++ commands/client-no-evict.md | 11 ++++++++++ topics/clients.md | 40 +++++++++++++++++++++++++++++++++++++ 3 files changed, 67 insertions(+) create mode 100644 commands/client-no-evict.md diff --git a/commands.json b/commands.json index 4f40282829..45cb3d0a78 100644 --- a/commands.json +++ b/commands.json @@ -801,6 +801,22 @@ "since": "5.0.0", "group": "connection" }, + "CLIENT NO-EVICT": { + "summary": "Set client eviction mode for the current connection", + "complexity": "O(1)", + "since": "7.0.0", + "arguments": [ + { + "name": "enabled", + "type": "enum", + "enum": [ + "ON", + "OFF" + ] + } + ], + "group": "connection" + }, "CLUSTER ADDSLOTS": { "summary": "Assign new hash slots to receiving node", "complexity": "O(N) where N is the total number of hash slot arguments", diff --git a/commands/client-no-evict.md b/commands/client-no-evict.md new file mode 100644 index 0000000000..70070a6abb --- /dev/null +++ b/commands/client-no-evict.md @@ -0,0 +1,11 @@ +The `CLIENT NO-EVICT` command sets the [client eviction](/topics/clients#client-eviction) mode for the current connection. + +When turned on and client eviction is configured, the current connection will be excluded from the client eviction process even if we're above the configured client eviction threshold. + +When turned off, the current client will be re-included in the pool of potential clients to be evicted (and evicted if needed). + +See [client eviction](/topics/clients#client-eviction) for more details. + +@return + +@simple-string-reply: `OK`. 
diff --git a/topics/clients.md b/topics/clients.md index 0d1fd4b06b..914919bae3 100644 --- a/topics/clients.md +++ b/topics/clients.md @@ -113,6 +113,46 @@ Query buffer hard limit Every client is also subject to a query buffer limit. This is a non-configurable hard limit that will close the connection when the client query buffer (that is the buffer we use to accumulate commands from the client) reaches 1 GB, and is actually only an extreme limit to avoid a server crash in case of client or server software bugs. +Client Eviction +--- + +Redis commonly handles a very large number of client connections. +Client connections tend to consume memory, and when there are many of them, the aggregate memory consumption can be extremely high, leading to data eviction or out-of-memory errors. +These cases can be mitigated to an extent using [output buffer limits](#output-buffers-limits), but Redis allows us a more robust configuration to limit the aggregate memory used by all clients' connections. + + +This mechanism is called **client eviction**, and it's essentially a safety mechanism that will disconnect clients once the aggregate memory usage of all clients is above a threshold. +The mechanism first attempts to disconnect clients that use the most memory. +It disconnects the minimal number of clients needed to return below the `maxmemroy-clients` threshold. + +`maxmemroy-clients` defines the maximum aggregate memory usage of all clients connected to Redis. +The aggregation takes into account all the memory used by the client connections: the [query buffer](#query-buffer-hard-limit), the output buffer, and other intermediate buffers. + +Note that replica and master connections aren't affected by the client eviction mechanism. Therefore, such connections are never evicted. + +`maxmemory-clients` can be set permanently in the configuration file (`redis.conf`) or via the `CONFIG SET` command. 
+This setting can either be 0 (meaning no limit), a size in bytes (possibly with `mb`/`gb` suffix),
+or a percentage of `maxmemory` by using the `%` suffix (e.g. setting it to `10%` would mean 10% of the `maxmemory` configuration).
+
+The default setting is 0, meaning client eviction is turned off by default.
+However, for any large production deployment, it is highly recommended to configure some non-zero `maxmemory-clients` value.
+A value of `5%`, for example, can be a good place to start.
+
+It is possible to flag a specific client connection to be excluded from the client eviction mechanism.
+This is useful for control path connections.
+If, for example, you have an application that monitors the server via the `INFO` command and alerts you in case of a problem, you might want to make sure this connection isn't evicted.
+You can do so using the following command (from the relevant client's connection):
+
+`CLIENT NO-EVICT` `on`
+
+And you can revert that with:
+
+`CLIENT NO-EVICT` `off`
+
+For more information and an example refer to the `maxmemory-clients` section in the default `redis.conf` file.
+
+Client eviction is available since Redis 7.0.
+
 Client timeouts
 ---
 
From 3fdb6df44ecd5c4d99ea52a0133177f5ebc24805 Mon Sep 17 00:00:00 2001
From: Itamar Haber
Date: Mon, 4 Oct 2021 11:55:10 +0300
Subject: [PATCH 051/813] Fixes formatting and sorting (#1658)

---
 topics/acl.md | 68 +++++++++++++++++++++++++-------------------------
 1 file changed, 34 insertions(+), 34 deletions(-)

diff --git a/topics/acl.md b/topics/acl.md
index 1a62b512b0..9fed4a84d1 100644
--- a/topics/acl.md
+++ b/topics/acl.md
@@ -270,38 +270,38 @@ to.
 
 The following is a list of command categories and their meanings:
 
-* keyspace - Writing or reading from keys, databases, or their metadata
+* **admin** - Administrative commands. Normal applications will never need to use
+  these. Includes `REPLICAOF`, `CONFIG`, `DEBUG`, `SAVE`, `MONITOR`, `ACL`, `SHUTDOWN`, etc.
+* **bitmap** - Data type: bitmaps related. +* **blocking** - Potentially blocking the connection until released by another + command. +* **connection** - Commands affecting the connection or other connections. + This includes `AUTH`, `SELECT`, `COMMAND`, `CLIENT`, `ECHO`, `PING`, etc. +* **dangerous** - Potentially dangerous commands (each should be considered with care for + various reasons). This includes `FLUSHALL`, `MIGRATE`, `RESTORE`, `SORT`, `KEYS`, + `CLIENT`, `DEBUG`, `INFO`, `CONFIG`, `SAVE`, `REPLICAOF`, etc. +* **geo** - Data type: geospatial indexes related. +* **hash** - Data type: hashes related. +* **hyperloglog** - Data type: hyperloglog related. +* **fast** - Fast O(1) commands. May loop on the number of arguments, but not the + number of elements in the key. +* **keyspace** - Writing or reading from keys, databases, or their metadata in a type agnostic way. Includes `DEL`, `RESTORE`, `DUMP`, `RENAME`, `EXISTS`, `DBSIZE`, `KEYS`, `EXPIRE`, `TTL`, `FLUSHALL`, etc. Commands that may modify the keyspace, key or metadata will also have `write` category. Commands that only read the keyspace, key or metadata will have the `read` category. -* read - Reading from keys (values or metadata). Note that commands that don't +* **list** - Data type: lists related. +* **pubsub** - PubSub-related commands. +* **read** - Reading from keys (values or metadata). Note that commands that don't interact with keys, will not have either `read` or `write`. -* write - Writing to keys (values or metadata). -* admin - Administrative commands. Normal applications will never need to use - these. Includes `REPLICAOF`, `CONFIG`, `DEBUG`, `SAVE`, `MONITOR`, `ACL`, `SHUTDOWN`, etc. -* dangerous - Potentially dangerous commands (each should be considered with care for - various reasons). This includes `FLUSHALL`, `MIGRATE`, `RESTORE`, `SORT`, `KEYS`, - `CLIENT`, `DEBUG`, `INFO`, `CONFIG`, `SAVE`, `REPLICAOF`, etc. 
-* connection - Commands affecting the connection or other connections. - This includes `AUTH`, `SELECT`, `COMMAND`, `CLIENT`, `ECHO`, `PING`, etc. -* blocking - Potentially blocking the connection until released by another - command. -* fast - Fast O(1) commands. May loop on the number of arguments, but not the - number of elements in the key. -* slow - All commands that are not `fast`. -* pubsub - PubSub-related commands. -* transaction - `WATCH` / `MULTI` / `EXEC` related commands. -* scripting - Scripting related. -* set - Data type: sets related. -* sortedset - Data type: sorted sets related. -* list - Data type: lists related. -* hash - Data type: hashes related. -* string - Data type: strings related. -* bitmap - Data type: bitmaps related. -* hyperloglog - Data type: hyperloglog related. -* geo - Data type: geospatial indexes related. -* stream - Data type: streams related. +* **scripting** - Scripting related. +* **set** - Data type: sets related. +* **sortedset** - Data type: sorted sets related. +* **slow** - All commands that are not `fast`. +* **stream** - Data type: streams related. +* **string** - Data type: strings related. +* **transaction** - `WATCH` / `MULTI` / `EXEC` related commands. +* **write** - Writing to keys (values or metadata). Redis can also show you a list of all categories, and the exact commands each category includes using the redis `ACL` command's `CAT` subcommand that can be used in two forms: @@ -421,9 +421,9 @@ However ACL *passwords* are not really passwords: they are shared secrets between the server and the client, because in that case the password is not an authentication token used by a human being. For instance: - * There are no length limits, the password will just be memorized in some client software, there is no human that need to recall a password in this context. - * The ACL password does not protect any other thing: it will never be, for instance, the password for some email account. 
- * Often when you are able to access the hashed password itself, by having full access to the Redis commands of a given server, or corrupting the system itself, you have already access to what such password is protecting: the Redis instance stability and the data it contains. +* There are no length limits, the password will just be memorized in some client software, there is no human that need to recall a password in this context. +* The ACL password does not protect any other thing: it will never be, for instance, the password for some email account. +* Often when you are able to access the hashed password itself, by having full access to the Redis commands of a given server, or corrupting the system itself, you have already access to what such password is protecting: the Redis instance stability and the data it contains. For this reason to slowdown the password authentication in order to use an algorithm that uses time and space, in order to make password cracking hard, @@ -445,8 +445,8 @@ you should use in order to generate Redis passwords. There are two ways in order to store users inside the Redis configuration. - 1. Users can be specified directly inside the `redis.conf` file. - 2. It is possible to specify an external ACL file. +1. Users can be specified directly inside the `redis.conf` file. +2. It is possible to specify an external ACL file. The two methods are *mutually incompatible*, Redis will ask you to use one or the other. To specify users inside `redis.conf` is a very simple way @@ -474,8 +474,8 @@ inside the file by rewriting it. The external ACL file however is more powerful. You can do the following: - * Use `ACL LOAD` if you modified the ACL file manually and you want Redis to reload the new configuration. Note that this command is able to load the file *only if all the users are correctly specified*, otherwise an error is reported to the user, and the old configuration will remain valid. 
- * USE `ACL SAVE` in order to save the current ACL configuration to the ACL file.
+* Use `ACL LOAD` if you modified the ACL file manually and you want Redis to reload the new configuration. Note that this command is able to load the file *only if all the users are correctly specified*, otherwise an error is reported to the user, and the old configuration will remain valid.
+* USE `ACL SAVE` in order to save the current ACL configuration to the ACL file.
 
 Note that `CONFIG REWRITE` does not also trigger `ACL SAVE`: when you use
 an ACL file the configuration and the ACLs are handled separately.
From f3e954f6b05bc3ec9de4cc52845ce8a9ff8fd5d2 Mon Sep 17 00:00:00 2001
From: "Yehoshua (Josh) Hershberg"
Date: Tue, 5 Oct 2021 18:18:12 +0300
Subject: [PATCH 052/813] Clarify diff between FAIL msg <-> FAIL condition in hearbeat (#1661)

---
 topics/cluster-spec.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md
index 7ee3addce9..08a837d3f7 100644
--- a/topics/cluster-spec.md
+++ b/topics/cluster-spec.md
@@ -740,7 +740,7 @@ A `PFAIL` condition is escalated to a `FAIL` condition when the following set of
 If all the above conditions are true, Node A will:
 
 * Mark the node as `FAIL`.
-* Send a `FAIL` message to all the reachable nodes.
+* Send a `FAIL` message (as opposed to a `FAIL` condition within a heartbeat message) to all the reachable nodes.
 
 The `FAIL` message will force every receiving node to mark the node in `FAIL` state, whether or not it already flagged the node in `PFAIL` state.
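The escalation rule this patch clarifies can be sketched roughly as follows. This is a Python sketch under stated assumptions (the function name, signature, and data shapes are mine, not Redis source): a node promotes a `PFAIL` condition to `FAIL` only when a majority of masters have reported the node as failing within the report validity window (`NODE_TIMEOUT * FAIL_REPORT_VALIDITY_MULT` in the spec), and only then does it broadcast the explicit `FAIL` message.

```python
import time

FAIL_REPORT_VALIDITY_MULT = 2  # multiplier used by the cluster spec

def should_escalate_to_fail(reports, total_masters, node_timeout, now=None):
    """Decide whether a PFAIL condition may be escalated to FAIL.

    `reports` maps a reporting master's id to the timestamp at which it
    reported the target node as PFAIL (or FAIL), as gathered via gossip.
    The local node's own PFAIL flag is expected to be among the reports.
    Reports older than the validity window are discarded.
    """
    now = time.time() if now is None else now
    window = node_timeout * FAIL_REPORT_VALIDITY_MULT
    fresh = [t for t in reports.values() if now - t <= window]
    majority = total_masters // 2 + 1
    return len(fresh) >= majority

# 5 masters: three fresh reports reach the majority threshold of 3.
reports = {"m1": 100.0, "m2": 101.0, "m3": 102.0}
print(should_escalate_to_fail(reports, 5, node_timeout=15, now=110.0))  # True
```

Only after this check succeeds would the node mark the target as `FAIL` and send the dedicated `FAIL` message; a `FAIL` condition seen inside an ordinary heartbeat is just gossiped state.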
From e4d1042c86bb753375c4cc8b9726bffb340a5311 Mon Sep 17 00:00:00 2001 From: hwware Date: Fri, 15 Oct 2021 13:34:24 -0400 Subject: [PATCH 053/813] update doc for cluster-port parameter --- topics/cluster-spec.md | 23 ++++++++++++++++------- topics/cluster-tutorial.md | 9 ++++----- 2 files changed, 20 insertions(+), 12 deletions(-) diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md index 08a837d3f7..934633b66a 100644 --- a/topics/cluster-spec.md +++ b/topics/cluster-spec.md @@ -291,12 +291,20 @@ The Cluster bus --- Every Redis Cluster node has an additional TCP port for receiving -incoming connections from other Redis Cluster nodes. This port is at a fixed -offset from the normal TCP port used to receive incoming connections -from clients. To obtain the Redis Cluster port, 10000 should be added to -the normal commands port. For example, if a Redis node is listening for -client connections on port 6379, the Cluster bus port 16379 will also be -opened. +incoming connections from other Redis Cluster nodes. This port could be specified in redis.conf file, +or it could be obtained by adding 10000 to the data port. + +Example 1: + +If a Redis node is listening for client connections on port 6379, +and you do not add cluster-port parameter in redis.conf, +the Cluster bus port 16379 will be opened. + +Example 2: + +If a Redis node is listening for client connections on port 6379, +and you set cluster-port 20000 in redis.conf, +the Cluster bus port 20000 will be opened. Node-to-node communication happens exclusively using the Cluster bus and the Cluster bus protocol: a binary protocol composed of frames @@ -698,7 +706,8 @@ The common header has the following information: * The `currentEpoch` and `configEpoch` fields of the sending node that are used to mount the distributed algorithms used by Redis Cluster (this is explained in detail in the next sections). If the node is a replica the `configEpoch` is the last known `configEpoch` of its master. 
 * The node flags, indicating if the node is a replica, a master, and other single-bit node information.
 * A bitmap of the hash slots served by the sending node, or if the node is a replica, a bitmap of the slots served by its master.
-* The sender TCP base port (that is, the port used by Redis to accept client commands; add 10000 to this to obtain the cluster bus port).
+* The sender TCP base port, that is, the port used by Redis to accept client commands.
+* The cluster port, that is, the port used by Redis for node-to-node communication.
 * The state of the cluster from the point of view of the sender (down or ok).
 * The master node ID of the sending node, if it is a replica.
 
diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md
index 1c47e40cb1..5827f80912 100644
--- a/topics/cluster-tutorial.md
+++ b/topics/cluster-tutorial.md
@@ -40,8 +40,9 @@ Redis Cluster TCP ports
 ---
 
 Every Redis Cluster node requires two TCP connections open. The normal Redis
-TCP port used to serve clients, for example 6379, plus the port obtained by
-adding 10000 to the data port, so 16379 in the example.
+TCP port used to serve clients, for example 6379, plus a second port named
+the cluster bus port. The cluster bus port can be specified in the redis.conf file, or it can
+be obtained by adding 10000 to the data port, so 16379 in the example.
 
 This second *high* port is used for the Cluster bus, that is a node-to-node
 communication channel using a binary protocol. The Cluster bus is used by
@@ -51,12 +52,10 @@ port, but always with the normal Redis command port, however make sure
 you open both ports in your firewall, otherwise Redis cluster nodes
 will be not able to communicate.
 
-The command port and cluster bus port offset is fixed and is always 10000.
-
 Note that for a Redis Cluster to work properly you need, for each node:
 
 1. 
The normal client communication port (usually 6379) used to communicate with clients to be open to all the clients that need to reach the cluster, plus all the other cluster nodes (that use the client port for keys migrations). -2. The cluster bus port (the client port + 10000) must be reachable from all the other cluster nodes. +2. The cluster bus port must be reachable from all the other cluster nodes. If you don't open both TCP ports, your cluster will not work as expected. From 3c0fc8422386bbb5dc1fdb74b98e34f37f5dbef6 Mon Sep 17 00:00:00 2001 From: Oran Agra Date: Sun, 17 Oct 2021 15:38:13 +0300 Subject: [PATCH 054/813] fix typo (#1665) --- topics/modules-api-ref.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/modules-api-ref.md b/topics/modules-api-ref.md index 552a347992..b1bd7f1448 100644 --- a/topics/modules-api-ref.md +++ b/topics/modules-api-ref.md @@ -1348,7 +1348,7 @@ the string, and later call StringDMA() again to get the pointer. int RedisModule_StringTruncate(RedisModuleKey *key, size_t newlen); -If the string is open for writing and is of string type, resize it, padding +If the key is open for writing and is of string type, resize it, padding with zero bytes if the new length is greater than the old one. After this call, [`RedisModule_StringDMA()`](#RedisModule_StringDMA) must be called again to continue From c4afba700116cbc27fe6ca4c1f3b2ac73f127aba Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=C5=81ukasz=20=C5=81api=C5=84ski?= Date: Sun, 17 Oct 2021 22:10:03 +0200 Subject: [PATCH 055/813] Typo: replcia --> replica (#1666) --- topics/rediscli.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/rediscli.md b/topics/rediscli.md index 8b0afcf39a..24b75282f0 100644 --- a/topics/rediscli.md +++ b/topics/rediscli.md @@ -701,7 +701,7 @@ If it is non zero, an error occurred like in the following example: $ echo $? 
1 -## Replcia mode +## Replica mode The replica mode of the CLI is an advanced feature useful for Redis developers and for debugging operations. From 6b96d367f63612d9d7fecd158134557f837d9605 Mon Sep 17 00:00:00 2001 From: Thomas Depierre Date: Mon, 18 Oct 2021 14:27:12 +0200 Subject: [PATCH 056/813] Add a third argument to eval.md to avoid confusion (#1668) --- commands/eval.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/commands/eval.md b/commands/eval.md index 8f12b61d6c..687fe89eba 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -19,11 +19,12 @@ keys (so `ARGV[1]`, `ARGV[2]`, ...). The following example should clarify what stated above: ``` -> eval "return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}" 2 key1 key2 first second +> eval "return {KEYS[1],KEYS[2],ARGV[1],ARGV[2], ARGV[3]}" 2 key1 key2 first second third 1) "key1" 2) "key2" 3) "first" 4) "second" +5) "third" ``` Note: as you can see Lua arrays are returned as Redis multi bulk replies, that From 11daf0dc663c5ab4fc22510f2747d482d6f1b977 Mon Sep 17 00:00:00 2001 From: Wen Hui Date: Mon, 18 Oct 2021 11:25:27 -0400 Subject: [PATCH 057/813] Update topics/cluster-spec.md Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com> --- topics/cluster-spec.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md index 934633b66a..d15916214a 100644 --- a/topics/cluster-spec.md +++ b/topics/cluster-spec.md @@ -291,8 +291,7 @@ The Cluster bus --- Every Redis Cluster node has an additional TCP port for receiving -incoming connections from other Redis Cluster nodes. This port could be specified in redis.conf file, -or it could be obtained by adding 10000 to the data port. +incoming connections from other Redis Cluster nodes. This port will be derived by adding 10000 to the data port or it can be specified with the cluster-port config. 
Example 1:

From 64d8c798d9dccfc3cfab08a224bbc516143ce9d2 Mon Sep 17 00:00:00 2001
From: Wen Hui
Date: Mon, 18 Oct 2021 11:26:18 -0400
Subject: [PATCH 058/813] Update topics/cluster-tutorial.md

Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
---
 topics/cluster-tutorial.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md
index 5827f80912..ec20230c87 100644
--- a/topics/cluster-tutorial.md
+++ b/topics/cluster-tutorial.md
@@ -41,8 +41,7 @@ Redis Cluster TCP ports

 Every Redis Cluster node requires two TCP connections open. The normal Redis
 TCP port used to serve clients, for example 6379, plus the second port named
-cluster bus port. The cluster bus port could be specified in redis.conf file, or it could
-be obtained by adding 10000 to the data port, so 16379 in the example.
+cluster bus port. The cluster bus port will be derived by adding 10000 to the data port, 16379 in this example, or by overriding it with the cluster-port config.

 This second *high* port is used for the Cluster bus, that is a node-to-node
 communication channel using a binary protocol. The Cluster bus is used by

From e71546d27370e5a5762eac9139beb30c56a03391 Mon Sep 17 00:00:00 2001
From: Wen Hui
Date: Mon, 18 Oct 2021 11:26:36 -0400
Subject: [PATCH 059/813] Update topics/cluster-spec.md

Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
---
 topics/cluster-spec.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md
index d15916214a..a5b7d94b30 100644
--- a/topics/cluster-spec.md
+++ b/topics/cluster-spec.md
@@ -706,7 +706,7 @@ The common header has the following information:

 * The node flags, indicating if the node is a replica, a master, and other single-bit node information.
* A bitmap of the hash slots served by the sending node, or if the node is a replica, a bitmap of the slots served by its master. * The sender TCP base port that is the port used by Redis to accept client commands. -* The cluster port that is the port used by Redis to node-to-node communication. +* The cluster port that is the port used by Redis for node-to-node communication. * The state of the cluster from the point of view of the sender (down or ok). * The master node ID of the sending node, if it is a replica. From e22aa405c42e6139805303f5dd3535422383b591 Mon Sep 17 00:00:00 2001 From: SiLeader Date: Tue, 19 Oct 2021 21:40:23 +0900 Subject: [PATCH 060/813] Add dedis to Dart client (#1663) --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index fe2e41a99f..634abfaafb 100644 --- a/clients.json +++ b/clients.json @@ -1954,6 +1954,15 @@ "repository": "https://github.com/SomajitDey/redis-client", "description": "extensible client library for Bash scripting or command-line + connection pooling + redis-cli", "active": true + }, + + { + "name": "dedis", + "language": "Dart", + "url": "https://pub.dev/packages/dedis", + "repository": "https://github.com/SiLeader/dedis", + "description": "Simple Redis Client for Dart", + "authors": ["cerussite127"] } ] From be3b1bcc0d1e8b3e15981504ea8649707e3708b8 Mon Sep 17 00:00:00 2001 From: guybe7 Date: Mon, 25 Oct 2021 13:50:16 +0200 Subject: [PATCH 061/813] Adjust the ACL topic for new subcommands scheme (#1670) --- topics/acl.md | 54 +++++++++++++++++++++++++++++---------------------- 1 file changed, 31 insertions(+), 23 deletions(-) diff --git a/topics/acl.md b/topics/acl.md index 9fed4a84d1..648ae98501 100644 --- a/topics/acl.md +++ b/topics/acl.md @@ -95,11 +95,11 @@ Enable and disallow users: Allow and disallow commands: -* `+`: Add the command to the list of commands the user can call. 
-* `-`: Remove the command to the list of commands the user can call. +* `+`: Add the command to the list of commands the user can call. Can be used with `|` for allowing subcommands (e.g "+config|get"). +* `-`: Remove the command to the list of commands the user can call. Can be used with `|` for blocking subcommands (e.g "-config|set"). * `+@`: Add all the commands in such category to be called by the user, with valid categories being like @admin, @set, @sortedset, ... and so forth, see the full list by calling the `ACL CAT` command. The special category @all means all the commands, both the ones currently present in the server, and the ones that will be loaded in the future via modules. * `-@`: Like `+@` but removes the commands from the list of commands the client can call. -* `+|subcommand`: Allow a specific subcommand of an otherwise disabled command. Note that this form is not allowed as negative like `-DEBUG|SEGFAULT`, but only additive starting with "+". This ACL will cause an error if the command is already active as a whole. +* `+|first-arg`: Allow a specific first argument of an otherwise disabled command. Note that this form is not allowed as negative like `-SELECT|1`, but only additive starting with "+". * `allcommands`: Alias for +@all. Note that it implies the ability to execute all the future commands loaded via the modules system. * `nocommands`: Alias for -@all. @@ -350,39 +350,47 @@ Note that commands may be part of multiple categories, so for instance an ACL rule like `+@geo -@read` will result in certain geo commands to be excluded because they are read-only commands. -## Adding subcommands +## Allowing/blocking subcommands -Often the ability to exclude or include a command as a whole is not enough. -Many Redis commands do multiple things based on the subcommand passed as -argument. For example the `CLIENT` command can be used in order to do -dangerous and non dangerous operations. 
Many deployments may not be happy to
-provide the ability to execute `CLIENT KILL` to non admin-level users, but may
-still want them to be able to run `CLIENT SETNAME`.
+Starting from Redis 7.0 subcommands can be allowed/blocked just like other
+commands (by using the separator `|` between the command and subcommand, for
+example: `+config|get` or `-config|set`)

-_Note: the new RESP3 `HELLO` handshake command provides a `SETNAME` option, but this is still a good example for subcommand control._
+That is true for all commands except DEBUG. In order to allow/block specific
+DEBUG subcommands see next section.
+
+## Allowing the first-arg of an otherwise blocked command
+
+Often the ability to exclude or include a command or a subcommand as a whole is not enough.
+Many deployments may not be happy providing the ability to execute a `SELECT` for any DB, but may
+still want to be able to run `SELECT 0`.

 In such case I could alter the ACL of a user in the following way:

-    ACL SETUSER myuser -client +client|setname +client|getname
+    ACL SETUSER myuser -select +select|0

-I started removing the `CLIENT` command, and later added the two allowed
-subcommands. Note that **it is not possible to do the reverse**, the subcommands
+I started by removing the `SELECT` command, and later added the allowed
+first-arg. Note that **it is not possible to do the reverse**, first-args
 can be only added, and not excluded, because it is possible that in the future
-new subcommands may be added: it is a lot safer to specify all the subcommands
-that are valid for some user. Moreover, if you add a subcommand about a command
-that is not already disabled, an error is generated, because this can only
-be a bug in the ACL rules:
+new first-args may be added: it is a lot safer to specify all the first-args
+that are valid for some user.
+
+Another example:

-    > ACL SETUSER default +debug|segfault
-    (error) ERR Error in ACL SETUSER modifier '+debug|segfault': Adding a
-    subcommand of a command already fully added is not allowed. Remove the
-    command to start. Example: -DEBUG +DEBUG|DIGEST
+    ACL SETUSER myuser -debug +debug|digest

-Note that subcommand matching may add some performance penalty, however such
+Note that first-arg matching may add some performance penalty, however such
 penalty is very hard to measure even with synthetic benchmarks, and the
 additional CPU cost is only payed when such command is called, and not
 when other commands are called.

+It is possible to use this mechanism in order to allow subcommands in Redis
+versions prior to 7.0 (see above section).
+Starting from Redis 7.0 it is possible to allow first-args of subcommands.
+Example:
+
+    ACL SETUSER myuser -config +config|get +config|set|loglevel
+
 ## +@all VS -@all

 In the previous section it was observed how it is possible to define commands

From cf6015c3ce5cce293bf9a1b50c7898619c4b3e28 Mon Sep 17 00:00:00 2001
From: Wang Yuan
Date: Wed, 27 Oct 2021 15:06:25 +0800
Subject: [PATCH 062/813] Some memory fields in INFO command (#1672)

Co-authored-by: Oran Agra
---
 commands/info.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/commands/info.md b/commands/info.md
index aae7abf035..5d5567f054 100644
--- a/commands/info.md
+++ b/commands/info.md
@@ -136,6 +136,12 @@ Here is the meaning of all fields in the **memory** section:
 * `allocator_allocated`: Total bytes allocated form the allocator, including internal-fragmentation. Normally the same as `used_memory`.
 * `allocator_active`: Total bytes in the allocator active pages, this includes external-fragmentation.
 * `allocator_resident`: Total bytes resident (RSS) in the allocator, this includes pages that can be released to the OS (by `MEMORY PURGE`, or just waiting).
This is basically transient replica and AOF buffers. +* `mem_clients_normal`: Memory used by normal clients +* `mem_clients_slaves`: Memory used by replica clients - Starting Redis 7.0, replica buffers share memory with the replication backlog, so this field can show 0 when replicas don't trigger an increase of memory usage. +* `mem_aof_buffer`: Transient memory used for AOF and AOF rewrite buffers +* `mem_replication_backlog`: Memory used by replication backlog +* `mem_total_replication_buffers`: Total memory consumed for replication buffers - Added in Redis 7.0. * `mem_allocator`: Memory allocator, chosen at compile time. * `active_defrag_running`: When `activedefrag` is enabled, this indicates whether defragmentation is currently active, and the CPU percentage it intends to utilize. * `lazyfree_pending_objects`: The number of objects waiting to be freed (as a From 79ddd8bbe1f904bb772bed8990875935c051bf0b Mon Sep 17 00:00:00 2001 From: Rakshit Midha Date: Mon, 1 Nov 2021 07:55:59 +0100 Subject: [PATCH 063/813] Update clients.md (#1673) Typo maxmemory-clients --- topics/clients.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/clients.md b/topics/clients.md index 914919bae3..fc40355f20 100644 --- a/topics/clients.md +++ b/topics/clients.md @@ -123,9 +123,9 @@ These cases can be mitigated to an extent using [output buffer limits](#output-b This mechanism is called **client eviction**, and it's essentially a safety mechanism that will disconnect clients once the aggregate memory usage of all clients is above a threshold. The mechanism first attempts to disconnect clients that use the most memory. -It disconnects the minimal number of clients needed to return below the `maxmemroy-clients` threshold. +It disconnects the minimal number of clients needed to return below the `maxmemory-clients` threshold. -`maxmemroy-clients` defines the maximum aggregate memory usage of all clients connected to Redis. 
+`maxmemory-clients` defines the maximum aggregate memory usage of all clients connected to Redis. The aggregation takes into account all the memory used by the client connections: the [query buffer](#query-buffer-hard-limit), the output buffer, and other intermediate buffers. Note that replica and master connections aren't affected by the client eviction mechanism. Therefore, such connections are never evicted. From 7620c13bdb9e4e63db34a6c5861c6fc18b63534e Mon Sep 17 00:00:00 2001 From: enjoy-binbin Date: Wed, 3 Nov 2021 16:17:05 +0800 Subject: [PATCH 064/813] Minor fix in rediscli --- topics/rediscli.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/topics/rediscli.md b/topics/rediscli.md index 24b75282f0..0267ca58ca 100644 --- a/topics/rediscli.md +++ b/topics/rediscli.md @@ -261,7 +261,7 @@ The string `127.0.0.1:6379>` is the prompt. It reminds you that you are connected to a given Redis instance. The prompt changes as the server you are connected to changes, or when you -are operating on a database different than the database number zero: +are operating on a database different from the database number zero: 127.0.0.1:6379> select 2 OK @@ -354,7 +354,7 @@ There are two ways to customize the CLI's behavior. The file `.redisclirc` in your home directory is loaded by the CLI on startup. You can override the file's default location by setting the `REDISCLI_RCFILE` environment variable to an alternative path. Preferences can also be set during a CLI session, in which -case they will last only the the duration of the session. +case they will last only the duration of the session. To set preferences, use the special `:set` command. 
The following preferences can be set, either by typing the command in the CLI or adding it to the @@ -613,7 +613,7 @@ a very fast instance tends to be overestimated a bit because of the latency due to the kernel scheduler of the system running `redis-cli` itself, so the average latency of 0.19 above may easily be 0.01 or less. However this is usually not a big problem, since we are interested in -events of a few millisecond or more. +events of a few milliseconds or more. Sometimes it is useful to study how the maximum and average latencies evolve during time. The `--latency-history` option is used for that @@ -739,7 +739,7 @@ This means that 20% of keys will be requested 80% of times, which is a common distribution in caching scenarios. Theoretically, given the distribution of the requests and the Redis memory -overhead, it should be possible to compute the hit rate analytically with +overhead, it should be possible to compute the hit rate analytically with a mathematical formula. However, Redis can be configured with different LRU settings (number of samples) and LRU's implementation, which is approximated in Redis, changes a lot between different versions. Similarly @@ -784,7 +784,7 @@ the actual figure we can expect in the long time: 127000 Gets/sec | Hits: 50870 (40.06%) | Misses: 76130 (59.94%) 124250 Gets/sec | Hits: 50147 (40.36%) | Misses: 74103 (59.64%) -A miss rage of 59% may not be acceptable for our use case. So we know that +A miss rate of 59% may not be acceptable for our use case. So we know that 100MB of memory is not enough. Let's try with half gigabyte. 
After a few minutes we'll see the output stabilize to the following figures: From 1f2890be7972ba0de7f981611b96eb72121252b1 Mon Sep 17 00:00:00 2001 From: "Kevin Bloch (@codingthat)" Date: Fri, 5 Nov 2021 12:43:08 +0100 Subject: [PATCH 065/813] Typo (#1676) --- commands/geohash.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/geohash.md b/commands/geohash.md index 2517c3f28d..a99ade2f43 100644 --- a/commands/geohash.md +++ b/commands/geohash.md @@ -10,7 +10,7 @@ described in the [Wikipedia article](https://en.wikipedia.org/wiki/Geohash) and Geohash string properties --- -The command returns 11 characters Geohash strings, so no precision is loss +The command returns 11 characters Geohash strings, so no precision is lost compared to the Redis internal 52 bit representation. The returned Geohashes have the following properties: From 57064f08f95cb7875371211edd8089f509b140fe Mon Sep 17 00:00:00 2001 From: Binbin Date: Fri, 5 Nov 2021 21:32:41 +0800 Subject: [PATCH 066/813] Add new ZMPOP and BZMPOP commands. 
(#1654) --- commands.json | 66 ++++++++++++++++++++++++++++++++++++++++++++++ commands/bzmpop.md | 16 +++++++++++ commands/zmpop.md | 36 +++++++++++++++++++++++++ 3 files changed, 118 insertions(+) create mode 100644 commands/bzmpop.md create mode 100644 commands/zmpop.md diff --git a/commands.json b/commands.json index 45cb3d0a78..ccda7bcc7d 100644 --- a/commands.json +++ b/commands.json @@ -529,6 +529,41 @@ "since": "5.0.0", "group": "sorted_set" }, + "BZMPOP": { + "summary": "Remove and return members with scores in a sorted set or block until one is available", + "complexity": "O(K) + O(N*log(M)) where K is the number of provided keys, N being the number of elements in the sorted set, and M being the number of elements popped.", + "arguments": [ + { + "name": "timeout", + "type": "double" + }, + { + "name": "numkeys", + "type": "integer" + }, + { + "name": "key", + "type": "key", + "multiple": true + }, + { + "name": "where", + "type": "enum", + "enum": [ + "MIN", + "MAX" + ] + }, + { + "command": "COUNT", + "name": "count", + "type": "integer", + "optional": true + } + ], + "since": "7.0.0", + "group": "sorted_set" + }, "CLIENT CACHING": { "summary": "Instruct the server about tracking or not keys in the next request", "complexity": "O(1)", @@ -4597,6 +4632,37 @@ "since": "5.0.0", "group": "sorted_set" }, + "ZMPOP": { + "summary": "Remove and return members with scores in a sorted set", + "complexity": "O(K) + O(N*log(M)) where K is the number of provided keys, N being the number of elements in the sorted set, and M being the number of elements popped.", + "arguments": [ + { + "name": "numkeys", + "type": "integer" + }, + { + "name": "key", + "type": "key", + "multiple": true + }, + { + "name": "where", + "type": "enum", + "enum": [ + "MIN", + "MAX" + ] + }, + { + "command": "COUNT", + "name": "count", + "type": "integer", + "optional": true + } + ], + "since": "7.0.0", + "group": "sorted_set" + }, "ZRANDMEMBER": { "summary": "Get one or multiple random 
elements from a sorted set", "complexity": "O(N) where N is the number of elements returned", diff --git a/commands/bzmpop.md b/commands/bzmpop.md new file mode 100644 index 0000000000..dc0c077d96 --- /dev/null +++ b/commands/bzmpop.md @@ -0,0 +1,16 @@ +`BZMPOP` is the blocking variant of `ZMPOP`. + +When any of the sorted sets contains elements, this command behaves exactly like `ZMPOP`. +When used inside a `MULTI`/`EXEC` block, this command behaves exactly like `ZMPOP`. +When all sorted sets are empty, Redis will block the connection until another client adds members to one of the keys or until the `timeout` (a double value specifying the maximum number of seconds to block) elapses. +A `timeout` of zero can be used to block indefinitely. + +See `ZMPOP` for more information. + +@return + +@array-reply: specifically: + +* A `nil` when no element could be popped. +* A two-element array with the first element being the name of the key from which elements were popped, and the second element is an array of the popped elements. Every entry in the elements array is also an array that contains the member and its score. + diff --git a/commands/zmpop.md b/commands/zmpop.md new file mode 100644 index 0000000000..16848a0e08 --- /dev/null +++ b/commands/zmpop.md @@ -0,0 +1,36 @@ +Pops one or more elements, that are member-score pairs, from the first non-empty sorted set in the provided list of key names. + +`ZMPOP` and `BZMPOP` are similar to the following, more limited, commands: + +- `ZPOPMIN` or `ZPOPMAX` which take only one key, and can return multiple elements. +- `BZPOPMIN` or `BZPOPMAX` which take multiple keys, but return only one element from just one key. + +See `BZMPOP` for the blocking variant of this command. + +When the `MIN` modifier is used, the elements popped are those with the lowest scores from the first non-empty sorted set. The `MAX` modifier causes elements with the highest scores to be popped. 
+The optional `COUNT` can be used to specify the number of elements to pop, and is set to 1 by default. + +The number of popped elements is the minimum from the sorted set's cardinality and `COUNT`'s value. + +@return + +@array-reply: specifically: + +* A `nil` when no element could be popped. +* A two-element array with the first element being the name of the key from which elements were popped, and the second element is an array of the popped elements. Every entry in the elements array is also an array that contains the member and its score. + +@examples + +```cli +ZMPOP 1 notsuchkey MIN +ZADD myzset 1 "one" 2 "two" 3 "three" +ZMPOP 1 myzset MIN +ZRANGE myzset 0 -1 WITHSCORES +ZMPOP 1 myzset MAX COUNT 10 +ZADD myzset2 4 "four" 5 "five" 6 "six" +ZMPOP 2 myzset myzset2 MIN COUNT 10 +ZRANGE myzset 0 -1 WITHSCORES +ZMPOP 2 myzset myzset2 MAX COUNT 10 +ZRANGE myzset2 0 -1 WITHSCORES +EXISTS myzset myzset2 +``` From 358ad954c49dfbf72701e3a3c919c21de4679988 Mon Sep 17 00:00:00 2001 From: Binbin Date: Fri, 5 Nov 2021 22:10:39 +0800 Subject: [PATCH 067/813] Remove optional flag in LMPOP/BLMPOP and remove BLMPOP examples (#1677) --- commands.json | 2 -- commands/blmpop.md | 15 --------------- commands/lmpop.md | 1 + 3 files changed, 1 insertion(+), 17 deletions(-) diff --git a/commands.json b/commands.json index ccda7bcc7d..9f8fbe8a52 100644 --- a/commands.json +++ b/commands.json @@ -438,7 +438,6 @@ { "name": "key", "type": "key", - "optional": true, "multiple": true }, { @@ -474,7 +473,6 @@ { "name": "key", "type": "key", - "optional": true, "multiple": true }, { diff --git a/commands/blmpop.md b/commands/blmpop.md index fd31eb8ae4..262713ef31 100644 --- a/commands/blmpop.md +++ b/commands/blmpop.md @@ -13,18 +13,3 @@ See `LMPOP` for more information. * A `nil` when no element could be popped, and timeout is reached. * A two-element array with the first element being the name of the key from which elements were popped, and the second element is an array of elements. 
- -@examples - -```cli -DEL mylist mylist2 -LPUSH mylist "one" "two" "three" "four" "five" -BLMPOP 1 1 mylist LEFT COUNT 2 -LRANGE mylist 0 -1 -LPUSH mylist2 "a" "b" "c" "d" "e" -BLMPOP 1 2 mylist mylist2 LEFT COUNT 3 -LRANGE mylist 0 -1 -BLMPOP 1 2 mylist mylist2 RIGHT COUNT 10 -LRANGE mylist2 0 -1 -EXISTS mylist mylist2 -``` diff --git a/commands/lmpop.md b/commands/lmpop.md index 6caa35f3de..b716ee9435 100644 --- a/commands/lmpop.md +++ b/commands/lmpop.md @@ -1,6 +1,7 @@ Pops one or more elements from the first non-empty list key from the list of provided key names. LMPOP and BLMPOP are similar to the following, more limited, commands: + - `LPOP` or `RPOP` which take only one key, and can return multiple elements. - `BLPOP` or `BRPOP` which take multiple keys, but return only one element from just one key. From fec135bb54e726c55df243da23ffd9f093bcc4e1 Mon Sep 17 00:00:00 2001 From: Guy Korland Date: Fri, 5 Nov 2021 16:11:56 +0200 Subject: [PATCH 068/813] Replace REDISMODULE_POSTPONED_ARRAY_LEN with REDISMODULE_POSTPONED_LEN (#1671) --- topics/modules-api-ref.md | 8 ++++---- topics/modules-intro.md | 8 ++++---- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/topics/modules-api-ref.md b/topics/modules-api-ref.md index b1bd7f1448..31c43d5b95 100644 --- a/topics/modules-api-ref.md +++ b/topics/modules-api-ref.md @@ -682,7 +682,7 @@ of the array. When producing arrays with a number of element that is not known beforehand the function can be called with the special count -`REDISMODULE_POSTPONED_ARRAY_LEN`, and the actual number of elements can be +`REDISMODULE_POSTPONED_LEN`, and the actual number of elements can be later set with [`RedisModule_ReplySetArrayLength()`](#RedisModule_ReplySetArrayLength) (which will set the latest "open" count if there are multiple ones). @@ -716,7 +716,7 @@ The function always returns `REDISMODULE_OK`. 
void RedisModule_ReplySetArrayLength(RedisModuleCtx *ctx, long len); When [`RedisModule_ReplyWithArray()`](#RedisModule_ReplyWithArray) is used with the argument -`REDISMODULE_POSTPONED_ARRAY_LEN`, because we don't know beforehand the number +`REDISMODULE_POSTPONED_LEN`, because we don't know beforehand the number of items we are going to output as elements of the array, this function will take care to set the array length. @@ -727,9 +727,9 @@ that was created in a postponed way. For example in order to output an array like [1,[10,20,30]] we could write: - RedisModule_ReplyWithArray(ctx,REDISMODULE_POSTPONED_ARRAY_LEN); + RedisModule_ReplyWithArray(ctx,REDISMODULE_POSTPONED_LEN); RedisModule_ReplyWithLongLong(ctx,1); - RedisModule_ReplyWithArray(ctx,REDISMODULE_POSTPONED_ARRAY_LEN); + RedisModule_ReplyWithArray(ctx,REDISMODULE_POSTPONED_LEN); RedisModule_ReplyWithLongLong(ctx,10); RedisModule_ReplyWithLongLong(ctx,20); RedisModule_ReplyWithLongLong(ctx,30); diff --git a/topics/modules-intro.md b/topics/modules-intro.md index 927486d2d0..1a486c5788 100644 --- a/topics/modules-intro.md +++ b/topics/modules-intro.md @@ -469,7 +469,7 @@ later produce the command reply, a better solution is to start an array reply where the length is not known, and set it later. This is accomplished with a special argument to `RedisModule_ReplyWithArray()`: - RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_ARRAY_LEN); + RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_LEN); The above call starts an array reply so we can use other `ReplyWith` calls in order to produce the array items. 
Finally in order to set the length, @@ -480,7 +480,7 @@ use the following call: In the case of the FACTOR command, this translates to some code similar to this: - RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_ARRAY_LEN); + RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_LEN); number_of_factors = 0; while(still_factors) { RedisModule_ReplyWithLongLong(ctx, some_factor); @@ -495,9 +495,9 @@ It is possible to have multiple nested arrays with postponed reply. Each call to `SetArray()` will set the length of the latest corresponding call to `ReplyWithArray()`: - RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_ARRAY_LEN); + RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_LEN); ... generate 100 elements ... - RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_ARRAY_LEN); + RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_LEN); ... generate 10 elements ... RedisModule_ReplySetArrayLength(ctx, 10); RedisModule_ReplySetArrayLength(ctx, 100); From d1b50ce64d400dbb1268e0c7480951786b5ee246 Mon Sep 17 00:00:00 2001 From: Chayim Date: Fri, 5 Nov 2021 16:12:39 +0200 Subject: [PATCH 069/813] example for replicaof command (#1667) --- commands/replicaof.md | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/commands/replicaof.md b/commands/replicaof.md index d202cf5edb..1c3ec93dce 100644 --- a/commands/replicaof.md +++ b/commands/replicaof.md @@ -9,3 +9,13 @@ The form `REPLICAOF` NO ONE will stop replication, turning the server into a MAS @return @simple-string-reply + +@examples + +``` +> REPLICAOF NO ONE +"OK" + +> REPLICAOF 127.0.0.1 6799 +"OK" +``` From 7f163533ada148cb664bf320ed517ba0934fec88 Mon Sep 17 00:00:00 2001 From: Binbin Date: Fri, 5 Nov 2021 22:17:52 +0800 Subject: [PATCH 070/813] Adds limit to SINTERCARD/ZINTERCARD. 
(#1647) --- commands.json | 16 ++++++++++++++++ commands/sintercard.md | 9 ++++++++- commands/zintercard.md | 5 +++++ 3 files changed, 29 insertions(+), 1 deletion(-) diff --git a/commands.json b/commands.json index 9f8fbe8a52..fc22d7a424 100644 --- a/commands.json +++ b/commands.json @@ -3804,10 +3804,20 @@ "summary": "Intersect multiple sets and return the cardinality of the result", "complexity": "O(N*M) worst case where N is the cardinality of the smallest set and M is the number of sets.", "arguments": [ + { + "name": "numkeys", + "type": "integer" + }, { "name": "key", "type": "key", "multiple": true + }, + { + "command": "LIMIT", + "name": "limit", + "type": "integer", + "optional": true } ], "since": "7.0.0", @@ -4532,6 +4542,12 @@ "name": "key", "type": "key", "multiple": true + }, + { + "command": "LIMIT", + "name": "limit", + "type": "integer", + "optional": true } ], "since": "7.0.0", diff --git a/commands/sintercard.md b/commands/sintercard.md index 4b06982d6c..24473e50b5 100644 --- a/commands/sintercard.md +++ b/commands/sintercard.md @@ -1,8 +1,13 @@ +This command is similar to `SINTER`, but instead of returning the result set, it returns just the cardinality of the result. Returns the cardinality of the set which would result from the intersection of all the given sets. Keys that do not exist are considered to be empty sets. With one of the keys being an empty set, the resulting set is also empty (since set intersection with an empty set always results in an empty set). +By default, the command calculates the cardinality of the intersection of all given sets. +When provided with the optional `LIMIT` argument (which defaults to 0 and means unlimited), if the intersection cardinality reaches limit partway through the computation, the algorithm will exit and yield limit as the cardinality. +Such implementation ensures a significant speedup for queries where the limit is lower than the actual intersection cardinality. 
+ @return @integer-reply: the number of elements in the resulting intersection. @@ -13,9 +18,11 @@ SADD key1 "a" SADD key1 "b" SADD key1 "c" +SADD key1 "d" SADD key2 "c" SADD key2 "d" SADD key2 "e" SINTER key1 key2 -SINTERCARD key1 key2 +SINTERCARD 2 key1 key2 +SINTERCARD 2 key1 key2 LIMIT 1 ``` diff --git a/commands/zintercard.md b/commands/zintercard.md index 84abe27ffd..613849fc2d 100644 --- a/commands/zintercard.md +++ b/commands/zintercard.md @@ -3,6 +3,10 @@ This command is similar to `ZINTER`, but instead of returning the result set, it Keys that do not exist are considered to be empty sets. With one of the keys being an empty set, the resulting set is also empty (since set intersection with an empty set always results in an empty set). +By default, the command calculates the cardinality of the intersection of all given sets. +When provided with the optional `LIMIT` argument (which defaults to 0 and means unlimited), if the intersection cardinality reaches limit partway through the computation, the algorithm will exit and yield limit as the cardinality. +Such an implementation ensures a significant speedup for queries where the limit is lower than the actual intersection cardinality. + @return @integer-reply: the number of elements in the resulting intersection.
@@ -17,4 +21,5 @@ ZADD zset2 2 "two" ZADD zset2 3 "three" ZINTER 2 zset1 zset2 ZINTERCARD 2 zset1 zset2 +ZINTERCARD 2 zset1 zset2 LIMIT 1 ``` From b2c2858480cf3e17a7ebce00386e61a0280e614d Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Fri, 5 Nov 2021 16:19:21 +0200 Subject: [PATCH 071/813] Adds some missing types to commands.json (#1652) --- commands.json | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/commands.json b/commands.json index fc22d7a424..c0256b3d00 100644 --- a/commands.json +++ b/commands.json @@ -664,7 +664,8 @@ "type": "block", "block": [ { - "command": "ID" + "command": "ID", + "type": "command" }, { "name": "client-id", @@ -1496,7 +1497,8 @@ "optional": true, "block": [ { - "command": "TO" + "command": "TO", + "type": "command" }, { "name": "host", @@ -1508,12 +1510,14 @@ }, { "command": "FORCE", + "type": "command", "optional": true } ] }, { "command": "ABORT", + "type": "command", "optional": true }, { @@ -5378,6 +5382,7 @@ }, { "command": "NOMKSTREAM", + "type": "command", "optional": true }, { @@ -5628,6 +5633,7 @@ }, { "command": "MKSTREAM", + "type": "command", "optional": true } ], @@ -5825,6 +5831,7 @@ }, { "name": "force", + "type": "enum", "enum": [ "FORCE" ], @@ -5832,6 +5839,7 @@ }, { "name": "justid", + "type": "enum", "enum": [ "JUSTID" ], @@ -5873,6 +5881,7 @@ }, { "name": "justid", + "type": "enum", "enum": [ "JUSTID" ], From 90f4ee3c75b3ea9f506f54e7334e1807d157b25d Mon Sep 17 00:00:00 2001 From: Huang Zhw Date: Fri, 5 Nov 2021 22:22:32 +0800 Subject: [PATCH 072/813] add additional argument BYTE|BIT to BITCOUNT and BITPOS (#1644) --- commands.json | 48 +++++++++++++++++++++++++++++++++++--------- commands/bitcount.md | 11 ++++++++++ commands/bitpos.md | 16 +++++++++++++-- 3 files changed, 63 insertions(+), 12 deletions(-) diff --git a/commands.json b/commands.json index c0256b3d00..c3dd40376c 100644 --- a/commands.json +++ b/commands.json @@ -185,13 +185,26 @@ "type": "key" }, { - "name": [ - 
"start", - "end" - ], - "type": [ - "integer", - "integer" + "name": "index", + "type": "block", + "block": [ + { + "name": "start", + "type": "integer" + }, + { + "name": "end", + "type": "integer" + }, + { + "name": "index_unit", + "type": "enum", + "enum": [ + "BYTE", + "BIT" + ], + "optional": true + } ], "optional": true } @@ -327,9 +340,24 @@ "type": "integer" }, { - "name": "end", - "type": "integer", - "optional": true + "name": "end_index", + "type": "block", + "optional": true, + "block": [ + { + "name": "end", + "type": "integer" + }, + { + "name": "index_unit", + "type": "enum", + "enum": [ + "BYTE", + "BIT" + ], + "optional": true + } + ] } ] } diff --git a/commands/bitcount.md b/commands/bitcount.md index 35f65b6bc1..e0ce101b45 100644 --- a/commands/bitcount.md +++ b/commands/bitcount.md @@ -10,12 +10,21 @@ byte, -2 is the penultimate, and so forth. Non-existent keys are treated as empty strings, so the command will return zero. +By default, the additional arguments _start_ and _end_ specify a byte index. +We can use an additional argument `BIT` to specify a bit index. +So 0 is the first bit, 1 is the second bit, and so forth. +For negative values, -1 is the last bit, -2 is the penultimate, and so forth. + @return @integer-reply The number of bits set to 1. +@history + +* `>= 7.0`: Added the `BYTE|BIT` option. + @examples ```cli @@ -23,6 +32,8 @@ SET mykey "foobar" BITCOUNT mykey BITCOUNT mykey 0 0 BITCOUNT mykey 1 1 +BITCOUNT mykey 1 1 BYTE +BITCOUNT mykey 5 30 BIT ``` ## Pattern: real-time metrics using bitmaps diff --git a/commands/bitpos.md b/commands/bitpos.md index ad645a09f6..ec9bc07bff 100644 --- a/commands/bitpos.md +++ b/commands/bitpos.md @@ -7,13 +7,18 @@ byte's most significant bit is at position 8, and so forth. The same bit position convention is followed by `GETBIT` and `SETBIT`. By default, all the bytes contained in the string are examined. 
-It is possible to look for bits only in a specified interval passing the additional arguments _start_ and _end_ (it is possible to just pass _start_, the operation will assume that the end is the last byte of the string. However there are semantic differences as explained later). The range is interpreted as a range of bytes and not a range of bits, so `start=0` and `end=2` means to look at the first three bytes. +It is possible to look for bits only in a specified interval passing the additional arguments _start_ and _end_ (it is possible to just pass _start_, the operation will assume that the end is the last byte of the string. However there are semantic differences as explained later). +By default, the range is interpreted as a range of bytes and not a range of bits, so `start=0` and `end=2` means to look at the first three bytes. + +You can use the optional `BIT` modifier to specify that the range should be interpreted as a range of bits. +So `start=0` and `end=2` means to look at the first three bits. Note that bit positions are returned always as absolute values starting from bit zero even when _start_ and _end_ are used to specify a range. Like for the `GETRANGE` command start and end can contain negative values in order to index bytes starting from the end of the string, where -1 is the last -byte, -2 is the penultimate, and so forth. +byte, -2 is the penultimate, and so forth. When `BIT` is specified, -1 is the last +bit, -2 is the penultimate, and so forth. Non-existent keys are treated as empty strings. @@ -31,6 +36,10 @@ Basically, the function considers the right of the string as padded with zeros i However, this behavior changes if you are looking for clear bits and specify a range with both __start__ and __end__. If no clear bit is found in the specified range, the function returns -1 as the user specified a clear range and there are no 0 bits in that range. +@history + +* `>= 7.0`: Added the `BYTE|BIT` option. 
+ @examples ```cli @@ -39,6 +48,9 @@ BITPOS mykey 0 SET mykey "\x00\xff\xf0" BITPOS mykey 1 0 BITPOS mykey 1 2 +BITPOS mykey 1 2 -1 BYTE +BITPOS mykey 1 7 15 BIT set mykey "\x00\x00\x00" BITPOS mykey 1 +BITPOS mykey 1 7 -3 BIT ``` From 3f6868b9225593148406022a13719c6ce85de79d Mon Sep 17 00:00:00 2001 From: yoav-steinberg Date: Fri, 5 Nov 2021 16:23:21 +0200 Subject: [PATCH 073/813] Interval for scan and bigkeys (#1638) --- topics/rediscli.md | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/topics/rediscli.md b/topics/rediscli.md index 0267ca58ca..8e0b1ce0f4 100644 --- a/topics/rediscli.md +++ b/topics/rediscli.md @@ -462,8 +462,8 @@ and produces quite a verbose output: $ redis-cli --bigkeys # Scanning the entire keyspace to find biggest keys as well as - # average sizes per key type. You can use -i 0.1 to sleep 0.1 sec - # per 100 SCAN commands (not usually needed). + # average sizes per key type. You can use -i 0.01 to sleep 0.01 sec + # per SCAN command (not usually needed). [00.00%] Biggest string found so far 'key-419' with 3 bytes [05.14%] Biggest list found so far 'mylist' with 100004 items @@ -492,7 +492,7 @@ provides general stats about the data inside the Redis instance. The program uses the `SCAN` command, so it can be executed against a busy server without impacting the operations, however the `-i` option can be used in order to throttle the scanning process of the specified fraction -of second for each 100 keys requested. For example, `-i 0.1` will slow down +of second for each `SCAN` command. For example, `-i 0.01` will slow down the program execution a lot, but will also reduce the load on the server to a tiny amount. @@ -547,6 +547,9 @@ kind of objects, by key name: $ redis-cli --scan --pattern 'user:*' | wc -l 3829433 +You can use `-i 0.01` to add a delay between calls to the `SCAN` command. +This will make the command slower but will significantly reduce load on the server. 
+ ## Pub/sub mode The CLI is able to publish messages in Redis Pub/Sub channels just using From 8678b903ca5e2d4f8afe1478326ddd89e2c35eed Mon Sep 17 00:00:00 2001 From: Itamar Haber Date: Fri, 5 Nov 2021 16:30:11 +0200 Subject: [PATCH 074/813] Splits several container commands (#1675) --- commands.json | 385 +++++++++++++++++++----------- commands/acl-help.md | 3 +- commands/object-encoding.md | 15 ++ commands/object-freq.md | 9 + commands/object-help.md | 5 + commands/object-idletime.md | 9 + commands/object-refcount.md | 7 + commands/object.md | 85 ------- commands/pubsub-channels.md | 11 + commands/pubsub-help.md | 5 + commands/pubsub-numpat.md | 9 + commands/pubsub-numsub.md | 11 + commands/pubsub.md | 48 ---- commands/slowlog-get.md | 30 +++ commands/slowlog-help.md | 5 + commands/slowlog-len.md | 12 + commands/slowlog-reset.md | 7 + commands/slowlog.md | 91 ------- commands/xgroup-create.md | 18 ++ commands/xgroup-createconsumer.md | 7 + commands/xgroup-delconsumer.md | 10 + commands/xgroup-destroy.md | 7 + commands/xgroup-help.md | 5 + commands/xgroup-setid.md | 11 + commands/xgroup.md | 80 ------- commands/xinfo-consumers.md | 29 +++ commands/xinfo-groups.md | 34 +++ commands/xinfo-help.md | 5 + commands/xinfo-stream.md | 103 ++++++++ commands/xinfo.md | 184 -------------- wordlist | 1 + 31 files changed, 610 insertions(+), 631 deletions(-) create mode 100644 commands/object-encoding.md create mode 100644 commands/object-freq.md create mode 100644 commands/object-help.md create mode 100644 commands/object-idletime.md create mode 100644 commands/object-refcount.md delete mode 100644 commands/object.md create mode 100644 commands/pubsub-channels.md create mode 100644 commands/pubsub-help.md create mode 100644 commands/pubsub-numpat.md create mode 100644 commands/pubsub-numsub.md delete mode 100644 commands/pubsub.md create mode 100644 commands/slowlog-get.md create mode 100644 commands/slowlog-help.md create mode 100644 commands/slowlog-len.md create mode 
100644 commands/slowlog-reset.md delete mode 100644 commands/slowlog.md create mode 100644 commands/xgroup-create.md create mode 100644 commands/xgroup-createconsumer.md create mode 100644 commands/xgroup-delconsumer.md create mode 100644 commands/xgroup-destroy.md create mode 100644 commands/xgroup-help.md create mode 100644 commands/xgroup-setid.md delete mode 100644 commands/xgroup.md create mode 100644 commands/xinfo-consumers.md create mode 100644 commands/xinfo-groups.md create mode 100644 commands/xinfo-help.md create mode 100644 commands/xinfo-stream.md delete mode 100644 commands/xinfo.md diff --git a/commands.json b/commands.json index c3dd40376c..9ad79367f7 100644 --- a/commands.json +++ b/commands.json @@ -3077,24 +3077,60 @@ "since": "1.2.0", "group": "transactions" }, - "OBJECT": { - "summary": "Inspect the internals of Redis objects", - "complexity": "O(1) for all the currently implemented subcommands.", + "OBJECT ENCODING": { + "summary": "Inspect the internal encoding of a Redis object", + "complexity": "O(1)", "since": "2.2.3", "group": "generic", "arguments": [ { - "name": "subcommand", - "type": "string" - }, + "name": "key", + "type": "key" + } + ] + }, + "OBJECT FREQ": { + "summary": "Get the logarithmic access frequency counter of a Redis object", + "complexity": "O(1)", + "since": "4.0.0", + "group": "generic", + "arguments": [ { - "name": "arguments", - "type": "string", - "optional": true, - "multiple": true + "name": "key", + "type": "key" } ] }, + "OBJECT IDLETIME": { + "summary": "Get the time since a Redis object was last accessed", + "complexity": "O(1)", + "since": "2.2.3", + "group": "generic", + "arguments": [ + { + "name": "key", + "type": "key" + } + ] + }, + "OBJECT REFCOUNT": { + "summary": "Get the number of references to the value of the key", + "complexity": "O(1)", + "since": "2.2.3", + "group": "generic", + "arguments": [ + { + "name": "key", + "type": "key" + } + ] + }, + "OBJECT HELP": { + "summary": "Show helpful text 
about the different subcommands", + "complexity": "O(1)", + "since": "6.2.0", + "group": "generic" + }, "PERSIST": { "summary": "Remove the expiration from a key", "complexity": "O(1)", @@ -3270,22 +3306,43 @@ "since": "2.0.0", "group": "pubsub" }, - "PUBSUB": { - "summary": "Inspect the state of the Pub/Sub subsystem", - "complexity": "O(N) for the CHANNELS subcommand, where N is the number of active channels, and assuming constant time pattern matching (relatively short channels and patterns). O(N) for the NUMSUB subcommand, where N is the number of requested channels. O(1) for the NUMPAT subcommand.", + "PUBSUB CHANNELS": { + "summary": "List active channels", + "complexity": "O(N) where N is the number of active channels, and assuming constant time pattern matching (relatively short channels and patterns)", + "since": "2.8.0", + "group": "pubsub", "arguments": [ { - "name": "subcommand", - "type": "string" - }, + "name": "pattern", + "type": "string", + "optional": true + } + ] + }, + "PUBSUB NUMPAT": { + "summary": "Get the count of unique patterns pattern subscriptions", + "complexity": "O(1)", + "since": "2.8.0", + "group": "pubsub" + }, + "PUBSUB NUMSUB": { + "summary": "Get the count of subscribers for channels", + "complexity": "O(N) for the NUMSUB subcommand, where N is the number of requested channels", + "since": "2.8.0", + "group": "pubsub", + "arguments": [ { - "name": "argument", + "name": "channel", "type": "string", "optional": true, "multiple": true } - ], - "since": "2.8.0", + ] + }, + "PUBSUB HELP": { + "summary": "Show helpful text about the different subcommands", + "complexity": "O(1)", + "since": "6.2.0", "group": "pubsub" }, "PTTL": { @@ -3935,22 +3992,37 @@ "since": "5.0.0", "group": "server" }, - "SLOWLOG": { - "summary": "Manages the Redis slow queries log", + "SLOWLOG GET": { + "summary": "Get the slow log's entries", + "complexity": "O(N) where N is the number of entries returned", + "since": "2.2.12", + "group": "server", 
"arguments": [ { - "name": "subcommand", - "type": "string" - }, - { - "name": "argument", - "type": "string", + "name": "count", + "type": "integer", "optional": true } - ], + ] + }, + "SLOWLOG LEN": { + "summary": "Get the slow log's length", + "complexity": "O(1)", + "since": "2.2.12", + "group": "server" + }, + "SLOWLOG RESET": { + "summary": "Clear all entries from the slow log", + "complexity": "O(N) where N is the number of entries in the slowlog", "since": "2.2.12", "group": "server" }, + "SLOWLOG HELP": { + "summary": "Show helpful text about the different subcommands", + "complexity": "O(1)", + "since": "6.2.0", + "group": "server" + }, "SMEMBERS": { "summary": "Get all the members in a set", "complexity": "O(N) where N is the set cardinality.", @@ -5360,39 +5432,55 @@ "since": "2.8.0", "group": "sorted_set" }, - "XINFO": { - "summary": "Get information on streams and consumer groups", - "complexity": "O(N) with N being the number of returned items for the subcommands CONSUMERS and GROUPS. 
The STREAM subcommand is O(log N) with N being the number of items in the stream.", + "XINFO CONSUMERS": { + "summary": "List the consumers in a consumer group", + "complexity": "O(1)", "arguments": [ { - "command": "CONSUMERS", - "name": [ - "key", - "groupname" - ], - "type": [ - "key", - "string" - ], - "optional": true + "name": "key", + "type": "key" }, { - "command": "GROUPS", + "name": "groupname", + "type": "string" + } + ], + "since": "5.0.0", + "group": "stream" + }, + "XINFO GROUPS": { + "summary": "List the consumer groups of a stream", + "complexity": "O(1)", + "arguments": [ + { "name": "key", - "type": "key", - "optional": true - }, + "type": "key" + } + ], + "since": "5.0.0", + "group": "stream" + }, + "XINFO STREAM": { + "summary": "Get information about a stream", + "complexity": "O(1)", + "arguments": [ { - "command": "STREAM", "name": "key", - "type": "key", - "optional": true + "type": "key" }, { - "name": "help", - "type": "enum", - "enum": [ - "HELP" + "name": "full", + "type": "block", + "block": [ + { + "command": "FULL" + }, + { + "name": "count", + "command": "COUNT", + "type": "integer", + "optional": true + } ], "optional": true } @@ -5400,6 +5488,12 @@ "since": "5.0.0", "group": "stream" }, + "XINFO HELP": { + "summary": "Show helpful text about the different subcommands", + "complexity": "O(1)", + "since": "5.0.0", + "group": "stream" + }, "XADD": { "summary": "Appends a new entry to a stream", "complexity": "O(1) when adding a new entry, O(N) when trimming where N being the number of entires evicted.", @@ -5632,111 +5726,120 @@ "since": "5.0.0", "group": "stream" }, - "XGROUP": { - "summary": "Create, destroy, and manage consumer groups.", - "complexity": "O(1) for all the subcommands, with the exception of the DESTROY subcommand which takes an additional O(M) time in order to delete the M entries inside the consumer group pending entries list (PEL).", + "XGROUP CREATE": { + "summary": "Create a consumer group.", + "complexity": 
"O(1)", "arguments": [ { - "name": "create", - "type": "block", - "block": [ - { - "command": "CREATE", - "name": [ - "key", - "groupname" - ], - "type": [ - "key", - "string" - ] - }, - { - "name": "id", - "type": "enum", - "enum": [ - "ID", - "$" - ] - }, - { - "command": "MKSTREAM", - "type": "command", - "optional": true - } - ], - "optional": true + "name": "key", + "type": "key" }, { - "name": "setid", - "type": "block", - "block": [ - { - "command": "SETID", - "name": [ - "key", - "groupname" - ], - "type": [ - "key", - "string" - ] - }, - { - "name": "id", - "type": "enum", - "enum": [ - "ID", - "$" - ] - } - ], - "optional": true + "name": "groupname", + "type": "string" }, { - "command": "DESTROY", - "name": [ - "key", - "groupname" - ], - "type": [ - "key", - "string" - ], - "optional": true + "name": "id", + "type": "enum", + "enum": [ + "id", + "$" + ] }, { - "command": "CREATECONSUMER", - "name": [ - "key", - "groupname", - "consumername" - ], - "type": [ - "key", - "string", - "string" - ], + "command": "MKSTREAM", "optional": true + } + ], + "since": "5.0.0", + "group": "stream" + }, + "XGROUP CREATECONSUMER": { + "summary": "Create a consumer in a consumer group.", + "complexity": "O(1)", + "arguments": [ + { + "name": "key", + "type": "key" }, { - "command": "DELCONSUMER", - "name": [ - "key", - "groupname", - "consumername" - ], - "type": [ - "key", - "string", - "string" - ], - "optional": true + "name": "groupname", + "type": "string" + }, + { + "name": "consumername", + "type": "string" + } + ], + "since": "6.2.0", + "group": "stream" + }, + "XGROUP DELCONSUMER": { + "summary": "Delete a consumer from a consumer group.", + "complexity": "O(1)", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "name": "groupname", + "type": "string" + }, + { + "name": "consumername", + "type": "string" + } + ], + "since": "5.0.0", + "group": "stream" + }, + "XGROUP DESTROY": { + "summary": "Destroy a consumer group.", + "complexity": "O(N) where 
N is the number of entries in the group's pending entries list (PEL).", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "name": "groupname", + "type": "string" + } + ], + "since": "5.0.0", + "group": "stream" + }, + "XGROUP SETID": { + "summary": "Set a consumer group to an arbitrary last delivered ID value.", + "complexity": "O(1)", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "name": "groupname", + "type": "string" + }, + { + "name": "id", + "type": "string", + "enum": [ + "id", + "$" + ] } ], "since": "5.0.0", "group": "stream" }, + "XGROUP HELP": { + "summary": "Show helpful text about the different subcommands", + "complexity": "O(1)", + "since": "5.0.0", + "group": "stream" + }, "XREADGROUP": { "summary": "Return new entries from a stream using a consumer group, or access the history of the pending entries for a given consumer. Can block.", "complexity": "For each stream mentioned: O(M) with M being the number of elements returned. If M is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(1). On the other side when XREADGROUP blocks, XADD will pay the O(N) time in order to serve the N clients blocked on the stream getting new data.", diff --git a/commands/acl-help.md b/commands/acl-help.md index 3ec1ffbbb0..ddb9432f3c 100644 --- a/commands/acl-help.md +++ b/commands/acl-help.md @@ -1,5 +1,4 @@ -The `ACL HELP` command returns a helpful text describing the different -subcommands. +The `ACL HELP` command returns a helpful text describing the different subcommands. 
@return diff --git a/commands/object-encoding.md b/commands/object-encoding.md new file mode 100644 index 0000000000..838d5e5cc4 --- /dev/null +++ b/commands/object-encoding.md @@ -0,0 +1,15 @@ +Returns the internal encoding for the Redis object stored at `<key>` + +Redis objects can be encoded in different ways: + +* Strings can be encoded as `raw` (normal string encoding) or `int` (strings representing integers in a 64 bit signed interval are encoded in this way in order to save space). +* Lists can be encoded as `ziplist` or `linkedlist`. The `ziplist` is the special representation that is used to save space for small lists. +* Sets can be encoded as `intset` or `hashtable`. The `intset` is a special encoding used for small sets composed solely of integers. +* Hashes can be encoded as `ziplist` or `hashtable`. The `ziplist` is a special encoding used for small hashes. +* Sorted Sets can be encoded as `ziplist` or `skiplist` format. As for the List type small sorted sets can be specially encoded using `ziplist`, while the `skiplist` encoding is the one that works with sorted sets of any size. + +All the specially encoded types are automatically converted to the general type once you perform an operation that makes it impossible for Redis to retain the space saving encoding. + +@return + +@bulk-string-reply: the encoding of the object, or `nil` if the key doesn't exist diff --git a/commands/object-freq.md b/commands/object-freq.md new file mode 100644 index 0000000000..fdf891e83d --- /dev/null +++ b/commands/object-freq.md @@ -0,0 +1,9 @@ +This command returns the logarithmic access frequency counter of a Redis object stored at `<key>`. + +The command is only available when the `maxmemory-policy` configuration directive is set to one of the LFU policies. + +@return + +@integer-reply + +The counter's value.
\ No newline at end of file diff --git a/commands/object-help.md b/commands/object-help.md new file mode 100644 index 0000000000..f98196c5e1 --- /dev/null +++ b/commands/object-help.md @@ -0,0 +1,5 @@ +The `OBJECT HELP` command returns a helpful text describing the different subcommands. + +@return + +@array-reply: a list of subcommands and their descriptions diff --git a/commands/object-idletime.md b/commands/object-idletime.md new file mode 100644 index 0000000000..7360454797 --- /dev/null +++ b/commands/object-idletime.md @@ -0,0 +1,9 @@ +This command returns the time in seconds since the last access to the value stored at `<key>`. + +The command is only available when the `maxmemory-policy` configuration directive is set to one of the LRU policies. + +@return + +@integer-reply + +The idle time in seconds. \ No newline at end of file diff --git a/commands/object-refcount.md b/commands/object-refcount.md new file mode 100644 index 0000000000..639c899dbd --- /dev/null +++ b/commands/object-refcount.md @@ -0,0 +1,7 @@ +This command returns the reference count of the value stored at `<key>`. + +@return + +@integer-reply + +The number of references. \ No newline at end of file diff --git a/commands/object.md b/commands/object.md deleted file mode 100644 index 67f313f79d..0000000000 --- a/commands/object.md +++ /dev/null @@ -1,85 +0,0 @@ -The `OBJECT` command allows to inspect the internals of Redis Objects associated -with keys. -It is useful for debugging or to understand if your keys are using the specially -encoded data types to save space. -Your application may also use the information reported by the `OBJECT` command -to implement application level key eviction policies when using Redis as a -Cache. - -The `OBJECT` command supports multiple sub commands: - -* `OBJECT REFCOUNT <key>` returns the number of references of the value - associated with the specified key. - This command is mainly useful for debugging.
-* `OBJECT ENCODING <key>` returns the kind of internal representation used in - order to store the value associated with a key. -* `OBJECT IDLETIME <key>` returns the number of seconds since the object stored - at the specified key is idle (not requested by read or write operations). - While the value is returned in seconds the actual resolution of this timer is - 10 seconds, but may vary in future implementations. This subcommand is - available when `maxmemory-policy` is set to an LRU policy or `noeviction` - and `maxmemory` is set. -* `OBJECT FREQ <key>` returns the logarithmic access frequency counter of the - object stored at the specified key. This subcommand is available when - `maxmemory-policy` is set to an LFU policy. -* `OBJECT HELP` returns a succinct help text. - -Objects can be encoded in different ways: - -* Strings can be encoded as `raw` (normal string encoding) or `int` (strings - representing integers in a 64 bit signed interval are encoded in this way in - order to save space). -* Lists can be encoded as `ziplist` or `linkedlist`. - The `ziplist` is the special representation that is used to save space for - small lists. -* Sets can be encoded as `intset` or `hashtable`. - The `intset` is a special encoding used for small sets composed solely of - integers. -* Hashes can be encoded as `ziplist` or `hashtable`. - The `ziplist` is a special encoding used for small hashes. -* Sorted Sets can be encoded as `ziplist` or `skiplist` format. - As for the List type small sorted sets can be specially encoded using - `ziplist`, while the `skiplist` encoding is the one that works with sorted - sets of any size. - -All the specially encoded types are automatically converted to the general type -once you perform an operation that makes it impossible for Redis to retain the -space saving encoding. - -@return - -Different return values are used for different subcommands. - -* Subcommands `refcount` and `idletime` return integers. -* Subcommand `encoding` returns a bulk reply.
- -If the object you try to inspect is missing, a null bulk reply is returned. - -@examples - -``` -redis> lpush mylist "Hello World" -(integer) 4 -redis> object refcount mylist -(integer) 1 -redis> object encoding mylist -"ziplist" -redis> object idletime mylist -(integer) 10 -``` - -In the following example you can see how the encoding changes once Redis is no -longer able to use the space saving encoding. - -``` -redis> set foo 1000 -OK -redis> object encoding foo -"int" -redis> append foo bar -(integer) 7 -redis> get foo -"1000bar" -redis> object encoding foo -"raw" -``` diff --git a/commands/pubsub-channels.md b/commands/pubsub-channels.md new file mode 100644 index 0000000000..8b9a06eefd --- /dev/null +++ b/commands/pubsub-channels.md @@ -0,0 +1,11 @@ +Lists the currently *active channels*. + +An active channel is a Pub/Sub channel with one or more subscribers (excluding clients subscribed to patterns). + +If no `pattern` is specified, all the channels are listed, otherwise if pattern is specified only channels matching the specified glob-style pattern are listed. + +Cluster note: in a Redis Cluster clients can subscribe to every node, and can also publish to every other node. The cluster will make sure that published messages are forwarded as needed. That said, `PUBSUB`'s replies in a cluster only report information from the node's Pub/Sub context, rather than the entire cluster. + +@return + +@array-reply: a list of active channels, optionally matching the specified pattern. diff --git a/commands/pubsub-help.md b/commands/pubsub-help.md new file mode 100644 index 0000000000..a7ab2a359f --- /dev/null +++ b/commands/pubsub-help.md @@ -0,0 +1,5 @@ +The `PUBSUB HELP` command returns a helpful text describing the different subcommands. 
+ +@return + +@array-reply: a list of subcommands and their descriptions diff --git a/commands/pubsub-numpat.md b/commands/pubsub-numpat.md new file mode 100644 index 0000000000..6f3a7c9e14 --- /dev/null +++ b/commands/pubsub-numpat.md @@ -0,0 +1,9 @@ +Returns the number of unique patterns that are subscribed to by clients (that are performed using the `PSUBSCRIBE` command). + +Note that this isn't the count of clients subscribed to patterns, but the total number of unique patterns all the clients are subscribed to. + +Cluster note: in a Redis Cluster clients can subscribe to every node, and can also publish to every other node. The cluster will make sure that published messages are forwarded as needed. That said, `PUBSUB`'s replies in a cluster only report information from the node's Pub/Sub context, rather than the entire cluster. + +@return + +@integer-reply: the number of patterns all the clients are subscribed to. diff --git a/commands/pubsub-numsub.md b/commands/pubsub-numsub.md new file mode 100644 index 0000000000..d4d6b85e7b --- /dev/null +++ b/commands/pubsub-numsub.md @@ -0,0 +1,11 @@ +Returns the number of subscribers (exclusive of clients subscribed to patterns) for the specified channels. + +Note that it is valid to call this command without channels. In this case it will just return an empty list. + +Cluster note: in a Redis Cluster clients can subscribe to every node, and can also publish to every other node. The cluster will make sure that published messages are forwarded as needed. That said, `PUBSUB`'s replies in a cluster only report information from the node's Pub/Sub context, rather than the entire cluster. + +@return + +@array-reply: a list of channels and number of subscribers for every channel. + +The format is channel, count, channel, count, ..., so the list is flat. The order in which the channels are listed is the same as the order of the channels specified in the command call. 
diff --git a/commands/pubsub.md b/commands/pubsub.md deleted file mode 100644 index 9d86bc9bc6..0000000000 --- a/commands/pubsub.md +++ /dev/null @@ -1,48 +0,0 @@ -The PUBSUB command is an introspection command that allows to inspect the -state of the Pub/Sub subsystem. It is composed of subcommands that are -documented separately. The general form is: - - PUBSUB ... args ... - -Cluster note: in a Redis Cluster clients can subscribe to every node, and can -also publish to every other node. The cluster will make sure that published -messages are forwarded as needed. That said, `PUBSUB`'s replies in a cluster only -report information from the node's Pub/Sub context, rather than the entire -cluster. - -# PUBSUB CHANNELS [pattern] - -Lists the currently *active channels*. An active channel is a Pub/Sub channel -with one or more subscribers (not including clients subscribed to patterns). - -If no `pattern` is specified, all the channels are listed, otherwise if pattern -is specified only channels matching the specified glob-style pattern are -listed. - -@return - -@array-reply: a list of active channels, optionally matching the specified pattern. - -# `PUBSUB NUMSUB [channel-1 ... channel-N]` - -Returns the number of subscribers (not counting clients subscribed to patterns) -for the specified channels. - -@return - -@array-reply: a list of channels and number of subscribers for every channel. The format is channel, count, channel, count, ..., so the list is flat. -The order in which the channels are listed is the same as the order of the -channels specified in the command call. - -Note that it is valid to call this command without channels. In this case it -will just return an empty list. - -# `PUBSUB NUMPAT` - -Returns the number of unique patterns that are subscribed to by clients (that are performed using the -`PSUBSCRIBE` command). 
Note that this is not the count of clients subscribed -to patterns but the total number of unique patterns all the clients are subscribed to. - -@return - -@integer-reply: the number of patterns all the clients are subscribed to. diff --git a/commands/slowlog-get.md b/commands/slowlog-get.md new file mode 100644 index 0000000000..f42cdfcffd --- /dev/null +++ b/commands/slowlog-get.md @@ -0,0 +1,30 @@ +The `SLOWLOG GET` command returns entries from the slow log in chronological order. + +The Redis Slow Log is a system to log queries that exceeded a specified execution time. +The execution time does not include I/O operations like talking with the client, sending the reply and so forth, but just the time needed to actually execute the command (this is the only stage of command execution where the thread is blocked and can not serve other requests in the meantime). + +A new entry is added to the slow log whenever a command exceeds the execution time threshold defined by the `slowlog-log-slower-than` configuration directive. +The maximum number of entries in the slow log is governed by the `slowlog-max-len` configuration directive. + +By default the command returns all of the entries in the log. The optional `count` argument limits the number of returned entries, so the command returns at most up to `count` entries. + +Each entry from the slow log is comprised of the following six values: + +1. A unique progressive identifier for every slow log entry. +2. The unix timestamp at which the logged command was processed. +3. The amount of time needed for its execution, in microseconds. +4. The array composing the arguments of the command. +5. Client IP address and port. +6. Client name if set via the `CLIENT SETNAME` command. + +The entry's unique ID can be used in order to avoid processing slow log entries multiple times (for instance you may have a script sending you an email alert for every new slow log entry). 
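That dedup-by-ID pattern might look like this in client code — a sketch only: the `alert` callback and the simplified entry shape (`[id, timestamp, microseconds, args]`) are illustrative assumptions, not a prescribed API:

```python
def process_new_entries(slowlog_entries, last_seen_id, alert):
    """Report each slow log entry at most once, using the unique,
    monotonically increasing entry ID (the first field of each entry).

    `slowlog_entries` is shaped like SLOWLOG GET output, newest first:
    [id, timestamp, microseconds, args].
    """
    new = [e for e in slowlog_entries if e[0] > last_seen_id]
    for entry in sorted(new, key=lambda e: e[0]):  # alert oldest-first
        alert(entry)
    # Remember the highest ID we have handled for the next poll.
    return max((e[0] for e in new), default=last_seen_id)

seen = []
entries = [
    [14, 1309448221, 15, ["ping"]],
    [13, 1309448128, 30, ["slowlog", "get", "100"]],
]
last_id = process_new_entries(entries, last_seen_id=13, alert=seen.append)
print(last_id)    # 14
print(len(seen))  # 1 -- entry 13 was already handled on a previous poll
```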
+
+The ID is never reset in the course of the Redis server execution, only a server
+restart will reset it.
+
+@reply
+
+@array-reply: a list of slow log entries.
+
+@history
+
+* `>= 4.0`: Added client IP address, port and name to the reply.
diff --git a/commands/slowlog-help.md b/commands/slowlog-help.md
new file mode 100644
index 0000000000..a70f3a5d4e
--- /dev/null
+++ b/commands/slowlog-help.md
@@ -0,0 +1,5 @@
+The `SLOWLOG HELP` command returns a helpful text describing the different subcommands.
+
+@return
+
+@array-reply: a list of subcommands and their descriptions
diff --git a/commands/slowlog-len.md b/commands/slowlog-len.md
new file mode 100644
index 0000000000..6f0d97758a
--- /dev/null
+++ b/commands/slowlog-len.md
@@ -0,0 +1,12 @@
+This command returns the current number of entries in the slow log.
+
+A new entry is added to the slow log whenever a command exceeds the execution time threshold defined by the `slowlog-log-slower-than` configuration directive.
+The maximum number of entries in the slow log is governed by the `slowlog-max-len` configuration directive.
+Once the slow log reaches its maximal size, the oldest entry is removed whenever a new entry is created.
+The slow log can be cleared with the `SLOWLOG RESET` command.
+
+@reply
+
+@integer-reply
+
+The number of entries in the slow log.
diff --git a/commands/slowlog-reset.md b/commands/slowlog-reset.md
new file mode 100644
index 0000000000..b522c26a5c
--- /dev/null
+++ b/commands/slowlog-reset.md
@@ -0,0 +1,7 @@
+This command resets the slow log, clearing all entries in it.
+
+Once deleted the information is lost forever.
+
+@reply
+
+@simple-string-reply: `OK`
diff --git a/commands/slowlog.md b/commands/slowlog.md
deleted file mode 100644
index 59d9a0afd0..0000000000
--- a/commands/slowlog.md
+++ /dev/null
@@ -1,91 +0,0 @@
-This command is used in order to read and reset the Redis slow queries log.
- -## Redis slow log overview - -The Redis Slow Log is a system to log queries that exceeded a specified -execution time. -The execution time does not include I/O operations like talking with the client, -sending the reply and so forth, but just the time needed to actually execute the -command (this is the only stage of command execution where the thread is blocked -and can not serve other requests in the meantime). - -You can configure the slow log with two parameters: _slowlog-log-slower-than_ -tells Redis what is the execution time, in microseconds, to exceed in order for -the command to get logged. -Note that a negative number disables the slow log, while a value of zero forces -the logging of every command. -_slowlog-max-len_ is the length of the slow log. -The minimum value is zero. -When a new command is logged and the slow log is already at its maximum length, -the oldest one is removed from the queue of logged commands in order to make -space. - -The configuration can be done by editing `redis.conf` or while the server is -running using the `CONFIG GET` and `CONFIG SET` commands. - -## Reading the slow log - -The slow log is accumulated in memory, so no file is written with information -about the slow command executions. -This makes the slow log remarkably fast at the point that you can enable the -logging of all the commands (setting the _slowlog-log-slower-than_ config -parameter to zero) with minor performance hit. - -To read the slow log the **SLOWLOG GET** command is used, that returns every -entry in the slow log. -It is possible to return only the N most recent entries passing an additional -argument to the command (for instance **SLOWLOG GET 10**). -The default requested length is 10 (when the argument is omitted). It's possible to pass -1 to get the entire slowlog. 
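The configuration and read behavior described above — a microsecond threshold, a capped log that drops its oldest entry, and a `GET` that accepts `-1` for everything — can be sketched as a small in-memory model. This is an illustrative simulation, not the server implementation:

```python
from collections import deque

class SlowLog:
    """Minimal model of the slow log: commands over a microsecond
    threshold are kept in a length-capped queue, newest first."""

    def __init__(self, log_slower_than_us, max_len):
        self.threshold = log_slower_than_us    # negative disables, 0 logs everything
        self.entries = deque(maxlen=max_len)   # oldest entries fall off the end
        self.next_id = 0

    def record(self, duration_us, args):
        if self.threshold < 0 or duration_us < self.threshold:
            return  # disabled, or the command was fast enough
        self.entries.appendleft((self.next_id, duration_us, args))
        self.next_id += 1  # IDs keep growing; only a restart resets them

    def get(self, count=10):
        # A count of -1 returns the entire log, mirroring SLOWLOG GET -1.
        return list(self.entries) if count == -1 else list(self.entries)[:count]

log = SlowLog(log_slower_than_us=10000, max_len=2)
log.record(500, ["get", "k"])                  # fast: not logged
log.record(15000, ["keys", "*"])               # logged, id 0
log.record(20000, ["sort", "biglist"])         # logged, id 1
log.record(30000, ["lrange", "l", "0", "-1"])  # id 2; id 0 is evicted
print([e[0] for e in log.get(-1)])             # [2, 1]
```

The cap via `deque(maxlen=...)` mirrors `slowlog-max-len`: when the log is full, appending a new entry silently discards the oldest one.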
- -Note that you need a recent version of redis-cli in order to read the slow log -output, since it uses some features of the protocol that were not formerly -implemented in redis-cli (deeply nested multi bulk replies). - -## Output format - -``` -redis 127.0.0.1:6379> slowlog get 2 -1) 1) (integer) 14 - 2) (integer) 1309448221 - 3) (integer) 15 - 4) 1) "ping" -2) 1) (integer) 13 - 2) (integer) 1309448128 - 3) (integer) 30 - 4) 1) "slowlog" - 2) "get" - 3) "100" -``` - -There are also optional fields emitted only by Redis 4.0 or greater: - -``` -5) "127.0.0.1:58217" -6) "worker-123" -``` - -Every entry is composed of four (or six starting with Redis 4.0) fields: - -* A unique progressive identifier for every slow log entry. -* The unix timestamp at which the logged command was processed. -* The amount of time needed for its execution, in microseconds. -* The array composing the arguments of the command. -* Client IP address and port (4.0 only). -* Client name if set via the `CLIENT SETNAME` command (4.0 only). - -The entry's unique ID can be used in order to avoid processing slow log entries -multiple times (for instance you may have a script sending you an email alert -for every new slow log entry). - -The ID is never reset in the course of the Redis server execution, only a server -restart will reset it. - -## Obtaining the current length of the slow log - -It is possible to get just the length of the slow log using the command -**SLOWLOG LEN**. - -## Resetting the slow log. - -You can reset the slow log using the **SLOWLOG RESET** command. -Once deleted the information is lost forever. diff --git a/commands/xgroup-create.md b/commands/xgroup-create.md new file mode 100644 index 0000000000..cd5e76d82d --- /dev/null +++ b/commands/xgroup-create.md @@ -0,0 +1,18 @@ +This command creates a new consumer group uniquely identified by `` for the stream stored at ``. + +Every group has a unique name in a given stream. 
When a consumer group with the same name already exists, the command returns a `-BUSYGROUP` error.
+
+The command's `<id>` argument specifies the last delivered entry in the stream from the new group's perspective.
+The special ID `$` means the ID of the last entry in the stream, but you can provide any valid ID instead.
+For example, if you want the group's consumers to fetch the entire stream from the beginning, use zero as the starting ID for the consumer group:
+
+    XGROUP CREATE mystream mygroup 0
+
+By default, the `XGROUP CREATE` command insists that the target stream exists and returns an error when it doesn't.
+However, you can use the optional `MKSTREAM` subcommand as the last argument after the `<id>` to automatically create the stream (with a length of 0) if it doesn't exist:
+
+    XGROUP CREATE mystream mygroup $ MKSTREAM
+
+@return
+
+@simple-string-reply: `OK` on success.
diff --git a/commands/xgroup-createconsumer.md b/commands/xgroup-createconsumer.md
new file mode 100644
index 0000000000..17274a5eab
--- /dev/null
+++ b/commands/xgroup-createconsumer.md
@@ -0,0 +1,7 @@
+Create a consumer named `<consumername>` in the consumer group `<groupname>` of the stream that's stored at `<key>`.
+
+Consumers are also created automatically whenever an operation, such as `XREADGROUP`, references a consumer that doesn't exist.
+
+@return
+
+@integer-reply: the number of created consumers (0 or 1)
\ No newline at end of file
diff --git a/commands/xgroup-delconsumer.md b/commands/xgroup-delconsumer.md
new file mode 100644
index 0000000000..9e73da8922
--- /dev/null
+++ b/commands/xgroup-delconsumer.md
@@ -0,0 +1,10 @@
+The `XGROUP DELCONSUMER` command deletes a consumer from the consumer group.
+
+Sometimes it may be useful to remove old consumers since they are no longer used.
+
+Note, however, that any pending messages that the consumer had will become unclaimable once it is deleted.
+It is strongly recommended, therefore, that any pending messages are claimed or acknowledged prior to deleting the consumer from the group. + +@return + +@integer-reply: the number of pending messages that the consumer had before it was deleted diff --git a/commands/xgroup-destroy.md b/commands/xgroup-destroy.md new file mode 100644 index 0000000000..448468ba14 --- /dev/null +++ b/commands/xgroup-destroy.md @@ -0,0 +1,7 @@ +The `XGROUP DESTROY` command completely destroys a consumer group. + +The consumer group will be destroyed even if there are active consumers, and pending messages, so make sure to call this command only when really needed. + +@return + +@integer-reply: the number of destroyed consumer groups (0 or 1) \ No newline at end of file diff --git a/commands/xgroup-help.md b/commands/xgroup-help.md new file mode 100644 index 0000000000..1eb1a7bb34 --- /dev/null +++ b/commands/xgroup-help.md @@ -0,0 +1,5 @@ +The `XGROUP HELP` command returns a helpful text describing the different subcommands. + +@return + +@array-reply: a list of subcommands and their descriptions diff --git a/commands/xgroup-setid.md b/commands/xgroup-setid.md new file mode 100644 index 0000000000..094bab354c --- /dev/null +++ b/commands/xgroup-setid.md @@ -0,0 +1,11 @@ +Set the **last delivered ID** for a consumer group. + +Normally, a consumer group's last delivered ID is set when the group is created with `XGROUP CREATE`. +The `XGROUP SETID` command allows modifying the group's last delivered ID, without having to delete and recreate the group. +For instance if you want the consumers in a consumer group to re-process all the messages in a stream, you may want to set its next ID to 0: + + XGROUP SETID mystream mygroup 0 + +@return + +@simple-string-reply: `OK` on success. 
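The last-delivered-ID semantics described above — a group created with `$` sees only new entries, and `XGROUP SETID ... 0` rewinds the cursor so the whole stream is re-delivered — can be illustrated with a toy model. The class below is a sketch with stream IDs simplified to integers, not how Redis stores groups internally:

```python
class ConsumerGroup:
    """Toy model of a consumer group's last-delivered-ID cursor,
    illustrating why XGROUP SETID mystream mygroup 0 causes the
    group's consumers to re-process the entire stream."""

    def __init__(self, last_delivered_id):
        self.last_delivered_id = last_delivered_id

    def set_id(self, new_id):
        # XGROUP SETID: move the cursor without recreating the group.
        self.last_delivered_id = new_id

    def read_new(self, stream):
        # XREADGROUP with the special '>' ID: only undelivered entries.
        batch = [(i, v) for i, v in stream if i > self.last_delivered_id]
        if batch:
            self.last_delivered_id = batch[-1][0]
        return batch

stream = [(1, "a"), (2, "b"), (3, "c")]      # (ID, payload) pairs
group = ConsumerGroup(last_delivered_id=3)   # created with $: nothing to deliver
print(group.read_new(stream))  # [] -- only entries newer than the cursor
group.set_id(0)                # rewind
print(group.read_new(stream))  # [(1, 'a'), (2, 'b'), (3, 'c')]
```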
diff --git a/commands/xgroup.md b/commands/xgroup.md deleted file mode 100644 index 07f42a8e38..0000000000 --- a/commands/xgroup.md +++ /dev/null @@ -1,80 +0,0 @@ -This command is used in order to manage the consumer groups associated -with a stream data structure. Using `XGROUP` you can: - -* Create a new consumer group associated with a stream. -* Destroy a consumer group. -* Remove a specific consumer from a consumer group. -* Set the consumer group *last delivered ID* to something else. - -To create a new consumer group, use the following form: - - XGROUP CREATE mystream consumer-group-name $ - -The last argument is the ID of the last item in the stream to consider already -delivered. In the above case we used the special ID '$' (that means: the ID -of the last item in the stream). In this case the consumers fetching data -from that consumer group will only see new elements arriving in the stream. - -If instead you want consumers to fetch the whole stream history, use -zero as the starting ID for the consumer group: - - XGROUP CREATE mystream consumer-group-name 0 - -Of course it is also possible to use any other valid ID. If the specified -consumer group already exists, the command returns a `-BUSYGROUP` error. -Otherwise, the operation is performed and a @simple-string-reply `OK` is returned. -There are no hard limits to the number of consumer groups you can associate with a given stream. - -If the specified stream doesn't exist when creating a group, an error will be -returned. You can use the optional `MKSTREAM` subcommand as the last argument -after the `ID` to automatically create the stream, if it doesn't exist. 
Note -that if the stream is created in this way it will have a length of 0: - - XGROUP CREATE mystream consumer-group-name $ MKSTREAM - -A consumer group can be destroyed completely by using the following form: - - XGROUP DESTROY mystream consumer-group-name - -The consumer group will be destroyed even if there are active consumers -and pending messages, so make sure to call this command only when really -needed. -This form returns an @integer-reply with the number of destroyed consumer groups (0 or 1). - -Consumers in a consumer group are auto-created every time a new consumer -name is mentioned by some command. They can also be explicitly created -by using the following form: - - XGROUP CREATECONSUMER mystream consumer-group-name myconsumer123 - -This form returns an @integer-reply with the number of created consumers (0 or 1). - -To just remove a given consumer from a consumer group, the following -form is used: - - XGROUP DELCONSUMER mystream consumer-group-name myconsumer123 - -Sometimes it may be useful to remove old consumers since they are no longer -used. -This form returns an @integer-reply with the number of pending messages that the consumer had before it was deleted. - -Finally it possible to set the next message to deliver using the -`SETID` subcommand. Normally the next ID is set when the consumer is -created, as the last argument of `XGROUP CREATE`. However using this form -the next ID can be modified later without deleting and creating the consumer -group again. For instance if you want the consumers in a consumer group -to re-process all the messages in a stream, you may want to set its next -ID to 0: - - XGROUP SETID mystream consumer-group-name 0 - -This form returns a @simple-string-reply `OK` or an error. - -Finally to get some help if you don't remember the syntax, use the -HELP subcommand: - - XGROUP HELP - -@history - - * `>= 6.2.0`: Supports the `CREATECONSUMER` subcommand. 
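The return values of the consumer-management forms above — `CREATECONSUMER` answering 0 or 1, and `DELCONSUMER` answering the number of pending messages the consumer had — can be modeled as simple bookkeeping. This is an illustrative sketch (real Redis tracks pending entries per message in the group's PEL, not as plain lists):

```python
class Group:
    """Toy model of per-consumer bookkeeping in a consumer group."""

    def __init__(self):
        self.pel = {}  # consumer name -> list of pending entry IDs

    def create_consumer(self, name):
        # XGROUP CREATECONSUMER: 1 if created, 0 if it already existed.
        if name in self.pel:
            return 0
        self.pel[name] = []
        return 1

    def del_consumer(self, name):
        # XGROUP DELCONSUMER: returns how many pending entries the
        # consumer had; those entries become unclaimable once it's gone.
        return len(self.pel.pop(name, []))

g = Group()
print(g.create_consumer("alice"))  # 1
print(g.create_consumer("alice"))  # 0 -- already exists
g.pel["alice"] = ["1-0", "2-0"]    # pretend two deliveries went unacknowledged
print(g.del_consumer("alice"))     # 2 -- pending messages dropped with the consumer
```

This is why the documentation recommends claiming or acknowledging pending messages before deleting a consumer: the count returned is exactly what gets lost.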
diff --git a/commands/xinfo-consumers.md b/commands/xinfo-consumers.md
new file mode 100644
index 0000000000..f65366d3c9
--- /dev/null
+++ b/commands/xinfo-consumers.md
@@ -0,0 +1,29 @@
+This command returns the list of consumers that belong to the `<groupname>` consumer group of the stream stored at `<key>`.
+
+The following information is provided for each consumer in the group:
+
+* **name**: the consumer's name
+* **pending**: the number of pending messages for the client, which are messages that were delivered but are yet to be acknowledged
+* **idle**: the number of milliseconds that have passed since the consumer last interacted with the server
+
+@reply
+
+@array-reply: a list of consumers.
+
+@examples
+
+```
+> XINFO CONSUMERS mystream mygroup
+1) 1) name
+   2) "Alice"
+   3) pending
+   4) (integer) 1
+   5) idle
+   6) (integer) 9104628
+2) 1) name
+   2) "Bob"
+   3) pending
+   4) (integer) 1
+   5) idle
+   6) (integer) 83841983
+```
diff --git a/commands/xinfo-groups.md b/commands/xinfo-groups.md
new file mode 100644
index 0000000000..b0d65342b2
--- /dev/null
+++ b/commands/xinfo-groups.md
@@ -0,0 +1,34 @@
+This command returns the list of all consumer groups of the stream stored at `<key>`.
+
+By default, only the following information is provided for each of the groups:
+
+* **name**: the consumer group's name
+* **consumers**: the number of consumers in the group
+* **pending**: the length of the group's pending entries list (PEL), which are messages that were delivered but are yet to be acknowledged
+* **last-delivered-id**: the ID of the last entry delivered to the group's consumers
+
+@reply
+
+@array-reply: a list of consumer groups.
+
+@examples
+
+```
+> XINFO GROUPS mystream
+1) 1) name
+   2) "mygroup"
+   3) consumers
+   4) (integer) 2
+   5) pending
+   6) (integer) 2
+   7) last-delivered-id
+   8) "1588152489012-0"
+2) 1) name
+   2) "some-other-group"
+   3) consumers
+   4) (integer) 1
+   5) pending
+   6) (integer) 0
+   7) last-delivered-id
+   8) "1588152498034-0"
+```
diff --git a/commands/xinfo-help.md b/commands/xinfo-help.md
new file mode 100644
index 0000000000..293892fd8f
--- /dev/null
+++ b/commands/xinfo-help.md
@@ -0,0 +1,5 @@
+The `XINFO HELP` command returns a helpful text describing the different subcommands.
+
+@return
+
+@array-reply: a list of subcommands and their descriptions
diff --git a/commands/xinfo-stream.md b/commands/xinfo-stream.md
new file mode 100644
index 0000000000..07708b4776
--- /dev/null
+++ b/commands/xinfo-stream.md
@@ -0,0 +1,103 @@
+This command returns information about the stream stored at `<key>`.
+
+The informative details provided by this command are:
+
+* **length**: the number of entries in the stream (see `XLEN`)
+* **radix-tree-keys**: the number of keys in the underlying radix data structure
+* **radix-tree-nodes**: the number of nodes in the underlying radix data structure
+* **groups**: the number of consumer groups defined for the stream
+* **last-generated-id**: the ID of the last entry that was added to the stream
+* **first-entry**: the ID and field-value tuples of the first entry in the stream
+* **last-entry**: the ID and field-value tuples of the last entry in the stream
+
+The optional `FULL` modifier provides a more verbose reply.
+When provided, the `FULL` reply includes an **entries** array that consists of the stream entries (ID and field-value tuples) in ascending order.
+Furthermore, **groups** is also an array, and for each of the consumer groups it consists of the information reported by `XINFO GROUPS` and `XINFO CONSUMERS`.
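`XINFO` replies like the one above are flat arrays of alternating field names and values, and clients are advised not to rely on field positions or counts. A tiny helper (illustrative, not from any client library) can fold such a reply into a dictionary so fields are looked up by name:

```python
def fold_reply(flat):
    """Fold a flat [name, value, name, value, ...] XINFO-style reply
    into a dict, so callers look fields up by name, not position."""
    it = iter(flat)
    return dict(zip(it, it))  # consumes the iterator in consecutive pairs

group = fold_reply(["name", "mygroup",
                    "consumers", 2,
                    "pending", 2,
                    "last-delivered-id", "1588152489012-0"])
print(group["pending"])            # 2
print(group["last-delivered-id"])  # 1588152489012-0
```

Because lookups are by name, code written this way keeps working if future Redis versions append new fields to the reply.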
+ +The `COUNT` option can be used to limit the number of stream and PEL entries that are returned (The first `` entries are returned). +The default `COUNT` is 10 and a `COUNT` of 0 means that all entries will be returned (execution time may be long if the stream has a lot of entries). + +@return + +@array-reply: a list of informational bits + +@examples + +Default reply: + +``` +> XINFO STREAM mystream + 1) length + 2) (integer) 2 + 3) radix-tree-keys + 4) (integer) 1 + 5) radix-tree-nodes + 6) (integer) 2 + 7) groups + 8) (integer) 2 + 9) last-generated-id +10) 1538385846314-0 +11) first-entry +12) 1) 1538385820729-0 + 2) 1) "foo" + 2) "bar" +13) last-entry +14) 1) 1538385846314-0 + 2) 1) "field" + 2) "value" +``` + +Full reply: + +``` +> XADD mystream * foo bar +"1588152471065-0" +> XADD mystream * foo bar2 +"1588152473531-0" +> XGROUP CREATE mystream mygroup 0-0 +OK +> XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS mystream > +1) 1) "mystream" + 2) 1) 1) "1588152471065-0" + 2) 1) "foo" + 2) "bar" +> XINFO STREAM mystream FULL + 1) "length" + 2) (integer) 2 + 3) "radix-tree-keys" + 4) (integer) 1 + 5) "radix-tree-nodes" + 6) (integer) 2 + 7) "last-generated-id" + 8) "1588152473531-0" + 9) "entries" +10) 1) 1) "1588152471065-0" + 2) 1) "foo" + 2) "bar" + 2) 1) "1588152473531-0" + 2) 1) "foo" + 2) "bar2" +11) "groups" +12) 1) 1) "name" + 2) "mygroup" + 3) "last-delivered-id" + 4) "1588152471065-0" + 5) "pel-count" + 6) (integer) 1 + 7) "pending" + 8) 1) 1) "1588152471065-0" + 2) "Alice" + 3) (integer) 1588152520299 + 4) (integer) 1 + 9) "consumers" + 10) 1) 1) "name" + 2) "Alice" + 3) "seen-time" + 4) (integer) 1588152520299 + 5) "pel-count" + 6) (integer) 1 + 7) "pending" + 8) 1) 1) "1588152471065-0" + 2) (integer) 1588152520299 + 3) (integer) 1 +``` diff --git a/commands/xinfo.md b/commands/xinfo.md deleted file mode 100644 index 2e27fcba58..0000000000 --- a/commands/xinfo.md +++ /dev/null @@ -1,184 +0,0 @@ -This is an introspection command used in order to 
retrieve different information -about the streams and associated consumer groups. Three forms are possible: - -* `XINFO STREAM ` - -In this form the command returns general information about the stream stored -at the specified key. - -``` -> XINFO STREAM mystream - 1) length - 2) (integer) 2 - 3) radix-tree-keys - 4) (integer) 1 - 5) radix-tree-nodes - 6) (integer) 2 - 7) groups - 8) (integer) 2 - 9) last-generated-id -10) 1538385846314-0 -11) first-entry -12) 1) 1538385820729-0 - 2) 1) "foo" - 2) "bar" -13) last-entry -14) 1) 1538385846314-0 - 2) 1) "field" - 2) "value" -``` - -In the above example you can see that the reported information are the number -of elements of the stream, details about the radix tree representing the -stream mostly useful for optimization and debugging tasks, the number of -consumer groups associated with the stream, the last generated ID that may -not be the same as the last entry ID in case some entry was deleted. Finally -the full first and last entry in the stream are shown, in order to give some -sense about what is the stream content. - -* `XINFO STREAM FULL [COUNT ]` - -In this form the command returns the entire state of the stream, including -entries, groups, consumers and Pending Entries Lists (PELs). -This form is available since Redis 6.0. 
- -``` -> XADD mystream * foo bar -"1588152471065-0" -> XADD mystream * foo bar2 -"1588152473531-0" -> XGROUP CREATE mystream mygroup 0-0 -OK -> XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS mystream > -1) 1) "mystream" - 2) 1) 1) "1588152471065-0" - 2) 1) "foo" - 2) "bar" -> XINFO STREAM mystream FULL - 1) "length" - 2) (integer) 2 - 3) "radix-tree-keys" - 4) (integer) 1 - 5) "radix-tree-nodes" - 6) (integer) 2 - 7) "last-generated-id" - 8) "1588152473531-0" - 9) "entries" -10) 1) 1) "1588152471065-0" - 2) 1) "foo" - 2) "bar" - 2) 1) "1588152473531-0" - 2) 1) "foo" - 2) "bar2" -11) "groups" -12) 1) 1) "name" - 2) "mygroup" - 3) "last-delivered-id" - 4) "1588152471065-0" - 5) "pel-count" - 6) (integer) 1 - 7) "pending" - 8) 1) 1) "1588152471065-0" - 2) "Alice" - 3) (integer) 1588152520299 - 4) (integer) 1 - 9) "consumers" - 10) 1) 1) "name" - 2) "Alice" - 3) "seen-time" - 4) (integer) 1588152520299 - 5) "pel-count" - 6) (integer) 1 - 7) "pending" - 8) 1) 1) "1588152471065-0" - 2) (integer) 1588152520299 - 3) (integer) 1 -``` - -The reported information contains all of the fields reported by the simple -form of `XINFO STREAM`, with some additional information: - -1. Stream entries are returned, including fields and values. -2. Groups, consumers and PELs are returned. - -The `COUNT` option is used to limit the amount of stream/PEL entries that are -returned (The first `` entries are returned). 
The default `COUNT` is 10 and -a `COUNT` of 0 means that all entries will be returned (Execution time may be -long if the stream has a lot of entries) - -* `XINFO GROUPS ` - -In this form we just get as output all the consumer groups associated with the -stream: - -``` -> XINFO GROUPS mystream -1) 1) name - 2) "mygroup" - 3) consumers - 4) (integer) 2 - 5) pending - 6) (integer) 2 - 7) last-delivered-id - 8) "1588152489012-0" -2) 1) name - 2) "some-other-group" - 3) consumers - 4) (integer) 1 - 5) pending - 6) (integer) 0 - 7) last-delivered-id - 8) "1588152498034-0" -``` - -For each consumer group listed the command also shows the number of consumers -known in that group and the pending messages (delivered but not yet acknowledged) -in that group. - -* `XINFO CONSUMERS ` - -Finally it is possible to get the list of every consumer in a specific consumer -group: - -``` -> XINFO CONSUMERS mystream mygroup -1) 1) name - 2) "Alice" - 3) pending - 4) (integer) 1 - 5) idle - 6) (integer) 9104628 -2) 1) name - 2) "Bob" - 3) pending - 4) (integer) 1 - 5) idle - 6) (integer) 83841983 -``` - -We can see the idle time in milliseconds (last field) together with the -consumer name and the number of pending messages for this specific -consumer. - -**Note that you should not rely on the fields exact position**, nor on the -number of fields, new fields may be added in the future. So a well behaving -client should fetch the whole list, and report it to the user, for example, -as a dictionary data structure. Low level clients such as C clients where -the items will likely be reported back in a linear array should document -that the order is undefined. - -Finally it is possible to get help from the command, in case the user can't -remember the exact syntax, by using the `HELP` subcommand: - -``` -> XINFO HELP -1) XINFO arg arg ... arg. Subcommands are: -2) CONSUMERS -- Show consumer groups of group . -3) GROUPS -- Show the stream consumer groups. 
-4) STREAM -- Show information about the stream. -5) HELP -``` - -@history - -* `>= 6.0.0`: Added the `FULL` option to `XINFO STREAM`. diff --git a/wordlist b/wordlist index 9faaeee186..616e31b516 100644 --- a/wordlist +++ b/wordlist @@ -395,6 +395,7 @@ trib tuple tuples unary +unclaimable underflows unencrypted unguessable From abdcf4b78085183e3d41f3585850d618d7be3e70 Mon Sep 17 00:00:00 2001 From: Binbin Date: Wed, 17 Nov 2021 01:21:43 +0800 Subject: [PATCH 075/813] Minor fixes in sentinel documentation. (#1681) --- topics/sentinel.md | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/topics/sentinel.md b/topics/sentinel.md index 5a20315c3f..e743f9fde7 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -8,7 +8,7 @@ without human intervention certain kinds of failures. Redis Sentinel also provides other collateral tasks such as monitoring, notifications and acts as a configuration provider for clients. -This is the full list of Sentinel capabilities at a macroscopical level (i.e. the *big picture*): +This is the full list of Sentinel capabilities at a macroscopic level (i.e. the *big picture*): * **Monitoring**. Sentinel constantly checks if your master and replica instances are working as expected. * **Notification**. Sentinel can notify the system administrator, or other computer programs, via an API, that something is wrong with one of the monitored Redis instances. @@ -207,7 +207,7 @@ Network partitions are shown as interrupted lines using slashes: Also note that: * Masters are called M1, M2, M3, ..., Mn. -* replicas are called R1, R2, R3, ..., Rn (R stands for *replica*). +* Replicas are called R1, R2, R3, ..., Rn (R stands for *replica*). * Sentinels are called S1, S2, S3, ..., Sn. * Clients are called C1, C2, C3, ..., Cn. * When an instance changes role because of Sentinel actions, we put it inside square brackets, so [M1] means an instance that is now a master because of Sentinel intervention. 
@@ -270,7 +270,7 @@ be able to authorize a failover, making clients able to continue. In every Sentinel setup, as Redis uses asynchronous replication, there is always the risk of losing some writes because a given acknowledged write may not be able to reach the replica which is promoted to master. However in -the above setup there is an higher risk due to clients being partitioned away +the above setup there is a higher risk due to clients being partitioned away with an old master, like in the following picture: +----+ @@ -617,7 +617,7 @@ The `SENTINEL` command is the main API for Sentinel. The following is the list o * **SENTINEL CONFIG GET ``** (`>= 6.2`) Get the current value of a global Sentinel configuration parameter. The specified name may be a wildcard, similar to the Redis `CONFIG GET` command. * **SENTINEL CONFIG SET `` ``** (`>= 6.2`) Set the value of a global Sentinel configuration parameter. * **SENTINEL CKQUORUM ``** Check if the current Sentinel configuration is able to reach the quorum needed to failover a master, and the majority needed to authorize the failover. This command should be used in monitoring systems to check if a Sentinel deployment is ok. -* **SENTINEL FLUSHCONFIG** Force Sentinel to rewrite its configuration on disk, including the current Sentinel state. Normally Sentinel rewrites the configuration every time something changes in its state (in the context of the subset of the state which is persisted on disk across restart). However sometimes it is possible that the configuration file is lost because of operation errors, disk failures, package upgrade scripts or configuration managers. In those cases a way to to force Sentinel to rewrite the configuration file is handy. This command works even if the previous configuration file is completely missing. +* **SENTINEL FLUSHCONFIG** Force Sentinel to rewrite its configuration on disk, including the current Sentinel state. 
Normally Sentinel rewrites the configuration every time something changes in its state (in the context of the subset of the state which is persisted on disk across restart). However sometimes it is possible that the configuration file is lost because of operation errors, disk failures, package upgrade scripts or configuration managers. In those cases a way to force Sentinel to rewrite the configuration file is handy. This command works even if the previous configuration file is completely missing. * **SENTINEL FAILOVER ``** Force a failover as if the master was not reachable, and without asking for agreement to other Sentinels (however a new version of the configuration will be published so that the other Sentinels will update their configurations). * **SENTINEL GET-MASTER-ADDR-BY-NAME ``** Return the ip and port number of the master with that name. If a failover is in progress or terminated successfully for this master it returns the address and port of the promoted replica. * **SENTINEL INFO-CACHE** (`>= 3.2`) Return cached `INFO` output from masters and replicas. @@ -638,7 +638,7 @@ For connection management and administration purposes, Sentinel supports the fol * **ACL** (`>= 6.2`) This command manages the Sentinel Access Control List. For more information refer to the [ACL](/topics/acl) documentation page and the [_Sentinel Access Control List authentication_](#sentinel-access-control-list-authentication). * **AUTH** (`>= 5.0.1`) Authenticate a client connection. For more information refer to the `AUTH` command and the [_Configuring Sentinel instances with authentication_ section](#configuring-sentinel-instances-with-authentication). -* **CLIENT** This command manages client connections. For more information refer to the its subcommands' pages. +* **CLIENT** This command manages client connections. For more information refer to its subcommands' pages. * **COMMAND** (`>= 6.2`) This command returns information about commands. 
For more information refer to the `COMMAND` command and its various subcommands. * **HELLO** (`>= 6.0`) Switch the connection's protocol. For more information refer to the `HELLO` command. * **INFO** Return information and statistics about the Sentinel server. For more information see the `INFO` command. @@ -651,11 +651,11 @@ Lastly, Sentinel also supports the `SUBSCRIBE`, `UNSUBSCRIBE`, `PSUBSCRIBE` and Reconfiguring Sentinel at Runtime --- -Starting with Redis version 2.8.4, Sentinel provides an API in order to add, remove, or change the configuration of a given master. Note that if you have multiple sentinels you should apply the changes to all to your instances for Redis Sentinel to work properly. This means that changing the configuration of a single Sentinel does not automatically propagates the changes to the other Sentinels in the network. +Starting with Redis version 2.8.4, Sentinel provides an API in order to add, remove, or change the configuration of a given master. Note that if you have multiple sentinels you should apply the changes to all to your instances for Redis Sentinel to work properly. This means that changing the configuration of a single Sentinel does not automatically propagate the changes to the other Sentinels in the network. The following is a list of `SENTINEL` subcommands used in order to update the configuration of a Sentinel instance. -* **SENTINEL MONITOR `` `` `` ``** This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. It is identical to the `sentinel monitor` configuration directive in `sentinel.conf` configuration file, with the difference that you can't use an hostname in as `ip`, but you need to provide an IPv4 or IPv6 address. +* **SENTINEL MONITOR `` `` `` ``** This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. 
It is identical to the `sentinel monitor` configuration directive in `sentinel.conf` configuration file, with the difference that you can't use a hostname in as `ip`, but you need to provide an IPv4 or IPv6 address. * **SENTINEL REMOVE ``** is used in order to remove the specified master: the master will no longer be monitored, and will totally be removed from the internal state of the Sentinel, so it will no longer listed by `SENTINEL masters` and so forth. * **SENTINEL SET `` [`