From e9116958becab7d3e2022fae0d0c07af729ca416 Mon Sep 17 00:00:00 2001 From: ienumerable Date: Wed, 4 Jan 2012 19:41:32 -0800 Subject: [PATCH 0001/2880] Some grammar fixes, though this one could probably use a little more love at some point. --- commands/slaveof.md | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) diff --git a/commands/slaveof.md b/commands/slaveof.md index 04eb0c8ffa..d751ca076b 100644 --- a/commands/slaveof.md +++ b/commands/slaveof.md @@ -1,19 +1,18 @@ - The `SLAVEOF` command can change the replication settings of a slave on the fly. If a Redis server is already acting as slave, the command `SLAVEOF` NO ONE -will turn off the replication turning the Redis server into a MASTER. -In the proper form `SLAVEOF` hostname port will make the server a slave of the -specific server listening at the specified hostname and port. +will turn off the replication, turning the Redis server into a MASTER. +In the proper form `SLAVEOF` hostname port will make the server a slave of another +server listening at the specified hostname and port. If a server is already a slave of some master, `SLAVEOF` hostname port will stop the replication against the old server and start the synchronization -against the new one discarding the old dataset. +against the new one, discarding the old dataset. -The form `SLAVEOF` no one will stop replication turning the server into a -MASTER but will not discard the replication. So if the old master stop working +The form `SLAVEOF` NO ONE will stop replication, turning the server into a +MASTER, but will not discard the replication. So, if the old master stops working, it is possible to turn the slave into a master and set the application to -use the new master in read/write. Later when the other Redis server will be -fixed it can be configured in order to work as slave. +use this new master in read/write. Later when the other Redis server is +fixed, it can be reconfigured to work as a slave. 
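As an illustration (the ports and prompt are placeholders for two local instances), turning a server into a slave and then promoting it back to a master from `redis-cli` looks like this:

```
redis 127.0.0.1:6380> SLAVEOF 127.0.0.1 6379
OK
redis 127.0.0.1:6380> SLAVEOF NO ONE
OK
```

After the first command the instance on port 6380 discards its dataset and synchronizes from 6379; after the second it stops replicating and serves its current dataset as a master.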
@return From 89c9eb5c62c7fed3d7b78d19f19136b5c66e02c8 Mon Sep 17 00:00:00 2001 From: quantmind Date: Fri, 20 Jan 2012 16:58:06 +0000 Subject: [PATCH 0002/2880] time complexity of ZREM --- commands/zrem.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/zrem.md b/commands/zrem.md index b466fa1889..2f0f71b7f6 100644 --- a/commands/zrem.md +++ b/commands/zrem.md @@ -1,6 +1,6 @@ @complexity -O(log(N)) with N being the number of elements in the sorted set. +O(M log(N)) with N being the number of elements in the sorted set and M the number of elements to be removed. Removes the specified members from the sorted set stored at `key`. Non existing members are ignored. From ed7c2e7cab1ae4c1f8ed8c8fb670ba06e073eeaf Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 30 Jan 2012 16:29:51 +0100 Subject: [PATCH 0003/2880] Debugging page draft added --- topics/debugging.md | 183 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 183 insertions(+) create mode 100644 topics/debugging.md diff --git a/topics/debugging.md b/topics/debugging.md new file mode 100644 index 0000000000..e140ecc5d7 --- /dev/null +++ b/topics/debugging.md @@ -0,0 +1,183 @@ +Redis debugging guide +=== + +Redis is developed with a great stress on stability: we do our best with +every release to make sure you'll experience a very stable product and no +crashes. However even with our best efforts it is impossible to avoid all +the critical bugs with 100% of success. + +When Redis crashes it produces a detailed report of what happened, however +sometimes looking at the crash report is not enough, nor it is possible for +the Redis core team to reproduce the issue independently: in this scenario we +need help from the user that is able to reproduce the issue. + +This little guide shows how to use GDB to provide all the informations the +Redis developers will need to track the bug more easily. + +What is GDB? 
+------------ + +GDB is the Gnu Debugger: a program that is able to inspect the internal state +of another program. Usually tracking and fixing a bug is an exercise in +gathering more informations about the state of the program at the moment the +bug happens, so GDB is an extremely useful bug. + +GDB can be used in two ways: + ++ It can attach to a running program and inspect the state of it at runtime. ++ It can inspect the state of a program that already terminated using what is called a *core file*, that is, the image of the memory at the time the program was running. + +From the point of view of investigating Redis bugs we need to use both this +GDB modes: the user able to reproduce the bug attaches GDB to his running Redis instance, and when the crash happens, he creates the `core` file that the in turn the developer will use to inspect the Redis internals at the time of the crash. + +This way the developer can perform all the inspections in his computer without the help of the user, and the user is free to restart Redis in the production environment. + +Compiling Redis without optimizations +------------------------------------- + +By default Redis is compiled with the `-O2` switch, this means that compiler +optimizations are enabled. This makes the Redis executable faster, but at the +same time it makes Redis (like any other program) harder to inspect using GDB. + +It is better to attach GDB to Redis compiled without optimizations using the +`make noopt` command to compile it (instead of just using the plain `make` +command). However if you have an already running Redis in production there is +no need to recompile and restart it if this is going to create problems in +your side. Even if at a lesser extend GDB still works against executables +compiled with optimizations. + +It is great if you make sure to recompile Redis with `make noopt` after the +first crash, so that the next time it will be simpler to track the issue. 
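As a sketch, assuming a checked-out Redis source tree, the recompilation described above boils down to:

```
make clean
make noopt     # like plain "make", but with compiler optimizations disabled
```

The resulting `redis-server` binary is the one to run (or install) before attaching GDB.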
+ +You should not be concerned with the loss of performances compiling Redis +without optimizations, it is very unlikely that this will cause problems in +your environment since it is usually just a matter of a small percentage +because Redis is not very CPU-bound (it does a lot of I/O to serve queries). + +Attaching GDB to a running process +---------------------------------- + +If you have an already running Redis server, you can attach GDB to it, so that +if Redis will crash it will be possible to both inspect the internals and +generate a `core dump` file. + +After you attach GDB to the Redis process it will continue running as usually without any loss of performance, so this is not a dangerous procedure. + +In order to attach GDB the first thing you need is the *process ID* of the running Redis instance (the *pid* of the process). You can easily obtain it using `redis-cli`: + + $ redis-cli info | grep process_id + process_id:58414 + +In the above example the process ID is **58414**. + ++ Login into your Redis server. ++ (Optional but recommended) Start **screen** or **tmux** or any other program that will make sure that your GDB session will not be closed if your ssh connection will timeout. If you don't know what screen is does yourself a favour and [Read this article](http://www.linuxjournal.com/article/6340) ++ Attach GDB to the running Redis server typing: + + gdb + + For example: gdb /usr/local/bin/redis-server 58414 + +GDB will start and will attach to the running server printing something like the followig: + + Reading symbols for shared libraries + done + 0x00007fff8d4797e6 in epoll_wait () + (gdb) ++ At this point GDB is attached but **your Redis instance** is blocked by GDB. In order to let the Redis instance continue the execution just type **continue** at the GDB prompt, and press enter: + + (gdb) continue + Continuing. ++ Done! Now your Redis instance has GDB attached. All you need is to wait for a rash... 
++ Now it's time to detach from your screen / tmux session if you are running GDB using it, pressing the usual **Ctrl-a a** key combination. + +After the crash +--------------- + +Redis has a command to simulate a segmentation fault (in other words a bad +crash) using the `DEBUG SEGFAULT` command (don't use it against a real production instance of course ;). So I'll use this command to crash my instance to show what happens in the GDB side: + + (gdb) continue + Continuing. + + Program received signal EXC_BAD_ACCESS, Could not access memory. + Reason: KERN_INVALID_ADDRESS at address: 0xffffffffffffffff + debugCommand (c=0x7ffc32005000) at debug.c:220 + 220 *((char*)-1) = 'x'; + +As you can see GDB detected that Redis crashed, and was able to show me +even the file name and line number causing the crash. This is already much +better than the Redis crash report back trace, that contains just function +names and binary offsets. + +Obtaining the stack trace +------------------------- + +The first thing to do is to obtain a full stack trace with GDB. 
This is as +simple as using the **bt** command: (that is a short for backtrace): + + (gdb) bt + #0 debugCommand (c=0x7ffc32005000) at debug.c:220 + #1 0x000000010d246d63 in call (c=0x7ffc32005000) at redis.c:1163 + #2 0x000000010d247290 in processCommand (c=0x7ffc32005000) at redis.c:1305 + #3 0x000000010d251660 in processInputBuffer (c=0x7ffc32005000) at networking.c:959 + #4 0x000000010d251872 in readQueryFromClient (el=0x0, fd=5, privdata=0x7fff76f1c0b0, mask=220924512) at networking.c:1021 + #5 0x000000010d243523 in aeProcessEvents (eventLoop=0x7fff6ce408d0, flags=220829559) at ae.c:352 + #6 0x000000010d24373b in aeMain (eventLoop=0x10d429ef0) at ae.c:397 + #7 0x000000010d2494ff in main (argc=1, argv=0x10d2b2900) at redis.c:2046 + +This shows the backtrace, but we also want to dump the processor registers using the **info registers** command: + + (gdb) info registers + rax 0x0 0 + rbx 0x7ffc32005000 140721147367424 + rcx 0x10d2b0a60 4515891808 + rdx 0x7fff76f1c0b0 140735188943024 + rsi 0x10d299777 4515796855 + rdi 0x0 0 + rbp 0x7fff6ce40730 0x7fff6ce40730 + rsp 0x7fff6ce40650 0x7fff6ce40650 + r8 0x4f26b3f7 1327936503 + r9 0x7fff6ce40718 140735020271384 + r10 0x81 129 + r11 0x10d430398 4517462936 + r12 0x4b7c04f8babc0 1327936503000000 + r13 0x10d3350a0 4516434080 + r14 0x10d42d9f0 4517452272 + r15 0x10d430398 4517462936 + rip 0x10d26cfd4 0x10d26cfd4 + eflags 0x10246 66118 + cs 0x2b 43 + ss 0x0 0 + ds 0x0 0 + es 0x0 0 + fs 0x0 0 + gs 0x0 0 + +Please **make sure to include** both this outputs in your bug report. + +Obtaining the core file +----------------------- + +The next step is to generate the core dump, that is the image of the memory of the running Redis process. 
This is performed using the 'gcore' command: + + (gdb) gcore + Saved corefile core.58414 + +Now you have the core dump to send to the Redis developer, but **it is important to understand** that this happens to contain all the data that was inside the Redis instance at the time of the crash: Redis developers will make sure to don't share the content with any other, and will delete the file as soon as it is no longer used for debugging purposes, but you are warned that sending the core file you are sending your data. + +If there are sensible stuff in the data set we suggest sending the dump directly to Salvatore Sanfilippo (that is the guy writing this doc) at the email address **antirez at gmail dot com**. + +What to send to developers +-------------------------- + +Finally you can send everything to the Redis core team: + ++ The Redis executable you are using. ++ The stack trace produced by the **bt** command, and the registers dump. ++ The core file you generated with gdb. ++ Informations about the operating system and GCC version, and Redis version you are using. + +Thank you +--------- + +Your help is extremely important! Many issues can only be tracked this way, thanks! It is also possible that helping Redis debugging you'll be among the winners of the next [Redis Moka Award](http://antirez.com/post/redis-moka-awards-2011.html). From 08b98d7a4ca0e29bd4a0dec3f40cd7143170d8e0 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 30 Jan 2012 16:34:35 +0100 Subject: [PATCH 0004/2880] grammar fixes to debugging page --- topics/debugging.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/debugging.md b/topics/debugging.md index e140ecc5d7..7f2a945742 100644 --- a/topics/debugging.md +++ b/topics/debugging.md @@ -42,8 +42,8 @@ same time it makes Redis (like any other program) harder to inspect using GDB. 
It is better to attach GDB to Redis compiled without optimizations using the `make noopt` command to compile it (instead of just using the plain `make` command). However if you have an already running Redis in production there is -no need to recompile and restart it if this is going to create problems in -your side. Even if at a lesser extend GDB still works against executables +no need to recompile and restart it if this is going to create problems on +your side. Even if by a lesser extent GDB still works against executables compiled with optimizations. It is great if you make sure to recompile Redis with `make noopt` after the From 89683ec4aa6b6d77198d70dfe9c37820c99f4e0b Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 30 Jan 2012 16:36:51 +0100 Subject: [PATCH 0005/2880] typo fixed --- topics/debugging.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/debugging.md b/topics/debugging.md index 7f2a945742..cdca3a5c28 100644 --- a/topics/debugging.md +++ b/topics/debugging.md @@ -71,7 +71,7 @@ In order to attach GDB the first thing you need is the *process ID* of the runni In the above example the process ID is **58414**. + Login into your Redis server. -+ (Optional but recommended) Start **screen** or **tmux** or any other program that will make sure that your GDB session will not be closed if your ssh connection will timeout. If you don't know what screen is does yourself a favour and [Read this article](http://www.linuxjournal.com/article/6340) ++ (Optional but recommended) Start **screen** or **tmux** or any other program that will make sure that your GDB session will not be closed if your ssh connection will timeout. 
If you don't know what screen is do yourself a favour and [Read this article](http://www.linuxjournal.com/article/6340) + Attach GDB to the running Redis server typing: gdb From 69497460818874153921103966f47c297a873818 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 30 Jan 2012 16:37:45 +0100 Subject: [PATCH 0006/2880] markdown fix --- topics/debugging.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/debugging.md b/topics/debugging.md index cdca3a5c28..2bcb5d6dee 100644 --- a/topics/debugging.md +++ b/topics/debugging.md @@ -74,7 +74,7 @@ In the above example the process ID is **58414**. + (Optional but recommended) Start **screen** or **tmux** or any other program that will make sure that your GDB session will not be closed if your ssh connection will timeout. If you don't know what screen is do yourself a favour and [Read this article](http://www.linuxjournal.com/article/6340) + Attach GDB to the running Redis server typing: - gdb + gdb `` `` For example: gdb /usr/local/bin/redis-server 58414 From 9c49ebb870115e7ae9e55e43bf472f881cad2af7 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 30 Jan 2012 16:39:15 +0100 Subject: [PATCH 0007/2880] more fixes to debugging page --- topics/debugging.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/topics/debugging.md b/topics/debugging.md index 2bcb5d6dee..58c1252533 100644 --- a/topics/debugging.md +++ b/topics/debugging.md @@ -83,10 +83,12 @@ GDB will start and will attach to the running server printing something like the Reading symbols for shared libraries + done 0x00007fff8d4797e6 in epoll_wait () (gdb) -+ At this point GDB is attached but **your Redis instance** is blocked by GDB. In order to let the Redis instance continue the execution just type **continue** at the GDB prompt, and press enter: + ++ At this point GDB is attached but **your Redis instance is blocked by GDB**. 
In order to let the Redis instance continue the execution just type **continue** at the GDB prompt, and press enter: (gdb) continue Continuing. + + Done! Now your Redis instance has GDB attached. All you need is to wait for a rash... + Now it's time to detach from your screen / tmux session if you are running GDB using it, pressing the usual **Ctrl-a a** key combination. From 552d7fb8ac3e417e952246abd29f0320b2085c3d Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 30 Jan 2012 16:41:39 +0100 Subject: [PATCH 0008/2880] markdown fix again --- topics/debugging.md | 1 + 1 file changed, 1 insertion(+) diff --git a/topics/debugging.md b/topics/debugging.md index 58c1252533..c1bee644d4 100644 --- a/topics/debugging.md +++ b/topics/debugging.md @@ -86,6 +86,7 @@ GDB will start and will attach to the running server printing something like the + At this point GDB is attached but **your Redis instance is blocked by GDB**. In order to let the Redis instance continue the execution just type **continue** at the GDB prompt, and press enter: + (gdb) continue Continuing. From 06d6752a247777766a1a1786c91fb38f30a4b234 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 30 Jan 2012 16:42:29 +0100 Subject: [PATCH 0009/2880] check with an additional newline... --- topics/debugging.md | 1 + 1 file changed, 1 insertion(+) diff --git a/topics/debugging.md b/topics/debugging.md index c1bee644d4..462929bbde 100644 --- a/topics/debugging.md +++ b/topics/debugging.md @@ -90,6 +90,7 @@ GDB will start and will attach to the running server printing something like the (gdb) continue Continuing. + + Done! Now your Redis instance has GDB attached. All you need is to wait for a rash... + Now it's time to detach from your screen / tmux session if you are running GDB using it, pressing the usual **Ctrl-a a** key combination. 
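If the attach step is scripted rather than done by hand, the *pid* can be parsed out of the `INFO` output programmatically; a minimal sketch (the sample `INFO` text and the function name are illustrative, not part of Redis):

```python
def parse_process_id(info_text):
    """Return the process_id field from the raw output of the Redis INFO command."""
    for line in info_text.splitlines():
        if line.startswith("process_id:"):
            # int() tolerates the trailing \r left over from the protocol's CRLF lines
            return int(line.split(":", 1)[1])
    raise ValueError("no process_id field in INFO output")
```

Feeding this the output of `redis-cli info` yields the pid to pass to `gdb` on the command line.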
From 6a200db8007f5eb1d8675f003a80dbde8da2a2ea Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 30 Jan 2012 16:44:47 +0100 Subject: [PATCH 0010/2880] markdown fix --- topics/debugging.md | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/topics/debugging.md b/topics/debugging.md index 462929bbde..dfb76fcaa7 100644 --- a/topics/debugging.md +++ b/topics/debugging.md @@ -84,13 +84,11 @@ GDB will start and will attach to the running server printing something like the 0x00007fff8d4797e6 in epoll_wait () (gdb) -+ At this point GDB is attached but **your Redis instance is blocked by GDB**. In order to let the Redis instance continue the execution just type **continue** at the GDB prompt, and press enter: - ++ At this point GDB is attached but **your Redis instance is blocked by GDB**. In order to let the Redis instance continue the execution just type **continue** at the GDB prompt, and press enter. (gdb) continue Continuing. - + Done! Now your Redis instance has GDB attached. All you need is to wait for a rash... + Now it's time to detach from your screen / tmux session if you are running GDB using it, pressing the usual **Ctrl-a a** key combination. From 1add2bb4f6b0a212bac1d5d36cf127818cfe114f Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 30 Jan 2012 16:47:27 +0100 Subject: [PATCH 0011/2880] debugging.md fixes --- topics/debugging.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/debugging.md b/topics/debugging.md index dfb76fcaa7..f2ccc6fe71 100644 --- a/topics/debugging.md +++ b/topics/debugging.md @@ -89,8 +89,8 @@ GDB will start and will attach to the running server printing something like the (gdb) continue Continuing. -+ Done! Now your Redis instance has GDB attached. All you need is to wait for a rash... -+ Now it's time to detach from your screen / tmux session if you are running GDB using it, pressing the usual **Ctrl-a a** key combination. ++ Done! Now your Redis instance has GDB attached. 
You can wait for... the next crash :) ++ Now it's time to detach your screen / tmux session, if you are running GDB using it, pressing the usual **Ctrl-a a** key combination. After the crash --------------- From accd05a140918dc24a21cc2f64fb45964a2b802e Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 30 Jan 2012 16:52:22 +0100 Subject: [PATCH 0012/2880] debugging.md fixes --- topics/debugging.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/debugging.md b/topics/debugging.md index f2ccc6fe71..ed9f7c1f10 100644 --- a/topics/debugging.md +++ b/topics/debugging.md @@ -108,8 +108,8 @@ crash) using the `DEBUG SEGFAULT` command (don't use it against a real productio As you can see GDB detected that Redis crashed, and was able to show me even the file name and line number causing the crash. This is already much -better than the Redis crash report back trace, that contains just function -names and binary offsets. +better than the Redis crash report back trace (containing just function +names and binary offsets). Obtaining the stack trace ------------------------- @@ -160,7 +160,7 @@ Please **make sure to include** both this outputs in your bug report. Obtaining the core file ----------------------- -The next step is to generate the core dump, that is the image of the memory of the running Redis process. This is performed using the 'gcore' command: +The next step is to generate the core dump, that is the image of the memory of the running Redis process. 
This is performed using the `gcore` command: (gdb) gcore Saved corefile core.58414 From fc951447668f016f6f4ab52f1dd2c5a54a7e8ba0 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 30 Jan 2012 17:33:49 +0100 Subject: [PATCH 0013/2880] typo fix plus a fix for what seems a markdown parser issue --- topics/debugging.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/debugging.md b/topics/debugging.md index ed9f7c1f10..c6fcc1be8b 100644 --- a/topics/debugging.md +++ b/topics/debugging.md @@ -20,7 +20,7 @@ What is GDB? GDB is the Gnu Debugger: a program that is able to inspect the internal state of another program. Usually tracking and fixing a bug is an exercise in gathering more informations about the state of the program at the moment the -bug happens, so GDB is an extremely useful bug. +bug happens, so GDB is an extremely useful tool. GDB can be used in two ways: @@ -86,8 +86,8 @@ GDB will start and will attach to the running server printing something like the + At this point GDB is attached but **your Redis instance is blocked by GDB**. In order to let the Redis instance continue the execution just type **continue** at the GDB prompt, and press enter. - (gdb) continue - Continuing. + (gdb) continue + Continuing. + Done! Now your Redis instance has GDB attached. You can wait for... the next crash :) + Now it's time to detach your screen / tmux session, if you are running GDB using it, pressing the usual **Ctrl-a a** key combination. From 95b78a63ecdd8ff1cd490f87bacfbf9b07a5a32a Mon Sep 17 00:00:00 2001 From: Zhehao Mao Date: Tue, 31 Jan 2012 10:08:48 -0500 Subject: [PATCH 0014/2880] Edit section about errors and discuss sending integers in multibulk replies --- topics/protocol.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/topics/protocol.md b/topics/protocol.md index 625b2e6ad6..534cdf889b 100644 --- a/topics/protocol.md +++ b/topics/protocol.md @@ -95,6 +95,10 @@ client when an Error Reply is received. 
+The Redis server usually precedes error messages with "ERR". Some client libraries
+assume this, so you may wish to add "ERR" after the minus sign if you are
+writing a server implementation.
+
 Integer reply
 -------------

@@ -170,6 +174,10 @@ always `*`. Example:

 As you can see the multi bulk reply is exactly the same format used in order
 to send commands to the Redis server using the unified protocol.

+To send integers in a multibulk reply, just send a colon followed by the
+integer like you would for a regular integer reply. Do not send the size
+before sending the integer.
+
 The first line the server sent is `*4\r\n` in order to specify that four bulk
 replies will follow. Then every bulk write is transmitted.

From 34c7e17e90e1cf7d45d5086efb35efcde2ce5faa Mon Sep 17 00:00:00 2001
From: antirez
Date: Mon, 6 Feb 2012 17:05:49 +0100
Subject: [PATCH 0015/2880] Added info about latency mode of redis-cli
---
 topics/latency.md | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/topics/latency.md b/topics/latency.md
index e252ec2811..cbd9564db0 100644
--- a/topics/latency.md
+++ b/topics/latency.md
@@ -9,6 +9,16 @@ issues a command and the time the reply to the command is received by the
 client. Usually Redis processing time is extremely low, in the sub microsecond
 range, but there are certain conditions leading to higher latency figures.

+Measuring latency
+-----------------
+
+If you are experiencing latency problems, probably you know how to measure
+it in the context of your application, or maybe your latency problem is very
+evident even macroscopically.

However redis-cli can be used to measure the +latency of a Redis server in milliseconds, just try: + + redis-cli --latency -h `host` -p `port` + Latency induced by network and communication -------------------------------------------- From 65f6cd5587670b37f28c3dc2d2530f557445ccb8 Mon Sep 17 00:00:00 2001 From: Falko Peters Date: Mon, 6 Feb 2012 20:46:31 +0100 Subject: [PATCH 0016/2880] Add Haskell client 'Hedis' to clients.json. --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index 029c91e320..639b9ce2b0 100644 --- a/clients.json +++ b/clients.json @@ -82,6 +82,15 @@ "authors": ["simonz05"] }, + { + "name": "hedis", + "language": "Haskell", + "url": "http://hackage.haskell.org/package/hedis", + "repository": "https://github.com/informatikr/hedis", + "description": "Supports the complete command set. Commands are automatically pipelined for high performance.", + "authors": [] + }, + { "name": "redis", "language": "Haskell", From 083088a743de9f9d7eaeaf28b693741c02de9a49 Mon Sep 17 00:00:00 2001 From: Takahiro Hozumi Date: Wed, 8 Feb 2012 03:09:57 +0900 Subject: [PATCH 0017/2880] fix a notation error. issue#redis331 --- topics/data-types.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/data-types.md b/topics/data-types.md index bd7dde8d1f..ac4674dd12 100644 --- a/topics/data-types.md +++ b/topics/data-types.md @@ -40,7 +40,7 @@ Some example of list operations and resulting lists: LPUSH mylist b # now the list is "b","a" RPUSH mylist c # now the list is "b","a","c" (RPUSH was used this time) -The max length of a list is 2^32-1 elements (4294967295, more than 4 billion of elements per list). +The max length of a list is 2^32 - 1 elements (4294967295, more than 4 billion of elements per list). 
The main features of Redis Lists from the point of view of time complexity are the support for constant time insertion and deletion of elements near the @@ -70,7 +70,7 @@ A very interesting thing about Redis Sets is that they support a number of server side commands to compute sets starting from existing sets, so you can do unions, intersections, differences of sets in very short time. -The max number of members in a set is 2^32-1 (4294967295, more than 4 billion of members per set). +The max number of members in a set is 2^32 - 1 (4294967295, more than 4 billion of members per set). You can do many interesting things using Redis Sets, for instance you can: @@ -97,7 +97,7 @@ Redis instance. While Hashes are used mainly to represent objects, they are capable of storing many elements, so you can use Hashes for many other tasks as well. -Every hash can store up to 2^32-1 field-value pairs (more than 4 billion). +Every hash can store up to 2^32 - 1 field-value pairs (more than 4 billion). Check the [full list of Hash commands](/commands#hash) for more information. From 4a8e201b07d98f066635651420c73eaf512f604d Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 15 Feb 2012 11:39:14 +0100 Subject: [PATCH 0018/2880] string limit fixed, 512MB not 1GB. --- topics/data-types-intro.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md index ec5e1a9f2c..425d89079e 100644 --- a/topics/data-types-intro.md +++ b/topics/data-types-intro.md @@ -63,7 +63,7 @@ command](/commands/get) is trivial to set values to strings and have this strings returned back. Values can be strings (including binary data) of every kind, for instance you -can store a jpeg image inside a key. A value can't be bigger than 1 Gigabyte. +can store a jpeg image inside a key. A value can't be bigger than 512 MB. Even if strings are the basic values of Redis, there are interesting operations you can perform against them. 
For instance one is atomic increment: From b15ca0db1bd214fc98bdb634b09f6016ecf2d041 Mon Sep 17 00:00:00 2001 From: Eike Herzbach Date: Sun, 19 Feb 2012 18:17:59 +0100 Subject: [PATCH 0019/2880] Typo --- topics/persistence.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/persistence.md b/topics/persistence.md index b51fdad4b1..2f52b6cee1 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -120,7 +120,7 @@ So Redis supports an interesting feature: it is able to rebuild the AOF in the background without interrupting service to clients. Whenever you issue a `BGREWRITEAOF` Redis will write the shortest sequence of commands needed to rebuild the current dataset in memory. If you're -using the AOF with Redid 2.2 you'll need to run `BGREWRITEAOF` from time to +using the AOF with Redis 2.2 you'll need to run `BGREWRITEAOF` from time to time. Redis 2.4 is able to trigger log rewriting automatically (see the 2.4 example configuration file for more information). From 7f8203be084e49b88c9871eea34a87ff55136fb8 Mon Sep 17 00:00:00 2001 From: Colin Mollenhour Date: Sun, 19 Feb 2012 16:16:56 -0500 Subject: [PATCH 0020/2880] Add Credis to list of PHP drivers. 
--- clients.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/clients.json b/clients.json index 639b9ce2b0..b7711da68a 100644 --- a/clients.json +++ b/clients.json @@ -247,6 +247,14 @@ "authors": ["justinpoliey"] }, + { + "name": "Credis", + "language": "PHP", + "repository": "https://github.com/colinmollenhour/credis", + "description": "Lightweight, standalone, unit-tested fork of Redisent which wraps phpredis for best performance if available.", + "authors": ["colinmollenhour"] + }, + { "name": "redis-py", "language": "Python", From f8a5a9115539e0eb0f4942eaa742260413d2bb9b Mon Sep 17 00:00:00 2001 From: Jack Danger Canty Date: Tue, 21 Feb 2012 15:55:20 -0800 Subject: [PATCH 0021/2880] Typo: "servers requests" -> "serves requests" --- topics/latency.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/latency.md b/topics/latency.md index cbd9564db0..26294e94dc 100644 --- a/topics/latency.md +++ b/topics/latency.md @@ -80,7 +80,7 @@ such as reading data from or writing data to a socket. I said that Redis is *mostly* single threaded since actually from Redis 2.4 we use threads in Redis in order to perform some slow I/O operations in the background, mainly related to disk I/O, but this does not change the fact -that Redis servers all the requests using a single thread. +that Redis serves all the requests using a single thread. 
Latency generated by slow commands ---------------------------------- From 7369671c5152aee059739dc207e117febd59c937 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 29 Feb 2012 19:26:10 +0100 Subject: [PATCH 0022/2880] Security topic added --- topics/security.md | 166 +++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 166 insertions(+) create mode 100644 topics/security.md diff --git a/topics/security.md b/topics/security.md new file mode 100644 index 0000000000..a36338e571 --- /dev/null +++ b/topics/security.md @@ -0,0 +1,166 @@ +Redis Security +=== + +This document provides an introduction to the topic of security from the point of +view of Redis: the access control provided by Redis, code security concerns, +attacks that can be triggered from the outside selecting malicious inputs and +other similar topics are covered. + +Redis general security model +---- + +Redis is designed to be accessed by trusted clients inside trusted environments. +This means that usually it is not a good idea to expose the Redis instance +directly on the internet, and in general in an environment where untrusted +clients can directly access the Redis TCP port or UNIX socket. + +For instance in the common context of a web application implemented using Redis +as a database, cache, or messaging system, the clients inside the front-end +(web side) of the application will query Redis to generate the pages or +to perform the operations requested or triggered by the web application user. + +In this case the web application mediates the access between the Redis and +untrusted clients (the user browsers accessing the web application). + +This is a specific example, but in general, untrusted access to Redis should +be always mediated by a layer implementing ACLs, validating the user input, +and deciding what operations to perform against the Redis instance. + +In general Redis is not optimized for maximum security but for maximum +performances and simplicity. 
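The mediation idea described above can be made concrete with a tiny sketch; everything here (the key pattern, the function name, the allowed-id rule) is hypothetical application code, not part of Redis:

```python
import re

# Hypothetical mediation layer: the web application validates untrusted
# user input before it is ever turned into a Redis key or command.
SAFE_ID = re.compile(r"^[0-9]{1,10}$")

def profile_key(user_id):
    """Map an untrusted user id onto a fixed Redis key pattern, or refuse it."""
    if not SAFE_ID.match(user_id):
        raise ValueError("rejecting suspicious user id: %r" % user_id)
    return "user:%s:profile" % user_id
```

The application then issues only the intended commands (for example GET/SET) on keys built this way, so untrusted clients never reach the Redis port directly.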
+ +Network security +--- + +Accessing to the Redis port should be denied to everybody but trusted clients +in the network, so the servers running Redis should either be directly accessible +only by the computers implementing the application using Redis. + +In the common case of a single computer directly exposed on the internet such +as a virtualized Linux instance (Linode, EC2, ...) the Redis port should be +firewalled to prevent access from the outside. Clients will still be able to +access Redis using the loopback interface. + +Note that it is possible to bind Redis to a single interface adding a line +like the following to the **redis.conf** file: + + bind 127.0.0.1 + +Failing to protect the Redis port from the outside can have a big security +impact because of the nature of Redis. For instance a single **FLUSHALL** command +can be used by an external attack to delete the whole data set with a single +command. + +Authentication feature +--- + +While Redis does not tries to implement Access Control, nevertheless it provides +a tiny later of authentication that is optionally turned on editing the +redis.conf file. + +When the authorization layer is enabled Redis will refuse any query by +unauthenticated clients. A client can authenticate itself by sending the +**AUTH** command followed by the password. + +The password is set by the system administrator in clear inside the +redis.conf file. It should be long enough in order to prevent brute force +attacks for two reasons: + +* Redis is very fast at serving queries. Many passwords per second can be tested by an external client. +* The Redis password is stored inside the redis.conf and inside client configuration, so does not need to be remembered by the system administrator, thus it can be very long. + +The goal of the authentication layer is to optionally provide a layer of +redundancy. 
Should firewalling or any other system implemented to protect Redis +from external attackers fail for some reason an external client will still not +be able to access the Redis instance. + +The AUTH command is sent unencrypted similarly to every other Redis command, so it does not protect in the case of an attacker that has enough access to the network to perform eavesdropping. + +Data encryption support +--- + +Redis does not support encryption, so in order to implement setups where +trusted parties can access a Redis instance over the internet or other +untrusted networks an additional layer of protection should be implemented, +like for instance an SSL proxy. + +Disabling of specific commands +--- + +It is possible to disable commands in Redis, or to rename them into an unguessable +name, so that normal clients are limited to a specified set of commands. + +For instance a virtualized servers provider may provide a managed Redis instance +service. However in this context normal users should probably not be able to +call the Redis **CONFIG** command to alter the configuration of the instance, +but the systems that provide and remove instances should be able to do so. + +In this case it is possible to use a feature that makes it possible to either +rename or completely shadow commands from the command table. This feature +is available as a statement that can be used inside the redis.conf configuration +file. The following is an example: + + rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52 + +In the above example the **CONFIG** command was renamed into an unguessable name. +It is also possible to completely disable it (or any other command) renaming it +to the empty string, like in the following example: + + rename-command CONFIG "" + +Attacks triggered by carefully selected inputs by external clients +--- + +There is a class of attacks that an attacker can trigger from the outside even +without external access to the instance. 
An example of such attackers are +the ability to insert data into Redis that triggers pathological (worst case) +algorithm complexity on data structures implemented inside Redis internals. + +For instance an attacker could supply, via a web form, a set of strings that +is known to hash to the same bucket into an hash table in order to turn the +O(1) expected time (the average time) to the O(N) worst case, consuming more +CPU than expected, and ultimately causing a Denial of Service. + +To prevent this specific attack Redis uses a per-execution pseudo random +seed to the hash function. + +Redis also uses the qsort algorithm in order to implement the SORT command, +it is possible by carefully selecting the right set of inputs to trigger an +quadratic worst-case behavior of qsort since currently the algorithm is not +randomized. + +String escaping and NoSQL injection +--- + +In the Redis protocol there is no concept of string escaping, so no injecting +is possible under normal circumstances using a normal client library. +All the protocol uses prefixed-length strings and is completely binary safe. + +Lua scripts executed by the **EVAL** and **EVALSHA** commands also follow the +same rules, thus those commands are also safe. + +While it would be a very strange use case, nevertheless the application should +avoid composing the body of the Lua script using strings obtained by untrusted +sources. + +Code security +--- + +In a classical Redis setup clients are allowed to have full access to the command +set, however accessing the instance should never result into the ability to +control the system where Redis is running. + +Redis internally uses all the well known practices for writing secure code, to +prevent buffer overflows, format bugs and other memory corruption issues. +However the ability to control the server configuration using the **CONFIG** +command makes the client able to change the working dir of the program and +the name of the dump file. 
This makes clients able to write RDB Redis files +at random paths, that is a security issue that may easily lead to the ability +to run untrusted code as the same user as Redis is running. + +Redis does not requires root privileges in order to run, it is recommended to +run it as an unprivileged *redis* user that is only used for this scope. +The Redis authors are currently investigating the possibility of adding a new +configuration parameter to prevent **CONFIG SET/GET dir** and other run-time +configuration directives similar to this in order to prevent clients from +forcing the server to write Redis dump files at arbitrary locations. From 2009aa094609fbbe00622a7a10ea59d9502e6bd8 Mon Sep 17 00:00:00 2001 From: pw Date: Wed, 29 Feb 2012 12:44:52 -0700 Subject: [PATCH 0023/2880] copy-editing (small changes) --- topics/security.md | 130 ++++++++++++++++++++++----------------------- 1 file changed, 64 insertions(+), 66 deletions(-) diff --git a/topics/security.md b/topics/security.md index a36338e571..da8842b83d 100644 --- a/topics/security.md +++ b/topics/security.md @@ -3,7 +3,7 @@ Redis Security This document provides an introduction to the topic of security from the point of view of Redis: the access control provided by Redis, code security concerns, -attacks that can be triggered from the outside selecting malicious inputs and +attacks that can be triggered from the outside by selecting malicious inputs and other similar topics are covered. Redis general security model @@ -11,104 +11,104 @@ Redis general security model Redis is designed to be accessed by trusted clients inside trusted environments. This means that usually it is not a good idea to expose the Redis instance -directly on the internet, and in general in an environment where untrusted +directly to the internet or, in general, to an environment where untrusted clients can directly access the Redis TCP port or UNIX socket. 
-For instance in the common context of a web application implemented using Redis +For instance, in the common context of a web application implemented using Redis as a database, cache, or messaging system, the clients inside the front-end -(web side) of the application will query Redis to generate the pages or -to perform the operations requested or triggered by the web application user. +(web side) of the application will query Redis to generate pages or +to perform operations requested or triggered by the web application user. -In this case the web application mediates the access between the Redis and +In this case, the web application mediates access between Redis and untrusted clients (the user browsers accessing the web application). -This is a specific example, but in general, untrusted access to Redis should -be always mediated by a layer implementing ACLs, validating the user input, +This is a specific example, but, in general, untrusted access to Redis should +always be mediated by a layer implementing ACLs, validating user input, and deciding what operations to perform against the Redis instance. -In general Redis is not optimized for maximum security but for maximum -performances and simplicity. +In general, Redis is not optimized for maximum security but for maximum +performance and simplicity. Network security --- -Accessing to the Redis port should be denied to everybody but trusted clients -in the network, so the servers running Redis should either be directly accessible +Access to the Redis port should be denied to everybody but trusted clients +in the network, so the servers running Redis should be directly accessible only by the computers implementing the application using Redis. -In the common case of a single computer directly exposed on the internet such -as a virtualized Linux instance (Linode, EC2, ...) 
the Redis port should be +In the common case of a single computer directly exposed to the internet, such +as a virtualized Linux instance (Linode, EC2, ...), the Redis port should be firewalled to prevent access from the outside. Clients will still be able to access Redis using the loopback interface. -Note that it is possible to bind Redis to a single interface adding a line +Note that it is possible to bind Redis to a single interface by adding a line like the following to the **redis.conf** file: bind 127.0.0.1 Failing to protect the Redis port from the outside can have a big security -impact because of the nature of Redis. For instance a single **FLUSHALL** command -can be used by an external attack to delete the whole data set with a single -command. +impact because of the nature of Redis. For instance, a single **FLUSHALL** command +can be used by an external attacker to delete the whole data set. Authentication feature --- -While Redis does not tries to implement Access Control, nevertheless it provides -a tiny later of authentication that is optionally turned on editing the -redis.conf file. +While Redis does not try to implement Access Control, it provides +a tiny layer of authentication that is optionally turned on by editing the +**redis.conf** file. -When the authorization layer is enabled Redis will refuse any query by +When the authorization layer is enabled, Redis will refuse any query by unauthenticated clients. A client can authenticate itself by sending the **AUTH** command followed by the password. -The password is set by the system administrator in clear inside the -redis.conf file. It should be long enough in order to prevent brute force -attacks for two reasons: +The password is set by the system administrator in clear text inside the redis.conf file. It should be long enough to prevent brute force attacks +for two reasons: * Redis is very fast at serving queries. Many passwords per second can be tested by an external client. 
-* The Redis password is stored inside the redis.conf and inside client configuration, so does not need to be remembered by the system administrator, thus it can be very long. +* The Redis password is stored inside the **redis.conf** file and inside the client configuration, so it does not need to be remembered by the system administrator, and thus it can be very long. The goal of the authentication layer is to optionally provide a layer of redundancy. Should firewalling or any other system implemented to protect Redis -from external attackers fail for some reason an external client will still not -be able to access the Redis instance. +from external attackers fail, an external client will still not be able to +access the Redis instance. -The AUTH command is sent unencrypted similarly to every other Redis command, so it does not protect in the case of an attacker that has enough access to the network to perform eavesdropping. +The AUTH command, like every other Redis command, is sent unencrypted, so it +does not protect against an attacker that has enough access to the network to +perform eavesdropping. Data encryption support --- -Redis does not support encryption, so in order to implement setups where +Redis does not support encryption. In order to implement setups where trusted parties can access a Redis instance over the internet or other -untrusted networks an additional layer of protection should be implemented, -like for instance an SSL proxy. +untrusted networks, an additional layer of protection should be implemented, +such as an SSL proxy. Disabling of specific commands --- -It is possible to disable commands in Redis, or to rename them into an unguessable +It is possible to disable commands in Redis or to rename them into an unguessable name, so that normal clients are limited to a specified set of commands. -For instance a virtualized servers provider may provide a managed Redis instance -service. 
However in this context normal users should probably not be able to +For instance, a virtualized server provider may offer a managed Redis instance +service. In this context, normal users should probably not be able to call the Redis **CONFIG** command to alter the configuration of the instance, but the systems that provide and remove instances should be able to do so. -In this case it is possible to use a feature that makes it possible to either -rename or completely shadow commands from the command table. This feature -is available as a statement that can be used inside the redis.conf configuration -file. The following is an example: +In this case, it is possible to either rename or completely shadow commands from +the command table. This feature is available as a statement that can be used +inside the redis.conf configuration file. For example: rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52 -In the above example the **CONFIG** command was renamed into an unguessable name. -It is also possible to completely disable it (or any other command) renaming it +In the above example, the **CONFIG** command was renamed into an unguessable name. +It is also possible to completely disable it (or any other command) by renaming it to the empty string, like in the following example: rename-command CONFIG "" -Attacks triggered by carefully selected inputs by external clients +Attacks triggered by carefully selected inputs from external clients --- There is a class of attacks that an attacker can trigger from the outside even @@ -121,46 +121,44 @@ is known to hash to the same bucket into an hash table in order to turn the O(1) expected time (the average time) to the O(N) worst case, consuming more CPU than expected, and ultimately causing a Denial of Service. -To prevent this specific attack Redis uses a per-execution pseudo random +To prevent this specific attack, Redis uses a per-execution pseudo-random seed to the hash function. 
-Redis also uses the qsort algorithm in order to implement the SORT command, -it is possible by carefully selecting the right set of inputs to trigger an -quadratic worst-case behavior of qsort since currently the algorithm is not -randomized. +Redis implements the SORT command using the qsort algorithm. Currently, +the algorithm is not randomized, so it is possible to trigger a quadratic +worst-case behavior by carefully selecting the right set of inputs. String escaping and NoSQL injection --- -In the Redis protocol there is no concept of string escaping, so no injecting -is possible under normal circumstances using a normal client library. -All the protocol uses prefixed-length strings and is completely binary safe. +The Redis protocol has no concept of string escaping, so injection +is impossible under normal circumstances using a normal client library. +The protocol uses prefixed-length strings and is completely binary safe. -Lua scripts executed by the **EVAL** and **EVALSHA** commands also follow the -same rules, thus those commands are also safe. +Lua scripts executed by the **EVAL** and **EVALSHA** commands follow the +same rules, and thus those commands are also safe. -While it would be a very strange use case, nevertheless the application should -avoid composing the body of the Lua script using strings obtained by untrusted -sources. +While it would be a very strange use case, the application should avoid composing +the body of the Lua script using strings obtained from untrusted sources. Code security --- -In a classical Redis setup clients are allowed to have full access to the command -set, however accessing the instance should never result into the ability to -control the system where Redis is running. +In a classical Redis setup, clients are allowed full access to the command set, +but accessing the instance should never result in the ability to control the +system where Redis is running. 
-Redis internally uses all the well known practices for writing secure code, to +Internally, Redis uses all the well known practices for writing secure code, to prevent buffer overflows, format bugs and other memory corruption issues. -However the ability to control the server configuration using the **CONFIG** +However, the ability to control the server configuration using the **CONFIG** command makes the client able to change the working dir of the program and -the name of the dump file. This makes clients able to write RDB Redis files +the name of the dump file. This allows clients to write RDB Redis files at random paths, that is a security issue that may easily lead to the ability to run untrusted code as the same user as Redis is running. -Redis does not requires root privileges in order to run, it is recommended to -run it as an unprivileged *redis* user that is only used for this scope. +Redis does not require root privileges to run. It is recommended to +run it as an unprivileged *redis* user that is only used for this purpose. The Redis authors are currently investigating the possibility of adding a new -configuration parameter to prevent **CONFIG SET/GET dir** and other run-time -configuration directives similar to this in order to prevent clients from -forcing the server to write Redis dump files at arbitrary locations. +configuration parameter to prevent **CONFIG SET/GET dir** and other similar run-time +configuration directives. This would prevent clients from forcing the server to +write Redis dump files at arbitrary locations. 
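The rename-command protection discussed in the patches above depends on the new command name being unpredictable. As a side note, such a name can be derived from a few bytes of randomness; the following is a minimal sketch (the `unguessable_name` helper and the choice of SHA-1 are ours for illustration, not something Redis prescribes):

```python
import hashlib
import os

def unguessable_name(command):
    """Build a rename-command directive for redis.conf, mapping
    `command` to a random 40-character hex name in the same style
    as the SHA-1-like name shown in the security document."""
    token = hashlib.sha1(os.urandom(32)).hexdigest()
    return "rename-command %s %s" % (command, token)

# Append the resulting line to redis.conf, e.g.:
# rename-command CONFIG <40 random hex characters>
print(unguessable_name("CONFIG"))
```

Trusted administrative tools would then invoke the renamed command, while ordinary clients never learn the new name.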
From e2e6f56635883afb378b90610c24dd3f5d3c448b Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 29 Feb 2012 21:49:11 +0100 Subject: [PATCH 0024/2880] Fixed typo in security.md --- topics/security.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/security.md b/topics/security.md index a36338e571..8f5cac37f0 100644 --- a/topics/security.md +++ b/topics/security.md @@ -112,7 +112,7 @@ Attacks triggered by carefully selected inputs by external clients --- There is a class of attacks that an attacker can trigger from the outside even -without external access to the instance. An example of such attackers are +without external access to the instance. An example of such attacks is the ability to insert data into Redis that triggers pathological (worst case) algorithm complexity on data structures implemented inside Redis internals. From ab10621d8a3f9cee40d282f03324f131935d7266 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 2 Mar 2012 09:30:57 +0100 Subject: [PATCH 0025/2880] Fixed a sentence that sounded strange in English. --- topics/security.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/security.md b/topics/security.md index bd8e79274f..f06229cfae 100644 --- a/topics/security.md +++ b/topics/security.md @@ -69,9 +69,9 @@ for two reasons: * The Redis password is stored inside the **redis.conf** file and inside the client configuration, so it does not need to be remembered by the system administrator, and thus it can be very long. The goal of the authentication layer is to optionally provide a layer of -redundancy. Should firewalling or any other system implemented to protect Redis +redundancy. If firewalling or any other system implemented to protect Redis from external attackers fail, an external client will still not be able to -access the Redis instance. +access the Redis instance without knowledge of the authentication password. 
The AUTH command, like every other Redis command, is sent unencrypted, so it does not protect against an attacker that has enough access to the network to From 320bbe95493d7e5be05e77f0ed76c9549780a071 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 5 Mar 2012 12:44:13 +0100 Subject: [PATCH 0026/2880] FAQ updated. --- topics/faq.md | 296 +++++++++----------------------------------------- 1 file changed, 52 insertions(+), 244 deletions(-) diff --git a/topics/faq.md b/topics/faq.md index dfb42a0665..e20e8a442e 100644 --- a/topics/faq.md +++ b/topics/faq.md @@ -1,230 +1,72 @@ # FAQ -## Why do I need Redis instead of memcachedb, Tokyo Cabinet, ...? - -Memcachedb is basically memcached made persistent. Redis is a different -evolution path in the key-value DBs, the idea is that the main advantages of -key-value DBs are retained even without severe loss of comfort of plain -key-value DBs. So Redis offers more features: - -* Keys can store different data types, not just strings. Notably Lists and - Sets. For example if you want to use Redis as a log storage system for - different computers every computer can just `RPUSH data to the computer_ID - key`. Don't want to save more than 1000 log lines per computer? Just issue a - `LTRIM computer_ID 0 999` command to trim the list after every push. -* Another example is about Sets. Imagine to build a social news site like - [Reddit][reddit]. Every time a user upvotes a given news you can just add to - the news_ID_upmods key holding a value of type SET the id of the user that - did the upmodding. Sets can also be used to index things. Every key can be a - tag holding a SET with the IDs of all the objects associated to this tag. - Using Redis set intersection you obtain the list of IDs having all this tags - at the same time. -* We wrote a [simple Twitter Clone][retwis] using just Redis as database. 
- Download the source code from the download section and imagine to write it - with a plain key-value DB without support for lists and sets... it's *much* - harder. -* Multiple DBs. Using the SELECT command the client can select different - datasets. This is useful because Redis provides a MOVE atomic primitive that - moves a key form a DB to another one, if the target DB already contains such - a key it returns an error: this basically means a way to perform locking in - distributed processing. -* *So what is Redis really about?* The User interface with the programmer. - Redis aims to export to the programmer the right tools to model a wide range - of problems. *Sets, Lists with O(1) push operation, lrange and ltrim, - server-side fast intersection between sets, are primitives that allow to - model complex problems with a key value database*. - -[reddit]: http://reddit.com -[retwis]: http://retwis.antirez.com - -## Isn't this key-value thing just hype? - -I imagine key-value DBs, in the short term future, to be used like you use -memory in a program, with lists, hashes, and so on. With Redis it's like this, -but this special kind of memory containing your data structures is shared, -atomic, persistent. - -When we write code it is obvious, when we take data in memory, to use the most -sensible data structure for the work, right? Incredibly when data is put inside -a relational DB this is no longer true, and we create an absurd data model even -if our need is to put data and get this data back in the same order we put it -inside (an ORDER BY is required when the data should be already sorted. -Strange, don't you think?). - -Key-value DBs bring this back at home, to create sensible data models and use -the right data structures for the problem we are trying to solve. - -## Can I backup a Redis DB while the server is working? - -Yes you can. When Redis saves the DB it actually creates a temp file, then -rename(2) that temp file name to the destination file name. 
So even while the -server is working it is safe to save the database file just with the _cp_ UNIX -command. Note that you can use master-slave replication in order to have -redundancy of data, but if all you need is backups, cp or scp will do the work -pretty well. +## Why Redis is different compared to other key-value stores? + +There are two main reasons. + +* Redis is a different evolution path in the key-value DBs where values can contain more complex data types, with atomic operations defined against those data types. Redis data types are closely related to fundamental data structures and are exposed to the programmer as such, without additional abstraction layers. +* Redis is an in-memory but persistent on disk database, so it represents a different trade off where very high write and read speed is achieved with the limitation of data sets that can't be larger than memory. Another advantage of +in memory databases is that the memory representation of complex data structure +is much simpler to manipulate compared to the same data structure on disk, so +Redis can do a lot with little internal complexity. At the same time an on-disk +format that does not need to be suitable for random access is compact and +always generated in an append-only fashion. ## What's the Redis memory footprint? -Worst case scenario: 1 Million keys with the key being the natural numbers from +To give you an example: 1 Million keys with the key being the natural numbers from 0 to 999999 and the string "Hello World" as value use 100MB on my Intel MacBook (32bit). Note that the same data stored linearly in an unique string takes -something like 16MB, this is the norm because with small keys and values there -is a lot of overhead. Memcached will perform similarly. +something like 16MB, this is expected because with small keys and values there +is a lot of overhead. 
Memcached will perform similarly, but a bit better as +Redis has more overhead (type information, refcount and so forth) to represent +different kinds of objects. With large keys/values the ratio is much better of course. -64 bit systems will use much more memory than 32 bit systems to store the same -keys, especially if the keys and values are small, this is because pointers -takes 8 bytes in 64 bit systems. But of course the advantage is that you can -have a lot of memory in 64 bit systems, so to run large Redis servers a 64 bit -system is more or less required. +64 bit systems will use considerably more memory than 32 bit systems to store the same keys, especially if the keys and values are small, this is because pointers take 8 bytes in 64 bit systems. But of course the advantage is that you can +have a lot of memory in 64 bit systems, so in order to run large Redis servers a 64 bit system is more or less required. ## I like Redis high level operations and features, but I don't like that it takes everything in memory and I can't have a dataset larger the memory. Plans to change this? -Short answer: If you are using a Redis client that supports consistent hashing -you can distribute the dataset across different nodes. For instance the Ruby -clients supports this feature. There are plans to develop redis-cluster that -basically is a dummy Redis server that is only used in order to distribute the -requests among N different nodes using consistent hashing. - -## Why Redis takes the whole dataset in RAM? - -Redis takes the whole dataset in memory and writes asynchronously on disk in -order to be very fast, you have the best of both worlds: hyper-speed and -persistence of data, but the price to pay is exactly this, that the dataset -must fit on your computers RAM. - -If the data is larger then memory, and this data is stored on disk, what -happens is that the bottleneck of the disk I/O speed will start to ruin the -performances. 
Maybe not in benchmarks, but once you have real load from -multiple clients with distributed key accesses the data must come from disk, -and the disk is damn slow. Not only, but Redis supports higher level data -structures than the plain values. To implement this things on disk is even -slower. +In the past the Redis developers experimented with Virtual Memory and other systems in order to allow larger than RAM datasets, but after all we are very happy if we can do one thing well: data served from memory, disk used for storage. So for now there are no plans to create an on disk backend for Redis. Most of what +Redis is, after all, is a direct result of its current design. -Redis will always continue to hold the whole dataset in memory because this -days scalability requires to use RAM as storage media, and RAM is getting -cheaper and cheaper. Today it is common for an entry level server to have 16 GB -of RAM! And in the 64-bit era there are no longer limits to the amount of RAM -you can have in theory. +However many large users solved the issue of large datasets by distributing the data among multiple Redis nodes, using client-side hashing. **Craigslist** and **Groupon** are two examples. -Amazon EC2 now provides instances with 32 or 64 GB of RAM. +At the same time Redis Cluster, an automatically distributed and fault tolerant +implementation of a Redis subset, is a work in progress, and may be a good +solution for many use cases. ## If my dataset is too big for RAM and I don't want to use consistent hashing or other ways to distribute the dataset across different nodes, what I can do to use Redis anyway? -You may try to load a dataset larger than your memory in Redis and see what -happens, basically if you are using a modern Operating System, and you have a -lot of data in the DB that is rarely accessed, the OS's virtual memory -implementation will try to swap rarely used pages of memory on the disk, to -only recall this pages when they are needed. 
If you have many large values -rarely used this will work. If your DB is big because you have tons of little -values accessed at random without a specific pattern this will not work (at low -level a page is usually 4096 bytes, and you can have different keys/values -stored at a single page. The OS can't swap this page on disk if there are even -few keys used frequently). - -Another possible solution is to use both MySQL and Redis at the same time, -basically take the state on Redis, and all the things that get accessed very +A possible solution is to use both an on disk DB (MySQL or others) and Redis +at the same time, basically take the state on Redis (metadata, small but often written info), and all the other things that get accessed very frequently: user auth tokens, Redis Lists with chronologically ordered IDs of -the last N-comments, N-posts, and so on. Then use MySQL as a simple storage -engine for larger data, that is just create a table with an auto-incrementing -ID as primary key and a large BLOB field as data field. Access MySQL data only -by primary key (the ID). The application will run the high traffic queries -against Redis but when there is to take the big data will ask MySQL for +the last N-comments, N-posts, and so on. Then use MySQL (or any other) as a simple storage engine for larger data, that is just create a table with an auto-incrementing ID as primary key and a large BLOB field as data field. Access MySQL data only by primary key (the ID). The application will run the high traffic queries against Redis but when there is to take the big data will ask MySQL for specific resources IDs. -Update: it could be interesting to test how Redis performs with datasets larger -than memory if the OS swap partition is in one of this very fast Intel SSD -disks. - -## Do you plan to implement Virtual Memory in Redis? Why don't just let the Operating System handle it for you? 
- -Yes, in order to support datasets bigger than RAM there is the plan to -implement transparent Virtual Memory in Redis, that is, the ability to transfer -large values associated to keys rarely used on Disk, and reload them -transparently in memory when this values are requested in some way. - -So you may ask why don't let the operating system VM do the work for us. There -are two main reasons: in Redis even a large value stored at a given key, for -instance a 1 million elements list, is not allocated in a contiguous piece of -memory. It's actually *very* fragmented since Redis uses quite aggressive -object sharing and allocated Redis Objects structures reuse. - -So you can imagine the memory layout composed of 4096 bytes pages that actually -contain different parts of different large values. Not only, but a lot of -values that are large enough for us to swap out to disk, like a 1024k value, is -just one quarter the size of a memory page, and likely in the same page there -are other values that are not rarely used. So this value wil never be swapped -out by the operating system. This is the first reason for implementing -application-level virtual memory in Redis. - -There is another one, as important as the first. A complex object in memory -like a list or a set is something *10 times bigger* than the same object -serialized on disk. Probably you already noticed how Redis snapshots on disk -are damn smaller compared to the memory usage of Redis for the same objects. -This happens because when data is in memory is full of pointers, reference -counters and other metadata. Add to this malloc fragmentation and need to -return word-aligned chunks of memory and you have a clear picture of what -happens. So this means to have 10 times the I/O between memory and disk than -otherwise needed. - ## Is there something I can do to lower the Redis memory usage? -Yes, try to compile it with 32 bit target if you are using a 64 bit box. 
- -If you are using Redis >= 1.3, try using the Hash data type, it can save a lot -of memory. - -If you are using hashes or any other type with values bigger than 128 bytes try -also this to lower the RSS usage (Resident Set Size): `EXPORT -MMAP_THRESHOLD=4096` - -## I have an empty Redis server but INFO and logs are reporting megabytes of memory in use! - -This may happen and it's perfectly okay. Redis objects are small C structures -allocated and freed a lot of times. This costs a lot of CPU so instead of being -freed, released objects are taken into a free list and reused when needed. This -memory is taken exactly by this free objects ready to be reused. +If you can use Redis 32 bit instances, and make good use of small hashes, +lists, sorted sets, and sets of integers, since Redis is able to represent +those data types in the special case of a few elements in a much more compact +way. ## What happens if Redis runs out of memory? With modern operating systems malloc() returning NULL is not common, usually -the server will start swapping and Redis performances will be disastrous so -you'll know it's time to use more Redis servers or get more RAM. +the server will start swapping and Redis performances will degrade so +you'll probably notice there is something wrong. -The INFO command (work in progress in this days) will report the amount of -memory Redis is using so you can write scripts that monitor your Redis servers -checking for critical conditions. +The INFO command will report the amount of memory Redis is using so you can +write scripts that monitor your Redis servers checking for critical conditions. You can also use the "maxmemory" option in the config file to put a limit to the memory Redis can use. If this limit is reached Redis will start to reply with an error to write commands (but will continue to accept read-only -commands). - -## Does Redis use more memory running in 64 bit boxes? Can I use 32 bit Redis in 64 bit systems? 
- -Redis uses a lot more memory when compiled for 64 bit target, especially if the -dataset is composed of many small keys and values. Such a database will, for -instance, consume 50 MB of RAM when compiled for the 32 bit target, and 80 MB -for 64 bit! That's a big difference. - -You can run 32 bit Redis binaries in a 64 bit Linux and Mac OS X system without -problems. For OS X just use *make 32bit*. For Linux instead, make sure you have -*libc6-dev-i386* installed, then use *make 32bit* if you are using the latest -Git version. Instead for Redis `<= 1.2.2` you have to edit the Makefile and -replace "-arch i386" with "-m32". - -If your application is already able to perform application-level sharding, it -is very advisable to run N instances of Redis 32bit against a big 64 bit Redis -box (with more than 4GB of RAM) instead than a single 64 bit instance, as this -is much more memory efficient. - -## How much time it takes to load a big database at server startup? - -Just an example on normal hardware: It takes about 45 seconds to restore a 2 GB -database on a fairly standard system, no RAID. This can give you some kind of -feeling about the order of magnitude of the time needed to load data when you -restart the server. +commands), or you can configure it to evict keys when the max memory limit +is reached. ## Background saving is failing with a fork() error under Linux even if I've a lot of free RAM! @@ -263,70 +105,36 @@ in RAM is also atomic from the point of view of the disk snapshot. ## Redis is single threaded, how can I exploit multiple CPU / cores? -Simply start multiple instances of Redis in different ports in the same box and -treat them as different servers! Given that Redis is a distributed database -anyway in order to scale you need to think in terms of multiple computational -units. At some point a single box may not be enough anyway. 
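As an aside to this FAQ answer: the client-side partitioning scheme the surrounding text describes — hash the key with CRC32, take the result modulo the number of servers, and when a `{...}` key tag is present hash only the tag so related keys land on the same instance — can be sketched in a few lines of Python. The function names here are illustrative, not taken from any Redis client library:

```python
import zlib

def hash_source(key: str) -> str:
    # If the key contains a {...} key tag, only the tag's content is
    # hashed, so related keys map to the same server.
    start = key.find("{")
    end = key.find("}", start + 1)
    if start != -1 and end > start + 1:
        return key[start + 1:end]
    return key

def server_for(key: str, num_servers: int) -> int:
    # CRC32 of the key (or of its tag) modulo the number of servers
    # yields a server index between 0 and num_servers - 1.
    return zlib.crc32(hash_source(key).encode()) % num_servers

# "foo{bared}" is hashed as if it were just "bared".
assert server_for("foo{bared}", 4) == server_for("bared", 4)
```

With this in place a client connects to `server_for(key, N)` for both the SET and the GET of a given key.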
- -In general key-value databases are very scalable because of the property that -different keys can stay on different servers independently. - -In Redis there are client libraries such Redis-rb (the Ruby client) that are -able to handle multiple servers automatically using _consistent hashing_. We -are going to implement consistent hashing in all the other major client -libraries. If you use a different language you can implement it yourself -otherwise just hash the key before to SET / GET it from a given server. For -example imagine to have N Redis servers, server-0, server-1, ..., server-N. You -want to store the key "foo", what's the right server where to put "foo" in -order to distribute keys evenly among different servers? Just perform the _crc_ -= CRC32("foo"), then _servernum_ = _crc_ % N (the rest of the division for N). -This will give a number between 0 and N-1 for every key. Connect to this server -and store the key. The same for gets. - -This is a basic way of performing key partitioning, consistent hashing is much -better and this is why after Redis 1.0 will be released we'll try to implement -this in every widely used client library starting from Python and PHP (Ruby -already implements this support). - -## I'm using some form of key hashing for partitioning, but what about SORT BY? - -With [SortCommand SORT] BY you need that all the _weight keys_ are in the same -Redis instance of the list/set you are trying to sort. In order to make this -possible we developed a concept called _key tags_. A key tag is a special -pattern inside a key that, if preset, is the only part of the key hashed in -order to select the server for this key. For example in order to hash the key -"foo" I simply perform the CRC32 checksum of the whole string, but if this key -has a pattern in the form of the characters {...} I only hash this substring. -So for example for the key "foo{bared}" the key hashing code will simply -perform the CRC32 of "bared". 
This way using key tags you can ensure that -related keys will be stored on the same Redis instance just using the same key -tag for all this keys. Redis-rb already implements key tags. +Simply start multiple instances of Redis in the same box and +treat them as different servers. At some point a single box may not be +enough anyway, so if you want to use multiple CPUs you can start thinking +at some way to shard earlier. However note that using pipelining Redis running +on an average Linux system can deliver even 500k requests per second, so +if your application mainly uses O(N) or O(log(N)) commands it is hardly +going to use too much CPU. + +In Redis there are client libraries such Redis-rb (the Ruby client) and +Predis (one of the most used PHP clients) that are able to handle multiple +servers automatically using _consistent hashing_. ## What is the maximum number of keys a single Redis instance can hold? and what the max number of elements in a List, Set, Ordered Set? In theory Redis can handle up to 2^32 keys, and was tested in practice to -handle at least 150 million of keys per instance. We are working in order to +handle at least 250 million of keys per instance. We are working in order to experiment with larger values. Every list, set, and ordered set, can hold 2^32 elements. -Actually Redis internals are ready to allow up to 2^64 elements but the current -disk dump format don't support this, and there is a lot time to fix this issues -in the future as currently even with 128 GB of RAM it's impossible to reach -2^32 elements. +In other words your limit is likely the available memory in your system. ## What Redis means actually? Redis means two things: -* It means REmote DIctionary Server -* It is a joke on the word Redistribute (instead to use just a Relational DB - redistribute your workload among Redis servers) +It means REmote DIctionary Server. ## Why did you started the Redis project? -In order to scale [LLOOGG][lloogg]. 
But after I got the basic server -working I liked the idea to share the work with other guys, and Redis was -turned into an open source project. +Originally Redis was started in order to scale [LLOOGG][lloogg]. But after I got the basic server working I liked the idea to share the work with other guys, and Redis was turned into an open source project. [lloogg]: http://lloogg.com From 7d6bc479c4613a4e071d6221415255655a2a7f16 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 10:05:55 +0100 Subject: [PATCH 0027/2880] Better EXPIRE man page. --- commands/expire.md | 39 +++++++++++++++++++++++++++------------ 1 file changed, 27 insertions(+), 12 deletions(-) diff --git a/commands/expire.md b/commands/expire.md index 878bce9a32..ce5a043254 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -4,22 +4,37 @@ O(1) Set a timeout on `key`. After the timeout has expired, the key will -automatically be deleted. A key with an associated timeout is said to be +automatically be deleted. A key with an associated timeout is often said to be _volatile_ in Redis terminology. -If `key` is updated before the timeout has expired, then the timeout is removed -as if the `PERSIST` command was invoked on `key`. +The timeout is cleared only when the key is removed using the [DEL](/commands/del) or overwritten using the [SET](/commands/set) command. This means that all the operations that conceptually *alter* the value stored at key without replacing it with a new one will leave the expire untouched. For instance incrementing the value of a key with [INCR](/commands/incr), pushing a new value into a list with [LPUSH](/commands/lpush), or altering the field value of an Hash with [HSET](/commands/hset), are all operations that will leave the expire untouched. -For Redis versions **< 2.1.3**, existing timeouts cannot be overwritten. So, if -`key` already has an associated timeout, it will do nothing and return `0`. -Since Redis **2.1.3**, you can update the timeout of a key. 
It is also possible
-to remove the timeout using the `PERSIST` command. See the page on [key expiry][1]
-for more information.
+The timeout can also be cleared, turning the key back into a persistent key,
+using the [PERSIST](/commands/persist) command.

-Note that in Redis 2.4 the expire might not be pin-point accurate, and
-it could be between zero to one seconds out. Development versions of
-Redis fixed this bug and Redis 2.6 will feature a millisecond precision
-`EXPIRE`.
+If a key is renamed using the [RENAME](/commands/rename) command, the
+associated time to live is transfered to the new key name.
+
+If a key is overwritten by [RENAME](commands/rename), like in the
+case of an existing key `a` that is overwritten by a call like
+`RENAME b a`, it does not matter if the original `a` had a timeout associated
+or not, the new key `a` will inherit all the characteristics of `b`.
+
+Expire accuracy
+---
+
+In Redis 2.4 the expire might not be pin-point accurate, and it could be
+between zero to one seconds out.
+
+Since Redis 2.6 the expire error is from 0 to 1 milliseconds.
+
+Differences in Redis prior 2.1.3
+---
+
+In Redis versions prior **2.1.3** altering a key with an expire set using
+a command altering its value had the effect of removing the key entirely.
+This semantics was needed because of limitations in the replication layer that
+are now fixed.

 [1]: /topics/expire

@return

From a611ef5d768b732aeaa7f2899e026b6b5c08bdcb Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 15 Mar 2012 10:08:41 +0100
Subject: [PATCH 0028/2880] EXPIREAT now redirects the user to the EXPIRE command doc after specifying the differences.

---
 commands/expireat.md | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/commands/expireat.md b/commands/expireat.md
index f5773faab3..918bc96bb6 100644
--- a/commands/expireat.md
+++ b/commands/expireat.md
@@ -3,17 +3,10 @@
 O(1)
 
-Set a timeout on `key`. 
After the timeout has expired, the key will -automatically be deleted. A key with an associated timeout is said to be -_volatile_ in Redis terminology. +`EXPIREAT` has the same effect and semantic as [EXPIRE](/commands/expire), but +instead of specifying the number of seconds representing the TTL (time to live), it takes an absolute [UNIX timestamp][2] (seconds since January 1, 1970). -`EXPIREAT` has the same effect and semantic as `EXPIRE`, but instead of -specifying the number of seconds representing the TTL (time to live), it takes -an absolute [UNIX timestamp][2] (seconds since January 1, 1970). - -As in the case of `EXPIRE` command, if `key` is updated before the timeout has -expired, then the timeout is removed as if the `PERSIST` command was invoked on -`key`. +Please for the specific semantics of the commands refer to the [EXPIRE command documentation](/commands/expire). [2]: http://en.wikipedia.org/wiki/Unix_time From 96ecd217d3e7147c43b29418b0161f0f4ab21bfc Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 10:10:33 +0100 Subject: [PATCH 0029/2880] PERSIST command documentation a bit less laconic. --- commands/persist.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/persist.md b/commands/persist.md index fda4c1f789..18d041a3c3 100644 --- a/commands/persist.md +++ b/commands/persist.md @@ -3,7 +3,7 @@ O(1) -Remove the existing timeout on `key`. +Remove the existing timeout on `key`, turning the key from _volatile_ (a key with an expire set) to _persistent_ (a key that will never expire as no timeout is associated). @return From 0b2a78cbc64c9b0006f00bd01e636d17482bfe8e Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 10:20:49 +0100 Subject: [PATCH 0030/2880] Added a pattern example in the EXPIRE documentation. 
--- commands/expire.md | 29 +++++++++++++++++++++++++++++ 1 file changed, 29 insertions(+) diff --git a/commands/expire.md b/commands/expire.md index ce5a043254..ce8eecff88 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -20,6 +20,11 @@ case of an existing key `a` that is overwritten by a call like `RENAME b a`, it does not matter if the original `a` had a timeout associated or not, the new key `a` will inherit all the characteristics of `b`. +Refreshing expires +--- + +It is possible to call `EXPIRE` using as argument a key that already has an existing expire set. In this case the time to live of a key is *updated* to the new value. There are many useful applications for this, an example is documented in the *Navigation session* pattern above. + Expire accuracy --- @@ -36,6 +41,30 @@ a command altering its value had the effect of removing the key entirely. This semantics was needed because of limitations in the replication layer that are now fixed. +Pattern: Navigation session +--- + +Imagine you have a web service and you are interested in the latest N pages +*recently* visited by your users, such that each adiacent pageview was not +performed more than 60 seconds after the previous. Conceptually you may think +at this set of pageviews as a *Navigation session* if your user, that may +contain interesting informations about what kind of products he or she is +looking for currently, so that you can recommend related products. + +You can easily model this pattern in Redis using the following strategy: +every time the user does a pageview you call the following commands: + + MULTI + RPUSH pagewviews.user: http://..... + EXPIRE pagewviews.user: 60 + EXEC + +If the user will be idle more than 60 seconds, the key will be deleted and only +subsequent pageviews that have less than 60 seconds of difference will be +recorded. + +This pattern is easily modified to use counters using [INCR](/commands/incr) instead of lists using [RPUSH](/commands/rpush). 
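The behaviour this pattern relies on — only pageviews arriving less than 60 seconds after the previous one end up in the same list, because an idle key expires — can be illustrated without a server by grouping timestamped pageviews the same way. This is a conceptual sketch only, not part of the documented commands:

```python
def sessions(pageviews, timeout=60):
    """Group (timestamp, url) pairs into navigation sessions: a gap
    larger than `timeout` seconds starts a new session, mirroring the
    RPUSH + EXPIRE pattern where the list key expires when idle."""
    result = []
    last_ts = None
    for ts, url in pageviews:
        if last_ts is None or ts - last_ts > timeout:
            result.append([])      # the previous key expired: new session
        result[-1].append(url)     # the RPUSH step
        last_ts = ts               # the EXPIRE step refreshed the timeout
    return result

views = [(0, "/home"), (30, "/search"), (55, "/item/42"), (300, "/home")]
print(sessions(views))  # [['/home', '/search', '/item/42'], ['/home']]
```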
+ [1]: /topics/expire @return From 8a1684d7a49df5125a3d786e5330b4f9bb029537 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 10:26:13 +0100 Subject: [PATCH 0031/2880] Fixed small typo in EXPIRE page. --- commands/expire.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/expire.md b/commands/expire.md index ce8eecff88..4d7a1c37a5 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -7,7 +7,7 @@ Set a timeout on `key`. After the timeout has expired, the key will automatically be deleted. A key with an associated timeout is often said to be _volatile_ in Redis terminology. -The timeout is cleared only when the key is removed using the [DEL](/commands/del) or overwritten using the [SET](/commands/set) command. This means that all the operations that conceptually *alter* the value stored at key without replacing it with a new one will leave the expire untouched. For instance incrementing the value of a key with [INCR](/commands/incr), pushing a new value into a list with [LPUSH](/commands/lpush), or altering the field value of an Hash with [HSET](/commands/hset), are all operations that will leave the expire untouched. +The timeout is cleared only when the key is removed using the [DEL](/commands/del) command or overwritten using the [SET](/commands/set) command. This means that all the operations that conceptually *alter* the value stored at key without replacing it with a new one will leave the expire untouched. For instance incrementing the value of a key with [INCR](/commands/incr), pushing a new value into a list with [LPUSH](/commands/lpush), or altering the field value of an Hash with [HSET](/commands/hset), are all operations that will leave the expire untouched. The timeout can also be cleared, turning the key back into a persistent key, using the [PERSIST](/commands/persist) command. 
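The rule this patch documents — operations that alter a value in place leave the timeout alone, while replacing or deleting the key drops it — can be modeled with a toy key space. This is an illustration of the documented semantics only, not of Redis internals, and the class and method names are made up:

```python
class ToyKeySpace:
    """Toy model of the expire rule: replacing a value (SET) or deleting
    the key clears any timeout, while altering the value in place
    (INCR, LPUSH, HSET, ...) leaves the timeout untouched."""
    def __init__(self):
        self.data = {}
        self.ttl = {}

    def set(self, key, value):
        # SET replaces the value, so the timeout is cleared.
        self.data[key] = value
        self.ttl.pop(key, None)

    def expire(self, key, seconds):
        if key in self.data:
            self.ttl[key] = seconds

    def incr(self, key):
        # INCR alters the value in place: the timeout survives.
        self.data[key] = int(self.data.get(key, 0)) + 1
        return self.data[key]

ks = ToyKeySpace()
ks.set("counter", 10)
ks.expire("counter", 120)
ks.incr("counter")
ttl_after_incr = ks.ttl.get("counter")   # 120: INCR kept the timeout
ks.set("counter", 0)
ttl_after_set = ks.ttl.get("counter")    # None: SET cleared it
```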

From 0b9e9a469c8c65aa5b315e4297cde05ca7ed45f2 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 15 Mar 2012 10:28:19 +0100
Subject: [PATCH 0032/2880] Key names in EXPIRE doc RENAME example renamed to improve clarity.

---
 commands/expire.md | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/commands/expire.md b/commands/expire.md
index 4d7a1c37a5..6fd3e5475a 100644
--- a/commands/expire.md
+++ b/commands/expire.md
@@ -16,9 +16,10 @@ If a key is renamed using the [RENAME](/commands/rename) command, the
 associated time to live is transfered to the new key name.
 
 If a key is overwritten by [RENAME](commands/rename), like in the
-case of an existing key `a` that is overwritten by a call like
-`RENAME b a`, it does not matter if the original `a` had a timeout associated
-or not, the new key `a` will inherit all the characteristics of `b`.
+case of an existing key `Key_A` that is overwritten by a call like
+`RENAME Key_B Key_A`, it does not matter if the original `Key_A` had a timeout
+associated or not, the new key `Key_A` will inherit all the characteristics
+of `Key_B`.
 
 Refreshing expires
 ---

From 23aeb1d1c34c19980ddc163b26fc1b8f2e48c022 Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 15 Mar 2012 10:29:23 +0100
Subject: [PATCH 0033/2880] Fixed small typo.

---
 commands/expire.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/commands/expire.md b/commands/expire.md
index 6fd3e5475a..08f94adbd1 100644
--- a/commands/expire.md
+++ b/commands/expire.md
@@ -24,7 +24,7 @@ of `Key_B`.
 Refreshing expires
 ---
 
-It is possible to call `EXPIRE` using as argument a key that already has an existing expire set. 
In this case the time to live of a key is *updated* to the new value. There are many useful applications for this, an example is documented in the *Navigation session* pattern belove. Expire accuracy --- From 3e859f452f469dfcbc48c7e549b535533cc318ca Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 10:30:25 +0100 Subject: [PATCH 0034/2880] Another typo. --- commands/expire.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/expire.md b/commands/expire.md index 08f94adbd1..5948fdd6e3 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -24,7 +24,7 @@ of `Key_B`. Refreshing expires --- -It is possible to call `EXPIRE` using as argument a key that already has an existing expire set. In this case the time to live of a key is *updated* to the new value. There are many useful applications for this, an example is documented in the *Navigation session* pattern belove. +It is possible to call `EXPIRE` using as argument a key that already has an existing expire set. In this case the time to live of a key is *updated* to the new value. There are many useful applications for this, an example is documented in the *Navigation session* pattern section below. Expire accuracy --- From b6bf83c3aa7839267e6054e5c45b868196a3788a Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 10:39:03 +0100 Subject: [PATCH 0035/2880] Documented that also GETSET removes the expire from old key like SET does. --- commands/expire.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/expire.md b/commands/expire.md index 5948fdd6e3..c09fbcee6d 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -7,7 +7,7 @@ Set a timeout on `key`. After the timeout has expired, the key will automatically be deleted. A key with an associated timeout is often said to be _volatile_ in Redis terminology. 
-The timeout is cleared only when the key is removed using the [DEL](/commands/del) command or overwritten using the [SET](/commands/set) command. This means that all the operations that conceptually *alter* the value stored at key without replacing it with a new one will leave the expire untouched. For instance incrementing the value of a key with [INCR](/commands/incr), pushing a new value into a list with [LPUSH](/commands/lpush), or altering the field value of an Hash with [HSET](/commands/hset), are all operations that will leave the expire untouched. +The timeout is cleared only when the key is removed using the [DEL](/commands/del) command or overwritten using the [SET](/commands/set) or [GETSET](/commands/getset) commands. This means that all the operations that conceptually *alter* the value stored at key without replacing it with a new one will leave the expire untouched. For instance incrementing the value of a key with [INCR](/commands/incr), pushing a new value into a list with [LPUSH](/commands/lpush), or altering the field value of an Hash with [HSET](/commands/hset), are all operations that will leave the expire untouched. The timeout can also be cleared, turning the key back into a persistent key, using the [PERSIST](/commands/persist) command. From 62be4735ca9b3877c33a03e6a56f0c1319b5a2de Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 10:46:43 +0100 Subject: [PATCH 0036/2880] BGREWRITEAOF points the user to the persistence doc for more info. --- commands/bgrewriteaof.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/commands/bgrewriteaof.md b/commands/bgrewriteaof.md index 10bfed18c9..698eb40fac 100644 --- a/commands/bgrewriteaof.md +++ b/commands/bgrewriteaof.md @@ -2,6 +2,8 @@ Rewrites the [append-only file](/topics/persistence#append-only-file) to reflect If `BGREWRITEAOF` fails, no data gets lost as the old AOF will be untouched. 
+Please refer to the [persistence documentation](/topics/persistence) for detailed information about AOF rewriting. + @return @status-reply: always `OK`. From 5133aa0f82c224fad7e1397a210aa641dd739ce7 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 10:47:48 +0100 Subject: [PATCH 0037/2880] BGSAVE doc points the user to the persistence doc for more info. --- commands/bgsave.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/commands/bgsave.md b/commands/bgsave.md index 3fd2ee45db..233964381d 100644 --- a/commands/bgsave.md +++ b/commands/bgsave.md @@ -5,6 +5,8 @@ Redis forks, the parent continues to server the clients, the child saves the DB on disk then exit. A client my be able to check if the operation succeeded using the `LASTSAVE` command. +Please refer to the [persistence documentation](/topics/persistence) for detailed information. + @return @status-reply From c7a0fed8f54109c9ff1bfe073bd1e50e5516e28d Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 11:11:28 +0100 Subject: [PATCH 0038/2880] BLPOP/BRPOP doc improved. --- commands/bgsave.md | 7 +++++++ commands/blpop.md | 6 ++++++ commands/brpop.md | 16 +++++++++------- 3 files changed, 22 insertions(+), 7 deletions(-) diff --git a/commands/bgsave.md b/commands/bgsave.md index 233964381d..9898e4b4f3 100644 --- a/commands/bgsave.md +++ b/commands/bgsave.md @@ -10,3 +10,10 @@ Please refer to the [persistence documentation](/topics/persistence) for detaile @return @status-reply + +@examples + + @cli + DEL list1 list2 + LPUSH list1 a b c + BLPOP list1 list2 0 diff --git a/commands/blpop.md b/commands/blpop.md index 82984e8be5..6c2a639822 100644 --- a/commands/blpop.md +++ b/commands/blpop.md @@ -68,3 +68,9 @@ infinite speed inside a `MULTI`/`EXEC` block. * A two-element multi-bulk with the first element being the name of the key where an element was popped and the second element being the value of the popped element. 
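The reply contract above — check the keys in the order given, pop from the head of the first non-empty list, and return the key name together with the value — can be sketched without the blocking machinery. An illustrative model, not the server implementation:

```python
def blpop_once(lists, keys):
    """One non-blocking pass of BLPOP: scan `keys` in order and pop from
    the head of the first non-empty list. Returns (key, value), or None
    where the real command would block or time out."""
    for key in keys:
        values = lists.get(key)
        if values:
            return key, values.pop(0)   # pop from the head of the list
    return None

lists = {"list1": ["a", "b", "c"], "list2": []}
print(blpop_once(lists, ["list1", "list2"]))  # ('list1', 'a')
```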
+@examples + + @cli + DEL list1 list2 + LPUSH list1 a b c + BLPOP list1 list2 0 diff --git a/commands/brpop.md b/commands/brpop.md index 98c9607982..a58aa878be 100644 --- a/commands/brpop.md +++ b/commands/brpop.md @@ -3,13 +3,16 @@ O(1) -`BRPOP` is a blocking list pop primitive. It is the blocking version of `RPOP` -because it blocks the connection when there are no elements to pop from any of -the given lists. An element is popped from the tail of the first list that is -non-empty, with the given keys being checked in the order that they are given. +`BRPOP` is a blocking list pop primitive. It is the blocking version of +[RPOP](/commands/rpop) because it blocks the connection when there are no +elements to pop from any of the given lists. An element is popped from the +tail of the first list that is non-empty, with the given keys being checked +in the order that they are given. -See `BLPOP` for the exact semantics. `BRPOP` is identical to `BLPOP`, apart -from popping from the tail of a list instead of the head of a list. +See the [BLPOP documentation](/commands/blpop) for the exact semantics, since +`BRPOP` is identical to [BLPOP](/commands/blpop) with the only difference +being that it pops elements from the tail of a list instead of popping from the +head. @return @@ -18,4 +21,3 @@ from popping from the tail of a list instead of the head of a list. * A `nil` multi-bulk when no element could be popped and the timeout expired. * A two-element multi-bulk with the first element being the name of the key where an element was popped and the second element being the value of the popped element. - From 213d6632be178b826837d832cd1d747ae03af7ec Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 11:16:05 +0100 Subject: [PATCH 0039/2880] BLPOP/BRPOP example fixed. 
--- commands/bgsave.md | 7 ------- commands/blpop.md | 11 +++++++---- commands/brpop.md | 10 ++++++++++ 3 files changed, 17 insertions(+), 11 deletions(-) diff --git a/commands/bgsave.md b/commands/bgsave.md index 9898e4b4f3..233964381d 100644 --- a/commands/bgsave.md +++ b/commands/bgsave.md @@ -10,10 +10,3 @@ Please refer to the [persistence documentation](/topics/persistence) for detaile @return @status-reply - -@examples - - @cli - DEL list1 list2 - LPUSH list1 a b c - BLPOP list1 list2 0 diff --git a/commands/blpop.md b/commands/blpop.md index 6c2a639822..e3427a042a 100644 --- a/commands/blpop.md +++ b/commands/blpop.md @@ -70,7 +70,10 @@ infinite speed inside a `MULTI`/`EXEC` block. @examples - @cli - DEL list1 list2 - LPUSH list1 a b c - BLPOP list1 list2 0 + redis> DEL list1 list2 + (integer) 0 + redis> RPUSH list1 a b c + (integer) 3 + redis> BLPOP list1 list2 0 + 1) "list1" + 2) "a" diff --git a/commands/brpop.md b/commands/brpop.md index a58aa878be..985b75ff22 100644 --- a/commands/brpop.md +++ b/commands/brpop.md @@ -21,3 +21,13 @@ head. * A `nil` multi-bulk when no element could be popped and the timeout expired. * A two-element multi-bulk with the first element being the name of the key where an element was popped and the second element being the value of the popped element. + +@examples + + redis> DEL list1 list2 + (integer) 0 + redis> RPUSH list1 a b c + (integer) 3 + redis> BRPOP list1 list2 0 + 1) "list1" + 2) "c" From 4f29a0dbdc6c537f367afebf1c2296197b19d344 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 11:27:57 +0100 Subject: [PATCH 0040/2880] Time series pattern example added to APPEND command. --- commands/append.md | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/commands/append.md b/commands/append.md index bb28ed562d..f5de633509 100644 --- a/commands/append.md +++ b/commands/append.md @@ -9,6 +9,26 @@ If `key` already exists and is a string, this command appends the `value` at the end of the string. 
If `key` does not exist it is created and set as an empty string, so `APPEND` will be similar to `SET` in this special case. +Pattern: Time series +--- + +The `APPEND` command can be used to create a very compact representation of +a list of fixed-size samples, usually referred as *time series*. +Every time a new sample arrives we can store it using the command + + APPEND timeseries "fixed-size sample" + +Accessing to individual elements in the time serie is not hard: + +* [STRLEN](/commands/strlen) can be used in order to obtain the number of samples. +* [GETRANGE](/commands/getrange) allows for random access of elements. If our time series have an associated time information we can easily implement a binary search to get range combining `GETRANGE` with the Lua scripting engine available in Redis 2.6. +* [SETRANGE](/commands/setrange) can be used to overwrite an existing time serie. + +The limitations of this pattern is that we are forced into an append-only mode of operation, there is no way to cut the time series to a given size easily because Redis currently lacks a command able to trim string objects. However the space efficiency of time series stored in this way is remarkable. + +Hint: it is possible to switch to a different key based on the current unix time, in this way it is possible to have just a relatively small amount of samples per key, to avoid dealing with very big keys, and to make this pattern more +firendly to be distributed across many Redis instances. + @return @integer-reply: the length of the string after the append operation. From a7fb06c04a796612fb5446c873c449258ea88609 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 11:32:19 +0100 Subject: [PATCH 0041/2880] Patterns are better placed at the end of the command documentation. 
--- commands/append.md | 29 ++++++++++++++++++----------- commands/expire.md | 36 ++++++++++++++++++------------------ 2 files changed, 36 insertions(+), 29 deletions(-) diff --git a/commands/append.md b/commands/append.md index f5de633509..74530c60d7 100644 --- a/commands/append.md +++ b/commands/append.md @@ -9,10 +9,22 @@ If `key` already exists and is a string, this command appends the `value` at the end of the string. If `key` does not exist it is created and set as an empty string, so `APPEND` will be similar to `SET` in this special case. +@return + +@integer-reply: the length of the string after the append operation. + +@examples + + @cli + EXISTS mykey + APPEND mykey "Hello" + APPEND mykey " World" + GET mykey + Pattern: Time series --- -The `APPEND` command can be used to create a very compact representation of +the `APPEND` command can be used to create a very compact representation of a list of fixed-size samples, usually referred as *time series*. Every time a new sample arrives we can store it using the command @@ -29,15 +41,10 @@ The limitations of this pattern is that we are forced into an append-only mode o Hint: it is possible to switch to a different key based on the current unix time, in this way it is possible to have just a relatively small amount of samples per key, to avoid dealing with very big keys, and to make this pattern more firendly to be distributed across many Redis instances. -@return - -@integer-reply: the length of the string after the append operation. - -@examples +An example sampling the temperature of a sensor using fixed-size strings (using a binary format is better in real implementations). 
@cli - EXISTS mykey - APPEND mykey "Hello" - APPEND mykey " World" - GET mykey - + APPEND ts "0043" + APPEND ts "0035" + GETRANGE ts 0 3 + GETRANGE ts 4 7 diff --git a/commands/expire.md b/commands/expire.md index c09fbcee6d..e5b721c1c2 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -42,6 +42,24 @@ a command altering its value had the effect of removing the key entirely. This semantics was needed because of limitations in the replication layer that are now fixed. +[1]: /topics/expire + +@return + +@integer-reply, specifically: + +* `1` if the timeout was set. +* `0` if `key` does not exist or the timeout could not be set. + +@examples + + @cli + SET mykey "Hello" + EXPIRE mykey 10 + TTL mykey + SET mykey "Hello World" + TTL mykey + Pattern: Navigation session --- @@ -65,21 +83,3 @@ subsequent pageviews that have less than 60 seconds of difference will be recorded. This pattern is easily modified to use counters using [INCR](/commands/incr) instead of lists using [RPUSH](/commands/rpush). - -[1]: /topics/expire - -@return - -@integer-reply, specifically: - -* `1` if the timeout was set. -* `0` if `key` does not exist or the timeout could not be set. - -@examples - - @cli - SET mykey "Hello" - EXPIRE mykey 10 - TTL mykey - SET mykey "Hello World" - TTL mykey From 408de2ff22a766e59a1abd04ea993d3137b8129e Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 11:46:24 +0100 Subject: [PATCH 0042/2880] CONFIG GET documentation improved. --- commands/config get.md | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/commands/config get.md b/commands/config get.md index e5c0ff6a30..fd02a8192b 100644 --- a/commands/config get.md +++ b/commands/config get.md @@ -4,10 +4,13 @@ Not applicable. @description -The `CONFIG GET` command is used to read the configuration parameters of a running -Redis server. Not all the configuration parameters are supported. 
+The `CONFIG GET` command is used to read the configuration parameters of a +running Redis server. Not all the configuration parameters are +supported in Redis 2.4, while Redis 2.6 can read the whole configuration of +a server using this command. + The symmetric command used to alter the configuration at run time is -`CONFIG SET`. +[CONFIG SET](/commands/config-set). `CONFIG GET` takes a single argument, that is glob style pattern. All the configuration parameters matching this parameter are reported as a From 2b1a59cbb848e5054cb08173392cf8bdde32b563 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 14:06:08 +0100 Subject: [PATCH 0043/2880] DEBUG SEGFAULT and DEBUG OBJECT commands doc improved. --- commands/config resetstat.md | 2 +- commands/debug object.md | 9 +++------ commands/debug segfault.md | 9 +++------ 3 files changed, 7 insertions(+), 13 deletions(-) diff --git a/commands/config resetstat.md b/commands/config resetstat.md index 7e7a6002d2..c1312a9e66 100644 --- a/commands/config resetstat.md +++ b/commands/config resetstat.md @@ -2,7 +2,7 @@ O(1). -Resets the statistics reported by Redis using the `INFO` command. +Resets the statistics reported by Redis using the [INFO](/commands/info) command. These are the counters that are reset: diff --git a/commands/debug object.md b/commands/debug object.md index 8dafb3c81c..49ef0b9110 100644 --- a/commands/debug object.md +++ b/commands/debug object.md @@ -1,7 +1,4 @@ -@complexity +`DEBUG OBJECT` is a debugging command that should not be used by clients. +Check the [OBJECT](/commands/object) command instead. -@description - -@examples - -@return \ No newline at end of file +@status-reply diff --git a/commands/debug segfault.md b/commands/debug segfault.md index 8dafb3c81c..7524c166f2 100644 --- a/commands/debug segfault.md +++ b/commands/debug segfault.md @@ -1,7 +1,4 @@ -@complexity +`DEBUG SEGFAULT` performs an invalid memory access that crashes Redis. 
+It is used to simulate bugs during the development. -@description - -@examples - -@return \ No newline at end of file +@status-reply From 36908503106de40faf37e0f80c83fc550c48c67c Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 16:03:27 +0100 Subject: [PATCH 0044/2880] BRPOPLPUSH and RPOPLPUSH documentation improved. --- commands/brpoplpush.md | 21 +++++++++++++++------ commands/rpoplpush.md | 43 ++++++++++++++++++++++++++++++++++++------ 2 files changed, 52 insertions(+), 12 deletions(-) diff --git a/commands/brpoplpush.md b/commands/brpoplpush.md index 85fc6a6349..cbe2a17b7c 100644 --- a/commands/brpoplpush.md +++ b/commands/brpoplpush.md @@ -2,15 +2,24 @@ O(1). -`BRPOPLPUSH` is the blocking variant of `RPOPLPUSH`. When `source` -contains elements, this command behaves exactly like `RPOPLPUSH`. When -`source` is empty, Redis will block the connection until another client -pushes to it or until `timeout` is reached. A `timeout` of zero can be -used to block indefinitely. +`BRPOPLPUSH` is the blocking variant of [RPOPLPUSH](/commands/rpoplpush). +When `source` contains elements, this command behaves exactly like +[RPOPLPUSH](/commands/rpoplpush). When `source` is empty, Redis will block +the connection until another client pushes to it or until `timeout` is reached. A `timeout` of zero can be used to block indefinitely. -See `RPOPLPUSH` for more information. +See [RPOPLPUSH](/commands/rpoplpush) for more information. @return @bulk-reply: the element being popped from `source` and pushed to `destination`. If `timeout` is reached, a @nil-reply is returned. + +Pattern: Reliable queue +--- + +Please see the pattern description in the [RPOPLPUSH](/commands/rpoplpush) documentation. + +Pattern: Circular list +--- + +Please see the pattern description in the [RPOPLPUSH](/commands/rpoplpush) documentation. 
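The circular-list pattern referenced in the `BRPOPLPUSH` page above (rotating a list with `RPOPLPUSH` using the same source and destination key) can be sketched with a plain Python list standing in for the Redis list; `rpoplpush` here is an in-memory stand-in, not a client call:

```python
def rpoplpush(src: list, dst: list):
    """Sketch of RPOPLPUSH: pop the tail of src, push it onto the head of dst.
    When src and dst are the same list, this rotates the list in place."""
    if not src:
        return None
    value = src.pop()        # RPOP: take the last element
    dst.insert(0, value)     # LPUSH: put it at the front
    return value


def visit_all(ring: list) -> list:
    """Visit every element of an N-element list in N rotations, without
    ever transferring the whole list in a single LRANGE operation."""
    return [rpoplpush(ring, ring) for _ in range(len(ring))]
```

After N rotations the list is back in its original order, which is what allows multiple workers to keep cycling over the same set of items.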
diff --git a/commands/rpoplpush.md b/commands/rpoplpush.md index b02c59b4a8..bcee21de41 100644 --- a/commands/rpoplpush.md +++ b/commands/rpoplpush.md @@ -47,11 +47,42 @@ processed. Another process (that we call _Helper_), can monitor the backup list to check for timed out entries to re-push against the main queue. -## Design pattern: server-side O(N) list traversal +Pattern: Reliable queue +--- -Using `RPOPLPUSH` with the same source and destination key, a process can -visit all the elements of an N-elements list in O(N) without transferring -the full list from the server to the client in a single `LRANGE` operation. -Note that a process can traverse the list even while other processes -are actively pushing to the list, and still no element will be skipped. +Redis is often used as a messaging server to implement processing of +background jobs or other kinds of messaging tasks. A simple form of queue +is often obtained pushing values into a list in the producer side, and +waiting for this values in the consumer side using [RPOP](/commadns/rpop) +(using polling), or [BRPOP](/commands/brpop) if the client is better served +by a blocking operation. +However in this context the obtained queue is not *reliable* as messages can +be lost, for example in the case there is a network problem or if the consumer +crashes just after the message is received but it is still to process. + +`RPOPLPUSH` (or [BRPOPLPUSH](/commands/brpoplpush) for the blocking variant) +offers a way to avoid this problem: the consumer fetches the message and +at the same time pushes it into a *processing* list. It will use the +[LREM](/commands/lrem) command in order to remove the message from the +*processing* list once the message has been processed. + +An additional client may monitor the *processing* list for items that remain +there for too much time, and will push those timed out items into the queue +again if needed. 
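The reliable-queue flow added in the hunk above (fetch the message into a *processing* list, then `LREM` it once handled) can be sketched with in-memory lists. Against a real server the two steps inside `fetch` are a single atomic `RPOPLPUSH` (or `BRPOPLPUSH`) call, which is precisely what makes the pattern reliable:

```python
def fetch(queue: list, processing: list):
    """Sketch of RPOPLPUSH queue processing: move one message to the
    processing list so it cannot be lost between being popped and handled."""
    if not queue:
        return None
    msg = queue.pop()          # RPOP queue
    processing.insert(0, msg)  # LPUSH processing
    return msg


def ack(processing: list, msg) -> None:
    """Sketch of LREM processing 1 msg: drop the entry once the work is done.
    Messages a crashed consumer left behind stay visible here for a monitor
    to push back onto the main queue."""
    processing.remove(msg)
```

If the consumer crashes between `fetch` and `ack`, the message is still sitting in `processing`, where the monitoring client described above can find it and re-queue it.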
+ +Pattern: Circular list +--- + +Using `RPOPLPUSH` with the same source and destination key, a client can +visit all the elements of an N-elements list, one after the other, in O(N) +without transferring the full list from the server to the client using a single +[LRANGE](/commands/lrange) operation. + +The above pattern works even if the following two conditions: +* There are multiple clients rotating the list: they'll fetch different elements, until all the elements of the list are visited, and the process restarts. +* Even if other clients are actively pushing new items at the end of the list. + +The above makes it very simple to implement a system where a set of items must be processed by N workers continuously as fast as possible. An example is a monitoring system that must check that a set of web sites are reachable, with the smallest delay possible, using a number of parallel workers. + +Note that this implementation of workers is trivially scalable and reliable, because even if a message is lost the item is still in the queue and will be processed at the next iteration. From 063dfd3973fb8cebf58f1f8f9acd4175249435c7 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 16:04:36 +0100 Subject: [PATCH 0045/2880] Removed old pattern description. --- commands/rpoplpush.md | 17 ----------------- 1 file changed, 17 deletions(-) diff --git a/commands/rpoplpush.md b/commands/rpoplpush.md index bcee21de41..e48e8b37f2 100644 --- a/commands/rpoplpush.md +++ b/commands/rpoplpush.md @@ -30,23 +30,6 @@ element of the list, so it can be considered as a list rotation command. LRANGE mylist 0 -1 LRANGE myotherlist 0 -1 -## Design pattern: safe queues - -Redis lists are often used as queues in order to exchange messages between -different programs. 
A program can add a message performing an `LPUSH` operation -against a Redis list (we call this program the _Producer_), while another program -(that we call _Consumer_) can process the messages performing an `RPOP` command -in order to start reading the messages starting at the oldest. - -Unfortunately, if a _Consumer_ crashes just after an `RPOP` operation, the message -is lost. `RPOPLPUSH` solves this problem since the returned message is -added to another backup list. The _Consumer_ can later remove the message -from the backup list using the `LREM` command when the message was correctly -processed. - -Another process (that we call _Helper_), can monitor the backup list to check for -timed out entries to re-push against the main queue. - Pattern: Reliable queue --- From 70b77a9c933a1d8cd4b2be1513699d663be0192b Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 16:16:19 +0100 Subject: [PATCH 0046/2880] BLPOP doc: pattern added. --- commands/blpop.md | 28 ++++++++++++++++++++++++++++ 1 file changed, 28 insertions(+) diff --git a/commands/blpop.md b/commands/blpop.md index e3427a042a..44fc3fe7c0 100644 --- a/commands/blpop.md +++ b/commands/blpop.md @@ -77,3 +77,31 @@ infinite speed inside a `MULTI`/`EXEC` block. redis> BLPOP list1 list2 0 1) "list1" 2) "a" + +## Pattern: Event notification + +Using blocking list operations it is possible to mount different blocking +primitives. For instance for some application you may need to block +waiting for elements into a Redis Set, so that as far as a new element is +added to the Set, it is possible to retrieve it without resort to polling. +This would require a blocking version of [SPOP](/commands/spop) that is +not available, but using blocking list operations we can easily accomplish +this task: + +This can be obtained using the following algorithm. The consumer will do: + + LOOP forever + WHILE SPOP(key) returns elements + ... process elements ... 
+ END + BRPOP helper_key + END + +While in the producer side we'll use simply: + + MULTI + SADD key element + LPUSH helper_key x + EXEC + + From 29363e6482818d87e4947eda969c76683db6553d Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 16:18:06 +0100 Subject: [PATCH 0047/2880] Typo fixed --- commands/blpop.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands/blpop.md b/commands/blpop.md index 44fc3fe7c0..cde4abe595 100644 --- a/commands/blpop.md +++ b/commands/blpop.md @@ -86,9 +86,9 @@ waiting for elements into a Redis Set, so that as far as a new element is added to the Set, it is possible to retrieve it without resort to polling. This would require a blocking version of [SPOP](/commands/spop) that is not available, but using blocking list operations we can easily accomplish -this task: +this task. -This can be obtained using the following algorithm. The consumer will do: +The consumer will do: LOOP forever WHILE SPOP(key) returns elements From 3353e3e206fbfc00ae7ce90fbd86460f3dae3907 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 16:21:43 +0100 Subject: [PATCH 0048/2880] DECR page improved a little bit. --- commands/decr.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/commands/decr.md b/commands/decr.md index 385bbea9e7..c76887f371 100644 --- a/commands/decr.md +++ b/commands/decr.md @@ -9,7 +9,8 @@ error is returned if the key contains a value of the wrong type or contains a string that is not representable as integer. This operation is limited to 64 bit signed integers. -See `INCR` for extra information on increment/decrement operations. +See [INCR](/commands/incr) for extra information on increment/decrement +operations. @return @@ -20,4 +21,5 @@ See `INCR` for extra information on increment/decrement operations. 
@cli SET mykey "10" DECR mykey - + SET mykey "234293482390480948029348230948" + DECR mykey From fbe5f8acb9b0bda0ac3f243d16cc8b51dd3e0145 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 15 Mar 2012 16:24:14 +0100 Subject: [PATCH 0049/2880] 64 bit signed integer limit of INCR/DECR made bold. --- commands/decr.md | 4 ++-- commands/incr.md | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/commands/decr.md b/commands/decr.md index c76887f371..b8bd9813ba 100644 --- a/commands/decr.md +++ b/commands/decr.md @@ -6,8 +6,8 @@ O(1) Decrements the number stored at `key` by one. If the key does not exist, it is set to `0` before performing the operation. An error is returned if the key contains a value of the wrong type or contains a -string that is not representable as integer. This operation is limited to 64 -bit signed integers. +string that is not representable as integer. This operation is limited to **64 +bit signed integers**. See [INCR](/commands/incr) for extra information on increment/decrement operations. diff --git a/commands/incr.md b/commands/incr.md index 351ac0aa6b..5a28e954a9 100644 --- a/commands/incr.md +++ b/commands/incr.md @@ -10,8 +10,8 @@ string that is not representable as integer. This operation is limited to 64 bit signed integers. **Note**: this is a string operation because Redis does not have a dedicated -integer type. The the string stored at the key is interpreted as a base-10 64 -bit signed integer to execute the operation. +integer type. The the string stored at the key is interpreted as a base-10 **64 +bit signed integer** to execute the operation. Redis stores integers in their integer representation, so for string values that actually hold an integer, there is no overhead for storing the From ad026b0a72d429ed103b35be9f434c9e3fc10570 Mon Sep 17 00:00:00 2001 From: Damian Janowski Date: Thu, 15 Mar 2012 12:29:36 -0300 Subject: [PATCH 0050/2880] Update URL for Ruby client. 
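The **64 bit signed integer** bound emphasised in the patch above is why `DECR` fails on `"234293482390480948029348230948"`. The range check can be sketched as follows; this illustrates the rule, not Redis's actual C implementation (note that Python parses arbitrarily large integers, so the explicit bound check does the work here):

```python
INT64_MIN, INT64_MAX = -2**63, 2**63 - 1


def try_decr(value: str) -> int:
    """Sketch of the INCR/DECR precondition: the stored string must parse as
    a base-10 integer that fits in a signed 64-bit word, and the result must
    stay in range too; otherwise Redis replies with an error."""
    try:
        n = int(value)
    except ValueError:
        raise ValueError("value is not an integer or out of range")
    if not (INT64_MIN <= n <= INT64_MAX):
        raise ValueError("value is not an integer or out of range")
    result = n - 1
    if result < INT64_MIN:
        raise ValueError("decrement would overflow")
    return result
```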
--- clients.json | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/clients.json b/clients.json index 639b9ce2b0..eebe8975e4 100644 --- a/clients.json +++ b/clients.json @@ -3,7 +3,7 @@ "name": "redis-rb", "language": "Ruby", "url": "http://redis-rb.keyvalue.org", - "repository": "https://github.com/ezmobius/redis-rb", + "repository": "https://github.com/redis/redis-rb", "description": "Very stable and mature client. Install and require the hiredis gem before redis-rb for maximum performances.", "authors": ["ezmobius", "soveran", "djanowski", "pnoordhuis"], "recommended": true @@ -238,7 +238,7 @@ "description": "Standalone and full-featured class for Redis in PHP", "authors": ["OZ"] }, - + { "name": "Redisent", "language": "PHP", @@ -436,7 +436,7 @@ "description": "Static Library for iOS4 device and Simulator, plus Objective-C Framework for MacOS 10.5 and higher", "authors": ["loopole"] }, - + { "name": "Puredis", "language": "Pure Data", From b394f28980b6c6bbb26c7bbf8a35169b545e35fd Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Thu, 15 Mar 2012 09:54:09 -0700 Subject: [PATCH 0051/2880] Wrap paragraphs; fix links --- commands/expire.md | 33 ++++++++++++++++++++------------- 1 file changed, 20 insertions(+), 13 deletions(-) diff --git a/commands/expire.md b/commands/expire.md index e5b721c1c2..cb36c0aebc 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -7,24 +7,32 @@ Set a timeout on `key`. After the timeout has expired, the key will automatically be deleted. A key with an associated timeout is often said to be _volatile_ in Redis terminology. -The timeout is cleared only when the key is removed using the [DEL](/commands/del) command or overwritten using the [SET](/commands/set) or [GETSET](/commands/getset) commands. This means that all the operations that conceptually *alter* the value stored at key without replacing it with a new one will leave the expire untouched. 
For instance incrementing the value of a key with [INCR](/commands/incr), pushing a new value into a list with [LPUSH](/commands/lpush), or altering the field value of an Hash with [HSET](/commands/hset), are all operations that will leave the expire untouched. +The timeout is cleared only when the key is removed using the `DEL` command or +overwritten using the `SET` or `GETSET` commands. This means that all the +operations that conceptually *alter* the value stored at the key without +replacing it with a new one will leave the timeout untouched. For instance, +incrementing the value of a key with `INCR`, pushing a new value into a list +with `LPUSH`, or altering the field value of a hash with `HSET` are all +operations that will leave the timeout untouched. The timeout can also be cleared, turning the key back into a persistent key, -using the [PERSIST](/commands/persist) command. +using the `PERSIST` command. -If a key is renamed using the [RENAME](/commands/rename) command, the -associated time to live is transfered to the new key name. +If a key is renamed with `RENAME`, the associated time to live is transfered to +the new key name. -If a key is overwritten by [RENAME](commands/rename), like in the -case of an existing key `Key_A` that is overwritten by a call like -`RENAME Key_B Key_A`, it does not matter if the original `Key_A` had a timeout -associated or not, the new key `Key_A` will inherit all the characteristics -of `Key_B`. +If a key is overwritten by `RENAME`, like in the case of an existing key +`Key_A` that is overwritten by a call like `RENAME Key_B Key_A`, it does not +matter if the original `Key_A` had a timeout associated or not, the new key +`Key_A` will inherit all the characteristics of `Key_B`. Refreshing expires --- -It is possible to call `EXPIRE` using as argument a key that already has an existing expire set. In this case the time to live of a key is *updated* to the new value. 
There are many useful applications for this, an example is documented in the *Navigation session* pattern section below. +It is possible to call `EXPIRE` using as argument a key that already has an +existing expire set. In this case the time to live of a key is *updated* to the +new value. There are many useful applications for this, an example is +documented in the *Navigation session* pattern section below. Expire accuracy --- @@ -42,8 +50,6 @@ a command altering its value had the effect of removing the key entirely. This semantics was needed because of limitations in the replication layer that are now fixed. -[1]: /topics/expire - @return @integer-reply, specifically: @@ -82,4 +88,5 @@ If the user will be idle more than 60 seconds, the key will be deleted and only subsequent pageviews that have less than 60 seconds of difference will be recorded. -This pattern is easily modified to use counters using [INCR](/commands/incr) instead of lists using [RPUSH](/commands/rpush). +This pattern is easily modified to use counters using `INCR` instead of lists +using `RPUSH`. From ac59631807225ce6908401da51ef361c6665be1e Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 16 Mar 2012 09:39:12 +0100 Subject: [PATCH 0052/2880] Two new patterns for INCR. --- commands/incr.md | 115 +++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 115 insertions(+) diff --git a/commands/incr.md b/commands/incr.md index 5a28e954a9..470e9de466 100644 --- a/commands/incr.md +++ b/commands/incr.md @@ -28,3 +28,118 @@ string representation of the integer. INCR mykey GET mykey +## Pattern: Counter + +The counter pattern is the most obvious thing you can do with Redis atomic +increment operations. The idea is simply send an `INCR` command to Redis every +time an operation occurs. For instance in a web application we may want to know +how many page views this user did every day of the year. 
+ +To do so the web application may simply increment a key every time the user +performs a page view, creating the key name concatenating the User ID and a +string representing the current date. + +This simple pattern can be extended in may ways: +* It is possible to use `INCR` and `EXPIRE` together at every page view to have a counter counting only the latest N page views separated by less than the specified amount of seconds. +* A client may use GETSET in order to atomically get the current counter value and reset it to zero. +* Using other atomic increment/decrement commands like `DECR` or `INCRBY` it is possible to handle values that may get bigger or smaller depending on the operations performed by the user. Imagine for instance the score of different users in an online game. + +## Pattern: Rate limiter + +The rate limiter pattern is a special counter that is used to limit the rate +at which an operation can be performed. The classical materialization of this +pattern involves limiting the number of requests that can be performed against +a public API. + +We provide two implementations of this pattern using `INCR`, where we assume +that the problem to solve is limiting the number of API calls to a maximum +of *ten requests per second per IP address*. + +## Pattern: Rate limiter 1 + +The more simple and direct implementation of this pattern is the following: + + FUNCTION LIMIT_API_CALL(ip) + ts = CURRENT_UNIX_TIME() + keyname = ip+":"+ts + current = GET(keyname) + IF current != NULL AND current > 10 THEN + ERROR "too many requests per second" + ELSE + MULTI + INCR(keyname,1) + EXPIRE(keyname,10) + EXEC + PERFORM_API_CALL() + END + +Basically we have a counter for every IP, for every differet second. +But this counters are always incremented setting an expire of 10 seconds so +that they'll be removed by Redis automatically when the current second is +a different one. 
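The per-second counter limiter described above can be sketched with a dict standing in for the per-(IP, second) Redis keys. The `now` parameter exists only to make the sketch deterministic; with a real server, the `GET` and the increment below map onto `GET`, `INCR` and `EXPIRE`, with the last two wrapped in `MULTI`/`EXEC` as the pattern requires:

```python
import time


def limit_api_call(store: dict, ip: str, now=None) -> bool:
    """Sketch of rate limiter 1: one counter per (ip, second) pair.
    In Redis the keys expire on their own after 10 seconds; the in-memory
    stand-in simply accumulates them."""
    ts = int(time.time()) if now is None else now
    key = f"{ip}:{ts}"                  # a fresh counter key every second
    current = store.get(key)            # GET keyname
    if current is not None and current > 10:
        return False                    # too many requests per second
    store[key] = store.get(key, 0) + 1  # MULTI; INCR keyname; EXPIRE keyname 10; EXEC
    return True
```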
+ +Note the used of `MULTI` and `EXEC` in order to make sure that we'll both +increment and set the expire at every API call. + +## Pattern: Rate limiter 2 + +An alternative implementation uses a single counter, but is a bit more +complex to get it right without race conditions. We'll examine different +variants. + + FUNCTION LIMIT_API_CALL(ip): + current = GET(ip) + IF current != NULL AND current > 10 THEN + ERROR "too many requests per second" + ELSE + value = INCR(ip) + IF value == 1 THEN + EXPIRE(value,1) + END + PERFORM_API_CALL() + END + +The counter is created in a way that it only will survive one second, starting +from the first request performed in the current second. If there are more than +10 requests in the same second the counter will reach a value greater than +10, otherwise it will expire and start again from 0. + +**In the above code there is a race condition**. If for some reason the +client performs the `INCR` command but does not perform the `EXPIRE` the +key will be leaked until we'll see the same IP address again. + +This can be fixed easily turning the `INCR` with optional `EXPIRE` into a +Lua script that is send using the `EVAL` command (only available since Redis +version 2.6). + + local current + current = redis.incr(KEYS[1]) + if tonumber(current) == 1 then + redis.expire(KEYS[1],1) + end + +There is a different way to fix this issue without using scripting, but using +Redis lists instead of counters. +The implementation is more complex and uses more advanced features but has the advantage of remembering the IP addresses of the clients currently performing an API call, that may be useful or not depending on the application. 
+ + FUNCTION LIMIT_API_CALL(ip) + current = LLEN(ip) + IF current > 10 THEN + ERROR "too many requests per second" + ELSE + IF EXISTS(ip) == FALSE + MULTI + RPUSH(ip,ip) + EXPIRE(ip,1) + EXEC + ELSE + RPUSHX(ip,ip) + END + PERFORM_API_CALL() + END + +The `RPUSHX` command only pushes the element if the key already exists. + +Note that we have a race here, but it is not a problem: `EXISTS` may return false but the key may be created by another client before we create it inside the +`MULTI`/`EXEC` block. However this race will just miss an API call under rare +conditons, so the rate limiting will still work correctly. From f9c49f0dbd6a8ebfa9dc89488b74973755e0e9db Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 16 Mar 2012 10:22:03 +0100 Subject: [PATCH 0053/2880] Lua script fixed in INCR command man page. --- commands/incr.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands/incr.md b/commands/incr.md index 470e9de466..cd0ad6fe34 100644 --- a/commands/incr.md +++ b/commands/incr.md @@ -113,9 +113,9 @@ Lua script that is send using the `EVAL` command (only available since Redis version 2.6). local current - current = redis.incr(KEYS[1]) + current = redis.call("incr",KEYS[1]) if tonumber(current) == 1 then - redis.expire(KEYS[1],1) + redis.call("expire",KEYS[1],1) end There is a different way to fix this issue without using scripting, but using From af6136434b57b68a5711727f58805aaed30ecc0f Mon Sep 17 00:00:00 2001 From: Jan-Erik Rediger Date: Fri, 16 Mar 2012 18:02:22 +0100 Subject: [PATCH 0054/2880] Newline missing, list was not correctly displayed. --- commands/incr.md | 1 + 1 file changed, 1 insertion(+) diff --git a/commands/incr.md b/commands/incr.md index cd0ad6fe34..902d5b83f4 100644 --- a/commands/incr.md +++ b/commands/incr.md @@ -40,6 +40,7 @@ performs a page view, creating the key name concatenating the User ID and a string representing the current date. 
This simple pattern can be extended in may ways: + * It is possible to use `INCR` and `EXPIRE` together at every page view to have a counter counting only the latest N page views separated by less than the specified amount of seconds. * A client may use GETSET in order to atomically get the current counter value and reset it to zero. * Using other atomic increment/decrement commands like `DECR` or `INCRBY` it is possible to handle values that may get bigger or smaller depending on the operations performed by the user. Imagine for instance the score of different users in an online game. From 61926bb7a032266654093cd4c00fb6ff2e4cea6b Mon Sep 17 00:00:00 2001 From: Michel Martens Date: Fri, 16 Mar 2012 14:16:56 -0300 Subject: [PATCH 0055/2880] Fix typo. --- commands/incr.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/incr.md b/commands/incr.md index 902d5b83f4..707f7a4936 100644 --- a/commands/incr.md +++ b/commands/incr.md @@ -39,7 +39,7 @@ To do so the web application may simply increment a key every time the user performs a page view, creating the key name concatenating the User ID and a string representing the current date. -This simple pattern can be extended in may ways: +This simple pattern can be extended in many ways: * It is possible to use `INCR` and `EXPIRE` together at every page view to have a counter counting only the latest N page views separated by less than the specified amount of seconds. * A client may use GETSET in order to atomically get the current counter value and reset it to zero. 
From d0465017af5139a6345941a86fcc859e6562d6c7 Mon Sep 17 00:00:00 2001 From: quiver Date: Tue, 20 Mar 2012 00:04:09 +0900 Subject: [PATCH 0056/2880] fix typo --- topics/data-types-intro.md | 2 +- topics/debugging.md | 2 +- topics/faq.md | 2 +- topics/persistence.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md index 425d89079e..5be69f87bb 100644 --- a/topics/data-types-intro.md +++ b/topics/data-types-intro.md @@ -323,7 +323,7 @@ fortunately, and this is the sane version: returned 123456). * Finally associate this new ID to our tag with *SETNX tag:b840fc02d524045429941cc15f59e41cb7be6c52:id 123456*. By using SETNX if a - different client was faster than this one the key wil not be setted. Not + different client was faster than this one the key will not be setted. Not only, SETNX returns 1 if the key is set, 0 otherwise. So... let's add a final step to our computation. * If SETNX returned 1 (We set the key) return 123456 to the caller, it's our diff --git a/topics/debugging.md b/topics/debugging.md index c6fcc1be8b..dae05dfbf9 100644 --- a/topics/debugging.md +++ b/topics/debugging.md @@ -78,7 +78,7 @@ In the above example the process ID is **58414**. For example: gdb /usr/local/bin/redis-server 58414 -GDB will start and will attach to the running server printing something like the followig: +GDB will start and will attach to the running server printing something like the following: Reading symbols for shared libraries + done 0x00007fff8d4797e6 in epoll_wait () diff --git a/topics/faq.md b/topics/faq.md index e20e8a442e..f0178642e8 100644 --- a/topics/faq.md +++ b/topics/faq.md @@ -20,7 +20,7 @@ To give you an example: 1 Million keys with the key being the natural numbers fr something like 16MB, this is expected because with small keys and values there is a lot of overhead. 
Memcached will perform similarly, but a bit better as Redis has more overhead (type information, refcount and so forth) to represent -differnet kinds of objects. +different kinds of objects. With large keys/values the ratio is much better of course. diff --git a/topics/persistence.md b/topics/persistence.md index 2f52b6cee1..e5388bc7e2 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -228,7 +228,7 @@ from doing heavy disk I/O at the same time. When snapshotting is in progress and the user explicitly requests a log rewrite operation using BGREWRITEAOF the server will reply with an OK -status code telling the user the operation is scheduled, and the rewirte +status code telling the user the operation is scheduled, and the rewrite will start once the snapshotting is completed. In the case both AOF and RDB persistence are enabled and Redis restarts the From 09c5db2dcbcdb74960cd82e7692d7ae0900d9847 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 20 Mar 2012 09:51:04 +0100 Subject: [PATCH 0057/2880] Capital strings inside backticks are automatically translated into command links, so all the explicit links are now removed. --- commands/append.md | 6 +++--- commands/blpop.md | 2 +- commands/brpop.md | 4 ++-- commands/brpoplpush.md | 10 +++++----- commands/config get.md | 2 +- commands/config resetstat.md | 2 +- commands/debug object.md | 2 +- commands/decr.md | 2 +- commands/expireat.md | 2 +- commands/rpoplpush.md | 10 +++++----- commands/slowlog.md | 3 +-- 11 files changed, 22 insertions(+), 23 deletions(-) diff --git a/commands/append.md b/commands/append.md index 74530c60d7..5c1791212d 100644 --- a/commands/append.md +++ b/commands/append.md @@ -32,9 +32,9 @@ Every time a new sample arrives we can store it using the command Accessing to individual elements in the time serie is not hard: -* [STRLEN](/commands/strlen) can be used in order to obtain the number of samples. -* [GETRANGE](/commands/getrange) allows for random access of elements. 
If our time series have an associated time information we can easily implement a binary search to get range combining `GETRANGE` with the Lua scripting engine available in Redis 2.6. -* [SETRANGE](/commands/setrange) can be used to overwrite an existing time serie. +* `STRLEN` can be used in order to obtain the number of samples. +* `GETRANGE` allows for random access of elements. If our time series have an associated time information we can easily implement a binary search to get range combining `GETRANGE` with the Lua scripting engine available in Redis 2.6. +* `SETRANGE` can be used to overwrite an existing time serie. The limitations of this pattern is that we are forced into an append-only mode of operation, there is no way to cut the time series to a given size easily because Redis currently lacks a command able to trim string objects. However the space efficiency of time series stored in this way is remarkable. diff --git a/commands/blpop.md b/commands/blpop.md index cde4abe595..fe58379548 100644 --- a/commands/blpop.md +++ b/commands/blpop.md @@ -84,7 +84,7 @@ Using blocking list operations it is possible to mount different blocking primitives. For instance for some application you may need to block waiting for elements into a Redis Set, so that as far as a new element is added to the Set, it is possible to retrieve it without resort to polling. -This would require a blocking version of [SPOP](/commands/spop) that is +This would require a blocking version of `SPOP` that is not available, but using blocking list operations we can easily accomplish this task. diff --git a/commands/brpop.md b/commands/brpop.md index 985b75ff22..4b6ac14c87 100644 --- a/commands/brpop.md +++ b/commands/brpop.md @@ -4,13 +4,13 @@ O(1) `BRPOP` is a blocking list pop primitive. 
It is the blocking version of -[RPOP](/commands/rpop) because it blocks the connection when there are no +`RPOP` because it blocks the connection when there are no elements to pop from any of the given lists. An element is popped from the tail of the first list that is non-empty, with the given keys being checked in the order that they are given. See the [BLPOP documentation](/commands/blpop) for the exact semantics, since -`BRPOP` is identical to [BLPOP](/commands/blpop) with the only difference +`BRPOP` is identical to `BLPOP` with the only difference being that it pops elements from the tail of a list instead of popping from the head. diff --git a/commands/brpoplpush.md b/commands/brpoplpush.md index cbe2a17b7c..cac71982a2 100644 --- a/commands/brpoplpush.md +++ b/commands/brpoplpush.md @@ -2,12 +2,12 @@ O(1). -`BRPOPLPUSH` is the blocking variant of [RPOPLPUSH](/commands/rpoplpush). +`BRPOPLPUSH` is the blocking variant of `RPOPLPUSH`. When `source` contains elements, this command behaves exactly like -[RPOPLPUSH](/commands/rpoplpush). When `source` is empty, Redis will block +`RPOPLPUSH`. When `source` is empty, Redis will block the connection until another client pushes to it or until `timeout` is reached. A `timeout` of zero can be used to block indefinitely. -See [RPOPLPUSH](/commands/rpoplpush) for more information. +See `RPOPLPUSH` for more information. @return @@ -17,9 +17,9 @@ See [RPOPLPUSH](/commands/rpoplpush) for more information. Pattern: Reliable queue --- -Please see the pattern description in the [RPOPLPUSH](/commands/rpoplpush) documentation. +Please see the pattern description in the `RPOPLPUSH` documentation. Pattern: Circular list --- -Please see the pattern description in the [RPOPLPUSH](/commands/rpoplpush) documentation. +Please see the pattern description in the `RPOPLPUSH` documentation. 
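The key-selection rule quoted in the BRPOP diff above (pop from the tail of the first non-empty list, checking keys in the order they are given) can be sketched with plain Python lists. This is an illustrative model of the selection logic only, not the real client API; the key names are made up:

```python
def brpop_once(keys, lists):
    """One non-blocking step of BRPOP's selection rule.

    `lists` maps key -> Python list (head at index 0, tail at the end).
    Keys are checked in the order given; the first non-empty list has
    its tail element popped. Returning None models "BRPOP would block".
    """
    for key in keys:
        if lists.get(key):
            return key, lists[key].pop()
    return None

# "queue:a" is empty, so the element comes from the tail of "queue:b".
lists = {"queue:a": [], "queue:b": ["x", "y"]}
print(brpop_once(["queue:a", "queue:b"], lists))  # -> ('queue:b', 'y')
```

BLPOP follows the same key ordering but pops from the head (`pop(0)` in this model) instead of the tail.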
diff --git a/commands/config get.md b/commands/config get.md index fd02a8192b..9efa723029 100644 --- a/commands/config get.md +++ b/commands/config get.md @@ -10,7 +10,7 @@ supported in Redis 2.4, while Redis 2.6 can read the whole configuration of a server using this command. The symmetric command used to alter the configuration at run time is -[CONFIG SET](/commands/config-set). +`CONFIG SET`. `CONFIG GET` takes a single argument, that is glob style pattern. All the configuration parameters matching this parameter are reported as a diff --git a/commands/config resetstat.md b/commands/config resetstat.md index c1312a9e66..7e7a6002d2 100644 --- a/commands/config resetstat.md +++ b/commands/config resetstat.md @@ -2,7 +2,7 @@ O(1). -Resets the statistics reported by Redis using the [INFO](/commands/info) command. +Resets the statistics reported by Redis using the `INFO` command. These are the counters that are reset: diff --git a/commands/debug object.md b/commands/debug object.md index 49ef0b9110..ffc969d8ab 100644 --- a/commands/debug object.md +++ b/commands/debug object.md @@ -1,4 +1,4 @@ `DEBUG OBJECT` is a debugging command that should not be used by clients. -Check the [OBJECT](/commands/object) command instead. +Check the `OBJECT` command instead. @status-reply diff --git a/commands/decr.md b/commands/decr.md index b8bd9813ba..f0645a8781 100644 --- a/commands/decr.md +++ b/commands/decr.md @@ -9,7 +9,7 @@ error is returned if the key contains a value of the wrong type or contains a string that is not representable as integer. This operation is limited to **64 bit signed integers**. -See [INCR](/commands/incr) for extra information on increment/decrement +See `INCR` for extra information on increment/decrement operations. 
@return diff --git a/commands/expireat.md b/commands/expireat.md index 918bc96bb6..312432cefe 100644 --- a/commands/expireat.md +++ b/commands/expireat.md @@ -3,7 +3,7 @@ O(1) -`EXPIREAT` has the same effect and semantic as [EXPIRE](/commands/expire), but +`EXPIREAT` has the same effect and semantic as `EXPIRE`, but instead of specifying the number of seconds representing the TTL (time to live), it takes an absolute [UNIX timestamp][2] (seconds since January 1, 1970). Please for the specific semantics of the commands refer to the [EXPIRE command documentation](/commands/expire). diff --git a/commands/rpoplpush.md b/commands/rpoplpush.md index e48e8b37f2..eb30504a71 100644 --- a/commands/rpoplpush.md +++ b/commands/rpoplpush.md @@ -36,18 +36,18 @@ Pattern: Reliable queue Redis is often used as a messaging server to implement processing of background jobs or other kinds of messaging tasks. A simple form of queue is often obtained pushing values into a list in the producer side, and -waiting for this values in the consumer side using [RPOP](/commadns/rpop) -(using polling), or [BRPOP](/commands/brpop) if the client is better served +waiting for this values in the consumer side using `RPOP` +(using polling), or `BRPOP` if the client is better served by a blocking operation. However in this context the obtained queue is not *reliable* as messages can be lost, for example in the case there is a network problem or if the consumer crashes just after the message is received but it is still to process. -`RPOPLPUSH` (or [BRPOPLPUSH](/commands/brpoplpush) for the blocking variant) +`RPOPLPUSH` (or `BRPOPLPUSH` for the blocking variant) offers a way to avoid this problem: the consumer fetches the message and at the same time pushes it into a *processing* list. It will use the -[LREM](/commands/lrem) command in order to remove the message from the +`LREM` command in order to remove the message from the *processing* list once the message has been processed. 
An additional client may monitor the *processing* list for items that remain @@ -60,7 +60,7 @@ Pattern: Circular list Using `RPOPLPUSH` with the same source and destination key, a client can visit all the elements of an N-elements list, one after the other, in O(N) without transferring the full list from the server to the client using a single -[LRANGE](/commands/lrange) operation. +`LRANGE` operation. The above pattern works even if the following two conditions: * There are multiple clients rotating the list: they'll fetch different elements, until all the elements of the list are visited, and the process restarts. diff --git a/commands/slowlog.md b/commands/slowlog.md index 25939c0404..bf2a1b36c6 100644 --- a/commands/slowlog.md +++ b/commands/slowlog.md @@ -18,8 +18,7 @@ in order to make space. The configuration can be done both editing the redis.conf file or while the server is running using -the [CONFIG GET](/commands/config-get) and [CONFIG SET](/commands/config-set) -commands. +the `CONFIG GET` and `CONFIG SET` commands. 
## Reading the slow log From f4834ad3fa44f5f4ec178d190bf64bcc11cd16e1 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 20 Mar 2012 10:21:05 +0100 Subject: [PATCH 0058/2880] TIME command added --- commands.json | 5 +++++ commands/time.md | 20 ++++++++++++++++++++ 2 files changed, 25 insertions(+) create mode 100644 commands/time.md diff --git a/commands.json b/commands.json index e8bc6f758e..d47962826f 100644 --- a/commands.json +++ b/commands.json @@ -1342,6 +1342,11 @@ "since": "0.07", "group": "server" }, + "TIME": { + "summary": "Return the current server time", + "since": "2.6.0", + "group": "server" + }, "TTL": { "summary": "Get the time to live for a key", "arguments": [ diff --git a/commands/time.md b/commands/time.md new file mode 100644 index 0000000000..0f36631b61 --- /dev/null +++ b/commands/time.md @@ -0,0 +1,20 @@ +@complexity + +O(1) + +The `TIME` command returns the current server time as a two items lists: an unix timestamp and the amount of microseconds already elapsed in the current second. +Basically the interface is very similar to the one of the `gethostbyname` syscall. + +@return + +@multi-bulk-reply, specifically: + +A multi bulk reply containing two elements: +* unix time in seconds. +* microseconds. + +@examples + + @cli + TIME + TIME From 7d0bcd144bc01db573fca93c75cc947a02ca3c66 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 20 Mar 2012 10:22:24 +0100 Subject: [PATCH 0059/2880] Typo fixed. --- commands/time.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/commands/time.md b/commands/time.md index 0f36631b61..afec86936c 100644 --- a/commands/time.md +++ b/commands/time.md @@ -1,6 +1,5 @@ -@complexity +@complexity O(1) -O(1) The `TIME` command returns the current server time as a two items lists: an unix timestamp and the amount of microseconds already elapsed in the current second. Basically the interface is very similar to the one of the `gethostbyname` syscall. 
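The two-element `TIME` reply documented in the patch above (unix time in seconds, plus the microseconds already elapsed in the current second) is easy to combine into a single timestamp on the client side. A minimal sketch; the reply values below are made up for illustration:

```python
def time_reply_to_float(reply):
    # TIME returns a two-element multi bulk reply: unix time in seconds
    # and the microseconds elapsed in the current second, both arriving
    # as strings on the wire.
    seconds, micros = int(reply[0]), int(reply[1])
    return seconds + micros / 1_000_000

# Hypothetical reply values, just to show the conversion.
ts = time_reply_to_float(["1332234089", "500000"])
print(ts)  # -> 1332234089.5
```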
From a780cde5c276f5248da09aa546a4ea9ce953a9ff Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 20 Mar 2012 10:22:49 +0100 Subject: [PATCH 0060/2880] Typo fixed, again. --- commands/time.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/commands/time.md b/commands/time.md index afec86936c..4be7f8046a 100644 --- a/commands/time.md +++ b/commands/time.md @@ -1,4 +1,6 @@ -@complexity O(1) +@complexity + +O(1) The `TIME` command returns the current server time as a two items lists: an unix timestamp and the amount of microseconds already elapsed in the current second. From ab63fdd456b123359878441238194284d61b7308 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 20 Mar 2012 10:29:49 +0100 Subject: [PATCH 0061/2880] TIME command: proper list formatting. --- commands/time.md | 1 + 1 file changed, 1 insertion(+) diff --git a/commands/time.md b/commands/time.md index 4be7f8046a..85c8f61ad1 100644 --- a/commands/time.md +++ b/commands/time.md @@ -11,6 +11,7 @@ Basically the interface is very similar to the one of the `gethostbyname` syscal @multi-bulk-reply, specifically: A multi bulk reply containing two elements: + * unix time in seconds. * microseconds. From e7c3d0a70a980778e9937a15a5a91d06cfad09c6 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 20 Mar 2012 10:46:49 +0100 Subject: [PATCH 0062/2880] SHUTDOWN NOSAVE and SAVE modifiers documented. 
--- commands.json | 14 ++++++++++++++ commands/shutdown.md | 7 +++++++ 2 files changed, 21 insertions(+) diff --git a/commands.json b/commands.json index d47962826f..fffc8ae045 100644 --- a/commands.json +++ b/commands.json @@ -1093,6 +1093,20 @@ }, "SHUTDOWN": { "summary": "Synchronously save the dataset to disk and then shut down the server", + "arguments": [ + { + "name": "NOSAVE", + "type": "enum", + "enum": ["NOSAVE"], + "optional": true + }, + { + "name": "SAVE", + "type": "enum", + "enum": ["SAVE"], + "optional": true + } + ], "since": "0.07", "group": "server" }, diff --git a/commands/shutdown.md b/commands/shutdown.md index a3abdd1d81..e901bf4966 100644 --- a/commands/shutdown.md +++ b/commands/shutdown.md @@ -15,6 +15,13 @@ Note: A Redis instance that is configured for not persisting on disk `SHUTDOWN`, as usually you don't want Redis instances used only for caching to block on when shutting down. +## SAVE and NOSAVE modifiers + +It is possible to specify an optional modifier to alter the behavior of the command. Specifically: + +* **SHUTDOWN SAVE** will force a DB saving operation even if no save points are configured. +* **SHUTDOWN NOSAVE** will prevent a DB saving operation even if one or more save points are configured. (You can think at this variant as an hypothetical **ABORT** command that just stops the server). + @return @status-reply on error. On success nothing is returned since the server From 7c480a2720d75ce6d8599ae628e7772f18f5b0a5 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 20 Mar 2012 10:47:22 +0100 Subject: [PATCH 0063/2880] In TIME doc fixed typo where I wrote gethostbyname instead of gettimeofday... 
--- commands/time.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/time.md b/commands/time.md index 85c8f61ad1..456ed080fb 100644 --- a/commands/time.md +++ b/commands/time.md @@ -4,7 +4,7 @@ O(1) The `TIME` command returns the current server time as a two items lists: an unix timestamp and the amount of microseconds already elapsed in the current second. -Basically the interface is very similar to the one of the `gethostbyname` syscall. +Basically the interface is very similar to the one of the `gettimeofday` syscall. @return From 8eab31ea383c3f14da636fd605a20b529e43338b Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 20 Mar 2012 11:13:23 +0100 Subject: [PATCH 0064/2880] PEXIRE, PTTL, PSETEX added in command.js --- commands.json | 45 +++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 45 insertions(+) diff --git a/commands.json b/commands.json index fffc8ae045..fe41e6c7b2 100644 --- a/commands.json +++ b/commands.json @@ -791,11 +791,45 @@ "since": "2.1.2", "group": "generic" }, + "PEXPIRE": { + "summary": "Set a key's time to live in milliseconds", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "name": "milliseconds", + "type": "integer" + } + ], + "since": "2.6.0", + "group": "generic" + }, "PING": { "summary": "Ping the server", "since": "0.07", "group": "connection" }, + "PSETEX": { + "summary": "Set the value and expiration in milliseconds of a key", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "name": "milliseconds", + "type": "integer" + }, + { + "name": "value", + "type": "string" + } + ], + "since": "2.6.0", + "group": "string" + }, "PSUBSCRIBE": { "summary": "Listen for messages published to channels matching the given patterns", "arguments": [ @@ -808,6 +842,17 @@ "since": "1.3.8", "group": "pubsub" }, + "PTTL": { + "summary": "Get the time to live for a key in milliseconds", + "arguments": [ + { + "name": "key", + "type": "key" + } + ], + "since": "2.6.0", + "group": 
"generic" + }, "PUBLISH": { "summary": "Post a message to a channel", "arguments": [ From 397d41569560edc21d0eac9019a2d17e7a87d0ac Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 20 Mar 2012 11:19:36 +0100 Subject: [PATCH 0065/2880] PTTL, PSETEX, PEXPIRE documented. --- commands/pexpire.md | 19 +++++++++++++++++++ commands/psetex.md | 12 ++++++++++++ commands/pttl.md | 18 ++++++++++++++++++ 3 files changed, 49 insertions(+) create mode 100644 commands/pexpire.md create mode 100644 commands/psetex.md create mode 100644 commands/pttl.md diff --git a/commands/pexpire.md b/commands/pexpire.md new file mode 100644 index 0000000000..9b59fa952e --- /dev/null +++ b/commands/pexpire.md @@ -0,0 +1,19 @@ +@complexity + +O(1) + +This command works exactly like `EXPIRE` but the time to live of the key is +specified in milliseconds instead of seconds. + +@integer-reply, specifically: + +* `1` if the timeout was set. +* `0` if `key` does not exist or the timeout could not be set. + +@examples + + @cli + SET mykey "Hello" + PEXPIRE mykey 1500 + TTL mykey + PTTL mykey diff --git a/commands/psetex.md b/commands/psetex.md new file mode 100644 index 0000000000..add3901b3a --- /dev/null +++ b/commands/psetex.md @@ -0,0 +1,12 @@ +@complexity + +O(1) + +`PSETEX` works exactly like `SETEX` with the sole difference that the expire time is specified in milliseconds instead of seconds. + +@examples + + @cli + PSETEX mykey 1000 "Hello" + PTTL mykey + GET mykey diff --git a/commands/pttl.md b/commands/pttl.md new file mode 100644 index 0000000000..473f7268e3 --- /dev/null +++ b/commands/pttl.md @@ -0,0 +1,18 @@ +@complexity + +O(1) + + +Like `TTL` this comand returns the remaining time to live of a key that has an expire set, with the sole difference that `TTL` returns the amount of remaining time in secodns while `PTTL` returns it in milliseconds. + +@return + +@integer-reply: Time to live in milliseconds or `-1` when `key` does not exist or does not have a timeout. 
+ +@examples + + @cli + SET mykey "Hello" + EXPIRE mykey 1 + PTTL mykey + From ec8f532d7f7a35892580804a439973287f41d596 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 20 Mar 2012 11:28:10 +0100 Subject: [PATCH 0066/2880] PEXPIREAT documented. --- commands.json | 15 +++++++++++++++ commands/pexpireat.md | 21 +++++++++++++++++++++ 2 files changed, 36 insertions(+) create mode 100644 commands/pexpireat.md diff --git a/commands.json b/commands.json index fe41e6c7b2..99d8c6b764 100644 --- a/commands.json +++ b/commands.json @@ -806,6 +806,21 @@ "since": "2.6.0", "group": "generic" }, + "PEXPIREAT": { + "summary": "Set the expiration for a key as a UNIX timestamp specified in milliseconds", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "name": "milliseconds timestamp", + "type": "posix time" + } + ], + "since": "2.6.0", + "group": "generic" + }, "PING": { "summary": "Ping the server", "since": "0.07", diff --git a/commands/pexpireat.md b/commands/pexpireat.md new file mode 100644 index 0000000000..e40b4ccd3d --- /dev/null +++ b/commands/pexpireat.md @@ -0,0 +1,21 @@ +@complexity + +O(1) + + +`PEXPIREAT` has the same effect and semantic as `EXPIREAT`, but the unix time at which the key will expire is specified in milliseconds instead of seconds. + +@return + +@integer-reply, specifically: + +* `1` if the timeout was set. +* `0` if `key` does not exist or the timeout could not be set (see: `EXPIRE`). + +@examples + + @cli + SET mykey "Hello" + PEXPIREAT mykey 1555555555005 + TTL mykey + PTTL mykey From ca6c06963eb64c7f345e047e770614dfbeb30991 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 20 Mar 2012 12:56:47 +0100 Subject: [PATCH 0067/2880] Commands "availabie since" field moved to the first stable release including the command. 
--- commands.json | 244 +++++++++++++++++++++++++------------------------- 1 file changed, 122 insertions(+), 122 deletions(-) diff --git a/commands.json b/commands.json index 99d8c6b764..b8dc854a23 100644 --- a/commands.json +++ b/commands.json @@ -11,7 +11,7 @@ "type": "string" } ], - "since": "1.3.3", + "since": "2.0.0", "group": "string" }, "AUTH": { @@ -22,17 +22,17 @@ "type": "string" } ], - "since": "0.08", + "since": "1.0.0", "group": "connection" }, "BGREWRITEAOF": { "summary": "Asynchronously rewrite the append-only file", - "since": "1.07", + "since": "1.0.0", "group": "server" }, "BGSAVE": { "summary": "Asynchronously save the dataset to disk", - "since": "0.07", + "since": "1.0.0", "group": "server" }, "BLPOP": { @@ -48,7 +48,7 @@ "type": "integer" } ], - "since": "1.3.1", + "since": "2.0.0", "group": "list" }, "BRPOP": { @@ -64,7 +64,7 @@ "type": "integer" } ], - "since": "1.3.1", + "since": "2.0.0", "group": "list" }, "BRPOPLPUSH": { @@ -83,7 +83,7 @@ "type": "integer" } ], - "since": "2.1.7", + "since": "2.2.0", "group": "list" }, "CONFIG GET": { @@ -94,7 +94,7 @@ "type": "string" } ], - "since": "2.0", + "since": "2.0.0", "group": "server" }, "CONFIG SET": { @@ -109,17 +109,17 @@ "type": "string" } ], - "since": "2.0", + "since": "2.0.0", "group": "server" }, "CONFIG RESETSTAT": { "summary": "Reset the stats returned by INFO", - "since": "2.0", + "since": "2.0.0", "group": "server" }, "DBSIZE": { "summary": "Return the number of keys in the selected database", - "since": "0.07", + "since": "1.0.0", "group": "server" }, "DEBUG OBJECT": { @@ -130,12 +130,12 @@ "type": "key" } ], - "since": "0.101", + "since": "1.0.0", "group": "server" }, "DEBUG SEGFAULT": { "summary": "Make the server crash", - "since": "0.101", + "since": "1.0.0", "group": "server" }, "DECR": { @@ -146,7 +146,7 @@ "type": "key" } ], - "since": "0.07", + "since": "1.0.0", "group": "string" }, "DECRBY": { @@ -161,7 +161,7 @@ "type": "integer" } ], - "since": "0.07", + "since": 
"1.0.0", "group": "string" }, "DEL": { @@ -173,12 +173,12 @@ "multiple": true } ], - "since": "0.07", + "since": "1.0.0", "group": "generic" }, "DISCARD": { "summary": "Discard all commands issued after MULTI", - "since": "1.3.3", + "since": "2.0.0", "group": "transactions" }, "ECHO": { @@ -189,12 +189,12 @@ "type": "string" } ], - "since": "0.07", + "since": "1.0.0", "group": "connection" }, "EXEC": { "summary": "Execute all commands issued after MULTI", - "since": "1.1.95", + "since": "1.2.0", "group": "transactions" }, "EXISTS": { @@ -205,7 +205,7 @@ "type": "key" } ], - "since": "0.07", + "since": "1.0.0", "group": "generic" }, "EXPIRE": { @@ -220,7 +220,7 @@ "type": "integer" } ], - "since": "0.09", + "since": "1.0.0", "group": "generic" }, "EXPIREAT": { @@ -235,17 +235,17 @@ "type": "posix time" } ], - "since": "1.1", + "since": "1.2.0", "group": "generic" }, "FLUSHALL": { "summary": "Remove all keys from all databases", - "since": "0.07", + "since": "1.0.0", "group": "server" }, "FLUSHDB": { "summary": "Remove all keys from the current database", - "since": "0.07", + "since": "1.0.0", "group": "server" }, "GET": { @@ -256,7 +256,7 @@ "type": "key" } ], - "since": "0.07", + "since": "1.0.0", "group": "string" }, "GETBIT": { @@ -271,7 +271,7 @@ "type": "integer" } ], - "since": "2.1.8", + "since": "2.2.0", "group": "string" }, "GETRANGE": { @@ -290,7 +290,7 @@ "type": "integer" } ], - "since": "1.3.4", + "since": "2.4.0", "group": "string" }, "GETSET": { @@ -305,7 +305,7 @@ "type": "string" } ], - "since": "0.091", + "since": "1.0.0", "group": "string" }, "HDEL": { @@ -321,7 +321,7 @@ "multiple": true } ], - "since": "1.3.10", + "since": "2.0.0", "group": "hash" }, "HEXISTS": { @@ -336,7 +336,7 @@ "type": "string" } ], - "since": "1.3.10", + "since": "2.0.0", "group": "hash" }, "HGET": { @@ -351,7 +351,7 @@ "type": "string" } ], - "since": "1.3.10", + "since": "2.0.0", "group": "hash" }, "HGETALL": { @@ -362,7 +362,7 @@ "type": "key" } ], - "since": "1.3.10", 
+ "since": "2.0.0", "group": "hash" }, "HINCRBY": { @@ -381,7 +381,7 @@ "type": "integer" } ], - "since": "1.3.10", + "since": "2.0.0", "group": "hash" }, "HKEYS": { @@ -392,7 +392,7 @@ "type": "key" } ], - "since": "1.3.10", + "since": "2.0.0", "group": "hash" }, "HLEN": { @@ -403,7 +403,7 @@ "type": "key" } ], - "since": "1.3.10", + "since": "2.0.0", "group": "hash" }, "HMGET": { @@ -419,7 +419,7 @@ "multiple": true } ], - "since": "1.3.10", + "since": "2.0.0", "group": "hash" }, "HMSET": { @@ -435,7 +435,7 @@ "multiple": true } ], - "since": "1.3.8", + "since": "2.0.0", "group": "hash" }, "HSET": { @@ -454,7 +454,7 @@ "type": "string" } ], - "since": "1.3.10", + "since": "2.0.0", "group": "hash" }, "HSETNX": { @@ -473,7 +473,7 @@ "type": "string" } ], - "since": "1.3.8", + "since": "2.0.0", "group": "hash" }, "HVALS": { @@ -484,7 +484,7 @@ "type": "key" } ], - "since": "1.3.10", + "since": "2.0.0", "group": "hash" }, "INCR": { @@ -495,7 +495,7 @@ "type": "key" } ], - "since": "0.07", + "since": "1.0.0", "group": "string" }, "INCRBY": { @@ -510,12 +510,12 @@ "type": "integer" } ], - "since": "0.07", + "since": "1.0.0", "group": "string" }, "INFO": { "summary": "Get information and statistics about the server", - "since": "0.07", + "since": "1.0.0", "group": "server" }, "KEYS": { @@ -526,12 +526,12 @@ "type": "pattern" } ], - "since": "0.07", + "since": "1.0.0", "group": "generic" }, "LASTSAVE": { "summary": "Get the UNIX time stamp of the last successful save to disk", - "since": "0.07", + "since": "1.0.0", "group": "server" }, "LINDEX": { @@ -546,7 +546,7 @@ "type": "integer" } ], - "since": "0.07", + "since": "1.0.0", "group": "list" }, "LINSERT": { @@ -570,7 +570,7 @@ "type": "string" } ], - "since": "2.1.1", + "since": "2.2.0", "group": "list" }, "LLEN": { @@ -581,7 +581,7 @@ "type": "key" } ], - "since": "0.07", + "since": "1.0.0", "group": "list" }, "LPOP": { @@ -592,7 +592,7 @@ "type": "key" } ], - "since": "0.07", + "since": "1.0.0", "group": "list" }, 
"LPUSH": { @@ -608,7 +608,7 @@ "multiple": true } ], - "since": "0.07", + "since": "1.0.0", "group": "list" }, "LPUSHX": { @@ -623,7 +623,7 @@ "type": "string" } ], - "since": "2.1.1", + "since": "2.2.0", "group": "list" }, "LRANGE": { @@ -642,7 +642,7 @@ "type": "integer" } ], - "since": "0.07", + "since": "1.0.0", "group": "list" }, "LREM": { @@ -661,7 +661,7 @@ "type": "string" } ], - "since": "0.07", + "since": "1.0.0", "group": "list" }, "LSET": { @@ -680,7 +680,7 @@ "type": "string" } ], - "since": "0.07", + "since": "1.0.0", "group": "list" }, "LTRIM": { @@ -699,7 +699,7 @@ "type": "integer" } ], - "since": "0.07", + "since": "1.0.0", "group": "list" }, "MGET": { @@ -711,12 +711,12 @@ "multiple": true } ], - "since": "0.07", + "since": "1.0.0", "group": "string" }, "MONITOR": { "summary": "Listen for all requests received by the server in real time", - "since": "0.07", + "since": "1.0.0", "group": "server" }, "MOVE": { @@ -731,7 +731,7 @@ "type": "integer" } ], - "since": "0.07", + "since": "1.0.0", "group": "generic" }, "MSET": { @@ -743,7 +743,7 @@ "multiple": true } ], - "since": "1.001", + "since": "1.0.1", "group": "string" }, "MSETNX": { @@ -755,12 +755,12 @@ "multiple": true } ], - "since": "1.001", + "since": "1.0.1", "group": "string" }, "MULTI": { "summary": "Mark the start of a transaction block", - "since": "1.1.95", + "since": "1.2.0", "group": "transactions" }, "OBJECT": { @@ -788,7 +788,7 @@ "type": "key" } ], - "since": "2.1.2", + "since": "2.2.0", "group": "generic" }, "PEXPIRE": { @@ -823,7 +823,7 @@ }, "PING": { "summary": "Ping the server", - "since": "0.07", + "since": "1.0.0", "group": "connection" }, "PSETEX": { @@ -854,7 +854,7 @@ "multiple": true } ], - "since": "1.3.8", + "since": "2.0.0", "group": "pubsub" }, "PTTL": { @@ -880,7 +880,7 @@ "type": "string" } ], - "since": "1.3.8", + "since": "2.0.0", "group": "pubsub" }, "PUNSUBSCRIBE": { @@ -893,17 +893,17 @@ "multiple": true } ], - "since": "1.3.8", + "since": "2.0.0", "group": 
"pubsub" }, "QUIT": { "summary": "Close the connection", - "since": "0.07", + "since": "1.0.0", "group": "connection" }, "RANDOMKEY": { "summary": "Return a random key from the keyspace", - "since": "0.07", + "since": "1.0.0", "group": "generic" }, "RENAME": { @@ -918,7 +918,7 @@ "type": "key" } ], - "since": "0.07", + "since": "1.0.0", "group": "generic" }, "RENAMENX": { @@ -933,7 +933,7 @@ "type": "key" } ], - "since": "0.07", + "since": "1.0.0", "group": "generic" }, "RPOP": { @@ -944,7 +944,7 @@ "type": "key" } ], - "since": "0.07", + "since": "1.0.0", "group": "list" }, "RPOPLPUSH": { @@ -959,7 +959,7 @@ "type": "key" } ], - "since": "1.1", + "since": "1.2.0", "group": "list" }, "RPUSH": { @@ -975,7 +975,7 @@ "multiple": true } ], - "since": "0.07", + "since": "1.0.0", "group": "list" }, "RPUSHX": { @@ -990,7 +990,7 @@ "type": "string" } ], - "since": "2.1.1", + "since": "2.2.0", "group": "list" }, "SADD": { @@ -1006,12 +1006,12 @@ "multiple": true } ], - "since": "0.07", + "since": "1.0.0", "group": "set" }, "SAVE": { "summary": "Synchronously save the dataset to disk", - "since": "0.07", + "since": "1.0.0", "group": "server" }, "SCARD": { @@ -1022,7 +1022,7 @@ "type": "key" } ], - "since": "0.07", + "since": "1.0.0", "group": "set" }, "SDIFF": { @@ -1034,7 +1034,7 @@ "multiple": true } ], - "since": "0.100", + "since": "1.0.0", "group": "set" }, "SDIFFSTORE": { @@ -1050,7 +1050,7 @@ "multiple": true } ], - "since": "0.100", + "since": "1.0.0", "group": "set" }, "SELECT": { @@ -1061,7 +1061,7 @@ "type": "integer" } ], - "since": "0.07", + "since": "1.0.0", "group": "connection" }, "SET": { @@ -1076,7 +1076,7 @@ "type": "string" } ], - "since": "0.07", + "since": "1.0.0", "group": "string" }, "SETBIT": { @@ -1095,7 +1095,7 @@ "type": "string" } ], - "since": "2.1.8", + "since": "2.2.0", "group": "string" }, "SETEX": { @@ -1114,7 +1114,7 @@ "type": "string" } ], - "since": "1.3.10", + "since": "2.0.0", "group": "string" }, "SETNX": { @@ -1129,7 +1129,7 @@ 
"type": "string" } ], - "since": "0.07", + "since": "1.0.0", "group": "string" }, "SETRANGE": { @@ -1148,7 +1148,7 @@ "type": "string" } ], - "since": "2.1.8", + "since": "2.2.0", "group": "string" }, "SHUTDOWN": { @@ -1167,7 +1167,7 @@ "optional": true } ], - "since": "0.07", + "since": "1.0.0", "group": "server" }, "SINTER": { @@ -1179,7 +1179,7 @@ "multiple": true } ], - "since": "0.07", + "since": "1.0.0", "group": "set" }, "SINTERSTORE": { @@ -1195,7 +1195,7 @@ "multiple": true } ], - "since": "0.07", + "since": "1.0.0", "group": "set" }, "SISMEMBER": { @@ -1210,7 +1210,7 @@ "type": "string" } ], - "since": "0.07", + "since": "1.0.0", "group": "set" }, "SLAVEOF": { @@ -1225,7 +1225,7 @@ "type": "string" } ], - "since": "0.100", + "since": "1.0.0", "group": "server" }, "SLOWLOG": { @@ -1252,7 +1252,7 @@ "type": "key" } ], - "since": "0.07", + "since": "1.0.0", "group": "set" }, "SMOVE": { @@ -1271,7 +1271,7 @@ "type": "string" } ], - "since": "0.091", + "since": "1.0.0", "group": "set" }, "SORT": { @@ -1319,7 +1319,7 @@ "optional": true } ], - "since": "0.07", + "since": "1.0.0", "group": "generic" }, "SPOP": { @@ -1330,7 +1330,7 @@ "type": "key" } ], - "since": "0.101", + "since": "1.0.0", "group": "set" }, "SRANDMEMBER": { @@ -1341,7 +1341,7 @@ "type": "key" } ], - "since": "1.001", + "since": "1.0.0", "group": "set" }, "SREM": { @@ -1357,7 +1357,7 @@ "multiple": true } ], - "since": "0.07", + "since": "1.0.0", "group": "set" }, "STRLEN": { @@ -1368,7 +1368,7 @@ "type": "key" } ], - "since": "2.1.2", + "since": "2.2.0", "group": "string" }, "SUBSCRIBE": { @@ -1380,7 +1380,7 @@ "multiple": true } ], - "since": "1.3.8", + "since": "2.0.0", "group": "pubsub" }, "SUNION": { @@ -1392,7 +1392,7 @@ "multiple": true } ], - "since": "0.091", + "since": "1.0.0", "group": "set" }, "SUNIONSTORE": { @@ -1408,12 +1408,12 @@ "multiple": true } ], - "since": "0.091", + "since": "1.0.0", "group": "set" }, "SYNC": { "summary": "Internal command used for replication", - 
"since": "0.07", + "since": "1.0.0", "group": "server" }, "TIME": { @@ -1429,7 +1429,7 @@ "type": "key" } ], - "since": "0.100", + "since": "1.0.0", "group": "generic" }, "TYPE": { @@ -1440,7 +1440,7 @@ "type": "key" } ], - "since": "0.07", + "since": "1.0.0", "group": "generic" }, "UNSUBSCRIBE": { @@ -1453,12 +1453,12 @@ "multiple": true } ], - "since": "1.3.8", + "since": "2.0.0", "group": "pubsub" }, "UNWATCH": { "summary": "Forget about all watched keys", - "since": "2.1.0", + "since": "2.2.0", "group": "transactions" }, "WATCH": { @@ -1470,7 +1470,7 @@ "multiple": true } ], - "since": "2.1.0", + "since": "2.2.0", "group": "transactions" }, "ZADD": { @@ -1499,7 +1499,7 @@ "optional": true } ], - "since": "1.1", + "since": "1.2.0", "group": "sorted_set" }, "ZCARD": { @@ -1510,7 +1510,7 @@ "type": "key" } ], - "since": "1.1", + "since": "1.2.0", "group": "sorted_set" }, "ZCOUNT": { @@ -1529,7 +1529,7 @@ "type": "double" } ], - "since": "1.3.3", + "since": "2.0.0", "group": "sorted_set" }, "ZINCRBY": { @@ -1548,7 +1548,7 @@ "type": "string" } ], - "since": "1.1", + "since": "1.2.0", "group": "sorted_set" }, "ZINTERSTORE": { @@ -1582,7 +1582,7 @@ "optional": true } ], - "since": "1.3.10", + "since": "2.0.0", "group": "sorted_set" }, "ZRANGE": { @@ -1607,7 +1607,7 @@ "optional": true } ], - "since": "1.1", + "since": "1.2.0", "group": "sorted_set" }, "ZRANGEBYSCORE": { @@ -1638,7 +1638,7 @@ "optional": true } ], - "since": "1.050", + "since": "1.0.5", "group": "sorted_set" }, "ZRANK": { @@ -1653,7 +1653,7 @@ "type": "string" } ], - "since": "1.3.4", + "since": "2.0.0", "group": "sorted_set" }, "ZREM": { @@ -1669,7 +1669,7 @@ "multiple": true } ], - "since": "1.1", + "since": "1.2.0", "group": "sorted_set" }, "ZREMRANGEBYRANK": { @@ -1688,7 +1688,7 @@ "type": "integer" } ], - "since": "1.3.4", + "since": "2.0.0", "group": "sorted_set" }, "ZREMRANGEBYSCORE": { @@ -1707,7 +1707,7 @@ "type": "double" } ], - "since": "1.1", + "since": "1.2.0", "group": "sorted_set" }, 
"ZREVRANGE": { @@ -1732,7 +1732,7 @@ "optional": true } ], - "since": "1.1", + "since": "1.2.0", "group": "sorted_set" }, "ZREVRANGEBYSCORE": { @@ -1763,7 +1763,7 @@ "optional": true } ], - "since": "2.1.6", + "since": "2.2.0", "group": "sorted_set" }, "ZREVRANK": { @@ -1778,7 +1778,7 @@ "type": "string" } ], - "since": "1.3.4", + "since": "2.0.0", "group": "sorted_set" }, "ZSCORE": { @@ -1793,7 +1793,7 @@ "type": "string" } ], - "since": "1.1", + "since": "1.2.0", "group": "sorted_set" }, "ZUNIONSTORE": { @@ -1827,7 +1827,7 @@ "optional": true } ], - "since": "1.3.10", + "since": "2.0.0", "group": "sorted_set" }, "EVAL": { From 33ae408373e774eeb1e04f04fcf47a184951af49 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 20 Mar 2012 15:16:33 +0100 Subject: [PATCH 0068/2880] typo fixed. --- commands/pttl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/pttl.md b/commands/pttl.md index 473f7268e3..93ad4a623e 100644 --- a/commands/pttl.md +++ b/commands/pttl.md @@ -3,7 +3,7 @@ O(1) -Like `TTL` this comand returns the remaining time to live of a key that has an expire set, with the sole difference that `TTL` returns the amount of remaining time in secodns while `PTTL` returns it in milliseconds. +Like `TTL` this command returns the remaining time to live of a key that has an expire set, with the sole difference that `TTL` returns the amount of remaining time in seconds while `PTTL` returns it in milliseconds. @return From ae60889eca2ab134eaac21baa97ad24979f3609b Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 20 Mar 2012 23:11:51 +0100 Subject: [PATCH 0069/2880] EVAL in the right position in commands.json.
--- commands.json | 50 +++++++++++++++++++++++++------------------------- 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/commands.json b/commands.json index b8dc854a23..8bfc8d888b 100644 --- a/commands.json +++ b/commands.json @@ -192,6 +192,31 @@ "since": "1.0.0", "group": "connection" }, + "EVAL": { + "summary": "Execute a Lua script server side", + "arguments": [ + { + "name": "script", + "type": "string" + }, + { + "name": "numkeys", + "type": "integer" + }, + { + "name": "key", + "type": "key", + "multiple": true + }, + { + "name": "arg", + "type": "string", + "multiple": true + } + ], + "since": "2.6.0", + "group": "generic" + }, "EXEC": { "summary": "Execute all commands issued after MULTI", "since": "1.2.0", @@ -1829,30 +1854,5 @@ ], "since": "2.0.0", "group": "sorted_set" - }, - "EVAL": { - "summary": "Execute a Lua script server side", - "arguments": [ - { - "name": "script", - "type": "string" - }, - { - "name": "numkeys", - "type": "integer" - }, - { - "name": "key", - "type": "key", - "multiple": true - }, - { - "name": "arg", - "type": "string", - "multiple": true - } - ], - "since": "2.6.0", - "group": "generic" } } From 441c1c5ff9365ac88443f2c6b6f079e39d7d825c Mon Sep 17 00:00:00 2001 From: Michel Martens Date: Tue, 20 Mar 2012 15:19:22 -0300 Subject: [PATCH 0070/2880] Move complexity to commands.json. 
--- README.md | 16 ++---- commands.json | 100 +++++++++++++++++++++++++++++++++++ commands/append.md | 7 --- commands/auth.md | 2 - commands/blpop.md | 5 -- commands/brpop.md | 5 -- commands/brpoplpush.md | 4 -- commands/config get.md | 6 --- commands/config resetstat.md | 4 -- commands/config set.md | 6 --- commands/decr.md | 5 -- commands/decrby.md | 5 -- commands/del.md | 7 --- commands/echo.md | 2 - commands/eval.md | 5 -- commands/exists.md | 5 -- commands/expire.md | 5 -- commands/expireat.md | 5 -- commands/flushall.md | 2 - commands/flushdb.md | 2 - commands/get.md | 5 -- commands/getbit.md | 5 -- commands/getrange.md | 6 --- commands/getset.md | 5 -- commands/hdel.md | 5 -- commands/hexists.md | 5 -- commands/hget.md | 5 -- commands/hgetall.md | 4 -- commands/hincrby.md | 5 -- commands/hkeys.md | 4 -- commands/hlen.md | 5 -- commands/hmget.md | 4 -- commands/hmset.md | 4 -- commands/hset.md | 5 -- commands/hsetnx.md | 5 -- commands/hvals.md | 4 -- commands/incr.md | 5 -- commands/incrby.md | 5 -- commands/keys.md | 5 -- commands/lastsave.md | 2 - commands/lindex.md | 6 --- commands/linsert.md | 6 --- commands/llen.md | 5 -- commands/lpop.md | 5 -- commands/lpush.md | 5 -- commands/lpushx.md | 5 -- commands/lrange.md | 5 -- commands/lrem.md | 4 -- commands/lset.md | 5 -- commands/ltrim.md | 4 -- commands/mget.md | 5 -- commands/monitor.md | 4 +- commands/move.md | 5 -- commands/mset.md | 5 -- commands/msetnx.md | 5 -- commands/object.md | 4 -- commands/persist.md | 5 -- commands/ping.md | 2 - commands/psubscribe.md | 4 -- commands/publish.md | 6 --- commands/punsubscribe.md | 6 --- commands/quit.md | 2 - commands/randomkey.md | 5 -- commands/rename.md | 5 -- commands/renamenx.md | 5 -- commands/rpop.md | 5 -- commands/rpoplpush.md | 5 -- commands/rpush.md | 5 -- commands/rpushx.md | 5 -- commands/sadd.md | 5 -- commands/save.md | 6 +-- commands/scard.md | 5 -- commands/sdiff.md | 4 -- commands/sdiffstore.md | 4 -- commands/select.md | 2 - commands/set.md | 
5 -- commands/setbit.md | 5 -- commands/setex.md | 5 -- commands/setnx.md | 5 -- commands/setrange.md | 6 --- commands/sinter.md | 5 -- commands/sinterstore.md | 5 -- commands/sismember.md | 5 -- commands/smembers.md | 4 -- commands/smove.md | 5 -- commands/sort.md | 4 -- commands/spop.md | 5 -- commands/srandmember.md | 5 -- commands/srem.md | 5 -- commands/strlen.md | 5 -- commands/subscribe.md | 4 -- commands/sunion.md | 4 -- commands/sunionstore.md | 4 -- commands/sync.md | 6 +-- commands/ttl.md | 5 -- commands/type.md | 5 -- commands/unsubscribe.md | 4 -- commands/unwatch.md | 4 -- commands/watch.md | 4 -- commands/zadd.md | 4 -- commands/zcard.md | 5 -- commands/zcount.md | 5 -- commands/zincrby.md | 4 -- commands/zinterstore.md | 6 --- commands/zrange.md | 5 -- commands/zrangebyscore.md | 6 --- commands/zrank.md | 5 -- commands/zrem.md | 4 -- commands/zremrangebyrank.md | 5 -- commands/zremrangebyscore.md | 5 -- commands/zrevrange.md | 5 -- commands/zrevrangebyscore.md | 6 --- commands/zrevrank.md | 5 -- commands/zscore.md | 5 -- commands/zunionstore.md | 5 -- 115 files changed, 107 insertions(+), 542 deletions(-) diff --git a/README.md b/README.md index 7843b35a56..bcb88384fa 100644 --- a/README.md +++ b/README.md @@ -49,20 +49,12 @@ backticks. For example: `INCR`. example: `@multi-bulk-reply`. These keywords will get expanded and auto-linked to relevant parts of the documentation. -There should be at least three predefined sections: time complexity, -description and return value. These sections are marked using magic -keywords, too: - - @complexity - - O(n), where N is the number of keys in the database. - - - @description +There should be at least two predefined sections: description and +return value. The return value section is marked using the @return +keyword: Returns all keys matching the given pattern. - @return @multi-bulk-reply: all the keys that matched the pattern. 
@@ -82,7 +74,7 @@ Once you're done, the very least you should do is make sure that all files compile properly. You can do this by running Rake inside your working directory. - $ rake + $ rake parse Additionally, if you have [Aspell](http://aspell.net/) installed, you can spell check the documentation: diff --git a/commands.json b/commands.json index e8bc6f758e..2eb7761ff9 100644 --- a/commands.json +++ b/commands.json @@ -1,6 +1,7 @@ { "APPEND": { "summary": "Append a value to a key", + "complexity": "O(1). The amortized time complexity is O(1) assuming the appended value is small and the already present value is of any size, since the dynamic string library used by Redis will double the free space available on every reallocation.", "arguments": [ { "name": "key", @@ -37,6 +38,7 @@ }, "BLPOP": { "summary": "Remove and get the first element in a list, or block until one is available", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -53,6 +55,7 @@ }, "BRPOP": { "summary": "Remove and get the last element in a list, or block until one is available", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -69,6 +72,7 @@ }, "BRPOPLPUSH": { "summary": "Pop a value from a list, push it to another list and return it; or block until one is available", + "complexity": "O(1)", "arguments": [ { "name": "source", @@ -114,6 +118,7 @@ }, "CONFIG RESETSTAT": { "summary": "Reset the stats returned by INFO", + "complexity": "O(1)", "since": "2.0", "group": "server" }, @@ -140,6 +145,7 @@ }, "DECR": { "summary": "Decrement the integer value of a key by one", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -151,6 +157,7 @@ }, "DECRBY": { "summary": "Decrement the integer value of a key by the given number", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -166,6 +173,7 @@ }, "DEL": { "summary": "Delete a key", + "complexity": "O(N) where N is the number of keys that will be removed. 
When a key to remove holds a value other than a string, the individual complexity for this key is O(M) where M is the number of elements in the list, set, sorted set or hash. Removing a single key that holds a string value is O(1).", "arguments": [ { "name": "key", @@ -199,6 +207,7 @@ }, "EXISTS": { "summary": "Determine if a key exists", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -210,6 +219,7 @@ }, "EXPIRE": { "summary": "Set a key's time to live in seconds", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -225,6 +235,7 @@ }, "EXPIREAT": { "summary": "Set the expiration for a key as a UNIX timestamp", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -250,6 +261,7 @@ }, "GET": { "summary": "Get the value of a key", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -261,6 +273,7 @@ }, "GETBIT": { "summary": "Returns the bit value at offset in the string value stored at key", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -276,6 +289,7 @@ }, "GETRANGE": { "summary": "Get a substring of the string stored at a key", + "complexity": "O(N) where N is the length of the returned string. 
The complexity is ultimately determined by the returned length, but because creating a substring from an existing string is very cheap, it can be considered O(1) for small strings.", "arguments": [ { "name": "key", @@ -295,6 +309,7 @@ }, "GETSET": { "summary": "Set the string value of a key and return its old value", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -310,6 +325,7 @@ }, "HDEL": { "summary": "Delete one or more hash fields", + "complexity": "O(N) where N is the number of fields to be removed.", "arguments": [ { "name": "key", @@ -326,6 +342,7 @@ }, "HEXISTS": { "summary": "Determine if a hash field exists", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -341,6 +358,7 @@ }, "HGET": { "summary": "Get the value of a hash field", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -356,6 +374,7 @@ }, "HGETALL": { "summary": "Get all the fields and values in a hash", + "complexity": "O(N) where N is the size of the hash.", "arguments": [ { "name": "key", @@ -367,6 +386,7 @@ }, "HINCRBY": { "summary": "Increment the integer value of a hash field by the given number", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -386,6 +406,7 @@ }, "HKEYS": { "summary": "Get all the fields in a hash", + "complexity": "O(N) where N is the size of the hash.", "arguments": [ { "name": "key", @@ -397,6 +418,7 @@ }, "HLEN": { "summary": "Get the number of fields in a hash", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -408,6 +430,7 @@ }, "HMGET": { "summary": "Get the values of all the given hash fields", + "complexity": "O(N) where N is the number of fields being requested.", "arguments": [ { "name": "key", @@ -424,6 +447,7 @@ }, "HMSET": { "summary": "Set multiple hash fields to multiple values", + "complexity": "O(N) where N is the number of fields being set.", "arguments": [ { "name": "key", @@ -440,6 +464,7 @@ }, "HSET": { "summary": "Set the string value of a hash field", + "complexity": "O(1)", "arguments": [ { 
"name": "key", @@ -459,6 +484,7 @@ }, "HSETNX": { "summary": "Set the value of a hash field, only if the field does not exist", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -478,6 +504,7 @@ }, "HVALS": { "summary": "Get all the values in a hash", + "complexity": "O(N) where N is the size of the hash.", "arguments": [ { "name": "key", @@ -489,6 +516,7 @@ }, "INCR": { "summary": "Increment the integer value of a key by one", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -500,6 +528,7 @@ }, "INCRBY": { "summary": "Increment the integer value of a key by the given number", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -520,6 +549,7 @@ }, "KEYS": { "summary": "Find all keys matching the given pattern", + "complexity": "O(N) with N being the number of keys in the database, under the assumption that the key names in the database and the given pattern have limited length.", "arguments": [ { "name": "pattern", @@ -536,6 +566,7 @@ }, "LINDEX": { "summary": "Get an element from a list by its index", + "complexity": "O(N) where N is the number of elements to traverse to get to the element at `index`. This makes asking for the first or the last element of the list O(1).", "arguments": [ { "name": "key", @@ -551,6 +582,7 @@ }, "LINSERT": { "summary": "Insert an element before or after another element in a list", + "complexity": "O(N) where N is the number of elements to traverse before seeing the value `pivot`. 
This means that inserting somewhere on the left end on the list (head) can be considered O(1) and inserting somewhere on the right end (tail) is O(N).", "arguments": [ { "name": "key", @@ -575,6 +607,7 @@ }, "LLEN": { "summary": "Get the length of a list", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -586,6 +619,7 @@ }, "LPOP": { "summary": "Remove and get the first element in a list", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -597,6 +631,7 @@ }, "LPUSH": { "summary": "Prepend one or multiple values to a list", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -613,6 +648,7 @@ }, "LPUSHX": { "summary": "Prepend a value to a list, only if the list exists", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -628,6 +664,7 @@ }, "LRANGE": { "summary": "Get a range of elements from a list", + "complexity": "O(S+N) where S is the `start` offset and N is the number of elements in the specified range.", "arguments": [ { "name": "key", @@ -647,6 +684,7 @@ }, "LREM": { "summary": "Remove elements from a list", + "complexity": "O(N) where N is the length of the list.", "arguments": [ { "name": "key", @@ -666,6 +704,7 @@ }, "LSET": { "summary": "Set the value of an element in a list by its index", + "complexity": "O(N) where N is the length of the list. 
Setting either the first or the last element of the list is O(1).", "arguments": [ { "name": "key", @@ -685,6 +724,7 @@ }, "LTRIM": { "summary": "Trim a list to the specified range", + "complexity": "O(N) where N is the number of elements to be removed by the operation.", "arguments": [ { "name": "key", @@ -704,6 +744,7 @@ }, "MGET": { "summary": "Get the values of all the given keys", + "complexity": "O(N) where N is the number of keys to retrieve.", "arguments": [ { "name": "key", @@ -721,6 +762,7 @@ }, "MOVE": { "summary": "Move a key to another database", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -736,6 +778,7 @@ }, "MSET": { "summary": "Set multiple keys to multiple values", + "complexity": "O(N) where N is the number of keys to set.", "arguments": [ { "name": ["key", "value"], @@ -748,6 +791,7 @@ }, "MSETNX": { "summary": "Set multiple keys to multiple values, only if none of the keys exist", + "complexity": "O(N) where N is the number of keys to set.", "arguments": [ { "name": ["key", "value"], @@ -765,6 +809,7 @@ }, "OBJECT": { "summary": "Inspect the internals of Redis objects", + "complexity": "O(1) for all the currently implemented subcommands.", "since": "2.2.3", "group": "generic", "arguments": [ @@ -782,6 +827,7 @@ }, "PERSIST": { "summary": "Remove the expiration from a key", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -798,6 +844,7 @@ }, "PSUBSCRIBE": { "summary": "Listen for messages published to channels matching the given patterns", + "complexity": "O(N) where N is the number of patterns the client is already subscribed to.", "arguments": [ { "name": ["pattern"], @@ -810,6 +857,7 @@ }, "PUBLISH": { "summary": "Post a message to a channel", + "complexity": "O(N+M) where N is the number of clients subscribed to the receiving channel and M is the total number of subscribed patterns (by any client).", "arguments": [ { "name": "channel", @@ -825,6 +873,7 @@ }, "PUNSUBSCRIBE": { "summary": "Stop listening for messages 
posted to channels matching the given patterns", + "complexity": "O(N+M) where N is the number of patterns the client is already subscribed and M is the number of total patterns subscribed in the system (by any client).", "arguments": [ { "name": "pattern", @@ -843,11 +892,13 @@ }, "RANDOMKEY": { "summary": "Return a random key from the keyspace", + "complexity": "O(1)", "since": "0.07", "group": "generic" }, "RENAME": { "summary": "Rename a key", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -863,6 +914,7 @@ }, "RENAMENX": { "summary": "Rename a key, only if the new key does not exist", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -878,6 +930,7 @@ }, "RPOP": { "summary": "Remove and get the last element in a list", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -889,6 +942,7 @@ }, "RPOPLPUSH": { "summary": "Remove the last element in a list, append it to another list and return it", + "complexity": "O(1)", "arguments": [ { "name": "source", @@ -904,6 +958,7 @@ }, "RPUSH": { "summary": "Append one or multiple values to a list", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -920,6 +975,7 @@ }, "RPUSHX": { "summary": "Append a value to a list, only if the list exists", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -935,6 +991,7 @@ }, "SADD": { "summary": "Add one or more members to a set", + "complexity": "O(N) where N is the number of members to be added.", "arguments": [ { "name": "key", @@ -956,6 +1013,7 @@ }, "SCARD": { "summary": "Get the number of members in a set", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -967,6 +1025,7 @@ }, "SDIFF": { "summary": "Subtract multiple sets", + "complexity": "O(N) where N is the total number of elements in all given sets.", "arguments": [ { "name": "key", @@ -979,6 +1038,7 @@ }, "SDIFFSTORE": { "summary": "Subtract multiple sets and store the resulting set in a key", + "complexity": "O(N) where N is the total number of elements in all given sets.", 
"arguments": [ { "name": "destination", @@ -1006,6 +1066,7 @@ }, "SET": { "summary": "Set the string value of a key", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -1021,6 +1082,7 @@ }, "SETBIT": { "summary": "Sets or clears the bit at offset in the string value stored at key", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -1040,6 +1102,7 @@ }, "SETEX": { "summary": "Set the value and expiration of a key", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -1059,6 +1122,7 @@ }, "SETNX": { "summary": "Set the value of a key, only if the key does not exist", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -1074,6 +1138,7 @@ }, "SETRANGE": { "summary": "Overwrite part of a string at key starting at the specified offset", + "complexity": "O(1), not counting the time taken to copy the new string in place. Usually, this string is very small so the amortized complexity is O(1). Otherwise, complexity is O(M) with M being the length of the `value` argument.", "arguments": [ { "name": "key", @@ -1098,6 +1163,7 @@ }, "SINTER": { "summary": "Intersect multiple sets", + "complexity": "O(N*M) worst case where N is the cardinality of the smallest set and M is the number of sets.", "arguments": [ { "name": "key", @@ -1110,6 +1176,7 @@ }, "SINTERSTORE": { "summary": "Intersect multiple sets and store the resulting set in a key", + "complexity": "O(N*M) worst case where N is the cardinality of the smallest set and M is the number of sets.", "arguments": [ { "name": "destination", @@ -1126,6 +1193,7 @@ }, "SISMEMBER": { "summary": "Determine if a given value is a member of a set", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -1172,6 +1240,7 @@ }, "SMEMBERS": { "summary": "Get all the members in a set", + "complexity": "O(N) where N is the set cardinality.", "arguments": [ { "name": "key", @@ -1183,6 +1252,7 @@ }, "SMOVE": { "summary": "Move a member from one set to another", + "complexity": "O(1)", "arguments": [ { "name": 
"source", @@ -1202,6 +1272,7 @@ }, "SORT": { "summary": "Sort the elements in a list, set or sorted set", + "complexity": "O(N+M*log(M)) where N is the number of elements in the list or set to sort, and M the number of returned elements. When the elements are not sorted, complexity is currently O(N) as there is a copy step that will be avoided in next releases.", "arguments": [ { "name": "key", @@ -1250,6 +1321,7 @@ }, "SPOP": { "summary": "Remove and return a random member from a set", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -1261,6 +1333,7 @@ }, "SRANDMEMBER": { "summary": "Get a random member from a set", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -1272,6 +1345,7 @@ }, "SREM": { "summary": "Remove one or more members from a set", + "complexity": "O(N) where N is the number of members to be removed.", "arguments": [ { "name": "key", @@ -1288,6 +1362,7 @@ }, "STRLEN": { "summary": "Get the length of the value stored in a key", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -1299,6 +1374,7 @@ }, "SUBSCRIBE": { "summary": "Listen for messages published to the given channels", + "complexity": "O(N) where N is the number of channels to subscribe to.", "arguments": [ { "name": ["channel"], @@ -1311,6 +1387,7 @@ }, "SUNION": { "summary": "Add multiple sets", + "complexity": "O(N) where N is the total number of elements in all given sets.", "arguments": [ { "name": "key", @@ -1323,6 +1400,7 @@ }, "SUNIONSTORE": { "summary": "Add multiple sets and store the resulting set in a key", + "complexity": "O(N) where N is the total number of elements in all given sets.", "arguments": [ { "name": "destination", @@ -1344,6 +1422,7 @@ }, "TTL": { "summary": "Get the time to live for a key", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -1355,6 +1434,7 @@ }, "TYPE": { "summary": "Determine the type stored at key", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -1366,6 +1446,7 @@ }, "UNSUBSCRIBE": { "summary": 
"Stop listening for messages posted to the given channels", + "complexity": "O(N) where N is the number of clients already subscribed to a channel.", "arguments": [ { "name": "channel", @@ -1379,11 +1460,13 @@ }, "UNWATCH": { "summary": "Forget about all watched keys", + "complexity": "O(1)", "since": "2.1.0", "group": "transactions" }, "WATCH": { "summary": "Watch the given keys to determine execution of the MULTI/EXEC block", + "complexity": "O(1) for every key.", "arguments": [ { "name": "key", @@ -1396,6 +1479,7 @@ }, "ZADD": { "summary": "Add one or more members to a sorted set, or update its score if it already exists", + "complexity": "O(log(N)) where N is the number of elements in the sorted set.", "arguments": [ { "name": "key", @@ -1425,6 +1509,7 @@ }, "ZCARD": { "summary": "Get the number of members in a sorted set", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -1436,6 +1521,7 @@ }, "ZCOUNT": { "summary": "Count the members in a sorted set with scores within the given values", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M being the number of elements between `min` and `max`.", "arguments": [ { "name": "key", @@ -1455,6 +1541,7 @@ }, "ZINCRBY": { "summary": "Increment the score of a member in a sorted set", + "complexity": "O(log(N)) where N is the number of elements in the sorted set.", "arguments": [ { "name": "key", @@ -1474,6 +1561,7 @@ }, "ZINTERSTORE": { "summary": "Intersect multiple sorted sets and store the resulting sorted set in a new key", + "complexity": "O(N*K)+O(M*log(M)) worst case with N being the smallest input sorted set, K being the number of input sorted sets and M being the number of elements in the resulting sorted set.", "arguments": [ { "name": "destination", @@ -1508,6 +1596,7 @@ }, "ZRANGE": { "summary": "Return a range of members in a sorted set, by index", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements 
returned.", "arguments": [ { "name": "key", @@ -1533,6 +1622,7 @@ }, "ZRANGEBYSCORE": { "summary": "Return a range of members in a sorted set, by score", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with `LIMIT`), you can consider it O(log(N)).", "arguments": [ { "name": "key", @@ -1564,6 +1654,7 @@ }, "ZRANK": { "summary": "Determine the index of a member in a sorted set", + "complexity": "O(log(N))", "arguments": [ { "name": "key", @@ -1579,6 +1670,7 @@ }, "ZREM": { "summary": "Remove one or more members from a sorted set", + "complexity": "O(M*log(N)) with N being the number of elements in the sorted set and M the number of elements to be removed.", "arguments": [ { "name": "key", @@ -1595,6 +1687,7 @@ }, "ZREMRANGEBYRANK": { "summary": "Remove all members in a sorted set within the given indexes", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements removed by the operation.", "arguments": [ { "name": "key", @@ -1614,6 +1707,7 @@ }, "ZREMRANGEBYSCORE": { "summary": "Remove all members in a sorted set within the given scores", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements removed by the operation.", "arguments": [ { "name": "key", @@ -1633,6 +1727,7 @@ }, "ZREVRANGE": { "summary": "Return a range of members in a sorted set, by index, with scores ordered from high to low", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements returned.", "arguments": [ { "name": "key", @@ -1658,6 +1753,7 @@ }, "ZREVRANGEBYSCORE": { "summary": "Return a range of members in a sorted set, by score, with scores ordered from high to low", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being 
returned. If M is constant (e.g. always asking for the first 10 elements with `LIMIT`), you can consider it O(log(N)).", "arguments": [ { "name": "key", @@ -1689,6 +1785,7 @@ }, "ZREVRANK": { "summary": "Determine the index of a member in a sorted set, with scores ordered from high to low", + "complexity": "O(log(N))", "arguments": [ { "name": "key", @@ -1704,6 +1801,7 @@ }, "ZSCORE": { "summary": "Get the score associated with the given member in a sorted set", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -1719,6 +1817,7 @@ }, "ZUNIONSTORE": { "summary": "Add multiple sorted sets and store the resulting sorted set in a new key", + "complexity": "O(N)+O(M log(M)) with N being the sum of the sizes of the input sorted sets, and M being the number of elements in the resulting sorted set.", "arguments": [ { "name": "destination", @@ -1753,6 +1852,7 @@ }, "EVAL": { "summary": "Execute a Lua script server side", + "complexity": "Looking up the script both with `EVAL` or `EVALSHA` is an O(1) business. The additional complexity is up to the script you execute.", "arguments": [ { "name": "script", diff --git a/commands/append.md b/commands/append.md index 74530c60d7..e1125f44d0 100644 --- a/commands/append.md +++ b/commands/append.md @@ -1,10 +1,3 @@ -@complexity - -O(1). The amortized time complexity is O(1) assuming the appended value is -small and the already present value is of any size, since the dynamic string -library used by Redis will double the free space available on every -reallocation. - If `key` already exists and is a string, this command appends the `value` at the end of the string. If `key` does not exist it is created and set as an empty string, so `APPEND` will be similar to `SET` in this special case. diff --git a/commands/auth.md b/commands/auth.md index 4340d76575..fc3dba8d76 100644 --- a/commands/auth.md +++ b/commands/auth.md @@ -1,5 +1,3 @@ -@description - Request for authentication in a password protected Redis server. 
Redis can be instructed to require a password before allowing clients to execute commands. This is done using the `requirepass` directive in the diff --git a/commands/blpop.md b/commands/blpop.md index cde4abe595..9b8920c73f 100644 --- a/commands/blpop.md +++ b/commands/blpop.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - `BLPOP` is a blocking list pop primitive. It is the blocking version of `LPOP` because it blocks the connection when there are no elements to pop from any of the given lists. An element is popped from the head of the first list that is diff --git a/commands/brpop.md b/commands/brpop.md index 985b75ff22..935b9c28fc 100644 --- a/commands/brpop.md +++ b/commands/brpop.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - `BRPOP` is a blocking list pop primitive. It is the blocking version of [RPOP](/commands/rpop) because it blocks the connection when there are no elements to pop from any of the given lists. An element is popped from the diff --git a/commands/brpoplpush.md b/commands/brpoplpush.md index cbe2a17b7c..fe1a9dd97d 100644 --- a/commands/brpoplpush.md +++ b/commands/brpoplpush.md @@ -1,7 +1,3 @@ -@complexity - -O(1). - `BRPOPLPUSH` is the blocking variant of [RPOPLPUSH](/commands/rpoplpush). When `source` contains elements, this command behaves exactly like [RPOPLPUSH](/commands/rpoplpush). When `source` is empty, Redis will block diff --git a/commands/config get.md b/commands/config get.md index fd02a8192b..1220945c7f 100644 --- a/commands/config get.md +++ b/commands/config get.md @@ -1,9 +1,3 @@ -@complexity - -Not applicable. - -@description - The `CONFIG GET` command is used to read the configuration parameters of a running Redis server. 
Not all the configuration parameters are supported in Redis 2.4, while Redis 2.6 can read the whole configuration of diff --git a/commands/config resetstat.md b/commands/config resetstat.md index c1312a9e66..1308780807 100644 --- a/commands/config resetstat.md +++ b/commands/config resetstat.md @@ -1,7 +1,3 @@ -@complexity - -O(1). - Resets the statistics reported by Redis using the [INFO](/commands/info) command. These are the counters that are reset: diff --git a/commands/config set.md b/commands/config set.md index 0c1d41aa1d..b59e683291 100644 --- a/commands/config set.md +++ b/commands/config set.md @@ -1,9 +1,3 @@ -@complexity - -Not applicable. - -@description - The `CONFIG SET` command is used in order to reconfigure the server at runtime without the need to restart Redis. You can change both trivial parameters or switch from one to another persistence option using this command. diff --git a/commands/decr.md b/commands/decr.md index b8bd9813ba..7dcb861e64 100644 --- a/commands/decr.md +++ b/commands/decr.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Decrements the number stored at `key` by one. If the key does not exist, it is set to `0` before performing the operation. An error is returned if the key contains a value of the wrong type or contains a diff --git a/commands/decrby.md b/commands/decrby.md index 773d07f039..b819c599d8 100644 --- a/commands/decrby.md +++ b/commands/decrby.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Decrements the number stored at `key` by `decrement`. If the key does not exist, it is set to `0` before performing the operation. An error is returned if the key contains a value of the wrong type or contains a diff --git a/commands/del.md b/commands/del.md index 0a05f3642f..058f1c49f1 100644 --- a/commands/del.md +++ b/commands/del.md @@ -1,10 +1,3 @@ -@complexity - -O(N) where N is the number of keys that will be removed. 
When a key to remove -holds a value other than a string, the individual complexity for this key is -O(M) where M is the number of elements in the list, set, sorted set or hash. -Removing a single key that holds a string value is O(1). - Removes the specified keys. A key is ignored if it does not exist. @return diff --git a/commands/echo.md b/commands/echo.md index b06a94bdf0..d72c833f7a 100644 --- a/commands/echo.md +++ b/commands/echo.md @@ -1,5 +1,3 @@ -@description - Returns `message`. @return diff --git a/commands/eval.md b/commands/eval.md index b4cce17bef..1dd8c89daf 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -1,8 +1,3 @@ -@complexity - -Looking up the script both with `EVAL` or `EVALSHA` is an O(1) business. The -additional complexity is up to the script you execute. - Warning --- diff --git a/commands/exists.md b/commands/exists.md index 5dab4f67b6..7d1cf0a80e 100644 --- a/commands/exists.md +++ b/commands/exists.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Returns if `key` exists. @return diff --git a/commands/expire.md b/commands/expire.md index cb36c0aebc..a2dddca8aa 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Set a timeout on `key`. After the timeout has expired, the key will automatically be deleted. A key with an associated timeout is often said to be _volatile_ in Redis terminology. diff --git a/commands/expireat.md b/commands/expireat.md index 918bc96bb6..8b81395652 100644 --- a/commands/expireat.md +++ b/commands/expireat.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - `EXPIREAT` has the same effect and semantic as [EXPIRE](/commands/expire), but instead of specifying the number of seconds representing the TTL (time to live), it takes an absolute [UNIX timestamp][2] (seconds since January 1, 1970). 
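The relationship between `EXPIRE` and `EXPIREAT` described above can be sketched with a small Python helper (the helper name is ours, purely illustrative): a relative TTL added to the current clock gives the absolute UNIX timestamp that `EXPIREAT` expects.

```python
import time

def ttl_to_unix_timestamp(ttl_seconds, now=None):
    """Convert a relative TTL (what EXPIRE takes) into the absolute
    UNIX timestamp, seconds since January 1, 1970 (what EXPIREAT takes)."""
    if now is None:
        now = time.time()
    return int(now) + ttl_seconds

# With a fixed clock the conversion is deterministic:
print(ttl_to_unix_timestamp(60, now=1325404800))  # 1325404860
```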
diff --git a/commands/flushall.md b/commands/flushall.md index e5f18817ff..f5ba483bc5 100644 --- a/commands/flushall.md +++ b/commands/flushall.md @@ -1,5 +1,3 @@ - - Delete all the keys of all the existing databases, not just the currently selected one. This command never fails. @return diff --git a/commands/flushdb.md b/commands/flushdb.md index f233e3e764..1024207e30 100644 --- a/commands/flushdb.md +++ b/commands/flushdb.md @@ -1,5 +1,3 @@ - - Delete all the keys of the currently selected DB. This command never fails. @return diff --git a/commands/get.md b/commands/get.md index cfabc62cb2..baf29780bc 100644 --- a/commands/get.md +++ b/commands/get.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Get the value of `key`. If the key does not exist the special value `nil` is returned. An error is returned if the value stored at `key` is not a string, because `GET` only handles string values. diff --git a/commands/getbit.md b/commands/getbit.md index 9dd3554cb2..b0555f72e3 100644 --- a/commands/getbit.md +++ b/commands/getbit.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Returns the bit value at _offset_ in the string value stored at _key_. When _offset_ is beyond the string length, the string is assumed to be a diff --git a/commands/getrange.md b/commands/getrange.md index f4c1bd017c..be937e0728 100644 --- a/commands/getrange.md +++ b/commands/getrange.md @@ -1,9 +1,3 @@ -@complexity - -O(N) where N is the length of the returned string. The complexity is ultimately -determined by the returned length, but because creating a substring from an -existing string is very cheap, it can be considered O(1) for small strings. - **Warning**: this command was renamed to `GETRANGE`, it is called `SUBSTR` in Redis versions `<= 2.0`. 
Returns the substring of the string value stored at `key`, determined by the diff --git a/commands/getset.md b/commands/getset.md index 64d537ba5b..b42389d36d 100644 --- a/commands/getset.md +++ b/commands/getset.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Atomically sets `key` to `value` and returns the old value stored at `key`. Returns an error when `key` exists but does not hold a string value. diff --git a/commands/hdel.md b/commands/hdel.md index 8f6aec6c27..99928296f2 100644 --- a/commands/hdel.md +++ b/commands/hdel.md @@ -1,8 +1,3 @@ -@complexity - -O(N) where N is the number of fields to be removed. - - Removes the specified fields from the hash stored at `key`. Specified fields that do not exist within this hash are ignored. If `key` does not exist, it is treated as an empty hash and this command returns diff --git a/commands/hexists.md b/commands/hexists.md index ea6a213ca7..e52755e56f 100644 --- a/commands/hexists.md +++ b/commands/hexists.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Returns if `field` is an existing field in the hash stored at `key`. @return diff --git a/commands/hget.md b/commands/hget.md index ef07f9d58c..ff5448b4fd 100644 --- a/commands/hget.md +++ b/commands/hget.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Returns the value associated with `field` in the hash stored at `key`. @return diff --git a/commands/hgetall.md b/commands/hgetall.md index e027ea6478..f4072756c8 100644 --- a/commands/hgetall.md +++ b/commands/hgetall.md @@ -1,7 +1,3 @@ -@complexity - -O(N) where N is the size of the hash. - Returns all fields and values of the hash stored at `key`. In the returned value, every field name is followed by its value, so the length of the reply is twice the size of the hash. 
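Because the `HGETALL` reply interleaves field names and values, clients typically fold the flat list back into a map. A minimal sketch, assuming a client library that hands back the reply as a plain Python list:

```python
def pairs_to_dict(flat_reply):
    """Fold an HGETALL-style reply (field, value, field, value, ...)
    into a dict; the flat reply is twice the size of the hash."""
    it = iter(flat_reply)
    return dict(zip(it, it))

reply = ["field1", "Hello", "field2", "World"]
assert len(reply) == 2 * len(pairs_to_dict(reply))
print(pairs_to_dict(reply))  # {'field1': 'Hello', 'field2': 'World'}
```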
diff --git a/commands/hincrby.md b/commands/hincrby.md index bf69ea7cd1..abc7cc5eac 100644 --- a/commands/hincrby.md +++ b/commands/hincrby.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Increments the number stored at `field` in the hash stored at `key` by `increment`. If `key` does not exist, a new key holding a hash is created. If `field` does not exist the value is set to `0` before the operation is diff --git a/commands/hkeys.md b/commands/hkeys.md index 07d15e7203..2f6b4001d7 100644 --- a/commands/hkeys.md +++ b/commands/hkeys.md @@ -1,7 +1,3 @@ -@complexity - -O(N) where N is the size of the hash. - Returns all field names in the hash stored at `key`. @return diff --git a/commands/hlen.md b/commands/hlen.md index 9d79c3ced0..b068cd185f 100644 --- a/commands/hlen.md +++ b/commands/hlen.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Returns the number of fields contained in the hash stored at `key`. @return diff --git a/commands/hmget.md b/commands/hmget.md index 7cc47b3afc..ea08ce6b7c 100644 --- a/commands/hmget.md +++ b/commands/hmget.md @@ -1,7 +1,3 @@ -@complexity - -O(N) where N is the number of fields being requested. - Returns the values associated with the specified `fields` in the hash stored at `key`. diff --git a/commands/hmset.md b/commands/hmset.md index 2dc715a90a..2b27655d6c 100644 --- a/commands/hmset.md +++ b/commands/hmset.md @@ -1,7 +1,3 @@ -@complexity - -O(N) where N is the number of fields being set. - Sets the specified fields to their respective values in the hash stored at `key`. This command overwrites any existing fields in the hash. If `key` does not exist, a new key holding a hash is created. diff --git a/commands/hset.md b/commands/hset.md index a453b0f25d..f0f76ff454 100644 --- a/commands/hset.md +++ b/commands/hset.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Sets `field` in the hash stored at `key` to `value`. If `key` does not exist, a new key holding a hash is created. If `field` already exists in the hash, it is overwritten. 
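The create-or-overwrite behavior of `HSET` can be simulated on a plain dict (the `db` layout is an illustrative stand-in, not how Redis stores hashes internally); `HSET` replies `1` when the field is new and `0` when an existing field was overwritten.

```python
def hset(db, key, field, value):
    """Simulate HSET: create the hash if the key is missing, overwrite
    an existing field, return 1 for a new field and 0 for an update."""
    h = db.setdefault(key, {})
    is_new = 0 if field in h else 1
    h[field] = value
    return is_new

db = {}
assert hset(db, "myhash", "field1", "Hello") == 1  # new field
assert hset(db, "myhash", "field1", "World") == 0  # overwritten
assert db["myhash"]["field1"] == "World"
```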
diff --git a/commands/hsetnx.md b/commands/hsetnx.md index 6e09ea835a..0bf86efe5f 100644 --- a/commands/hsetnx.md +++ b/commands/hsetnx.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Sets `field` in the hash stored at `key` to `value`, only if `field` does not yet exist. If `key` does not exist, a new key holding a hash is created. If `field` already exists, this operation has no effect. diff --git a/commands/hvals.md b/commands/hvals.md index 9e3b5c0231..d6793c300d 100644 --- a/commands/hvals.md +++ b/commands/hvals.md @@ -1,7 +1,3 @@ -@complexity - -O(N) where N is the size of the hash. - Returns all values in the hash stored at `key`. @return diff --git a/commands/incr.md b/commands/incr.md index 707f7a4936..02f82ee755 100644 --- a/commands/incr.md +++ b/commands/incr.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Increments the number stored at `key` by one. If the key does not exist, it is set to `0` before performing the operation. An error is returned if the key contains a value of the wrong type or contains a diff --git a/commands/incrby.md b/commands/incrby.md index 58e587ff89..e6101beaa3 100644 --- a/commands/incrby.md +++ b/commands/incrby.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Increments the number stored at `key` by `increment`. If the key does not exist, it is set to `0` before performing the operation. An error is returned if the key contains a value of the wrong type or contains a diff --git a/commands/keys.md b/commands/keys.md index aa1907688c..66d819252a 100644 --- a/commands/keys.md +++ b/commands/keys.md @@ -1,8 +1,3 @@ -@complexity - -O(N) with N being the number of keys in the database, under the assumption that -the key names in the database and the given pattern have limited length. - Returns all keys matching `pattern`. 
While the time complexity for this operation is O(N), the constant diff --git a/commands/lastsave.md b/commands/lastsave.md index 62f2de6e07..32b18ec259 100644 --- a/commands/lastsave.md +++ b/commands/lastsave.md @@ -1,5 +1,3 @@ - - Return the UNIX TIME of the last DB save executed with success. A client may check if a `BGSAVE` command succeeded reading the `LASTSAVE` value, then issuing a `BGSAVE` command and checking at regular intervals diff --git a/commands/lindex.md b/commands/lindex.md index e7d00a1850..e88bd16855 100644 --- a/commands/lindex.md +++ b/commands/lindex.md @@ -1,9 +1,3 @@ -@complexity - -O(N) where N is the number of elements to traverse to get to the element -at `index`. This makes asking for the first or the last -element of the list O(1). - Returns the element at index `index` in the list stored at `key`. The index is zero-based, so `0` means the first element, `1` the second element and so on. Negative indices can be used to designate elements diff --git a/commands/linsert.md b/commands/linsert.md index 801a996bbd..3a5ed458c9 100644 --- a/commands/linsert.md +++ b/commands/linsert.md @@ -1,9 +1,3 @@ -@complexity - -O(N) where N is the number of elements to traverse before seeing the value -`pivot`. This means that inserting somewhere on the left end on the list (head) -can be considered O(1) and inserting somewhere on the right end (tail) is O(N). - Inserts `value` in the list stored at `key` either before or after the reference value `pivot`. diff --git a/commands/llen.md b/commands/llen.md index 4bcabdfc77..6e5eeeec2c 100644 --- a/commands/llen.md +++ b/commands/llen.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Returns the length of the list stored at `key`. If `key` does not exist, it is interpreted as an empty list and `0` is returned. An error is returned when the value stored at `key` is not a list. 
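The `LINSERT` before/after semantics mentioned above can be sketched on a Python list (an illustrative simulation, not the actual implementation); `LINSERT` replies with the new length of the list, or `-1` when `pivot` is not found.

```python
def linsert(lst, where, pivot, value):
    """Simulate LINSERT: insert `value` BEFORE or AFTER the first
    occurrence of `pivot`; reply with the new list length, or -1
    when the pivot is not found."""
    try:
        i = lst.index(pivot)
    except ValueError:
        return -1
    lst.insert(i if where == "BEFORE" else i + 1, value)
    return len(lst)

mylist = ["Hello", "World"]
assert linsert(mylist, "BEFORE", "World", "There") == 3
assert mylist == ["Hello", "There", "World"]
assert linsert(mylist, "AFTER", "missing", "x") == -1  # pivot not found
```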
diff --git a/commands/lpop.md b/commands/lpop.md index ee88737580..056fea6efd 100644 --- a/commands/lpop.md +++ b/commands/lpop.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Removes and returns the first element of the list stored at `key`. @return diff --git a/commands/lpush.md b/commands/lpush.md index aaeb51debe..a3a497d7fb 100644 --- a/commands/lpush.md +++ b/commands/lpush.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Insert all the specified values at the head of the list stored at `key`. If `key` does not exist, it is created as an empty list before performing the push operations. diff --git a/commands/lpushx.md b/commands/lpushx.md index acf61c7bbf..10a85c40b2 100644 --- a/commands/lpushx.md +++ b/commands/lpushx.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Inserts `value` at the head of the list stored at `key`, only if `key` already exists and holds a list. Contrary to `LPUSH`, no operation will be performed when `key` does not yet exist. diff --git a/commands/lrange.md b/commands/lrange.md index d570458c56..66f9ac0e7c 100644 --- a/commands/lrange.md +++ b/commands/lrange.md @@ -1,8 +1,3 @@ -@complexity - -O(S+N) where S is the `start` offset and N is the number of elements in the -specified range. - Returns the specified elements of the list stored at `key`. The offsets `start` and `stop` are zero-based indexes, with `0` being the first element of the list (the head of the list), `1` being the next element and so on. diff --git a/commands/lrem.md b/commands/lrem.md index 42c5fc64ab..418aca73c7 100644 --- a/commands/lrem.md +++ b/commands/lrem.md @@ -1,7 +1,3 @@ -@complexity - -O(N) where N is the length of the list. - Removes the first `count` occurrences of elements equal to `value` from the list stored at `key`.
The `count` argument influences the operation in the following ways: diff --git a/commands/lset.md b/commands/lset.md index 87cae024a7..94331dd492 100644 --- a/commands/lset.md +++ b/commands/lset.md @@ -1,8 +1,3 @@ -@complexity - -O(N) where N is the length of the list. Setting either the first or the last -element of the list is O(1). - Sets the list element at `index` to `value`. For more information on the `index` argument, see `LINDEX`. diff --git a/commands/ltrim.md b/commands/ltrim.md index de8f564e34..2b649e5f22 100644 --- a/commands/ltrim.md +++ b/commands/ltrim.md @@ -1,7 +1,3 @@ -@complexity - -O(N) where N is the number of elements to be removed by the operation. - Trim an existing list so that it will contain only the specified range of elements. Both `start` and `stop` are zero-based indexes, where `0` is the first element of the list (the head), `1` the next element and so on. diff --git a/commands/mget.md b/commands/mget.md index b3ee07cbba..899a354edc 100644 --- a/commands/mget.md +++ b/commands/mget.md @@ -1,8 +1,3 @@ -@complexity - -O(N) where N is the number of keys to retrieve - - Returns the values of all specified keys. For every key that does not hold a string value or does not exist, the special value `nil` is returned. Because of this, the operation never fails. diff --git a/commands/monitor.md b/commands/monitor.md index 67811a2a31..78e773d8f7 100644 --- a/commands/monitor.md +++ b/commands/monitor.md @@ -1,5 +1,3 @@ - - `MONITOR` is a debugging command that outputs the whole sequence of commands received by the Redis server. It is very handy for understanding what is happening inside the database. This command is used directly @@ -35,4 +33,4 @@ In order to end a monitoring session just issue a `QUIT` command by hand. @return **Non standard return value**, just dumps the received commands in an infinite -flow. \ No newline at end of file +flow.
diff --git a/commands/move.md b/commands/move.md index 62f845b8c8..7af84ddd65 100644 --- a/commands/move.md +++ b/commands/move.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Move `key` from the currently selected database (see `SELECT`) to the specified destination database. When `key` already exists in the destination database, or it does not exist in the source database, it does nothing. It is possible to diff --git a/commands/mset.md b/commands/mset.md index 21c51124d2..1de357313e 100644 --- a/commands/mset.md +++ b/commands/mset.md @@ -1,8 +1,3 @@ -@complexity - -O(N) where N is the number of keys to set - - Sets the given keys to their respective values. `MSET` replaces existing values with new values, just as regular `SET`. See `MSETNX` if you don't want to overwrite existing values. diff --git a/commands/msetnx.md b/commands/msetnx.md index d8f395e021..fc7db7dc8a 100644 --- a/commands/msetnx.md +++ b/commands/msetnx.md @@ -1,8 +1,3 @@ -@complexity - -O(N) where N is the number of keys to set - - Sets the given keys to their respective values. `MSETNX` will not perform any operation at all even if just a single key already exists. diff --git a/commands/object.md b/commands/object.md index 86f12059f2..8b36531a05 100644 --- a/commands/object.md +++ b/commands/object.md @@ -1,7 +1,3 @@ -@complexity - -O(1) for all the currently implemented subcommands. - The `OBJECT` command allows you to inspect the internals of Redis Objects associated with keys. It is useful for debugging or to understand if your keys are using the specially encoded data types to save space. Your application may also use diff --git a/commands/persist.md b/commands/persist.md index 18d041a3c3..013466a85a 100644 --- a/commands/persist.md +++ b/commands/persist.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Remove the existing timeout on `key`, turning the key from _volatile_ (a key with an expire set) to _persistent_ (a key that will never expire as no timeout is associated).
@return diff --git a/commands/ping.md b/commands/ping.md index b572a36c68..a04eb4408f 100644 --- a/commands/ping.md +++ b/commands/ping.md @@ -1,5 +1,3 @@ -@description - Returns `PONG`. This command is often used to test if a connection is still alive, or to measure latency. diff --git a/commands/psubscribe.md b/commands/psubscribe.md index ee6a6842eb..089d1df803 100644 --- a/commands/psubscribe.md +++ b/commands/psubscribe.md @@ -1,5 +1 @@ -@complexity - -O(N) where N is the number of patterns the client is already subscribed to. - Subscribes the client to the given patterns. diff --git a/commands/publish.md b/commands/publish.md index fa55e3702f..e4b338ab5b 100644 --- a/commands/publish.md +++ b/commands/publish.md @@ -1,9 +1,3 @@ -@complexity - -O(N+M) where N is the number of clients subscribed to the receiving -channel and M is the total number of subscribed patterns (by any -client). - Posts a message to the given channel. @return diff --git a/commands/punsubscribe.md b/commands/punsubscribe.md index 4f5cd4a7e3..0aba74a197 100644 --- a/commands/punsubscribe.md +++ b/commands/punsubscribe.md @@ -1,9 +1,3 @@ -@complexity - -O(N+M) where N is the number of patterns the client is already -subscribed and M is the number of total patterns subscribed in the -system (by any client). - Unsubscribes the client from the given patterns, or from all of them if none is given. diff --git a/commands/quit.md b/commands/quit.md index 333ddc696d..69cf085214 100644 --- a/commands/quit.md +++ b/commands/quit.md @@ -1,5 +1,3 @@ -@description - Ask the server to close the connection. The connection is closed as soon as all pending replies have been written to the client. diff --git a/commands/randomkey.md b/commands/randomkey.md index 2bd29212be..9517bfaad5 100644 --- a/commands/randomkey.md +++ b/commands/randomkey.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Return a random key from the currently selected database. 
@return diff --git a/commands/rename.md b/commands/rename.md index 329e0ad274..5e1d5fc191 100644 --- a/commands/rename.md +++ b/commands/rename.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Renames `key` to `newkey`. It returns an error when the source and destination names are the same, or when `key` does not exist. If `newkey` already exists it is overwritten. diff --git a/commands/renamenx.md b/commands/renamenx.md index 8bb75909c3..0318690765 100644 --- a/commands/renamenx.md +++ b/commands/renamenx.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Renames `key` to `newkey` if `newkey` does not yet exist. It returns an error under the same conditions as `RENAME`. diff --git a/commands/rpop.md b/commands/rpop.md index 85299770f1..ec65f946dc 100644 --- a/commands/rpop.md +++ b/commands/rpop.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Removes and returns the last element of the list stored at `key`. @return diff --git a/commands/rpoplpush.md b/commands/rpoplpush.md index e48e8b37f2..69853f7a68 100644 --- a/commands/rpoplpush.md +++ b/commands/rpoplpush.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Atomically returns and removes the last element (tail) of the list stored at `source`, and pushes it as the first element (head) of the list stored at `destination`. diff --git a/commands/rpush.md b/commands/rpush.md index df832bff25..cdc897fe9b 100644 --- a/commands/rpush.md +++ b/commands/rpush.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Insert all the specified values at the tail of the list stored at `key`. If `key` does not exist, it is created as an empty list before performing the push operation. diff --git a/commands/rpushx.md b/commands/rpushx.md index 94aa6a6bc0..8c7f3d324f 100644 --- a/commands/rpushx.md +++ b/commands/rpushx.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Inserts `value` at the tail of the list stored at `key`, only if `key` already exists and holds a list. Contrary to `RPUSH`, no operation will be performed when `key` does not yet exist.
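The `RPOPLPUSH` rotation can be sketched on plain Python lists (an illustrative simulation; `db` here is just a dict of lists, not a real Redis keyspace): the tail of `source` becomes the head of `destination`, and the moved element is returned (nil, here `None`, when `source` is empty).

```python
def rpoplpush(db, source, destination):
    """Simulate RPOPLPUSH: pop the tail of `source`, push it onto the
    head of `destination`, and return it; None when `source` is empty."""
    src = db.get(source)
    if not src:
        return None
    elem = src.pop()
    db.setdefault(destination, []).insert(0, elem)
    return elem

db = {"mylist": ["one", "two", "three"], "myotherlist": []}
assert rpoplpush(db, "mylist", "myotherlist") == "three"
assert db["mylist"] == ["one", "two"]
assert db["myotherlist"] == ["three"]
```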
diff --git a/commands/sadd.md b/commands/sadd.md index 419f13c46c..9afe485ae4 100644 --- a/commands/sadd.md +++ b/commands/sadd.md @@ -1,8 +1,3 @@ -@complexity - -O(N) where N is the number of members to be added. - - Add the specified members to the set stored at `key`. Specified members that are already members of this set are ignored. If `key` does not exist, a new set is created before adding the specified members. diff --git a/commands/save.md b/commands/save.md index 8dafb3c81c..e3159429b9 100644 --- a/commands/save.md +++ b/commands/save.md @@ -1,7 +1,3 @@ -@complexity - -@description - @examples -@return \ No newline at end of file +@return diff --git a/commands/scard.md b/commands/scard.md index 0a4a29f5b3..e54ca5be14 100644 --- a/commands/scard.md +++ b/commands/scard.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Returns the set cardinality (number of elements) of the set stored at `key`. @return diff --git a/commands/sdiff.md b/commands/sdiff.md index 3dcc2b83cf..272097b437 100644 --- a/commands/sdiff.md +++ b/commands/sdiff.md @@ -1,7 +1,3 @@ -@complexity - -O(N) where N is the total number of elements in all given sets. - Returns the members of the set resulting from the difference between the first set and all the successive sets. diff --git a/commands/sdiffstore.md b/commands/sdiffstore.md index 51a25c98aa..0bb3003137 100644 --- a/commands/sdiffstore.md +++ b/commands/sdiffstore.md @@ -1,7 +1,3 @@ -@complexity - -O(N) where N is the total number of elements in all given sets. - This command is equal to `SDIFF`, but instead of returning the resulting set, it is stored in `destination`. diff --git a/commands/select.md b/commands/select.md index b0fc4f74df..702efa463d 100644 --- a/commands/select.md +++ b/commands/select.md @@ -1,5 +1,3 @@ -@description - Select the DB having the specified zero-based numeric index. New connections always use DB 0.
diff --git a/commands/set.md b/commands/set.md index 4edab2faf8..f5be816656 100644 --- a/commands/set.md +++ b/commands/set.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Set `key` to hold the string `value`. If `key` already holds a value, it is overwritten, regardless of its type. diff --git a/commands/setbit.md b/commands/setbit.md index bf4cb1ebcd..3e6d801774 100644 --- a/commands/setbit.md +++ b/commands/setbit.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Sets or clears the bit at _offset_ in the string value stored at _key_. The bit is either set or cleared depending on _value_, which can be either 0 or diff --git a/commands/setex.md b/commands/setex.md index 64adf98d9d..05f92b06ef 100644 --- a/commands/setex.md +++ b/commands/setex.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Set `key` to hold the string `value` and set `key` to timeout after a given number of seconds. This command is equivalent to executing the following commands: diff --git a/commands/setnx.md b/commands/setnx.md index 100490b51d..d1700a0c25 100644 --- a/commands/setnx.md +++ b/commands/setnx.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Set `key` to hold string `value` if `key` does not exist. In that case, it is equal to `SET`. When `key` already holds a value, no operation is performed. diff --git a/commands/setrange.md b/commands/setrange.md index e9f61f3be9..149fea9af4 100644 --- a/commands/setrange.md +++ b/commands/setrange.md @@ -1,9 +1,3 @@ -@complexity - -O(1), not counting the time taken to copy the new string in place. Usually, -this string is very small so the amortized complexity is O(1). Otherwise, -complexity is O(M) with M being the length of the _value_ argument. - Overwrites part of the string stored at _key_, starting at the specified offset, for the entire length of _value_. 
If the offset is larger than the current length of the string at _key_, the string is padded with zero-bytes to diff --git a/commands/sinter.md b/commands/sinter.md index c137fe0bf1..7a98ce7da0 100644 --- a/commands/sinter.md +++ b/commands/sinter.md @@ -1,8 +1,3 @@ -@complexity - -O(N\*M) worst case where N is the cardinality of the smallest set and M is the -number of sets. - Returns the members of the set resulting from the intersection of all the given sets. diff --git a/commands/sinterstore.md b/commands/sinterstore.md index 9f5ceba25e..26d6e3f381 100644 --- a/commands/sinterstore.md +++ b/commands/sinterstore.md @@ -1,8 +1,3 @@ -@complexity - -O(N*M) worst case where N is the cardinality of the smallest set and M is the -number of sets. - This command is equal to `SINTER`, but instead of returning the resulting set, it is stored in `destination`. diff --git a/commands/sismember.md b/commands/sismember.md index 995b8f4681..bfe474b58c 100644 --- a/commands/sismember.md +++ b/commands/sismember.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Returns if `member` is a member of the set stored at `key`. @return diff --git a/commands/smembers.md b/commands/smembers.md index 399de68cb6..a5f74eaa73 100644 --- a/commands/smembers.md +++ b/commands/smembers.md @@ -1,7 +1,3 @@ -@complexity - -O(N) where N is the set cardinality. - Returns all the members of the set value stored at `key`. This has the same effect as running `SINTER` with one argument `key`. diff --git a/commands/smove.md b/commands/smove.md index 14803f0834..9fd4ee0e53 100644 --- a/commands/smove.md +++ b/commands/smove.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Move `member` from the set at `source` to the set at `destination`. This operation is atomic. At any given moment the element will appear to be a member of `source` **or** `destination` for other clients.
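The `SMOVE` semantics described above can be sketched on Python sets (illustrative only; in real Redis the remove-and-add happens atomically inside the server, which plain Python code does not guarantee across threads): the reply is `1` on success and `0` when `member` is not in `source`.

```python
def smove(db, source, destination, member):
    """Simulate SMOVE: remove `member` from `source` and add it to
    `destination`; reply 1 on success, 0 if `member` was not in `source`."""
    src = db.get(source, set())
    if member not in src:
        return 0
    src.discard(member)
    db.setdefault(destination, set()).add(member)
    return 1

db = {"myset": {"one", "two"}, "myotherset": {"three"}}
assert smove(db, "myset", "myotherset", "two") == 1
assert db["myset"] == {"one"}
assert db["myotherset"] == {"two", "three"}
assert smove(db, "myset", "myotherset", "missing") == 0  # not a member
```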
diff --git a/commands/sort.md b/commands/sort.md index 0b788c2b39..e054d7c275 100644 --- a/commands/sort.md +++ b/commands/sort.md @@ -1,7 +1,3 @@ -@complexity - -O(N+M\*log(M)) where N is the number of elements in the list or set to sort, and M the number of returned elements. When the elements are not sorted, complexity is currently O(N) as there is a copy step that will be avoided in next releases. - Returns or stores the elements contained in the [list](/topics/data-types#lists), [set](/topics/data-types#set) or [sorted set](/topics/data-types#sorted-sets) at `key`. By default, sorting is numeric diff --git a/commands/spop.md b/commands/spop.md index d1ba8fdec9..7564dc797d 100644 --- a/commands/spop.md +++ b/commands/spop.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Removes and returns a random element from the set value stored at `key`. This operation is similar to `SRANDMEMBER`, that returns a random diff --git a/commands/srandmember.md b/commands/srandmember.md index 1ad4e0a5ce..9c4ab5b643 100644 --- a/commands/srandmember.md +++ b/commands/srandmember.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Return a random element from the set value stored at `key`. This operation is similar to `SPOP`, however while `SPOP` also removes the diff --git a/commands/srem.md b/commands/srem.md index 32df527ddb..057585d19f 100644 --- a/commands/srem.md +++ b/commands/srem.md @@ -1,8 +1,3 @@ -@complexity - -O(N) where N is the number of members to be removed. - - Remove the specified members from the set stored at `key`. Specified members that are not a member of this set are ignored. If `key` does not exist, it is treated as an empty set and this command returns `0`. diff --git a/commands/strlen.md b/commands/strlen.md index 0605ffeaf1..cfdf1243a0 100644 --- a/commands/strlen.md +++ b/commands/strlen.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Returns the length of the string value stored at `key`. An error is returned when `key` holds a non-string value. 
diff --git a/commands/subscribe.md b/commands/subscribe.md index 4fce765ae0..33c8b4fd4c 100644 --- a/commands/subscribe.md +++ b/commands/subscribe.md @@ -1,7 +1,3 @@ -@complexity - -O(N) where N is the number of channels to subscribe to. - Subscribes the client to the specified channels. Once the client enters the subscribed state it is not supposed to issue diff --git a/commands/sunion.md b/commands/sunion.md index 4d59b85816..7de66a17f4 100644 --- a/commands/sunion.md +++ b/commands/sunion.md @@ -1,7 +1,3 @@ -@complexity - -O(N) where N is the total number of elements in all given sets. - Returns the members of the set resulting from the union of all the given sets. diff --git a/commands/sunionstore.md b/commands/sunionstore.md index 4db793cc0f..f3bf959c5d 100644 --- a/commands/sunionstore.md +++ b/commands/sunionstore.md @@ -1,7 +1,3 @@ -@complexity - -O(N) where N is the total number of elements in all given sets. - This command is equal to `SUNION`, but instead of returning the resulting set, it is stored in `destination`. diff --git a/commands/sync.md b/commands/sync.md index 8dafb3c81c..e3159429b9 100644 --- a/commands/sync.md +++ b/commands/sync.md @@ -1,7 +1,3 @@ -@complexity - -@description - @examples -@return \ No newline at end of file +@return diff --git a/commands/ttl.md b/commands/ttl.md index 7e06c4ae90..928ee551fb 100644 --- a/commands/ttl.md +++ b/commands/ttl.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Returns the remaining time to live of a key that has a timeout. This introspection capability allows a Redis client to check how many seconds a given key will continue to be part of the dataset. diff --git a/commands/type.md b/commands/type.md index 46a6d8bb14..21b2d948ea 100644 --- a/commands/type.md +++ b/commands/type.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Returns the string representation of the type of the value stored at `key`. The different types that can be returned are: `string`, `list`, `set`, `zset` and `hash`. 
diff --git a/commands/unsubscribe.md b/commands/unsubscribe.md index a1b5e8b882..35a65eebf0 100644 --- a/commands/unsubscribe.md +++ b/commands/unsubscribe.md @@ -1,7 +1,3 @@ -@complexity - -O(N) where N is the number of clients already subscribed to a channel. - Unsubscribes the client from the given channels, or from all of them if none is given. diff --git a/commands/unwatch.md b/commands/unwatch.md index 32655426ce..40ac4b5194 100644 --- a/commands/unwatch.md +++ b/commands/unwatch.md @@ -1,7 +1,3 @@ -@complexity - -O(1). - Flushes all the previously watched keys for a [transaction](/topics/transactions). If you call `EXEC` or `DISCARD`, there's no need to manually call `UNWATCH`. diff --git a/commands/watch.md b/commands/watch.md index a44c711091..604ccf032b 100644 --- a/commands/watch.md +++ b/commands/watch.md @@ -1,7 +1,3 @@ -@complexity - -O(1) for every key. - Marks the given keys to be watched for conditional execution of a [transaction](/topics/transactions). @return diff --git a/commands/zadd.md b/commands/zadd.md index fc82a2984b..0a3eaf1efe 100644 --- a/commands/zadd.md +++ b/commands/zadd.md @@ -1,7 +1,3 @@ -@complexity - -O(log(N)) where N is the number of elements in the sorted set. - Adds all the specified members with the specified scores to the sorted set stored at `key`. It is possible to specify multiple score/member pairs. If a specified member is already a member of the sorted set, the score is updated and the element reinserted at the right position to ensure the correct ordering. If `key` does not exist, a new sorted set with the specified members as sole members is created, as if the sorted set was empty. diff --git a/commands/zcard.md b/commands/zcard.md index 044eace628..5fb0f84dce 100644 --- a/commands/zcard.md +++ b/commands/zcard.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Returns the sorted set cardinality (number of elements) of the sorted set stored at `key`.
diff --git a/commands/zcount.md b/commands/zcount.md index 9efbebbec2..fc44a1ac2b 100644 --- a/commands/zcount.md +++ b/commands/zcount.md @@ -1,8 +1,3 @@ -@complexity - -O(log(N)+M) with N being the number of elements in the -sorted set and M being the number of elements between `min` and `max`. - Returns the number of elements in the sorted set at `key` with a score between `min` and `max`. diff --git a/commands/zincrby.md b/commands/zincrby.md index a5fe4ea18a..b7c1ce16de 100644 --- a/commands/zincrby.md +++ b/commands/zincrby.md @@ -1,7 +1,3 @@ -@complexity - -O(log(N)) where N is the number of elements in the sorted set. - Increments the score of `member` in the sorted set stored at `key` by `increment`. If `member` does not exist in the sorted set, it is added with `increment` as its score (as if its previous score was `0.0`). If `key` does diff --git a/commands/zinterstore.md b/commands/zinterstore.md index 89056c41b2..abf98f0f62 100644 --- a/commands/zinterstore.md +++ b/commands/zinterstore.md @@ -1,9 +1,3 @@ -@complexity - -O(N\*K)+O(M\*log(M)) worst case with N being the smallest input sorted set, K -being the number of input sorted sets and M being the number of elements in the -resulting sorted set. - Computes the intersection of `numkeys` sorted sets given by the specified keys, and stores the result in `destination`. It is mandatory to provide the number of input keys (`numkeys`) before passing the input keys and the other diff --git a/commands/zrange.md b/commands/zrange.md index b31db4dc7c..37dc66cbab 100644 --- a/commands/zrange.md +++ b/commands/zrange.md @@ -1,8 +1,3 @@ -@complexity - -O(log(N)+M) with N being the number of elements in the sorted set and M the -number of elements returned. - Returns the specified range of elements in the sorted set stored at `key`. The elements are considered to be ordered from the lowest to the highest score. Lexicographical order is used for elements with equal score. 
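The `ZRANGE` ordering rule above (ascending score, lexicographical tie-break for equal scores) can be reproduced on a plain mapping. A hedged sketch, with `zset` as a `{member: score}` dict rather than Redis's actual internal representation:

```python
def zrange_all(zset):
    """List members the way ZRANGE 0 -1 orders them: by ascending
    score, breaking ties lexicographically by member name."""
    return [m for m, s in sorted(zset.items(), key=lambda kv: (kv[1], kv[0]))]

scores = {"b": 1.0, "a": 1.0, "c": 0.5}
print(zrange_all(scores))  # ['c', 'a', 'b']
```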
diff --git a/commands/zrangebyscore.md b/commands/zrangebyscore.md index 7308d7007b..88574ea15c 100644 --- a/commands/zrangebyscore.md +++ b/commands/zrangebyscore.md @@ -1,9 +1,3 @@ -@complexity - -O(log(N)+M) with N being the number of elements in the sorted set and M the -number of elements being returned. If M is constant (e.g. always asking for the -first 10 elements with `LIMIT`), you can consider it O(log(N)). - Returns all the elements in the sorted set at `key` with a score between `min` and `max` (including elements with score equal to `min` or `max`). The elements are considered to be ordered from low to high scores. diff --git a/commands/zrank.md b/commands/zrank.md index d6fdcbf74a..63be4d676e 100644 --- a/commands/zrank.md +++ b/commands/zrank.md @@ -1,8 +1,3 @@ -@complexity - -O(log(N)) - - Returns the rank of `member` in the sorted set stored at `key`, with the scores ordered from low to high. The rank (or index) is 0-based, which means that the member with the lowest score has rank `0`. diff --git a/commands/zrem.md b/commands/zrem.md index 2f0f71b7f6..42043db9a2 100644 --- a/commands/zrem.md +++ b/commands/zrem.md @@ -1,7 +1,3 @@ -@complexity - -O(M log(N)) with N being the number of elements in the sorted set and M the number of elements to be removed. - Removes the specified members from the sorted set stored at `key`. Non existing members are ignored. An error is returned when `key` exists and does not hold a sorted set. diff --git a/commands/zremrangebyrank.md b/commands/zremrangebyrank.md index 99eb68877b..c9f97419e6 100644 --- a/commands/zremrangebyrank.md +++ b/commands/zremrangebyrank.md @@ -1,8 +1,3 @@ -@complexity - -O(log(N)+M) with N being the number of elements in the sorted set and M the -number of elements removed by the operation. - Removes all elements in the sorted set stored at `key` with rank between `start` and `stop`. Both `start` and `stop` are `0`-based indexes with `0` being the element with the lowest score. 
These indexes can be negative numbers, diff --git a/commands/zremrangebyscore.md b/commands/zremrangebyscore.md index 88e38f94a4..d575d8d61e 100644 --- a/commands/zremrangebyscore.md +++ b/commands/zremrangebyscore.md @@ -1,8 +1,3 @@ -@complexity - -O(log(N)+M) with N being the number of elements in the sorted set and M the -number of elements removed by the operation. - Removes all elements in the sorted set stored at `key` with a score between `min` and `max` (inclusive). diff --git a/commands/zrevrange.md b/commands/zrevrange.md index 3888b8a88c..ddf0a404a0 100644 --- a/commands/zrevrange.md +++ b/commands/zrevrange.md @@ -1,8 +1,3 @@ -@complexity - -O(log(N)+M) with N being the number of elements in the -sorted set and M the number of elements returned. - Returns the specified range of elements in the sorted set stored at `key`. The elements are considered to be ordered from the highest to the lowest score. Descending lexicographical order is used for elements with equal score. diff --git a/commands/zrevrangebyscore.md b/commands/zrevrangebyscore.md index 721cf06b31..a0b1866a2c 100644 --- a/commands/zrevrangebyscore.md +++ b/commands/zrevrangebyscore.md @@ -1,9 +1,3 @@ -@complexity - -O(log(N)+M) with N being the number of elements in the sorted set and M the -number of elements being returned. If M is constant (e.g. always asking for the -first 10 elements with `LIMIT`), you can consider it O(log(N)). - Returns all the elements in the sorted set at `key` with a score between `max` and `min` (including elements with score equal to `max` or `min`). In contrary to the default ordering of sorted sets, for this command the elements are diff --git a/commands/zrevrank.md b/commands/zrevrank.md index 46283dc22f..8cc820ab9c 100644 --- a/commands/zrevrank.md +++ b/commands/zrevrank.md @@ -1,8 +1,3 @@ -@complexity - -O(log(N)) - - Returns the rank of `member` in the sorted set stored at `key`, with the scores ordered from high to low. 
The rank (or index) is 0-based, which means that the member with the highest score has rank `0`. diff --git a/commands/zscore.md b/commands/zscore.md index a683004f3b..7240b21add 100644 --- a/commands/zscore.md +++ b/commands/zscore.md @@ -1,8 +1,3 @@ -@complexity - -O(1) - - Returns the score of `member` in the sorted set at `key`. If `member` does not exist in the sorted set, or `key` does not exist, diff --git a/commands/zunionstore.md b/commands/zunionstore.md index 426aafc5d4..a235a2b2c2 100644 --- a/commands/zunionstore.md +++ b/commands/zunionstore.md @@ -1,8 +1,3 @@ -@complexity - -O(N)+O(M log(M)) with N being the sum of the sizes of the input sorted sets, -and M being the number of elements in the resulting sorted set. - Computes the union of `numkeys` sorted sets given by the specified keys, and stores the result in `destination`. It is mandatory to provide the number of input keys (`numkeys`) before passing the input keys and the other (optional) From 546e2ede2de942a5e0f7cb4dcaaa46ebf37e66bb Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 21 Mar 2012 16:47:18 +0100 Subject: [PATCH 0071/2880] More merging --- commands.json | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/commands.json b/commands.json index afa621554d..3bf3ede550 100644 --- a/commands.json +++ b/commands.json @@ -118,12 +118,8 @@ }, "CONFIG RESETSTAT": { "summary": "Reset the stats returned by INFO", -<<<<<<< HEAD - "since": "2.0.0", -======= "complexity": "O(1)", - "since": "2.0", ->>>>>>> origin/complexity-in-json + "since": "2.0.0", "group": "server" }, "DBSIZE": { From 00632dc57dce74ab3408f42da60e008f4372b2b1 Mon Sep 17 00:00:00 2001 From: Michel Martens Date: Wed, 21 Mar 2012 13:03:16 -0300 Subject: [PATCH 0072/2880] Remove markup from complexity descriptions. 
--- commands.json | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/commands.json b/commands.json index 3bf3ede550..9ed51d34b4 100644 --- a/commands.json +++ b/commands.json @@ -202,7 +202,7 @@ }, "EVAL": { "summary": "Execute a Lua script server side", - "complexity": "Looking up the script both with `EVAL` or `EVALSHA` is an O(1) business. The additional complexity is up to the script you execute.", + "complexity": "Looking up the script both with EVAL or EVALSHA is an O(1) business. The additional complexity is up to the script you execute.", "arguments": [ { "name": "script", @@ -592,7 +592,7 @@ }, "LINDEX": { "summary": "Get an element from a list by its index", - "complexity": "O(N) where N is the number of elements to traverse to get to the element at `index`. This makes asking for the first or the last element of the list O(1).", + "complexity": "O(N) where N is the number of elements to traverse to get to the element at index. This makes asking for the first or the last element of the list O(1).", "arguments": [ { "name": "key", @@ -608,7 +608,7 @@ }, "LINSERT": { "summary": "Insert an element before or after another element in a list", - "complexity": "O(N) where N is the number of elements to traverse before seeing the value `pivot`. This means that inserting somewhere on the left end on the list (head) can be considered O(1) and inserting somewhere on the right end (tail) is O(N).", + "complexity": "O(N) where N is the number of elements to traverse before seeing the value pivot. 
This means that inserting somewhere on the left end on the list (head) can be considered O(1) and inserting somewhere on the right end (tail) is O(N).", "arguments": [ { "name": "key", @@ -690,7 +690,7 @@ }, "LRANGE": { "summary": "Get a range of elements from a list", - "complexity": "O(S+N) where S is the `start` offset and N is the number of elements in the specified range.", + "complexity": "O(S+N) where S is the start offset and N is the number of elements in the specified range.", "arguments": [ { "name": "key", @@ -1224,7 +1224,7 @@ }, "SETRANGE": { "summary": "Overwrite part of a string at key starting at the specified offset", - "complexity": "O(1), not counting the time taken to copy the new string in place. Usually, this string is very small so the amortized complexity is O(1). Otherwise, complexity is O(M) with M being the length of the `value` argument.", + "complexity": "O(1), not counting the time taken to copy the new string in place. Usually, this string is very small so the amortized complexity is O(1). Otherwise, complexity is O(M) with M being the length of the value argument.", "arguments": [ { "name": "key", @@ -1626,7 +1626,7 @@ }, "ZCOUNT": { "summary": "Count the members in a sorted set with scores within the given values", - "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M being the number of elements between `min` and `max`.", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M being the number of elements between min and max.", "arguments": [ { "name": "key", @@ -1727,7 +1727,7 @@ }, "ZRANGEBYSCORE": { "summary": "Return a range of members in a sorted set, by score", - "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. 
always asking for the first 10 elements with `LIMIT`), you can consider it O(log(N)).", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).", "arguments": [ { "name": "key", @@ -1858,7 +1858,7 @@ }, "ZREVRANGEBYSCORE": { "summary": "Return a range of members in a sorted set, by score, with scores ordered from high to low", - "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with `LIMIT`), you can consider it O(log(N)).", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).", "arguments": [ { "name": "key", From d5898a6552370aa3d07eddf235606cd8bb4dfb23 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 21 Mar 2012 17:28:44 +0100 Subject: [PATCH 0073/2880] Complexity field added to json commands lacking it. 
--- commands.json | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/commands.json b/commands.json index 9ed51d34b4..e8aee5e04f 100644 --- a/commands.json +++ b/commands.json @@ -865,6 +865,7 @@ }, "PEXPIRE": { "summary": "Set a key's time to live in milliseconds", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -880,6 +881,7 @@ }, "PEXPIREAT": { "summary": "Set the expiration for a key as a UNIX timestamp specified in milliseconds", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -900,6 +902,7 @@ }, "PSETEX": { "summary": "Set the value and expiration in milliseconds of a key", + "complexity": "O(1)", "arguments": [ { "name": "key", @@ -932,6 +935,7 @@ }, "PTTL": { "summary": "Get the time to live for a key in milliseconds", + "complexity": "O(1)", "arguments": [ { "name": "key", From 2521d9215e3f21c7fe9867300b525cd2a513db9c Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 22 Mar 2012 10:36:58 +0100 Subject: [PATCH 0074/2880] Added documentation about read only slaves in the replication page. --- topics/replication.md | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/topics/replication.md b/topics/replication.md index ba578a03e8..711f673aab 100644 --- a/topics/replication.md +++ b/topics/replication.md @@ -20,7 +20,7 @@ the first synchronization. the first synchronization it can reply to queries using the old version of the data set, assuming you configured Redis to do so in redis.conf. Otherwise you can configure Redis slaves to send clients an error if the -link with the master is down. +link with the master is down. However there is a moment where the old dataset must be deleted and the new one must be loaded by the slave; during this brief window the slave will block incoming connections. * Replications can be used both for scalability, in order to have multiple slaves for read-only queries (for example, heavy `SORT` @@ -70,6 +70,16 @@ Of course you need to replace 192.168.1.1 6379 with your master IP address (or hostname) and port.
Alternatively, you can call the `SLAVEOF` command and the master host will start a sync with the slave. +Read only slave +--- + +Since Redis 2.6 slaves support a read-only mode that is enabled by default. +This behavior is controlled by the `slave-read-only` option in the redis.conf file, and can be enabled and disabled at runtime using `CONFIG SET`. + +Read only slaves will reject all the write commadns, so that it is not possible to write to a slave because of a mistake. This does not mean that the feature is conceived to expose a slave instance to the internet or more generally to a network where untrusted clients exist, because administrative commands like `DEBUG` or `CONFIG` are still enabled. However the security of read-only instances can be improved by disabling commands in redis.conf using the `rename-command` directive. + +You may wonder why it is possible to revert the default and have slave instances that can be the target of write operations. The reason is that, while these writes will be discarded if the slave and the master resynchronize or if the slave is restarted, there is often unimportant ephemeral data that can be stored into slaves. For instance clients may store information about the reachability of the master in the slave instance to coordinate a fail over strategy. + +Setting a slave to authenticate to a master +--- + From ae675e86b86180133b84042118735699e28bfef6 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 22 Mar 2012 13:48:39 +0100 Subject: [PATCH 0075/2880] typo fixed. --- topics/replication.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/replication.md b/topics/replication.md index 711f673aab..43c5c27014 100644 --- a/topics/replication.md +++ b/topics/replication.md @@ -76,7 +76,7 @@ Read only slave  Since Redis 2.6 slaves support a read-only mode that is enabled by default.
This behavior is controlled by the `slave-read-only` option in the redis.conf file, and can be enabled and disabled at runtime using `CONFIG SET`. -Read only slaves will reject all the write commadns, so that it is not possible to write to a slave because of a mistake. This does not mean that the feature is conceived to expose a slave instance to the internet or more generally to a network where untrusted clients exist, because administrative commands like `DEBUG` or `CONFIG` are still enabled. However the security of read-only instances can be improved by disabling commands in redis.conf using the `rename-command` directive. +Read only slaves will reject all the write commands, so that it is not possible to write to a slave because of a mistake. This does not mean that the feature is conceived to expose a slave instance to the internet or more generally to a network where untrusted clients exist, because administrative commands like `DEBUG` or `CONFIG` are still enabled. However the security of read-only instances can be improved by disabling commands in redis.conf using the `rename-command` directive. You may wonder why it is possible to revert the default and have slave instances that can be the target of write operations. The reason is that, while these writes will be discarded if the slave and the master resynchronize or if the slave is restarted, there is often unimportant ephemeral data that can be stored into slaves. For instance clients may store information about the reachability of the master in the slave instance to coordinate a fail over strategy. From 13fc9ea7520d00ad1b5555ffbd2679b473cbf99c Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 22 Mar 2012 15:35:21 +0100 Subject: [PATCH 0076/2880] EVAL documentation improved.
--- commands/eval.md | 179 +++++++++++++++++++++++++++-------------------- 1 file changed, 102 insertions(+), 77 deletions(-) diff --git a/commands/eval.md b/commands/eval.md index 1dd8c89daf..60ac7a4d1a 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -1,32 +1,22 @@ -Warning ---- - -Redis scripting support is currently a work in progress. This feature -will be shipped as stable with the release of Redis 2.6. The information -in this document reflects what is currently implemented, but it is -possible that changes will be made before the release of the stable -version. - Introduction to EVAL --- `EVAL` and `EVALSHA` are used to evaluate scripts using the Lua interpreter built into Redis starting from version 2.6.0. -The first argument of `EVAL` itself is a Lua script. The script does not need -to define a Lua function, it is just a Lua program that will run in the context -of the Redis server. +The first argument of `EVAL` is a Lua 5.1 script. The script does not need +to define a Lua function (and should not). It is just a Lua program that will run in the context of the Redis server. The second argument of `EVAL` is the number of arguments that follows -(starting from the third argument) that represent Redis key names. +the script (starting from the third argument) that represent Redis key names. This arguments can be accessed by Lua using the `KEYS` global variable in the form of a one-based array (so `KEYS[1]`, `KEYS[2]`, ...). -All the additional arguments that should not represent key names can +All the additional arguments should not represent key names and can be accessed by Lua using the `ARGV` global variable, very similarly to what happens with keys (so `ARGV[1]`, `ARGV[2]`, ...). 
-The following example can clarify what stated above: +The following example should clarify what was stated above: > eval "return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}" 2 key1 key2 first second 1) "key1" @@ -36,18 +26,19 @@ The following example can clarify what stated above: Note: as you can see Lua arrays are returned as Redis multi bulk replies, that is a Redis return type that your client library will -likely convert into an Array in your programming language. +likely convert into an Array type in your programming language. -It is possible to call Redis program from a Lua script using two different +It is possible to call Redis commands from a Lua script using two different Lua functions: * `redis.call()` * `redis.pcall()` `redis.call()` is similar to `redis.pcall()`, the only difference is that if a -Redis command call will result into an error, `redis.call()` will raise a -Lua error that in turn will make `EVAL` to fail, while `redis.pcall` will trap -the error returning a Lua table representing the error. +Redis command call will result into an error, `redis.call()` will raise a Lua +error that in turn will force `EVAL` to return an error to the command caller, +while `redis.pcall` will trap the error returning a Lua table representing the +error. The arguments of the `redis.call()` and `redis.pcall()` functions are simply all the arguments of a well formed Redis command: @@ -55,7 +46,7 @@ all the arguments of a well formed Redis command: > eval "return redis.call('set','foo','bar')" 0 OK -The above script works and will set the key `foo` to the string "bar". +The above script actually sets the key `foo` to the string `bar`. However it violates the `EVAL` command semantics as all the keys that the script uses should be passed using the KEYS array, in the following way: @@ -70,18 +61,17 @@ In order for this to be true for `EVAL` also keys must be explicit.
This is useful in many ways, but especially in order to make sure Redis Cluster is able to forward your request to the appropriate cluster node (Redis Cluster is a work in progress, but the scripting feature was designed -in order to play well with it). +in order to play well with it). However this rule is not enforced in order to provide the user with opportunities to abuse the Redis single instance configuration, at the cost of writing scripts not compatible with Redis Cluster. -Lua scripts can return a value that is converted from Lua to the Redis protocol -using a set of conversion rules. +Lua scripts can return a value, that is converted from the Lua type to the Redis protocol using a set of conversion rules. Conversion between Lua and Redis data types --- Redis return values are converted into Lua data types when Lua calls a Redis command using call() or pcall(). Similarly Lua data types are -converted into Redis data types when a script returns some value, that -we need to use as the `EVAL` reply. +converted into Redis protocol when a Lua script returns some value, so that +scripts can control what `EVAL` will reply to the client. This conversion between data types is designed in a way that if a Redis type is converted into a Lua type, and then the result is converted @@ -108,8 +98,8 @@ The following table shows you all the conversions rules: * Lua table with a single `err` field -> Redis error reply * Lua boolean false -> Redis Nil bulk reply. -There is an additional Lua to Redis conversion that has no corresponding -Redis to Lua conversion: +There is an additional Lua to Redis conversion rule that has no corresponding +Redis to Lua conversion rule: * Lua boolean true -> Redis integer reply with value of 1. @@ -138,6 +128,8 @@ Redis uses the same Lua interpreter to run all the commands. Also Redis guarantees that a script is executed in an atomic way: no other script or Redis command will be executed while a script is being executed.
This semantics is very similar to the one of `MULTI` / `EXEC`. +From the point of view of all the other clients the effects of a script +are either still not visible or already completed. However this also means that executing slow scripts is not a good idea. It is not hard to create fast scripts, as the script overhead is very low, @@ -150,7 +142,7 @@ Error handling As already stated calls to `redis.call()` resulting into a Redis command error will stop the execution of the script and will return that error back, in a -way that makes it obvious the error was generated by a script: +way that makes it obvious that the error was generated by a script: > del foo (integer) 1 @@ -161,34 +153,30 @@ Using the `redis.pcall()` command no error is raised, but an error object is returned in the format specified above (as a Lua table with an `err` -field). The user can later return this error to the user just returning the -error object returned by `redis.pcall()`. +field). The user can later return this exact error to the user by just returning +the error object returned by `redis.pcall()`. Bandwidth and EVALSHA --- -The `EVAL` command forces you to send the script body again and again, even if -it does not need to recompile the script every time as it uses an internal -caching mechanism. However paying the cost of the additional bandwidth may -not be optimal in all the contexts. +The `EVAL` command forces you to send the script body again and again. +Redis does not need to recompile the script every time as it uses an internal +caching mechanism, however paying the cost of the additional bandwidth may +not be optimal in many contexts. On the other hand defining commands using a special command or via `redis.conf` would be a problem for a few reasons: -* Different instances may have different versions of a command -implementation. +* Different instances may have different versions of a command implementation.
-* Deployment is hard if there is to make sure all the instances contain -a given command, especially in a distributed environment. +* Deployment is hard if we need to make sure all the instances contain a given command, especially in a distributed environment. -* Reading an application code the full semantic could not be clear since -the application would call commands defined server side. +* When reading application code, the full semantics may not be clear since the application would call commands defined server side. In order to avoid the above three problems and at the same time not incur -in the bandwidth penalty Redis implements the `EVALSHA` command. +in the bandwidth penalty, Redis implements the `EVALSHA` command. -`EVALSHA` works exactly as `EVAL`, but instead of having a script as first argument -it has the SHA1 sum of a script. The behavior is the following: +`EVALSHA` works exactly as `EVAL`, but instead of having a script as first argument it has the SHA1 sum of a script. The behavior is the following: * If the server still remembers a script whose SHA1 sum was the one specified, the script is executed. @@ -209,22 +197,24 @@ Example: The client library implementation can always optimistically send `EVALSHA` under the hood even when the client actually called `EVAL`, in the hope the script -was already seen by the server. If the `NOSCRIPT` error is returned `EVAL` will be -used instead. Passing keys and arguments as `EVAL` additional arguments is also +was already seen by the server. If the `NOSCRIPT` error is returned `EVAL` will be used instead. + +Passing keys and arguments as `EVAL` additional arguments is also very useful in this context as the script string remains constant and can be efficiently cached by Redis. Script cache semantics --- -Executed scripts are guaranteed to be in the script cache forever. +Executed scripts are guaranteed to be in the script cache **forever**.
This means that if an `EVAL` is performed against a Redis instance all the subsequent `EVALSHA` calls will succeed. The only way to flush the script cache is by explicitly calling the -SCRIPT FLUSH command, that will flush the scripts cache. This is usually +SCRIPT FLUSH command, that will *completely flush* the scripts cache removing +all the scripts executed so far. This is usually needed only when the instance is going to be instantiated for another -customer in a cloud environment. +customer or application in a cloud environment. The reason why scripts can be cached for a long time is that it is unlikely for a well written application to have so many different scripts to create @@ -237,7 +227,7 @@ The fact that the user can count on Redis not removing scripts is semantically a very good thing. For instance an application taking a persistent connection to Redis can stay sure that if a script was sent once it is still in memory, thus for instance can use EVALSHA -against those scripts in a pipeline without the change that an error +against those scripts in a pipeline without the chance that an error will be generated since the script is not known (we'll see this problem in its details later). @@ -261,7 +251,8 @@ cache, while 0 means that a script with this SHA1 was never seen before * SCRIPT LOAD *script*. This command registers the specified script in the Redis script cache. The command is useful in all the contexts where we want to make sure that `EVALSHA` will not fail (for instance during a -pipeline or MULTI/EXEC operation). +pipeline or MULTI/EXEC operation), without the need to actually execute the +script. * SCRIPT KILL. This command is the only way to interrupt a long running script that reached the configured maximum execution time for scripts.
The reason is that scripts are much faster than sending commands one after the other to a Redis instance, so if the client is taking the master very busy sending scripts, turning this scripts into single -commands for the slave / AOF would result in too much load for the replication -link or the Append Only File. +commands for the slave / AOF would result in too much bandwidth for the +replication link or the Append Only File (and also too much CPU since +dispatching a command received via network is a lot more work for Redis +compared to dispatching a command invoked by Lua scripts). The only drawback with this approach is that scripts are required to have the following property: * The script always evaluates the same Redis *write* commands with the same arguments given the same input data set. Operations performed by -the script cannot depend on any hidden information or state that may -change as script execution proceeds or between different executions of +the script cannot depend on any hidden (non explicit) information or state +that may change as script execution proceeds or between different executions of the script, nor can it depend on any external input from I/O devices. Things like using the system time, calling Redis random commands like -RANDOMKEY, or using Lua random number generator, could result into scripts +`RANDOMKEY`, or using Lua random number generator, could result into scripts that will not evaluate always in the same way. In order to enforce this behavior in scripts Redis does the following: -* Lua does not export commands to access the system time or other -external state. +* Lua does not export commands to access the system time or other external state. * Redis will block the script with an error if a script will call a -Redis command able to alter the data set **after** a Redis random -command like RANDOMKEY or SRANDMEMBER. This means that if a script is -read only and does not modify the data set it is free to call those -commands. 
Redis command able to alter the data set **after** a Redis *random* +command like `RANDOMKEY`, `SRANDMEMBER`, `TIME`. This means that if a script is +read only and does not modify the data set it is free to call those commands. +Note that a *random command* does not necessarily identify a command that +uses random numbers: any non deterministic command is considered a random +command (the best example in this regard is the `TIME` command). + +* Redis commands that may return elements in random order, like `SMEMBERS` +(because Redis Sets are *unordered*) have a different behavior when called from Lua, and undergo a silent lexicographical sorting filter before returning data to Lua scripts. So `redis.call("smembers",KEYS[1])` will always return the Set elements in the same order, while the same command invoked from normal clients may return different results even if the key contains exactly the same elements. * Lua pseudo random number generation functions `math.random` and `math.randomseed` are modified in order to always have the same seed every @@ -313,7 +310,7 @@ always generate the same sequence of numbers every time a script is executed if `math.randomseed` is not used. However the user is still able to write commands with random behaviors -using the following simple trick. For example I want to write a Redis +using the following simple trick. Imagine I want to write a Redis script that will populate a list with N random integers. I can start writing the following script, using a small Ruby program: @@ -351,9 +348,10 @@ following elements: 10) "0.17082803611217" In order to make it a pure function, but still making sure that every -invocation of the script will result in a different random elements, we can +invocation of the script will result in different random elements, we can simply add an additional argument to the script, that will be used in order to -seed the Lua PRNG.
The new script will be like the following: +seed the Lua pseudo random number generator. The new script will be like the +following: RandomPushScript = < Date: Thu, 22 Mar 2012 16:25:40 +0100 Subject: [PATCH 0077/2880] More scripting documentation. --- commands.json | 39 ++++++++++++++++++++++++++++++++++++++- commands/script exists.md | 17 +++++++++++++++++ commands/script flush.md | 7 +++++++ commands/script kill.md | 12 ++++++++++++ commands/script load.md | 13 +++++++++++++ 5 files changed, 87 insertions(+), 1 deletion(-) create mode 100644 commands/script exists.md create mode 100644 commands/script flush.md create mode 100644 commands/script kill.md create mode 100644 commands/script load.md diff --git a/commands.json b/commands.json index e8aee5e04f..fa3ffc3d63 100644 --- a/commands.json +++ b/commands.json @@ -224,7 +224,7 @@ } ], "since": "2.6.0", - "group": "generic" + "group": "scripting" }, "EXEC": { "summary": "Execute all commands issued after MULTI", @@ -1113,6 +1113,43 @@ "since": "1.0.0", "group": "set" }, + "SCRIPT EXISTS": { + "summary": "Check existence of scripts in the script cache.", + "complexity": "O(N) with N being the number of scripts to check (so checking a single script is an O(1) operation).", + "arguments": [ + { + "name": "script", + "type": "string", + "multiple": true + }, + ], + "since": "2.6.0", + "group": "scripting" + }, + "SCRIPT FLUSH": { + "summary": "Remove all the scripts from the script cache.", + "complexity": "O(N) with N being the number of scripts in cache", + "since": "2.6.0", + "group": "scripting" + }, + "SCRIPT KILL": { + "summary": "Kill the script currently in execution.", + "complexity": "O(1)", + "since": "2.6.0", + "group": "scripting" + }, + "SCRIPT LOAD": { + "summary": "Load the specified Lua script into the script cache.", + "complexity": "O(N) with N being the length in bytes of the script body.", + "arguments": [ + { + "name": "script", + "type": "string" + }, + ], + "since": "2.6.0", + "group": 
"scripting" + }, "SDIFF": { "summary": "Subtract multiple sets", "complexity": "O(N) where N is the total number of elements in all given sets.", diff --git a/commands/script exists.md b/commands/script exists.md new file mode 100644 index 0000000000..bda2569865 --- /dev/null +++ b/commands/script exists.md @@ -0,0 +1,17 @@ +Returns information about the existence of the scripts in the script cache. + +This command accepts one or more SHA1 sums and returns a list of ones or zeros to signal if the scripts are already defined or not inside the script cache. +This can be useful before a pipelining operation to ensure that scripts are loaded (and if not, to load them using `SCRIPT LOAD`) so that the pipelining operation can be performed solely using `EVALSHA` instead of `EVAL` to save bandwidth. + +Plase check the `EVAL` page for detailed information about how Redis Lua scripting works. + +@return + +@multi-bulk-reply +The command returns an array of integers that correspond to the specified SHA1 sum arguments. For every corresponding SHA1 sum of a script that actually exists in the script cache, an 1 is returned, otherwise 0 is returned. + +@example + + @cli + SCRIPT LOAD "return 1" + SCRIPT EXISTS e0e1f9fabfc9d4800c877a703b823ac0578ff8db ffffffffffffffffffffffffffffffffffffffff diff --git a/commands/script flush.md b/commands/script flush.md new file mode 100644 index 0000000000..1a550eced1 --- /dev/null +++ b/commands/script flush.md @@ -0,0 +1,7 @@ +Flush the Lua scripts cache. + +Plase check the `EVAL` page for detailed information about how Redis Lua scripting works. + +@return + +@status-reply diff --git a/commands/script kill.md b/commands/script kill.md new file mode 100644 index 0000000000..362740bbbf --- /dev/null +++ b/commands/script kill.md @@ -0,0 +1,12 @@ +Kills the currently executing Lua script, assuming no write operation was yet performed by the script. 
+ +This command is mainly useful to kill a script that is running for too much time (for instance because it entered an infinite loop due to a bug). +The script will be killed and the client currently blocked in EVAL will see the command returning with an error. + +If the script already performed write operations it cannot be killed in this way because it would violate the Lua script atomicity contract. In such a case only `SHUTDOWN NOSAVE` is able to kill the script, killing the Redis process in a hard way and preventing it from persisting half-written information. + +Please check the `EVAL` page for detailed information about how Redis Lua scripting works. + +@return + +@status-reply diff --git a/commands/script load.md b/commands/script load.md new file mode 100644 index 0000000000..a3bfd38e16 --- /dev/null +++ b/commands/script load.md @@ -0,0 +1,13 @@ +Load a script into the scripts cache, without executing it. +After the specified command is loaded into the script cache it will be callable using `EVALSHA` with the correct SHA1 digest of the script, exactly like after the first successful invocation of `EVAL`. + +The script is guaranteed to stay in the script cache forever (unless `SCRIPT FLUSH` is called). + +The command works in the same way even if the script was already present in the script cache. + +Please check the `EVAL` page for detailed information about how Redis Lua scripting works. + +@return + +@bulk-reply +This command returns the SHA1 sum of the script added into the script cache.
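As the `SCRIPT LOAD` / `SCRIPT EXISTS` example above suggests, the SHA1 sums handled by the script cache are plain SHA1 hex digests of the script body, so a client can compute them locally before deciding between `EVAL` and `EVALSHA`. A minimal sketch in Python (standalone, no Redis server involved; `script_digest` is a hypothetical helper name):

```python
import hashlib

def script_digest(script: str) -> str:
    # The SHA1 that SCRIPT LOAD returns, and that EVALSHA and
    # SCRIPT EXISTS accept, is the lowercase SHA1 hex digest
    # of the script body itself.
    return hashlib.sha1(script.encode("utf-8")).hexdigest()

# Matches the digest used in the SCRIPT EXISTS example above.
print(script_digest("return 1"))
# e0e1f9fabfc9d4800c877a703b823ac0578ff8db
```

A client can therefore pipeline `EVALSHA` calls optimistically and fall back to `SCRIPT LOAD` only for digests that `SCRIPT EXISTS` reports as missing.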
From deb0f66d7e12b8c62438f1e7247b6f9690adbbfd Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 22 Mar 2012 16:26:22 +0100 Subject: [PATCH 0078/2880] JSON typo as usually --- commands.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands.json b/commands.json index fa3ffc3d63..9b91edfe29 100644 --- a/commands.json +++ b/commands.json @@ -1145,7 +1145,7 @@ { "name": "script", "type": "string" - }, + } ], "since": "2.6.0", "group": "scripting" From 67a7608915a61f1a27e9ea96972c06874bfdc469 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 22 Mar 2012 16:27:18 +0100 Subject: [PATCH 0079/2880] JSON typo, the revenge. --- commands.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands.json b/commands.json index 9b91edfe29..4cfcb88646 100644 --- a/commands.json +++ b/commands.json @@ -1121,7 +1121,7 @@ "name": "script", "type": "string", "multiple": true - }, + } ], "since": "2.6.0", "group": "scripting" From 10fa993113af288791bd33bd19de8b3b392a471f Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 23 Mar 2012 10:32:46 +0100 Subject: [PATCH 0080/2880] INCRBYFLOAT and HINCRBYFLOAT documentation. 
--- commands.json | 38 +++++++++++++++++++++++++++++++++++++- commands/hincrbyfloat.md | 23 +++++++++++++++++++++++ commands/incrbyfloat.md | 31 +++++++++++++++++++++++++++++++ 3 files changed, 91 insertions(+), 1 deletion(-) create mode 100644 commands/hincrbyfloat.md create mode 100644 commands/incrbyfloat.md diff --git a/commands.json b/commands.json index 4cfcb88646..ed66049937 100644 --- a/commands.json +++ b/commands.json @@ -430,6 +430,26 @@ "since": "2.0.0", "group": "hash" }, + "HINCRBYFLOAT": { + "summary": "Increment the float value of a hash field by the given amount", + "complexity": "O(1)", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "name": "field", + "type": "string" + }, + { + "name": "increment", + "type": "double" + } + ], + "since": "2.6.0", + "group": "hash" + }, "HKEYS": { "summary": "Get all the fields in a hash", "complexity": "O(N) where N is the size of the hash.", @@ -553,7 +573,7 @@ "group": "string" }, "INCRBY": { - "summary": "Increment the integer value of a key by the given number", + "summary": "Increment the integer value of a key by the given amount", "complexity": "O(1)", "arguments": [ { @@ -568,6 +588,22 @@ "since": "1.0.0", "group": "string" }, + "INCRBYFLOAT": { + "summary": "Increment the float value of a key by the given amount", + "complexity": "O(1)", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "name": "increment", + "type": "double" + } + ], + "since": "2.6.0", + "group": "string" + }, "INFO": { "summary": "Get information and statistics about the server", "since": "1.0.0", diff --git a/commands/hincrbyfloat.md b/commands/hincrbyfloat.md new file mode 100644 index 0000000000..4a95ecafbc --- /dev/null +++ b/commands/hincrbyfloat.md @@ -0,0 +1,23 @@ +Increment the specified `field` of a hash stored at `key`, and representing a floating point number, by the specified `increment`. If the field does not exist, it is set to `0` before performing the operation.
An error is returned if one of the following conditions occurs: + +* The field contains a value of the wrong type (not a string). +* The current field content or the specified increment are not parsable as a double precision floating point number. + +The exact behavior of this command is identical to the one of the `INCRBYFLOAT` command, please refer to the documentation of `INCRBYFLOAT` for further information. + +@return + +@bulk-reply: the value of `field` after the increment. + +@examples + + @cli + HSET mykey 10.50 + HINCRBYFLOAT mykey 0.1 + HSET mykey 5.0e3 + HINCRBYFLOAT mykey 2.0e2 + +## Implementation details + +The command is always propagated in the replication link and the Append Only File as a `HSET` operation, so that differences in the underlying floating point +math implementation will not be sources of inconsistency. diff --git a/commands/incrbyfloat.md b/commands/incrbyfloat.md new file mode 100644 index 0000000000..adfef50a3b --- /dev/null +++ b/commands/incrbyfloat.md @@ -0,0 +1,31 @@ +Increment the string representing a floating point number stored at `key` by +the specified `increment`. If the key does not exist, it is set to `0` before performing the operation. An error is returned if one of the following conditions occurs: + +* The key contains a value of the wrong type (not a string). +* The current key content or the specified increment are not parsable as a double precision floating point number. + +If the command is successful the new incremented value is stored as the new value of the key (replacing the old one), and returned to the caller as a string. + +Both the value already contained in the string key and the increment argument +can be optionally provided in exponential notation, however the value computed +after the increment is stored consistently in the same format, that is, an integer number followed (if needed) by a dot, and a variable number of digits representing the decimal part of the number.
Trailing zeroes are always removed. + +The precision of the output is fixed at 17 digits after the decimal point +regardless of the actual internal precision of the computation. + +@return + +@bulk-reply: the value of `key` after the increment. + +@examples + + @cli + SET mykey 10.50 + INCRBYFLOAT mykey 0.1 + SET mykey 5.0e3 + INCRBYFLOAT mykey 2.0e2 + +## Implementation details + +The command is always propagated in the replication link and the Append Only File as a `SET` operation, so that differences in the underlying floating point +math implementation will not be sources of inconsistency. From a902a35079a2d7f26250388d8fd742bc6ca5a19c Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 23 Mar 2012 10:45:41 +0100 Subject: [PATCH 0081/2880] HINCRBYFLOAT example fixed. --- commands/hincrbyfloat.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/commands/hincrbyfloat.md b/commands/hincrbyfloat.md index 4a95ecafbc..2668c09049 100644 --- a/commands/hincrbyfloat.md +++ b/commands/hincrbyfloat.md @@ -12,10 +12,10 @@ The exact behavior of this command is identical to the one of the `INCRBYFLOAT` @examples @cli - HSET mykey 10.50 - HINCRBYFLOAT mykey 0.1 - HSET mykey 5.0e3 - HINCRBYFLOAT mykey 2.0e2 + HSET mykey field 10.50 + HINCRBYFLOAT mykey field 0.1 + HSET mykey field 5.0e3 + HINCRBYFLOAT mykey field 2.0e2 ## Implementation details From 604f6c133399bea346e99ede2ac1a7cf43cdb73f Mon Sep 17 00:00:00 2001 From: antirez Date: Sun, 25 Mar 2012 11:46:54 +0200 Subject: [PATCH 0082/2880] CONFIG RESETSTAT page updated. 
--- commands/config resetstat.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/commands/config resetstat.md b/commands/config resetstat.md index 03b3417292..aa096e4bb6 100644 --- a/commands/config resetstat.md +++ b/commands/config resetstat.md @@ -7,6 +7,9 @@ These are the counters that are reset: * Number of commands processed * Number of connections received * Number of expired keys +* Number of rejected connections +* Latest fork(2) time +* The `aof_delayed_fsync` counter @return From 20a55c571974c2858e02d0764c8a461f9a65bd0e Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 26 Mar 2012 15:20:19 +0200 Subject: [PATCH 0083/2880] blog article about persistence linked to official doc. --- topics/persistence.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/topics/persistence.md b/topics/persistence.md index e5388bc7e2..3162c72eac 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -1,3 +1,5 @@ +This page provides a technical description of Redis persistence, it is a suggested read for all the Redis users. For a wider overview of Redis persistence and the durability guarantees it provides you may want to also read [Redis persistence demystified](http://antirez.com/post/redis-persistence-demystified.html). + Redis Persistence === From ef415ca683e8d689e65ac166865fe16996ad012c Mon Sep 17 00:00:00 2001 From: huangz1990 Date: Tue, 27 Mar 2012 09:34:31 +0800 Subject: [PATCH 0084/2880] fix typo --- commands/eval.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/eval.md b/commands/eval.md index 60ac7a4d1a..0443cfec5f 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -61,7 +61,7 @@ In order for this to be true for `EVAL` also keys must be explicit. This is useful in many ways, but especially in order to make sure Redis Cluster is able to forward your request to the appropriate cluster node (Redis Cluster is a work in progress, but the scripting feature was designed -in order to play well with it). 
However this rule is not envorced in order to provide the user with opportunities to abuse the Redis single instance configuration, at the cost of writing scripts not compatible with Redis Cluster. +in order to play well with it). However this rule is not enforced in order to provide the user with opportunities to abuse the Redis single instance configuration, at the cost of writing scripts not compatible with Redis Cluster. Lua scripts can return a value, that is converted from the Lua type to the Redis protocol using a set of conversion rules. From 25f073dd401b2ab5ece06a388cf3415909742770 Mon Sep 17 00:00:00 2001 From: huangz1990 Date: Tue, 27 Mar 2012 11:45:16 +0800 Subject: [PATCH 0085/2880] fix typo --- commands/eval.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/eval.md b/commands/eval.md index 0443cfec5f..b0a152834c 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -162,7 +162,7 @@ Bandwidth and EVALSHA The `EVAL` command forces you to send the script body again and again. Redis does not need to recompile the script every time as it uses an internal caching mechanism, however paying the cost of the additional bandwidth may -not be optimal in may contexts. +not be optimal in many contexts. On the other hand defining commands using a special command or via `redis.conf` would be a problem for a few reasons: From dc5d847d382c56e2915bcfcf18b2aa5417fac560 Mon Sep 17 00:00:00 2001 From: chernjie Date: Wed, 28 Mar 2012 12:17:58 +0800 Subject: [PATCH 0086/2880] Update topics/whos-using-redis.md --- topics/whos-using-redis.md | 1 + 1 file changed, 1 insertion(+) diff --git a/topics/whos-using-redis.md b/topics/whos-using-redis.md index c0684da6d3..5a81c0e954 100644 --- a/topics/whos-using-redis.md +++ b/topics/whos-using-redis.md @@ -87,6 +87,7 @@ And many others: * [Nasza Klasa](http://nk.pl/) * [Forrst](http://forrst.com) * [Surfingbird](http://surfingbird.com) +* [mig33](http://www.mig33.com) This list is incomplete. 
If you're using Redis and would like to be listed, [send a pull request](https://github.com/antirez/redis-doc). From b9f51f213af84a9c53ef8b5c4472a76c269aa2d6 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 30 Mar 2012 15:32:51 +0200 Subject: [PATCH 0087/2880] Software watchdog documented. --- topics/latency.md | 55 +++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 55 insertions(+) diff --git a/topics/latency.md b/topics/latency.md index 26294e94dc..830fc0c853 100644 --- a/topics/latency.md +++ b/topics/latency.md @@ -473,3 +473,58 @@ Apparently there is no way to tell strace to just show slow system calls so I use the following command: sudo strace -f -p $(pidof redis-server) -T -e trace=fdatasync,write 2>&1 | grep -v '0.0' | grep -v unfinished + +Redis software watchdog +--- + +Redis 2.6 introduces the *Redis Software Watchdog*, a debugging tool +designed to track those latency problems that for one reason or the other +escaped an analysis using normal tools. + +The software watchdog is an experimental feature. While it is designed to +be used in production environments, care should be taken to back up the database +before proceeding as it could possibly have unexpected interactions with the +normal execution of the Redis server. + +It is important to use it only as a *last resort* when there is no way to track the issue by other means. + +This is how this feature works: + +* The user enables the software watchdog using the `CONFIG SET` command. +* Redis starts monitoring itself constantly. +* If Redis detects that the server is blocked in some operation that is not returning fast enough, and that may be the source of the latency issue, a low level report about where the server is blocked is dumped on the log file. +* The user contacts the developers by writing a message in the Redis Google Group, including the watchdog report in the message.
+ +Note that this feature can not be enabled using the redis.conf file, because it is designed to be enabled only in already running instances and only for debugging purposes. + +To enable the feature just use the following: + + CONFIG SET watchdog-period 500 + +The period is specified in milliseconds. In the above example I specified to log latency issues only if the server detects a delay of 500 milliseconds or greater. The minimum configurable period is 200 milliseconds. + +When you are done with the software watchdog you can turn it off setting the `watchdog-period` parameter to 0. **Important:** remember to do this because keeping the instance with the watchdog turned on for a longer time than needed is generally not a good idea. + +The following is an example of what you'll see printed in the log file once the software watchdog detects a delay longer than the configured one: + + [8547 | signal handler] (1333114359) + --- WATCHDOG TIMER EXPIRED --- + /lib/libc.so.6(nanosleep+0x2d) [0x7f16b5c2d39d] + /lib/libpthread.so.0(+0xf8f0) [0x7f16b5f158f0] + /lib/libc.so.6(nanosleep+0x2d) [0x7f16b5c2d39d] + /lib/libc.so.6(usleep+0x34) [0x7f16b5c62844] + ./redis-server(debugCommand+0x3e1) [0x43ab41] + ./redis-server(call+0x5d) [0x415a9d] + ./redis-server(processCommand+0x375) [0x415fc5] + ./redis-server(processInputBuffer+0x4f) [0x4203cf] + ./redis-server(readQueryFromClient+0xa0) [0x4204e0] + ./redis-server(aeProcessEvents+0x128) [0x411b48] + ./redis-server(aeMain+0x2b) [0x411dbb] + ./redis-server(main+0x2b6) [0x418556] + /lib/libc.so.6(__libc_start_main+0xfd) [0x7f16b5ba1c4d] + ./redis-server() [0x411099] + ------ + +Note: in the example the **DEBUG SLEEP** command was used in order to block the server. The stack trace is different if the server blocks in a different context. 
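The watchdog mechanism described above (a timer signal whose handler logs a stack trace of whatever the process is currently executing) can be sketched in a few lines of Python. This is only an analogy, not Redis code: Redis implements the watchdog in C, using `SIGALRM` and a native backtrace. The sketch assumes a POSIX system.

```python
import signal
import time
import traceback

reports = []

def watchdog(signum, frame):
    # Analogous to Redis's handler: record where the process is
    # currently blocked, using the frame the signal interrupted.
    reports.append("".join(traceback.format_stack(frame)))

signal.signal(signal.SIGALRM, watchdog)
signal.setitimer(signal.ITIMER_REAL, 0.1)  # a "watchdog-period" of 100 ms

time.sleep(0.3)  # a blocking operation longer than the period

signal.setitimer(signal.ITIMER_REAL, 0)  # turn the watchdog off again
print(len(reports))
# 1
```

The single report collected here corresponds to the stack trace Redis dumps in its log file when the timer expires while the server is busy.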
+ +If you happen to collect multiple watchdog stack traces you are encouraged to send everything to the Redis Google Group: the more traces we obtain, the simpler it will be to understand what the problem with your instance is. From 4fa1aca69984c2dc9ef131b585b6ce2c8a5a91ff Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 30 Mar 2012 18:58:55 +0200 Subject: [PATCH 0088/2880] Problems page added. --- topics/problems.md | 36 ++++++++++++++++++++++++++++++++++++ 1 file changed, 36 insertions(+) create mode 100644 topics/problems.md diff --git a/topics/problems.md b/topics/problems.md new file mode 100644 index 0000000000..2f6e7051d4 --- /dev/null +++ b/topics/problems.md @@ -0,0 +1,36 @@ +Problems with Redis? This is a good starting point. +=== + +This page tries to help you about what to do if you have issues with Redis. Part of the Redis project is helping people that are experiencing problems because we don't like to let people alone with their issues. + +* If you have latency problems with Redis, that is some way appears to be idle for some time, read our [Redis latency trubleshooting guide](/topics/slatency). +* Redis stable releases are usually very reliable, however in the rare event you are experiencing crashes the developers can help a lot more if you provide debugging informations. Please read our [Debugging Redis guide](/topics/debugging). +* It happened multiple times that users experiencing problems with Redis actually had a server with broken RAM. Please test your RAM using **redis-server --test-memory** in case Redis is not stable in your system. Redis built-in memory test is fast and reasonably reliable, but if you can you should reboot your server and use [memcheck86](http://memcheck86.com). + +For every other problem please drop a message to the [Redis Google Group](http://groups.google.com/group/redis-db). We will be glad to help. + +List of known critical bugs in previous Redis releases. 
+=== + +Note: this list may not be complete as we started it on March 30, 2012, and did not include much historical data. + +* Redis version up to 2.4.9: **memory leak in replication**. A memory leak was triggered by replicating a master containing a database ID greater than ID 9. +* Redis version up to 2.4.9: **chained replication bug**. In environments where a slave B is attached to another instance `A`, and the instance `A` is switched between master and slave using the `SLAVEOF` command, it is possible that `B` will not be correctly disconnected to force a resync when `A` changes status (and data set content). +* Redis version up to 2.4.7: **redis-check-aof does not work properly in 32 bit instances with AOF files bigger than 2GB**. +* Redis version up to 2.4.7: **Mixing replication and maxmemory produced bad results**. Specifically, a master with maxmemory set and attached slaves could result in the master blocking and the dataset on the master being completely erased. The reason was that key expiring produced more memory usage because of the replication link DEL synthesizing, triggering the expiring of more keys. +* Redis versions up to 2.4.5: **Connection of multiple slaves at the same time could result in big master memory usage, and slave desync**. (See [issue 141](http://github.com/antirez/redis/issues/141) for more details). + +List of known bugs still present in latest 2.4 release. +=== + +* Redis version up to the current 2.4.x release: **Variadic list push commands and blocking list operations will not play well**. If you use `LPUSH` or `RPUSH` commands against a key that has other clients waiting for elements with blocking operations such as `BLPOP`, the results of the computation, the replication on slaves, and the AOF file commands produced may not be correct. This bug is fixed in Redis 2.6, but unfortunately too big a refactoring was needed to fix it, large enough to make a back port more problematic than the bug itself.
+ +List of known bugs still present in latest 2.6 release. +=== + +* There are no known important bugs in Redis 2.6.x + +List of known Linux related bugs affecting Redis. +=== + +* Ubuntu 10.04 and 10.10 have serious bugs (especially 10.10) that cause slow downs if not just instance hangs. Please move away from the default kernels shipped with this distributions. From 917817ca1dc42214fec0149381e62b8fc900cc65 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 30 Mar 2012 19:00:00 +0200 Subject: [PATCH 0089/2880] more bold is better. --- topics/problems.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/problems.md b/topics/problems.md index 2f6e7051d4..285a6fd7e4 100644 --- a/topics/problems.md +++ b/topics/problems.md @@ -3,9 +3,9 @@ Problems with Redis? This is a good starting point. This page tries to help you about what to do if you have issues with Redis. Part of the Redis project is helping people that are experiencing problems because we don't like to let people alone with their issues. -* If you have latency problems with Redis, that is some way appears to be idle for some time, read our [Redis latency trubleshooting guide](/topics/slatency). -* Redis stable releases are usually very reliable, however in the rare event you are experiencing crashes the developers can help a lot more if you provide debugging informations. Please read our [Debugging Redis guide](/topics/debugging). -* It happened multiple times that users experiencing problems with Redis actually had a server with broken RAM. Please test your RAM using **redis-server --test-memory** in case Redis is not stable in your system. Redis built-in memory test is fast and reasonably reliable, but if you can you should reboot your server and use [memcheck86](http://memcheck86.com). +* If you have **latency problems** with Redis, that is some way appears to be idle for some time, read our [Redis latency trubleshooting guide](/topics/slatency). 
+* Redis stable releases are usually very reliable, however in the rare event you are **experiencing crashes** the developers can help a lot more if you provide debugging informations. Please read our [Debugging Redis guide](/topics/debugging). +* It happened multiple times that users experiencing problems with Redis actually had a server with **broken RAM**. Please test your RAM using **redis-server --test-memory** in case Redis is not stable in your system. Redis built-in memory test is fast and reasonably reliable, but if you can you should reboot your server and use [memcheck86](http://memcheck86.com). For every other problem please drop a message to the [Redis Google Group](http://groups.google.com/group/redis-db). We will be glad to help. From 3f88ab5ad9e8de7c78157e06b1aedfaedeec7275 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 30 Mar 2012 19:01:22 +0200 Subject: [PATCH 0090/2880] typo fixed --- topics/problems.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/problems.md b/topics/problems.md index 285a6fd7e4..d6de4ca19f 100644 --- a/topics/problems.md +++ b/topics/problems.md @@ -3,7 +3,7 @@ Problems with Redis? This is a good starting point. This page tries to help you about what to do if you have issues with Redis. Part of the Redis project is helping people that are experiencing problems because we don't like to let people alone with their issues. -* If you have **latency problems** with Redis, that is some way appears to be idle for some time, read our [Redis latency trubleshooting guide](/topics/slatency). +* If you have **latency problems** with Redis, that is some way appears to be idle for some time, read our [Redis latency trubleshooting guide](/topics/latency). * Redis stable releases are usually very reliable, however in the rare event you are **experiencing crashes** the developers can help a lot more if you provide debugging informations. Please read our [Debugging Redis guide](/topics/debugging). 
* It happened multiple times that users experiencing problems with Redis actually had a server with **broken RAM**. Please test your RAM using **redis-server --test-memory** in case Redis is not stable in your system. Redis built-in memory test is fast and reasonably reliable, but if you can you should reboot your server and use [memcheck86](http://memcheck86.com). From f217aab818f6714ba6d78724dcf52fa10616fb4f Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 30 Mar 2012 19:02:03 +0200 Subject: [PATCH 0091/2880] more typos --- topics/problems.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/problems.md b/topics/problems.md index d6de4ca19f..bf074e9a4d 100644 --- a/topics/problems.md +++ b/topics/problems.md @@ -5,7 +5,7 @@ This page tries to help you about what to do if you have issues with Redis. Part * If you have **latency problems** with Redis, that is some way appears to be idle for some time, read our [Redis latency trubleshooting guide](/topics/latency). * Redis stable releases are usually very reliable, however in the rare event you are **experiencing crashes** the developers can help a lot more if you provide debugging informations. Please read our [Debugging Redis guide](/topics/debugging). -* It happened multiple times that users experiencing problems with Redis actually had a server with **broken RAM**. Please test your RAM using **redis-server --test-memory** in case Redis is not stable in your system. Redis built-in memory test is fast and reasonably reliable, but if you can you should reboot your server and use [memcheck86](http://memcheck86.com). +* It happened multiple times that users experiencing problems with Redis actually had a server with **broken RAM**. Please test your RAM using **redis-server --test-memory** in case Redis is not stable in your system. Redis built-in memory test is fast and reasonably reliable, but if you can you should reboot your server and use [memtest86](http://memtest86.com). 
For every other problem please drop a message to the [Redis Google Group](http://groups.google.com/group/redis-db). We will be glad to help. From ad907cb59cbf78302525a362fde011200aa8fedc Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 30 Mar 2012 19:06:24 +0200 Subject: [PATCH 0092/2880] typo. --- topics/problems.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/problems.md b/topics/problems.md index bf074e9a4d..1bbc2f374a 100644 --- a/topics/problems.md +++ b/topics/problems.md @@ -3,7 +3,7 @@ Problems with Redis? This is a good starting point. This page tries to help you about what to do if you have issues with Redis. Part of the Redis project is helping people that are experiencing problems because we don't like to let people alone with their issues. -* If you have **latency problems** with Redis, that is some way appears to be idle for some time, read our [Redis latency trubleshooting guide](/topics/latency). +* If you have **latency problems** with Redis, that in some way appears to be idle for some time, read our [Redis latency trubleshooting guide](/topics/latency). * Redis stable releases are usually very reliable, however in the rare event you are **experiencing crashes** the developers can help a lot more if you provide debugging informations. Please read our [Debugging Redis guide](/topics/debugging). * It happened multiple times that users experiencing problems with Redis actually had a server with **broken RAM**. Please test your RAM using **redis-server --test-memory** in case Redis is not stable in your system. Redis built-in memory test is fast and reasonably reliable, but if you can you should reboot your server and use [memtest86](http://memtest86.com). From 063abcbe2475385323fe205cd14d6b21f8fade78 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 30 Mar 2012 19:15:56 +0200 Subject: [PATCH 0093/2880] Added Linux kernel bugs links to get more info. 
--- topics/problems.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/problems.md b/topics/problems.md index 1bbc2f374a..2364ac2b3e 100644 --- a/topics/problems.md +++ b/topics/problems.md @@ -33,4 +33,4 @@ List of known bugs still present in latest 2.6 release. List of known Linux related bugs affecting Redis. === -* Ubuntu 10.04 and 10.10 have serious bugs (especially 10.10) that cause slow downs if not just instance hangs. Please move away from the default kernels shipped with this distributions. +* Ubuntu 10.04 and 10.10 have serious bugs (especially 10.10) that cause slow downs if not just instance hangs. Please move away from the default kernels shipped with this distributions. [Link to 10.04 bug](https://silverline.librato.com/blog/main/EC2_Users_Should_be_Cautious_When_Booting_Ubuntu_10_04_AMIs). [Link to 10.10 bug](https://bugs.launchpad.net/ubuntu/+source/linux/+bug/666211). From 63928ddfc3974f4f3389f59cdae003b8fff6d030 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 30 Mar 2012 19:20:29 +0200 Subject: [PATCH 0094/2880] more info about Linux kernel bugs. --- topics/problems.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/problems.md b/topics/problems.md index 2364ac2b3e..a59a3ed409 100644 --- a/topics/problems.md +++ b/topics/problems.md @@ -33,4 +33,4 @@ List of known bugs still present in latest 2.6 release. List of known Linux related bugs affecting Redis. === -* Ubuntu 10.04 and 10.10 have serious bugs (especially 10.10) that cause slow downs if not just instance hangs. Please move away from the default kernels shipped with this distributions. [Link to 10.04 bug](https://silverline.librato.com/blog/main/EC2_Users_Should_be_Cautious_When_Booting_Ubuntu_10_04_AMIs). [Link to 10.10 bug](https://bugs.launchpad.net/ubuntu/+source/linux/+bug/666211). +* Ubuntu 10.04 and 10.10 have serious bugs (especially 10.10) that cause slow downs if not just instance hangs. 
Please move away from the default kernels shipped with this distributions. [Link to 10.04 bug](https://silverline.librato.com/blog/main/EC2_Users_Should_be_Cautious_When_Booting_Ubuntu_10_04_AMIs). [Link to 10.10 bug](https://bugs.launchpad.net/ubuntu/+source/linux/+bug/666211). Those bugs were reported in the context of EC2 instances, but other users confirmed that also native servers are affected. From e45888e9d3b3e964f3f2075df3c096412fa3483c Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 30 Mar 2012 19:22:30 +0200 Subject: [PATCH 0095/2880] Grammar. --- topics/problems.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/problems.md b/topics/problems.md index a59a3ed409..b5f913975d 100644 --- a/topics/problems.md +++ b/topics/problems.md @@ -33,4 +33,4 @@ List of known bugs still present in latest 2.6 release. List of known Linux related bugs affecting Redis. === -* Ubuntu 10.04 and 10.10 have serious bugs (especially 10.10) that cause slow downs if not just instance hangs. Please move away from the default kernels shipped with this distributions. [Link to 10.04 bug](https://silverline.librato.com/blog/main/EC2_Users_Should_be_Cautious_When_Booting_Ubuntu_10_04_AMIs). [Link to 10.10 bug](https://bugs.launchpad.net/ubuntu/+source/linux/+bug/666211). Those bugs were reported in the context of EC2 instances, but other users confirmed that also native servers are affected. +* Ubuntu 10.04 and 10.10 have serious bugs (especially 10.10) that cause slow downs if not just instance hangs. Please move away from the default kernels shipped with this distributions. [Link to 10.04 bug](https://silverline.librato.com/blog/main/EC2_Users_Should_be_Cautious_When_Booting_Ubuntu_10_04_AMIs). [Link to 10.10 bug](https://bugs.launchpad.net/ubuntu/+source/linux/+bug/666211). Both bugs were reported many times in the context of EC2 instances, but other users confirmed that also native servers are affected (at least by one of the two). 
From b284f04b1ad61797817458681e49fa0a070ed08a Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 2 Apr 2012 16:18:54 +0200 Subject: [PATCH 0096/2880] DUMP, MIGRATE, RESTORE documented. --- commands.json | 60 +++++++++++++++++++++++++++++++++++++++++++++ commands/dump.md | 21 ++++++++++++++++ commands/migrate.md | 20 +++++++++++++++ commands/restore.md | 17 +++++++++++++ 4 files changed, 118 insertions(+) create mode 100644 commands/dump.md create mode 100644 commands/migrate.md create mode 100644 commands/restore.md diff --git a/commands.json b/commands.json index ed66049937..c95eddda28 100644 --- a/commands.json +++ b/commands.json @@ -189,6 +189,18 @@ "since": "2.0.0", "group": "transactions" }, + "DUMP": { + "summary": "Return a serialized verison of the value stored at the specified key.", + "complexity": "O(1) to access the key and additional O(N*M) to serialized it, where N is the number of Redis objects composing the value and M their average size. For small string values the time complexity is thus O(1)+O(1*M) where M is small, so simply O(1).", + "arguments": [ + { + "name": "key", + "type": "key", + } + ], + "since": "2.6.0", + "group": "generic" + }, "ECHO": { "summary": "Echo the given string", "arguments": [ @@ -817,6 +829,34 @@ "since": "1.0.0", "group": "string" }, + "MIGRATE": { + "summary": "Atomically transfer a key from a Redis instance to another one.", + "complexity": "This command actually executes a DUMP+DEL in the source instance, and a RESTORE in the target instance. See the pages of these commands for time complexity. 
Also an O(N) data transfer between the two instances is performed.", + "arguments": [ + { + "name": "host", + "type": "string", + }, + { + "name": "port", + "type": "string", + }, + { + "name": "key", + "type": "key" + }, + { + "name": "destination db", + "type": "integer" + }, + { + "name": "timeout", + "type": "integer" + } + ], + "since": "2.6.0", + "group": "generic" + }, "MONITOR": { "summary": "Listen for all requests received by the server in real time", "since": "1.0.0", @@ -1054,6 +1094,26 @@ "since": "1.0.0", "group": "generic" }, + "RESTORE": { + "summary": "Create a key using the provided serialized value, previously obtained using DUMP.", + "complexity": "O(1) to create the new key and additional O(N*M) to recostruct the serialized value, where N is the number of Redis objects composing the value and M their average size. For small string values the time complexity is thus O(1)+O(1*M) where M is small, so simply O(1). However for sorted set values the complexity is O(N*M*log(N)) because inserting values into sorted sets is O(log(N)).", + "arguments": [ + { + "name": "key", + "type": "key", + }, + { + "name": "ttl", + "type": "integer", + }, + { + "name": "serialized value", + "type": "string" + } + ], + "since": "2.6.0", + "group": "generic" + }, "RPOP": { "summary": "Remove and get the last element in a list", "complexity": "O(1)", diff --git a/commands/dump.md b/commands/dump.md new file mode 100644 index 0000000000..7ce3eacf6d --- /dev/null +++ b/commands/dump.md @@ -0,0 +1,21 @@ +Serialize the value stored at key in a Redis-specific format and return it to the user. The returned value can be synthesized back into a Redis key using the `RESTORE` command. + +The serialization format is opaque and non-standard, however it has a few semantical characteristics: + +* It contains a 64bit checksum that is used to make sure errors will be detected. The `RESTORE` command makes sure to check the checksum before synthesizing a key using the serialized value. 
+* Values are encoded in the same format used by RDB. +* An RDB version is encoded inside the serialized value, so that different Redis versions with incompatible RDB formats will refuse to process the serialized value. + +The serialized value does NOT contain expire information. In order to capture the time to live of the current value the `PTTL` command should be used. + +If `key` does not exist a nil bulk reply is returned. + +@return + +@bulk-reply: the serialized value. + +@examples + + @cli + SET mykey 10 + DUMP mykey diff --git a/commands/migrate.md b/commands/migrate.md new file mode 100644 index 0000000000..ea18bab315 --- /dev/null +++ b/commands/migrate.md @@ -0,0 +1,20 @@ +Atomically transfer a key from a source Redis instance to a destination Redis instance. On success the key is deleted from the original instance and is guaranteed to exist in the target instance. + +The command is atomic and blocks the two instances for the time required to transfer the key, at any given time the key will appear to exist in a given instance or in the other instance, unless a timeout error occurs. + +The command internally uses `DUMP` to generate the serialized version of the key value, and `RESTORE` in order to synthesize the key in the target instance. +The source instance acts as a client for the target instance. If the target instance returns OK to the `RESTORE` command, the source instance deletes the key using `DEL`. + +The timeout specifies the maximum idle time in any moment of the communication with the destination instance in milliseconds. If this idle time is reached the operation is aborted, an error returned, and one of the following cases are possible: + +* The key may be on both the instances. +* The key may be only in the source instance. 
+ +It is not possible for the key to get lost in the event of a timeout, but the client calling `MIGRATE`, in the event of a timeout error, should check if the key is *also* present in the target instance and act accordingly. + +On success OK is returned, otherwise an error is returned. +If the error is a timeout the special error -TIMEOUT is returned so that clients can distinguish between this and other errors. + +@return + +@status-reply: The command returns OK on success. diff --git a/commands/restore.md b/commands/restore.md new file mode 100644 index 0000000000..9c61b6e1c4 --- /dev/null +++ b/commands/restore.md @@ -0,0 +1,17 @@ +Create a key associated with a value that is obtained by deserializing the provided serialized value (obtained via `DUMP`). + +If `ttl` is 0 the key is created without any expire, otherwise the specified expire time (in milliseconds) is set. + +`RESTORE` checks the RDB version and data checksum. If they don't match an error is returned. + +@return + +@status-reply: The command returns OK on success.
+ +@examples + + @cli + DEL mykey + RESTORE mykey "\n\x17\x17\x00\x00\x00\x12\x00\x00\x00\x03\x00\x00\xc0\x01\x00\x04\xc0\x02\x00\x04\xc0\x03\x00\xff\x04\x00u#<\xc0;.\xe9\xdd" + TYPE mykey + LRANGE mykey 0 -1 From 1c8aeec6976a0983e673640a97a6a6f5276bffbd Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 2 Apr 2012 16:20:31 +0200 Subject: [PATCH 0097/2880] json fixed --- commands.json | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/commands.json b/commands.json index c95eddda28..6f6419c72e 100644 --- a/commands.json +++ b/commands.json @@ -195,7 +195,7 @@ "arguments": [ { "name": "key", - "type": "key", + "type": "key" } ], "since": "2.6.0", @@ -835,11 +835,11 @@ "arguments": [ { "name": "host", - "type": "string", + "type": "string" }, { "name": "port", - "type": "string", + "type": "string" }, { "name": "key", @@ -1100,11 +1100,11 @@ "arguments": [ { "name": "key", - "type": "key", + "type": "key" }, { "name": "ttl", - "type": "integer", + "type": "integer" }, { "name": "serialized value", From d03a38a93933aceb662394ea4154216613bf21b2 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 2 Apr 2012 16:35:50 +0200 Subject: [PATCH 0098/2880] Better MIGRATE doc. --- commands/migrate.md | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/commands/migrate.md b/commands/migrate.md index ea18bab315..b19e1a3a90 100644 --- a/commands/migrate.md +++ b/commands/migrate.md @@ -5,15 +5,18 @@ The command is atomic and blocks the two instances for the time required to tran The command internally uses `DUMP` to generate the serialized version of the key value, and `RESTORE` in order to synthesize the key in the target instance. The source instance acts as a client for the target instance. If the target instance returns OK to the `RESTORE` command, the source instance deletes the key using `DEL`. -The timeout specifies the maximum idle time in any moment of the communication with the destination instance in milliseconds. 
If this idle time is reached the operation is aborted, an error returned, and one of the following cases are possible: +The timeout specifies the maximum idle time in any moment of the communication with the destination instance in milliseconds. This means that the operation does not need to be completed within the specified amount of milliseconds, but that the transfer should make progress without blocking for more than the specified amount of milliseconds. + +`MIGRATE` needs to perform I/O operations and to honour the specified timeout. When there is an I/O error during the transfer or if the timeout is reached the operation is aborted and the special error -IOERR returned. When this happens the following two cases are possible: * The key may be on both the instances. * The key may be only in the source instance. It is not possible for the key to get lost in the event of a timeout, but the client calling `MIGRATE`, in the event of a timeout error, should check if the key is *also* present in the target instance and act accordingly. -On success OK is returned, otherwise an error is returned. -If the error is a timeout the special error -TIMEOUT is returned so that clients can distinguish between this and other errors. +When any other error is returned (starting with "ERR") `MIGRATE` guarantees that the key is still only present in the originating instance (unless a key with the same name was also *already* present on the target instance). + +On success OK is returned.
@return From ccaa5052d779b19fc7ce2728e21daaf15bd49039 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 2 Apr 2012 17:21:39 +0200 Subject: [PATCH 0099/2880] RESTORE example is now hard-coded --- commands/restore.md | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) diff --git a/commands/restore.md b/commands/restore.md index 9c61b6e1c4..3688a85f13 100644 --- a/commands/restore.md +++ b/commands/restore.md @@ -10,8 +10,13 @@ If `ttl` is 0 the key is created without any expire, otherwise the specified exp @examples - @cli - DEL mykey - RESTORE mykey "\n\x17\x17\x00\x00\x00\x12\x00\x00\x00\x03\x00\x00\xc0\x01\x00\x04\xc0\x02\x00\x04\xc0\x03\x00\xff\x04\x00u#<\xc0;.\xe9\xdd" - TYPE mykey - LRANGE mykey 0 -1 + redis> DEL mykey + 0 + redis> RESTORE mykey 0 "\n\x17\x17\x00\x00\x00\x12\x00\x00\x00\x03\x00\x00\xc0\x01\x00\x04\xc0\x02\x00\x04\xc0\x03\x00\xff\x04\x00u#<\xc0;.\xe9\xdd" + OK + redis> TYPE mykey + list + redis> LRANGE mykey 0 -1 + 1) "1" + 2) "2" + 3) "3" From c2b19b33778f03034a2d2bf3cb1c0cbec2cd45bd Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 2 Apr 2012 17:22:37 +0200 Subject: [PATCH 0100/2880] Break too long lines in RESTORE example. 
--- commands/restore.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/commands/restore.md b/commands/restore.md index 3688a85f13..adbdcdd07d 100644 --- a/commands/restore.md +++ b/commands/restore.md @@ -12,7 +12,9 @@ If `ttl` is 0 the key is created without any expire, otherwise the specified exp redis> DEL mykey 0 - redis> RESTORE mykey 0 "\n\x17\x17\x00\x00\x00\x12\x00\x00\x00\x03\x00\x00\xc0\x01\x00\x04\xc0\x02\x00\x04\xc0\x03\x00\xff\x04\x00u#<\xc0;.\xe9\xdd" + redis> RESTORE mykey 0 "\n\x17\x17\x00\x00\x00\x12\x00\x00\x00\x03\x00\ + x00\xc0\x01\x00\x04\xc0\x02\x00\x04\xc0\x03\x00\ + xff\x04\x00u#<\xc0;.\xe9\xdd" OK redis> TYPE mykey list From c233d0cf1234df2c98eba5b5b31eafebfd41d3b7 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 3 Apr 2012 15:36:27 +0200 Subject: [PATCH 0101/2880] use dash instead of space when argument name was composed of multiple words. --- commands.json | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/commands.json b/commands.json index 6f6419c72e..9285e8c092 100644 --- a/commands.json +++ b/commands.json @@ -846,7 +846,7 @@ "type": "key" }, { - "name": "destination db", + "name": "destination-db", "type": "integer" }, { @@ -964,7 +964,7 @@ "type": "key" }, { - "name": "milliseconds timestamp", + "name": "milliseconds-timestamp", "type": "posix time" } ], @@ -1107,7 +1107,7 @@ "type": "integer" }, { - "name": "serialized value", + "name": "serialized-value", "type": "string" } ], From 441f931f9c347c4a5de1008d76a4a7f889031c44 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 5 Apr 2012 16:12:43 +0200 Subject: [PATCH 0102/2880] SAVE man page was empty... 
--- commands/save.md | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/commands/save.md b/commands/save.md index e3159429b9..99da2b9f6d 100644 --- a/commands/save.md +++ b/commands/save.md @@ -1,3 +1,9 @@ -@examples +The `SAVE` commands performs a **synchronous** save of all the dataset producing a *point in time* snapshot of all the data inside the Redis instance, in the form of an RDB file. + +You almost never what to call `SAVE` in production environments where it will block all the other clients. Instead usually `BGSAVE` is used. However in case of issues preventing Redis to craete the background saving child (for instance errors in the fork(2) system call), the `SAVE` command can be a good last resort to perform the dump of the latest dataset. + +For more information check the documentation [describing how Redis persistence works](/topics/persistence) in details. @return + +@status-reply: The commands returns OK on success. From b46e679469636d9be9fba7bb31c7e60df53ddc24 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 5 Apr 2012 16:14:27 +0200 Subject: [PATCH 0103/2880] typo fixed --- commands/save.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/save.md b/commands/save.md index 99da2b9f6d..00b54aae8f 100644 --- a/commands/save.md +++ b/commands/save.md @@ -1,4 +1,4 @@ -The `SAVE` commands performs a **synchronous** save of all the dataset producing a *point in time* snapshot of all the data inside the Redis instance, in the form of an RDB file. +The `SAVE` commands performs a **synchronous** save of the dataset producing a *point in time* snapshot of all the data inside the Redis instance, in the form of an RDB file. You almost never what to call `SAVE` in production environments where it will block all the other clients. Instead usually `BGSAVE` is used. 
However in case of issues preventing Redis to craete the background saving child (for instance errors in the fork(2) system call), the `SAVE` command can be a good last resort to perform the dump of the latest dataset. From 91d0412be4108ee0a6284daf91fe979492c9e0df Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 6 Apr 2012 00:47:43 +0200 Subject: [PATCH 0104/2880] Fixed typo --- commands/save.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/save.md b/commands/save.md index 00b54aae8f..386c976a39 100644 --- a/commands/save.md +++ b/commands/save.md @@ -1,6 +1,6 @@ The `SAVE` commands performs a **synchronous** save of the dataset producing a *point in time* snapshot of all the data inside the Redis instance, in the form of an RDB file. -You almost never what to call `SAVE` in production environments where it will block all the other clients. Instead usually `BGSAVE` is used. However in case of issues preventing Redis to craete the background saving child (for instance errors in the fork(2) system call), the `SAVE` command can be a good last resort to perform the dump of the latest dataset. +You almost never what to call `SAVE` in production environments where it will block all the other clients. Instead usually `BGSAVE` is used. However in case of issues preventing Redis to create the background saving child (for instance errors in the fork(2) system call), the `SAVE` command can be a good last resort to perform the dump of the latest dataset. For more information check the documentation [describing how Redis persistence works](/topics/persistence) in details. From 08a19f3cd35d7938e1610a9b61a56132085dc710 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 6 Apr 2012 21:20:34 +0200 Subject: [PATCH 0105/2880] Better BGREWRITEAOF page. 
--- commands/bgrewriteaof.md | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/commands/bgrewriteaof.md b/commands/bgrewriteaof.md index 698eb40fac..65ee3d4f1a 100644 --- a/commands/bgrewriteaof.md +++ b/commands/bgrewriteaof.md @@ -1,8 +1,15 @@ -Rewrites the [append-only file](/topics/persistence#append-only-file) to reflect the current dataset in memory. +Instruct Redis to start an [Append Only File](/topics/persistence#append-only-file) rewrite process. The rewrite will create a small optimized version of the current Append Only File. If `BGREWRITEAOF` fails, no data gets lost as the old AOF will be untouched. -Please refer to the [persistence documentation](/topics/persistence) for detailed information about AOF rewriting. +The rewrite will be only triggered by Redis if there is not already a background process doing persistence. Specifically: + +* If a Redis child is creating a snapshot on disk, the AOF rewrite is *scheduled* but not started until the saving child producing the RDB file terminates. In this case the `BGREWRITEAOF` will still return an OK code, but with an appropriate message. You can check if an AOF rewrite is scheduled looking at the `INFO` command starting from Redis 2.6. +* If an AOF rewrite is already in progress the command returns an error and no AOF rewrite will be scheduled for a later time. + +Since Redis 2.4 the AOF rewrite is automatically triggered by Redis, however the `BGREWRITEAOF` command can be used to trigger a rewrite at any time. + +Please check the documentation about [Redis Persistence](/topics/persistence#append-only-file) for more information. @return From 902abb13a2351ea835ec4ae7308b519ad5f26d0b Mon Sep 17 00:00:00 2001 From: Tyler Bird Date: Tue, 10 Apr 2012 12:37:30 -0600 Subject: [PATCH 0106/2880] The logo file for github was broken. 
--- topics/whos-using-redis.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/whos-using-redis.md b/topics/whos-using-redis.md index c0684da6d3..19f02b3cf6 100644 --- a/topics/whos-using-redis.md +++ b/topics/whos-using-redis.md @@ -9,7 +9,7 @@ Logos are linked to the relevant story when available.
  • - GitHub + GitHub
  • From 70f695aab20d313deb5ab6c05f50da4d6718748e Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 11 Apr 2012 10:05:21 +0200 Subject: [PATCH 0107/2880] Removed useless line. --- topics/faq.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/topics/faq.md b/topics/faq.md index f0178642e8..177557e2ba 100644 --- a/topics/faq.md +++ b/topics/faq.md @@ -129,8 +129,6 @@ In other words your limit is likely the available memory in your system. ## What Redis means actually? -Redis means two things: - It means REmote DIctionary Server. ## Why did you started the Redis project? From 4032f31819b667e38a8adf0f76dbf33e6e3ad354 Mon Sep 17 00:00:00 2001 From: TheMue Date: Wed, 11 Apr 2012 21:13:24 +0300 Subject: [PATCH 0108/2880] Update clients.json --- clients.json | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/clients.json b/clients.json index eebe8975e4..9c2d197cc1 100644 --- a/clients.json +++ b/clients.json @@ -67,10 +67,10 @@ }, { - "name": "Tideland RDC", + "name": "Tideland CGL Redis", "language": "Go", - "repository": "http://code.google.com/p/tideland-rdc/", - "description": "", + "repository": "http://code.google.com/p/tcgl/", + "description": "A flexible Go Redis client able to handle all commands", "authors": ["themue"] }, From 18caa09945d079be04de8fec5b15d8b5b553c25e Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 12 Apr 2012 14:07:41 +0200 Subject: [PATCH 0109/2880] Better latency page. Use of huge tables moved into appendix as it is confusing for most non advanced users. Fork time table added. --- topics/latency.md | 158 ++++++++++++++++++++++++++-------------------- 1 file changed, 91 insertions(+), 67 deletions(-) diff --git a/topics/latency.md b/topics/latency.md index 830fc0c853..d5b80466f1 100644 --- a/topics/latency.md +++ b/topics/latency.md @@ -112,9 +112,9 @@ a sign that slow commands are used. 
Latency generated by fork ------------------------- -Depending on the chosen persistency mechanism, Redis has to fork background -processes. The fork operation (running in the main thread) can induce latency -by itself. +In order to generate the RDB file in background, or to rewrite the Append Only File +if AOF persistence is enabled, Redis has to fork background processes. +The fork operation (running in the main thread) can induce latency by itself. Forking is an expensive operation on most Unix-like systems, since it involves copying a good number of objects linked to the process. This is especially @@ -131,73 +131,22 @@ which will involve allocating and copying 48 MB of memory. It takes time and CPU, especially on virtual machines where allocation and initialization of a large memory chunk can be expensive. -Some CPUs can use different page size though. AMD and Intel CPUs can support -2 MB page size if needed. These pages are nicknamed *huge pages*. Some -operating systems can optimize page size in real time, transparently -aggregating small pages into huge pages on the fly. - -On Linux, explicit huge pages management has been introduced in 2.6.16, and -implicit transparent huge pages are available starting in 2.6.38. If you -run recent Linux distributions (for example RH 6 or derivatives), transparent -huge pages can be activated, and you can use a vanilla Redis version with them. - -This is the preferred way to experiment/use with huge pages on Linux. - -Now, if you run older distributions (RH 5, SLES 10-11, or derivatives), and -not afraid of a few hacks, Redis requires to be patched in order to support -huge pages. - -The first step would be to read [Mel Gorman's primer on huge pages](http://lwn.net/Articles/374424/) - -There are currently two ways to patch Redis to support huge pages. - -+ For Redis 2.4, the embedded jemalloc allocator must be patched. -[patch](https://gist.github.com/1171054) by Pieter Noordhuis. 
-Note this patch relies on the anonymous mmap huge page support, -only available starting 2.6.32, so this method cannot be used for older -distributions (RH 5, SLES 10, and derivatives). - -+ For Redis 2.2, or 2.4 with the libc allocator, Redis makefile -must be altered to link Redis with -[the libhugetlbfs library](http://libhugetlbfs.sourceforge.net/). -It is a straightforward [change](https://gist.github.com/1240452) - -Then, the system must be configured to support huge pages. - -The following command allocates and makes N huge pages available: +Fork time in different systems +------------------------------ - $ sudo sysctl -w vm.nr_hugepages= - -The following command mounts the huge page filesystem: - - $ sudo mount -t hugetlbfs none /mnt/hugetlbfs - -In all cases, once Redis is running with huge pages (transparent or -not), the following benefits are expected: - -+ The latency due to the fork operations is dramatically reduced. - This is mostly useful for very large instances, and especially - on a VM. -+ Redis is faster due to the fact the translation look-aside buffer - (TLB) of the CPU is more efficient to cache page table entries - (i.e. the hit ratio is better). Do not expect miracle, it is only - a few percent gain at most. -+ Redis memory cannot be swapped out anymore, which is interesting - to avoid outstanding latencies due to virtual memory. +Modern hardware is pretty fast to copy the page table, but Xen is not. +The problem with Xen is not virtualization-specific, but Xen-specific. For instance +The using VMware or Virutal Box does not result into slow fork time. +The following is a table that comprares fork time for difference Redis instance +size. Data is obtained performing a BGSAVE and looking at the `latest_fork_usec` filed in the `INFO` command output. -Unfortunately, and on top of the extra operational complexity, -there is also a significant drawback of running Redis with -huge pages. The COW mechanism granularity is the page. 
With -2 MB pages, the probability a page is modified during a background -save operation is 512 times higher than with 4 KB pages. The actual -memory required for a background save therefore increases a lot, -especially if the write traffic is truly random, with poor locality. -With huge pages, using twice the memory while saving is not anymore -a theoretical incident. It really happens. - -The result of a complete benchmark can be found -[here](https://gist.github.com/1272254). +* **Linux beefy VM on VMware** 6.0GB RSS forked in 77 milliseconds (12.8 milliseconds per GB). +* **Linux running on physical machine (Unknown HW)** 6.1GB RSS forked in 80 milliseconds (13.1 milliseconds per GB) +* **Linux running on physical machine (Xeon @ 2.27Ghz)** .9GB RSS forked into 62 millisecodns (9 milliseconds per GB). +* **Linux VM on EC2 (Xen)** 6.1GB RSS forked in 1460 milliseconds (239.3 milliseconds per GB). +* **Linux VM on Linode (Xen)** 0.9GBRSS forked into 382 millisecodns (424 milliseconds per GB). +As you can see a VM running on Xen has a performance hit that is between one order to two orders of magnitude. We believe this is a severe problem with Xen and we hope it will be addressed ASAP. Latency induced by swapping (operating system paging) ----------------------------------------------------- @@ -528,3 +477,78 @@ The following is an example of what you'll see printed in the log file once the Note: in the example the **DEBUG SLEEP** command was used in order to block the server. The stack trace is different if the server blocks in a different context. If you happen to collect multiple watchdog stack traces you are encouraged to send everything to the Redis Google Group: the more traces we obtain, the simpler it will be to understand what the problem with your instance is. 
+ +APPENDIX A: Experimenting with huge pages +----------------------------------------- + +Latency introduced by fork can be mitigated using huge pages at the cost of a bigger memory usage during persistence. The following appendix describes in detail this feature as implemented in the Linux kernel. + +Some CPUs can use different page size though. AMD and Intel CPUs can support +2 MB page size if needed. These pages are nicknamed *huge pages*. Some +operating systems can optimize page size in real time, transparently +aggregating small pages into huge pages on the fly. + +On Linux, explicit huge pages management has been introduced in 2.6.16, and +implicit transparent huge pages are available starting in 2.6.38. If you +run recent Linux distributions (for example RH 6 or derivatives), transparent +huge pages can be activated, and you can use a vanilla Redis version with them. + +This is the preferred way to experiment/use with huge pages on Linux. + +Now, if you run older distributions (RH 5, SLES 10-11, or derivatives), and +not afraid of a few hacks, Redis requires to be patched in order to support +huge pages. + +The first step would be to read [Mel Gorman's primer on huge pages](http://lwn.net/Articles/374424/) + +There are currently two ways to patch Redis to support huge pages. + ++ For Redis 2.4, the embedded jemalloc allocator must be patched. +[patch](https://gist.github.com/1171054) by Pieter Noordhuis. +Note this patch relies on the anonymous mmap huge page support, +only available starting 2.6.32, so this method cannot be used for older +distributions (RH 5, SLES 10, and derivatives). + ++ For Redis 2.2, or 2.4 with the libc allocator, Redis makefile +must be altered to link Redis with +[the libhugetlbfs library](http://libhugetlbfs.sourceforge.net/). +It is a straightforward [change](https://gist.github.com/1240452) + +Then, the system must be configured to support huge pages.
+ +The following command allocates and makes N huge pages available: + + $ sudo sysctl -w vm.nr_hugepages= + +The following command mounts the huge page filesystem: + + $ sudo mount -t hugetlbfs none /mnt/hugetlbfs + +In all cases, once Redis is running with huge pages (transparent or +not), the following benefits are expected: + ++ The latency due to the fork operations is dramatically reduced. + This is mostly useful for very large instances, and especially + on a VM. ++ Redis is faster due to the fact the translation look-aside buffer + (TLB) of the CPU is more efficient to cache page table entries + (i.e. the hit ratio is better). Do not expect miracle, it is only + a few percent gain at most. ++ Redis memory cannot be swapped out anymore, which is interesting + to avoid outstanding latencies due to virtual memory. + +Unfortunately, and on top of the extra operational complexity, +there is also a significant drawback of running Redis with +huge pages. The COW mechanism granularity is the page. With +2 MB pages, the probability a page is modified during a background +save operation is 512 times higher than with 4 KB pages. The actual +memory required for a background save therefore increases a lot, +especially if the write traffic is truly random, with poor locality. +With huge pages, using twice the memory while saving is not anymore +a theoretical incident. It really happens. + +The result of a complete benchmark can be found +[here](https://gist.github.com/1272254). 
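The fork-time table added by this patch is derived from two numbers Redis already exposes: the `latest_fork_usec` field of `INFO` and the instance RSS. For readers who want to reproduce the per-GB figures, here is the arithmetic as a small sketch (the helper function is invented for illustration, not a Redis API):

```python
def fork_ms_per_gb(latest_fork_usec, rss_bytes):
    """Normalize a fork time (INFO's `latest_fork_usec`, microseconds)
    by the instance RSS, yielding the milliseconds-per-GB figure used
    in the table above."""
    fork_ms = latest_fork_usec / 1000.0
    rss_gb = rss_bytes / float(1024 ** 3)
    return fork_ms / rss_gb

# First row of the table: 6.0 GB RSS forked in 77 ms -> ~12.8 ms per GB.
print(round(fork_ms_per_gb(77000, int(6.0 * 1024 ** 3)), 1))  # 12.8
```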
+ + + From cdc074a58605c16a6cbeebc33089835d4a1777f9 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 12 Apr 2012 14:11:05 +0200 Subject: [PATCH 0110/2880] typo fixe --- topics/latency.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/latency.md b/topics/latency.md index d5b80466f1..3e1bc896d5 100644 --- a/topics/latency.md +++ b/topics/latency.md @@ -136,7 +136,7 @@ Fork time in different systems Modern hardware is pretty fast to copy the page table, but Xen is not. The problem with Xen is not virtualization-specific, but Xen-specific. For instance -The using VMware or Virutal Box does not result into slow fork time. +using VMware or Virutal Box does not result into slow fork time. The following is a table that comprares fork time for difference Redis instance size. Data is obtained performing a BGSAVE and looking at the `latest_fork_usec` filed in the `INFO` command output. From 127d6bc0ae459a2511c64372ccf2422ef77c816f Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 12 Apr 2012 14:11:29 +0200 Subject: [PATCH 0111/2880] typo fixe --- topics/latency.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/latency.md b/topics/latency.md index 3e1bc896d5..f7d4f13fcb 100644 --- a/topics/latency.md +++ b/topics/latency.md @@ -137,7 +137,7 @@ Fork time in different systems Modern hardware is pretty fast to copy the page table, but Xen is not. The problem with Xen is not virtualization-specific, but Xen-specific. For instance using VMware or Virutal Box does not result into slow fork time. -The following is a table that comprares fork time for difference Redis instance +The following is a table that compares fork time for difference Redis instance size. Data is obtained performing a BGSAVE and looking at the `latest_fork_usec` filed in the `INFO` command output. * **Linux beefy VM on VMware** 6.0GB RSS forked in 77 milliseconds (12.8 milliseconds per GB). 
From e55af92817e65d2f6a270d0c4bbb418fd95e4209 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 12 Apr 2012 14:12:04 +0200 Subject: [PATCH 0112/2880] typo 3 --- topics/latency.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/latency.md b/topics/latency.md index f7d4f13fcb..9a77c9ec11 100644 --- a/topics/latency.md +++ b/topics/latency.md @@ -137,7 +137,7 @@ Fork time in different systems Modern hardware is pretty fast to copy the page table, but Xen is not. The problem with Xen is not virtualization-specific, but Xen-specific. For instance using VMware or Virutal Box does not result into slow fork time. -The following is a table that compares fork time for difference Redis instance +The following is a table that compares fork time for different Redis instance size. Data is obtained performing a BGSAVE and looking at the `latest_fork_usec` filed in the `INFO` command output. * **Linux beefy VM on VMware** 6.0GB RSS forked in 77 milliseconds (12.8 milliseconds per GB). From b6830618ce4d3391d43c49a88f9fa99b1b3ee029 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 12 Apr 2012 14:43:51 +0200 Subject: [PATCH 0113/2880] typo fixed. --- topics/latency.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/latency.md b/topics/latency.md index 9a77c9ec11..0409390c4f 100644 --- a/topics/latency.md +++ b/topics/latency.md @@ -142,7 +142,7 @@ size. Data is obtained performing a BGSAVE and looking at the `latest_fork_usec` * **Linux beefy VM on VMware** 6.0GB RSS forked in 77 milliseconds (12.8 milliseconds per GB). * **Linux running on physical machine (Unknown HW)** 6.1GB RSS forked in 80 milliseconds (13.1 milliseconds per GB) -* **Linux running on physical machine (Xeon @ 2.27Ghz)** .9GB RSS forked into 62 millisecodns (9 milliseconds per GB). +* **Linux running on physical machine (Xeon @ 2.27Ghz)** 6.9GB RSS forked into 62 millisecodns (9 milliseconds per GB). 
* **Linux VM on EC2 (Xen)** 6.1GB RSS forked in 1460 milliseconds (239.3 milliseconds per GB). * **Linux VM on Linode (Xen)** 0.9GBRSS forked into 382 millisecodns (424 milliseconds per GB). From 89204928a68b407f840f0f32782cdacd3c5c1384 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Thu, 12 Apr 2012 11:34:36 -0700 Subject: [PATCH 0114/2880] Trailing 4 --- topics/persistence.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/persistence.md b/topics/persistence.md index 3162c72eac..8d6df85ec5 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -55,7 +55,7 @@ There are many users using AOF alone, but we discourage it since to have an RDB snapshot from time to time is a great idea for doing database backups, for faster restarts, and in the event of bugs in the AOF engine. -Note: for all this reasons we'll likely end unifying AOF and RDB into a single persistence model in the future (long term plan).4 +Note: for all this reasons we'll likely end unifying AOF and RDB into a single persistence model in the future (long term plan). The following sections will illustrate a few more details about the two persistence models. @@ -158,7 +158,7 @@ this problem using the following procedure: Redis: $ redis-check-aof --fix - + * Optionally use `diff -u` to check what is the difference between two files. From bd1428de0241f439dc164eadfd6cb181207efac4 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 12 Apr 2012 22:27:36 +0200 Subject: [PATCH 0115/2880] Fork latency on KVM added. --- topics/latency.md | 1 + 1 file changed, 1 insertion(+) diff --git a/topics/latency.md b/topics/latency.md index 0409390c4f..82c17f5086 100644 --- a/topics/latency.md +++ b/topics/latency.md @@ -143,6 +143,7 @@ size. Data is obtained performing a BGSAVE and looking at the `latest_fork_usec` * **Linux beefy VM on VMware** 6.0GB RSS forked in 77 milliseconds (12.8 milliseconds per GB). 
* **Linux running on physical machine (Unknown HW)** 6.1GB RSS forked in 80 milliseconds (13.1 milliseconds per GB) * **Linux running on physical machine (Xeon @ 2.27Ghz)** 6.9GB RSS forked into 62 millisecodns (9 milliseconds per GB). +* **Linux VM on KVM** 360 MB RSS forked in 8.2 milliseconds (23.3 millisecond per GB). * **Linux VM on EC2 (Xen)** 6.1GB RSS forked in 1460 milliseconds (239.3 milliseconds per GB). * **Linux VM on Linode (Xen)** 0.9GBRSS forked into 382 millisecodns (424 milliseconds per GB). From a2934de3296c77bdcaae06cc054112bad01c5275 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 12 Apr 2012 22:36:16 +0200 Subject: [PATCH 0116/2880] Mention 6sync as I did with EC2 and Linode. --- topics/latency.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/latency.md b/topics/latency.md index 82c17f5086..4ffff654e9 100644 --- a/topics/latency.md +++ b/topics/latency.md @@ -143,7 +143,7 @@ size. Data is obtained performing a BGSAVE and looking at the `latest_fork_usec` * **Linux beefy VM on VMware** 6.0GB RSS forked in 77 milliseconds (12.8 milliseconds per GB). * **Linux running on physical machine (Unknown HW)** 6.1GB RSS forked in 80 milliseconds (13.1 milliseconds per GB) * **Linux running on physical machine (Xeon @ 2.27Ghz)** 6.9GB RSS forked into 62 millisecodns (9 milliseconds per GB). -* **Linux VM on KVM** 360 MB RSS forked in 8.2 milliseconds (23.3 millisecond per GB). +* **Linux VM on 6sync (KVM)** 360 MB RSS forked in 8.2 milliseconds (23.3 millisecond per GB). * **Linux VM on EC2 (Xen)** 6.1GB RSS forked in 1460 milliseconds (239.3 milliseconds per GB). * **Linux VM on Linode (Xen)** 0.9GBRSS forked into 382 millisecodns (424 milliseconds per GB). 
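With patches 0115 and 0116 the fork-time table is complete, and its headline result is the roughly 20–40x per-GB penalty on Xen versus bare metal or KVM. A back-of-the-envelope sketch of what that means for a given instance size — the per-GB constants are simply the figures from the table above, and linear extrapolation with instance size is an assumption, not something the patches claim:

```python
# Per-GB fork cost in milliseconds, copied from the table above.
MS_PER_GB = {
    "physical (Xeon)": 9.0,
    "VMware": 12.8,
    "KVM (6sync)": 23.3,
    "EC2 (Xen)": 239.3,
    "Linode (Xen)": 424.0,
}

def expected_fork_ms(platform, rss_gb):
    """Rough expected BGSAVE fork pause, linearly extrapolated
    from the measured per-GB figures."""
    return MS_PER_GB[platform] * rss_gb

# A hypothetical 10 GB instance: ~0.1 s pause on bare metal
# versus ~2.4 s on EC2/Xen.
for platform in ("physical (Xeon)", "EC2 (Xen)"):
    print(platform, round(expected_fork_ms(platform, 10.0)), "ms")
```

This is only an estimate for picking a deployment target; the actual pause should still be verified with `latest_fork_usec` on the machine in question.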
From 48570a3c35777b2ac99211664619465de513d1b8 Mon Sep 17 00:00:00 2001 From: Nathan Parry Date: Thu, 19 Apr 2012 20:07:41 -0300 Subject: [PATCH 0117/2880] Fix typo in info.md --- commands/info.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/info.md b/commands/info.md index 6ec2304b2c..3bb5ebfd5d 100644 --- a/commands/info.md +++ b/commands/info.md @@ -1,5 +1,5 @@ The `INFO` command returns information and statistics about the server -in format that is simple to parse by computers and easy to red by humans. +in a format that is simple to parse by computers and easy to read by humans. @return From 4fa5f917d25a3708648b3853b6899ca542258201 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 20 Apr 2012 10:02:10 +0200 Subject: [PATCH 0118/2880] Two crashes in stable releases added to problems.md --- topics/problems.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/topics/problems.md b/topics/problems.md index b5f913975d..a69424a71f 100644 --- a/topics/problems.md +++ b/topics/problems.md @@ -14,6 +14,8 @@ List of known critical bugs in previous Redis releases. Note: this list may not be complete as we staretd it March 30, 2012, and did not included much historical data. +* Redis version up to 2.4.10: SORT using GET or BY option with keys with an expire set may crash the server. [Issue #460](http://github.com/antirez/redis/issues/460). +* Redis version up to 2.4.10: a bug in the aeWait() implementation in ae.c may result in a server crash under extremely hard to replicate conditions. [Issue #267](http://github.com/antirez/redis/issues/267). * Redis version up to 2.4.9: **memory leak in replication**. A memory leak was triggered by replicating a master contaning a database ID greatear than ID 9. * Redis version up to 2.4.9: **chained replication bug**. 
In environments where a slave B is attached to another instance `A`, and the instance `A` is switched between master and slave using the `SLAVEOF` command, it is possilbe that `B` will not be correctly disconnected to force a resync when `A` changes status (and data set content). * Redis version up to 2.4.7: **redis-check-aof does not work properly in 32 bit instances with AOF files bigger than 2GB**. From 0655b3391baf98136b0226cb74c6eeb9c62e4ec0 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 20 Apr 2012 15:56:02 +0200 Subject: [PATCH 0119/2880] Pipelining page improved. --- topics/pipelining.md | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/topics/pipelining.md b/topics/pipelining.md index 1863e1fdcc..5be27ac5e6 100644 --- a/topics/pipelining.md +++ b/topics/pipelining.md @@ -100,10 +100,9 @@ Running the above simple script will provide this figures in my Mac OS X system, As you can see using pipelining we improved the transfer by a factor of five. -Pipelining VS other multi-commands +Pipelining VS Scripting --- -Often we get requests about adding new commands performing multiple operations in a single pass. -For instance there is no command to add multiple elements in a set. You need calling many times `SADD`. +Using [Redis scripting](/commands/eval) (available in Redis version 2.6 or greater) a number of use cases for pipelining can be addressed more efficiently using scripts that perform a lot of the work needed server side. A big advantage of scripting is that it is able to both read and write data with minimal latency, making operations like *read, compute, write* very fast (pipelining can't help in this scenario since the client needs the reply of the read command before it can call the write command). -With pipelining you can have performances near to an hypothetical MSADD command, but at the same time we'll avoid bloating the Redis command set with too many commands. 
An additional advantage is that the version written using just SADD will be ready for a distributed environment (for instance Redis Cluster, that is in the process of being developed) just dropping the pipelining code. +Sometimes the application may also want to send `EVAL` or `EVALSHA` commands in a pipeline. This is entirely possible and Redis explicitly supports it with the [SCRIPT LOAD](http://redis.io/commands/script-load) command (it guarantees that `EVALSHA` can be called without the risk of failing). From f2e919991ee72eebfce341fdd5359c4036b9a782 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 20 Apr 2012 16:00:11 +0200 Subject: [PATCH 0120/2880] Memory optimization page improved. --- topics/memory-optimization.md | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/topics/memory-optimization.md b/topics/memory-optimization.md index 65869b7780..db53b23295 100644 --- a/topics/memory-optimization.md +++ b/topics/memory-optimization.md @@ -3,15 +3,17 @@ This page is a work in progress. Currently it is just a list of things you shoul Special encoding of small aggregate data types ---------------------------------------------- -Since Redis 2.2 many data types are optimized to use less space up to a certain size. Hashes, Lists of any kind and Sets composed of just integers, when smaller than a given number of elements, and up to a maximum element size, are encoded in a very memory efficient way that uses *up to 10 times less memory* (with 5 time less memory used being the average saving). +Since Redis 2.2 many data types are optimized to use less space up to a certain size. Hashes, Lists, Sets composed of just integers, and Sorted Sets, when smaller than a given number of elements, and up to a maximum element size, are encoded in a very memory efficient way that uses *up to 10 times less memory* (with 5 time less memory used being the average saving). This is completely transparent from the point of view of the user and API. 
Since this is a CPU / memory trade off it is possible to tune the maximum number of elements and maximum element size for special encoded types using the following redis.conf directives. - hash-max-zipmap-entries 64 - hash-max-zipmap-value 512 + hash-max-zipmap-entries 64 (hahs-max-ziplist-entries for Redis >= 2.6) + hash-max-zipmap-value 512 (hash-max-ziplist-value for Redis >= 2.6) list-max-ziplist-entries 512 list-max-ziplist-value 64 + zset-max-ziplist-entries 128 + zset-max-ziplist-value 64 set-max-intset-entries 512 If a specially encoded value will overflow the configured max size, Redis will automatically convert it into normal encoding. This operation is very fast for small values, but if you change the setting in order to use specially encoded values for much larger aggregate types the suggestion is to run some benchmark and test to check the conversion time. From aeebfeeff8511c0a1a3887c15112f52a2ad32cc1 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 20 Apr 2012 16:00:52 +0200 Subject: [PATCH 0121/2880] Redis 2.2 is no longer new... --- topics/memory-optimization.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/memory-optimization.md b/topics/memory-optimization.md index db53b23295..d2caa9ac5e 100644 --- a/topics/memory-optimization.md +++ b/topics/memory-optimization.md @@ -23,8 +23,8 @@ Using 32 bit instances Redis compiled with 32 bit target uses a lot less memory per key, since pointers are small, but such an instance will be limited to 4 GB of maximum memory usage. To compile Redis as 32 bit binary use *make 32bit*. RDB and AOF files are compatible between 32 bit and 64 bit instances (and between little and big endian of course) so you can switch from 32 to 64 bit, or the contrary, without problems. 
-New 2.2 bit and byte level operations -------------------------------------- +Bit and byte level operations +----------------------------- Redis 2.2 introduced new bit and byte level operations: `GETRANGE`, `SETRANGE`, `GETBIT` and `SETBIT`. Using this commands you can treat the Redis string type as a random access array. For instance if you have an application where users are identified by an unique progressive integer number, you can use a bitmap in order to save information about sex of users, setting the bit for females and clearing it for males, or the other way around. With 100 millions of users this data will take just 12 megabyte of RAM in a Redis instance. You can do the same using `GETRANGE` and `SETRANGE` in order to store one byte of information for user. This is just an example but it is actually possible to model a number of problems in very little space with this new primitives. From a2fa910b0060e896d45923621fa99a90977eab39 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 20 Apr 2012 19:44:08 +0200 Subject: [PATCH 0122/2880] EXPIRE command page improved --- commands/expire.md | 78 +++++++++++++++++++--- topics/expire.md | 158 ++++++--------------------------------------- 2 files changed, 88 insertions(+), 148 deletions(-) diff --git a/commands/expire.md b/commands/expire.md index a2dddca8aa..9a3eb79a7f 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -29,14 +29,6 @@ existing expire set. In this case the time to live of a key is *updated* to the new value. There are many useful applications for this, an example is documented in the *Navigation session* pattern section below. -Expire accuracy ---- - -In Redis 2.4 the expire might not be pin-point accurate, and it could be -between zero to one seconds out. - -Since Redis 2.6 the expire error is from 0 to 1 milliseconds. - Differences in Redis prior 2.1.3 --- @@ -85,3 +77,73 @@ recorded. This pattern is easily modified to use counters using `INCR` instead of lists using `RPUSH`. 
+ +# Appendix: Redis expires + +## Keys with an expire + +Normally Redis keys are created without an associated time to live. The key +will simply live forever, unless it is removed by the user in an explicit +way, for instance using the `DEL` command. + +The `EXPIRE` family of commands is able to associate an expire to a given key, +at the cost of some additional memory used by the key. When a key has an expire +set, Redis will make sure to remove the key when the specified amount of time +elapsed. + +The key time to live can be updated or entierly removed using the `EXPIRE` and `PERSIST` command (or other strictly related commands). + +## Expire accuracy + +In Redis 2.4 the expire might not be pin-point accurate, and it could be +between zero to one seconds out. + +Since Redis 2.6 the expire error is from 0 to 1 milliseconds. + +## Expires and persistence + +Keys expiring information is stored as absolute unix timestamps (in milliseconds in case of Redis version 2.6 or greater). This means that the time is flowing even when the Redis instance is not active. + +For expires to work well, the computer time must be taken stable. If you move an RDB file from two computers with a big desynch in their clocks, funny things may happen (like all the keys loaded to be expired at loading time). + +Even runnign instances will always check the computer clock, so for instance if you set a key with a time to live of 1000 seconds, and then set your computer time 2000 seconds in the future, the key will be expired immediatly, instead of lasting for 1000 seconds. + +## How Redis expires keys + +Redis keys are expired in two ways: a passive way, and an active way. + +A key is actively expired simply when some client tries to access it, and +the key is found to be timed out. + +Of course this is not enough as there are expired keys that will never +be accessed again. This keys should be expired anyway, so periodically +Redis test a few keys at random among keys with an expire set. 
+All the keys that are already expired are deleted from the keyspace. + +Specifically this is what Redis does 10 times per second: + +1. Test 100 random keys from the set of keys with an associated expire. +2. Delete all the keys found expired. +3. If more than 25 keys were expired, start again from step 1. + +This is a trivial probabilistic algorithm, basically the assumption is +that our sample is representative of the whole key space, +and we continue to expire until the percentage of keys that are likely +to be expired is under 25% + +This means that at any given moment the maximum amount of keys already +expired that are using memory is at max equal to max amount of write +operations per second divided by 4. + +## How expires are handled in the replication link and AOF file + +In order to obtain a correct behavior without sacrificing consistency, when +a key expires, a `DEL` operation is synthesized in both the AOF file and gains +all the attached slaves. This way the expiration process is centralized in +the master instance, and there is no chance of consistency errors. + +However while the slaves connected to a master will not expire keys +independently (but will wait for the `DEL` coming from the master), they'll +still take the full state of the expires existing in the dataset, so when a +slave is elected to a master it will be able to expire the keys +independently, fully acting as a master. diff --git a/topics/expire.md b/topics/expire.md index 04956ef6fd..0a583f9048 100644 --- a/topics/expire.md +++ b/topics/expire.md @@ -1,89 +1,20 @@ -# Expiring keys +## How Redis expires keys -Volatile keys are stored on disk like the other keys, the timeout is persistent -too like all the other aspects of the dataset. Saving a dataset containing -expires and stopping the server does not stop the flow of time as Redis -stores on disk the time when the key will no longer be available as Unix -time, and not the remaining seconds. 
+Redis keys are expired in two ways: a passive way, and an active way. -## How the expire is removed from a key - -When the key is set to a new value using the `SET` command, or when a key -is destroyed via `DEL`, the timeout is removed from the key. - -## Restrictions with write operations against volatile keys - -IMPORTANT: Since Redis 2.1.3 or greater, there are no restrictions about -the operations you can perform against volatile keys, however older versions -of Redis, including the current stable version 2.0.0, have the following -limitations: - -Write operations like `LPUSH`, `LSET` and every other command that has the -effect of modifying the value stored at a volatile key have a special semantic: -basically a volatile key is destroyed when it is target of a write operation. -See for example the following usage pattern: - % ./redis-cli lpush mylist foobar /Users/antirez/hack/redis - OK - % ./redis-cli lpush mylist hello /Users/antirez/hack/redis - OK - % ./redis-cli expire mylist 10000 /Users/antirez/hack/redis - 1 - % ./redis-cli lpush mylist newelemen - OK - % ./redis-cli lrange mylist 0 -1 /Users/antirez/hack/redis - 1. newelemen -What happened here is that `LPUSH` against the key with a timeout set deleted -the key before to perform the operation. There is so a simple rule, write -operations against volatile keys will destroy the key before to perform the -operation. Why Redis uses this behavior? In order to retain an important -property: a server that receives a given number of commands in the same -sequence will end with the same dataset in memory. Without the delete-on-write -semantic the state of the server depends on the time the commands were issued. -This is not a desirable property in a distributed database that supports replication. 
- -## Restrictions for write operations with volatile keys as sources - -Even when the volatile key is not modified as part of a write operation, if -it is read in a composite write operation (such as `SINTERSTORE`) it will be -cleared at the start of the operation. This is done to avoid concurrency issues -in replication. Imagine a key that is about to expire and the composite operation -is run against it. On a slave node, this key might already be expired, which -leaves you with a desync in your dataset. - -## Setting the timeout again on already volatile keys - -Trying to call `EXPIRE` against a key that already has an associated timeout -will not change the timeout of the key, but will just return 0. If instead -the key does not have a timeout associated the timeout will be set and `EXPIRE` -will return 1. - -## Enhanced Lazy Expiration algorithm - -Redis does not constantly monitor keys that are going to be expired. -Keys are expired simply when some client tries to access a key, and +A key is actively expired simply when some client tries to access it, and the key is found to be timed out. Of course this is not enough as there are expired keys that will never -be accessed again. This keys should be expired anyway, so once every -second Redis test a few keys at random among keys with an expire set. +be accessed again. This keys should be expired anyway, so periodically +Redis test a few keys at random among keys with an expire set. All the keys that are already expired are deleted from the keyspace. -### Version 1.0 - -Each time a fixed number of keys were tested (100 by default). So if -you had a client setting keys with a very short expire faster than 100 -for second the memory continued to grow. When you stopped to insert -new keys the memory started to be freed, 100 keys every second in the -best conditions. Under a peak Redis continues to use more and more RAM -even if most keys are expired in each sweep. 
+Specifically this is what Redis does 10 times per second: -### Version 1.1 - -Each time Redis: - -1. Tests 100 random keys from expired keys set. -2. Deletes all the keys found expired. -3. If more than 25 keys were expired, it starts again from 1. +1. Test 100 random keys from the set of keys with an associated expire. +2. Delete all the keys found expired. +3. If more than 25 keys were expired, start again from step 1. This is a trivial probabilistic algorithm, basically the assumption is that our sample is representative of the whole key space, @@ -91,71 +22,18 @@ and we continue to expire until the percentage of keys that are likely to be expired is under 25% This means that at any given moment the maximum amount of keys already -expired that are using memory is at max equal to max setting operations -per second divided by 4. - -## Example +expired that are using memory is at max equal to max amount of write +operations per second divided by 4. -OK, let's start with the problem: +## How expires are handled in the replication link and AOF file - SET a 100 - OK - EXPIRE a 360 - (integer) 1 - INCR a - (integer) 1 - -I set a key to the value of 100, then set an expire of 360 seconds, and then -incremented the key (before the 360 timeout expired of course). The obvious -result would be: 101, instead the key is set to the value of 1. Why? There -is a very important reason involving the Append Only File and Replication. -Let's rework our example a bit by adding the notion of time to the mix: - - SET a 100 - EXPIRE a 5 - ... wait 10 seconds ... - INCR a - -Imagine a Redis version that does not implement the Delete keys with an expire -set on write operation semantic. Running the above example with the 10 seconds -pause will lead to 'a' being set to the value of 1, as it no longer exists -when `INCR` is called 10 seconds later. - -Instead if we drop the 10 seconds pause, the result is that 'a' is set to 101. - - -And in the practice timing changes! 
For instance the client may wait 10 seconds -before `INCR`, but the sequence written in the Append Only File (and later replayed-back -as fast as possible when Redis is restarted) will not have the pause. Even -if we add a timestamp in the AOF, when the time difference is smaller than -our timer resolution, we have a race condition. - -The same happens with master-slave replication. Again, consider the example -above: the client will use the same sequence of commands without the 10 second -pause, but the replication link will slow down for a few seconds due to a network -problem. Result? The master will contain 'a' set to 101, the slave 'a' set -to 1. - -The only way to avoid this but at the same time have reliable non time dependent -timeouts on keys is to destroy volatile keys when a write operation is attempted -against it. - -After all Redis is one of the rare fully persistent databases that will give -you `EXPIRE`. This comes to a cost :) - -## FAQ: How this limitations were solved in Redis versions > 2.1 - -Since Redis 2.1.3 there are no longer restrictions in the use you can do of -write commands against volatile keys, still the replication and AOF file are -guaranteed to be fully consistent. - -In order to obtain a correct behavior without sacrificing consistency now when +In order to obtain a correct behavior without sacrificing consistency, when a key expires, a `DEL` operation is synthesized in both the AOF file and gains all the attached slaves. This way the expiration process is centralized in -the master instance, and there is no longer a chance of consistency errors. - +the master instance, and there is no chance of consistency errors. However while the slaves connected to a master will not expire keys -independently, they'll still take the full state of the expires existing in -the dataset, so when a slave is elected to a master it will be able to expire -the keys independently, fully acting as a master. 
+independently (but will wait for the `DEL` coming from the master), they'll +still take the full state of the expires existing in the dataset, so when a +slave is elected to a master it will be able to expire the keys +independently, fully acting as a master. From 656ba143d57fe54a10a75744bb93cb5b5215a71f Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 20 Apr 2012 19:44:35 +0200 Subject: [PATCH 0123/2880] topics/expire removed because the source of info is the EXPIRE command page now. --- topics/expire.md | 39 --------------------------------------- 1 file changed, 39 deletions(-) delete mode 100644 topics/expire.md diff --git a/topics/expire.md b/topics/expire.md deleted file mode 100644 index 0a583f9048..0000000000 --- a/topics/expire.md +++ /dev/null @@ -1,39 +0,0 @@ -## How Redis expires keys - -Redis keys are expired in two ways: a passive way, and an active way. - -A key is actively expired simply when some client tries to access it, and -the key is found to be timed out. - -Of course this is not enough as there are expired keys that will never -be accessed again. This keys should be expired anyway, so periodically -Redis test a few keys at random among keys with an expire set. -All the keys that are already expired are deleted from the keyspace. - -Specifically this is what Redis does 10 times per second: - -1. Test 100 random keys from the set of keys with an associated expire. -2. Delete all the keys found expired. -3. If more than 25 keys were expired, start again from step 1. - -This is a trivial probabilistic algorithm, basically the assumption is -that our sample is representative of the whole key space, -and we continue to expire until the percentage of keys that are likely -to be expired is under 25% - -This means that at any given moment the maximum amount of keys already -expired that are using memory is at max equal to max amount of write -operations per second divided by 4. 
- -## How expires are handled in the replication link and AOF file - -In order to obtain a correct behavior without sacrificing consistency, when -a key expires, a `DEL` operation is synthesized in both the AOF file and gains -all the attached slaves. This way the expiration process is centralized in -the master instance, and there is no chance of consistency errors. - -However while the slaves connected to a master will not expire keys -independently (but will wait for the `DEL` coming from the master), they'll -still take the full state of the expires existing in the dataset, so when a -slave is elected to a master it will be able to expire the keys -independently, fully acting as a master. From ef0a538d92b67887c28210a926cf89574aebc4f2 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 20 Apr 2012 20:56:18 +0200 Subject: [PATCH 0124/2880] EVAL globals protection documented. --- commands/eval.md | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/commands/eval.md b/commands/eval.md index b0a152834c..da46469ca3 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -322,6 +322,7 @@ I can start writing the following script, using a small Ruby program: RandomPushScript = < 0) do res = redis.call('lpush',KEYS[1],math.random()) i = i-1 @@ -355,6 +356,7 @@ following: RandomPushScript = < 0) do res = redis.call('lpush',KEYS[1],math.random()) @@ -379,6 +381,28 @@ as `math.random` and `math.randomseed` is guaranteed to have the same output regardless of the architecture of the system running Redis. 32 or 64 bit systems like big or little endian systems will still produce the same output. +Globals variables protection +--- + +Redis scripts are not allowed to create global variables, in order to avoid +leaking data into the Lua state. If a script requires to take state across +calls (a pretty uncommon need) it should use Redis keys instead. 
+ +When a global variable access is attempted the script is terminated and EVAL returns with an error: + + redis 127.0.0.1:6379> eval 'a=10' 0 + (error) ERR Error running script (call to f_933044db579a2f8fd45d8065f04a8d0249383e57): user_script:1: Script attempted to create global variable 'a' + +Accessing a *non existing* global variable generates a similar error. + +Using Lua debugging functionalities or other approaches like altering the meta +table used to implement global protections, in order to circumvent globals +protection, is not hard. However it is hardly possible to do it accidentally. +If the user messes with the Lua global state, the consistency of AOF and +replication is not guaranteed: don't do it. + +Note for Lua newbies: in order to avoid using global variables in your scripts simply declare every variable you are going to use using the *local* keyword. + Available libraries --- From 623c5cb492d3339e27bb604da1a9f5701d3cc0ba Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 20 Apr 2012 20:57:04 +0200 Subject: [PATCH 0125/2880] Fixed typo. --- commands/eval.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/eval.md b/commands/eval.md index da46469ca3..06b4ab0066 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -381,7 +381,7 @@ as `math.random` and `math.randomseed` is guaranteed to have the same output regardless of the architecture of the system running Redis. 32 or 64 bit systems like big or little endian systems will still produce the same output. -Globals variables protection +Global variables protection --- Redis scripts are not allowed to create global variables, in order to avoid From 3b6b06dda694b817c015b60e14b8845495dab846 Mon Sep 17 00:00:00 2001 From: antirez Date: Sat, 21 Apr 2012 11:14:07 +0200 Subject: [PATCH 0126/2880] Transactions page improved. Scripting section added. 
--- topics/transactions.md | 30 ++++++++++++++++++++++++++++++ 1 file changed, 30 insertions(+) diff --git a/topics/transactions.md b/topics/transactions.md index 1c24708e6b..f76712f256 100644 --- a/topics/transactions.md +++ b/topics/transactions.md @@ -97,6 +97,19 @@ syntax errors are reported ASAP instead: This time due to the syntax error the bad `INCR` command is not queued at all. +## Errors inside a transaction + +If you have a relational databases background, the fact that Redis commands +can fail during a transaction, but still Redis will execute the rest of the +transaction instead of rolling back, may look odd to you. + +However there are good opinions for this behavior: + +* Redis commands can fail only if called with a wrong syntax, or against keys holding the wrong data type: this means that in practical terms a failing command is the result of a programming errors, and a kind of error that is very likely to be detected during development, and not in production. +* Redis is internally simplified and faster because it does not need the ability to roll back. + +An argument against Redis point of view is that bugs happen, however it should be noted that in general the roll back does not save you from programming errors. For instance if a query increments a key by 2 instead of 1, or increments the wrong key, there is no way for a rollback mechanism to help. Given that no one can save the programmer from his errors, and that the kind of errors required for a Redis command to fail are unlikely to enter in production, we selected the simpler and faster approach of not supporting roll backs on errors. + ## Discarding the command queue `DISCARD` can be used in order to abort a transaction. In this case, no @@ -202,3 +215,20 @@ sorted set in an atomic way. This is the simplest implementation: EXEC If `EXEC` fails (i.e. returns a @nil-reply) we just repeat the operation. 
+
+## Redis scripting and transactions
+
+A [Redis script](/commands/eval) is transactional by definition, so everything
+you can do with a Redis transaction, you can also do with a script, and
+usually the script will be both simpler and faster.
+
+This duplication is due to the fact that scripting was introduced in Redis 2.6
+while transactions already existed long before. However we are unlikely to
+remove the support for transactions in the short term because it seems
+semantically opportune that even without resorting to Redis scripting it is
+still possible to avoid race conditions, especially since the implementation
+complexity of Redis transactions is minimal.
+
+However it is not impossible that in a non-immediate future we'll see that the
+whole user base is just using scripts. If this happens we may deprecate and
+finally remove transactions. From b157993463d229f742dac4afb0172c994a232ec8 Mon Sep 17 00:00:00 2001 From: antirez Date: Sat, 21 Apr 2012 12:11:37 +0200 Subject: [PATCH 0127/2880] Tell the truth about our non perfect support for Solaris-derived OSes --- topics/introduction.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/introduction.md b/topics/introduction.md index 941bc8b9b4..bc66597cd7 100644 --- a/topics/introduction.md +++ b/topics/introduction.md @@ -34,7 +34,7 @@ cache. You can use Redis from [most programming languages](/clients) out there. Redis is written in **ANSI C** and works in most POSIX systems like Linux,
-\*BSD, OS X and Solaris without external dependencies. There
+\*BSD and OS X without external dependencies. Linux and OS X are the two operating systems where Redis is developed and tested the most, and we **recommend using Linux for deploying**. Redis may work in Solaris-derived systems like SmartOS, but the support is *best effort*.
There is no official support for Windows builds, although you may have [some](http://code.google.com/p/redis/issues/detail?id=34) [options](https://github.com/dmajkic/redis). From 4fd222718ed9867c175a2935e669e7be80977673 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 23 Apr 2012 13:27:34 +0200 Subject: [PATCH 0128/2880] Tcl client link fixed. --- clients.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clients.json b/clients.json index eebe8975e4..0841c205e6 100644 --- a/clients.json +++ b/clients.json @@ -307,7 +307,7 @@ { "name": "Tcl Client", "language": "Tcl",
- "repository": "https://github.com/antirez/redis/blob/master/tests/support/redis.tcl",
+ "repository": "https://github.com/antirez/redis/blob/unstable/tests/support/redis.tcl",
 "description": "The client used in the Redis test suite.", "authors": ["antirez"] }, From 6e2a2a25ade0d5e87657234b716099bb61ce79fd Mon Sep 17 00:00:00 2001 From: Shawn Milochik Date: Mon, 30 Apr 2012 04:14:11 -0400 Subject: [PATCH 0129/2880] Some small grammatical changes. --- commands/bgsave.md | 4 +- commands/blpop.md | 2 +- commands/config get.md | 8 +- commands/config set.md | 15 ++-- commands/dbsize.md | 2 +- commands/dump.md | 2 +- commands/eval.md | 170 ++++++++++++++++++++--------------------- 7 files changed, 100 insertions(+), 103 deletions(-) diff --git a/commands/bgsave.md b/commands/bgsave.md index 233964381d..62e36cc145 100644 --- a/commands/bgsave.md +++ b/commands/bgsave.md @@ -1,8 +1,8 @@ Save the DB in background. The OK code is immediately returned.
-Redis forks, the parent continues to server the clients, the child
-saves the DB on disk then exit. A client my be able to check if the
+Redis forks, the parent continues to serve the clients, the child
+saves the DB on disk then exits. A client may be able to check if the
 operation succeeded using the `LASTSAVE` command.
Please refer to the [persistence documentation](/topics/persistence) for detailed information. diff --git a/commands/blpop.md b/commands/blpop.md index 73c7a2b13d..4d85387506 100644 --- a/commands/blpop.md +++ b/commands/blpop.md @@ -5,7 +5,7 @@ non-empty, with the given keys being checked in the order that they are given. ## Non-blocking behavior -When `BLPOP` is called, if at least one of the specified keys contain a +When `BLPOP` is called, if at least one of the specified keys contains a non-empty list, an element is popped from the head of the list and returned to the caller together with the `key` it was popped from. diff --git a/commands/config get.md b/commands/config get.md index e3d93eb768..2f7df9bd62 100644 --- a/commands/config get.md +++ b/commands/config get.md @@ -6,7 +6,7 @@ a server using this command. The symmetric command used to alter the configuration at run time is `CONFIG SET`. -`CONFIG GET` takes a single argument, that is glob style pattern. All the +`CONFIG GET` takes a single argument, which is a glob-style pattern. All the configuration parameters matching this parameter are reported as a list of key-value pairs. Example: @@ -18,14 +18,14 @@ list of key-value pairs. Example: 5) "set-max-intset-entries" 6) "512" -You can obtain a list of all the supported configuration parameters typing +You can obtain a list of all the supported configuration parameters by typing `CONFIG GET *` in an open `redis-cli` prompt. All the supported parameters have the same meaning of the equivalent configuration parameter used in the [redis.conf](http://github.com/antirez/redis/raw/2.2/redis.conf) file, with the following important differences: -* Where bytes or other quantities are specified, it is not possible to use the redis.conf abbreviated form (10k 2gb ... and so forth), everything should be specified as a well formed 64 bit integer, in the base unit of the configuration directive. 
-* The save parameter is a single string of space separated integers. Every pair of integers represent a seconds/modifications threshold. +* Where bytes or other quantities are specified, it is not possible to use the redis.conf abbreviated form (10k 2gb ... and so forth), everything should be specified as a well-formed 64-bit integer, in the base unit of the configuration directive. +* The save parameter is a single string of space-separated integers. Every pair of integers represents a seconds/modifications threshold. For instance what in redis.conf looks like: diff --git a/commands/config set.md b/commands/config set.md index b59e683291..6898a4e996 100644 --- a/commands/config set.md +++ b/commands/config set.md @@ -8,14 +8,13 @@ used to obtain information about the configuration of a running Redis instance. All the configuration parameters set using `CONFIG SET` are immediately loaded -by Redis that will start acting as specified starting from the next command -executed. +by Redis and will take effect starting with the next command executed. All the supported parameters have the same meaning of the equivalent configuration parameter used in the [redis.conf](http://github.com/antirez/redis/raw/2.2/redis.conf) file, with the following important differences: -* Where bytes or other quantities are specified, it is not possible to use the redis.conf abbreviated form (10k 2gb ... and so forth), everything should be specified as a well formed 64 bit integer, in the base unit of the configuration directive. -* The save parameter is a single string of space separated integers. Every pair of integers represent a seconds/modifications threshold. +* Where bytes or other quantities are specified, it is not possible to use the redis.conf abbreviated form (10k 2gb ... and so forth), everything should be specified as a well-formed 64-bit integer, in the base unit of the configuration directive. +* The save parameter is a single string of space-separated integers. 
Every pair of integers represents a seconds/modifications threshold. For instance what in redis.conf looks like: @@ -26,14 +25,14 @@ that means, save after 900 seconds if there is at least 1 change to the dataset, and after 300 seconds if there are at least 10 changes to the datasets, should be set using `CONFIG SET` as "900 1 300 10".
-It is possible to switch persistence form .rdb snapshotting to append only file
+It is possible to switch persistence from .rdb snapshotting to append-only file
 (and the other way around) using the `CONFIG SET` command. For more information
-about how to do that please check [persistence page](/topics/persistence).
+about how to do that please check the [persistence page](/topics/persistence).
 In general what you should know is that setting the *appendonly* parameter to
-*yes* will start a background process to save the initial append only file
+*yes* will start a background process to save the initial append-only file
 (obtained from the in memory data set), and will append all the subsequent
-commands on the append only file, thus obtaining exactly the same effect of
+commands on the append-only file, thus obtaining exactly the same effect of
 a Redis server that started with AOF turned on since the start. You can have both the AOF enabled with .rdb snapshotting if you want, the diff --git a/commands/dbsize.md b/commands/dbsize.md index 468ac7d678..341d750148 100644 --- a/commands/dbsize.md +++ b/commands/dbsize.md @@ -1,6 +1,6 @@
-Return the number of keys in the currently selected database.
+Return the number of keys in the currently-selected database.
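The pairing rule for the `save` string described above ("900 1 300 10" meaning 900 seconds/1 change and 300 seconds/10 changes) can be sketched with a tiny parser. `parse_save_param` is a hypothetical helper name invented for this sketch, not part of any Redis client:

```python
def parse_save_param(value):
    """Split the space-separated `save` string into
    (seconds, changes) threshold pairs."""
    fields = [int(f) for f in value.split()]
    if len(fields) % 2 != 0:
        raise ValueError("save string needs an even number of integers")
    # pair every even-indexed field with the field that follows it
    return list(zip(fields[0::2], fields[1::2]))


thresholds = parse_save_param("900 1 300 10")  # [(900, 1), (300, 10)]
```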
@return diff --git a/commands/dump.md b/commands/dump.md index 7ce3eacf6d..d50a4d1add 100644 --- a/commands/dump.md +++ b/commands/dump.md @@ -2,7 +2,7 @@ Serialize the value stored at key in a Redis-specific format and return it to th The serialization format is opaque and non-standard, however it has a few semantical characteristics: -* It contains a 64bit checksum that is used to make sure errors will be detected. The `RESTORE` command makes sure to check the checksum before synthesizing a key using the serialized value. +* It contains a 64-bit checksum that is used to make sure errors will be detected. The `RESTORE` command makes sure to check the checksum before synthesizing a key using the serialized value. * Values are encoded in the same format used by RDB. * An RDB version is encoded inside the serialized value, so that different Redis versions with incompatible RDB formats will refuse to process the serialized value. diff --git a/commands/eval.md b/commands/eval.md index 06b4ab0066..09d15982a7 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -53,9 +53,9 @@ script uses should be passed using the KEYS array, in the following way: > eval "return redis.call('set',KEYS[1],'bar')" 1 foo OK -The reason for passing keys in the proper way is that, before of `EVAL` all +The reason for passing keys in the proper way is that, before `EVAL` all the Redis commands could be analyzed before execution in order to -establish what are the keys the command will operate on. +establish what keys the command will operate on. In order for this to be true for `EVAL` also keys must be explicit. This is useful in many ways, but especially in order to make sure Redis Cluster @@ -70,14 +70,14 @@ Conversion between Lua and Redis data types Redis return values are converted into Lua data types when Lua calls a Redis command using call() or pcall(). 
Similarly Lua data types are -converted into Redis protocol when a Lua script returns some value, so that -scripts can control what `EVAL` will reply to the client. +converted into the Redis protocol when a Lua script returns a value, so that +scripts can control what `EVAL` will return to the client. This conversion between data types is designed in a way that if a Redis type is converted into a Lua type, and then the result is converted back into a Redis type, the result is the same as of the initial value. -In other words there is a one to one conversion between Lua and Redis types. +In other words there is a one-to-one conversion between Lua and Redis types. The following table shows you all the conversions rules: **Redis to Lua** conversion table. @@ -98,12 +98,12 @@ The following table shows you all the conversions rules: * Lua table with a single `err` field -> Redis error reply * Lua boolean false -> Redis Nil bulk reply. -There is an additional Lua to Redis conversion rule that has no corresponding +There is an additional Lua-to-Redis conversion rule that has no corresponding Redis to Lua conversion rule: * Lua boolean true -> Redis integer reply with value of 1. -The followings are a few conversion examples: +Here are a few conversion examples: > eval "return 10" 0 (integer) 10 @@ -117,9 +117,9 @@ The followings are a few conversion examples: > eval "return redis.call('get','foo')" 0 "bar" -The last example shows how it is possible to directly return from Lua -the return value of `redis.call()` and `redis.pcall()` with the result of -returning exactly what the called command would return if called directly. +The last example shows how it is possible to receive the exact return value of +`redis.call()` or `redis.pcall()` from Lua that would be returned if the +command was called directly. Atomicity of scripts --- @@ -140,8 +140,8 @@ is busy. 
Error handling --- -As already stated calls to `redis.call()` resulting into a Redis command error -will stop the execution of the script and will return that error back, in a +As already stated, calls to `redis.call()` resulting in a Redis command error +will stop the execution of the script and will return the error, in a way that makes it obvious that the error was generated by a script: > del foo @@ -153,7 +153,7 @@ way that makes it obvious that the error was generated by a script: Using the `redis.pcall()` command no error is raised, but an error object is returned in the format specified above (as a Lua table with an `err` -field). The user can later return this exact error to the user just returning +field). The script can pass the exact error to the user by returning the error object returned by `redis.pcall()`. Bandwidth and EVALSHA @@ -164,25 +164,24 @@ Redis does not need to recompile the script every time as it uses an internal caching mechanism, however paying the cost of the additional bandwidth may not be optimal in many contexts. -On the other hand defining commands using a special command or via `redis.conf` +On the other hand, defining commands using a special command or via `redis.conf` would be a problem for a few reasons: * Different instances may have different versions of a command implementation. -* Deployment is hard if there is to make sure all the instances contain a given command, especially in a distributed environment. +* Deployment is hard if all the instances do not support a given command, especially in a distributed environment. -* Reading an application code the full semantic could not be clear since the application would call commands defined server side. +* Application code which uses commands defined server-side may cause confusion for other developers. -In order to avoid the above three problems and at the same time don't incur -in the bandwidth penalty, Redis implements the `EVALSHA` command. 
+In order to avoid these problems while avoiding +the bandwidth penalty, Redis implements the `EVALSHA` command. -`EVALSHA` works exactly as `EVAL`, but instead of having a script as first argument it has the SHA1 sum of a script. The behavior is the following: +`EVALSHA` works exactly like `EVAL`, but instead of having a script as the first argument it has the SHA1 hash of a script. The behavior is the following: -* If the server still remembers a script whose SHA1 sum was the one -specified, the script is executed. +* If the server still remembers a script with a matching SHA1 hash, the script is executed. -* If the server does not remember a script with this SHA1 sum, a special -error is returned that will tell the client to use `EVAL` instead. +* If the server does not remember a script with this SHA1 hash, a special +error is returned telling the client to use `EVAL` instead. Example: @@ -196,10 +195,10 @@ Example: (error) `NOSCRIPT` No matching script. Please use `EVAL`. The client library implementation can always optimistically send `EVALSHA` under -the hoods even when the client actually called `EVAL`, in the hope the script +the hood even when the client actually calls `EVAL`, in the hope the script was already seen by the server. If the `NOSCRIPT` error is returned `EVAL` will be used instead. -Passing keys and arguments as `EVAL` additional arguments is also +Passing keys and arguments as additional `EVAL` arguments is also very useful in this context as the script string remains constant and can be efficiently cached by Redis. @@ -211,25 +210,25 @@ This means that if an `EVAL` is performed against a Redis instance all the subsequent `EVALSHA` calls will succeed. The only way to flush the script cache is by explicitly calling the -SCRIPT FLUSH command, that will *completely flush* the scripts cache removing +SCRIPT FLUSH command, which will *completely flush* the scripts cache removing all the scripts executed so far. 
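The optimistic `EVALSHA`-then-`EVAL` fallback described above can be sketched in Python. `FakeScriptCache` is an invented stand-in for the server-side script cache (no real connection involved); the interesting part is `run_script`, the client-side logic:

```python
import hashlib


class FakeScriptCache:
    """Invented stand-in for the server-side script cache."""

    def __init__(self):
        self.scripts = {}

    def evalsha(self, digest):
        if digest not in self.scripts:
            raise KeyError("NOSCRIPT No matching script. Please use EVAL.")
        return "ran " + digest

    def eval(self, script):
        # EVAL both runs the script and populates the cache
        digest = hashlib.sha1(script.encode("utf-8")).hexdigest()
        self.scripts[digest] = script
        return "ran " + digest


def run_script(server, script):
    """Optimistically try EVALSHA; fall back to EVAL on NOSCRIPT."""
    digest = hashlib.sha1(script.encode("utf-8")).hexdigest()
    try:
        return server.evalsha(digest)  # cheap: only the digest travels
    except KeyError:
        return server.eval(script)     # first time: send the whole body


server = FakeScriptCache()
first = run_script(server, "return 1")   # falls back to EVAL
second = run_script(server, "return 1")  # served from the cache
```

Since the script string is constant, every call after the first costs only the 40-byte digest on the wire.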
This is usually needed only when the instance is going to be instantiated for another customer or application in a cloud environment. The reason why scripts can be cached for long time is that it is unlikely -for a well written application to have so many different scripts to create +for a well written application to have enough different scripts to cause memory problems. Every script is conceptually like the implementation of a new command, and even a large application will likely have just a few -hundreds of that. Even if the application is modified many times and -scripts will change, still the memory used is negligible. +hundred of them. Even if the application is modified many times and +scripts will change, the memory used is negligible. The fact that the user can count on Redis not removing scripts -is semantically a very good thing. For instance an application taking -a persistent connection to Redis can stay sure that if a script was -sent once it is still in memory, thus for instance can use EVALSHA -against those scripts in a pipeline without the chance that an error -will be generated since the script is not known (we'll see this problem -in its details later). +is semantically a very good thing. For instance an application with +a persistent connection to Redis can be sure that if a script was +sent once it is still in memory, so EVALSHA can be used +against those scripts in a pipeline without the chance of an error +being generated due to an unknown script (we'll see this problem +in detail later). The SCRIPT command --- @@ -238,9 +237,9 @@ Redis offers a SCRIPT command that can be used in order to control the scripting subsystem. SCRIPT currently accepts three different commands: * SCRIPT FLUSH. This command is the only way to force Redis to flush the -scripts cache. It is mostly useful in a cloud environment where the same +scripts cache. It is most useful in a cloud environment where the same instance can be reassigned to a different user. 
It is also useful for -testing client libraries implementations of the scripting feature. +testing client libraries' implementations of the scripting feature. * SCRIPT EXISTS *sha1* *sha2* ... *shaN*. Given a list of SHA1 digests as arguments this command returns an array of 1 or 0, where 1 means the @@ -254,22 +253,22 @@ we want to make sure that `EVALSHA` will not fail (for instance during a pipeline or MULTI/EXEC operation), without the need to actually execute the script. -* SCRIPT KILL. This command is the only wait to interrupt a long running -script that reached the configured maximum execution time for scripts. -The SCRIPT KILL command can only be used with scripts that did not modified -the dataset during their execution (since stopping a read only script does -not violate the scripting engine guaranteed atomicity). +* SCRIPT KILL. This command is the only way to interrupt a long-running +script that reaches the configured maximum execution time for scripts. +The SCRIPT KILL command can only be used with scripts that did not modify +the dataset during their execution (since stopping a read-only script does +not violate the scripting engine's guaranteed atomicity). See the next sections for more information about long running scripts. Scripts as pure functions --- A very important part of scripting is writing scripts that are pure functions. -Scripts executed in a Redis instance are replicated on slaves sending the -same script, instead of the resulting commands. The same happens for the -Append Only File. The reason is that scripts are much faster than sending -commands one after the other to a Redis instance, so if the client is -taking the master very busy sending scripts, turning this scripts into single +Scripts executed in a Redis instance are replicated on slaves by sending the +script -- not the resulting commands. The same happens for the Append Only File. 
+The reason is that sending a script to another Redis instance is much faster +than sending the multiple commands the script generates, so if the client is +sending many scripts to the master, converting the scripts into individual commands for the slave / AOF would result in too much bandwidth for the replication link or the Append Only File (and also too much CPU since dispatching a command received via network is a lot more work for Redis @@ -280,28 +279,28 @@ have the following property: * The script always evaluates the same Redis *write* commands with the same arguments given the same input data set. Operations performed by -the script cannot depend on any hidden (non explicit) information or state +the script cannot depend on any hidden (non-explicit) information or state that may change as script execution proceeds or between different executions of the script, nor can it depend on any external input from I/O devices. Things like using the system time, calling Redis random commands like `RANDOMKEY`, or using Lua random number generator, could result into scripts -that will not evaluate always in the same way. +that will not always evaluate in the same way. In order to enforce this behavior in scripts Redis does the following: * Lua does not export commands to access the system time or other external state. -* Redis will block the script with an error if a script will call a +* Redis will block the script with an error if a script calls a Redis command able to alter the data set **after** a Redis *random* command like `RANDOMKEY`, `SRANDMEMBER`, `TIME`. This means that if a script is -read only and does not modify the data set it is free to call those commands. -Note that a *random command* does not necessarily identifies a command that -uses random numbers: any non deterministic command is considered a random +read-only and does not modify the data set it is free to call those commands. 
+Note that a *random command* does not necessarily mean a command that +uses random numbers: any non-deterministic command is considered a random command (the best example in this regard is the `TIME` command). * Redis commands that may return elements in random order, like `SMEMBERS` -(because Redis Sets are *unordered*) have a different behavior when called from Lua, and undergone a silent lexicographical sorting filter before returning data to Lua scripts. So `redis.call("smembers",KEYS[1])` will always return the Set elements in the same order, while the same command invoked from normal clients may return different results even if the key contains exactly the same elements. +(because Redis Sets are *unordered*) have a different behavior when called from Lua, and undergo a silent lexicographical sorting filter before returning data to Lua scripts. So `redis.call("smembers",KEYS[1])` will always return the Set elements in the same order, while the same command invoked from normal clients may return different results even if the key contains exactly the same elements. * Lua pseudo random number generation functions `math.random` and `math.randomseed` are modified in order to always have the same seed every @@ -309,11 +308,11 @@ time a new script is executed. This means that calling `math.random` will always generate the same sequence of numbers every time a script is executed if `math.randomseed` is not used. -However the user is still able to write commands with random behaviors +However the user is still able to write commands with random behavior using the following simple trick. Imagine I want to write a Redis script that will populate a list with N random integers. 
-I can start writing the following script, using a small Ruby program: +I can start with this small Ruby program: require 'rubygems' require 'redis' @@ -348,11 +347,10 @@ following elements: 9) "0.74990198051087" 10) "0.17082803611217" -In order to make it a pure function, but still making sure that every +In order to make it a pure function, but still be sure that every invocation of the script will result in different random elements, we can -simply add an additional argument to the script, that will be used in order to -seed the Lua pseudo random number generator. The new script will be like the -following: +simply add an additional argument to the script that will be used in order to +seed the Lua pseudo-random number generator. The new script is as follows: RandomPushScript = < eval 'a=10' 0 (error) ERR Error running script (call to f_933044db579a2f8fd45d8065f04a8d0249383e57): user_script:1: Script attempted to create global variable 'a' Accessing a *non existing* global variable generates a similar error. -Using Lua debugging functionalities or other approaches like altering the meta -table used to implement global protections, in order to circumvent globals -protection, is not hard. However it is hardly possible to do it accidentally. +Using Lua debugging functionality or other approaches like altering the meta +table used to implement global protections in order to circumvent globals +protection is not hard. However it is difficult to do it accidentally. If the user messes with the Lua global state, the consistency of AOF and replication is not guaranteed: don't do it. @@ -419,7 +417,7 @@ The Redis Lua interpreter loads the following Lua libraries: Every Redis instance is *guaranteed* to have all the above libraries so you can be sure that the environment for your Redis scripts is always the same. -The CJSON library allows to manipulate JSON data in a very fast way from Lua. +The CJSON library provides extremely fast JSON maniplation within Lua. 
All the other libraries are standard Lua libraries. Emitting Redis logs from scripts @@ -437,7 +435,8 @@ loglevel is one of: * `redis.LOG_NOTICE` * `redis.LOG_WARNING` -They exactly correspond to the normal Redis log levels. Only logs emitted by scripting using a log level that is equal or greater than the currently configured +They correspond directly to the normal Redis log levels. Only logs emitted by +scripting using a log level that is equal or greater than the currently configured Redis instance log level will be emitted. The `message` argument is simply a string. Example: @@ -451,32 +450,31 @@ Will generate the following: Sandbox and maximum execution time --- -Scripts should never try to access the external system, like the file system, -nor calling any other system call. A script should just do its work operating -on Redis data and passed arguments. +Scripts should never try to access the external system, like the file system +or any other system call. A script should only operate on Redis data and passed +arguments. Scripts are also subject to a maximum execution time (five seconds by default). -This default timeout is huge since a script should run usually in a sub -millisecond amount of time. The limit is mostly needed in order to avoid -problems when developing scripts that may loop forever for a programming -error. +This default timeout is huge since a script should usually run in under a +millisecond. The limit is mostly to handle accidental infinite loops created +during development. It is possible to modify the maximum time a script can be executed -with milliseconds precision, either via `redis.conf` or using the +with millisecond precision, either via `redis.conf` or using the CONFIG GET / CONFIG SET command. The configuration parameter affecting max execution time is called `lua-time-limit`. 
When a script reaches the timeout it is not automatically terminated by Redis since this violates the contract Redis has with the scripting engine
-to ensure that scripts are atomic in nature. Stopping a script half-way means
-to possibly leave the dataset with half-written data inside.
+to ensure that scripts are atomic. Interrupting a script means potentially
+leaving the dataset with half-written data.
 For these reasons, when a script executes for more than the specified time the following happens:
-* Redis logs that a script that is running for too much time is still in execution.
+* Redis logs that a script is running too long.
 * It starts accepting commands again from other clients, but will reply with a BUSY error to all the clients sending normal commands. The only allowed commands in this status are `SCRIPT KILL` and `SHUTDOWN NOSAVE`.
-* It is possible to terminate a script that executed only read-only commands using the `SCRIPT KILL` command. This does not violate the scripting semantic as no data was yet written on the dataset by the script.
+* It is possible to terminate a script that executes only read-only commands using the `SCRIPT KILL` command. This does not violate the scripting semantic as no data was yet written to the dataset by the script.
-* If the script already called write commands the only allowed command becomes `SHUTDOWN NOSAVE` that stops the server not saving the current data set on disk (basically the server is aborted).
+* If the script already called write commands the only allowed command becomes `SHUTDOWN NOSAVE` that stops the server without saving the current data set on disk (basically the server is aborted).
 EVALSHA in the context of pipelining --- @@ -493,5 +491,5 @@ approaches: * Accumulate all the commands to send into the pipeline, then check for `EVAL` commands and use the `SCRIPT EXISTS` command to check if all the
-scripts are already defined.
If not, add `SCRIPT LOAD` commands on top of the pipeline as required, and use `EVALSHA` for all the `EVAL` calls. From f91671abc91ee7188d0d4ff66359c850a4e387b1 Mon Sep 17 00:00:00 2001 From: Shawn Milochik Date: Mon, 30 Apr 2012 08:21:32 -0400 Subject: [PATCH 0130/2880] Reverted edits which changed meaning. --- commands/eval.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands/eval.md b/commands/eval.md index 09d15982a7..f9d9a522ee 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -169,9 +169,9 @@ would be a problem for a few reasons: * Different instances may have different versions of a command implementation. -* Deployment is hard if all the instances do not support a given command, especially in a distributed environment. +* Deployment is hard if there is to make sure all the instances contain a given command, especially in a distributed environment. -* Application code which uses commands defined server-side may cause confusion for other developers. +* Reading an application code the full semantic could not be clear since the application would call commands defined server side. In order to avoid these problems while avoiding the bandwidth penalty, Redis implements the `EVALSHA` command. From 25a671f2cdf509027974e14a4c6ecec03a1bdfb9 Mon Sep 17 00:00:00 2001 From: Shawn Milochik Date: Mon, 30 Apr 2012 08:25:03 -0400 Subject: [PATCH 0131/2880] Changed "SHA1 x" to "SHA1 digest." --- commands/eval.md | 6 +++--- commands/script exists.md | 4 ++-- commands/script load.md | 2 +- topics/data-types-intro.md | 4 ++-- topics/persistence.md | 2 +- 5 files changed, 9 insertions(+), 9 deletions(-) diff --git a/commands/eval.md b/commands/eval.md index f9d9a522ee..caa7c60520 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -176,11 +176,11 @@ would be a problem for a few reasons: In order to avoid these problems while avoiding the bandwidth penalty, Redis implements the `EVALSHA` command. 
-`EVALSHA` works exactly like `EVAL`, but instead of having a script as the first argument it has the SHA1 hash of a script. The behavior is the following: +`EVALSHA` works exactly like `EVAL`, but instead of having a script as the first argument it has the SHA1 digest of a script. The behavior is the following: -* If the server still remembers a script with a matching SHA1 hash, the script is executed. +* If the server still remembers a script with a matching SHA1 digest, the script is executed. -* If the server does not remember a script with this SHA1 hash, a special +* If the server does not remember a script with this SHA1 digest, a special error is returned telling the client to use `EVAL` instead. Example: diff --git a/commands/script exists.md b/commands/script exists.md index bda2569865..625e616711 100644 --- a/commands/script exists.md +++ b/commands/script exists.md @@ -1,6 +1,6 @@ Returns information about the existence of the scripts in the script cache. -This command accepts one or more SHA1 sums and returns a list of ones or zeros to signal if the scripts are already defined or not inside the script cache. +This command accepts one or more SHA1 digests and returns a list of ones or zeros to signal if the scripts are already defined or not inside the script cache. This can be useful before a pipelining operation to ensure that scripts are loaded (and if not, to load them using `SCRIPT LOAD`) so that the pipelining operation can be performed solely using `EVALSHA` instead of `EVAL` to save bandwidth. Plase check the `EVAL` page for detailed information about how Redis Lua scripting works. @@ -8,7 +8,7 @@ Plase check the `EVAL` page for detailed information about how Redis Lua scripti @return @multi-bulk-reply -The command returns an array of integers that correspond to the specified SHA1 sum arguments. For every corresponding SHA1 sum of a script that actually exists in the script cache, an 1 is returned, otherwise 0 is returned. 
+The command returns an array of integers that correspond to the specified SHA1 digest arguments. For every corresponding SHA1 digest of a script that actually exists in the script cache, an 1 is returned, otherwise 0 is returned. @example diff --git a/commands/script load.md b/commands/script load.md index a3bfd38e16..b5d2ff15f4 100644 --- a/commands/script load.md +++ b/commands/script load.md @@ -10,4 +10,4 @@ Plase check the `EVAL` page for detailed information about how Redis Lua scripti @return @bulk-reply -This command returns the SHA1 sum of the script added into the script cache. +This command returns the SHA1 digest of the script added into the script cache. diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md index 5be69f87bb..3feb1024f2 100644 --- a/topics/data-types-intro.md +++ b/topics/data-types-intro.md @@ -293,7 +293,7 @@ Our first attempt (that is broken) can be the following. Let's suppose we want to get a unique ID for the tag "redis": * In order to make this algorithm binary safe (they are just tags but think to - utf8, spaces and so forth) we start performing the SHA1 sum of the tag. + utf8, spaces and so forth) we start performing the SHA1 digest of the tag. SHA1(redis) = b840fc02d524045429941cc15f59e41cb7be6c52. * Let's check if this tag is already associated with a unique ID with the command *GET tag:b840fc02d524045429941cc15f59e41cb7be6c52:id*. @@ -313,7 +313,7 @@ return the wrong ID to the caller. To fix the algorithm is not hard fortunately, and this is the sane version: * In order to make this algorithm binary safe (they are just tags but think to - utf8, spaces and so forth) we start performing the SHA1 sum of the tag. + utf8, spaces and so forth) we start performing the SHA1 digest of the tag. SHA1(redis) = b840fc02d524045429941cc15f59e41cb7be6c52. * Let's check if this tag is already associated with a unique ID with the command *GET tag:b840fc02d524045429941cc15f59e41cb7be6c52:id*. 
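Both uses of SHA1 above — the script-cache digests passed to `EVALSHA`/`SCRIPT EXISTS` and the tag-to-ID keys — are the plain lowercase-hex SHA1 of the raw bytes. A minimal sketch (the helper names are mine, not part of the docs):

```python
import hashlib

def sha1_digest(data: bytes) -> str:
    # Lowercase hex SHA1: the form SCRIPT LOAD returns and EVALSHA
    # and SCRIPT EXISTS expect as arguments.
    return hashlib.sha1(data).hexdigest()

def tag_id_key(tag: str) -> str:
    # Binary-safe key for the tag-to-unique-ID scheme described above.
    return "tag:%s:id" % sha1_digest(tag.encode("utf-8"))
```

For example `tag_id_key("redis")` yields the `tag:b840fc02d524045429941cc15f59e41cb7be6c52:id` key used in the walkthrough above.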
diff --git a/topics/persistence.md b/topics/persistence.md index 8d6df85ec5..2e4dd67b33 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -276,7 +276,7 @@ for best results. It is important to understand that this systems can easily fail if not coded in the right way. At least make absolutely sure that after the transfer is completed you are able to verify the file size (that should match the one of -the file you copied) and possibly the SHA1 sum if you are using a VPS. +the file you copied) and possibly the SHA1 digest if you are using a VPS. You also need some kind of independent alert system if the transfer of fresh backups is not working for some reason. From df7ca0ebff15be89ea6ce7fe578e10d3e7e169bc Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 2 May 2012 11:57:38 +0200 Subject: [PATCH 0132/2880] Issue 488 added in /topics/problems. --- topics/problems.md | 1 + 1 file changed, 1 insertion(+) diff --git a/topics/problems.md b/topics/problems.md index a69424a71f..6724eb343d 100644 --- a/topics/problems.md +++ b/topics/problems.md @@ -14,6 +14,7 @@ List of known critical bugs in previous Redis releases. Note: this list may not be complete as we staretd it March 30, 2012, and did not included much historical data. +* Redis version up to 2.4.12 and 2.6.0-RC1: KEYS may not list all the keys, or may list duplicated keys, if keys with an expire set are present in the database. [Issue #488](https://github.com/antirez/redis/pull/488). * Redis version up to 2.4.10: SORT using GET or BY option with keys with an expire set may crash the server. [Issue #460](http://github.com/antirez/redis/issues/460). * Redis version up to 2.4.10: a bug in the aeWait() implementation in ae.c may result in a server crash under extremely hard to replicate conditions. [Issue #267](http://github.com/antirez/redis/issues/267). * Redis version up to 2.4.9: **memory leak in replication**. 
A memory leak was triggered by replicating a master contaning a database ID greatear than ID 9. From 6bfb67f36aacc62f0b9f89ed4ef8fdea6737506a Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 2 May 2012 12:01:06 +0200 Subject: [PATCH 0133/2880] Xen fork latency problem documented into /topics/problems. --- topics/problems.md | 1 + 1 file changed, 1 insertion(+) diff --git a/topics/problems.md b/topics/problems.md index 6724eb343d..5d36b1c425 100644 --- a/topics/problems.md +++ b/topics/problems.md @@ -37,3 +37,4 @@ List of known Linux related bugs affecting Redis. === * Ubuntu 10.04 and 10.10 have serious bugs (especially 10.10) that cause slow downs if not just instance hangs. Please move away from the default kernels shipped with this distributions. [Link to 10.04 bug](https://silverline.librato.com/blog/main/EC2_Users_Should_be_Cautious_When_Booting_Ubuntu_10_04_AMIs). [Link to 10.10 bug](https://bugs.launchpad.net/ubuntu/+source/linux/+bug/666211). Both bugs were reported many times in the context of EC2 instances, but other users confirmed that also native servers are affected (at least by one of the two). +* Certain versions of the Xen hypervisor are known to have very bad fork() performances. See [the latency page](/topics/latency) for more information. From e71525a10ab232c64475eac6bbe66cb6e9d91d97 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 2 May 2012 12:11:52 +0200 Subject: [PATCH 0134/2880] Fixed issue number. --- topics/problems.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/problems.md b/topics/problems.md index 5d36b1c425..fdc96b7eeb 100644 --- a/topics/problems.md +++ b/topics/problems.md @@ -14,7 +14,7 @@ List of known critical bugs in previous Redis releases. Note: this list may not be complete as we staretd it March 30, 2012, and did not included much historical data. 
-* Redis version up to 2.4.12 and 2.6.0-RC1: KEYS may not list all the keys, or may list duplicated keys, if keys with an expire set are present in the database. [Issue #488](https://github.com/antirez/redis/pull/488). +* Redis version up to 2.4.12 and 2.6.0-RC1: KEYS may not list all the keys, or may list duplicated keys, if keys with an expire set are present in the database. [Issue #487](https://github.com/antirez/redis/pull/487). * Redis version up to 2.4.10: SORT using GET or BY option with keys with an expire set may crash the server. [Issue #460](http://github.com/antirez/redis/issues/460). * Redis version up to 2.4.10: a bug in the aeWait() implementation in ae.c may result in a server crash under extremely hard to replicate conditions. [Issue #267](http://github.com/antirez/redis/issues/267). * Redis version up to 2.4.9: **memory leak in replication**. A memory leak was triggered by replicating a master contaning a database ID greatear than ID 9. From 6a1e0602f62ed31e84d3656071c863cc9a4dfeb6 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 3 May 2012 17:50:17 +0200 Subject: [PATCH 0135/2880] Redis Sentinel specification draft. --- topics/sentinel-spec.md | 291 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 291 insertions(+) create mode 100644 topics/sentinel-spec.md diff --git a/topics/sentinel-spec.md b/topics/sentinel-spec.md new file mode 100644 index 0000000000..0e9dd0288c --- /dev/null +++ b/topics/sentinel-spec.md @@ -0,0 +1,291 @@ +Redis Sentinel design draft +=== + +Redis Sentinel is the name of the Redis high availability solution that's +currently under development. It has nothing to do with Redis Cluster and +is intended to be used by people that don't need Redis Cluster, but simply +a way to perform automatic fail over when a master instance is not functioning +correctly. 
+ +The plan is to provide an usable beta implementaiton of Redis Sentinel in a +short time, preferrably in mid June 2012. + +In short this is what Redis Sentinel will be able to do: + +1) Monitor master instances to see if they are available. +2) Promote a slave to master when the master fails. +3) Modify clients configurations when a slave is elected. +4) Inform the system administrator about incidents using notifications. + +The following document explains what is the design of Redis Sentinel in order +to accomplish this goals. + +Redis Sentinel idea +=== + +The idea of Redis Sentinel is to have multiple "monitoring devices" in +different places of your network, monitoring the Redis master instance. + +However this independent devices can't act without agreement with other +sentinels. + +Once a Redis master instance is detected as failing, for the fail over process +to start the sentinel must verify that there is a given level of agreement. + +The amount of sentinels, their location in the network, and the +"minimal agreement" configured, select the desired behavior among many +possibilities. + +Redis Sentinel does not use any proxy: client reconfiguration are performed +running user-provided executables (for instance a shell script or a +Python program) in a user setup specific way. + +In what form it will be shipped +=== + +Redis Sentinel will just be a special mode of the redis-server executable. + +If the redis-server is called with "redis-sentinel" as argv[0] (for instance +using a symbolic link or copying the file), or if --sentinel option is passed, +the Redis instance starts in sentinel mode and will only understand sentinel +related commands. All the other commands will be refused. + +The whole implementation of sentinel will live in a separated file sentinel.c +with minimal impact on the rest of the code base. 
However this solution allows +to use all the facilities already implemented inside Redis without any need +to reimplement them or to maintain a separated code base for Redis Sentinel. + +Sentinels networking +=== + +All the sentinels take a connection with the monitored master. + +Sentinels use the Redis protocol to talk with each other when needed. + +Redis Sentinels export a single SENTINEL command. Subcommands of the SENTINEL +command are used in order to perform different actions. + +For instance to check what a sentinel thinks about the state of the master +it is possible to send the "SENTINEL STATUS" command using redis-cli. + +There is no gossip going on between sentinels. A sentinel instance will query +other instances only when an agreement is needed about the state of the +master or slaves. + +The list of networking tasks performed by every sentinel is the following: + +1) A Sentinel PUBLISH its presence using the master Pub/Sub every minute. +2) A Sentinel accepts commands using a TCP port. +3) A Sentinel constantly monitors master and slaves sending PING commands. +4) A Sentinel sends INFO commands to the master every minute in order to take a fresh list of connected slaves. +5) A Sentinel monitors the snetinels Pub/SUb channel in order to discover newly connected setninels. + +Sentinels discovering +=== + +While sentinels don't use some kind of bus interconnecting every Redis Sentinel +instance to each other, they still need to know the IP address and port of +each other sentinel instance, because this is useful to run the agreement +protocol needed to perform the slave election. + +To make the configuration of sentinels as simple as possible every sentinel +broadcasts its presence using the Redis master Pub/Sub functionality. + +Every sentinel is subscribed to the same channel, and broadcast information +about its existence to the same channel, including the "Run ID" of the Sentinel, +and the IP address and port where it is listening for commands. 
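The draft does not fix a wire format for the Pub/Sub announcement; the sketch below assumes a simple comma-separated `runid,ip,port` payload purely for illustration — the real encoding is left unspecified by the spec.

```python
def format_hello(run_id, ip, port):
    # Payload broadcast on the shared channel. The comma-separated
    # layout is an assumption of this sketch, not part of the draft.
    return "%s,%s,%d" % (run_id, ip, port)

def parse_hello(payload):
    # Inverse of format_hello: recover the announcing sentinel's
    # Run ID and the address where it accepts commands.
    run_id, ip, port = payload.split(",")
    return run_id, ip, int(port)
```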
+ +Every sentinel maintain a list of other sentinels ID, IP and port. +A sentinel that does no longer announce its presence using Pub/Sub for too +long time is removed from the list. In that case, optionally, a notification +is delivered to the system administrator. + +Detection of failing masters +=== + +An instance is not available from the point of view of Redis Sentinel when +it is no longer able to reply to the PING command correctly for longer than +the specified number of seconds, consecutively. + +For a PING reply to be considered valid, one of the following conditions +should be true: + +1) PING replied with +PONG. +2) PING replied with -LOADING error. +3) PING replied with -MASTERDOWN error. + +What is not considered an acceptable reply: + +1) PING replied with -BUSY error. +2) PING replied with -MISCONF error. +3) PING reply not received after more than a specified number of milliseconds. + +PING should never reply with a different error code than the ones listed above +but any other error code is considered an acceptable reply by Redis Sentinel. + +Handling of -BUSY state +=== + +The -BUSY error is returned when a script is running for more time than the +configured script time limit. When this happens before triggering a fail over +Redis Sentinel will try to send a "SCRIPT KILL" command, that will only +succeed if the script was read-only. + +Agreement with other sentinels +=== + +Once a Sentinel detects that the master is failing, in order to perform the +fail over, it must make sure that the required number of other sentinels +are agreeing as well. + +To do so one sentinel after the other is checked to see if the needed +quorum is reached, as configured by the user. 
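The PING-reply rules above reduce to a small predicate. A sketch, special-casing only the reply codes the draft lists (note the draft's own rule that an *unexpected* error code still counts as acceptable):

```python
def ping_reply_acceptable(reply, elapsed_ms, timeout_ms):
    """True if the PING outcome counts the instance as reachable.

    +PONG, -LOADING and -MASTERDOWN are acceptable; -BUSY, -MISCONF and
    a missing or late reply are not; per the draft, any other error
    code is still treated as acceptable."""
    if reply is None or elapsed_ms > timeout_ms:
        return False  # no valid reply within the configured time
    code = reply.split()[0]
    return code not in ("-BUSY", "-MISCONF")
```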
+ +If the needed level of agreement is reached, the sentinel schedules the +fail over after DELAY seconds, where: + + DELAY = SENTINEL_CARDINALITY * 60 + +The cardinality of a sentinel is obtained by the sentinel ordering all the +known sentinels, including itself, lexicographically by ID. The first sentinel +has cardinality 0, the second 1, and so forth. + +This is useful in order to avoid that multiple sentinels will try to perform +the fail over at the same time. + +However if a sentinel will fail for some reason, within 60 seconds the next +one will try to perform the fail over. + +Anyway once the delay has elapsed, before performing the fail over, sentinels +make sure using the INFO command that none of the slaves was already switched +into a master by some other sentinel or any other external software +component (or the system administrator itself). + +Also the "SENTINEL NEWMASTER" command is send to all the other sentinels +by the sentinel that performed the failover (see later for details). + +Slave sanity checks before election +=== + +Once the fail over process starts, the sentinel performing the slave election +must be sure that the slave is functioning correctly. + +A master may have multiple slaves. A suitable candidate must be found. + +To do this, a sentinel will check all the salves in the order listed by +Redis in the INFO output (however it is likely that we'll introduce some way +to indicate that a slave is to be preferred to another). + +The slave must be functioning correctly (able to reply to PING with one of +the accepted replies), and the INFO command should show that it has been +disconnected by the master for no more than the specified number of seconds +in the Sentinel configuration. + +The first slave found to meet this conditions is selected as the candidate +to be elected to master. 
However to really be selected as a candidate the +configured number of sentinels must also agree on the reachability of the +slave (the sentinel will check this sending SENTINEL STATUS commands). + +Fail over process +=== + +The fail over process consists of the following steps: + +1) Check that no slave was already elected. +2) Find suitable slave. +3) Turn the slave into a master using the SLAVEOF NO ONE command. +4) Verify the state of the new master again using INFO. +5) Call an user script to inform the clients that the configuration changed. +6) Call an user script to notify the system administrator. +7) Turn all the remaining slaves, if any, to slaves of the new master. +8) Send a SENTINEL NEWMASTER command to all the reachable sentinels. +0) Start monitoring the new master. + +If Steps "1","2" or "3" fail, the fail over is aborted. +If Step "6" fails (the script returns non zero) the new master is contacted again and turned back into a slave of the previous master, and the fail over aborted. + +All the other errors are considered to be non-fatal. + +SENTINEL NEWMASTER command +== + +The SENTINEL NEWMASTER command reconfigures a sentinel to monitor a new master. +The effect is similar of completely restarting a sentinel against a new master. +If a fail over was scheduled by the sentinel it is cancelled as well. + +Sentinels monitoring other sentinels +=== + +When a sentinel no longer advertises itself using the Pub/Sub channel for too +much time (configurable), the other sentinels can send (if configured) a +notification to the system administrator to notify that a sentinel may be down. + +At the same time the sentinel is removed from the list of sentinels (but it +will be automatically re-added to this list once it starts advertising itself +again using Pub/Sub). + +User provided scripts +=== + +Sentinels call user-provided scripts to perform two tasks: + +1) Inform clients that the configuration changed. +2) Notify the system administrator of problems. 
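The scheduling and error-handling rules above can be put together in one sketch: the delay slot comes from the sentinel's lexicographic cardinality, and step failures follow the draft's policy. The draft text is ambiguous about whether the rollback applies to step 5 or step 6; step 5 (the client-reconfiguration script) is assumed here, matching the errata in a later revision of this spec.

```python
def failover_delay(my_run_id, known_run_ids, slot_seconds=60):
    # DELAY = SENTINEL_CARDINALITY * 60: position of this sentinel
    # among all known run IDs, sorted lexicographically, times 60s.
    ordered = sorted(set(known_run_ids) | {my_run_id})
    return ordered.index(my_run_id) * slot_seconds

FATAL_STEPS = {1, 2, 3}  # a failure here aborts the fail over outright
ROLLBACK_STEP = 5        # client reconfiguration script (assumed, see above)

def run_failover(steps):
    """Drive numbered fail over steps (callables returning True/False)."""
    for n in sorted(steps):
        if steps[n]():
            continue
        if n in FATAL_STEPS:
            return "aborted"
        if n == ROLLBACK_STEP:
            return "rolled back"  # demote the new master again
        # every other failure is non-fatal: keep going
    return "completed"
```

The staggered delay means that if the lowest-cardinality sentinel dies, the next one in the ordering picks up the fail over roughly a minute later.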
+ +The script to inform clients of a configuration change has the following parameters: + +1) ip:port of the calling Sentinel. +2) old master ip:port. +3) new master ip:port. + +The script to send notifications is called with the following parameters: + +1) ip:port of the calling Sentinel. +2) The message to deliver to the system administrator is passed writing to the standard input. + +Using the ip:port of the calling sentinel, scripts may call SENTINEL subcommands +to get more info if needed. + +Concrete implementations of notification scripts will likely use the "mail" +command or some other command to deliver SMS messages, emails, twitter direct +messages. + +Implementations of the script to modify the configuration in web applications +are likely to use HTTP GET requests to force clients to update the +configuration. + +Setup examples +=== + +Imaginary setup: + + computer A runs the Redis master. + computer B runs the Reids slave and the client software. + +In this naive configuration it is possible to place a single sentinel, with +"minimal agreement" set to the value of one (no acknowledge from other +sentinels needed), running on "B". + +If "A" will fail the fail over process will start, the slave will be elected +to master, and the client software will be reconfigured. + +Imaginary setup: + + computer A runs the Redis master + computer B runs the Redis slave + computer C,D,E,F,G are web servers acting as clients + +In this setup it is possible to run five sentinels placed at C,D,E,F,G with +"minimal agreement" set to 3. + +In real production environments there is to evaluate how the different +computers are networked together, and to check what happens during net splits +in order to select where to place the sentinels, and the level of minimal +agreement, so that a single arm of the network failing will not trigger a +fail over. 
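A skeleton for the client-reconfiguration script, following the argument order given above (the `ip:port` argument layout is as described in the draft; the function names and parsing are illustrative):

```python
def parse_endpoint(s):
    # rsplit keeps any colons inside the host part intact.
    host, port = s.rsplit(":", 1)
    return host, int(port)

def parse_reconfig_args(argv):
    """argv[1:4] = calling sentinel, old master, new master."""
    sentinel, old_master, new_master = (parse_endpoint(a) for a in argv[1:4])
    return sentinel, old_master, new_master
```

A concrete script would use the first endpoint to query the calling sentinel's `SENTINEL` subcommands for more detail, then push the new master address to its clients.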
+ +In general if a complex network topology is present, the minimal agreement +should be set to the max number of sentinels existing at the same time in +the same network arm, plus one. + + From 365413f4d9219da61c3a5bf67e834b54af31ac19 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 3 May 2012 17:52:03 +0200 Subject: [PATCH 0136/2880] Turn ASCII lists into markdown lists. --- topics/sentinel-spec.md | 62 ++++++++++++++++++++--------------------- 1 file changed, 31 insertions(+), 31 deletions(-) diff --git a/topics/sentinel-spec.md b/topics/sentinel-spec.md index 0e9dd0288c..d3aabd4bff 100644 --- a/topics/sentinel-spec.md +++ b/topics/sentinel-spec.md @@ -12,10 +12,10 @@ short time, preferrably in mid June 2012. In short this is what Redis Sentinel will be able to do: -1) Monitor master instances to see if they are available. -2) Promote a slave to master when the master fails. -3) Modify clients configurations when a slave is elected. -4) Inform the system administrator about incidents using notifications. +* Monitor master instances to see if they are available. +* Promote a slave to master when the master fails. +* Modify clients configurations when a slave is elected. +* Inform the system administrator about incidents using notifications. The following document explains what is the design of Redis Sentinel in order to accomplish this goals. @@ -74,11 +74,11 @@ master or slaves. The list of networking tasks performed by every sentinel is the following: -1) A Sentinel PUBLISH its presence using the master Pub/Sub every minute. -2) A Sentinel accepts commands using a TCP port. -3) A Sentinel constantly monitors master and slaves sending PING commands. -4) A Sentinel sends INFO commands to the master every minute in order to take a fresh list of connected slaves. -5) A Sentinel monitors the snetinels Pub/SUb channel in order to discover newly connected setninels. +* A Sentinel PUBLISH its presence using the master Pub/Sub every minute. 
+* A Sentinel accepts commands using a TCP port. +* A Sentinel constantly monitors master and slaves sending PING commands. +* A Sentinel sends INFO commands to the master every minute in order to take a fresh list of connected slaves. +* A Sentinel monitors the snetinels Pub/SUb channel in order to discover newly connected setninels. Sentinels discovering === @@ -110,15 +110,15 @@ the specified number of seconds, consecutively. For a PING reply to be considered valid, one of the following conditions should be true: -1) PING replied with +PONG. -2) PING replied with -LOADING error. -3) PING replied with -MASTERDOWN error. +* PING replied with +PONG. +* PING replied with -LOADING error. +* PING replied with -MASTERDOWN error. What is not considered an acceptable reply: -1) PING replied with -BUSY error. -2) PING replied with -MISCONF error. -3) PING reply not received after more than a specified number of milliseconds. +* PING replied with -BUSY error. +* PING replied with -MISCONF error. +* PING reply not received after more than a specified number of milliseconds. PING should never reply with a different error code than the ones listed above but any other error code is considered an acceptable reply by Redis Sentinel. @@ -191,15 +191,15 @@ Fail over process The fail over process consists of the following steps: -1) Check that no slave was already elected. -2) Find suitable slave. -3) Turn the slave into a master using the SLAVEOF NO ONE command. -4) Verify the state of the new master again using INFO. -5) Call an user script to inform the clients that the configuration changed. -6) Call an user script to notify the system administrator. -7) Turn all the remaining slaves, if any, to slaves of the new master. -8) Send a SENTINEL NEWMASTER command to all the reachable sentinels. -0) Start monitoring the new master. +* 1) Check that no slave was already elected. +* 2) Find suitable slave. +* 3) Turn the slave into a master using the SLAVEOF NO ONE command. 
+* 4) Verify the state of the new master again using INFO. +* 5) Call an user script to inform the clients that the configuration changed. +* 6) Call an user script to notify the system administrator. +* 7) Turn all the remaining slaves, if any, to slaves of the new master. +* 8) Send a SENTINEL NEWMASTER command to all the reachable sentinels. +* 0) Start monitoring the new master. If Steps "1","2" or "3" fail, the fail over is aborted. If Step "6" fails (the script returns non zero) the new master is contacted again and turned back into a slave of the previous master, and the fail over aborted. @@ -229,19 +229,19 @@ User provided scripts Sentinels call user-provided scripts to perform two tasks: -1) Inform clients that the configuration changed. -2) Notify the system administrator of problems. +* Inform clients that the configuration changed. +* Notify the system administrator of problems. The script to inform clients of a configuration change has the following parameters: -1) ip:port of the calling Sentinel. -2) old master ip:port. -3) new master ip:port. +* ip:port of the calling Sentinel. +* old master ip:port. +* new master ip:port. The script to send notifications is called with the following parameters: -1) ip:port of the calling Sentinel. -2) The message to deliver to the system administrator is passed writing to the standard input. +* ip:port of the calling Sentinel. +* The message to deliver to the system administrator is passed writing to the standard input. Using the ip:port of the calling sentinel, scripts may call SENTINEL subcommands to get more info if needed. From 3ea4bc73118d24ebb3a6ce4a364fd5be916f025a Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 3 May 2012 17:55:01 +0200 Subject: [PATCH 0137/2880] fix minor typo. 
--- topics/sentinel-spec.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/sentinel-spec.md b/topics/sentinel-spec.md index d3aabd4bff..69ff75c21c 100644 --- a/topics/sentinel-spec.md +++ b/topics/sentinel-spec.md @@ -199,7 +199,7 @@ The fail over process consists of the following steps: * 6) Call an user script to notify the system administrator. * 7) Turn all the remaining slaves, if any, to slaves of the new master. * 8) Send a SENTINEL NEWMASTER command to all the reachable sentinels. -* 0) Start monitoring the new master. +* 9) Start monitoring the new master. If Steps "1","2" or "3" fail, the fail over is aborted. If Step "6" fails (the script returns non zero) the new master is contacted again and turned back into a slave of the previous master, and the fail over aborted. From d712cc90ed4f557419ba0a055f3082bb108de062 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 4 May 2012 09:56:24 +0200 Subject: [PATCH 0138/2880] Sentinel spec updated to version 1.1 --- topics/sentinel-spec.md | 33 ++++++++++++++++++++++++++++++--- 1 file changed, 30 insertions(+), 3 deletions(-) diff --git a/topics/sentinel-spec.md b/topics/sentinel-spec.md index 69ff75c21c..9a35fab17d 100644 --- a/topics/sentinel-spec.md +++ b/topics/sentinel-spec.md @@ -1,4 +1,12 @@ -Redis Sentinel design draft +Redis Sentinel design draft 1.1 +=== + +Changelog: + +* 1.0 first version. +* 1.1 fail over steps modified: slaves are pointed to new master one after the other and not simultaneously. New section about monitoring slaves to ensure they are replicating correctly. + +Introduction === Redis Sentinel is the name of the Redis high availability solution that's @@ -197,8 +205,8 @@ The fail over process consists of the following steps: * 4) Verify the state of the new master again using INFO. * 5) Call an user script to inform the clients that the configuration changed. * 6) Call an user script to notify the system administrator. 
-* 7) Turn all the remaining slaves, if any, to slaves of the new master. -* 8) Send a SENTINEL NEWMASTER command to all the reachable sentinels. +* 7) Send a SENTINEL NEWMASTER command to all the reachable sentinels. +* 8) Turn all the remaining slaves, if any, to slaves of the new master. This is done incrementally, one slave after the other, waiting for the previous slave to complete the synchronization process before starting with the next one. * 9) Start monitoring the new master. If Steps "1","2" or "3" fail, the fail over is aborted. @@ -213,6 +221,25 @@ The SENTINEL NEWMASTER command reconfigures a sentinel to monitor a new master. The effect is similar of completely restarting a sentinel against a new master. If a fail over was scheduled by the sentinel it is cancelled as well. +Slaves monitoring +=== + +A successful fail over can be only performed if there is at least one slave +that contains a reasonably update version of the master dataset. +We perform this check before electing the slave using the INFO command +to check how many seconds elapsed since master and slave disconnected. + +However if there is a problem in the replication process (networking problem, +redis bug, a problem with the slave operating system, ...), when the master +fail we can be in the unhappy condition of not having a slave that's good +enough for the fail over. + +For this reason every sentinel also continuously monitors slaves as well, +checking if the replication is up. If the replication appears to be failing +for too long time (configurable), a notification is sent to the system +administrator that should make sure that slaves are correctly configured +and operational. 
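The per-slave replication check described above can be sketched against the fields Redis exposes in the replication section of INFO (`master_link_status` and `master_link_down_since_seconds`; treating these exact field names as the input is an assumption of this sketch). The threshold is the configurable limit the draft mentions:

```python
def replication_healthy(info, max_down_seconds):
    """`info` is the parsed INFO replication section of a slave."""
    if info.get("master_link_status") == "up":
        return True
    down = info.get("master_link_down_since_seconds")
    return down is not None and int(down) <= max_down_seconds
```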
+ Sentinels monitoring other sentinels === From 753505981dcfbcdb60a96def6651b39df27dc432 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 4 May 2012 10:28:40 +0200 Subject: [PATCH 0139/2880] Sentinel spec 1.2 --- topics/sentinel-spec.md | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/topics/sentinel-spec.md b/topics/sentinel-spec.md index 9a35fab17d..8cd34f952e 100644 --- a/topics/sentinel-spec.md +++ b/topics/sentinel-spec.md @@ -5,6 +5,7 @@ Changelog: * 1.0 first version. * 1.1 fail over steps modified: slaves are pointed to new master one after the other and not simultaneously. New section about monitoring slaves to ensure they are replicating correctly. +* 1.2 Fixed a typo in the fail over section about: critical error is in step 5 and not 6. Added TODO section. Introduction === @@ -210,7 +211,7 @@ The fail over process consists of the following steps: * 9) Start monitoring the new master. If Steps "1","2" or "3" fail, the fail over is aborted. -If Step "6" fails (the script returns non zero) the new master is contacted again and turned back into a slave of the previous master, and the fail over aborted. +If Step "5" fails (the script returns non zero) the new master is contacted again and turned back into a slave of the previous master, and the fail over aborted. All the other errors are considered to be non-fatal. @@ -315,4 +316,11 @@ In general if a complex network topology is present, the minimal agreement should be set to the max number of sentinels existing at the same time in the same network arm, plus one. +TODO +=== +* More detailed specification of user script error handling, including what return codes may mean, like 0: try again. 1: fatal error. 2: try again, and so forth. +* More detailed specification of what happens when an user script does not return in a given amount of time. +* Add a "push" notification system for configuration changes. 
+* Consider adding a "name" to every set of slaves / masters, so that clients can identify services by name. +* Make clear that we handle a single Sentinel monitoring multiple masters. From 798fa06cf2a6344d57e1efeb6d067f440d3c55a2 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 8 May 2012 00:11:49 +0200 Subject: [PATCH 0140/2880] Mass insert page added --- topics/mass-insert.md | 142 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 142 insertions(+) create mode 100644 topics/mass-insert.md diff --git a/topics/mass-insert.md b/topics/mass-insert.md new file mode 100644 index 0000000000..68faf6e8ef --- /dev/null +++ b/topics/mass-insert.md @@ -0,0 +1,142 @@ +Redis Mass Insertion +=== + +Sometimes Redis instances needs to be loaded with big amount of preexisting +or user generated data in a short amount of time, so that million of keys +will be created as fast as possible. + +This is called a *mass insertion*, and the goal of this document is to +provide information about how to feed Redis with data as fast as possible. + +Use the protocol, Luke +---------------------- + +Using a normal Redis client to perform mass insertion is not a good idea +for a few reasons: the naive approach of sending one command after the other +is slow because there is to pay the round trip time for every command. +It is possible to use pipelining, but for mass insertion of many records +you need to write new commands while you read replies at the same time to +make sure you are inserting as fast as possible. + +Only a small percentage of clients support non-blocking I/O, and not all the +clients are able to parse the replies in an efficient way in order to maximize +troughput. For all this reasons the preferred way to mass import data into +Redis is to generate a text file containing the Redis protocol, in raw format, +in order to call the commands needed to insert the required data. 
+ +For instance if I need to generate a large data set where there are billions +of keys in the form: `keyN -> ValueN` I will create a file containing the +following commands in the Redis protocol format: + + SET Key0 Value0 + SET Key1 Value1 + ... + SET KeyN ValueN + +Once this file is created, the remaining action is to feed it to Redis +as fast as possible. In the past the way to do this was to use the +following `netcat` with the following command: + + (cat data.txt; sleep 10) | nc localhost 6379 > /dev/null + +However this is not a very reliable way to perform mass import because netcat +does not really know when all the data was transferred and can't check for +errors. In the unstable branch of Redis at github the `redis-cli` utility +supports a new mode called **pipe mode** that was designed in order to perform +mass insertion. (This feature will be available in a few days in Redis 2.6-RC4 + and in Redis 2.4.14). + +Using the pipe mode the command to run looks like the following: + + cat data.txt | redis-cli --pipe + +That will produce an output similar to this: + + All data transferred. Waiting for the last reply... + Last reply received from server. + errors: 0, replies: 1000000 + +The redis-cli utility will also make sure to only redirect errors received +from the Redis instance to the standard output. + +Generating Redis Protocol +------------------------- + +The Redis protocol is extremely simple to generate and parse, and is +[Documented here](/topics/protocol). However in order to generate protocol for +the goal of mass insertion you don't need to understand every detail of the +protocol, but just that every command is represented in the following way: + + *<args><cr><lf> + $<len><cr><lf> + <arg0><cr><lf> + <arg1><cr><lf> + ... + <argN><cr><lf> + +Where <cr> means "\r" (or ASCII character 13) and <lf> means "\n" (or ASCII character 10).
+ +For instance the command **SET key value** is represented by the following protocol: + + *3 + $3 + SET + $3 + key + $5 + value + +Or represented as a quoted string: + + "*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n" + +The file you need to generate for mass insertion is just composed of commands +represented in the above way, one after the other. + +The following Ruby function generates valid protocol: + + def gen_redis_proto(*cmd) + proto = "" + proto << "*"+cmd.length.to_s+"\r\n" + cmd.each{|arg| + proto <<= "$"+arg.length.to_s+"\r\n" + proto <<= arg.to_s+"\r\n" + } + proto + end + + puts gen_redis_proto("SET","mykey","Hello World!").inspect + +Using the above function it is possible to easily generate the key value pairs +in the above example, with this program: + + (0...1000).each{|n| + STDOUT.write(gen_redis_proto("SET","Key#{n}","Value#{n}")) + } + +We can run the program directly in pipe to redis-cli in order to perform our +first mass import session. + + $ ruby proto.rb | ../src/redis-cli --pipe + All data transferred. Waiting for the last reply... + Last reply received from server. + errors: 0, replies: 1000 + +How the pipe mode works under the hoods +--------------------------------------- + +The magic needed inside the pipe mode of redis-cli is to be as fast as netcat +and still be able to understand when the last reply was sent by the server +at the same time. + +This is obtained in the following way: + ++ redis-cli --pipe tries to send data as fast as possible to the server. ++ At the same time it reads data when available, trying to parse it. ++ Once there is no more data to read from stdin, it sends a special **ECHO** command with a random 20 bytes string: we are sure this is the latest command sent, and we are sure we can match the reply checking if we receive the same 20 bytes as a bulk reply. ++ Once this special final command is sent, the code receiving replies starts to match replies with this 20 bytes. 
When the matching reply is reached it can exit with success. + +Using this trick we don't need to parse the protocol we send to the server in order to understand how many commands we are sending, but just the replies. + +However while parsing the replies we take a counter of all the replies parsed so that at the end we are able to tell the user the amount of commands transferred to the server by the mass insert session. + From abea92cf0ecd29848c4ef039b78910cd40c451a6 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 8 May 2012 00:13:33 +0200 Subject: [PATCH 0141/2880] Simplify command line in example. --- topics/mass-insert.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/mass-insert.md b/topics/mass-insert.md index 68faf6e8ef..c61bcc7c1f 100644 --- a/topics/mass-insert.md +++ b/topics/mass-insert.md @@ -117,7 +117,7 @@ in the above example, with this program: We can run the program directly in pipe to redis-cli in order to perform our first mass import session. - $ ruby proto.rb | ../src/redis-cli --pipe + $ ruby proto.rb | redis-cli --pipe All data transferred. Waiting for the last reply... Last reply received from server. errors: 0, replies: 1000 From b4fd5ee43ff3da1881d8a919a3474816458a320e Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 8 May 2012 08:57:22 +0200 Subject: [PATCH 0142/2880] Markdown syntax fix. --- topics/mass-insert.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/mass-insert.md b/topics/mass-insert.md index c61bcc7c1f..d5e7917b00 100644 --- a/topics/mass-insert.md +++ b/topics/mass-insert.md @@ -74,7 +74,7 @@ protocol, but just that every command is represented in the following way: ... -Where <cr> means "\r" (or ASCII character 13) and <lf> means "\n" (or ASCII character 10). +Where `<cr>` means "\r" (or ASCII character 13) and `<lf>` means "\n" (or ASCII character 10).
For instance the command **SET key value** is represented by the following protocol: From ee4664933ec4df7efb5d11a7301d399a0d0f9f6f Mon Sep 17 00:00:00 2001 From: Jan-Erik Rediger Date: Tue, 8 May 2012 11:03:16 +0200 Subject: [PATCH 0143/2880] Removed superfluous `following` and "ruby-fied" the code example. --- topics/mass-insert.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/mass-insert.md b/topics/mass-insert.md index d5e7917b00..82fa07f278 100644 --- a/topics/mass-insert.md +++ b/topics/mass-insert.md @@ -35,7 +35,7 @@ following commands in the Redis protocol format: Once this file is created, the remaining action is to feed it to Redis as fast as possible. In the past the way to do this was to use the -following `netcat` with the following command: +`netcat` with the following command: (cat data.txt; sleep 10) | nc localhost 6379 > /dev/null @@ -99,8 +99,8 @@ The following Ruby function generates valid protocol: proto = "" proto << "*"+cmd.length.to_s+"\r\n" cmd.each{|arg| - proto <<= "$"+arg.length.to_s+"\r\n" - proto <<= arg.to_s+"\r\n" + proto << "$"+arg.length.to_s+"\r\n" + proto << arg.to_s+"\r\n" } proto end From 656abd2599a40f456f6115c4cd51e5619dba79a2 Mon Sep 17 00:00:00 2001 From: Lance Lakey Date: Tue, 8 May 2012 12:50:12 -0700 Subject: [PATCH 0144/2880] latency topic grep process_id typo fixed --- topics/latency.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/latency.md b/topics/latency.md index 4ffff654e9..5a836d8de0 100644 --- a/topics/latency.md +++ b/topics/latency.md @@ -184,7 +184,7 @@ this is the case. The first thing to do is to checking the amount of Redis memory that is swapped on disk. 
In order to do so you need to obtain the Redis instance pid: - $ redis-cli info | grep redis-cli info | grep process_id + $ redis-cli info | grep process_id process_id:5454 Now enter the /proc file system directory for this process: From 064ede3479cc84253c3dc48d2a5b4e3b46bd3a6f Mon Sep 17 00:00:00 2001 From: Ludovic Demblans Date: Thu, 10 May 2012 17:02:27 +0200 Subject: [PATCH 0145/2880] fixed typo verison -> version --- commands.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands.json b/commands.json index 9285e8c092..bdf8bb504d 100644 --- a/commands.json +++ b/commands.json @@ -190,7 +190,7 @@ "group": "transactions" }, "DUMP": { - "summary": "Return a serialized verison of the value stored at the specified key.", + "summary": "Return a serialized version of the value stored at the specified key.", "complexity": "O(1) to access the key and additional O(N*M) to serialized it, where N is the number of Redis objects composing the value and M their average size. For small string values the time complexity is thus O(1)+O(1*M) where M is small, so simply O(1).", "arguments": [ { From f1c7240504a01a83f1b93562a1ffe54c09f28564 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 11 May 2012 12:52:59 +0200 Subject: [PATCH 0146/2880] New section about expires in the latency page. --- topics/latency.md | 24 +++++++++++++++++++++++- 1 file changed, 23 insertions(+), 1 deletion(-) diff --git a/topics/latency.md b/topics/latency.md index 4ffff654e9..e46dc90322 100644 --- a/topics/latency.md +++ b/topics/latency.md @@ -108,7 +108,6 @@ Additionally, you can use your favorite per-process monitoring program main Redis process. If it is high while the traffic is not, it is usually a sign that slow commands are used. 
- Latency generated by fork ------------------------- @@ -424,6 +423,29 @@ I use the following command: sudo strace -f -p $(pidof redis-server) -T -e trace=fdatasync,write 2>&1 | grep -v '0.0' | grep -v unfinished +Latency generated by expires +---------------------------- + +Redis evicts expired keys in two ways: + ++ One *lazy* way expires a key when it is requested by a command, but it is found to be already expired. ++ One *active* way expires a few keys every 100 milliseconds. + +The active expiring is designed to be adaptive. An expire cycle is started every 100 milliseconds (10 times per second), and will do the following: + ++ Sample `REDIS_EXPIRELOOKUPS_PER_CRON` keys, evicting all the keys already expired. ++ If more than 25% of the keys were found expired, repeat. + +Given that `REDIS_EXPIRELOOKUPS_PER_CRON` is set to 10 by default, and the process is performed ten times per second, usually just 100 keys per second are actively expired. This is enough to clean the DB fast enough even when already expired keys are not accessed for a long time, so that the *lazy* algorithm does not help. At the same time expiring just 100 keys per second has no effect on the latency of a Redis instance. + +However the algorithm is adaptive and will loop if it finds more than 25% of keys already expired in the set of sampled keys. But given that we run the algorithm ten times per second, this loop can only be triggered by the unlucky event that more than 25% of the keys in our random sample are expiring *in the same second*. + +Basically this means that **if the database has many keys expiring in the same second, and these keys are at least 25% of the current population of keys with an expire set**, Redis can block in order to get the percentage of already expired keys back under 25%.
+ +This approach is needed in order to avoid using too much memory for keys that area already expired, and usually is absolutely harmless since it's strange that a big number of keys are going to expire in the same exact second, but it is not impossible that the user used `EXPIREAT` extensively with the same Unix time. + +In short: be aware that many keys expiring at the same moment can be a source of latency. + Redis software watchdog --- From 91efa2d461797bf55b24c1c666038f3b774236e4 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 17 May 2012 18:31:14 +0200 Subject: [PATCH 0147/2880] BITOP and BITCOUNT documented. --- commands.json | 43 +++++++++++++++++++++++++++++++ commands/bitcount.md | 61 ++++++++++++++++++++++++++++++++++++++++++++ commands/bitop.md | 56 ++++++++++++++++++++++++++++++++++++++++ 3 files changed, 160 insertions(+) create mode 100644 commands/bitcount.md create mode 100644 commands/bitop.md diff --git a/commands.json b/commands.json index 9285e8c092..d2cfa88b48 100644 --- a/commands.json +++ b/commands.json @@ -36,6 +36,49 @@ "since": "1.0.0", "group": "server" }, + "BITCOUNT": { + "summary": "Count set bits in a string", + "complexity": "O(N)", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "name": "start", + "type": "integer", + "optional": true + }, + { + "name": "end", + "type": "integer", + "optional": true + } + ], + "since": "2.6.0", + "group": "string" + }, + "BITOP": { + "summary": "Perform bitwise opeations between strings", + "complexity": "O(N)", + "arguments": [ + { + "name": "destkey", + "type": "key", + }, + { + "name": "operation", + "type": "string", + }, + { + "name": "key", + "type": "key", + "multiple": true + } + ], + "since": "2.6.0", + "group": "list" + }, "BLPOP": { "summary": "Remove and get the first element in a list, or block until one is available", "complexity": "O(1)", diff --git a/commands/bitcount.md b/commands/bitcount.md new file mode 100644 index 0000000000..087fa78e5a --- /dev/null +++ 
b/commands/bitcount.md @@ -0,0 +1,61 @@ +Count the number of set bits (population counting) in a string. + +By default all the bytes contained in the string are examined. It is possible +to specifiy the counting operation only in an interval passing the additional +arguments *start* and *end*. + +Like for the `GETRANGE` command start and end can contain negative values +in order to index bytes starting from the end of the string, where -1 is the +last byte, -2 is the penultimate, and so forth. + +Non existing keys are treated as empty strings, so the command will return +zero. + +@return + +@integer-reply + +The number of bits set to 1. + +@examples + + @cli + SET mykey "foobar" + BITCOUNT mykey + BITCOUNT mykey 0 0 + BITCOUNT mykey 1 1 + +Pattern: real time metrics using bitmaps +--- + +Bitmaps are a very space efficient representation of certain kinds of +information. One example is a web application that needs to the history +of every user visits, so that for instance it is possible to understand what +users are good targets of beta features, or for any other purpose. + +Using the `SETBIT` command this is trivial to accomplish, identifying every +day with a small progressive integer. For instance day 0 is the first day +the application was put online, day 1 the next day, and so forth. + +Every time an user performs a page view, the application can register that +in the current day the user visited the web site using the `SETBIT` command +setting the bit corresponding to the current day. + +Later it will be trivial to know the number of single days the user visited +the web site simply calling the `BITCOUNT` command against the bitmap. + +A similar pattern where user IDs are used instead of days is described +in the article [Fast easy realtime metrics usign Redis bitmaps](http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps/). 
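The day-indexed visit bitmap described above can be simulated locally in Ruby, with no server, by reimplementing the bit semantics on a plain byte string (Redis addresses bit 0 as the most significant bit of the first byte). This is an illustrative sketch of the pattern, not how Redis stores bitmaps internally.

```ruby
# Local simulation of the day-indexed visit bitmap: SETBIT/BITCOUNT
# reimplemented on a plain Ruby byte string, so the sketch runs without a
# server. Bit numbering follows Redis: bit 0 is the most significant bit
# of the first byte, and the string grows as needed.
def setbit(str, offset, value)
  byte, bit = offset / 8, 7 - (offset % 8)
  str << ("\x00" * (byte - str.bytesize + 1)) if str.bytesize <= byte
  old = str.getbyte(byte)
  str.setbyte(byte, value == 1 ? old | (1 << bit) : old & ~(1 << bit))
  str
end

def bitcount(str)
  str.each_byte.sum { |b| b.to_s(2).count("1") }
end

# Day 0 is the day the application went online; set the bit for each
# day this (hypothetical) user visited the site.
visits = "".b
[0, 1, 7, 30].each { |day| setbit(visits, day, 1) }
```

Calling `bitcount(visits)` then counts the number of distinct days the user visited, exactly as `BITCOUNT` would against the real key.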
+ +Performance considerations +--- + +In the above example of counting days, even after 10 years the application +is online we still have just `365*10` bits of data per user, that is +just 456 bytes per user. With this amount of data `BITCOUNT` is still as fast +as any other O(1) Redis command like `GET` or `INCR`. + +When the bitmap is big, there are two alternatives: + ++ Taking a separated key that is incremented every time the bitmap is modified. This can be very efficient and atomic using a small Redis Lua script. ++ Running the bitmap incrementally using the `BITCOUNT` *start* and *end* optional parameters, accumulating the results client-side, and optionally caching the result into a key. diff --git a/commands/bitop.md b/commands/bitop.md new file mode 100644 index 0000000000..0ead7fa1ac --- /dev/null +++ b/commands/bitop.md @@ -0,0 +1,56 @@ +Perform a bitwise operation between multiple keys (containing string +values) and store the result in the destionation key. + +The `BITOP` command supports four bitwise operations: **AND**, **OR**, **XOR** and **NOT**, thus the valid forms to call the command are: + ++ BITOP AND *destkey srckey1 srckey2 srckey3 ... srckeyN* ++ BITOP OR *destkey srckey1 srckey2 srckey3 ... srckeyN* ++ BITOP XOR *destkey srckey1 srckey2 srckey3 ... srckeyN* ++ BITOP NOT *destkey srckey* + +As you can see **NOT** is special as it only takes an input key, because it +performs invertion of bits so it only makes sense as an unary operator. + +The result of the operation is always stored at *destkey*. + +Handling of strings with different lengths +--- + +When an operation is performed between strings having different lengths, all +the strings shorter than the longest string in the set are treated as if +they were zero-padded up to the length of the longest string. + +The same holds true for non-existing keys, that are considered as a stream of +zero bytes up to the length of the longest string. 
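The zero-padding rule above is easy to check with a local sketch: a pure-Ruby `AND` over byte strings where shorter inputs behave as if extended with zero bytes. This illustrates the documented semantics only; it is not Redis code.

```ruby
# Pure-Ruby model of BITOP AND with the documented zero-padding rule:
# inputs shorter than the longest one behave as if padded with "\x00"
# bytes up to the longest length.
def bitop_and(*strings)
  length = strings.map(&:bytesize).max
  padded = strings.map { |s| s.b.ljust(length, "\x00") }
  result = "".b
  length.times do |i|
    result << padded.map { |s| s.getbyte(i) }.reduce(:&).chr
  end
  result
end
```

The result length always equals the longest input, matching the reply of the real command.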
+ +@return + +@integer-reply + +The size of the string stored into the destination key, that is equal to the size of the longest input string. + +@examples + + @cli + SET key1 "foobar" + SET key2 "abcdef" + BITOP AND dest key1 key2 + GET dest + +Pattern: real time metrics using bitmaps +--- + +`BITOP` is a good complement to the pattern documented in the `BITCOUNT` command documentation. Different bitmaps can be combined in order to obtain a target +bitmap where to perform the population counting operation. + +See the article [Fast easy realtime metrics usign Redis bitmaps](http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps/) for an interesting use cases. + +Performance considerations +--- + +`BITOP` is a potentially slow command as it runs in O(N) time. +Care should be taken when running it against long input strings. + +For real time metrics and statistics involving large inputs a good approach +is to use a slave (with read-only option disabled) where to perform the +bit-wise operations without blocking the master instance. From c8d1313f2b3ed9560e15e9b9457883bf13815eed Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 17 May 2012 18:32:33 +0200 Subject: [PATCH 0148/2880] JSON typo fixed. 
--- commands.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands.json b/commands.json index d2cfa88b48..5645720306 100644 --- a/commands.json +++ b/commands.json @@ -64,11 +64,11 @@ "arguments": [ { "name": "destkey", - "type": "key", + "type": "key" }, { "name": "operation", - "type": "string", + "type": "string" }, { "name": "key", From 395f901e1bc6e06962123a6f5368aa0f270a3014 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 17 May 2012 18:34:56 +0200 Subject: [PATCH 0149/2880] typo --- commands/bitcount.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands/bitcount.md b/commands/bitcount.md index 087fa78e5a..15551cc46c 100644 --- a/commands/bitcount.md +++ b/commands/bitcount.md @@ -29,8 +29,8 @@ Pattern: real time metrics using bitmaps --- Bitmaps are a very space efficient representation of certain kinds of -information. One example is a web application that needs to the history -of every user visits, so that for instance it is possible to understand what +information. One example is a web application that needs the history +of user visits, so that for instance it is possible to determine what users are good targets of beta features, or for any other purpose. 
Using the `SETBIT` command this is trivial to accomplish, identifying every From 33e4c39756afbde6430df8e8782a86f147c3b957 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 17 May 2012 21:00:21 +0200 Subject: [PATCH 0150/2880] typo --- commands.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands.json b/commands.json index 5645720306..1cf880efb0 100644 --- a/commands.json +++ b/commands.json @@ -59,7 +59,7 @@ "group": "string" }, "BITOP": { - "summary": "Perform bitwise opeations between strings", + "summary": "Perform bitwise operations between strings", "complexity": "O(N)", "arguments": [ { From 9ab59dcb0a1db6d2d47b4e63e4e09d625b2feaec Mon Sep 17 00:00:00 2001 From: Peter Taoussanis Date: Sat, 19 May 2012 18:34:07 +0700 Subject: [PATCH 0151/2880] Added Clojure client: "Carmine". --- clients.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/clients.json b/clients.json index 0841c205e6..63870b2a20 100644 --- a/clients.json +++ b/clients.json @@ -24,6 +24,14 @@ "description": "", "authors": ["tavisrudd"] }, + + { + "name": "Carmine", + "language": "Clojure", + "repository": "https://github.com/ptaoussanis/carmine", + "description": "Deliberately simple, high-performance Redis (2.0+) client for Clojure.", + "authors": ["ptaoussanis"] + }, { "name": "CL-Redis", From 38a312b2a49019d26bf8796e7bffdc0c77ca93a1 Mon Sep 17 00:00:00 2001 From: huangz1990 Date: Fri, 25 May 2012 11:54:19 +0800 Subject: [PATCH 0152/2880] fix BITOP command --- commands.json | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/commands.json b/commands.json index 1cf880efb0..f00a7a6b33 100644 --- a/commands.json +++ b/commands.json @@ -62,14 +62,14 @@ "summary": "Perform bitwise operations between strings", "complexity": "O(N)", "arguments": [ - { - "name": "destkey", - "type": "key" - }, { "name": "operation", "type": "string" }, + { + "name": "destkey", + "type": "key" + }, { "name": "key", "type": "key", @@ -77,7 
+77,7 @@ } ], "since": "2.6.0", - "group": "list" + "group": "string" }, "BLPOP": { "summary": "Remove and get the first element in a list, or block until one is available", From 35d07b345bb6e945acab2111c3f9efb601454119 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 25 May 2012 10:34:21 +0200 Subject: [PATCH 0153/2880] Dart client added. --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index 0841c205e6..a41948ebf2 100644 --- a/clients.json +++ b/clients.json @@ -338,6 +338,15 @@ "authors": ["chakrit"] }, + { + "name": "DartRedisClient", + "language": "Dart", + "url": "https://github.com/mythz/DartRedisClient", + "description": "A high-performance async/non-blocking Redis client for Dart", + "authors": ["demisbellot"], + "recommended": true + }, + { "name": "hxneko-redis", "language": "haXe", From c075de2808ac594398b2e6753510da2c7e2f7840 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 25 May 2012 20:06:18 +0200 Subject: [PATCH 0154/2880] Redis setup hints. --- topics/admin.md | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/topics/admin.md b/topics/admin.md index e1f34f2e2a..f507051e1b 100644 --- a/topics/admin.md +++ b/topics/admin.md @@ -4,6 +4,17 @@ Redis Administration This page contains topics related to the administration of Redis instances. Every topic is self contained in form of a FAQ. New topics will be created in the future. +Redis setup hints +----------------- + ++ We suggest deploying Redis using the **Linux operating system**. Redis is also tested heavily on osx, and tested from time to time on FreeBSD and OpenBSD systems. However Linux is where we do all the major stress testing, and where most production deployments are working. ++ Make sure to set the Linux kernel **overcommit memory setting to 1**. Add `vm.overcommit_memory = 1` to `/etc/sysctl.conf` and then reboot or run the command `sysctl vm.overcommit_memory=1` for this to take effect immediately. 
++ Make sure to **set up some swap** in your system (we suggest as much swap as memory). If Linux does not have swap and your Redis instance accidentally consumes too much memory, either Redis will crash due to out of memory or the Linux kernel OOM killer will kill the Redis process. ++ If you are using Redis in a very write-heavy application, while saving an RDB file on disk or rewriting the AOF log **Redis may use up to 2 times the memory normally used**. The additional memory used is proportional to the number of memory pages modified by writes during the saving process, so it is often proportional to the number of keys (or aggregate type items) touched during this time. Make sure to size your memory accordingly. ++ Even if you have persistence disabled, Redis will need to perform RDB saves if you use replication. ++ The use of Redis persistence with **EC2 EBS volumes is discouraged** since EBS performance is usually poor. Use ephemeral storage to persist and then move your persistence files to EBS when possible. ++ If you are deploying using a virtual machine that uses the **Xen hypervisor you may experience slow fork() times**. This may block Redis from a few milliseconds up to a few seconds depending on the dataset size. Check the [latency page](/topics/latency) for more information. This problem is not common to other hypervisors. + Upgrading or restarting a Redis instance without downtime ------------------------------------------------------- From 2d5132b240d88654692b41b96651353ba21cbf65 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 25 May 2012 20:56:03 +0200 Subject: [PATCH 0155/2880] Added a new administration hint. --- topics/admin.md | 1 + 1 file changed, 1 insertion(+) diff --git a/topics/admin.md b/topics/admin.md index f507051e1b..1baa94adf9 100644 --- a/topics/admin.md +++ b/topics/admin.md @@ -14,6 +14,7 @@ Redis setup hints + Even if you have persistence disabled, Redis will need to perform RDB saves if you use replication.
+ The use of Redis persistence with **EC2 EBS volumes is discouraged** since EBS performance is usually poor. Use ephemeral storage to persist and then move your persistence files to EBS when possible. + If you are deploying using a virtual machine that uses the **Xen hypervisor you may experience slow fork() times**. This may block Redis from a few milliseconds up to a few seconds depending on the dataset size. Check the [latency page](/topics/latency) for more information. This problem is not common to other hypervisors. ++ Use `daemonize no` when run under daemontools. Upgrading or restarting a Redis instance without downtime ------------------------------------------------------- From a7838a1a3806294ff9b86177190f9bbe53966990 Mon Sep 17 00:00:00 2001 From: quiver Date: Sat, 9 Jun 2012 14:12:00 +0900 Subject: [PATCH 0156/2880] fix typos --- commands/bitcount.md | 4 ++-- commands/bitop.md | 6 +++--- topics/mass-insert.md | 2 +- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/commands/bitcount.md b/commands/bitcount.md index 15551cc46c..71e968ab14 100644 --- a/commands/bitcount.md +++ b/commands/bitcount.md @@ -1,7 +1,7 @@ Count the number of set bits (population counting) in a string. By default all the bytes contained in the string are examined. It is possible -to specifiy the counting operation only in an interval passing the additional +to specify the counting operation only in an interval passing the additional arguments *start* and *end*. Like for the `GETRANGE` command start and end can contain negative values @@ -45,7 +45,7 @@ Later it will be trivial to know the number of single days the user visited the web site simply calling the `BITCOUNT` command against the bitmap. A similar pattern where user IDs are used instead of days is described -in the article [Fast easy realtime metrics usign Redis bitmaps](http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps/). 
+in the article [Fast easy realtime metrics using Redis bitmaps](http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps/). Performance considerations --- diff --git a/commands/bitop.md b/commands/bitop.md index 0ead7fa1ac..b30685e333 100644 --- a/commands/bitop.md +++ b/commands/bitop.md @@ -1,5 +1,5 @@ Perform a bitwise operation between multiple keys (containing string -values) and store the result in the destionation key. +values) and store the result in the destination key. The `BITOP` command supports four bitwise operations: **AND**, **OR**, **XOR** and **NOT**, thus the valid forms to call the command are: @@ -9,7 +9,7 @@ The `BITOP` command supports four bitwise operations: **AND**, **OR**, **XOR** a + BITOP NOT *destkey srckey* As you can see **NOT** is special as it only takes an input key, because it -performs invertion of bits so it only makes sense as an unary operator. +performs inversion of bits so it only makes sense as an unary operator. The result of the operation is always stored at *destkey*. @@ -43,7 +43,7 @@ Pattern: real time metrics using bitmaps `BITOP` is a good complement to the pattern documented in the `BITCOUNT` command documentation. Different bitmaps can be combined in order to obtain a target bitmap where to perform the population counting operation. -See the article [Fast easy realtime metrics usign Redis bitmaps](http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps/) for an interesting use cases. +See the article [Fast easy realtime metrics using Redis bitmaps](http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps/) for an interesting use cases. Performance considerations --- diff --git a/topics/mass-insert.md b/topics/mass-insert.md index 82fa07f278..641cf8c98d 100644 --- a/topics/mass-insert.md +++ b/topics/mass-insert.md @@ -20,7 +20,7 @@ make sure you are inserting as fast as possible. 
Only a small percentage of clients support non-blocking I/O, and not all the clients are able to parse the replies in an efficient way in order to maximize -troughput. For all this reasons the preferred way to mass import data into +throughput. For all this reasons the preferred way to mass import data into Redis is to generate a text file containing the Redis protocol, in raw format, in order to call the commands needed to insert the required data. From c0cc69673bd12ebe91624a738bdb224a1176a257 Mon Sep 17 00:00:00 2001 From: quiver Date: Sat, 9 Jun 2012 15:28:50 +0900 Subject: [PATCH 0157/2880] beef up slowlog parameter description --- commands/slowlog.md | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/commands/slowlog.md b/commands/slowlog.md index bf2a1b36c6..a993432318 100644 --- a/commands/slowlog.md +++ b/commands/slowlog.md @@ -9,10 +9,13 @@ but just the time needed to actually execute the command (this is the only stage of command execution where the thread is blocked and can not serve other requests in the meantime). -You can configure the slow log with two parameters: one tells Redis +You can configure the slow log with two parameters: +*slowlog-log-slower-than* tells Redis what is the execution time, in microseconds, to exceed in order for the -command to get logged, and the other parameter is the length of the -slow log. When a new command is logged and the slow log is already at its +command to get logged. Note that a negative number disables the slow log, +while a value of zero forces the logging of every command. +*slowlog-max-len* is the length of the slow log. The minimum value is zero. +When a new command is logged and the slow log is already at its maximum length, the oldest one is removed from the queue of logged commands in order to make space. 
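The interaction of the two slow log parameters can be modeled in a few lines of Ruby. This toy model only mirrors the documented behavior (a negative threshold disables logging, zero logs everything, and the oldest entry is evicted at the maximum length); the real C implementation differs.

```ruby
# Toy model of the two slow log parameters described above:
# slowlog-log-slower-than (negative disables, zero logs everything)
# and slowlog-max-len (oldest entries evicted when the log is full).
class SlowLog
  attr_reader :entries

  def initialize(slower_than_usec:, max_len:)
    @slower_than = slower_than_usec
    @max_len = max_len
    @entries = []
  end

  def record(command, duration_usec)
    return if @slower_than < 0              # negative: slow log disabled
    return if duration_usec < @slower_than  # too fast to be logged
    @entries << [command, duration_usec]
    @entries.shift while @entries.length > @max_len  # drop the oldest
  end
end
```

With `slower_than_usec: 100, max_len: 2`, logging four commands taking 50, 500, 300 and 900 microseconds leaves only the two most recent slow ones in the log.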
From bf000b5c7e7b766426db496a651619780e2efd54 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Tue, 12 Jun 2012 09:30:48 -0700 Subject: [PATCH 0158/2880] Update MONITOR doc with its current output format --- commands/monitor.md | 71 ++++++++++++++++++++++++++------------------- 1 file changed, 41 insertions(+), 30 deletions(-) diff --git a/commands/monitor.md b/commands/monitor.md index 78e773d8f7..f564002c19 100644 --- a/commands/monitor.md +++ b/commands/monitor.md @@ -1,34 +1,45 @@ -`MONITOR` is a debugging command that outputs the whole sequence of commands -received by the Redis server. is very handy in order to understand -what is happening into the database. This command is used directly -via telnet. - % telnet 127.0.0.1 6379 - Trying 127.0.0.1... - Connected to segnalo-local.com. - Escape character is '^]'. - MONITOR - +OK - monitor - keys * - dbsize - set x 6 - foobar - get x - del x - get x - set key_x 5 - hello - set key_y 5 - hello - set key_z 5 - hello - set foo_a 5 - hello -The ability to see all the requests processed by the server is useful in order -to spot bugs in the application both when using Redis as a database and as -a distributed caching system. +`MONITOR` is a debugging command that streams back every command +processed by the Redis server. It can help in understanding what is +happening to the database. This command can both be used via `redis-cli` +and via `telnet`. -In order to end a monitoring session just issue a `QUIT` command by hand. +The ability to see all the requests processed by the server is useful in +order to spot bugs in an application both when using Redis as a database +and as a distributed caching system. 
+ +``` +$ redis-cli monitor +1339518083.107412 [0 127.0.0.1:60866] "keys" "*" +1339518087.877697 [0 127.0.0.1:60866] "dbsize" +1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6" +1339518096.506257 [0 127.0.0.1:60866] "get" "x" +1339518099.363765 [0 127.0.0.1:60866] "del" "x" +1339518100.544926 [0 127.0.0.1:60866] "get" "x" +``` + +Use `SIGINT` (Ctrl-C) to stop a `MONITOR` stream running via +`redis-cli`. + +``` +$ telnet localhost 6379 +Trying 127.0.0.1... +Connected to localhost. +Escape character is '^]'. +MONITOR ++OK ++1339518083.107412 [0 127.0.0.1:60866] "keys" "*" ++1339518087.877697 [0 127.0.0.1:60866] "dbsize" ++1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6" ++1339518096.506257 [0 127.0.0.1:60866] "get" "x" ++1339518099.363765 [0 127.0.0.1:60866] "del" "x" ++1339518100.544926 [0 127.0.0.1:60866] "get" "x" +QUIT ++OK +Connection closed by foreign host. +``` + +Manually issue the `QUIT` command to stop a `MONITOR` stream running via +`telnet`. @return From 13ac220b7027a18ecf13feb4e86f96ce2894fff8 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Tue, 12 Jun 2012 09:38:08 -0700 Subject: [PATCH 0159/2880] We don't do the GitHub flavored Markdown --- commands/monitor.md | 48 +++++++++++++++++++++------------------------ 1 file changed, 22 insertions(+), 26 deletions(-) diff --git a/commands/monitor.md b/commands/monitor.md index f564002c19..09a66b040c 100644 --- a/commands/monitor.md +++ b/commands/monitor.md @@ -7,36 +7,32 @@ The ability to see all the requests processed by the server is useful in order to spot bugs in an application both when using Redis as a database and as a distributed caching system. 
-``` -$ redis-cli monitor -1339518083.107412 [0 127.0.0.1:60866] "keys" "*" -1339518087.877697 [0 127.0.0.1:60866] "dbsize" -1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6" -1339518096.506257 [0 127.0.0.1:60866] "get" "x" -1339518099.363765 [0 127.0.0.1:60866] "del" "x" -1339518100.544926 [0 127.0.0.1:60866] "get" "x" -``` + $ redis-cli monitor + 1339518083.107412 [0 127.0.0.1:60866] "keys" "*" + 1339518087.877697 [0 127.0.0.1:60866] "dbsize" + 1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6" + 1339518096.506257 [0 127.0.0.1:60866] "get" "x" + 1339518099.363765 [0 127.0.0.1:60866] "del" "x" + 1339518100.544926 [0 127.0.0.1:60866] "get" "x" Use `SIGINT` (Ctrl-C) to stop a `MONITOR` stream running via `redis-cli`. -``` -$ telnet localhost 6379 -Trying 127.0.0.1... -Connected to localhost. -Escape character is '^]'. -MONITOR -+OK -+1339518083.107412 [0 127.0.0.1:60866] "keys" "*" -+1339518087.877697 [0 127.0.0.1:60866] "dbsize" -+1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6" -+1339518096.506257 [0 127.0.0.1:60866] "get" "x" -+1339518099.363765 [0 127.0.0.1:60866] "del" "x" -+1339518100.544926 [0 127.0.0.1:60866] "get" "x" -QUIT -+OK -Connection closed by foreign host. -``` + $ telnet localhost 6379 + Trying 127.0.0.1... + Connected to localhost. + Escape character is '^]'. + MONITOR + +OK + +1339518083.107412 [0 127.0.0.1:60866] "keys" "*" + +1339518087.877697 [0 127.0.0.1:60866] "dbsize" + +1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6" + +1339518096.506257 [0 127.0.0.1:60866] "get" "x" + +1339518099.363765 [0 127.0.0.1:60866] "del" "x" + +1339518100.544926 [0 127.0.0.1:60866] "get" "x" + QUIT + +OK + Connection closed by foreign host. Manually issue the `QUIT` command to stop a `MONITOR` stream running via `telnet`. 
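The `MONITOR` output format shown above — a Unix timestamp with microseconds, the database number and client address in brackets, then the command arguments as quoted strings — is easy to consume programmatically. Below is a sketch of a parser for that format; it is not part of `redis-cli`, and the function name and regular expressions are my own.

```python
import re

# Parses the MONITOR line format documented above:
#   <unix time with microseconds> [<db> <addr>] "arg" "arg" ...
LINE = re.compile(r'^(\d+\.\d+) \[(\d+) ([^ \]]+)\] (.*)$')

def parse_monitor_line(line):
    m = LINE.match(line)
    if not m:
        raise ValueError("not a MONITOR line: %r" % line)
    ts, db, addr, rest = m.groups()
    # Each argument is double-quoted; allow backslash escapes inside.
    args = re.findall(r'"((?:\\.|[^"\\])*)"', rest)
    return float(ts), int(db), addr, args

ts, db, addr, args = parse_monitor_line(
    '1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6"')
print(db, addr, args)  # 0 127.0.0.1:60866 ['set', 'x', '6']
```

A tool built this way can, for example, tail `redis-cli monitor` and aggregate command frequencies per client address.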
From bd1e97f156510877592049072961d3a52a1305d9 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Tue, 12 Jun 2012 09:48:20 -0700 Subject: [PATCH 0160/2880] Cost of running MONITOR --- commands/monitor.md | 29 +++++++++++++++++++++++++++++ 1 file changed, 29 insertions(+) diff --git a/commands/monitor.md b/commands/monitor.md index 09a66b040c..16133672b1 100644 --- a/commands/monitor.md +++ b/commands/monitor.md @@ -37,6 +37,35 @@ Use `SIGINT` (Ctrl-C) to stop a `MONITOR` stream running via Manually issue the `QUIT` command to stop a `MONITOR` stream running via `telnet`. +## Cost of running `MONITOR` + +Because `MONITOR` streams back **all** commands, its use comes at a +cost. The following (totally unscientific) benchmark numbers illustrate +what the cost of running `MONITOR` can be. + +Benchmark result **without** `MONITOR` running: + + $ src/redis-benchmark -c 10 -n 100000 -q + PING_INLINE: 101936.80 requests per second + PING_BULK: 102880.66 requests per second + SET: 95419.85 requests per second + GET: 104275.29 requests per second + INCR: 93283.58 requests per second + +Benchmark result **with** `MONITOR` running (`redis-cli monitor > +/dev/null`): + + $ src/redis-benchmark -c 10 -n 100000 -q + PING_INLINE: 58479.53 requests per second + PING_BULK: 59136.61 requests per second + SET: 41823.50 requests per second + GET: 45330.91 requests per second + INCR: 41771.09 requests per second + +In this particular case, running a single `MONITOR` client can reduce +the throughput by more than 50%. Running more `MONITOR` clients will +reduce throughput even more. 
+ @return **Non standard return value**, just dumps the received commands in an infinite From d1ec751966497301acf5567d9d33e07e569de348 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 18 Jun 2012 09:39:40 -0700 Subject: [PATCH 0161/2880] Add EVALSHA --- commands.json | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+) diff --git a/commands.json b/commands.json index f00a7a6b33..bce3985277 100644 --- a/commands.json +++ b/commands.json @@ -281,6 +281,32 @@ "since": "2.6.0", "group": "scripting" }, + "EVALSHA": { + "summary": "Execute a Lua script server side", + "complexity": "Looking up the script both with EVAL or EVALSHA is an O(1) business. The additional complexity is up to the script you execute.", + "arguments": [ + { + "name": "sha1", + "type": "string" + }, + { + "name": "numkeys", + "type": "integer" + }, + { + "name": "key", + "type": "key", + "multiple": true + }, + { + "name": "arg", + "type": "string", + "multiple": true + } + ], + "since": "2.6.0", + "group": "scripting" + }, "EXEC": { "summary": "Execute all commands issued after MULTI", "since": "1.2.0", From cb0770392f43666f7174a5da359a206cbdf35e2f Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 18 Jun 2012 09:40:40 -0700 Subject: [PATCH 0162/2880] Don't mention script lookup in complexity description --- commands.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands.json b/commands.json index bce3985277..ed91e3e82c 100644 --- a/commands.json +++ b/commands.json @@ -257,7 +257,7 @@ }, "EVAL": { "summary": "Execute a Lua script server side", - "complexity": "Looking up the script both with EVAL or EVALSHA is an O(1) business. 
The additional complexity is up to the script you execute.", + "complexity": "Depends on the script that is executed.", "arguments": [ { "name": "script", @@ -283,7 +283,7 @@ }, "EVALSHA": { "summary": "Execute a Lua script server side", - "complexity": "Looking up the script both with EVAL or EVALSHA is an O(1) business. The additional complexity is up to the script you execute.", + "complexity": "Depends on the script that is executed.", "arguments": [ { "name": "sha1", From f6f79442abbc199710f328176badcdd7adc97192 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 18 Jun 2012 10:12:35 -0700 Subject: [PATCH 0163/2880] Fix typos --- commands/append.md | 6 +++--- commands/config get.md | 4 ++-- commands/config set.md | 14 +++++++------- commands/decr.md | 2 +- commands/decrby.md | 2 +- commands/eval.md | 4 ++-- commands/expire.md | 20 ++++++++++---------- commands/incr.md | 6 +++--- commands/incrby.md | 2 +- commands/incrbyfloat.md | 2 +- commands/migrate.md | 4 ++-- commands/pexpireat.md | 2 +- commands/pttl.md | 2 +- commands/restore.md | 2 +- commands/script exists.md | 2 +- commands/script flush.md | 2 +- commands/script kill.md | 4 ++-- commands/script load.md | 2 +- commands/slowlog.md | 2 +- commands/time.md | 4 ++-- wordlist | 25 +++++++++++++++++++++++++ 21 files changed, 69 insertions(+), 44 deletions(-) diff --git a/commands/append.md b/commands/append.md index 216c077fe7..6f7d59ffd4 100644 --- a/commands/append.md +++ b/commands/append.md @@ -23,7 +23,7 @@ Every time a new sample arrives we can store it using the command APPEND timeseries "fixed-size sample" -Accessing to individual elements in the time serie is not hard: +Accessing individual elements in the time series is not hard: * `STRLEN` can be used in order to obtain the number of samples. * `GETRANGE` allows for random access of elements. 
If our time series have an associated time information we can easily implement a binary search to get range combining `GETRANGE` with the Lua scripting engine available in Redis 2.6. @@ -31,8 +31,8 @@ Accessing to individual elements in the time serie is not hard: The limitations of this pattern is that we are forced into an append-only mode of operation, there is no way to cut the time series to a given size easily because Redis currently lacks a command able to trim string objects. However the space efficiency of time series stored in this way is remarkable. -Hint: it is possible to switch to a different key based on the current unix time, in this way it is possible to have just a relatively small amount of samples per key, to avoid dealing with very big keys, and to make this pattern more -firendly to be distributed across many Redis instances. +Hint: it is possible to switch to a different key based on the current Unix time, in this way it is possible to have just a relatively small amount of samples per key, to avoid dealing with very big keys, and to make this pattern more +friendly to be distributed across many Redis instances. An example sampling the temperature of a sensor using fixed-size strings (using a binary format is better in real implementations). diff --git a/commands/config get.md b/commands/config get.md index e3d93eb768..41c1703a5c 100644 --- a/commands/config get.md +++ b/commands/config get.md @@ -24,10 +24,10 @@ You can obtain a list of all the supported configuration parameters typing All the supported parameters have the same meaning of the equivalent configuration parameter used in the [redis.conf](http://github.com/antirez/redis/raw/2.2/redis.conf) file, with the following important differences: -* Where bytes or other quantities are specified, it is not possible to use the redis.conf abbreviated form (10k 2gb ... 
and so forth), everything should be specified as a well formed 64 bit integer, in the base unit of the configuration directive. +* Where bytes or other quantities are specified, it is not possible to use the `redis.conf` abbreviated form (10k 2gb ... and so forth), everything should be specified as a well formed 64 bit integer, in the base unit of the configuration directive. * The save parameter is a single string of space separated integers. Every pair of integers represent a seconds/modifications threshold. -For instance what in redis.conf looks like: +For instance what in `redis.conf` looks like: save 900 1 save 300 10 diff --git a/commands/config set.md b/commands/config set.md index b59e683291..bfdec9abcb 100644 --- a/commands/config set.md +++ b/commands/config set.md @@ -1,4 +1,4 @@ -The `CONFIG SET` command is used in order to reconfigure the server at runtime +The `CONFIG SET` command is used in order to reconfigure the server at run time without the need to restart Redis. You can change both trivial parameters or switch from one to another persistence option using this command. @@ -14,10 +14,10 @@ executed. All the supported parameters have the same meaning of the equivalent configuration parameter used in the [redis.conf](http://github.com/antirez/redis/raw/2.2/redis.conf) file, with the following important differences: -* Where bytes or other quantities are specified, it is not possible to use the redis.conf abbreviated form (10k 2gb ... and so forth), everything should be specified as a well formed 64 bit integer, in the base unit of the configuration directive. +* Where bytes or other quantities are specified, it is not possible to use the `redis.conf` abbreviated form (10k 2gb ... and so forth), everything should be specified as a well formed 64 bit integer, in the base unit of the configuration directive. * The save parameter is a single string of space separated integers. 
Every pair of integers represent a seconds/modifications threshold. -For instance what in redis.conf looks like: +For instance what in `redis.conf` looks like: save 900 1 save 300 10 @@ -26,17 +26,17 @@ that means, save after 900 seconds if there is at least 1 change to the dataset, and after 300 seconds if there are at least 10 changes to the datasets, should be set using `CONFIG SET` as "900 1 300 10". -It is possible to switch persistence form .rdb snapshotting to append only file +It is possible to switch persistence from RDB snapshotting to append only file (and the other way around) using the `CONFIG SET` command. For more information about how to do that please check [persistence page](/topics/persistence). -In general what you should know is that setting the *appendonly* parameter to -*yes* will start a background process to save the initial append only file +In general what you should know is that setting the `appendonly` parameter to +`yes` will start a background process to save the initial append only file (obtained from the in memory data set), and will append all the subsequent commands on the append only file, thus obtaining exactly the same effect of a Redis server that started with AOF turned on since the start. -You can have both the AOF enabled with .rdb snapshotting if you want, the +You can have both the AOF enabled with RDB snapshotting if you want, the two options are not mutually exclusive. @return diff --git a/commands/decr.md b/commands/decr.md index 9119f5b94a..a1aeb4c1ad 100644 --- a/commands/decr.md +++ b/commands/decr.md @@ -1,7 +1,7 @@ Decrements the number stored at `key` by one. If the key does not exist, it is set to `0` before performing the operation. An error is returned if the key contains a value of the wrong type or contains a -string that is not representable as integer. This operation is limited to **64 +string that can not be represented as integer. This operation is limited to **64 bit signed integers**. 
See `INCR` for extra information on increment/decrement diff --git a/commands/decrby.md b/commands/decrby.md index b819c599d8..28fc3f6bac 100644 --- a/commands/decrby.md +++ b/commands/decrby.md @@ -1,7 +1,7 @@ Decrements the number stored at `key` by `decrement`. If the key does not exist, it is set to `0` before performing the operation. An error is returned if the key contains a value of the wrong type or contains a -string that is not representable as integer. This operation is limited to 64 +string that can not be represented as integer. This operation is limited to 64 bit signed integers. See `INCR` for extra information on increment/decrement operations. diff --git a/commands/eval.md b/commands/eval.md index 06b4ab0066..12782f41a3 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -391,7 +391,7 @@ calls (a pretty uncommon need) it should use Redis keys instead. When a global variable access is attempted the script is terminated and EVAL returns with an error: redis 127.0.0.1:6379> eval 'a=10' 0 - (error) ERR Error running script (call to f_933044db579a2f8fd45d8065f04a8d0249383e57): user_script:1: Script attempted to create global variable 'a' + (error) ERR Error running script (call to f_933044db579a2f8fd45d8065f04a8d0249383e57): user_script:1: Script attempted to create global variable 'a' Accessing a *non existing* global variable generates a similar error. @@ -430,7 +430,7 @@ It is possible to write to the Redis log file from Lua scripts using the redis.log(loglevel,message) -loglevel is one of: +`loglevel` is one of: * `redis.LOG_DEBUG` * `redis.LOG_VERBOSE` diff --git a/commands/expire.md b/commands/expire.md index 9a3eb79a7f..10cc218f4b 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -13,7 +13,7 @@ operations that will leave the timeout untouched. The timeout can also be cleared, turning the key back into a persistent key, using the `PERSIST` command. 
-If a key is renamed with `RENAME`, the associated time to live is transfered to +If a key is renamed with `RENAME`, the associated time to live is transferred to the new key name. If a key is overwritten by `RENAME`, like in the case of an existing key @@ -57,14 +57,14 @@ Pattern: Navigation session --- Imagine you have a web service and you are interested in the latest N pages -*recently* visited by your users, such that each adiacent pageview was not +*recently* visited by your users, such that each adiacent page view was not performed more than 60 seconds after the previous. Conceptually you may think -at this set of pageviews as a *Navigation session* if your user, that may -contain interesting informations about what kind of products he or she is +at this set of page views as a *Navigation session* if your user, that may +contain interesting information about what kind of products he or she is looking for currently, so that you can recommend related products. You can easily model this pattern in Redis using the following strategy: -every time the user does a pageview you call the following commands: +every time the user does a page view you call the following commands: MULTI RPUSH pagewviews.user: http://..... @@ -72,7 +72,7 @@ every time the user does a pageview you call the following commands: EXEC If the user will be idle more than 60 seconds, the key will be deleted and only -subsequent pageviews that have less than 60 seconds of difference will be +subsequent page views that have less than 60 seconds of difference will be recorded. This pattern is easily modified to use counters using `INCR` instead of lists @@ -91,7 +91,7 @@ at the cost of some additional memory used by the key. When a key has an expire set, Redis will make sure to remove the key when the specified amount of time elapsed. -The key time to live can be updated or entierly removed using the `EXPIRE` and `PERSIST` command (or other strictly related commands). 
+The key time to live can be updated or entirely removed using the `EXPIRE` and `PERSIST` command (or other strictly related commands). ## Expire accuracy @@ -102,11 +102,11 @@ Since Redis 2.6 the expire error is from 0 to 1 milliseconds. ## Expires and persistence -Keys expiring information is stored as absolute unix timestamps (in milliseconds in case of Redis version 2.6 or greater). This means that the time is flowing even when the Redis instance is not active. +Keys expiring information is stored as absolute Unix timestamps (in milliseconds in case of Redis version 2.6 or greater). This means that the time is flowing even when the Redis instance is not active. -For expires to work well, the computer time must be taken stable. If you move an RDB file from two computers with a big desynch in their clocks, funny things may happen (like all the keys loaded to be expired at loading time). +For expires to work well, the computer time must be taken stable. If you move an RDB file from two computers with a big desync in their clocks, funny things may happen (like all the keys loaded to be expired at loading time). -Even runnign instances will always check the computer clock, so for instance if you set a key with a time to live of 1000 seconds, and then set your computer time 2000 seconds in the future, the key will be expired immediatly, instead of lasting for 1000 seconds. +Even running instances will always check the computer clock, so for instance if you set a key with a time to live of 1000 seconds, and then set your computer time 2000 seconds in the future, the key will be expired immediately, instead of lasting for 1000 seconds. ## How Redis expires keys diff --git a/commands/incr.md b/commands/incr.md index 02f82ee755..7c8b7d351c 100644 --- a/commands/incr.md +++ b/commands/incr.md @@ -1,7 +1,7 @@ Increments the number stored at `key` by one. If the key does not exist, it is set to `0` before performing the operation. 
An error is returned if the key contains a value of the wrong type or contains a -string that is not representable as integer. This operation is limited to 64 +string that can not be represented as integer. This operation is limited to 64 bit signed integers. **Note**: this is a string operation because Redis does not have a dedicated @@ -69,7 +69,7 @@ The more simple and direct implementation of this pattern is the following: PERFORM_API_CALL() END -Basically we have a counter for every IP, for every differet second. +Basically we have a counter for every IP, for every different second. But this counters are always incremented setting an expire of 10 seconds so that they'll be removed by Redis automatically when the current second is a different one. @@ -138,4 +138,4 @@ The `RPUSHX` command only pushes the element if the key already exists. Note that we have a race here, but it is not a problem: `EXISTS` may return false but the key may be created by another client before we create it inside the `MULTI`/`EXEC` block. However this race will just miss an API call under rare -conditons, so the rate limiting will still work correctly. +conditions, so the rate limiting will still work correctly. diff --git a/commands/incrby.md b/commands/incrby.md index e6101beaa3..bf1b25b80e 100644 --- a/commands/incrby.md +++ b/commands/incrby.md @@ -1,7 +1,7 @@ Increments the number stored at `key` by `increment`. If the key does not exist, it is set to `0` before performing the operation. An error is returned if the key contains a value of the wrong type or contains a -string that is not representable as integer. This operation is limited to 64 +string that can not be represented as integer. This operation is limited to 64 bit signed integers. See `INCR` for extra information on increment/decrement operations. 
diff --git a/commands/incrbyfloat.md b/commands/incrbyfloat.md index adfef50a3b..663dae098a 100644 --- a/commands/incrbyfloat.md +++ b/commands/incrbyfloat.md @@ -8,7 +8,7 @@ If the command is successful the new incremented value is stored as the new valu Both the value already contained in the string key and the increment argument can be optionally provided in exponential notation, however the value computed -after the incremnet is stored consistently in the same format, that is, an integer number followed (if needed) by a dot, and a variable number of digits representing the decimal part of the number. Trailing zeroes are always removed. +after the increment is stored consistently in the same format, that is, an integer number followed (if needed) by a dot, and a variable number of digits representing the decimal part of the number. Trailing zeroes are always removed. The precision of the output is fixed at 17 digits after the decimal point regardless of the actual internal precision of the computation. diff --git a/commands/migrate.md b/commands/migrate.md index b19e1a3a90..d8383d947c 100644 --- a/commands/migrate.md +++ b/commands/migrate.md @@ -7,14 +7,14 @@ The source instance acts as a client for the target instance. If the target inst The timeout specifies the maximum idle time in any moment of the communication with the destination instance in milliseconds. This means that the operation does not need to be completed within the specified amount of milliseconds, but that the transfer should make progresses without blocking for more than the specified amount of milliseconds. -`MIGRATE` needs to perform I/O operations and to honour the specified timeout. When there is an I/O error during the transfer or if the timeout is reached the operation is aborted and the special error -IOERR returned. When this happens the following two cases are possible: +`MIGRATE` needs to perform I/O operations and to honor the specified timeout. 
When there is an I/O error during the transfer or if the timeout is reached the operation is aborted and the special error -`IOERR` returned. When this happens the following two cases are possible: * The key may be on both the instances. * The key may be only in the source instance. It is not possible for the key to get lost in the event of a timeout, but the client calling `MIGRATE`, in the event of a timeout error, should check if the key is *also* present in the target instance and act accordingly. -When any other error is returned (startign with "ERR") `MIGRATE` guarantees that the key is still only present in the originating instance (unless a key with the same name was also *already* present on the target instance). +When any other error is returned (starting with `ERR`) `MIGRATE` guarantees that the key is still only present in the originating instance (unless a key with the same name was also *already* present on the target instance). On success OK is returned. diff --git a/commands/pexpireat.md b/commands/pexpireat.md index e40b4ccd3d..9ffd8bb3fb 100644 --- a/commands/pexpireat.md +++ b/commands/pexpireat.md @@ -3,7 +3,7 @@ O(1) -`PEXPIREAT` has the same effect and semantic as `EXPIREAT`, but the unix time at which the key will expire is specified in milliseconds instead of seconds. +`PEXPIREAT` has the same effect and semantic as `EXPIREAT`, but the Unix time at which the key will expire is specified in milliseconds instead of seconds. @return diff --git a/commands/pttl.md b/commands/pttl.md index 93ad4a623e..9d055a19fc 100644 --- a/commands/pttl.md +++ b/commands/pttl.md @@ -3,7 +3,7 @@ O(1) -Like `TTL` this comand returns the remaining time to live of a key that has an expire set, with the sole difference that `TTL` returns the amount of remaining time in seconds while `PTTL` returns it in milliseconds. 
+Like `TTL` this command returns the remaining time to live of a key that has an expire set, with the sole difference that `TTL` returns the amount of remaining time in seconds while `PTTL` returns it in milliseconds. @return diff --git a/commands/restore.md b/commands/restore.md index adbdcdd07d..3734f3f857 100644 --- a/commands/restore.md +++ b/commands/restore.md @@ -1,4 +1,4 @@ -Create a key assosicated with a value that is obtained unserializing the provided serialized value (obtained via `DUMP`). +Create a key associated with a value that is obtained by deserializing the provided serialized value (obtained via `DUMP`). If `ttl` is 0 the key is created without any expire, otherwise the specified expire time (in milliseconds) is set. diff --git a/commands/script exists.md b/commands/script exists.md index bda2569865..e89c34ad08 100644 --- a/commands/script exists.md +++ b/commands/script exists.md @@ -3,7 +3,7 @@ Returns information about the existence of the scripts in the script cache. This command accepts one or more SHA1 sums and returns a list of ones or zeros to signal if the scripts are already defined or not inside the script cache. This can be useful before a pipelining operation to ensure that scripts are loaded (and if not, to load them using `SCRIPT LOAD`) so that the pipelining operation can be performed solely using `EVALSHA` instead of `EVAL` to save bandwidth. -Plase check the `EVAL` page for detailed information about how Redis Lua scripting works. +Please refer to the `EVAL` documentation for detailed information about Redis Lua scripting. @return diff --git a/commands/script flush.md b/commands/script flush.md index 1a550eced1..c435f7677b 100644 --- a/commands/script flush.md +++ b/commands/script flush.md @@ -1,6 +1,6 @@ Flush the Lua scripts cache. -Plase check the `EVAL` page for detailed information about how Redis Lua scripting works. +Please refer to the `EVAL` documentation for detailed information about Redis Lua scripting. 
@return diff --git a/commands/script kill.md b/commands/script kill.md index 362740bbbf..f4e13ace2a 100644 --- a/commands/script kill.md +++ b/commands/script kill.md @@ -3,9 +3,9 @@ Kills the currently executing Lua script, assuming no write operation was yet pe This command is mainly useful to kill a script that is running for too much time(for instance because it entered an infinite loop because of a bug). The script will be killed and the client currently blocked into EVAL will see the command returning with an error. -If the script already performed write operations it can not be killed in this way because it would violate Lua script atomicity contract. In such a case only `SHUTDOWN NOSAVE` is able to kill the script, killign the Redis process in an hard way preventing it to persist with half-written informations. +If the script already performed write operations it can not be killed in this way because it would violate Lua script atomicity contract. In such a case only `SHUTDOWN NOSAVE` is able to kill the script, killing the Redis process in an hard way preventing it to persist with half-written information. -Plase check the `EVAL` page for detailed information about how Redis Lua scripting works. +Please refer to the `EVAL` documentation for detailed information about Redis Lua scripting. @return diff --git a/commands/script load.md b/commands/script load.md index a3bfd38e16..82eae0c81b 100644 --- a/commands/script load.md +++ b/commands/script load.md @@ -5,7 +5,7 @@ The script is guaranteed to stay in the script cache forever (unless `SCRIPT FLU The command works in the same way even if the script was already present in the script cache. -Plase check the `EVAL` page for detailed information about how Redis Lua scripting works. +Please refer to the `EVAL` documentation for detailed information about Redis Lua scripting. 
@return diff --git a/commands/slowlog.md b/commands/slowlog.md index a993432318..848d0f50df 100644 --- a/commands/slowlog.md +++ b/commands/slowlog.md @@ -19,7 +19,7 @@ When a new command is logged and the slow log is already at its maximum length, the oldest one is removed from the queue of logged commands in order to make space. -The configuration can be done both editing the redis.conf file or +The configuration can be done by editing `redis.conf` or while the server is running using the `CONFIG GET` and `CONFIG SET` commands. diff --git a/commands/time.md b/commands/time.md index 456ed080fb..c9ea2cbf4f 100644 --- a/commands/time.md +++ b/commands/time.md @@ -3,8 +3,8 @@ O(1) -The `TIME` command returns the current server time as a two items lists: an unix timestamp and the amount of microseconds already elapsed in the current second. -Basically the interface is very similar to the one of the `gettimeofday` syscall. +The `TIME` command returns the current server time as a two items lists: a Unix timestamp and the amount of microseconds already elapsed in the current second. +Basically the interface is very similar to the one of the `gettimeofday` system call. 
@return diff --git a/wordlist b/wordlist index 14bdc98b4c..36b0d414a5 100644 --- a/wordlist +++ b/wordlist @@ -1,6 +1,10 @@ AOF +API +CJSON CLI +Ctrl GHz +IP JPEG JSON Lua @@ -13,26 +17,47 @@ SQL UTF Xeon Yukihiro +allocator +atomicity backticks +bitwise blazingly blog +boolean btw +cardinality +checksum dataset datasets +decrement +decrementing +deserializing +destkey desync +endian +functionalities +globals hostname +incrementing indices +infeasible keyspace +lexicographically multi pcall pipelined pipelining +scalable +semantical +snapshotting subcommand subcommands substring timestamp tuple tuples +unary +unordered unsubscribe unsubscribed unsubscribes From 7a5f0b131447bcaafb792d6b3c00ec40084d1d76 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 18 Jun 2012 10:44:51 -0700 Subject: [PATCH 0164/2880] Use hash for headers --- commands/append.md | 3 +-- commands/bitcount.md | 6 ++---- commands/bitop.md | 9 +++------ commands/brpoplpush.md | 6 ++---- commands/eval.md | 39 +++++++++++++-------------------------- commands/expire.md | 9 +++------ commands/rpoplpush.md | 6 ++---- 7 files changed, 26 insertions(+), 52 deletions(-) diff --git a/commands/append.md b/commands/append.md index 6f7d59ffd4..1c0fd95989 100644 --- a/commands/append.md +++ b/commands/append.md @@ -14,8 +14,7 @@ empty string, so `APPEND` will be similar to `SET` in this special case. APPEND mykey " World" GET mykey -Pattern: Time series ---- +## Pattern: Time series the `APPEND` command can be used to create a very compact representation of a list of fixed-size samples, usually referred as *time series*. diff --git a/commands/bitcount.md b/commands/bitcount.md index 71e968ab14..b3863cef67 100644 --- a/commands/bitcount.md +++ b/commands/bitcount.md @@ -25,8 +25,7 @@ The number of bits set to 1. 
BITCOUNT mykey 0 0 BITCOUNT mykey 1 1 -Pattern: real time metrics using bitmaps ---- +## Pattern: real time metrics using bitmaps Bitmaps are a very space efficient representation of certain kinds of information. One example is a web application that needs the history @@ -47,8 +46,7 @@ the web site simply calling the `BITCOUNT` command against the bitmap. A similar pattern where user IDs are used instead of days is described in the article [Fast easy realtime metrics using Redis bitmaps](http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps/). -Performance considerations ---- +## Performance considerations In the above example of counting days, even after 10 years the application is online we still have just `365*10` bits of data per user, that is diff --git a/commands/bitop.md b/commands/bitop.md index b30685e333..006fcebdc9 100644 --- a/commands/bitop.md +++ b/commands/bitop.md @@ -13,8 +13,7 @@ performs inversion of bits so it only makes sense as an unary operator. The result of the operation is always stored at *destkey*. -Handling of strings with different lengths ---- +## Handling of strings with different lengths When an operation is performed between strings having different lengths, all the strings shorter than the longest string in the set are treated as if @@ -37,16 +36,14 @@ The size of the string stored into the destination key, that is equal to the siz BITOP AND dest key1 key2 GET dest -Pattern: real time metrics using bitmaps ---- +## Pattern: real time metrics using bitmaps `BITOP` is a good complement to the pattern documented in the `BITCOUNT` command documentation. Different bitmaps can be combined in order to obtain a target bitmap where to perform the population counting operation. See the article [Fast easy realtime metrics using Redis bitmaps](http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps/) for an interesting use cases. 
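The zero-padding rule for `BITOP` operands of different lengths, described in the hunk above, can be sketched in plain Python (a stand-in for what the server does internally, not actual Redis source):

```python
def bitop(op, *operands):
    """Apply a bitwise op across byte strings, treating every operand
    shorter than the longest as if padded with trailing zero bytes,
    as BITOP does for strings of different lengths."""
    width = max(len(operand) for operand in operands)
    padded = [operand.ljust(width, b"\x00") for operand in operands]
    ops = {"AND": lambda a, b: a & b,
           "OR": lambda a, b: a | b,
           "XOR": lambda a, b: a ^ b}
    result = padded[0]
    for other in padded[1:]:
        result = bytes(ops[op](a, b) for a, b in zip(result, other))
    return result

print(bitop("AND", b"abc", b"a"))  # b'a\x00\x00' -- the short operand is zero padded
```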
-Performance considerations ---- +## Performance considerations `BITOP` is a potentially slow command as it runs in O(N) time. Care should be taken when running it against long input strings. diff --git a/commands/brpoplpush.md b/commands/brpoplpush.md index f8b343f789..3598c91300 100644 --- a/commands/brpoplpush.md +++ b/commands/brpoplpush.md @@ -10,12 +10,10 @@ See `RPOPLPUSH` for more information. @bulk-reply: the element being popped from `source` and pushed to `destination`. If `timeout` is reached, a @nil-reply is returned. -Pattern: Reliable queue ---- +## Pattern: Reliable queue Please see the pattern description in the `RPOPLPUSH` documentation. -Pattern: Circular list ---- +## Pattern: Circular list Please see the pattern description in the `RPOPLPUSH` documentation. diff --git a/commands/eval.md b/commands/eval.md index 12782f41a3..b8d0cfba34 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -1,5 +1,4 @@ -Introduction to EVAL ---- +## Introduction to EVAL `EVAL` and `EVALSHA` are used to evaluate scripts using the Lua interpreter built into Redis starting from version 2.6.0. @@ -65,8 +64,7 @@ in order to play well with it). However this rule is not enforced in order to pr Lua scripts can return a value, that is converted from the Lua type to the Redis protocol using a set of conversion rules. -Conversion between Lua and Redis data types ---- +## Conversion between Lua and Redis data types Redis return values are converted into Lua data types when Lua calls a Redis command using call() or pcall(). Similarly Lua data types are @@ -121,8 +119,7 @@ The last example shows how it is possible to directly return from Lua the return value of `redis.call()` and `redis.pcall()` with the result of returning exactly what the called command would return if called directly. -Atomicity of scripts ---- +## Atomicity of scripts Redis uses the same Lua interpreter to run all the commands. 
Also Redis guarantees that a script is executed in an atomic way: no other script @@ -137,8 +134,7 @@ but if you are going to use slow scripts you should be aware that while the script is running no other client can execute commands since the server is busy. -Error handling ---- +## Error handling As already stated calls to `redis.call()` resulting into a Redis command error will stop the execution of the script and will return that error back, in a @@ -156,8 +152,7 @@ is returned in the format specified above (as a Lua table with an `err` field). The user can later return this exact error to the user just returning the error object returned by `redis.pcall()`. -Bandwidth and EVALSHA ---- +## Bandwidth and EVALSHA The `EVAL` command forces you to send the script body again and again. Redis does not need to recompile the script every time as it uses an internal @@ -203,8 +198,7 @@ Passing keys and arguments as `EVAL` additional arguments is also very useful in this context as the script string remains constant and can be efficiently cached by Redis. -Script cache semantics ---- +## Script cache semantics Executed scripts are guaranteed to be in the script cache **forever**. This means that if an `EVAL` is performed against a Redis instance all the @@ -231,8 +225,7 @@ against those scripts in a pipeline without the chance that an error will be generated since the script is not known (we'll see this problem in its details later). -The SCRIPT command ---- +## The SCRIPT command Redis offers a SCRIPT command that can be used in order to control the scripting subsystem. SCRIPT currently accepts three different commands: @@ -261,8 +254,7 @@ the dataset during their execution (since stopping a read only script does not violate the scripting engine guaranteed atomicity). See the next sections for more information about long running scripts. 
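The bandwidth-saving pattern discussed in the EVALSHA hunks above (send `EVALSHA` first, fall back to `EVAL` only on a script-cache miss) is typically implemented client side. A sketch with an invented `execute` callable and an in-memory stand-in for the server's script cache, purely to show the control flow:

```python
import hashlib

def eval_with_cache(execute, script, numkeys, *args):
    """Send EVALSHA first; only on a NOSCRIPT error fall back to EVAL,
    which also causes the server to cache the script body."""
    sha1 = hashlib.sha1(script.encode()).hexdigest()
    try:
        return execute("EVALSHA", sha1, numkeys, *args)
    except Exception as err:
        if "NOSCRIPT" not in str(err):
            raise
        return execute("EVAL", script, numkeys, *args)

# In-memory stand-in for a server, not a real connection:
script_cache = set()

def fake_execute(command, body_or_sha, numkeys, *args):
    if command == "EVALSHA":
        if body_or_sha not in script_cache:
            raise Exception("NOSCRIPT No matching script")
        return "from-cache"
    script_cache.add(hashlib.sha1(body_or_sha.encode()).hexdigest())
    return "from-eval"

print(eval_with_cache(fake_execute, "return 1", 0))  # from-eval (cache miss)
print(eval_with_cache(fake_execute, "return 1", 0))  # from-cache (hit)
```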
-Scripts as pure functions ---- +## Scripts as pure functions A very important part of scripting is writing scripts that are pure functions. Scripts executed in a Redis instance are replicated on slaves sending the @@ -381,8 +373,7 @@ as `math.random` and `math.randomseed` is guaranteed to have the same output regardless of the architecture of the system running Redis. 32 or 64 bit systems like big or little endian systems will still produce the same output. -Global variables protection ---- +## Global variables protection Redis scripts are not allowed to create global variables, in order to avoid leaking data into the Lua state. If a script requires to take state across @@ -403,8 +394,7 @@ replication is not guaranteed: don't do it. Note for Lua newbies: in order to avoid using global variables in your scripts simply declare every variable you are going to use using the *local* keyword. -Available libraries ---- +## Available libraries The Redis Lua interpreter loads the following Lua libraries: @@ -422,8 +412,7 @@ can be sure that the environment for your Redis scripts is always the same. The CJSON library allows to manipulate JSON data in a very fast way from Lua. All the other libraries are standard Lua libraries. -Emitting Redis logs from scripts ---- +## Emitting Redis logs from scripts It is possible to write to the Redis log file from Lua scripts using the `redis.log` function. @@ -448,8 +437,7 @@ Will generate the following: [32343] 22 Mar 15:21:39 # Something is wrong with this script. -Sandbox and maximum execution time ---- +## Sandbox and maximum execution time Scripts should never try to access the external system, like the file system, nor calling any other system call. A script should just do its work operating @@ -478,8 +466,7 @@ the following happens: * It is possible to terminate a script that executed only read-only commands using the `SCRIPT KILL` command. 
This does not violate the scripting semantic as no data was yet written on the dataset by the script. * If the script already called write commands the only allowed command becomes `SHUTDOWN NOSAVE` that stops the server not saving the current data set on disk (basically the server is aborted). -EVALSHA in the context of pipelining ---- +## EVALSHA in the context of pipelining Care should be taken when executing `EVALSHA` in the context of a pipelined request, since even in a pipeline the order of execution of commands must diff --git a/commands/expire.md b/commands/expire.md index 10cc218f4b..fdee30d79b 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -21,16 +21,14 @@ If a key is overwritten by `RENAME`, like in the case of an existing key matter if the original `Key_A` had a timeout associated or not, the new key `Key_A` will inherit all the characteristics of `Key_B`. -Refreshing expires ---- +## Refreshing expires It is possible to call `EXPIRE` using as argument a key that already has an existing expire set. In this case the time to live of a key is *updated* to the new value. There are many useful applications for this, an example is documented in the *Navigation session* pattern section below. -Differences in Redis prior 2.1.3 ---- +## Differences in Redis prior to 2.1.3 In Redis versions prior **2.1.3** altering a key with an expire set using a command altering its value had the effect of removing the key entirely. @@ -53,8 +51,7 @@ are now fixed. SET mykey "Hello World" TTL mykey -Pattern: Navigation session ---- +## Pattern: Navigation session Imagine you have a web service and you are interested in the latest N pages *recently* visited by your users, such that each adiacent page view was not diff --git a/commands/rpoplpush.md b/commands/rpoplpush.md index 7ecdb25bec..89ebc18ad8 100644 --- a/commands/rpoplpush.md +++ b/commands/rpoplpush.md @@ -25,8 +25,7 @@ element of the list, so it can be considered as a list rotation command.
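The rotation behavior of `RPOPLPUSH` with `source == destination` can be sketched with plain Python lists standing in for Redis lists (illustration only, not real client code): after N calls every element of an N-element list has been visited and the list is back in its original order.

```python
def rpoplpush(store, source, destination):
    """Pop the last element of `source` and push it onto the front of
    `destination`; with source == destination this rotates the list."""
    src = store.get(source)
    if not src:
        return None  # missing or empty source: the reply is nil
    element = src.pop()
    store.setdefault(destination, []).insert(0, element)
    return element

store = {"mylist": ["a", "b", "c"]}
for _ in range(3):
    rpoplpush(store, "mylist", "mylist")
print(store["mylist"])  # ['a', 'b', 'c'] -- back where it started after a full rotation
```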
LRANGE mylist 0 -1 LRANGE myotherlist 0 -1 -Pattern: Reliable queue ---- +## Pattern: Reliable queue Redis is often used as a messaging server to implement processing of background jobs or other kinds of messaging tasks. A simple form of queue @@ -49,8 +48,7 @@ An additional client may monitor the *processing* list for items that remain there for too much time, and will push those timed out items into the queue again if needed. -Pattern: Circular list ---- +## Pattern: Circular list Using `RPOPLPUSH` with the same source and destination key, a client can visit all the elements of an N-elements list, one after the other, in O(N) From 3e542b3716ad60f7ac8ec6bbff43c655dcbc360a Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 18 Jun 2012 11:53:58 -0700 Subject: [PATCH 0165/2880] Reference links by id --- commands/bgrewriteaof.md | 8 ++++++-- commands/bgsave.md | 4 +++- commands/bitcount.md | 4 +++- commands/bitop.md | 4 +++- commands/config get.md | 4 +++- commands/config set.md | 8 ++++++-- commands/discard.md | 4 +++- commands/exec.md | 8 ++++++-- commands/expireat.md | 4 ++-- commands/keys.md | 4 +++- commands/multi.md | 4 +++- commands/save.md | 4 +++- commands/sort.md | 8 ++++++-- commands/unwatch.md | 4 +++- commands/watch.md | 4 +++- commands/zadd.md | 4 +++- 16 files changed, 59 insertions(+), 21 deletions(-) diff --git a/commands/bgrewriteaof.md b/commands/bgrewriteaof.md index 65ee3d4f1a..809ddeddec 100644 --- a/commands/bgrewriteaof.md +++ b/commands/bgrewriteaof.md @@ -1,4 +1,6 @@ -Instruct Redis to start an [Append Only File](/topics/persistence#append-only-file) rewrite process. The rewrite will create a small optimized version of the current Append Only File. +Instruct Redis to start an [Append Only File][aof] rewrite process. The rewrite will create a small optimized version of the current Append Only File. + +[aof]: /topics/persistence#append-only-file If `BGREWRITEAOF` fails, no data gets lost as the old AOF will be untouched. 
@@ -9,7 +11,9 @@ The rewrite will be only triggered by Redis if there is not already a background Since Redis 2.4 the AOF rewrite is automatically triggered by Redis, however the `BGREWRITEAOF` command can be used to trigger a rewrite at any time. -Please check the documentation about [Redis Persistence](/topics/persistence#append-only-file) for more information. +Please refer to the [persistence documentation][persistence] for detailed information. + +[persistence]: /topics/persistence @return diff --git a/commands/bgsave.md b/commands/bgsave.md index 233964381d..ca624f14d8 100644 --- a/commands/bgsave.md +++ b/commands/bgsave.md @@ -5,7 +5,9 @@ Redis forks, the parent continues to server the clients, the child saves the DB on disk then exit. A client my be able to check if the operation succeeded using the `LASTSAVE` command. -Please refer to the [persistence documentation](/topics/persistence) for detailed information. +Please refer to the [persistence documentation][persistence] for detailed information. + +[persistence]: /topics/persistence @return diff --git a/commands/bitcount.md b/commands/bitcount.md index b3863cef67..154804a5a4 100644 --- a/commands/bitcount.md +++ b/commands/bitcount.md @@ -44,7 +44,9 @@ Later it will be trivial to know the number of single days the user visited the web site simply calling the `BITCOUNT` command against the bitmap. A similar pattern where user IDs are used instead of days is described -in the article [Fast easy realtime metrics using Redis bitmaps](http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps/). +in the article called "[Fast easy realtime metrics using Redis bitmaps][bitmaps]". 
+ +[bitmaps]: http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps ## Performance considerations diff --git a/commands/bitop.md b/commands/bitop.md index 006fcebdc9..fe5b6ac4a5 100644 --- a/commands/bitop.md +++ b/commands/bitop.md @@ -41,7 +41,9 @@ The size of the string stored into the destination key, that is equal to the siz `BITOP` is a good complement to the pattern documented in the `BITCOUNT` command documentation. Different bitmaps can be combined in order to obtain a target bitmap where to perform the population counting operation. -See the article [Fast easy realtime metrics using Redis bitmaps](http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps/) for an interesting use cases. +See the article called "[Fast easy realtime metrics using Redis bitmaps][bitmaps]" for an interesting use case. + +[bitmaps]: http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps ## Performance considerations diff --git a/commands/config get.md b/commands/config get.md index 41c1703a5c..86632dee42 100644 --- a/commands/config get.md +++ b/commands/config get.md @@ -22,7 +22,9 @@ You can obtain a list of all the supported configuration parameters typing `CONFIG GET *` in an open `redis-cli` prompt. All the supported parameters have the same meaning of the equivalent -configuration parameter used in the [redis.conf](http://github.com/antirez/redis/raw/2.2/redis.conf) file, with the following important differences: +configuration parameter used in the [redis.conf][conf] file, with the following important differences: + +[conf]: http://github.com/antirez/redis/raw/2.2/redis.conf * Where bytes or other quantities are specified, it is not possible to use the `redis.conf` abbreviated form (10k 2gb ... and so forth), everything should be specified as a well formed 64 bit integer, in the base unit of the configuration directive. 
* The save parameter is a single string of space separated integers. Every pair of integers represent a seconds/modifications threshold. diff --git a/commands/config set.md b/commands/config set.md index bfdec9abcb..04917a1edf 100644 --- a/commands/config set.md +++ b/commands/config set.md @@ -12,7 +12,9 @@ by Redis that will start acting as specified starting from the next command executed. All the supported parameters have the same meaning of the equivalent -configuration parameter used in the [redis.conf](http://github.com/antirez/redis/raw/2.2/redis.conf) file, with the following important differences: +configuration parameter used in the [redis.conf][conf] file, with the following important differences: + +[conf]: http://github.com/antirez/redis/raw/2.2/redis.conf * Where bytes or other quantities are specified, it is not possible to use the `redis.conf` abbreviated form (10k 2gb ... and so forth), everything should be specified as a well formed 64 bit integer, in the base unit of the configuration directive. * The save parameter is a single string of space separated integers. Every pair of integers represent a seconds/modifications threshold. @@ -28,7 +30,9 @@ datasets, should be set using `CONFIG SET` as "900 1 300 10". It is possible to switch persistence from RDB snapshotting to append only file (and the other way around) using the `CONFIG SET` command. For more information -about how to do that please check [persistence page](/topics/persistence). +about how to do that please check [persistence page][persistence]. 
+ +[persistence]: /topics/persistence In general what you should know is that setting the `appendonly` parameter to `yes` will start a background process to save the initial append only file diff --git a/commands/discard.md b/commands/discard.md index 27640342b3..f5c75aa031 100644 --- a/commands/discard.md +++ b/commands/discard.md @@ -1,7 +1,9 @@ Flushes all previously queued commands in a -[transaction](/topics/transactions) and restores the connection state to +[transaction][transactions] and restores the connection state to normal. +[transactions]: /topics/transactions + If `WATCH` was used, `DISCARD` unwatches all keys. @return diff --git a/commands/exec.md b/commands/exec.md index 2fd1157589..8d3030ca87 100644 --- a/commands/exec.md +++ b/commands/exec.md @@ -1,10 +1,14 @@ Executes all previously queued commands in a -[transaction](/topics/transactions) and restores the connection state to +[transaction][transactions] and restores the connection state to normal. +[transactions]: /topics/transactions + When using `WATCH`, `EXEC` will execute commands only if the watched keys were not modified, allowing for a [check-and-set -mechanism](/topics/transactions#cas). +mechanism][cas]. + +[cas]: /topics/transactions#cas @return diff --git a/commands/expireat.md b/commands/expireat.md index 109836e11c..71a1b9aae5 100644 --- a/commands/expireat.md +++ b/commands/expireat.md @@ -1,7 +1,7 @@ `EXPIREAT` has the same effect and semantic as `EXPIRE`, but -instead of specifying the number of seconds representing the TTL (time to live), it takes an absolute [UNIX timestamp][2] (seconds since January 1, 1970). +instead of specifying the number of seconds representing the TTL (time to live), it takes an absolute [Unix timestamp][2] (seconds since January 1, 1970). -Please for the specific semantics of the commands refer to the [EXPIRE command documentation](/commands/expire). +For the specific semantics of the command, please refer to the documentation of `EXPIRE`. 
[2]: http://en.wikipedia.org/wiki/Unix_time diff --git a/commands/keys.md b/commands/keys.md index 66d819252a..e96b2a6537 100644 --- a/commands/keys.md +++ b/commands/keys.md @@ -9,7 +9,9 @@ production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout. Don't use `KEYS` in your regular application code. If you're looking for a way to find keys in -a subset of your keyspace, consider using [sets](/topics/data-types#sets). +a subset of your keyspace, consider using [sets][sets]. + +[sets]: /topics/data-types#sets Supported glob-style patterns: diff --git a/commands/multi.md b/commands/multi.md index a53664ddc1..1e10b97ebc 100644 --- a/commands/multi.md +++ b/commands/multi.md @@ -1,7 +1,9 @@ -Marks the start of a [transaction](/topics/transactions) +Marks the start of a [transaction][transactions] block. Subsequent commands will be queued for atomic execution using `EXEC`. +[transactions]: /topics/transactions + @return @status-reply: always `OK`. diff --git a/commands/save.md b/commands/save.md index 386c976a39..79283412e9 100644 --- a/commands/save.md +++ b/commands/save.md @@ -2,7 +2,9 @@ The `SAVE` commands performs a **synchronous** save of the dataset producing a * You almost never what to call `SAVE` in production environments where it will block all the other clients. Instead usually `BGSAVE` is used. However in case of issues preventing Redis to create the background saving child (for instance errors in the fork(2) system call), the `SAVE` command can be a good last resort to perform the dump of the latest dataset. -For more information check the documentation [describing how Redis persistence works](/topics/persistence) in details. +Please refer to the [persistence documentation][persistence] for detailed information. 
+ +[persistence]: /topics/persistence @return diff --git a/commands/sort.md b/commands/sort.md index e054d7c275..1f5ab2c4f0 100644 --- a/commands/sort.md +++ b/commands/sort.md @@ -1,11 +1,15 @@ Returns or stores the elements contained in the -[list](/topics/data-types#lists), [set](/topics/data-types#set) or [sorted -set](/topics/data-types#sorted-sets) at `key`. By default, sorting is numeric +[list][lists], [set][sets] or [sorted set][sorted-sets] + at `key`. By default, sorting is numeric and elements are compared by their value interpreted as double precision floating point number. This is `SORT` in its simplest form: SORT mylist +[lists]: /topics/data-types#lists +[sets]: /topics/data-types#set +[sorted-sets]: /topics/data-types#sorted-sets + Assuming `mylist` is a list of numbers, this command will return the same list with the elements sorted from small to large. In order to sort the numbers from large to small, use the `!DESC` modifier: diff --git a/commands/unwatch.md b/commands/unwatch.md index 40ac4b5194..853766dfcb 100644 --- a/commands/unwatch.md +++ b/commands/unwatch.md @@ -1,4 +1,6 @@ -Flushes all the previously watched keys for a [transaction](/topics/transactions). +Flushes all the previously watched keys for a [transaction][transactions]. + +[transactions]: /topics/transactions If you call `EXEC` or `DISCARD`, there's no need to manually call `UNWATCH`. diff --git a/commands/watch.md b/commands/watch.md index 604ccf032b..d3d8458ff6 100644 --- a/commands/watch.md +++ b/commands/watch.md @@ -1,4 +1,6 @@ -Marks the given keys to be watched for conditional execution of a [transaction](/topics/transactions). +Marks the given keys to be watched for conditional execution of a [transaction][transactions]. 
+ +[transactions]: /topics/transactions @return diff --git a/commands/zadd.md b/commands/zadd.md index 0a3eaf1efe..4b9d85e573 100644 --- a/commands/zadd.md +++ b/commands/zadd.md @@ -7,7 +7,9 @@ The score values should be the string representation of a numeric value, and accepts double precision floating point numbers. For an introduction to sorted sets, see the data types page on [sorted -sets](/topics/data-types#sorted-sets). +sets][sorted-sets]. + +[sorted-sets]: /topics/data-types#sorted-sets @return From 066c91d4f7383a81cf4c07d8e5c4dd75372ca45f Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 18 Jun 2012 11:59:06 -0700 Subject: [PATCH 0166/2880] The universe is 80 characters wide... --- Rakefile | 39 +++++ commands/append.md | 26 +-- commands/auth.md | 13 +- commands/bgrewriteaof.md | 12 +- commands/bgsave.md | 11 +- commands/bitcount.md | 41 +++-- commands/bitop.md | 25 +-- commands/blpop.md | 30 ++-- commands/config get.md | 15 +- commands/config set.md | 20 +-- commands/decr.md | 12 +- commands/decrby.md | 9 +- commands/del.md | 2 +- commands/dump.md | 10 +- commands/eval.md | 295 ++++++++++++++++++----------------- commands/exec.md | 8 +- commands/expire.md | 83 +++++----- commands/expireat.md | 3 +- commands/flushall.md | 3 +- commands/get.md | 6 +- commands/hdel.md | 4 +- commands/hgetall.md | 4 +- commands/hincrby.md | 3 +- commands/hincrbyfloat.md | 12 +- commands/hmget.md | 4 +- commands/hmset.md | 6 +- commands/hset.md | 4 +- commands/incr.md | 49 +++--- commands/incrby.md | 9 +- commands/incrbyfloat.md | 17 +- commands/info.md | 4 +- commands/keys.md | 6 +- commands/lastsave.md | 8 +- commands/lindex.md | 10 +- commands/linsert.md | 4 +- commands/llen.md | 6 +- commands/lpush.md | 16 +- commands/lpushx.md | 6 +- commands/lrange.md | 13 +- commands/lrem.md | 10 +- commands/ltrim.md | 8 +- commands/mget.md | 6 +- commands/migrate.md | 35 +++-- commands/monitor.md | 18 +-- commands/move.md | 4 +- commands/mset.md | 2 +- commands/msetnx.md | 4 
+- commands/multi.md | 4 +- commands/object.md | 7 +- commands/persist.md | 4 +- commands/pttl.md | 4 +- commands/punsubscribe.md | 10 +- commands/renamenx.md | 4 +- commands/restore.md | 6 +- commands/rpoplpush.md | 27 ++-- commands/rpush.md | 16 +- commands/rpushx.md | 6 +- commands/sadd.md | 6 +- commands/save.md | 13 +- commands/script exists.md | 11 +- commands/script flush.md | 3 +- commands/script kill.md | 17 +- commands/script load.md | 15 +- commands/sdiffstore.md | 4 +- commands/select.md | 4 +- commands/setex.md | 2 +- commands/setnx.md | 25 ++- commands/setrange.md | 16 +- commands/shutdown.md | 11 +- commands/sinter.md | 4 +- commands/slaveof.md | 20 +-- commands/slowlog.md | 39 +++-- commands/smove.md | 5 +- commands/sort.md | 43 ++--- commands/spop.md | 4 +- commands/srem.md | 2 +- commands/strlen.md | 4 +- commands/subscribe.md | 4 +- commands/sunion.md | 3 +- commands/time.md | 6 +- commands/ttl.md | 6 +- commands/type.md | 6 +- commands/unsubscribe.md | 10 +- commands/watch.md | 3 +- commands/zadd.md | 11 +- commands/zcard.md | 4 +- commands/zcount.md | 8 +- commands/zinterstore.md | 10 +- commands/zrange.md | 6 +- commands/zrangebyscore.md | 16 +- commands/zrem.md | 3 +- commands/zrevrangebyscore.md | 3 +- commands/zunionstore.md | 2 +- 93 files changed, 754 insertions(+), 618 deletions(-) diff --git a/Rakefile b/Rakefile index ffd9609ffb..c352b41314 100644 --- a/Rakefile +++ b/Rakefile @@ -39,3 +39,42 @@ task :spellcheck do puts "#{file}: #{words.uniq.sort.join(" ")}" if words.any? end end + +namespace :format do + + def format(file) + return unless File.exist?(file) + + STDOUT.print "formatting #{file}..." 
+ STDOUT.flush + + matcher = /^(?:\A|\r?\n)((?:[a-zA-Z].+?\r?\n)+)/m + body = File.read(file).gsub(matcher) do |match| + formatted = nil + + IO.popen("par p0s0w80", "r+") do |io| + io.puts match + io.close_write + formatted = io.read + end + + formatted + end + + File.open(file, "w") do |f| + f.print body + end + + STDOUT.puts + end + + task :file, :path do |t, args| + format(args[:path]) + end + + task :all do + Dir["commands/*.md"].each do |path| + format(path) + end + end +end diff --git a/commands/append.md b/commands/append.md index 1c0fd95989..79dd0ba7a0 100644 --- a/commands/append.md +++ b/commands/append.md @@ -1,6 +1,6 @@ -If `key` already exists and is a string, this command appends the `value` at -the end of the string. If `key` does not exist it is created and set as an -empty string, so `APPEND` will be similar to `SET` in this special case. +If `key` already exists and is a string, this command appends the `value` at the +end of the string. If `key` does not exist it is created and set as an empty +string, so `APPEND` will be similar to `SET` in this special case. @return @@ -16,9 +16,9 @@ empty string, so `APPEND` will be similar to `SET` in this special case. ## Pattern: Time series -the `APPEND` command can be used to create a very compact representation of -a list of fixed-size samples, usually referred as *time series*. -Every time a new sample arrives we can store it using the command +The `APPEND` command can be used to create a very compact representation of a +list of fixed-size samples, usually referred to as *time series*. Every time a new +sample arrives we can store it using the command
+ * `SETRANGE` can be used to overwrite an existing time serie. -The limitations of this pattern is that we are forced into an append-only mode of operation, there is no way to cut the time series to a given size easily because Redis currently lacks a command able to trim string objects. However the space efficiency of time series stored in this way is remarkable. +The limitation of this pattern is that we are forced into an append-only mode +of operation: there is no way to cut the time series to a given size easily +because Redis currently lacks a command able to trim string objects. However the +space efficiency of time series stored in this way is remarkable. -Hint: it is possible to switch to a different key based on the current Unix time, in this way it is possible to have just a relatively small amount of samples per key, to avoid dealing with very big keys, and to make this pattern more -friendly to be distributed across many Redis instances. +Hint: it is possible to switch to a different key based on the current Unix +time; in this way it is possible to have just a relatively small number of +samples per key, to avoid dealing with very big keys, and to make this pattern +more friendly to be distributed across many Redis instances. -An example sampling the temperature of a sensor using fixed-size strings (using a binary format is better in real implementations). +An example sampling the temperature of a sensor using fixed-size strings (using +a binary format is better in real implementations). @cli APPEND ts "0043" diff --git a/commands/auth.md b/commands/auth.md index fc3dba8d76..8251f61cf5 100644 --- a/commands/auth.md +++ b/commands/auth.md @@ -1,11 +1,10 @@ -Request for authentication in a password protected Redis server. -Redis can be instructed to require a password before allowing clients -to execute commands. This is done using the `requirepass` directive in the -configuration file. +
Redis can be +instructed to require a password before allowing clients to execute commands. +This is done using the `requirepass` directive in the configuration file. -If `password` matches the password in the configuration file, the server replies with -the `OK` status code and starts accepting commands. -Otherwise, an error is returned and the clients needs to try a new password. +If `password` matches the password in the configuration file, the server replies +with the `OK` status code and starts accepting commands. Otherwise, an error is +returned and the clients needs to try a new password. **Note**: because of the high performance nature of Redis, it is possible to try a lot of passwords in parallel in very short time, so make sure to generate diff --git a/commands/bgrewriteaof.md b/commands/bgrewriteaof.md index 809ddeddec..befa266bf9 100644 --- a/commands/bgrewriteaof.md +++ b/commands/bgrewriteaof.md @@ -1,17 +1,21 @@ -Instruct Redis to start an [Append Only File][aof] rewrite process. The rewrite will create a small optimized version of the current Append Only File. +Instruct Redis to start an [Append Only File][aof] rewrite process. The rewrite +will create a small optimized version of the current Append Only File. [aof]: /topics/persistence#append-only-file If `BGREWRITEAOF` fails, no data gets lost as the old AOF will be untouched. -The rewrite will be only triggered by Redis if there is not already a background process doing persistence. Specifically: +The rewrite will be only triggered by Redis if there is not already a background +process doing persistence. Specifically: * If a Redis child is creating a snapshot on disk, the AOF rewrite is *scheduled* but not started until the saving child producing the RDB file terminates. In this case the `BGREWRITEAOF` will still return an OK code, but with an appropriate message. You can check if an AOF rewrite is scheduled looking at the `INFO` command starting from Redis 2.6. 
* If an AOF rewrite is already in progress the command returns an error and no AOF rewrite will be scheduled for a later time. -Since Redis 2.4 the AOF rewrite is automatically triggered by Redis, however the `BGREWRITEAOF` command can be used to trigger a rewrite at any time. +Since Redis 2.4 the AOF rewrite is automatically triggered by Redis, however the +`BGREWRITEAOF` command can be used to trigger a rewrite at any time. -Please refer to the [persistence documentation][persistence] for detailed information. +Please refer to the [persistence documentation][persistence] for detailed +information. [persistence]: /topics/persistence diff --git a/commands/bgsave.md b/commands/bgsave.md index ca624f14d8..58f37e065c 100644 --- a/commands/bgsave.md +++ b/commands/bgsave.md @@ -1,11 +1,12 @@ -Save the DB in background. The OK code is immediately returned. -Redis forks, the parent continues to server the clients, the child -saves the DB on disk then exit. A client my be able to check if the -operation succeeded using the `LASTSAVE` command. +Save the DB in background. The OK code is immediately returned. Redis forks, +the parent continues to serve the clients, the child saves the DB on disk +then exits. A client may be able to check if the operation succeeded using the +`LASTSAVE` command. -Please refer to the [persistence documentation][persistence] for detailed information. +Please refer to the [persistence documentation][persistence] for detailed +information. [persistence]: /topics/persistence diff --git a/commands/bitcount.md b/commands/bitcount.md index 154804a5a4..414aa47620 100644 --- a/commands/bitcount.md +++ b/commands/bitcount.md @@ -4,12 +4,11 @@ By default all the bytes contained in the string are examined. It is possible to specify the counting operation only in an interval passing the additional arguments *start* and *end*.
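As an editor's sketch (not part of the Redis docs in this patch), the interval semantics of `BITCOUNT` can be modeled in Python: *start* and *end* are byte offsets, *end* is inclusive, and negative offsets index from the end of the string.

```python
def bitcount(value: bytes, start: int = 0, end: int = -1) -> int:
    """Model of BITCOUNT key [start end] with inclusive byte offsets."""
    n = len(value)
    # negative offsets index from the end: -1 is the last byte
    if start < 0:
        start += n
    if end < 0:
        end += n
    start, end = max(start, 0), min(end, n - 1)
    # count the set bits in the selected byte range (end is inclusive)
    return sum(bin(b).count("1") for b in value[start:end + 1])

print(bitcount(b"foobar"))        # 26
print(bitcount(b"foobar", 1, 1))  # 6
print(bitcount(b"foobar", 0, 0))  # 4
```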
-Like for the `GETRANGE` command start and end can contain negative values -in order to index bytes starting from the end of the string, where -1 is the -last byte, -2 is the penultimate, and so forth. +Like for the `GETRANGE` command, start and end can contain negative values in +order to index bytes starting from the end of the string, where -1 is the last +byte, -2 is the penultimate, and so forth. -Non existing keys are treated as empty strings, so the command will return -zero. +Non existing keys are treated as empty strings, so the command will return zero. @return @@ -28,32 +27,32 @@ The number of bits set to 1. ## Pattern: real time metrics using bitmaps Bitmaps are a very space efficient representation of certain kinds of -information. One example is a web application that needs the history -of user visits, so that for instance it is possible to determine what -users are good targets of beta features, or for any other purpose. +information. One example is a web application that needs the history of user +visits, so that for instance it is possible to determine what users are good +targets of beta features, or for any other purpose. Using the `SETBIT` command this is trivial to accomplish, identifying every -day with a small progressive integer. For instance day 0 is the first day -the application was put online, day 1 the next day, and so forth. +day with a small progressive integer. For instance day 0 is the first day the +application was put online, day 1 the next day, and so forth. -Every time an user performs a page view, the application can register that -in the current day the user visited the web site using the `SETBIT` command -setting the bit corresponding to the current day. +Every time a user performs a page view, the application can register that the +user visited the web site on the current day, using the `SETBIT` command to set +the bit corresponding to the current day.
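A sketch of how the progressive day index could be derived from the current Unix time (editor's illustration; `APP_ONLINE_TS` is a hypothetical launch timestamp, not a name from the docs):

```python
import time

APP_ONLINE_TS = 1_320_000_000  # hypothetical Unix time the application went online

def day_index(now=None):
    """Day 0 is the launch day, day 1 the next day, and so forth."""
    if now is None:
        now = int(time.time())
    return (now - APP_ONLINE_TS) // 86400

# A visit today would then be registered with something like:
#   SETBIT visits:<user-id> <day_index()> 1
```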
-Later it will be trivial to know the number of single days the user visited -the web site simply calling the `BITCOUNT` command against the bitmap. +Later it will be trivial to know the number of single days the user visited the +web site simply calling the `BITCOUNT` command against the bitmap. -A similar pattern where user IDs are used instead of days is described -in the article called "[Fast easy realtime metrics using Redis bitmaps][bitmaps]". +A similar pattern where user IDs are used instead of days is described in the +article called "[Fast easy realtime metrics using Redis bitmaps][bitmaps]". [bitmaps]: http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps ## Performance considerations -In the above example of counting days, even after 10 years the application -is online we still have just `365*10` bits of data per user, that is -just 456 bytes per user. With this amount of data `BITCOUNT` is still as fast -as any other O(1) Redis command like `GET` or `INCR`. +In the above example of counting days, even after 10 years the application is +online we still have just `365*10` bits of data per user, that is just 456 bytes +per user. With this amount of data `BITCOUNT` is still as fast as any other O(1) +Redis command like `GET` or `INCR`. When the bitmap is big, there are two alternatives: diff --git a/commands/bitop.md b/commands/bitop.md index fe5b6ac4a5..c8f05240f6 100644 --- a/commands/bitop.md +++ b/commands/bitop.md @@ -1,7 +1,8 @@ -Perform a bitwise operation between multiple keys (containing string -values) and store the result in the destination key. +Perform a bitwise operation between multiple keys (containing string values) and +store the result in the destination key. 
-The `BITOP` command supports four bitwise operations: **AND**, **OR**, **XOR** and **NOT**, thus the valid forms to call the command are: +The `BITOP` command supports four bitwise operations: **AND**, **OR**, **XOR** +and **NOT**, thus the valid forms to call the command are: + BITOP AND *destkey srckey1 srckey2 srckey3 ... srckeyN* + BITOP OR *destkey srckey1 srckey2 srckey3 ... srckeyN* @@ -15,9 +16,9 @@ The result of the operation is always stored at *destkey*. ## Handling of strings with different lengths -When an operation is performed between strings having different lengths, all -the strings shorter than the longest string in the set are treated as if -they were zero-padded up to the length of the longest string. +When an operation is performed between strings having different lengths, all the +strings shorter than the longest string in the set are treated as if they were +zero-padded up to the length of the longest string. The same holds true for non-existing keys, that are considered as a stream of zero bytes up to the length of the longest string. @@ -26,7 +27,8 @@ zero bytes up to the length of the longest string. @integer-reply -The size of the string stored into the destination key, that is equal to the size of the longest input string. +The size of the string stored into the destination key, that is equal to the +size of the longest input string. @examples @@ -41,7 +43,8 @@ The size of the string stored into the destination key, that is equal to the siz `BITOP` is a good complement to the pattern documented in the `BITCOUNT` command documentation. Different bitmaps can be combined in order to obtain a target bitmap where to perform the population counting operation. -See the article called "[Fast easy realtime metrics using Redis bitmaps][bitmaps]" for an interesting use cases. +See the article called "[Fast easy realtime metrics using Redis +bitmaps][bitmaps]" for an interesting use case.
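The zero-padding rule can be sketched in Python (editor's model of the documented `BITOP AND` semantics, not how Redis implements it):

```python
def bitop_and(*values: bytes) -> bytes:
    """AND the inputs together, zero-padding shorter strings to the longest length."""
    width = max(len(v) for v in values)   # length of the longest input string
    out = bytearray(b"\xff" * width)      # identity element for AND
    for v in values:
        padded = v.ljust(width, b"\x00")  # short (or missing) bytes act as zeros
        for i, b in enumerate(padded):
            out[i] &= b
    return bytes(out)

print(bitop_and(b"\xff\xff", b"\x0f"))  # b'\x0f\x00'
```

The result has the length of the longest input, matching the documented integer reply of `BITOP`.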
[bitmaps]: http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps @@ -50,6 +53,6 @@ See the article called "[Fast easy realtime metrics using Redis bitmaps][bitmaps `BITOP` is a potentially slow command as it runs in O(N) time. Care should be taken when running it against long input strings. -For real time metrics and statistics involving large inputs a good approach -is to use a slave (with read-only option disabled) where to perform the -bit-wise operations without blocking the master instance. +For real time metrics and statistics involving large inputs a good approach is +to use a slave (with read-only option disabled) where to perform the bit-wise +operations without blocking the master instance. diff --git a/commands/blpop.md b/commands/blpop.md index 73c7a2b13d..8685a0ea1c 100644 --- a/commands/blpop.md +++ b/commands/blpop.md @@ -21,15 +21,14 @@ that order). ## Blocking behavior -If none of the specified keys exist, `BLPOP` blocks -the connection until another client performs an `LPUSH` or `RPUSH` operation -against one of the keys. +If none of the specified keys exist, `BLPOP` blocks the connection until another +client performs an `LPUSH` or `RPUSH` operation against one of the keys. Once new data is present on one of the lists, the client returns with the name of the key unblocking it and the popped value. -When `BLPOP` causes a client to block and a non-zero timeout is specified, the -client will unblock returning a `nil` multi-bulk value when the specified +When `BLPOP` causes a client to block and a non-zero timeout is specified, +the client will unblock returning a `nil` multi-bulk value when the specified timeout has expired without a push operation against at least one of the specified keys. @@ -38,9 +37,9 @@ be used to block indefinitely. ## Multiple clients blocking for the same keys -Multiple clients can block for the same key. 
They are put into -a queue, so the first to be served will be the one that started to wait -earlier, in a first-`!BLPOP` first-served fashion. +Multiple clients can block for the same key. They are put into a queue, so +the first to be served will be the one that started to wait earlier, in a +first-`!BLPOP` first-served fashion. ## `!BLPOP` inside a `!MULTI`/`!EXEC` transaction @@ -51,8 +50,8 @@ execute the block atomically, which in turn does not allow other clients to perform a push operation. The behavior of `BLPOP` inside `MULTI`/`EXEC` when the list is empty is to -return a `nil` multi-bulk reply, which is the same thing that happens when the -timeout is reached. If you like science fiction, think of time flowing at +return a `nil` multi-bulk reply, which is the same thing that happens when +the timeout is reached. If you like science fiction, think of time flowing at infinite speed inside a `MULTI`/`EXEC` block. @return @@ -76,12 +75,11 @@ infinite speed inside a `MULTI`/`EXEC` block. ## Pattern: Event notification Using blocking list operations it is possible to mount different blocking -primitives. For instance for some application you may need to block -waiting for elements into a Redis Set, so that as far as a new element is -added to the Set, it is possible to retrieve it without resort to polling. -This would require a blocking version of `SPOP` that is -not available, but using blocking list operations we can easily accomplish -this task. +primitives. For instance for some applications you may need to block waiting for +elements in a Redis Set, so that as soon as a new element is added to the Set, +it is possible to retrieve it without resorting to polling. This would require +a blocking version of `SPOP` that is not available, but using blocking list +operations we can easily accomplish this task.
The consumer will do: diff --git a/commands/config get.md b/commands/config get.md index 86632dee42..268dcb5df2 100644 --- a/commands/config get.md +++ b/commands/config get.md @@ -1,7 +1,7 @@ The `CONFIG GET` command is used to read the configuration parameters of a -running Redis server. Not all the configuration parameters are -supported in Redis 2.4, while Redis 2.6 can read the whole configuration of -a server using this command. +running Redis server. Not all the configuration parameters are supported in +Redis 2.4, while Redis 2.6 can read the whole configuration of a server using +this command. The symmetric command used to alter the configuration at run time is `CONFIG SET`. @@ -22,7 +22,8 @@ You can obtain a list of all the supported configuration parameters typing `CONFIG GET *` in an open `redis-cli` prompt. All the supported parameters have the same meaning of the equivalent -configuration parameter used in the [redis.conf][conf] file, with the following important differences: +configuration parameter used in the [redis.conf][conf] file, with the following +important differences: [conf]: http://github.com/antirez/redis/raw/2.2/redis.conf @@ -34,9 +35,9 @@ For instance what in `redis.conf` looks like: save 900 1 save 300 10 -that means, save after 900 seconds if there is at least 1 change to the -dataset, and after 300 seconds if there are at least 10 changes to the -datasets, will be reported by `CONFIG GET` as "900 1 300 10". +that means, save after 900 seconds if there is at least 1 change to the dataset, +and after 300 seconds if there are at least 10 changes to the dataset, will be +reported by `CONFIG GET` as "900 1 300 10". @return diff --git a/commands/config set.md b/commands/config set.md index 04917a1edf..82a632ab63 100644 --- a/commands/config set.md +++ b/commands/config set.md @@ -2,17 +2,17 @@ The `CONFIG SET` command is used in order to reconfigure the server at run time without the need to restart Redis. You can change both trivial parameters or switch from one to another persistence option using this command.
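The flattened `save` representation described above ("900 1 300 10") can be split back into (seconds, changes) pairs; an editor's sketch of the parsing, not code from the docs:

```python
def parse_save(flat: str):
    """Split CONFIG GET's flat "save" value into (seconds, changes) pairs."""
    tokens = [int(t) for t in flat.split()]
    # even positions are the seconds thresholds, odd positions the change counts
    return list(zip(tokens[0::2], tokens[1::2]))

print(parse_save("900 1 300 10"))  # [(900, 1), (300, 10)]
```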
You can change both trivial parameters or switch from one to another persistence option using this command. -The list of configuration parameters supported by `CONFIG SET` can be -obtained issuing a `CONFIG GET *` command, that is the symmetrical command -used to obtain information about the configuration of a running -Redis instance. +The list of configuration parameters supported by `CONFIG SET` can be obtained +issuing a `CONFIG GET *` command, that is the symmetrical command used to obtain +information about the configuration of a running Redis instance. All the configuration parameters set using `CONFIG SET` are immediately loaded by Redis that will start acting as specified starting from the next command executed. All the supported parameters have the same meaning of the equivalent -configuration parameter used in the [redis.conf][conf] file, with the following important differences: +configuration parameter used in the [redis.conf][conf] file, with the following +important differences: [conf]: http://github.com/antirez/redis/raw/2.2/redis.conf @@ -24,9 +24,9 @@ For instance what in `redis.conf` looks like: save 900 1 save 300 10 -that means, save after 900 seconds if there is at least 1 change to the -dataset, and after 300 seconds if there are at least 10 changes to the -datasets, should be set using `CONFIG SET` as "900 1 300 10". +that means, save after 900 seconds if there is at least 1 change to the dataset, +and after 300 seconds if there are at least 10 changes to the datasets, should +be set using `CONFIG SET` as "900 1 300 10". It is possible to switch persistence from RDB snapshotting to append only file (and the other way around) using the `CONFIG SET` command. For more information @@ -40,8 +40,8 @@ In general what you should know is that setting the `appendonly` parameter to commands on the append only file, thus obtaining exactly the same effect of a Redis server that started with AOF turned on since the start. 
-You can have both the AOF enabled with RDB snapshotting if you want, the -two options are not mutually exclusive. +You can have both AOF and RDB snapshotting enabled if you want; the two +options are not mutually exclusive. @return diff --git a/commands/decr.md b/commands/decr.md index a1aeb4c1ad..038ba06b56 100644 --- a/commands/decr.md +++ b/commands/decr.md @@ -1,11 +1,9 @@ -Decrements the number stored at `key` by one. -If the key does not exist, it is set to `0` before performing the operation. An -error is returned if the key contains a value of the wrong type or contains a -string that can not be represented as integer. This operation is limited to **64 -bit signed integers**. +Decrements the number stored at `key` by one. If the key does not exist, it +is set to `0` before performing the operation. An error is returned if the +key contains a value of the wrong type or contains a string that cannot be +represented as an integer. This operation is limited to **64 bit signed integers**. -See `INCR` for extra information on increment/decrement -operations. +See `INCR` for extra information on increment/decrement operations. @return diff --git a/commands/decrby.md b/commands/decrby.md index 28fc3f6bac..48db8d01da 100644 --- a/commands/decrby.md +++ b/commands/decrby.md @@ -1,8 +1,7 @@ -Decrements the number stored at `key` by `decrement`. -If the key does not exist, it is set to `0` before performing the operation. An -error is returned if the key contains a value of the wrong type or contains a -string that can not be represented as integer. This operation is limited to 64 -bit signed integers. +Decrements the number stored at `key` by `decrement`. If the key does not exist, +it is set to `0` before performing the operation. An error is returned if the +key contains a value of the wrong type or contains a string that cannot be +represented as an integer. This operation is limited to 64 bit signed integers.
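The DECR/DECRBY semantics above (missing key treated as 0, non-integer strings rejected, 64-bit signed bounds) can be sketched against a plain dict; an editor's model for illustration, not Redis code:

```python
INT64_MIN, INT64_MAX = -(2**63), 2**63 - 1

def decrby(store: dict, key: str, decrement: int = 1) -> int:
    """Model DECR/DECRBY against a dict of string values."""
    raw = store.get(key, "0")  # a missing key is treated as 0
    try:
        value = int(raw)
    except ValueError:
        raise TypeError("value is not an integer or out of range")
    value -= decrement
    if not INT64_MIN <= value <= INT64_MAX:
        raise OverflowError("result does not fit in a 64 bit signed integer")
    store[key] = str(value)
    return value

store = {"mykey": "10"}
print(decrby(store, "mykey", 3))  # 7
print(decrby(store, "missing"))   # -1
```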
See `INCR` for extra information on increment/decrement operations. diff --git a/commands/del.md b/commands/del.md index 058f1c49f1..84e04c8039 100644 --- a/commands/del.md +++ b/commands/del.md @@ -1,4 +1,4 @@ -Removes the specified keys. A key is ignored if it does not exist. +Removes the specified keys. A key is ignored if it does not exist. @return diff --git a/commands/dump.md b/commands/dump.md index 7ce3eacf6d..3999ab35f3 100644 --- a/commands/dump.md +++ b/commands/dump.md @@ -1,12 +1,16 @@ -Serialize the value stored at key in a Redis-specific format and return it to the user. The returned value can be synthesized back into a Redis key using the `RESTORE` command. +Serialize the value stored at key in a Redis-specific format and return it to +the user. The returned value can be synthesized back into a Redis key using the +`RESTORE` command. -The serialization format is opaque and non-standard, however it has a few semantical characteristics: +The serialization format is opaque and non-standard, however it has a few +semantical characteristics: * It contains a 64bit checksum that is used to make sure errors will be detected. The `RESTORE` command makes sure to check the checksum before synthesizing a key using the serialized value. * Values are encoded in the same format used by RDB. * An RDB version is encoded inside the serialized value, so that different Redis versions with incompatible RDB formats will refuse to process the serialized value. -The serialized value does NOT contain expire information. In order to capture the time to live of the current value the `PTTL` command should be used. +The serialized value does NOT contain expire information. In order to capture +the time to live of the current value the `PTTL` command should be used. If `key` does not exist a nil bulk reply is returned. 
diff --git a/commands/eval.md b/commands/eval.md index b8d0cfba34..f41ac0a30b 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -3,17 +3,18 @@ `EVAL` and `EVALSHA` are used to evaluate scripts using the Lua interpreter built into Redis starting from version 2.6.0. -The first argument of `EVAL` is a Lua 5.1 script. The script does not need -to define a Lua function (and should not). It is just a Lua program that will run in the context of the Redis server. +The first argument of `EVAL` is a Lua 5.1 script. The script does not need to +define a Lua function (and should not). It is just a Lua program that will run +in the context of the Redis server. -The second argument of `EVAL` is the number of arguments that follows -the script (starting from the third argument) that represent Redis key names. -This arguments can be accessed by Lua using the `KEYS` global variable in -the form of a one-based array (so `KEYS[1]`, `KEYS[2]`, ...). +The second argument of `EVAL` is the number of arguments that follow the +script (starting from the third argument) and that represent Redis key names. +These arguments can be accessed by Lua using the `KEYS` global variable in the +form of a one-based array (so `KEYS[1]`, `KEYS[2]`, ...). -All the additional arguments should not represent key names and can -be accessed by Lua using the `ARGV` global variable, very similarly to -what happens with keys (so `ARGV[1]`, `ARGV[2]`, ...). +All the additional arguments should not represent key names and can be accessed +by Lua using the `ARGV` global variable, very similarly to what happens with +keys (so `ARGV[1]`, `ARGV[2]`, ...). The following example should clarify what stated above: @@ -23,12 +24,12 @@ The following example should clarify what stated above: 3) "first" 4) "second" -Note: as you can see Lua arrays are returned as Redis multi bulk -replies, that is a Redis return type that your client library will -likely convert into an Array type in your programming language.
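The argument layout described above can be sketched as a model of how a client splits `EVAL` arguments into the Lua `KEYS` and `ARGV` tables (editor's illustration, not from the docs):

```python
def split_eval_args(numkeys: int, *args: str):
    """The first numkeys arguments populate KEYS, the rest populate ARGV
    (both are exposed to Lua as one-based arrays)."""
    return list(args[:numkeys]), list(args[numkeys:])

keys, argv = split_eval_args(2, "key1", "key2", "first", "second")
print(keys)  # ['key1', 'key2']
print(argv)  # ['first', 'second']
```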
+Note: as you can see Lua arrays are returned as Redis multi bulk replies, that +is a Redis return type that your client library will likely convert into an +Array type in your programming language. -It is possible to call Redis commands from a Lua script using two different -Lua functions: +It is possible to call Redis commands from a Lua script using two different Lua +functions: * `redis.call()` * `redis.pcall()` @@ -39,44 +40,47 @@ error that in turn will force `EVAL` to return an error to the command caller, while `redis.pcall` will trap the error returning a Lua table representing the error. -The arguments of the `redis.call()` and `redis.pcall()` functions are simply -all the arguments of a well formed Redis command: +The arguments of the `redis.call()` and `redis.pcall()` functions are simply all +the arguments of a well formed Redis command: > eval "return redis.call('set','foo','bar')" 0 OK -The above script actually sets the key `foo` to the string `bar`. -However it violates the `EVAL` command semantics as all the keys that the -script uses should be passed using the KEYS array, in the following way: +The above script actually sets the key `foo` to the string `bar`. However it +violates the `EVAL` command semantics as all the keys that the script uses +should be passed using the KEYS array, in the following way: > eval "return redis.call('set',KEYS[1],'bar')" 1 foo OK -The reason for passing keys in the proper way is that, before of `EVAL` all -the Redis commands could be analyzed before execution in order to -establish what are the keys the command will operate on. +The reason for passing keys in the proper way is that, before `EVAL`, all the +Redis commands could be analyzed before execution in order to establish which +keys the command will operate on. -In order for this to be true for `EVAL` also keys must be explicit.
-This is useful in many ways, but especially in order to make sure Redis Cluster -is able to forward your request to the appropriate cluster node (Redis -Cluster is a work in progress, but the scripting feature was designed -in order to play well with it). However this rule is not enforced in order to provide the user with opportunities to abuse the Redis single instance configuration, at the cost of writing scripts not compatible with Redis Cluster. +In order for this to be true for `EVAL` also keys must be explicit. This is +useful in many ways, but especially in order to make sure Redis Cluster is able +to forward your request to the appropriate cluster node (Redis Cluster is a +work in progress, but the scripting feature was designed in order to play well +with it). However this rule is not enforced in order to provide the user with +opportunities to abuse the Redis single instance configuration, at the cost of +writing scripts not compatible with Redis Cluster. -Lua scripts can return a value, that is converted from the Lua type to the Redis protocol using a set of conversion rules. +Lua scripts can return a value, that is converted from the Lua type to the Redis +protocol using a set of conversion rules. ## Conversion between Lua and Redis data types -Redis return values are converted into Lua data types when Lua calls a -Redis command using call() or pcall(). Similarly Lua data types are -converted into Redis protocol when a Lua script returns some value, so that -scripts can control what `EVAL` will reply to the client. +Redis return values are converted into Lua data types when Lua calls a Redis +command using call() or pcall(). Similarly Lua data types are converted into +Redis protocol when a Lua script returns some value, so that scripts can control +what `EVAL` will reply to the client. 
-This conversion between data types is designed in a way that if -a Redis type is converted into a Lua type, and then the result is converted -back into a Redis type, the result is the same as of the initial value. +This conversion between data types is designed in a way that if a Redis type is +converted into a Lua type, and then the result is converted back into a Redis +type, the result is the same as the initial value. -In other words there is a one to one conversion between Lua and Redis types. -The following table shows you all the conversions rules: +In other words there is a one to one conversion between Lua and Redis types. The +following table shows you all the conversion rules: **Redis to Lua** conversion table. @@ -115,30 +119,29 @@ The followings are a few conversion examples: > eval "return redis.call('get','foo')" 0 "bar" -The last example shows how it is possible to directly return from Lua -the return value of `redis.call()` and `redis.pcall()` with the result of -returning exactly what the called command would return if called directly. +The last example shows how it is possible to directly return from Lua the return +value of `redis.call()` and `redis.pcall()` with the result of returning exactly +what the called command would return if called directly. ## Atomicity of scripts Redis uses the same Lua interpreter to run all the commands. Also Redis -guarantees that a script is executed in an atomic way: no other script -or Redis command will be executed while a script is being executed. -This semantics is very similar to the one of `MULTI` / `EXEC`. -From the point of view of all the other clients the effects of a script -are either still not visible or already completed. - -However this also means that executing slow scripts is not a good idea.
-It is not hard to create fast scripts, as the script overhead is very low, -but if you are going to use slow scripts you should be aware that while the -script is running no other client can execute commands since the server -is busy. +guarantees that a script is executed in an atomic way: no other script or Redis +command will be executed while a script is being executed. This semantics is +very similar to the one of `MULTI` / `EXEC`. From the point of view of all the +other clients the effects of a script are either still not visible or already +completed. + +However this also means that executing slow scripts is not a good idea. It is +not hard to create fast scripts, as the script overhead is very low, but if +you are going to use slow scripts you should be aware that while the script is +running no other client can execute commands since the server is busy. ## Error handling As already stated calls to `redis.call()` resulting into a Redis command error -will stop the execution of the script and will return that error back, in a -way that makes it obvious that the error was generated by a script: +will stop the execution of the script and will return that error back, in a way +that makes it obvious that the error was generated by a script: > del foo (integer) 1 @@ -147,17 +150,17 @@ way that makes it obvious that the error was generated by a script: > eval "return redis.call('get','foo')" 0 (error) ERR Error running script (call to f_6b1bf486c81ceb7edf3c093f4c48582e38c0e791): ERR Operation against a key holding the wrong kind of value -Using the `redis.pcall()` command no error is raised, but an error object -is returned in the format specified above (as a Lua table with an `err` -field). The user can later return this exact error to the user just returning -the error object returned by `redis.pcall()`. +Using the `redis.pcall()` command no error is raised, but an error object is +returned in the format specified above (as a Lua table with an `err` field). 
+The user can later return this exact error to the client by just returning the +error object returned by `redis.pcall()`. ## Bandwidth and EVALSHA -The `EVAL` command forces you to send the script body again and again. -Redis does not need to recompile the script every time as it uses an internal -caching mechanism, however paying the cost of the additional bandwidth may -not be optimal in many contexts. +The `EVAL` command forces you to send the script body again and again. Redis +does not need to recompile the script every time as it uses an internal caching +mechanism, however paying the cost of the additional bandwidth may not be +optimal in many contexts. On the other hand defining commands using a special command or via `redis.conf` would be a problem for a few reasons: @@ -168,8 +171,8 @@ would be a problem for a few reasons: * Reading an application code the full semantic could not be clear since the application would call commands defined server side. -In order to avoid the above three problems and at the same time don't incur -in the bandwidth penalty, Redis implements the `EVALSHA` command. +In order to avoid the above three problems and at the same time avoid incurring +the bandwidth penalty, Redis implements the `EVALSHA` command. `EVALSHA` works exactly as `EVAL`, but instead of having a script as first argument it has the SHA1 sum of a script. The behavior is the following: @@ -192,43 +195,42 @@ Example: The client library implementation can always optimistically send `EVALSHA` under the hoods even when the client actually called `EVAL`, in the hope the script -was already seen by the server. If the `NOSCRIPT` error is returned `EVAL` will be used instead. +was already seen by the server. If the `NOSCRIPT` error is returned `EVAL` will +be used instead. -Passing keys and arguments as `EVAL` additional arguments is also -very useful in this context as the script string remains constant and can be -efficiently cached by Redis.
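The digest `EVALSHA` expects is the SHA1 of the script body, which a client can compute locally (editor's sketch; the script string is just an example):

```python
import hashlib

script = "return redis.call('set', KEYS[1], ARGV[1])"
# hex-encoded SHA1 of the script body, usable as: EVALSHA <sha1> 1 foo bar
sha1 = hashlib.sha1(script.encode()).hexdigest()
print(sha1)  # 40 lowercase hex characters
```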
+Passing keys and arguments as `EVAL` additional arguments is also very useful in +this context as the script string remains constant and can be efficiently cached +by Redis. ## Script cache semantics -Executed scripts are guaranteed to be in the script cache **forever**. -This means that if an `EVAL` is performed against a Redis instance all the -subsequent `EVALSHA` calls will succeed. - -The only way to flush the script cache is by explicitly calling the -SCRIPT FLUSH command, that will *completely flush* the scripts cache removing -all the scripts executed so far. This is usually -needed only when the instance is going to be instantiated for another -customer or application in a cloud environment. - -The reason why scripts can be cached for long time is that it is unlikely -for a well written application to have so many different scripts to create -memory problems. Every script is conceptually like the implementation of -a new command, and even a large application will likely have just a few -hundreds of that. Even if the application is modified many times and -scripts will change, still the memory used is negligible. - -The fact that the user can count on Redis not removing scripts -is semantically a very good thing. For instance an application taking -a persistent connection to Redis can stay sure that if a script was -sent once it is still in memory, thus for instance can use EVALSHA -against those scripts in a pipeline without the chance that an error -will be generated since the script is not known (we'll see this problem -in its details later). +Executed scripts are guaranteed to be in the script cache **forever**. This +means that if an `EVAL` is performed against a Redis instance all the subsequent +`EVALSHA` calls will succeed. + +The only way to flush the script cache is by explicitly calling the SCRIPT FLUSH +command, that will *completely flush* the scripts cache removing all the scripts +executed so far. 
This is usually needed only when the instance is going to be
+instantiated for another customer or application in a cloud environment. +
+The reason why scripts can be cached for a long time is that it is unlikely for
+a well-written application to have so many different scripts as to create memory
+problems. Every script is conceptually like the implementation of a new command,
+and even a large application will likely have just a few hundred of them. Even
+if the application is modified many times and scripts change, the memory used
+is still negligible. +
+The fact that the user can count on Redis not removing scripts is semantically a
+very good thing. For instance an application taking a persistent connection to
+Redis can be sure that if a script was sent once it is still in memory, and can
+thus use EVALSHA against those scripts in a pipeline without the risk that an
+error will be generated because the script is not known (we'll see this problem
+in detail later). ## The SCRIPT command -Redis offers a SCRIPT command that can be used in order to control -the scripting subsystem. SCRIPT currently accepts three different commands: +Redis offers a SCRIPT command that can be used in order to control the scripting
+subsystem. SCRIPT currently accepts three different commands: * SCRIPT FLUSH. This command is the only way to force Redis to flush the scripts cache. It is mostly useful in a cloud environment where the same @@ -257,18 +259,18 @@ See the next sections for more information about long running scripts. ## Scripts as pure functions A very important part of scripting is writing scripts that are pure functions. -Scripts executed in a Redis instance are replicated on slaves sending the -same script, instead of the resulting commands. The same happens for the -Append Only File.
The reason is that scripts are much faster than sending -commands one after the other to a Redis instance, so if the client is -taking the master very busy sending scripts, turning this scripts into single -commands for the slave / AOF would result in too much bandwidth for the -replication link or the Append Only File (and also too much CPU since -dispatching a command received via network is a lot more work for Redis -compared to dispatching a command invoked by Lua scripts). - -The only drawback with this approach is that scripts are required to -have the following property: +Scripts executed in a Redis instance are replicated to slaves by sending the
+same script, instead of the resulting commands. The same happens for the Append
+Only File. The reason is that scripts are much faster than sending commands one
+after the other to a Redis instance, so if the client is keeping the master very
+busy sending scripts, turning these scripts into single commands for the slave /
+AOF would result in too much bandwidth for the replication link or the Append
+Only File (and also too much CPU since dispatching a command received via
+network is a lot more work for Redis compared to dispatching a command invoked
+by Lua scripts). +
+The only drawback with this approach is that scripts are required to have the
+following property: * The script always evaluates the same Redis *write* commands with the same arguments given the same input data set. Operations performed by @@ -301,9 +303,9 @@ time a new script is executed. This means that calling `math.random` will always generate the same sequence of numbers every time a script is executed if `math.randomseed` is not used. -However the user is still able to write commands with random behaviors -using the following simple trick. Imagine I want to write a Redis -script that will populate a list with N random integers. +However the user is still able to write commands with random behaviors using
+the following simple trick.
Imagine I want to write a Redis script that will +populate a list with N random integers. I can start writing the following script, using a small Ruby program: @@ -340,11 +342,10 @@ following elements: 9) "0.74990198051087" 10) "0.17082803611217" -In order to make it a pure function, but still making sure that every -invocation of the script will result in different random elements, we can -simply add an additional argument to the script, that will be used in order to -seed the Lua pseudo random number generator. The new script will be like the -following: +In order to make it a pure function, but still making sure that every invocation +of the script will result in different random elements, we can simply add an +additional argument to the script, that will be used in order to seed the Lua +pseudo random number generator. The new script will be like the following: RandomPushScript = < eval 'a=10' 0 (error) ERR Error running script (call to f_933044db579a2f8fd45d8065f04a8d0249383e57): user_script:1: Script attempted to create global variable 'a' @@ -392,7 +394,8 @@ protection, is not hard. However it is hardly possible to do it accidentally. If the user messes with the Lua global state, the consistency of AOF and replication is not guaranteed: don't do it. -Note for Lua newbies: in order to avoid using global variables in your scripts simply declare every variable you are going to use using the *local* keyword. +Note for Lua newbies: in order to avoid using global variables in your scripts +simply declare every variable you are going to use using the *local* keyword. ## Available libraries @@ -406,8 +409,8 @@ The Redis Lua interpreter loads the following Lua libraries: * cjson lib. * cmsgpack lib. -Every Redis instance is *guaranteed* to have all the above libraries so you -can be sure that the environment for your Redis scripts is always the same. 
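The seeding trick used above boils down to a general property: if all randomness is derived from a caller-supplied seed, the script becomes a pure function of its arguments, which is exactly what replication and the AOF need. A minimal Python sketch of the property (illustrative only; Lua's `math.randomseed` behaves analogously):

```python
import random

def pseudo_random_ints(seed: int, count: int) -> list:
    # All randomness flows from the explicit seed argument, so the same
    # arguments always yield the same list: a pure function, safe to
    # replay on a slave or from the Append Only File.
    rng = random.Random(seed)
    return [rng.randrange(2 ** 32) for _ in range(count)]

# Two "replicas" evaluating the same call get identical results.
assert pseudo_random_ints(1234, 5) == pseudo_random_ints(1234, 5)
```

The client supplies the seed (for instance the current time) as a script argument, so the master and every replica replaying the script see the same value.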
+Every Redis instance is *guaranteed* to have all the above libraries so you can
+be sure that the environment for your Redis scripts is always the same. The CJSON library makes it possible to manipulate JSON data in a very fast way from Lua. All the other libraries are standard Lua libraries. @@ -426,8 +429,9 @@ It is possible to write to the Redis log file from Lua scripts using the * `redis.LOG_NOTICE` * `redis.LOG_WARNING` -They exactly correspond to the normal Redis log levels. Only logs emitted by scripting using a log level that is equal or greater than the currently configured -Redis instance log level will be emitted. +They exactly correspond to the normal Redis log levels. Only logs emitted
+by scripting using a log level that is equal to or greater than the currently
+configured Redis instance log level will be emitted. The `message` argument is simply a string. Example: @@ -440,26 +444,24 @@ Will generate the following: ## Sandbox and maximum execution time Scripts should never try to access external systems, like the file system, -nor calling any other system call. A script should just do its work operating -on Redis data and passed arguments. +nor call any other system call. A script should just do its work operating on
+Redis data and passed arguments. Scripts are also subject to a maximum execution time (five seconds by default). This default timeout is huge since a script should usually run in a sub millisecond amount of time. The limit is mostly needed in order to avoid -problems when developing scripts that may loop forever for a programming -error. +problems when developing scripts that may loop forever because of a programming
+error. -It is possible to modify the maximum time a script can be executed -with milliseconds precision, either via `redis.conf` or using the -CONFIG GET / CONFIG SET command. The configuration parameter -affecting max execution time is called `lua-time-limit`.
+It is possible to modify the maximum time a script can be executed with
+millisecond precision, either via `redis.conf` or using the CONFIG GET / CONFIG
+SET command. The configuration parameter affecting max execution time is called
+`lua-time-limit`. -When a script reaches the timeout it is not automatically terminated by -Redis since this violates the contract Redis has with the scripting engine -to ensure that scripts are atomic in nature. Stopping a script half-way means -to possibly leave the dataset with half-written data inside. -For this reasons when a script executes for more than the specified time -the following happens: +When a script reaches the timeout it is not automatically terminated by Redis
+since this violates the contract Redis has with the scripting engine to ensure
+that scripts are atomic in nature. Stopping a script half-way means possibly
+leaving the dataset with half-written data inside. For this reason, when a
+script executes for more than the specified time the following happens: * Redis logs that a script has been running for too long and is still in execution. * It starts accepting commands again from other clients, but will reply with a BUSY error to all the clients sending normal commands. The only allowed commands in this status are `SCRIPT KILL` and `SHUTDOWN NOSAVE`. @@ -469,12 +471,11 @@ the following happens: ## EVALSHA in the context of pipelining Care should be taken when executing `EVALSHA` in the context of a pipelined -request, since even in a pipeline the order of execution of commands must -be guaranteed. If `EVALSHA` will return a `NOSCRIPT` error the command can not -be reissued later otherwise the order of execution is violated. +request, since even in a pipeline the order of execution of commands must be
+guaranteed. If `EVALSHA` returns a `NOSCRIPT` error the command can not be
+reissued later, otherwise the order of execution is violated.
-The client library implementation should take one of the following -approaches: +The client library implementation should take one of the following approaches: * Always use plain `EVAL` when in the context of a pipeline. diff --git a/commands/exec.md b/commands/exec.md index 8d3030ca87..b4a804f4bc 100644 --- a/commands/exec.md +++ b/commands/exec.md @@ -4,9 +4,8 @@ normal. [transactions]: /topics/transactions -When using `WATCH`, `EXEC` will execute commands only if the -watched keys were not modified, allowing for a [check-and-set -mechanism][cas]. +When using `WATCH`, `EXEC` will execute commands only if the watched keys were +not modified, allowing for a [check-and-set mechanism][cas]. [cas]: /topics/transactions#cas @@ -15,5 +14,4 @@ mechanism][cas]. @multi-bulk-reply: each element being the reply to each of the commands in the atomic transaction. -When using `WATCH`, `EXEC` can return a @nil-reply if the execution was -aborted. +When using `WATCH`, `EXEC` can return a @nil-reply if the execution was aborted. diff --git a/commands/expire.md b/commands/expire.md index fdee30d79b..b099bb93ea 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -2,8 +2,8 @@ Set a timeout on `key`. After the timeout has expired, the key will automatically be deleted. A key with an associated timeout is often said to be _volatile_ in Redis terminology. -The timeout is cleared only when the key is removed using the `DEL` command or -overwritten using the `SET` or `GETSET` commands. This means that all the +The timeout is cleared only when the key is removed using the `DEL` command +or overwritten using the `SET` or `GETSET` commands. This means that all the operations that conceptually *alter* the value stored at the key without replacing it with a new one will leave the timeout untouched. 
For instance, incrementing the value of a key with `INCR`, pushing a new value into a list @@ -25,15 +25,15 @@ matter if the original `Key_A` had a timeout associated or not, the new key It is possible to call `EXPIRE` using as argument a key that already has an existing expire set. In this case the time to live of a key is *updated* to the -new value. There are many useful applications for this, an example is -documented in the *Navigation session* pattern section below. +new value. There are many useful applications for this, an example is documented
+in the *Navigation session* pattern section below. ## Differences in Redis prior 2.1.3 -In Redis versions prior **2.1.3** altering a key with an expire set using -a command altering its value had the effect of removing the key entirely. -This semantics was needed because of limitations in the replication layer that -are now fixed. +In Redis versions prior to **2.1.3**, altering a key with an expire set using a
+command altering its value had the effect of removing the key entirely. These
+semantics were needed because of limitations in the replication layer that are
+now fixed. @return @@ -60,8 +60,8 @@ at this set of page views as a *Navigation session* of your user, that may contain interesting information about what kind of products he or she is looking for currently, so that you can recommend related products. -You can easily model this pattern in Redis using the following strategy: -every time the user does a page view you call the following commands: +You can easily model this pattern in Redis using the following strategy: every
+time the user does a page view you call the following commands: MULTI RPUSH pagewviews.user: http://..... @@ -79,43 +79,51 @@ using `RPUSH`. ## Keys with an expire -Normally Redis keys are created without an associated time to live. The key -will simply live forever, unless it is removed by the user in an explicit -way, for instance using the `DEL` command.
+Normally Redis keys are created without an associated time to live. The key will
+simply live forever, unless it is removed by the user in an explicit way, for
+instance using the `DEL` command. The `EXPIRE` family of commands is able to associate an expire to a given key, at the cost of some additional memory used by the key. When a key has an expire set, Redis will make sure to remove the key when the specified amount of time has elapsed. -The key time to live can be updated or entirely removed using the `EXPIRE` and `PERSIST` command (or other strictly related commands). +The key time to live can be updated or entirely removed using the `EXPIRE` and
+`PERSIST` commands (or other strictly related commands). ## Expire accuracy -In Redis 2.4 the expire might not be pin-point accurate, and it could be -between zero to one seconds out. +In Redis 2.4 the expire might not be pin-point accurate, and it could be between
+zero and one second out. Since Redis 2.6 the expire error is from 0 to 1 milliseconds. ## Expires and persistence -Keys expiring information is stored as absolute Unix timestamps (in milliseconds in case of Redis version 2.6 or greater). This means that the time is flowing even when the Redis instance is not active. +Key expire information is stored as absolute Unix timestamps (in milliseconds
+in the case of Redis version 2.6 or greater). This means that time keeps flowing
+even when the Redis instance is not active. -For expires to work well, the computer time must be taken stable. If you move an RDB file from two computers with a big desync in their clocks, funny things may happen (like all the keys loaded to be expired at loading time). +For expires to work well, the computer time must be stable. If you move an
+RDB file between two computers with a big desync in their clocks, funny things
+may happen (like all the keys loaded being expired at loading time).
-Even running instances will always check the computer clock, so for instance if you set a key with a time to live of 1000 seconds, and then set your computer time 2000 seconds in the future, the key will be expired immediately, instead of lasting for 1000 seconds. +Even running instances will always check the computer clock, so for instance if
+you set a key with a time to live of 1000 seconds, and then set your computer
+time 2000 seconds in the future, the key will be expired immediately, instead of
+lasting for 1000 seconds. ## How Redis expires keys Redis keys are expired in two ways: a passive way, and an active way. -A key is actively expired simply when some client tries to access it, and -the key is found to be timed out. +A key is passively expired simply when some client tries to access it, and the
+key is found to be timed out. -Of course this is not enough as there are expired keys that will never -be accessed again. This keys should be expired anyway, so periodically -Redis test a few keys at random among keys with an expire set. -All the keys that are already expired are deleted from the keyspace. +Of course this is not enough as there are expired keys that will never be
+accessed again. These keys should be expired anyway, so periodically Redis tests
+a few keys at random among keys with an expire set. All the keys that are
+already expired are deleted from the keyspace. Specifically this is what Redis does 10 times per second: @@ -123,24 +131,23 @@ Specifically this is what Redis does 10 times per second: 2. Delete all the keys found expired. 3. If more than 25 keys were expired, start again from step 1.
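The sampling loop above can be simulated against a plain in-memory dict. This is a sketch of the idea only, not Redis internals: the names are invented, and the repeat condition is expressed as "more than 25% of the sample was expired":

```python
import random
import time

def expire_cycle(db, sample_size=20, now=None):
    """One active-expiration pass over `db`, a dict mapping
    key -> (value, expire_at_unix_time_or_None). Sample keys carrying a
    TTL, delete the expired ones, and repeat while more than 25% of the
    sample turned out to be expired."""
    now = time.time() if now is None else now
    while True:
        with_ttl = [k for k, (_, exp) in db.items() if exp is not None]
        if not with_ttl:
            return
        sample = random.sample(with_ttl, min(sample_size, len(with_ttl)))
        expired = [k for k in sample if db[k][1] <= now]
        for k in expired:
            del db[k]
        if len(expired) <= 0.25 * len(sample):
            return
```

Running one cycle over a dict full of already-expired keys keeps looping until they are all gone, while keys without a TTL are never touched; with mostly-live keys the loop exits after a single sample, which is what keeps the cost bounded.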
-This is a trivial probabilistic algorithm, basically the assumption is -that our sample is representative of the whole key space, -and we continue to expire until the percentage of keys that are likely -to be expired is under 25% +This is a trivial probabilistic algorithm, basically the assumption is that our
+sample is representative of the whole key space, and we continue to expire until
+the percentage of keys that are likely to be expired is under 25%. -This means that at any given moment the maximum amount of keys already -expired that are using memory is at max equal to max amount of write -operations per second divided by 4. +This means that at any given moment the maximum amount of keys already expired
+that are using memory is at most equal to the maximum amount of write operations
+per second divided by 4. ## How expires are handled in the replication link and AOF file -In order to obtain a correct behavior without sacrificing consistency, when -a key expires, a `DEL` operation is synthesized in both the AOF file and gains -all the attached slaves. This way the expiration process is centralized in -the master instance, and there is no chance of consistency errors. +In order to obtain a correct behavior without sacrificing consistency, when a
+key expires, a `DEL` operation is synthesized in the AOF file and propagated to
+all the attached slaves. This way the expiration process is centralized in the
+master instance, and there is no chance of consistency errors. However while the slaves connected to a master will not expire keys independently (but will wait for the `DEL` coming from the master), they'll still take the full state of the expires existing in the dataset, so when a -slave is elected to a master it will be able to expire the keys -independently, fully acting as a master. +slave is elected to a master it will be able to expire the keys independently,
+fully acting as a master.
diff --git a/commands/expireat.md b/commands/expireat.md index 71a1b9aae5..42196b88e6 100644 --- a/commands/expireat.md +++ b/commands/expireat.md @@ -1,7 +1,8 @@ `EXPIREAT` has the same effect and semantic as `EXPIRE`, but instead of specifying the number of seconds representing the TTL (time to live), it takes an absolute [Unix timestamp][2] (seconds since January 1, 1970). -Please for the specific semantics of the command refer to the documentation of `EXPIRE`. +Please refer to the documentation of `EXPIRE` for the specific semantics of the
+command. [2]: http://en.wikipedia.org/wiki/Unix_time diff --git a/commands/flushall.md b/commands/flushall.md index f5ba483bc5..517a357c63 100644 --- a/commands/flushall.md +++ b/commands/flushall.md @@ -1,4 +1,5 @@ -Delete all the keys of all the existing databases, not just the currently selected one. This command never fails. +Delete all the keys of all the existing databases, not just the currently
+selected one. This command never fails. @return diff --git a/commands/get.md b/commands/get.md index baf29780bc..a7543eb16c 100644 --- a/commands/get.md +++ b/commands/get.md @@ -1,6 +1,6 @@ -Get the value of `key`. If the key does not exist the special value `nil` is returned. -An error is returned if the value stored at `key` is not a string, because `GET` -only handles string values. +Get the value of `key`. If the key does not exist the special value `nil` is
+returned. An error is returned if the value stored at `key` is not a string,
+because `GET` only handles string values. @return diff --git a/commands/hdel.md b/commands/hdel.md index 99928296f2..65e6b3c570 100644 --- a/commands/hdel.md +++ b/commands/hdel.md @@ -1,6 +1,6 @@ Removes the specified fields from the hash stored at `key`. Specified fields -that do not exist within this hash are ignored. -If `key` does not exist, it is treated as an empty hash and this command returns +that do not exist within this hash are ignored.
If `key` does not exist, it is +treated as an empty hash and this command returns +`0`. @return diff --git a/commands/hgetall.md b/commands/hgetall.md index f4072756c8..d96c85cc4f 100644 --- a/commands/hgetall.md +++ b/commands/hgetall.md @@ -1,6 +1,6 @@ Returns all fields and values of the hash stored at `key`. In the returned -value, every field name is followed by its value, so the length -of the reply is twice the size of the hash. +value, every field name is followed by its value, so the length of the reply is
+twice the size of the hash. @return diff --git a/commands/hincrby.md b/commands/hincrby.md index abc7cc5eac..6514dadb0e 100644 --- a/commands/hincrby.md +++ b/commands/hincrby.md @@ -3,8 +3,7 @@ Increments the number stored at `field` in the hash stored at `key` by `field` does not exist the value is set to `0` before the operation is performed. -The range of values supported by `HINCRBY` is limited to 64 bit signed -integers. +The range of values supported by `HINCRBY` is limited to 64 bit signed integers. @return diff --git a/commands/hincrbyfloat.md b/commands/hincrbyfloat.md index 2668c09049..50d6fa9446 100644 --- a/commands/hincrbyfloat.md +++ b/commands/hincrbyfloat.md @@ -1,9 +1,14 @@ -Increment the specified `field` of an hash stored at `key`, and representing a floating point number, by the specified `increment`. If the field does not exist, it is set to `0` before performing the operation. An error is returned if one of the following conditions occur: +Increment the specified `field` of a hash stored at `key`, representing
+a floating point number, by the specified `increment`. If the field does not
+exist, it is set to `0` before performing the operation. An error is returned if
+one of the following conditions occurs: * The field contains a value of the wrong type (not a string). * The current field content or the specified increment are not parsable as a double precision floating point number.
-The exact behavior of this command is identical to the one of the `INCRBYFLOAT` command, please refer to the documentation of `INCRBYFLOAT` for further information. +The exact behavior of this command is identical to the one of the `INCRBYFLOAT`
+command; please refer to the documentation of `INCRBYFLOAT` for further
+information. @return @@ -19,5 +24,6 @@ The exact behavior of this command is identical to the one of the `INCRBYFLOAT` ## Implementation details -The command is always propagated in the replication link and the Append Only File as a `HSET` operation, so that differences in the underlying floating point +The command is always propagated in the replication link and the Append Only
+File as an `HSET` operation, so that differences in the underlying floating point
math implementation will not be sources of inconsistency. diff --git a/commands/hmget.md b/commands/hmget.md index ea08ce6b7c..66839082cd 100644 --- a/commands/hmget.md +++ b/commands/hmget.md @@ -2,8 +2,8 @@ Returns the values associated with the specified `fields` in the hash stored at `key`. For every `field` that does not exist in the hash, a `nil` value is returned. -Because a non-existing keys are treated as empty hashes, running `HMGET` -against a non-existing `key` will return a list of `nil` values. +Because non-existing keys are treated as empty hashes, running `HMGET` against
+a non-existing `key` will return a list of `nil` values. @return diff --git a/commands/hmset.md b/commands/hmset.md index 2b27655d6c..e2ad3e058f 100644 --- a/commands/hmset.md +++ b/commands/hmset.md @@ -1,6 +1,6 @@ -Sets the specified fields to their respective values in the hash -stored at `key`. This command overwrites any existing fields in the hash. -If `key` does not exist, a new key holding a hash is created. +Sets the specified fields to their respective values in the hash stored at
+`key`. This command overwrites any existing fields in the hash.
If `key` does +not exist, a new key holding a hash is created. @return diff --git a/commands/hset.md b/commands/hset.md index f0f76ff454..0fd56764f7 100644 --- a/commands/hset.md +++ b/commands/hset.md @@ -1,6 +1,6 @@ Sets `field` in the hash stored at `key` to `value`. If `key` does not exist, a -new key holding a hash is created. If `field` already exists in the hash, it -is overwritten. +new key holding a hash is created. If `field` already exists in the hash, it is +overwritten. @return diff --git a/commands/incr.md b/commands/incr.md index 7c8b7d351c..488974cdde 100644 --- a/commands/incr.md +++ b/commands/incr.md @@ -1,16 +1,15 @@ -Increments the number stored at `key` by one. -If the key does not exist, it is set to `0` before performing the operation. An -error is returned if the key contains a value of the wrong type or contains a -string that can not be represented as integer. This operation is limited to 64 -bit signed integers. +Increments the number stored at `key` by one. If the key does not exist, it +is set to `0` before performing the operation. An error is returned if the +key contains a value of the wrong type or contains a string that can not be +represented as integer. This operation is limited to 64 bit signed integers. **Note**: this is a string operation because Redis does not have a dedicated integer type. The the string stored at the key is interpreted as a base-10 **64 bit signed integer** to execute the operation. Redis stores integers in their integer representation, so for string values -that actually hold an integer, there is no overhead for storing the -string representation of the integer. +that actually hold an integer, there is no overhead for storing the string +representation of the integer. @return @@ -44,12 +43,12 @@ This simple pattern can be extended in many ways: The rate limiter pattern is a special counter that is used to limit the rate at which an operation can be performed. 
The classical materialization of this -pattern involves limiting the number of requests that can be performed against -a public API. +pattern involves limiting the number of requests that can be performed against a
+public API. We provide two implementations of this pattern using `INCR`, where we assume -that the problem to solve is limiting the number of API calls to a maximum -of *ten requests per second per IP address*. +that the problem to solve is limiting the number of API calls to a maximum of
+*ten requests per second per IP address*. ## Pattern: Rate limiter 1 @@ -69,19 +68,17 @@ The more simple and direct implementation of this pattern is the following: PERFORM_API_CALL() END -Basically we have a counter for every IP, for every different second. -But this counters are always incremented setting an expire of 10 seconds so -that they'll be removed by Redis automatically when the current second is -a different one. +Basically we have a counter for every IP, for every different second. But these
+counters are always incremented setting an expire of 10 seconds, so that they'll
+be removed by Redis automatically when the current second is a different one. Note the use of `MULTI` and `EXEC` in order to make sure that we'll both increment and set the expire at every API call. ## Pattern: Rate limiter 2 -An alternative implementation uses a single counter, but is a bit more -complex to get it right without race conditions. We'll examine different -variants. +An alternative implementation uses a single counter, but is a bit more complex
+to get right without race conditions. We'll examine different variants. FUNCTION LIMIT_API_CALL(ip): current = GET(ip) @@ -104,9 +101,9 @@ from the first request performed in the current second. If there are more than client performs the `INCR` command but does not perform the `EXPIRE` the key will be leaked until we see the same IP address again.
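The first variant can be dry-run without a server by mapping the per-(IP, second) counter onto a dict. This is an illustrative stand-in, not a Redis client: the dict entries play the role of the keys that Redis would expire automatically after 10 seconds:

```python
import time

LIMIT = 10       # maximum calls per IP per second
counters = {}    # (ip, second) -> count; stale entries stand in for
                 # the keys Redis would expire on its own

def limit_api_call(ip, now=None):
    """Return True if the call is allowed, False once the IP exceeds
    LIMIT calls within the current second."""
    now = time.time() if now is None else now
    key = (ip, int(now))
    current = counters.get(key, 0)
    if current >= LIMIT:
        return False
    counters[key] = current + 1
    return True

# 12 calls in the same second: exactly the first 10 get through.
allowed = sum(limit_api_call("1.2.3.4", now=100.0) for _ in range(12))
print(allowed)  # 10
```

Because the key embeds the current second, the counter resets naturally as time moves on, which is the same effect the `EXPIRE` achieves in the Redis version.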
-This can be fixed easily turning the `INCR` with optional `EXPIRE` into a -Lua script that is send using the `EVAL` command (only available since Redis -version 2.6). +This can be fixed easily by turning the `INCR` with optional `EXPIRE` into a Lua
+script that is sent using the `EVAL` command (only available since Redis version
+2.6). local current current = redis.call("incr",KEYS[1]) @@ -115,8 +112,10 @@ version 2.6). end There is a different way to fix this issue without using scripting, but using -Redis lists instead of counters. -The implementation is more complex and uses more advanced features but has the advantage of remembering the IP addresses of the clients currently performing an API call, that may be useful or not depending on the application. +Redis lists instead of counters. The implementation is more complex and uses
+more advanced features but has the advantage of remembering the IP addresses
+of the clients currently performing an API call, which may be useful or not
+depending on the application. FUNCTION LIMIT_API_CALL(ip) current = LLEN(ip) @@ -136,6 +135,8 @@ The implementation is more complex and uses more advanced features but has the a The `RPUSHX` command only pushes the element if the key already exists. -Note that we have a race here, but it is not a problem: `EXISTS` may return false but the key may be created by another client before we create it inside the +Note that we have a race here, but it is not a problem: `EXISTS` may return
+false but the key may be created by another client before we create it inside
+the `MULTI`/`EXEC` block. However this race will just miss an API call under rare conditions, so the rate limiting will still work correctly. diff --git a/commands/incrby.md b/commands/incrby.md index bf1b25b80e..4e121d9be5 100644 --- a/commands/incrby.md +++ b/commands/incrby.md @@ -1,8 +1,7 @@ -Increments the number stored at `key` by `increment`. -If the key does not exist, it is set to `0` before performing the operation.
An -error is returned if the key contains a value of the wrong type or contains a -string that can not be represented as integer. This operation is limited to 64 -bit signed integers. +Increments the number stored at `key` by `increment`. If the key does not exist,
+it is set to `0` before performing the operation. An error is returned if the
+key contains a value of the wrong type or contains a string that can not be
+represented as an integer. This operation is limited to 64 bit signed integers. See `INCR` for extra information on increment/decrement operations. diff --git a/commands/incrbyfloat.md b/commands/incrbyfloat.md index 663dae098a..4e8d47986f 100644 --- a/commands/incrbyfloat.md +++ b/commands/incrbyfloat.md @@ -1,14 +1,20 @@ -Increment the string representing a floating point number stored at `key` by -the specified `increment`. If the key does not exist, it is set to `0` before performing the operation. An error is returned if one of the following conditions occur: +Increment the string representing a floating point number stored at `key`
+by the specified `increment`. If the key does not exist, it is set to `0`
+before performing the operation. An error is returned if one of the following
+conditions occurs: * The key contains a value of the wrong type (not a string). * The current key content or the specified increment are not parsable as a double precision floating point number. -If the command is successful the new incremented value is stored as the new value of the key (replacing the old one), and returned to the caller as a string. +If the command is successful the new incremented value is stored as the new
+value of the key (replacing the old one), and returned to the caller as a
+string.
Both the value already contained in the string key and the increment argument can be optionally provided in exponential notation, however the value computed -after the increment is stored consistently in the same format, that is, an integer number followed (if needed) by a dot, and a variable number of digits representing the decimal part of the number. Trailing zeroes are always removed. +after the increment is stored consistently in the same format, that is, an +integer number followed (if needed) by a dot, and a variable number of digits +representing the decimal part of the number. Trailing zeroes are always removed. The precision of the output is fixed at 17 digits after the decimal point regardless of the actual internal precision of the computation. @@ -27,5 +33,6 @@ regardless of the actual internal precision of the computation. ## Implementation details -The command is always propagated in the replication link and the Append Only File as a `SET` operation, so that differences in the underlying floating point +The command is always propagated in the replication link and the Append Only +File as a `SET` operation, so that differences in the underlying floating point math implementation will not be sources of inconsistency. diff --git a/commands/info.md b/commands/info.md index 3bb5ebfd5d..a679501a87 100644 --- a/commands/info.md +++ b/commands/info.md @@ -1,5 +1,5 @@ -The `INFO` command returns information and statistics about the server -in a format that is simple to parse by computers and easy to read by humans. +The `INFO` command returns information and statistics about the server in a +format that is simple to parse by computers and easy to read by humans. @return diff --git a/commands/keys.md b/commands/keys.md index e96b2a6537..234fb4def8 100644 --- a/commands/keys.md +++ b/commands/keys.md @@ -1,8 +1,8 @@ Returns all keys matching `pattern`. -While the time complexity for this operation is O(N), the constant -times are fairly low. 
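The O(N) scan that `KEYS` performs can be approximated in Python with `fnmatch` — noting that this is only a rough stand-in, since Redis implements its own glob matcher with some differences (for example Redis negates character classes with `[^...]` where `fnmatch` uses `[!...]`):

```python
from fnmatch import fnmatchcase

def keys(store, pattern):
    """Rough model of KEYS: a full O(N) scan of every key in the keyspace
    against a glob-style pattern. fnmatch only approximates Redis's own
    matcher (which differs in details such as class negation syntax)."""
    return [k for k in store if fnmatchcase(k, pattern)]
```

The list comprehension makes the cost model explicit: every key is visited regardless of how selective the pattern is, which is exactly why the warning above discourages `KEYS` on large production datasets.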
For example, Redis running on an entry level laptop can -scan a 1 million key database in 40 milliseconds. +While the time complexity for this operation is O(N), the constant times are +fairly low. For example, Redis running on an entry level laptop can scan a 1 +million key database in 40 milliseconds. **Warning**: consider `KEYS` as a command that should only be used in production environments with extreme care. It may ruin performance when it is diff --git a/commands/lastsave.md b/commands/lastsave.md index 32b18ec259..93c5cbcce5 100644 --- a/commands/lastsave.md +++ b/commands/lastsave.md @@ -1,7 +1,7 @@ -Return the UNIX TIME of the last DB save executed with success. -A client may check if a `BGSAVE` command succeeded reading the `LASTSAVE` -value, then issuing a `BGSAVE` command and checking at regular intervals -every N seconds if `LASTSAVE` changed. +Return the UNIX TIME of the last DB save executed with success. A client may +check if a `BGSAVE` command succeeded by reading the `LASTSAVE` value, then +issuing a `BGSAVE` command and checking at regular intervals every N seconds if +`LASTSAVE` changed. @return diff --git a/commands/lindex.md b/commands/lindex.md index e88bd16855..7432ba5851 100644 --- a/commands/lindex.md +++ b/commands/lindex.md @@ -1,8 +1,8 @@ -Returns the element at index `index` in the list stored at `key`. -The index is zero-based, so `0` means the first element, `1` the second -element and so on. Negative indices can be used to designate elements -starting at the tail of the list. Here, `-1` means the last element, `-2` means -the penultimate and so forth. +Returns the element at index `index` in the list stored at `key`. The index +is zero-based, so `0` means the first element, `1` the second element and so +on. Negative indices can be used to designate elements starting at the tail of +the list. Here, `-1` means the last element, `-2` means the penultimate and so +forth. When the value at `key` is not a list, an error is returned.
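The `LINDEX` indexing rules just described — zero-based from the head, negative from the tail, nil when out of range — map almost directly onto Python lists; a minimal sketch:

```python
def lindex(lst, index):
    """Model of LINDEX: 0 is the head, -1 the tail, and an out-of-range
    index yields nil (None here) rather than an error."""
    if -len(lst) <= index < len(lst):
        return lst[index]
    return None
```

The only difference from raw Python indexing is the bounds check: Python raises `IndexError` where Redis replies nil.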
diff --git a/commands/linsert.md b/commands/linsert.md index 3a5ed458c9..9fb680f003 100644 --- a/commands/linsert.md +++ b/commands/linsert.md @@ -1,5 +1,5 @@ -Inserts `value` in the list stored at `key` either before or after the -reference value `pivot`. +Inserts `value` in the list stored at `key` either before or after the reference +value `pivot`. When `key` does not exist, it is considered an empty list and no operation is performed. diff --git a/commands/llen.md b/commands/llen.md index 6e5eeeec2c..d61808551f 100644 --- a/commands/llen.md +++ b/commands/llen.md @@ -1,6 +1,6 @@ -Returns the length of the list stored at `key`. -If `key` does not exist, it is interpreted as an empty list and `0` is returned. -An error is returned when the value stored at `key` is not a list. +Returns the length of the list stored at `key`. If `key` does not exist, it is +interpreted as an empty list and `0` is returned. An error is returned when the +value stored at `key` is not a list. @return diff --git a/commands/lpush.md b/commands/lpush.md index a3a497d7fb..513b82f811 100644 --- a/commands/lpush.md +++ b/commands/lpush.md @@ -1,9 +1,13 @@ -Insert all the specified values at the head of the list stored at `key`. -If `key` does not exist, it is created as empty list before performing -the push operations. -When `key` holds a value that is not a list, an error is returned. - -It is possible to push multiple elements using a single command call just specifying multiple arguments at the end of the command. Elements are inserted one after the other to the head of the list, from the leftmost element to the rightmost element. So for instance the command `LPUSH mylist a b c` will result into a list containing `c` as first element, `b` as second element and `a` as third element. +Insert all the specified values at the head of the list stored at `key`. If +`key` does not exist, it is created as an empty list before performing the push +operations.
When `key` holds a value that is not a list, an error is returned. + +It is possible to push multiple elements using a single command call by just +specifying multiple arguments at the end of the command. Elements are inserted +one after the other to the head of the list, from the leftmost element to the +rightmost element. So for instance the command `LPUSH mylist a b c` will result +in a list containing `c` as first element, `b` as second element and `a` as +third element. @return diff --git a/commands/lpushx.md b/commands/lpushx.md index 10a85c40b2..eeda9bc025 100644 --- a/commands/lpushx.md +++ b/commands/lpushx.md @@ -1,6 +1,6 @@ -Inserts `value` at the head of the list stored at `key`, only if `key` -already exists and holds a list. In contrary to `LPUSH`, no operation will -be performed when `key` does not yet exist. +Inserts `value` at the head of the list stored at `key`, only if `key` already +exists and holds a list. Contrary to `LPUSH`, no operation will be performed +when `key` does not yet exist. @return diff --git a/commands/lrange.md b/commands/lrange.md index 66f9ac0e7c..c6468110c9 100644 --- a/commands/lrange.md +++ b/commands/lrange.md @@ -1,4 +1,4 @@ -Returns the specified elements of the list stored at `key`. The offsets +Returns the specified elements of the list stored at `key`. The offsets `start` and `stop` are zero-based indexes, with `0` being the first element of the list (the head of the list), `1` being the next element and so on. @@ -8,18 +8,17 @@ penultimate, and so on. ## Consistency with range functions in various programming languages -Note that if you have a list of numbers from 0 to 100, `LRANGE list 0 10` will -return 11 elements, that is, the rightmost item is included.
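The inclusive-range behavior noted above (11 elements for `LRANGE list 0 10`) can be made concrete with a Python sketch, since Python's own slicing is exclusive on the right:

```python
def lrange(lst, start, stop):
    """Model of LRANGE: both ends inclusive, negative indexes count from
    the tail, and out-of-range indexes are clamped rather than erroring."""
    n = len(lst)
    if start < 0:
        start = max(n + start, 0)
    if stop < 0:
        stop = n + stop
    if start > stop or start >= n:
        return []
    return lst[start:min(stop, n - 1) + 1]
```

The `+ 1` on the slice bound is exactly the inclusive-vs-exclusive difference the section warns about.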
This **may +or may not** be consistent with behavior of range-related functions in your programming language of choice (think Ruby's `Range.new`, `Array#slice` or Python's `range()` function). ## Out-of-range indexes Out of range indexes will not produce an error. If `start` is larger than the -end of the list, an empty list is returned. If `stop` is -larger than the actual end of the list, Redis will treat it like the last -element of the list. +end of the list, an empty list is returned. If `stop` is larger than the actual +end of the list, Redis will treat it like the last element of the list. @return diff --git a/commands/lrem.md b/commands/lrem.md index 418aca73c7..0ba5010b6a 100644 --- a/commands/lrem.md +++ b/commands/lrem.md @@ -1,6 +1,6 @@ -Removes the first `count` occurrences of elements equal to `value` from the -list stored at `key`. The `count` argument influences the operation in the -following ways: +Removes the first `count` occurrences of elements equal to `value` from the list +stored at `key`. The `count` argument influences the operation in the following +ways: * `count > 0`: Remove elements equal to `value` moving from head to tail. * `count < 0`: Remove elements equal to `value` moving from tail to head. @@ -9,8 +9,8 @@ following ways: For example, `LREM list -2 "hello"` will remove the last two occurrences of `"hello"` in the list stored at `list`. -Note that non-existing keys are treated like empty lists, so when `key` does -not exist, the command will always return `0`. +Note that non-existing keys are treated like empty lists, so when `key` does not +exist, the command will always return `0`. @return diff --git a/commands/ltrim.md b/commands/ltrim.md index 2b649e5f22..7d4e98a5cd 100644 --- a/commands/ltrim.md +++ b/commands/ltrim.md @@ -1,6 +1,6 @@ Trim an existing list so that it will contain only the specified range of -elements specified. 
Both `start` and `stop` are zero-based indexes, where `0` -is the first element of the list (the head), `1` the next element and so on. +elements. Both `start` and `stop` are zero-based indexes, where `0` is +the first element of the list (the head), `1` the next element and so on. For example: `LTRIM foobar 0 2` will modify the list stored at `foobar` so that only the first three elements of the list will remain. @@ -11,8 +11,8 @@ element and so on. Out of range indexes will not produce an error: if `start` is larger than the end of the list, or `start > end`, the result will be an empty list (which -causes `key` to be removed). If `end` is larger than the end of the list, -Redis will treat it like the last element of the list. +causes `key` to be removed). If `end` is larger than the end of the list, Redis +will treat it like the last element of the list. A common use of `LTRIM` is together with `LPUSH`/`RPUSH`. For example: diff --git a/commands/mget.md b/commands/mget.md index 899a354edc..fbd49197f8 100644 --- a/commands/mget.md +++ b/commands/mget.md @@ -1,6 +1,6 @@ -Returns the values of all specified keys. For every key that does not hold a string value -or does not exist, the special value `nil` is returned. -Because of this, the operation never fails. +Returns the values of all specified keys. For every key that does not hold a +string value or does not exist, the special value `nil` is returned. Because of +this, the operation never fails. @return diff --git a/commands/migrate.md b/commands/migrate.md index d8383d947c..3fe1a682a0 100644 --- a/commands/migrate.md +++ b/commands/migrate.md @@ -1,20 +1,35 @@ -Atomically transfer a key from a source Redis instance to a destination Redis instance. On success the key is deleted from the original instance and is guaranteed to exist in the target instance.
- -The command is atomic and blocks the two instances for the time required to transfer the key, at any given time the key will appear to exist in a given instance or in the other instance, unless a timeout error occurs. - -The command internally uses `DUMP` to generate the serialized version of the key value, and `RESTORE` in order to synthesize the key in the target instance. -The source instance acts as a client for the target instance. If the target instance returns OK to the `RESTORE` command, the source instance deletes the key using `DEL`. - -The timeout specifies the maximum idle time in any moment of the communication with the destination instance in milliseconds. This means that the operation does not need to be completed within the specified amount of milliseconds, but that the transfer should make progresses without blocking for more than the specified amount of milliseconds. +Atomically transfer a key from a source Redis instance to a destination Redis +instance. On success the key is deleted from the original instance and is +guaranteed to exist in the target instance. + +The command is atomic and blocks the two instances for the time required to +transfer the key; at any given time the key will appear to exist in a given +instance or in the other instance, unless a timeout error occurs. + +The command internally uses `DUMP` to generate the serialized version of the key +value, and `RESTORE` in order to synthesize the key in the target instance. The +source instance acts as a client for the target instance. If the target instance +returns OK to the `RESTORE` command, the source instance deletes the key using +`DEL`. + +The timeout specifies the maximum idle time in any moment of the communication +with the destination instance in milliseconds.
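The `DUMP`/`RESTORE`/`DEL` sequence described above can be sketched as a logical model with dict-backed stand-ins for the two instances (`pickle` is a placeholder for Redis's RDB serialization; the real command also handles timeouts and the wire protocol):

```python
import pickle

def migrate(source, target, key):
    """Sketch of MIGRATE's logical steps: DUMP on the source, RESTORE on
    the target, then DEL on the source only after the target accepted the
    payload. Both "instances" are plain dicts in this model."""
    if key not in source:
        return "NOKEY"
    payload = pickle.dumps(source[key])   # DUMP: serialize the value
    target[key] = pickle.loads(payload)   # RESTORE on the target instance
    del source[key]                       # DEL only once the target has it
    return "OK"
```

The ordering is the point: because the delete happens last, a failure mid-transfer can leave the key on both instances but never on neither, matching the timeout guarantees described above.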
This means that the operation +does not need to be completed within the specified amount of milliseconds, but +that the transfer should make progress without blocking for more than the +specified amount of milliseconds. `MIGRATE` needs to perform I/O operations and to honor the specified timeout. When there is an I/O error during the transfer or if the timeout is reached the operation is aborted and the special error -`IOERR` returned. When this happens the following two cases are possible: * The key may be on both the instances. * The key may be only in the source instance. -It is not possible for the key to get lost in the event of a timeout, but the client calling `MIGRATE`, in the event of a timeout error, should check if the key is *also* present in the target instance and act accordingly. +It is not possible for the key to get lost in the event of a timeout, but the +client calling `MIGRATE`, in the event of a timeout error, should check if the +key is *also* present in the target instance and act accordingly. -When any other error is returned (starting with `ERR`) `MIGRATE` guarantees that the key is still only present in the originating instance (unless a key with the same name was also *already* present on the target instance). +When any other error is returned (starting with `ERR`) `MIGRATE` guarantees that +the key is still only present in the originating instance (unless a key with the +same name was also *already* present on the target instance). On success OK is returned. diff --git a/commands/monitor.md b/commands/monitor.md index 16133672b1..d1e84c56f8 100644 --- a/commands/monitor.md +++ b/commands/monitor.md @@ -3,9 +3,9 @@ processed by the Redis server. It can help in understanding what is happening to the database. This command can both be used via `redis-cli` and via `telnet`.
-The ability to see all the requests processed by the server is useful in -order to spot bugs in an application both when using Redis as a database -and as a distributed caching system. +The ability to see all the requests processed by the server is useful in order +to spot bugs in an application both when using Redis as a database and as a +distributed caching system. $ redis-cli monitor 1339518083.107412 [0 127.0.0.1:60866] "keys" "*" @@ -39,9 +39,9 @@ Manually issue the `QUIT` command to stop a `MONITOR` stream running via ## Cost of running `MONITOR` -Because `MONITOR` streams back **all** commands, its use comes at a -cost. The following (totally unscientific) benchmark numbers illustrate -what the cost of running `MONITOR` can be. +Because `MONITOR` streams back **all** commands, its use comes at a cost. The +following (totally unscientific) benchmark numbers illustrate what the cost of +running `MONITOR` can be. Benchmark result **without** `MONITOR` running: @@ -62,9 +62,9 @@ Benchmark result **with** `MONITOR` running (`redis-cli monitor > GET: 45330.91 requests per second INCR: 41771.09 requests per second -In this particular case, running a single `MONITOR` client can reduce -the throughput by more than 50%. Running more `MONITOR` clients will -reduce throughput even more. +In this particular case, running a single `MONITOR` client can reduce the +throughput by more than 50%. Running more `MONITOR` clients will reduce +throughput even more. @return diff --git a/commands/move.md b/commands/move.md index 7af84ddd65..864aeaecf0 100644 --- a/commands/move.md +++ b/commands/move.md @@ -1,7 +1,7 @@ Move `key` from the currently selected database (see `SELECT`) to the specified destination database. When `key` already exists in the destination database, or -it does not exist in the source database, it does nothing. It is possible to -use `MOVE` as a locking primitive because of this. +it does not exist in the source database, it does nothing. 
It is possible to use +`MOVE` as a locking primitive because of this. @return diff --git a/commands/mset.md b/commands/mset.md index 1de357313e..a0e5cbaa86 100644 --- a/commands/mset.md +++ b/commands/mset.md @@ -1,5 +1,5 @@ Sets the given keys to their respective values. `MSET` replaces existing values -with new values, just as regular `SET`. See `MSETNX` if you don't want to +with new values, just as regular `SET`. See `MSETNX` if you don't want to overwrite existing values. `MSET` is atomic, so all given keys are set at once. It is not possible for diff --git a/commands/msetnx.md b/commands/msetnx.md index fc7db7dc8a..7359438971 100644 --- a/commands/msetnx.md +++ b/commands/msetnx.md @@ -2,8 +2,8 @@ Sets the given keys to their respective values. `MSETNX` will not perform any operation at all even if just a single key already exists. Because of this semantic `MSETNX` can be used in order to set different keys -representing different fields of an unique logic object in a way that -ensures that either all the fields or none at all are set. +representing different fields of a unique logical object in a way that ensures +that either all the fields or none at all are set. `MSETNX` is atomic, so all given keys are set at once. It is not possible for clients to see that some of the keys were updated while others are unchanged. diff --git a/commands/multi.md b/commands/multi.md index 1e10b97ebc..0a5bf7138a 100644 --- a/commands/multi.md +++ b/commands/multi.md @@ -1,5 +1,5 @@ -Marks the start of a [transaction][transactions] -block. Subsequent commands will be queued for atomic execution using +Marks the start of a [transaction][transactions] block. Subsequent commands will +be queued for atomic execution using `EXEC`.
 [transactions]: /topics/transactions diff --git a/commands/object.md b/commands/object.md index 8b36531a05..7b94fd4b15 100644 --- a/commands/object.md +++ b/commands/object.md @@ -18,7 +18,9 @@ Objects can be encoded in different ways: * Hashes can be encoded as `zipmap` or `hashtable`. The `zipmap` is a special encoding used for small hashes. * Sorted Sets can be encoded as `ziplist` or `skiplist` format. As for the List type small sorted sets can be specially encoded using `ziplist`, while the `skiplist` encoding is the one that works with sorted sets of any size. -All the specially encoded types are automatically converted to the general type once you perform an operation that makes it no possible for Redis to retain the space saving encoding. +All the specially encoded types are automatically converted to the general type +once you perform an operation that makes it no longer possible for Redis to +retain the space saving encoding. @return @@ -40,7 +42,8 @@ If the object you try to inspect is missing, a null bulk reply is returned. redis> object idletime mylist (integer) 10 -In the following example you can see how the encoding changes once Redis is no longer able to use the space saving encoding. +In the following example you can see how the encoding changes once Redis is no +longer able to use the space saving encoding. redis> set foo 1000 OK diff --git a/commands/persist.md b/commands/persist.md index 013466a85a..5c8e63e32a 100644 --- a/commands/persist.md +++ b/commands/persist.md @@ -1,4 +1,6 @@ -Remove the existing timeout on `key`, turning the key from _volatile_ (a key with an expire set) to _persistent_ (a key that will never expire as no timeout is associated). +Remove the existing timeout on `key`, turning the key from _volatile_ (a key +with an expire set) to _persistent_ (a key that will never expire as no timeout +is associated).
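The volatile-to-persistent transition that `PERSIST` performs can be sketched with a keyspace dict and a separate expiry table (a logical model, not Redis internals; `PERSIST` really does return 1 on success and 0 otherwise):

```python
def persist(store, expires, key):
    """Model of PERSIST: drop the timeout if the key exists and is
    volatile; return 1 when the timeout was removed, 0 otherwise."""
    if key in store and key in expires:
        del expires[key]  # key becomes persistent: no timeout associated
        return 1
    return 0
```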
@return diff --git a/commands/pttl.md b/commands/pttl.md index 9d055a19fc..c42fa8bf57 100644 --- a/commands/pttl.md +++ b/commands/pttl.md @@ -3,7 +3,9 @@ O(1) -Like `TTL` this command returns the remaining time to live of a key that has an expire set, with the sole difference that `TTL` returns the amount of remaining time in seconds while `PTTL` returns it in milliseconds. +Like `TTL` this command returns the remaining time to live of a key that has an +expire set, with the sole difference that `TTL` returns the amount of remaining +time in seconds while `PTTL` returns it in milliseconds. @return diff --git a/commands/punsubscribe.md b/commands/punsubscribe.md index 0aba74a197..449fddcfec 100644 --- a/commands/punsubscribe.md +++ b/commands/punsubscribe.md @@ -1,6 +1,6 @@ -Unsubscribes the client from the given patterns, or from all of them if -none is given. +Unsubscribes the client from the given patterns, or from all of them if none is +given. -When no patters are specified, the client is unsubscribed from all -the previously subscribed patterns. In this case, a message for every -unsubscribed pattern will be sent to the client. +When no patterns are specified, the client is unsubscribed from all the +previously subscribed patterns. In this case, a message for every unsubscribed +pattern will be sent to the client. diff --git a/commands/renamenx.md b/commands/renamenx.md index 0318690765..5f95e76c45 100644 --- a/commands/renamenx.md +++ b/commands/renamenx.md @@ -1,5 +1,5 @@ -Renames `key` to `newkey` if `newkey` does not yet exist. -It returns an error under the same conditions as `RENAME`. +Renames `key` to `newkey` if `newkey` does not yet exist. It returns an error +under the same conditions as `RENAME`.
@return diff --git a/commands/restore.md b/commands/restore.md index 3734f3f857..55a1e5027b 100644 --- a/commands/restore.md +++ b/commands/restore.md @@ -1,6 +1,8 @@ -Create a key associated with a value that is obtained by deserializing the provided serialized value (obtained via `DUMP`). +Create a key associated with a value that is obtained by deserializing the +provided serialized value (obtained via `DUMP`). -If `ttl` is 0 the key is created without any expire, otherwise the specified expire time (in milliseconds) is set. +If `ttl` is 0 the key is created without any expire, otherwise the specified +expire time (in milliseconds) is set. `RESTORE` checks the RDB version and data checksum. If they don't match an error is returned. diff --git a/commands/rpoplpush.md b/commands/rpoplpush.md index 89ebc18ad8..98602df295 100644 --- a/commands/rpoplpush.md +++ b/commands/rpoplpush.md @@ -6,8 +6,8 @@ For example: consider `source` holding the list `a,b,c`, and `destination` holding the list `x,y,z`. Executing `RPOPLPUSH` results in `source` holding `a,b` and `destination` holding `c,x,y,z`. -If `source` does not exist, the value `nil` is returned and no operation is -performed. If `source` and `destination` are the same, the operation is +If `source` does not exist, the value `nil` is returned and no operation +is performed. If `source` and `destination` are the same, the operation is equivalent to removing the last element from the list and pushing it as first element of the list, so it can be considered as a list rotation command. @@ -27,10 +27,10 @@ element of the list, so it can be considered as a list rotation command. ## Pattern: Reliable queue -Redis is often used as a messaging server to implement processing of -background jobs or other kinds of messaging tasks. 
A simple form of queue -is often obtained pushing values into a list in the producer side, and -waiting for this values in the consumer side using `RPOP` +Redis is often used as a messaging server to implement processing of background +jobs or other kinds of messaging tasks. A simple form of queue is often obtained +pushing values into a list on the producer side, and waiting for these values +on the consumer side using `RPOP` (using polling), or `BRPOP` if the client is better served by a blocking operation. @@ -50,15 +50,20 @@ again if needed. ## Pattern: Circular list -Using `RPOPLPUSH` with the same source and destination key, a client can -visit all the elements of an N-elements list, one after the other, in O(N) -without transferring the full list from the server to the client using a single +Using `RPOPLPUSH` with the same source and destination key, a client can visit +all the elements of an N-elements list, one after the other, in O(N) without +transferring the full list from the server to the client using a single `LRANGE` operation. The above pattern works even if the following two conditions: * There are multiple clients rotating the list: they'll fetch different elements, until all the elements of the list are visited, and the process restarts. * Even if other clients are actively pushing new items at the end of the list. -The above makes it very simple to implement a system where a set of items must be processed by N workers continuously as fast as possible. An example is a monitoring system that must check that a set of web sites are reachable, with the smallest delay possible, using a number of parallel workers. +The above makes it very simple to implement a system where a set of items must +be processed by N workers continuously as fast as possible. An example is a +monitoring system that must check that a set of web sites are reachable, with +the smallest delay possible, using a number of parallel workers.
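The circular-list pattern above rests on one property of `RPOPLPUSH`: when source and destination are the same list, the command rotates it. A minimal Python model of the operation makes this visible:

```python
def rpoplpush(source, destination):
    """Model of RPOPLPUSH: atomically pop the tail of source and push it
    onto the head of destination; nil (None) when source is empty."""
    if not source:
        return None
    value = source.pop()          # RPOP from the tail
    destination.insert(0, value)  # LPUSH onto the head
    return value
```

Calling it with the same list object as both arguments performs exactly the rotation the circular-list pattern relies on.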
-Note that this implementation of workers is trivially scalable and reliable, because even if a message is lost the item is still in the queue and will be processed at the next iteration. +Note that this implementation of workers is trivially scalable and reliable, +because even if a message is lost the item is still in the queue and will be +processed at the next iteration. diff --git a/commands/rpush.md b/commands/rpush.md index cdc897fe9b..fd9123ad2d 100644 --- a/commands/rpush.md +++ b/commands/rpush.md @@ -1,9 +1,13 @@ -Insert all the specified values at the tail of the list stored at `key`. -If `key` does not exist, it is created as empty list before performing the -push operation. -When `key` holds a value that is not a list, an error is returned. - -It is possible to push multiple elements using a single command call just specifying multiple arguments at the end of the command. Elements are inserted one after the other to the tail of the list, from the leftmost element to the rightmost element. So for instance the command `RPUSH mylist a b c` will result into a list containing `a` as first element, `b` as second element and `c` as third element. +Insert all the specified values at the tail of the list stored at `key`. If +`key` does not exist, it is created as an empty list before performing the push +operation. When `key` holds a value that is not a list, an error is returned. + +It is possible to push multiple elements using a single command call by just +specifying multiple arguments at the end of the command. Elements are inserted +one after the other to the tail of the list, from the leftmost element to the +rightmost element. So for instance the command `RPUSH mylist a b c` will result +in a list containing `a` as first element, `b` as second element and `c` as +third element.
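The one-element-at-a-time insertion order described for variadic `LPUSH` and `RPUSH` — which makes `LPUSH mylist a b c` end up as `c, b, a` but `RPUSH mylist a b c` as `a, b, c` — can be sketched side by side:

```python
def lpush(lst, *values):
    """Model of variadic LPUSH: each value is inserted at the head in
    argument order, so the last argument ends up first."""
    for v in values:
        lst.insert(0, v)
    return len(lst)  # like LPUSH, return the new list length

def rpush(lst, *values):
    """Model of variadic RPUSH: each value is appended at the tail, so
    argument order is preserved."""
    for v in values:
        lst.append(v)
    return len(lst)
```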
@return diff --git a/commands/rpushx.md b/commands/rpushx.md index 8c7f3d324f..3095141e3b 100644 --- a/commands/rpushx.md +++ b/commands/rpushx.md @@ -1,6 +1,6 @@ -Inserts `value` at the tail of the list stored at `key`, only if `key` -already exists and holds a list. In contrary to `RPUSH`, no operation will -be performed when `key` does not yet exist. +Inserts `value` at the tail of the list stored at `key`, only if `key` already +exists and holds a list. Contrary to `RPUSH`, no operation will be performed +when `key` does not yet exist. @return diff --git a/commands/sadd.md b/commands/sadd.md index 9afe485ae4..0683f8f65f 100644 --- a/commands/sadd.md +++ b/commands/sadd.md @@ -1,6 +1,6 @@ -Add the specified members to the set stored at `key`. Specified members that -are already a member of this set are ignored. If `key` does not exist, a new -set is created before adding the specified members. +Add the specified members to the set stored at `key`. Specified members that are +already a member of this set are ignored. If `key` does not exist, a new set is +created before adding the specified members. An error is returned when the value stored at `key` is not a set. diff --git a/commands/save.md b/commands/save.md index 79283412e9..ba9a7878d2 100644 --- a/commands/save.md +++ b/commands/save.md @@ -1,8 +1,15 @@ -The `SAVE` commands performs a **synchronous** save of the dataset producing a *point in time* snapshot of all the data inside the Redis instance, in the form of an RDB file. +The `SAVE` command performs a **synchronous** save of the dataset, producing a +*point in time* snapshot of all the data inside the Redis instance, in the form +of an RDB file. -You almost never what to call `SAVE` in production environments where it will block all the other clients. Instead usually `BGSAVE` is used.
However in case of issues preventing Redis to create the background saving child (for instance errors in the fork(2) system call), the `SAVE` command can be a good last resort to perform the dump of the latest dataset. +You almost never want to call `SAVE` in production environments where it will +block all the other clients. Instead usually `BGSAVE` is used. However in case +of issues preventing Redis from creating the background saving child (for instance +errors in the fork(2) system call), the `SAVE` command can be a good last resort +to perform the dump of the latest dataset. -Please refer to the [persistence documentation][persistence] for detailed information. +Please refer to the [persistence documentation][persistence] for detailed +information. [persistence]: /topics/persistence diff --git a/commands/script exists.md b/commands/script exists.md index e89c34ad08..84b9b18639 100644 --- a/commands/script exists.md +++ b/commands/script exists.md @@ -1,9 +1,14 @@ Returns information about the existence of the scripts in the script cache. -This command accepts one or more SHA1 sums and returns a list of ones or zeros to signal if the scripts are already defined or not inside the script cache. -This can be useful before a pipelining operation to ensure that scripts are loaded (and if not, to load them using `SCRIPT LOAD`) so that the pipelining operation can be performed solely using `EVALSHA` instead of `EVAL` to save bandwidth. +This command accepts one or more SHA1 sums and returns a list of ones or zeros +to signal if the scripts are already defined or not inside the script cache. +This can be useful before a pipelining operation to ensure that scripts are +loaded (and if not, to load them using `SCRIPT LOAD`) so that the pipelining +operation can be performed solely using `EVALSHA` instead of `EVAL` to save +bandwidth. -Please refer to the `EVAL` documentation for detailed information about Redis Lua scripting.
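The script-cache interplay described above — `SCRIPT LOAD` stores a body under its SHA1 digest, `SCRIPT EXISTS` answers with one 1/0 flag per digest — can be sketched with a dict-backed cache (a logical model, not the server implementation):

```python
import hashlib

def sha1_hex(script):
    """SHA1 digest of a script body, as the hex string EVALSHA expects."""
    return hashlib.sha1(script.encode()).hexdigest()

def script_load(cache, script):
    """Model of SCRIPT LOAD: cache the body under its SHA1 and return it."""
    digest = sha1_hex(script)
    cache[digest] = script
    return digest

def script_exists(cache, *digests):
    """Model of SCRIPT EXISTS: one 1/0 per digest, in argument order."""
    return [1 if d in cache else 0 for d in digests]
```

A pipelining client would call `script_exists` first and load only the missing scripts, then issue `EVALSHA` for everything, which is exactly the bandwidth-saving pattern described above.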
+Please refer to the `EVAL` documentation for detailed information about Redis +Lua scripting. @return diff --git a/commands/script flush.md b/commands/script flush.md index c435f7677b..784ab4503a 100644 --- a/commands/script flush.md +++ b/commands/script flush.md @@ -1,6 +1,7 @@ Flush the Lua scripts cache. -Please refer to the `EVAL` documentation for detailed information about Redis Lua scripting. +Please refer to the `EVAL` documentation for detailed information about Redis +Lua scripting. @return diff --git a/commands/script kill.md b/commands/script kill.md index f4e13ace2a..2be073997b 100644 --- a/commands/script kill.md +++ b/commands/script kill.md @@ -1,11 +1,18 @@ -Kills the currently executing Lua script, assuming no write operation was yet performed by the script. +Kills the currently executing Lua script, assuming no write operation was yet +performed by the script. -This command is mainly useful to kill a script that is running for too much time(for instance because it entered an infinite loop because of a bug). -The script will be killed and the client currently blocked into EVAL will see the command returning with an error. +This command is mainly useful to kill a script that is running for too much +time (for instance because it entered an infinite loop because of a bug). The +script will be killed and the client currently blocked in `EVAL` will see the +command returning with an error. -If the script already performed write operations it can not be killed in this way because it would violate Lua script atomicity contract.
In such a case only +`SHUTDOWN NOSAVE` is able to kill the script, killing the Redis process in a +hard way, preventing it from persisting half-written information. -Please refer to the `EVAL` documentation for detailed information about Redis Lua scripting. +Please refer to the `EVAL` documentation for detailed information about Redis +Lua scripting. @return diff --git a/commands/script load.md b/commands/script load.md index 82eae0c81b..eff51a1774 100644 --- a/commands/script load.md +++ b/commands/script load.md @@ -1,11 +1,16 @@ -Load a script into the scripts cache, without executing it. -After the specified command is loaded into the script cache it will be callable using `EVALSHA` with the correct SHA1 digest of the script, exactly like after the first successful invocation of `EVAL`. +Load a script into the scripts cache, without executing it. After the specified +script is loaded into the script cache it will be callable using `EVALSHA` with +the correct SHA1 digest of the script, exactly like after the first successful +invocation of `EVAL`. -The script is guaranteed to stay in the script cache forever (unless `SCRIPT FLUSH` is called). +The script is guaranteed to stay in the script cache forever (unless `SCRIPT +FLUSH` is called). -The command works in the same way even if the script was already present in the script cache. +The command works in the same way even if the script was already present in the +script cache. -Please refer to the `EVAL` documentation for detailed information about Redis Lua scripting. +Please refer to the `EVAL` documentation for detailed information about Redis +Lua scripting. @return diff --git a/commands/sdiffstore.md b/commands/sdiffstore.md index 0bb3003137..db95908556 100644 --- a/commands/sdiffstore.md +++ b/commands/sdiffstore.md @@ -1,5 +1,5 @@ -This command is equal to `SDIFF`, but instead of returning the resulting set, -it is stored in `destination`.
+This command is equal to `SDIFF`, but instead of returning the resulting set, it +is stored in `destination`. If `destination` already exists, it is overwritten. diff --git a/commands/select.md b/commands/select.md index 702efa463d..0d78628da7 100644 --- a/commands/select.md +++ b/commands/select.md @@ -1,5 +1,5 @@ -Select the DB with having the specified zero-based numeric index. -New connections always use DB 0. +Select the DB having the specified zero-based numeric index. New +connections always use DB 0. @return diff --git a/commands/setex.md b/commands/setex.md index 05f92b06ef..465a2bc28a 100644 --- a/commands/setex.md +++ b/commands/setex.md @@ -1,5 +1,5 @@ Set `key` to hold the string `value` and set `key` to timeout after a given -number of seconds. This command is equivalent to executing the following +number of seconds. This command is equivalent to executing the following commands: SET mykey value diff --git a/commands/setnx.md b/commands/setnx.md index d1700a0c25..b437f68c87 100644 --- a/commands/setnx.md +++ b/commands/setnx.md @@ -1,6 +1,5 @@ -Set `key` to hold string `value` if `key` does not exist. -In that case, it is equal to `SET`. When `key` already holds -a value, no operation is performed. +Set `key` to hold string `value` if `key` does not exist. In that case, it is +equal to `SET`. When `key` already holds a value, no operation is performed. `SETNX` is short for "**SET** if **N**ot e**X**ists". @return @@ -24,22 +23,20 @@ the lock of the key `foo`, the client could try the following: SETNX lock.foo -If `SETNX` returns `1` the client acquired the lock, setting the `lock.foo` -key to the Unix time at which the lock should no longer be considered valid. -The client will later use `DEL lock.foo` in order to release the lock. +If `SETNX` returns `1` the client acquired the lock, setting the `lock.foo` key +to the Unix time at which the lock should no longer be considered valid.
The +client will later use `DEL lock.foo` in order to release the lock. -If `SETNX` returns `0` the key is already locked by some other client. We can -either return to the caller if it's a non blocking lock, or enter a -loop retrying to hold the lock until we succeed or some kind of timeout -expires. +If `SETNX` returns `0` the key is already locked by some other client. We +can either return to the caller if it's a non blocking lock, or enter a loop +retrying to hold the lock until we succeed or some kind of timeout expires. ### Handling deadlocks In the above locking algorithm there is a problem: what happens if a client -fails, crashes, or is otherwise not able to release the lock? -It's possible to detect this condition because the lock key contains a -UNIX timestamp. If such a timestamp is equal to the current Unix time the lock -is no longer valid. +fails, crashes, or is otherwise not able to release the lock? It's possible to +detect this condition because the lock key contains a UNIX timestamp. If such a +timestamp is equal to the current Unix time the lock is no longer valid. When this happens we can't just call `DEL` against the key to remove the lock and then try to issue a `SETNX`, as there is a race condition here, when diff --git a/commands/setrange.md b/commands/setrange.md index 149fea9af4..7a6f476503 100644 --- a/commands/setrange.md +++ b/commands/setrange.md @@ -1,9 +1,9 @@ -Overwrites part of the string stored at _key_, starting at the specified -offset, for the entire length of _value_. If the offset is larger than the -current length of the string at _key_, the string is padded with zero-bytes to -make _offset_ fit. Non-existing keys are considered as empty strings, so this -command will make sure it holds a string large enough to be able to set _value_ -at _offset_. +Overwrites part of the string stored at _key_, starting at the specified offset, +for the entire length of _value_. 
If the offset is larger than the current +length of the string at _key_, the string is padded with zero-bytes to make +_offset_ fit. Non-existing keys are considered as empty strings, so this command +will make sure it holds a string large enough to be able to set _value_ at +_offset_. Note that the maximum offset that you can set is 2^29 -1 (536870911), as Redis Strings are limited to 512 megabytes. If you need to grow beyond this size, you @@ -21,8 +21,8 @@ have the allocation overhead. ## Patterns -Thanks to `SETRANGE` and the analogous `GETRANGE` commands, you can use Redis strings -as a linear array with O(1) random access. This is a very fast and +Thanks to `SETRANGE` and the analogous `GETRANGE` commands, you can use Redis +strings as a linear array with O(1) random access. This is a very fast and efficient storage in many real world use cases. @return diff --git a/commands/shutdown.md b/commands/shutdown.md index e901bf4966..ba19c0c51f 100644 --- a/commands/shutdown.md +++ b/commands/shutdown.md @@ -5,10 +5,10 @@ The command behavior is the following: * Flush the Append Only File if AOF is enabled. * Quit the server. -If persistence is enabled this commands makes sure that Redis is switched -off without the lost of any data. This is not guaranteed if the client uses -simply `SAVE` and then `QUIT` because other clients may alter the DB data -between the two commands. +If persistence is enabled this command makes sure that Redis is switched off +without the loss of any data. This is not guaranteed if the client simply uses +`SAVE` and then `QUIT` because other clients may alter the DB data between the +two commands. Note: A Redis instance that is configured for not persisting on disk (no AOF configured, nor "save" directive) will not dump the RDB file on @@ -17,7 +17,8 @@ to block on when shutting down. ## SAVE and NOSAVE modifiers -It is possible to specify an optional modifier to alter the behavior of the command.
Specifically: +It is possible to specify an optional modifier to alter the behavior of the +command. Specifically: * **SHUTDOWN SAVE** will force a DB saving operation even if no save points are configured. * **SHUTDOWN NOSAVE** will prevent a DB saving operation even if one or more save points are configured. (You can think of this variant as a hypothetical **ABORT** command that just stops the server). diff --git a/commands/sinter.md b/commands/sinter.md index 7a98ce7da0..e6fdf56b22 100644 --- a/commands/sinter.md +++ b/commands/sinter.md @@ -9,8 +9,8 @@ For example: SINTER key1 key2 key3 = {c} Keys that do not exist are considered to be empty sets. With one of the keys -being an empty set, the resulting set is also empty (since set intersection -with an empty set always results in an empty set). +being an empty set, the resulting set is also empty (since set intersection with +an empty set always results in an empty set). @return diff --git a/commands/slaveof.md b/commands/slaveof.md index d751ca076b..3ff3091476 100644 --- a/commands/slaveof.md +++ b/commands/slaveof.md @@ -1,17 +1,17 @@ The `SLAVEOF` command can change the replication settings of a slave on the fly. -If a Redis server is already acting as slave, the command `SLAVEOF` NO ONE -will turn off the replication, turning the Redis server into a MASTER. -In the proper form `SLAVEOF` hostname port will make the server a slave of another -server listening at the specified hostname and port. +If a Redis server is already acting as slave, the command `SLAVEOF` NO ONE will +turn off the replication, turning the Redis server into a MASTER. In the proper +form `SLAVEOF` hostname port will make the server a slave of another server +listening at the specified hostname and port. -If a server is already a slave of some master, `SLAVEOF` hostname port will -stop the replication against the old server and start the synchronization -against the new one, discarding the old dataset.
+If a server is already a slave of some master, `SLAVEOF` hostname port will stop +the replication against the old server and start the synchronization against the +new one, discarding the old dataset. The form `SLAVEOF` NO ONE will stop replication, turning the server into a -MASTER, but will not discard the replication. So, if the old master stops working, -it is possible to turn the slave into a master and set the application to -use this new master in read/write. Later when the other Redis server is +MASTER, but will not discard the replication. So, if the old master stops +working, it is possible to turn the slave into a master and set the application +to use this new master in read/write. Later when the other Redis server is fixed, it can be reconfigured to work as a slave. @return diff --git a/commands/slowlog.md b/commands/slowlog.md index 848d0f50df..19239b722e 100644 --- a/commands/slowlog.md +++ b/commands/slowlog.md @@ -3,11 +3,10 @@ This command is used in order to read and reset the Redis slow queries log. ## Redis slow log overview The Redis Slow Log is a system to log queries that exceeded a specified -execution time. The execution time does not include I/O operations -like talking with the client, sending the reply and so forth, -but just the time needed to actually execute the command (this is the only -stage of command execution where the thread is blocked and can not serve -other requests in the meantime). +execution time. The execution time does not include I/O operations like talking +with the client, sending the reply and so forth, but just the time needed to +actually execute the command (this is the only stage of command execution where +the thread is blocked and can not serve other requests in the meantime). 
You can configure the slow log with two parameters: *slowlog-log-slower-than* tells Redis @@ -19,15 +18,14 @@ When a new command is logged and the slow log is already at its maximum length, the oldest one is removed from the queue of logged commands in order to make space. -The configuration can be done by editing `redis.conf` or -while the server is running using -the `CONFIG GET` and `CONFIG SET` commands. +The configuration can be done by editing `redis.conf` or while the server is +running using the `CONFIG GET` and `CONFIG SET` commands. ## Reading the slow log The slow log is accumulated in memory, so no file is written with information -about the slow command executions. This makes the slow log remarkably fast -at the point that you can enable the logging of all the commands (setting the +about the slow command executions. This makes the slow log remarkably fast at +the point that you can enable the logging of all the commands (setting the *slowlog-log-slower-than* config parameter to zero) with minor performance hit. @@ -35,9 +33,9 @@ To read the slow log the **SLOWLOG GET** command is used, that returns every entry in the slow log. It is possible to return only the N most recent entries passing an additional argument to the command (for instance **SLOWLOG GET 10**). -Note that you need a recent version of redis-cli in order to read the slow -log output, since it uses some features of the protocol that were not -formerly implemented in redis-cli (deeply nested multi bulk replies). +Note that you need a recent version of redis-cli in order to read the slow log +output, since it uses some features of the protocol that were not formerly +implemented in redis-cli (deeply nested multi bulk replies). ## Output format @@ -60,17 +58,18 @@ Every entry is composed of four fields: * The array composing the arguments of the command. 
The entry's unique ID can be used in order to avoid processing slow log entries -multiple times (for instance you may have a script sending you an email -alert for every new slow log entry). +multiple times (for instance you may have a script sending you an email alert +for every new slow log entry). -The ID is never reset in the course of the Redis server execution, only a -server restart will reset it. +The ID is never reset in the course of the Redis server execution, only a server +restart will reset it. ## Obtaining the current length of the slow log -It is possible to get just the length of the slow log using the command **SLOWLOG LEN**. +It is possible to get just the length of the slow log using the command +**SLOWLOG LEN**. ## Resetting the slow log. -You can reset the slow log using the **SLOWLOG RESET** command. -Once deleted the information is lost forever. +You can reset the slow log using the **SLOWLOG RESET** command. Once deleted the +information is lost forever. diff --git a/commands/smove.md b/commands/smove.md index 9fd4ee0e53..bfbcfb033a 100644 --- a/commands/smove.md +++ b/commands/smove.md @@ -4,9 +4,8 @@ member of `source` **or** `destination` for other clients. If the source set does not exist or does not contain the specified element, no operation is performed and `0` is returned. Otherwise, the element is removed -from the source set and added to the destination set. When the specified -element already exists in the destination set, it is only removed from the -source set. +from the source set and added to the destination set. When the specified element +already exists in the destination set, it is only removed from the source set. An error is returned if `source` or `destination` does not hold a set value. 
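The SMOVE semantics in the hunk above — atomic move, no-op when the source set lacks the member, removal-only when the destination already holds it — can be sketched outside Redis. A minimal illustrative model using plain Python sets as stand-ins for Redis keys (the function and the `db` dict are hypothetical helpers, not the Redis API):

```python
def smove(db, source, destination, member):
    """Model SMOVE: move member from db[source] to db[destination].

    Returns 1 if the element was moved, 0 if the source set is missing or
    does not contain the member (in which case nothing happens)."""
    src = db.get(source)
    if not src or member not in src:
        return 0  # no operation is performed
    src.remove(member)  # always removed from the source set...
    db.setdefault(destination, set()).add(member)  # ...added only if absent
    return 1
```

Note that when the member already exists in the destination, the add is a no-op, so the net effect is exactly the "only removed from the source set" behavior described above.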
diff --git a/commands/sort.md b/commands/sort.md index 1f5ab2c4f0..795d081c74 100644 --- a/commands/sort.md +++ b/commands/sort.md @@ -16,8 +16,8 @@ large to small, use the `!DESC` modifier: SORT mylist DESC -When `mylist` contains string values and you want to sort them lexicographically, -use the `!ALPHA` modifier: +When `mylist` contains string values and you want to sort them +lexicographically, use the `!ALPHA` modifier: SORT mylist ALPHA @@ -25,32 +25,32 @@ Redis is UTF-8 aware, assuming you correctly set the `!LC_COLLATE` environment variable. The number of returned elements can be limited using the `!LIMIT` modifier. -This modifier takes the `offset` argument, specifying the number of elements to -skip and the `count` argument, specifying the number of elements to return from -starting at `offset`. The following example will return 10 elements of the +This modifier takes the `offset` argument, specifying the number of elements +to skip, and the `count` argument, specifying the number of elements to return +starting at `offset`. The following example will return 10 elements of the sorted version of `mylist`, starting at element 0 (`offset` is zero-based): SORT mylist LIMIT 0 10 -Almost all modifiers can be used together. The following example will return -the first 5 elements, lexicographically sorted in descending order: +Almost all modifiers can be used together. The following example will return the +first 5 elements, lexicographically sorted in descending order: SORT mylist LIMIT 0 5 ALPHA DESC ## Sorting by external keys Sometimes you want to sort elements using external keys as weights to compare -instead of comparing the actual elements in the list, set or sorted set. Let's -say the list `mylist` contains the elements `1`, `2` and `3` representing -unique IDs of objects stored in `object_1`, `object_2` and `object_3`.
When -these objects have associated weights stored in `weight_1`, `weight_2` and +instead of comparing the actual elements in the list, set or sorted set. Let's +say the list `mylist` contains the elements `1`, `2` and `3` representing unique +IDs of objects stored in `object_1`, `object_2` and `object_3`. When these +objects have associated weights stored in `weight_1`, `weight_2` and `weight_3`, `SORT` can be instructed to use these weights to sort `mylist` with the following statement: SORT mylist BY weight_* -The `BY` option takes a pattern (equal to `weight_*` in this example) that is -used to generate the keys that are used for sorting. These key names are +The `BY` option takes a pattern (equal to `weight_*` in this example) that +is used to generate the keys that are used for sorting. These key names are obtained substituting the first occurrence of `*` with the actual value of the element in the list (`1`, `2` and `3` in this example). @@ -66,13 +66,13 @@ the sorting operation. This is useful if you want to retrieve external keys Our previous example returns just the sorted IDs. In some cases, it is more useful to get the actual objects instead of their IDs (`object_1`, `object_2` -and `object_3`). Retrieving external keys based on the elements in a list, set +and `object_3`). Retrieving external keys based on the elements in a list, set or sorted set can be done with the following command: SORT mylist BY weight_* GET object_* -The `!GET` option can be used multiple times in order to get more keys for -every element of the original list, set or sorted set. +The `!GET` option can be used multiple times in order to get more keys for every +element of the original list, set or sorted set. It is also possible to `!GET` the element itself using the special pattern `#`: @@ -92,8 +92,9 @@ of a `SORT` operation can be cached for some time. Other clients will use the cached list instead of calling `SORT` for every request. 
When the key times out, an updated version of the cache can be created by calling `SORT ... STORE` again. -Note that for correctly implementing this pattern it is important to avoid multiple -clients rebuilding the cache at the same time. Some kind of locking is needed here +Note that for correctly implementing this pattern it is important to avoid +multiple clients rebuilding the cache at the same time. Some kind of locking is +needed here (for instance using `SETNX`). ## Using hashes in `!BY` and `!GET` @@ -103,9 +104,9 @@ following syntax: SORT mylist BY weight_*->fieldname GET object_*->fieldname -The string `->` is used to separate the key name from the hash field name. -The key is substituted as documented above, and the hash stored at the -resulting key is accessed to retrieve the specified hash field. +The string `->` is used to separate the key name from the hash field name. The +key is substituted as documented above, and the hash stored at the resulting key +is accessed to retrieve the specified hash field. @return diff --git a/commands/spop.md b/commands/spop.md index 7564dc797d..c50e332c8f 100644 --- a/commands/spop.md +++ b/commands/spop.md @@ -1,7 +1,7 @@ Removes and returns a random element from the set value stored at `key`. -This operation is similar to `SRANDMEMBER`, that returns a random -element from a set but does not remove it. +This operation is similar to `SRANDMEMBER`, which returns a random element from a +set but does not remove it. @return diff --git a/commands/srem.md b/commands/srem.md index 057585d19f..16b16d0b8b 100644 --- a/commands/srem.md +++ b/commands/srem.md @@ -1,5 +1,5 @@ Remove the specified members from the set stored at `key`. Specified members -that are not a member of this set are ignored. If `key` does not exist, it is +that are not a member of this set are ignored. If `key` does not exist, it is treated as an empty set and this command returns `0`. An error is returned when the value stored at `key` is not a set.
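The `BY weight_*` / `GET object_*` substitution walked through in the SORT hunks above can be modeled in a few lines. A rough sketch under stated assumptions (`sort_by` and the `db` dict are hypothetical stand-ins; the real SORT runs server-side and additionally handles `ALPHA`, missing keys, and hash-field patterns):

```python
def sort_by(db, elements, by_pattern, get_pattern=None):
    """Model SORT ... BY pattern [GET pattern].

    For each element, the first '*' in the pattern is substituted with the
    element to produce the external weight key; elements are then sorted by
    the numeric value stored at that key. With get_pattern, the values of
    the substituted GET keys are returned instead of the elements."""
    ranked = sorted(elements,
                    key=lambda e: float(db[by_pattern.replace("*", e, 1)]))
    if get_pattern is None:
        return ranked
    return [db[get_pattern.replace("*", e, 1)] for e in ranked]
```

For example, with weights `weight_1=9`, `weight_2=1`, `weight_3=5`, sorting `["1", "2", "3"]` by `weight_*` yields `["2", "3", "1"]`, mirroring the `SORT mylist BY weight_*` example in the text.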
diff --git a/commands/strlen.md b/commands/strlen.md index cfdf1243a0..bcaf9d5f4b 100644 --- a/commands/strlen.md +++ b/commands/strlen.md @@ -1,5 +1,5 @@ -Returns the length of the string value stored at `key`. -An error is returned when `key` holds a non-string value. +Returns the length of the string value stored at `key`. An error is returned +when `key` holds a non-string value. @return diff --git a/commands/subscribe.md b/commands/subscribe.md index 33c8b4fd4c..6709664d69 100644 --- a/commands/subscribe.md +++ b/commands/subscribe.md @@ -1,5 +1,5 @@ Subscribes the client to the specified channels. -Once the client enters the subscribed state it is not supposed to issue -any other commands, except for additional `SUBSCRIBE`, `PSUBSCRIBE`, +Once the client enters the subscribed state it is not supposed to issue any +other commands, except for additional `SUBSCRIBE`, `PSUBSCRIBE`, `UNSUBSCRIBE` and `PUNSUBSCRIBE` commands. diff --git a/commands/sunion.md b/commands/sunion.md index 7de66a17f4..98c30cd5fb 100644 --- a/commands/sunion.md +++ b/commands/sunion.md @@ -1,5 +1,4 @@ -Returns the members of the set resulting from the union of all the -given sets. +Returns the members of the set resulting from the union of all the given sets. For example: diff --git a/commands/time.md b/commands/time.md index c9ea2cbf4f..a7c5ce454d 100644 --- a/commands/time.md +++ b/commands/time.md @@ -3,8 +3,10 @@ O(1) -The `TIME` command returns the current server time as a two items lists: a Unix timestamp and the amount of microseconds already elapsed in the current second. -Basically the interface is very similar to the one of the `gettimeofday` system call. +The `TIME` command returns the current server time as a two-item list: a Unix +timestamp and the number of microseconds already elapsed in the current second. +Basically the interface is very similar to that of the `gettimeofday` system +call.
@return diff --git a/commands/ttl.md b/commands/ttl.md index 928ee551fb..8cc59c3242 100644 --- a/commands/ttl.md +++ b/commands/ttl.md @@ -1,6 +1,6 @@ -Returns the remaining time to live of a key that has a timeout. This -introspection capability allows a Redis client to check how many seconds a -given key will continue to be part of the dataset. +Returns the remaining time to live of a key that has a timeout. This +introspection capability allows a Redis client to check how many seconds a given +key will continue to be part of the dataset. @return diff --git a/commands/type.md b/commands/type.md index 21b2d948ea..4c68019a76 100644 --- a/commands/type.md +++ b/commands/type.md @@ -1,6 +1,6 @@ -Returns the string representation of the type of the value stored at `key`. -The different types that can be returned are: `string`, `list`, `set`, `zset` -and `hash`. +Returns the string representation of the type of the value stored at `key`. The +different types that can be returned are: `string`, `list`, `set`, `zset` and +`hash`. @return diff --git a/commands/unsubscribe.md b/commands/unsubscribe.md index 35a65eebf0..78c4d0c759 100644 --- a/commands/unsubscribe.md +++ b/commands/unsubscribe.md @@ -1,6 +1,6 @@ -Unsubscribes the client from the given channels, or from all of them if -none is given. +Unsubscribes the client from the given channels, or from all of them if none is +given. -When no channels are specified, the client is unsubscribed from all -the previously subscribed channels. In this case, a message for every -unsubscribed channel will be sent to the client. +When no channels are specified, the client is unsubscribed from all the +previously subscribed channels. In this case, a message for every unsubscribed +channel will be sent to the client. 
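The two-item TIME reply described in the time.md hunk above — a Unix timestamp plus the microseconds elapsed within the current second, in the spirit of gettimeofday(2) — can be illustrated with a small sketch (this only mimics the reply shape; the actual values come from the server's clock):

```python
import time

def time_command():
    """Sketch of a TIME-style reply: [seconds, microseconds-in-second],
    both as strings, as Redis replies are bulk strings."""
    now = time.time()
    seconds = int(now)
    microseconds = int((now - seconds) * 1_000_000)
    return [str(seconds), str(microseconds)]
```

The second item is always in the range 0 to 999999, since it only covers the fraction of the current second.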
diff --git a/commands/watch.md b/commands/watch.md index d3d8458ff6..1e3e01e67a 100644 --- a/commands/watch.md +++ b/commands/watch.md @@ -1,4 +1,5 @@ -Marks the given keys to be watched for conditional execution of a [transaction][transactions]. +Marks the given keys to be watched for conditional execution of a +[transaction][transactions]. [transactions]: /topics/transactions diff --git a/commands/zadd.md b/commands/zadd.md index 4b9d85e573..0f694783e1 100644 --- a/commands/zadd.md +++ b/commands/zadd.md @@ -1,7 +1,10 @@ -Adds all the specified members with the specified scores to the sorted set stored at `key`. It is possible to specify multiple score/member pairs. -If a specified member is already a member of the sorted set, the score is updated and the element reinserted at the right position to ensure the correct ordering. If `key` does not exist, a new sorted set with the specified members as sole -members is created, like if the sorted set was empty. -If the key exists but does not hold a sorted set, an error is returned. +Adds all the specified members with the specified scores to the sorted set +stored at `key`. It is possible to specify multiple score/member pairs. If a +specified member is already a member of the sorted set, the score is updated and +the element reinserted at the right position to ensure the correct ordering. +If `key` does not exist, a new sorted set with the specified members as sole +members is created, as if the sorted set was empty. If the key exists but does +not hold a sorted set, an error is returned. The score values should be the string representation of a numeric value, and accept double precision floating point numbers. diff --git a/commands/zcard.md b/commands/zcard.md index 5fb0f84dce..89d81c21a6 100644 --- a/commands/zcard.md +++ b/commands/zcard.md @@ -1,5 +1,5 @@ -Returns the sorted set cardinality (number of elements) of the sorted set -stored at `key`.
+Returns the sorted set cardinality (number of elements) of the sorted set stored +at `key`. @return diff --git a/commands/zcount.md b/commands/zcount.md index fc44a1ac2b..b554a6b630 100644 --- a/commands/zcount.md +++ b/commands/zcount.md @@ -1,8 +1,8 @@ -Returns the number of elements in the sorted set at `key` with -a score between `min` and `max`. +Returns the number of elements in the sorted set at `key` with a score between +`min` and `max`. -The `min` and `max` arguments have the same semantic as described -for `ZRANGEBYSCORE`. +The `min` and `max` arguments have the same semantic as described for +`ZRANGEBYSCORE`. @return diff --git a/commands/zinterstore.md b/commands/zinterstore.md index abf98f0f62..87cb375dd1 100644 --- a/commands/zinterstore.md +++ b/commands/zinterstore.md @@ -1,12 +1,12 @@ Computes the intersection of `numkeys` sorted sets given by the specified keys, -and stores the result in `destination`. It is mandatory to provide the number -of input keys (`numkeys`) before passing the input keys and the other +and stores the result in `destination`. It is mandatory to provide the number of +input keys (`numkeys`) before passing the input keys and the other (optional) arguments. By default, the resulting score of an element is the sum of its scores in the -sorted sets where it exists. Because intersection requires an element -to be a member of every given sorted set, this results in the score of every -element in the resulting sorted set to be equal to the number of input sorted sets. +sorted sets where it exists. Because intersection requires an element to be a +member of every given sorted set, this results in the score of every element in +the resulting sorted set to be equal to the number of input sorted sets. For a description of the `WEIGHTS` and `AGGREGATE` options, see `ZUNIONSTORE`. 
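The default SUM aggregation that ZINTERSTORE applies, together with the optional `WEIGHTS` multipliers mentioned above, can be modeled with plain dicts mapping members to scores. An illustrative sketch, not the Redis implementation (it returns the destination cardinality, as the command's reply does):

```python
def zinterstore(dest, db, keys, weights=None):
    """Model ZINTERSTORE with the default SUM aggregate.

    Each sorted set is a dict of member -> score. Only members present in
    every input set survive; each surviving score is the weighted sum of
    that member's scores across all inputs (weights default to 1)."""
    weights = weights or [1] * len(keys)
    sets = [db[k] for k in keys]
    common = set(sets[0]).intersection(*sets[1:])
    db[dest] = {m: sum(s[m] * w for s, w in zip(sets, weights))
                for m in common}
    return len(db[dest])  # reply: cardinality of the destination set
```

With inputs `{"a": 1, "b": 2}` and `{"b": 3, "c": 4}`, only `b` survives and its score becomes 5, matching the "sum of its scores" rule described above.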
diff --git a/commands/zrange.md b/commands/zrange.md index 37dc66cbab..046ced56b4 100644 --- a/commands/zrange.md +++ b/commands/zrange.md @@ -2,8 +2,8 @@ Returns the specified range of elements in the sorted set stored at `key`. The elements are considered to be ordered from the lowest to the highest score. Lexicographical order is used for elements with equal score. -See `ZREVRANGE` when you need the elements ordered from highest to lowest -score (and descending lexicographical order for elements with equal score). +See `ZREVRANGE` when you need the elements ordered from highest to lowest score +(and descending lexicographical order for elements with equal score). Both `start` and `stop` are zero-based indexes, where `0` is the first element, `1` is the next element and so on. They can also be negative numbers indicating @@ -16,7 +16,7 @@ If `stop` is larger than the end of the sorted set Redis will treat it like it is the last element of the sorted set. It is possible to pass the `WITHSCORES` option in order to return the scores of -the elements together with the elements. The returned list will contain +the elements together with the elements. The returned list will contain `value1,score1,...,valueN,scoreN` instead of `value1,...,valueN`. Client libraries are free to return a more appropriate data type (suggestion: an array with (value, score) arrays/tuples). diff --git a/commands/zrangebyscore.md b/commands/zrangebyscore.md index 88574ea15c..f78a6e3864 100644 --- a/commands/zrangebyscore.md +++ b/commands/zrangebyscore.md @@ -1,6 +1,6 @@ Returns all the elements in the sorted set at `key` with a score between `min` -and `max` (including elements with score equal to `min` or `max`). The -elements are considered to be ordered from low to high scores. +and `max` (including elements with score equal to `min` or `max`). The elements +are considered to be ordered from low to high scores. 
The elements having the same score are returned in lexicographical order (this follows from a property of the sorted set implementation in Redis and does not @@ -12,9 +12,9 @@ elements (similar to _SELECT LIMIT offset, count_ in SQL). Keep in mind that if before getting to the elements to return, which can add up to O(N) time complexity. -The optional `WITHSCORES` argument makes the command return both the element -and its score, instead of the element alone. This option is available since -Redis 2.0. +The optional `WITHSCORES` argument makes the command return both the element and +its score, instead of the element alone. This option is available since Redis +2.0. ## Exclusive intervals and infinity @@ -22,9 +22,9 @@ Redis 2.0. the highest or lowest score in the sorted set to get all elements from or up to a certain score. -By default, the interval specified by `min` and `max` is closed (inclusive). -It is possible to specify an open interval (exclusive) by prefixing the score -with the character `(`. For example: +By default, the interval specified by `min` and `max` is closed (inclusive). It +is possible to specify an open interval (exclusive) by prefixing the score with +the character `(`. For example: ZRANGEBYSCORE zset (1 5 diff --git a/commands/zrem.md b/commands/zrem.md index 42043db9a2..9d0f051c1a 100644 --- a/commands/zrem.md +++ b/commands/zrem.md @@ -1,4 +1,5 @@ -Removes the specified members from the sorted set stored at `key`. Non existing members are ignored. +Removes the specified members from the sorted set stored at `key`. Non existing +members are ignored. An error is returned when `key` exists and does not hold a sorted set. diff --git a/commands/zrevrangebyscore.md b/commands/zrevrangebyscore.md index a0b1866a2c..6e932bdb5a 100644 --- a/commands/zrevrangebyscore.md +++ b/commands/zrevrangebyscore.md @@ -3,7 +3,8 @@ and `min` (including elements with score equal to `max` or `min`). 
In contrary to the default ordering of sorted sets, for this command the elements are considered to be ordered from high to low scores. -The elements having the same score are returned in reverse lexicographical order. +The elements having the same score are returned in reverse lexicographical +order. Apart from the reversed ordering, `ZREVRANGEBYSCORE` is similar to `ZRANGEBYSCORE`. diff --git a/commands/zunionstore.md b/commands/zunionstore.md index a235a2b2c2..872c2ac4c7 100644 --- a/commands/zunionstore.md +++ b/commands/zunionstore.md @@ -9,7 +9,7 @@ sorted sets where it exists. Using the `WEIGHTS` option, it is possible to specify a multiplication factor for each input sorted set. This means that the score of every element in every input sorted set is multiplied by this factor before being passed to the -aggregation function. When `WEIGHTS` is not given, the multiplication factors +aggregation function. When `WEIGHTS` is not given, the multiplication factors default to `1`. With the `AGGREGATE` option, it is possible to specify how the results of the From 5a5f9623ae197a1d425e42e1d61a32f1c505b6f1 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 18 Jun 2012 13:54:07 -0700 Subject: [PATCH 0167/2880] Wrapping --- commands/bgsave.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands/bgsave.md b/commands/bgsave.md index 58f37e065c..4e8fe59708 100644 --- a/commands/bgsave.md +++ b/commands/bgsave.md @@ -1,8 +1,8 @@ Save the DB in background. The OK code is immediately returned. Redis forks, -the parent continues to server the clients, the child saves the DB on disk -then exit. A client my be able to check if the operation succeeded using the +the parent continues to server the clients, the child saves the DB on disk then +exit. A client my be able to check if the operation succeeded using the `LASTSAVE` command. 
Please refer to the [persistence documentation][persistence] for detailed From 6a0bdd9d98b9b9c39c8dd4cdfa45826774d8bf33 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 18 Jun 2012 13:54:20 -0700 Subject: [PATCH 0168/2880] Wrap return section --- Rakefile | 2 +- commands/config set.md | 3 ++- commands/exec.md | 4 ++-- commands/hdel.md | 3 ++- commands/lastsave.md | 4 ++-- commands/pttl.md | 3 ++- commands/sadd.md | 3 ++- commands/script exists.md | 6 ++++-- commands/script load.md | 4 ++-- commands/shutdown.md | 4 ++-- commands/srem.md | 3 ++- commands/strlen.md | 3 ++- commands/ttl.md | 3 ++- commands/zrangebyscore.md | 4 ++-- commands/zrevrangebyscore.md | 4 ++-- 15 files changed, 31 insertions(+), 22 deletions(-) diff --git a/Rakefile b/Rakefile index c352b41314..605e206ae7 100644 --- a/Rakefile +++ b/Rakefile @@ -48,7 +48,7 @@ namespace :format do STDOUT.print "formatting #{file}..." STDOUT.flush - matcher = /^(?:\A|\r?\n)((?:[a-zA-Z].+?\r?\n)+)/m + matcher = /^(?:\A|\r?\n)((?:[a-zA-Z@].+?\r?\n)+)/m body = File.read(file).gsub(matcher) do |match| formatted = nil diff --git a/commands/config set.md b/commands/config set.md index 82a632ab63..8651a8d4e7 100644 --- a/commands/config set.md +++ b/commands/config set.md @@ -45,4 +45,5 @@ options are not mutually exclusive. @return -@status-reply: `OK` when the configuration was set properly. Otherwise an error is returned. +@status-reply: `OK` when the configuration was set properly. Otherwise an error +is returned. diff --git a/commands/exec.md b/commands/exec.md index b4a804f4bc..16341268c6 100644 --- a/commands/exec.md +++ b/commands/exec.md @@ -11,7 +11,7 @@ not modified, allowing for a [check-and-set mechanism][cas]. @return -@multi-bulk-reply: each element being the reply to each of the commands -in the atomic transaction. +@multi-bulk-reply: each element being the reply to each of the commands in the +atomic transaction. When using `WATCH`, `EXEC` can return a @nil-reply if the execution was aborted. 
diff --git a/commands/hdel.md b/commands/hdel.md index 65e6b3c570..3bec43f9c6 100644 --- a/commands/hdel.md +++ b/commands/hdel.md @@ -5,7 +5,8 @@ treated as an empty hash and this command returns @return -@integer-reply: the number of fields that were removed from the hash, not including specified but non existing fields. +@integer-reply: the number of fields that were removed from the hash, not +including specified but non existing fields. @history diff --git a/commands/lastsave.md b/commands/lastsave.md index 93c5cbcce5..8a4dea1ee9 100644 --- a/commands/lastsave.md +++ b/commands/lastsave.md @@ -1,6 +1,6 @@ Return the UNIX TIME of the last DB save executed with success. A client may -check if a `BGSAVE` command succeeded reading the `LASTSAVE` value, then -issuing a `BGSAVE` command and checking at regular intervals every N seconds if +check if a `BGSAVE` command succeeded reading the `LASTSAVE` value, then issuing +a `BGSAVE` command and checking at regular intervals every N seconds if `LASTSAVE` changed. @return diff --git a/commands/pttl.md b/commands/pttl.md index c42fa8bf57..dde4e65673 100644 --- a/commands/pttl.md +++ b/commands/pttl.md @@ -9,7 +9,8 @@ time in seconds while `PTTL` returns it in milliseconds. @return -@integer-reply: Time to live in milliseconds or `-1` when `key` does not exist or does not have a timeout. +@integer-reply: Time to live in milliseconds or `-1` when `key` does not exist +or does not have a timeout. @examples diff --git a/commands/sadd.md b/commands/sadd.md index 0683f8f65f..b698b0ecce 100644 --- a/commands/sadd.md +++ b/commands/sadd.md @@ -6,7 +6,8 @@ An error is returned when the value stored at `key` is not a set. @return -@integer-reply: the number of elements that were added to the set, not including all the elements already present into the set. +@integer-reply: the number of elements that were added to the set, not including +all the elements already present into the set. 
@history diff --git a/commands/script exists.md b/commands/script exists.md index 84b9b18639..17aac20883 100644 --- a/commands/script exists.md +++ b/commands/script exists.md @@ -12,8 +12,10 @@ Lua scripting. @return -@multi-bulk-reply -The command returns an array of integers that correspond to the specified SHA1 sum arguments. For every corresponding SHA1 sum of a script that actually exists in the script cache, an 1 is returned, otherwise 0 is returned. +@multi-bulk-reply The command returns an array of integers that correspond to +the specified SHA1 sum arguments. For every corresponding SHA1 sum of a script +that actually exists in the script cache, an 1 is returned, otherwise 0 is +returned. @example diff --git a/commands/script load.md b/commands/script load.md index eff51a1774..15bd451c86 100644 --- a/commands/script load.md +++ b/commands/script load.md @@ -14,5 +14,5 @@ Lua scripting. @return -@bulk-reply -This command returns the SHA1 sum of the script added into the script cache. +@bulk-reply This command returns the SHA1 sum of the script added into the +script cache. diff --git a/commands/shutdown.md b/commands/shutdown.md index ba19c0c51f..860d8899c7 100644 --- a/commands/shutdown.md +++ b/commands/shutdown.md @@ -25,5 +25,5 @@ command. Specifically: @return -@status-reply on error. On success nothing is returned since the server -quits and the connection is closed. +@status-reply on error. On success nothing is returned since the server quits +and the connection is closed. diff --git a/commands/srem.md b/commands/srem.md index 16b16d0b8b..c33da4fbfc 100644 --- a/commands/srem.md +++ b/commands/srem.md @@ -6,7 +6,8 @@ An error is returned when the value stored at `key` is not a set. @return -@integer-reply: the number of members that were removed from the set, not including non existing members. +@integer-reply: the number of members that were removed from the set, not +including non existing members. 
@history diff --git a/commands/strlen.md b/commands/strlen.md index bcaf9d5f4b..b785fe9d1d 100644 --- a/commands/strlen.md +++ b/commands/strlen.md @@ -3,7 +3,8 @@ when `key` holds a non-string value. @return -@integer-reply: the length of the string at `key`, or `0` when `key` does not exist. +@integer-reply: the length of the string at `key`, or `0` when `key` does not +exist. @examples diff --git a/commands/ttl.md b/commands/ttl.md index 8cc59c3242..409063e98b 100644 --- a/commands/ttl.md +++ b/commands/ttl.md @@ -4,7 +4,8 @@ key will continue to be part of the dataset. @return -@integer-reply: TTL in seconds or `-1` when `key` does not exist or does not have a timeout. +@integer-reply: TTL in seconds or `-1` when `key` does not exist or does not +have a timeout. @examples diff --git a/commands/zrangebyscore.md b/commands/zrangebyscore.md index f78a6e3864..afdb08102d 100644 --- a/commands/zrangebyscore.md +++ b/commands/zrangebyscore.md @@ -36,8 +36,8 @@ Will return all the elements with `5 < score < 10` (5 and 10 excluded). @return -@multi-bulk-reply: list of elements in the specified score range (optionally with -their scores). +@multi-bulk-reply: list of elements in the specified score range (optionally +with their scores). @examples diff --git a/commands/zrevrangebyscore.md b/commands/zrevrangebyscore.md index 6e932bdb5a..c20bd61915 100644 --- a/commands/zrevrangebyscore.md +++ b/commands/zrevrangebyscore.md @@ -11,8 +11,8 @@ Apart from the reversed ordering, `ZREVRANGEBYSCORE` is similar to @return -@multi-bulk-reply: list of elements in the specified score range (optionally with -their scores). +@multi-bulk-reply: list of elements in the specified score range (optionally +with their scores). 
@examples From bf4cb6175c6b424790d49ac1ffb741c5ff121949 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 18 Jun 2012 18:21:51 -0700 Subject: [PATCH 0169/2880] Reformat command documentation --- Rakefile | 17 +-- commands/append.md | 5 +- commands/auth.md | 5 +- commands/bgrewriteaof.md | 20 ++-- commands/bgsave.md | 11 +- commands/bitcount.md | 14 ++- commands/bitop.md | 19 ++-- commands/blpop.md | 33 +++--- commands/brpop.md | 23 ++-- commands/brpoplpush.md | 12 +-- commands/config get.md | 22 ++-- commands/config set.md | 22 ++-- commands/dbsize.md | 2 - commands/debug object.md | 4 +- commands/debug segfault.md | 4 +- commands/decrby.md | 1 - commands/del.md | 1 - commands/discard.md | 7 +- commands/dump.md | 8 +- commands/echo.md | 1 - commands/eval.md | 129 ++++++++++++---------- commands/exec.md | 11 +- commands/exists.md | 1 - commands/expire.md | 18 ++-- commands/expireat.md | 10 +- commands/get.md | 1 - commands/getbit.md | 11 +- commands/getrange.md | 4 +- commands/getset.md | 5 +- commands/hdel.md | 9 +- commands/hexists.md | 1 - commands/hget.md | 1 - commands/hgetall.md | 1 - commands/hincrby.md | 5 +- commands/hincrbyfloat.md | 3 +- commands/hkeys.md | 1 - commands/hlen.md | 1 - commands/hmget.md | 1 - commands/hmset.md | 1 - commands/hset.md | 1 - commands/hsetnx.md | 1 - commands/hvals.md | 1 - commands/incr.md | 27 +++-- commands/incrby.md | 1 - commands/incrbyfloat.md | 3 +- commands/info.md | 22 ++-- commands/keys.md | 15 ++- commands/lastsave.md | 4 +- commands/lindex.md | 1 - commands/linsert.md | 1 - commands/llen.md | 1 - commands/lpop.md | 1 - commands/lpush.md | 4 +- commands/lpushx.md | 1 - commands/lrange.md | 7 +- commands/lrem.md | 1 - commands/lset.md | 5 +- commands/ltrim.md | 3 +- commands/mget.md | 1 - commands/migrate.md | 5 +- commands/monitor.md | 13 +-- commands/move.md | 1 - commands/mset.md | 1 - commands/msetnx.md | 1 - commands/multi.md | 7 +- commands/object.md | 30 ++++-- commands/persist.md | 5 +- 
commands/pexpireat.md | 4 +- commands/ping.md | 1 - commands/psetex.md | 3 +- commands/pttl.md | 2 - commands/quit.md | 1 - commands/randomkey.md | 1 - commands/rename.md | 1 - commands/renamenx.md | 1 - commands/restore.md | 3 +- commands/rpop.md | 1 - commands/rpoplpush.md | 27 +++-- commands/rpush.md | 4 +- commands/rpushx.md | 1 - commands/sadd.md | 4 +- commands/save.md | 5 +- commands/scard.md | 5 +- commands/sdiff.md | 1 - commands/select.md | 1 - commands/set.md | 1 - commands/setbit.md | 30 +++--- commands/setex.md | 3 +- commands/setnx.md | 27 ++--- commands/setrange.md | 25 +++-- commands/shutdown.md | 15 +-- commands/sinter.md | 1 - commands/sismember.md | 1 - commands/slowlog.md | 20 ++-- commands/smembers.md | 1 - commands/smove.md | 1 - commands/sort.md | 34 +++--- commands/spop.md | 1 - commands/srandmember.md | 1 - commands/srem.md | 4 +- commands/strlen.md | 1 - commands/subscribe.md | 4 +- commands/sunion.md | 1 - commands/time.md | 1 - commands/ttl.md | 1 - commands/type.md | 1 - commands/unwatch.md | 4 +- commands/watch.md | 4 +- commands/zadd.md | 11 +- commands/zcard.md | 1 - commands/zcount.md | 1 - commands/zincrby.md | 5 +- commands/zinterstore.md | 5 +- commands/zrange.md | 7 +- commands/zrangebyscore.md | 3 +- commands/zrank.md | 3 +- commands/zrem.md | 7 +- commands/zremrangebyrank.md | 13 ++- commands/zremrangebyscore.md | 1 - commands/zrevrange.md | 1 - commands/zrevrangebyscore.md | 1 - commands/zrevrank.md | 3 +- commands/zscore.md | 5 +- commands/zunionstore.md | 1 - remarkdown.rb | 204 +++++++++++++++++++++++++++++++++++ 125 files changed, 650 insertions(+), 479 deletions(-) create mode 100644 remarkdown.rb diff --git a/Rakefile b/Rakefile index 605e206ae7..9711d24ede 100644 --- a/Rakefile +++ b/Rakefile @@ -42,24 +42,17 @@ end namespace :format do + require "./remarkdown" + def format(file) return unless File.exist?(file) STDOUT.print "formatting #{file}..." 
STDOUT.flush - matcher = /^(?:\A|\r?\n)((?:[a-zA-Z@].+?\r?\n)+)/m - body = File.read(file).gsub(matcher) do |match| - formatted = nil - - IO.popen("par p0s0w80", "r+") do |io| - io.puts match - io.close_write - formatted = io.read - end - - formatted - end + body = File.read(file) + body = ReMarkdown.new(body).to_s + body = body.gsub(/^\s+$/, "") File.open(file, "w") do |f| f.print body diff --git a/commands/append.md b/commands/append.md index 79dd0ba7a0..c4c6ecca9b 100644 --- a/commands/append.md +++ b/commands/append.md @@ -25,7 +25,10 @@ sample arrives we can store it using the command Accessing individual elements in the time series is not hard: * `STRLEN` can be used in order to obtain the number of samples. -* `GETRANGE` allows for random access of elements. If our time series have an associated time information we can easily implement a binary search to get range combining `GETRANGE` with the Lua scripting engine available in Redis 2.6. +* `GETRANGE` allows for random access of elements. If our time series have an + associated time information we can easily implement a binary search to get + range combining `GETRANGE` with the Lua scripting engine available in Redis + 2.6. * `SETRANGE` can be used to overwrite an existing time serie. The limitations of this pattern is that we are forced into an append-only mode diff --git a/commands/auth.md b/commands/auth.md index 8251f61cf5..661ecd8016 100644 --- a/commands/auth.md +++ b/commands/auth.md @@ -7,10 +7,9 @@ with the `OK` status code and starts accepting commands. Otherwise, an error is returned and the clients needs to try a new password. **Note**: because of the high performance nature of Redis, it is possible to try -a lot of passwords in parallel in very short time, so make sure to generate -a strong and very long password so that this attack is infeasible. +a lot of passwords in parallel in very short time, so make sure to generate a +strong and very long password so that this attack is infeasible. 
@return @status-reply - diff --git a/commands/bgrewriteaof.md b/commands/bgrewriteaof.md index befa266bf9..51e97ad4c0 100644 --- a/commands/bgrewriteaof.md +++ b/commands/bgrewriteaof.md @@ -1,23 +1,27 @@ -Instruct Redis to start an [Append Only File][aof] rewrite process. The rewrite -will create a small optimized version of the current Append Only File. +Instruct Redis to start an [Append Only File][tpaof] rewrite process. The +rewrite will create a small optimized version of the current Append Only File. -[aof]: /topics/persistence#append-only-file +[tpaof]: /topics/persistence#append-only-file If `BGREWRITEAOF` fails, no data gets lost as the old AOF will be untouched. The rewrite will be only triggered by Redis if there is not already a background process doing persistence. Specifically: -* If a Redis child is creating a snapshot on disk, the AOF rewrite is *scheduled* but not started until the saving child producing the RDB file terminates. In this case the `BGREWRITEAOF` will still return an OK code, but with an appropriate message. You can check if an AOF rewrite is scheduled looking at the `INFO` command starting from Redis 2.6. -* If an AOF rewrite is already in progress the command returns an error and no AOF rewrite will be scheduled for a later time. +* If a Redis child is creating a snapshot on disk, the AOF rewrite is + *scheduled* but not started until the saving child producing the RDB file + terminates. In this case the `BGREWRITEAOF` will still return an OK code, but + with an appropriate message. You can check if an AOF rewrite is scheduled + looking at the `INFO` command starting from Redis 2.6. +* If an AOF rewrite is already in progress the command returns an error and no + AOF rewrite will be scheduled for a later time. Since Redis 2.4 the AOF rewrite is automatically triggered by Redis, however the `BGREWRITEAOF` command can be used to trigger a rewrite at any time. 
-Please refer to the [persistence documentation][persistence] for detailed -information. +Please refer to the [persistence documentation][tp] for detailed information. -[persistence]: /topics/persistence +[tp]: /topics/persistence @return diff --git a/commands/bgsave.md b/commands/bgsave.md index 4e8fe59708..25a5f1f696 100644 --- a/commands/bgsave.md +++ b/commands/bgsave.md @@ -1,14 +1,11 @@ - - Save the DB in background. The OK code is immediately returned. Redis forks, -the parent continues to server the clients, the child saves the DB on disk then -exit. A client my be able to check if the operation succeeded using the +the parent continues to server the clients, the child saves the DB on disk +then exit. A client my be able to check if the operation succeeded using the `LASTSAVE` command. -Please refer to the [persistence documentation][persistence] for detailed -information. +Please refer to the [persistence documentation][tp] for detailed information. -[persistence]: /topics/persistence +[tp]: /topics/persistence @return diff --git a/commands/bitcount.md b/commands/bitcount.md index 414aa47620..e7fee5733d 100644 --- a/commands/bitcount.md +++ b/commands/bitcount.md @@ -42,10 +42,11 @@ the bit corresponding to the current day. Later it will be trivial to know the number of single days the user visited the web site simply calling the `BITCOUNT` command against the bitmap. -A similar pattern where user IDs are used instead of days is described in the -article called "[Fast easy realtime metrics using Redis bitmaps][bitmaps]". +A similar pattern where user IDs are used instead of days is described +in the article called "[Fast easy realtime metrics using Redis +bitmaps][hbgc212fermurb]". 
-[bitmaps]: http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps +[hbgc212fermurb]: http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps ## Performance considerations @@ -56,5 +57,8 @@ Redis command like `GET` or `INCR`. When the bitmap is big, there are two alternatives: -+ Taking a separated key that is incremented every time the bitmap is modified. This can be very efficient and atomic using a small Redis Lua script. -+ Running the bitmap incrementally using the `BITCOUNT` *start* and *end* optional parameters, accumulating the results client-side, and optionally caching the result into a key. +* Taking a separated key that is incremented every time the bitmap is modified. + This can be very efficient and atomic using a small Redis Lua script. +* Running the bitmap incrementally using the `BITCOUNT` *start* and *end* + optional parameters, accumulating the results client-side, and optionally + caching the result into a key. diff --git a/commands/bitop.md b/commands/bitop.md index c8f05240f6..ad918bf9fe 100644 --- a/commands/bitop.md +++ b/commands/bitop.md @@ -4,10 +4,10 @@ store the result in the destination key. The `BITOP` command supports four bitwise operations: **AND**, **OR**, **XOR** and **NOT**, thus the valid forms to call the command are: -+ BITOP AND *destkey srckey1 srckey2 srckey3 ... srckeyN* -+ BITOP OR *destkey srckey1 srckey2 srckey3 ... srckeyN* -+ BITOP XOR *destkey srckey1 srckey2 srckey3 ... srckeyN* -+ BITOP NOT *destkey srckey* +* BITOP AND *destkey srckey1 srckey2 srckey3 ... srckeyN* +* BITOP OR *destkey srckey1 srckey2 srckey3 ... srckeyN* +* BITOP XOR *destkey srckey1 srckey2 srckey3 ... srckeyN* +* BITOP NOT *destkey srckey* As you can see **NOT** is special as it only takes an input key, because it performs inversion of bits so it only makes sense as an unary operator. @@ -40,18 +40,19 @@ size of the longest input string. 
## Pattern: real time metrics using bitmaps -`BITOP` is a good complement to the pattern documented in the `BITCOUNT` command documentation. Different bitmaps can be combined in order to obtain a target +`BITOP` is a good complement to the pattern documented in the `BITCOUNT` command +documentation. Different bitmaps can be combined in order to obtain a target bitmap where to perform the population counting operation. See the article called "[Fast easy realtime metrics using Redis -bitmaps][bitmaps]" for an interesting use cases. +bitmaps][hbgc212fermurb]" for an interesting use cases. -[bitmaps]: http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps +[hbgc212fermurb]: http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps ## Performance considerations -`BITOP` is a potentially slow command as it runs in O(N) time. -Care should be taken when running it against long input strings. +`BITOP` is a potentially slow command as it runs in O(N) time. Care should be +taken when running it against long input strings. For real time metrics and statistics involving large inputs a good approach is to use a slave (with read-only option disabled) where to perform the bit-wise diff --git a/commands/blpop.md b/commands/blpop.md index 8685a0ea1c..c500782eee 100644 --- a/commands/blpop.md +++ b/commands/blpop.md @@ -1,4 +1,4 @@ -`BLPOP` is a blocking list pop primitive. It is the blocking version of `LPOP` +`BLPOP` is a blocking list pop primitive. It is the blocking version of `LPOP` because it blocks the connection when there are no elements to pop from any of the given lists. An element is popped from the head of the first list that is non-empty, with the given keys being checked in the order that they are given. @@ -10,8 +10,8 @@ non-empty list, an element is popped from the head of the list and returned to the caller together with the `key` it was popped from. Keys are checked in the order that they are given. 
Let's say that the key -`list1` doesn't exist and `list2` and `list3` hold non-empty lists. Consider -the following command: +`list1` doesn't exist and `list2` and `list3` hold non-empty lists. Consider the +following command: BLPOP list1 list2 list3 0 @@ -37,30 +37,31 @@ be used to block indefinitely. ## Multiple clients blocking for the same keys -Multiple clients can block for the same key. They are put into a queue, so -the first to be served will be the one that started to wait earlier, in a -first-`!BLPOP` first-served fashion. +Multiple clients can block for the same key. They are put into a queue, so the +first to be served will be the one that started to wait earlier, in a first- +`!BLPOP` first-served fashion. -## `!BLPOP` inside a `!MULTI`/`!EXEC` transaction +## `!BLPOP` inside a `!MULTI` / `!EXEC` transaction `BLPOP` can be used with pipelining (sending multiple commands and reading the -replies in batch), but it does not make sense to use `BLPOP` inside a -`MULTI`/`EXEC` block. This would require blocking the entire server in order to -execute the block atomically, which in turn does not allow other clients to -perform a push operation. +replies in batch), but it does not make sense to use `BLPOP` inside a `MULTI` / +`EXEC` block. This would require blocking the entire server in order to execute +the block atomically, which in turn does not allow other clients to perform a +push operation. -The behavior of `BLPOP` inside `MULTI`/`EXEC` when the list is empty is to +The behavior of `BLPOP` inside `MULTI` / `EXEC` when the list is empty is to return a `nil` multi-bulk reply, which is the same thing that happens when the timeout is reached. If you like science fiction, think of time flowing at -infinite speed inside a `MULTI`/`EXEC` block. +infinite speed inside a `MULTI` / `EXEC` block. @return @multi-bulk-reply: specifically: * A `nil` multi-bulk when no element could be popped and the timeout expired. 
-* A two-element multi-bulk with the first element being the name of the key where an element - was popped and the second element being the value of the popped element. +* A two-element multi-bulk with the first element being the name of the key + where an element was popped and the second element being the value of the + popped element. @examples @@ -96,5 +97,3 @@ While in the producer side we'll use simply: SADD key element LPUSH helper_key x EXEC - - diff --git a/commands/brpop.md b/commands/brpop.md index 2211cdf957..da44a3ae4e 100644 --- a/commands/brpop.md +++ b/commands/brpop.md @@ -1,21 +1,22 @@ -`BRPOP` is a blocking list pop primitive. It is the blocking version of -`RPOP` because it blocks the connection when there are no -elements to pop from any of the given lists. An element is popped from the -tail of the first list that is non-empty, with the given keys being checked -in the order that they are given. +`BRPOP` is a blocking list pop primitive. It is the blocking version of `RPOP` +because it blocks the connection when there are no elements to pop from any of +the given lists. An element is popped from the tail of the first list that is +non-empty, with the given keys being checked in the order that they are given. -See the [BLPOP documentation](/commands/blpop) for the exact semantics, since -`BRPOP` is identical to `BLPOP` with the only difference -being that it pops elements from the tail of a list instead of popping from the -head. +See the [BLPOP documentation][cb] for the exact semantics, since `BRPOP` is +identical to `BLPOP` with the only difference being that it pops elements from +the tail of a list instead of popping from the head. + +[cb]: /commands/blpop @return @multi-bulk-reply: specifically: * A `nil` multi-bulk when no element could be popped and the timeout expired. -* A two-element multi-bulk with the first element being the name of the key where an element - was popped and the second element being the value of the popped element. 
+* A two-element multi-bulk with the first element being the name of the key + where an element was popped and the second element being the value of the + popped element. @examples diff --git a/commands/brpoplpush.md b/commands/brpoplpush.md index 3598c91300..73b726db62 100644 --- a/commands/brpoplpush.md +++ b/commands/brpoplpush.md @@ -1,14 +1,14 @@ -`BRPOPLPUSH` is the blocking variant of `RPOPLPUSH`. -When `source` contains elements, this command behaves exactly like -`RPOPLPUSH`. When `source` is empty, Redis will block -the connection until another client pushes to it or until `timeout` is reached. A `timeout` of zero can be used to block indefinitely. +`BRPOPLPUSH` is the blocking variant of `RPOPLPUSH`. When `source` contains +elements, this command behaves exactly like `RPOPLPUSH`. When `source` is empty, +Redis will block the connection until another client pushes to it or until +`timeout` is reached. A `timeout` of zero can be used to block indefinitely. See `RPOPLPUSH` for more information. @return -@bulk-reply: the element being popped from `source` and pushed to -`destination`. If `timeout` is reached, a @nil-reply is returned. +@bulk-reply: the element being popped from `source` and pushed to `destination`. +If `timeout` is reached, a @nil-reply is returned. ## Pattern: Reliable queue diff --git a/commands/config get.md b/commands/config get.md index 268dcb5df2..588d46fd76 100644 --- a/commands/config get.md +++ b/commands/config get.md @@ -3,12 +3,12 @@ running Redis server. Not all the configuration parameters are supported in Redis 2.4, while Redis 2.6 can read the whole configuration of a server using this command. -The symmetric command used to alter the configuration at run time is -`CONFIG SET`. +The symmetric command used to alter the configuration at run time is `CONFIG +SET`. `CONFIG GET` takes a single argument, that is glob style pattern. All the -configuration parameters matching this parameter are reported as a -list of key-value pairs. 
Example: +configuration parameters matching this parameter are reported as a list of +key-value pairs. Example: redis> config get *max-*-entries* 1) "hash-max-zipmap-entries" @@ -22,13 +22,17 @@ You can obtain a list of all the supported configuration parameters typing `CONFIG GET *` in an open `redis-cli` prompt. All the supported parameters have the same meaning of the equivalent -configuration parameter used in the [redis.conf][conf] file, with the following -important differences: +configuration parameter used in the [redis.conf][hgcarr22rc] file, with the +following important differences: -[conf]: http://github.com/antirez/redis/raw/2.2/redis.conf +[hgcarr22rc]: http://github.com/antirez/redis/raw/2.2/redis.conf -* Where bytes or other quantities are specified, it is not possible to use the `redis.conf` abbreviated form (10k 2gb ... and so forth), everything should be specified as a well formed 64 bit integer, in the base unit of the configuration directive. -* The save parameter is a single string of space separated integers. Every pair of integers represent a seconds/modifications threshold. +* Where bytes or other quantities are specified, it is not possible to use + the `redis.conf` abbreviated form (10k 2gb ... and so forth), everything + should be specified as a well formed 64 bit integer, in the base unit of the + configuration directive. +* The save parameter is a single string of space separated integers. Every pair + of integers represent a seconds/modifications threshold. For instance what in `redis.conf` looks like: diff --git a/commands/config set.md b/commands/config set.md index 8651a8d4e7..084c88aabc 100644 --- a/commands/config set.md +++ b/commands/config set.md @@ -11,13 +11,17 @@ by Redis that will start acting as specified starting from the next command executed. 
All the supported parameters have the same meaning of the equivalent -configuration parameter used in the [redis.conf][conf] file, with the following -important differences: +configuration parameter used in the [redis.conf][hgcarr22rc] file, with the +following important differences: -[conf]: http://github.com/antirez/redis/raw/2.2/redis.conf +[hgcarr22rc]: http://github.com/antirez/redis/raw/2.2/redis.conf -* Where bytes or other quantities are specified, it is not possible to use the `redis.conf` abbreviated form (10k 2gb ... and so forth), everything should be specified as a well formed 64 bit integer, in the base unit of the configuration directive. -* The save parameter is a single string of space separated integers. Every pair of integers represent a seconds/modifications threshold. +* Where bytes or other quantities are specified, it is not possible to use + the `redis.conf` abbreviated form (10k 2gb ... and so forth), everything + should be specified as a well formed 64 bit integer, in the base unit of the + configuration directive. +* The save parameter is a single string of space separated integers. Every pair + of integers represent a seconds/modifications threshold. For instance what in `redis.conf` looks like: @@ -30,15 +34,15 @@ be set using `CONFIG SET` as "900 1 300 10". It is possible to switch persistence from RDB snapshotting to append only file (and the other way around) using the `CONFIG SET` command. For more information -about how to do that please check [persistence page][persistence]. +about how to do that please check [persistence page][tp]. 
-[persistence]: /topics/persistence +[tp]: /topics/persistence In general what you should know is that setting the `appendonly` parameter to `yes` will start a background process to save the initial append only file (obtained from the in memory data set), and will append all the subsequent -commands on the append only file, thus obtaining exactly the same effect of -a Redis server that started with AOF turned on since the start. +commands on the append only file, thus obtaining exactly the same effect of a +Redis server that started with AOF turned on since the start. You can have both the AOF enabled with RDB snapshotting if you want, the two options are not mutually exclusive. diff --git a/commands/dbsize.md b/commands/dbsize.md index 468ac7d678..8818785166 100644 --- a/commands/dbsize.md +++ b/commands/dbsize.md @@ -1,5 +1,3 @@ - - Return the number of keys in the currently selected database. @return diff --git a/commands/debug object.md b/commands/debug object.md index ffc969d8ab..4d3cf6de22 100644 --- a/commands/debug object.md +++ b/commands/debug object.md @@ -1,4 +1,4 @@ -`DEBUG OBJECT` is a debugging command that should not be used by clients. -Check the `OBJECT` command instead. +`DEBUG OBJECT` is a debugging command that should not be used by clients. Check +the `OBJECT` command instead. @status-reply diff --git a/commands/debug segfault.md b/commands/debug segfault.md index 7524c166f2..c01d06a38c 100644 --- a/commands/debug segfault.md +++ b/commands/debug segfault.md @@ -1,4 +1,4 @@ -`DEBUG SEGFAULT` performs an invalid memory access that crashes Redis. -It is used to simulate bugs during the development. +`DEBUG SEGFAULT` performs an invalid memory access that crashes Redis. It is +used to simulate bugs during the development. 
@status-reply diff --git a/commands/decrby.md b/commands/decrby.md index 48db8d01da..16a77dc814 100644 --- a/commands/decrby.md +++ b/commands/decrby.md @@ -14,4 +14,3 @@ See `INCR` for extra information on increment/decrement operations. @cli SET mykey "10" DECRBY mykey 5 - diff --git a/commands/del.md b/commands/del.md index 84e04c8039..6d20c0c314 100644 --- a/commands/del.md +++ b/commands/del.md @@ -10,4 +10,3 @@ Removes the specified keys. A key is ignored if it does not exist. SET key1 "Hello" SET key2 "World" DEL key1 key2 key3 - diff --git a/commands/discard.md b/commands/discard.md index f5c75aa031..19cece2906 100644 --- a/commands/discard.md +++ b/commands/discard.md @@ -1,8 +1,7 @@ -Flushes all previously queued commands in a -[transaction][transactions] and restores the connection state to -normal. +Flushes all previously queued commands in a [transaction][tt] and restores the +connection state to normal. -[transactions]: /topics/transactions +[tt]: /topics/transactions If `WATCH` was used, `DISCARD` unwatches all keys. diff --git a/commands/dump.md b/commands/dump.md index 3999ab35f3..58ec12a22e 100644 --- a/commands/dump.md +++ b/commands/dump.md @@ -5,9 +5,13 @@ the user. The returned value can be synthesized back into a Redis key using the The serialization format is opaque and non-standard, however it has a few semantical characteristics: -* It contains a 64bit checksum that is used to make sure errors will be detected. The `RESTORE` command makes sure to check the checksum before synthesizing a key using the serialized value. +* It contains a 64bit checksum that is used to make sure errors will be + detected. The `RESTORE` command makes sure to check the checksum before + synthesizing a key using the serialized value. * Values are encoded in the same format used by RDB. -* An RDB version is encoded inside the serialized value, so that different Redis versions with incompatible RDB formats will refuse to process the serialized value. 
+* An RDB version is encoded inside the serialized value, so that different Redis + versions with incompatible RDB formats will refuse to process the serialized + value. The serialized value does NOT contain expire information. In order to capture the time to live of the current value the `PTTL` command should be used. diff --git a/commands/echo.md b/commands/echo.md index d72c833f7a..3e1767eb20 100644 --- a/commands/echo.md +++ b/commands/echo.md @@ -8,4 +8,3 @@ Returns `message`. @cli ECHO "Hello World!" - diff --git a/commands/eval.md b/commands/eval.md index f41ac0a30b..6bff2b985e 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -103,7 +103,7 @@ following table shows you all the conversions rules: There is an additional Lua to Redis conversion rule that has no corresponding Redis to Lua conversion rule: - * Lua boolean true -> Redis integer reply with value of 1. +* Lua boolean true -> Redis integer reply with value of 1. The followings are a few conversion examples: @@ -167,20 +167,23 @@ would be a problem for a few reasons: * Different instances may have different versions of a command implementation. -* Deployment is hard if there is to make sure all the instances contain a given command, especially in a distributed environment. -* Reading an application code the full semantic could not be clear since the application would call commands defined server side. +* Deployment is hard if you need to make sure all the instances contain a given + command, especially in a distributed environment. +* When reading an application's code the full semantics may not be clear since + the application would call commands defined server side. In order to avoid the above three problems and at the same time don't incur in the bandwidth penalty, Redis implements the `EVALSHA` command. -`EVALSHA` works exactly as `EVAL`, but instead of having a script as first argument it has the SHA1 sum of a script.
The behavior is the following: +`EVALSHA` works exactly as `EVAL`, but instead of having a script as first +argument it has the SHA1 sum of a script. The behavior is the following: -* If the server still remembers a script whose SHA1 sum was the one -specified, the script is executed. +* If the server still remembers a script whose SHA1 sum was the one specified, + the script is executed. -* If the server does not remember a script with this SHA1 sum, a special -error is returned that will tell the client to use `EVAL` instead. +* If the server does not remember a script with this SHA1 sum, a special error + is returned that will tell the client to use `EVAL` instead. Example: @@ -232,29 +235,28 @@ this problem in its details later). Redis offers a SCRIPT command that can be used in order to control the scripting subsystem. SCRIPT currently accepts three different commands: -* SCRIPT FLUSH. This command is the only way to force Redis to flush the -scripts cache. It is mostly useful in a cloud environment where the same -instance can be reassigned to a different user. It is also useful for -testing client libraries implementations of the scripting feature. - -* SCRIPT EXISTS *sha1* *sha2* ... *shaN*. Given a list of SHA1 digests -as arguments this command returns an array of 1 or 0, where 1 means the -specific SHA1 is recognized as a script already present in the scripting -cache, while 0 means that a script with this SHA1 was never seen before -(or at least never seen after the latest SCRIPT FLUSH command). - -* SCRIPT LOAD *script*. This command registers the specified script in -the Redis script cache. The command is useful in all the contexts where -we want to make sure that `EVALSHA` will not fail (for instance during a -pipeline or MULTI/EXEC operation), without the need to actually execute the -script. - -* SCRIPT KILL. This command is the only wait to interrupt a long running -script that reached the configured maximum execution time for scripts. 
-The SCRIPT KILL command can only be used with scripts that did not modified -the dataset during their execution (since stopping a read only script does -not violate the scripting engine guaranteed atomicity). -See the next sections for more information about long running scripts. +* SCRIPT FLUSH. This command is the only way to force Redis to flush the scripts + cache. It is mostly useful in a cloud environment where the same instance + can be reassigned to a different user. It is also useful for testing client + libraries implementations of the scripting feature. + +* SCRIPT EXISTS *sha1* *sha2*... *shaN*. Given a list of SHA1 digests as + arguments this command returns an array of 1 or 0, where 1 means the specific + SHA1 is recognized as a script already present in the scripting cache, while + 0 means that a script with this SHA1 was never seen before (or at least never + seen after the latest SCRIPT FLUSH command). + +* SCRIPT LOAD *script*. This command registers the specified script in the + Redis script cache. The command is useful in all the contexts where we want + to make sure that `EVALSHA` will not fail (for instance during a pipeline or + MULTI/EXEC operation), without the need to actually execute the script. + +* SCRIPT KILL. This command is the only way to interrupt a long running script + that reached the configured maximum execution time for scripts. The SCRIPT + KILL command can only be used with scripts that did not modify the dataset + during their execution (since stopping a read only script does not violate + the scripting engine guaranteed atomicity). See the next sections for more + information about long running scripts. ## Scripts as pure functions @@ -272,11 +274,11 @@ scripts). The only drawback with this approach is that scripts are required to have the following property: -* The script always evaluates the same Redis *write* commands with the -same arguments given the same input data set.
Operations performed by -the script cannot depend on any hidden (non explicit) information or state -that may change as script execution proceeds or between different executions of -the script, nor can it depend on any external input from I/O devices. +* The script always evaluates the same Redis *write* commands with the same + arguments given the same input data set. Operations performed by the script + cannot depend on any hidden (non explicit) information or state that may + change as script execution proceeds or between different executions of the + script, nor can it depend on any external input from I/O devices. Things like using the system time, calling Redis random commands like `RANDOMKEY`, or using Lua random number generator, could result into scripts @@ -284,24 +286,30 @@ that will not evaluate always in the same way. In order to enforce this behavior in scripts Redis does the following: -* Lua does not export commands to access the system time or other external state. +* Lua does not export commands to access the system time or other external + state. -* Redis will block the script with an error if a script will call a -Redis command able to alter the data set **after** a Redis *random* -command like `RANDOMKEY`, `SRANDMEMBER`, `TIME`. This means that if a script is -read only and does not modify the data set it is free to call those commands. -Note that a *random command* does not necessarily identifies a command that -uses random numbers: any non deterministic command is considered a random -command (the best example in this regard is the `TIME` command). +* Redis will block the script with an error if a script will call a Redis + command able to alter the data set **after** a Redis *random* command like + `RANDOMKEY`, `SRANDMEMBER`, `TIME`. This means that if a script is read only + and does not modify the data set it is free to call those commands. 
Note that + a *random command* does not necessarily identify a command that uses random + numbers: any non deterministic command is considered a random command (the + best example in this regard is the `TIME` command). * Redis commands that may return elements in random order, like `SMEMBERS` -(because Redis Sets are *unordered*) have a different behavior when called from Lua, and undergone a silent lexicographical sorting filter before returning data to Lua scripts. So `redis.call("smembers",KEYS[1])` will always return the Set elements in the same order, while the same command invoked from normal clients may return different results even if the key contains exactly the same elements. + (because Redis Sets are *unordered*) have a different behavior when called + from Lua, and undergo a silent lexicographical sorting filter before + returning data to Lua scripts. So `redis.call("smembers",KEYS[1])` will always + return the Set elements in the same order, while the same command invoked from + normal clients may return different results even if the key contains exactly + the same elements. * Lua pseudo random number generation functions `math.random` and -`math.randomseed` are modified in order to always have the same seed every -time a new script is executed. This means that calling `math.random` will -always generate the same sequence of numbers every time a script is -executed if `math.randomseed` is not used. + `math.randomseed` are modified in order to always have the same seed every + time a new script is executed. This means that calling `math.random` will + always generate the same sequence of numbers every time a script is executed + if `math.randomseed` is not used. However the user is still able to write commands with random behaviors using the following simple trick. Imagine I want to write a Redis script that will @@ -463,10 +471,17 @@ that scripts are atomic in nature.
Stopping a script half-way means to possibly leave the dataset with half-written data inside. For this reasons when a script executes for more than the specified time the following happens: -* Redis logs that a script that is running for too much time is still in execution. -* It starts accepting commands again from other clients, but will reply with a BUSY error to all the clients sending normal commands. The only allowed commands in this status are `SCRIPT KILL` and `SHUTDOWN NOSAVE`. -* It is possible to terminate a script that executed only read-only commands using the `SCRIPT KILL` command. This does not violate the scripting semantic as no data was yet written on the dataset by the script. -* If the script already called write commands the only allowed command becomes `SHUTDOWN NOSAVE` that stops the server not saving the current data set on disk (basically the server is aborted). +* Redis logs that a script that is running for too much time is still in + execution. +* It starts accepting commands again from other clients, but will reply with + a BUSY error to all the clients sending normal commands. The only allowed + commands in this status are `SCRIPT KILL` and `SHUTDOWN NOSAVE`. +* It is possible to terminate a script that executed only read-only commands + using the `SCRIPT KILL` command. This does not violate the scripting semantic + as no data was yet written on the dataset by the script. +* If the script already called write commands the only allowed command becomes + `SHUTDOWN NOSAVE` that stops the server not saving the current data set on + disk (basically the server is aborted). ## EVALSHA in the context of pipelining @@ -479,7 +494,7 @@ The client library implementation should take one of the following approaches: * Always use plain `EVAL` when in the context of a pipeline. -* Accumulate all the commands to send into the pipeline, then check for -`EVAL` commands and use the `SCRIPT EXISTS` command to check if all the -scripts are already defined. 
If not add `SCRIPT LOAD` commands on top of -the pipeline as required, and use `EVALSHA` for all the `EVAL` calls. +* Accumulate all the commands to send into the pipeline, then check for `EVAL` + commands and use the `SCRIPT EXISTS` command to check if all the scripts are + already defined. If not add `SCRIPT LOAD` commands on top of the pipeline as + required, and use `EVALSHA` for all the `EVAL` calls. diff --git a/commands/exec.md b/commands/exec.md index 16341268c6..3af25d93ef 100644 --- a/commands/exec.md +++ b/commands/exec.md @@ -1,13 +1,12 @@ -Executes all previously queued commands in a -[transaction][transactions] and restores the connection state to -normal. +Executes all previously queued commands in a [transaction][tt] and restores the +connection state to normal. -[transactions]: /topics/transactions +[tt]: /topics/transactions When using `WATCH`, `EXEC` will execute commands only if the watched keys were -not modified, allowing for a [check-and-set mechanism][cas]. +not modified, allowing for a [check-and-set mechanism][ttc]. -[cas]: /topics/transactions#cas +[ttc]: /topics/transactions#cas @return diff --git a/commands/exists.md b/commands/exists.md index 7d1cf0a80e..8df55a66e1 100644 --- a/commands/exists.md +++ b/commands/exists.md @@ -13,4 +13,3 @@ Returns if `key` exists. SET key1 "Hello" EXISTS key1 EXISTS key2 - diff --git a/commands/expire.md b/commands/expire.md index b099bb93ea..fe17727887 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -1,6 +1,6 @@ Set a timeout on `key`. After the timeout has expired, the key will automatically be deleted. A key with an associated timeout is often said to be -_volatile_ in Redis terminology. +*volatile* in Redis terminology. The timeout is cleared only when the key is removed using the `DEL` command or overwritten using the `SET` or `GETSET` commands. This means that all the @@ -16,10 +16,10 @@ using the `PERSIST` command. 
If a key is renamed with `RENAME`, the associated time to live is transferred to the new key name. -If a key is overwritten by `RENAME`, like in the case of an existing key -`Key_A` that is overwritten by a call like `RENAME Key_B Key_A`, it does not -matter if the original `Key_A` had a timeout associated or not, the new key -`Key_A` will inherit all the characteristics of `Key_B`. +If a key is overwritten by `RENAME`, like in the case of an existing key `Key_A` +that is overwritten by a call like `RENAME Key_B Key_A`, it does not matter if +the original `Key_A` had a timeout associated or not, the new key `Key_A` will +inherit all the characteristics of `Key_B`. ## Refreshing expires @@ -55,10 +55,10 @@ now fixed. Imagine you have a web service and you are interested in the latest N pages *recently* visited by your users, such that each adiacent page view was not -performed more than 60 seconds after the previous. Conceptually you may think -at this set of page views as a *Navigation session* if your user, that may -contain interesting information about what kind of products he or she is -looking for currently, so that you can recommend related products. +performed more than 60 seconds after the previous. Conceptually you may think of +this set of page views as a *Navigation session* of your user, which may contain +interesting information about what kind of products he or she is looking for +currently, so that you can recommend related products. You can easily model this pattern in Redis using the following strategy: every time the user does a page view you call the following commands: diff --git a/commands/expireat.md b/commands/expireat.md index 42196b88e6..8fa0d774fb 100644 --- a/commands/expireat.md +++ b/commands/expireat.md @@ -1,11 +1,12 @@ -`EXPIREAT` has the same effect and semantic as `EXPIRE`, but -instead of specifying the number of seconds representing the TTL (time to live), it takes an absolute [Unix timestamp][2] (seconds since January 1, 1970).
+`EXPIREAT` has the same effect and semantic as `EXPIRE`, but instead of +specifying the number of seconds representing the TTL (time to live), it takes +an absolute [Unix timestamp][hewowu] (seconds since January 1, 1970). + +[hewowu]: http://en.wikipedia.org/wiki/Unix_time Please for the specific semantics of the command refer to the documentation of `EXPIRE`. -[2]: http://en.wikipedia.org/wiki/Unix_time - ## Background `EXPIREAT` was introduced in order to convert relative timeouts to absolute @@ -26,4 +27,3 @@ specify that a given key should expire at a given time in the future. EXISTS mykey EXPIREAT mykey 1293840000 EXISTS mykey - diff --git a/commands/get.md b/commands/get.md index a7543eb16c..4c0b79beeb 100644 --- a/commands/get.md +++ b/commands/get.md @@ -12,4 +12,3 @@ because `GET` only handles string values. GET nonexisting SET mykey "Hello" GET mykey - diff --git a/commands/getbit.md b/commands/getbit.md index b0555f72e3..c74d66f9ef 100644 --- a/commands/getbit.md +++ b/commands/getbit.md @@ -1,13 +1,13 @@ -Returns the bit value at _offset_ in the string value stored at _key_. +Returns the bit value at *offset* in the string value stored at *key*. -When _offset_ is beyond the string length, the string is assumed to be a -contiguous space with 0 bits. When _key_ does not exist it is assumed to be an -empty string, so _offset_ is always out of range and the value is also assumed +When *offset* is beyond the string length, the string is assumed to be a +contiguous space with 0 bits. When *key* does not exist it is assumed to be an +empty string, so *offset* is always out of range and the value is also assumed to be a contiguous space with 0 bits. @return -@integer-reply: the bit value stored at _offset_. +@integer-reply: the bit value stored at *offset*. @examples @@ -16,4 +16,3 @@ to be a contiguous space with 0 bits. 
GETBIT mykey 0 GETBIT mykey 7 GETBIT mykey 100 - diff --git a/commands/getrange.md b/commands/getrange.md index be937e0728..0b1e46e5bb 100644 --- a/commands/getrange.md +++ b/commands/getrange.md @@ -1,4 +1,5 @@ -**Warning**: this command was renamed to `GETRANGE`, it is called `SUBSTR` in Redis versions `<= 2.0`. +**Warning**: this command was renamed to `GETRANGE`, it is called `SUBSTR` in +Redis versions `<= 2.0`. Returns the substring of the string value stored at `key`, determined by the offsets `start` and `end` (both are inclusive). Negative offsets can be used in @@ -20,4 +21,3 @@ the actual length of the string. GETRANGE mykey -3 -1 GETRANGE mykey 0 -1 GETRANGE mykey 10 100 - diff --git a/commands/getset.md b/commands/getset.md index b42389d36d..3aebcf7e1c 100644 --- a/commands/getset.md +++ b/commands/getset.md @@ -3,10 +3,10 @@ Returns an error when `key` exists but does not hold a string value. ## Design pattern -`GETSET` can be used together with `INCR` for counting with atomic reset. For +`GETSET` can be used together with `INCR` for counting with atomic reset. For example: a process may call `INCR` against the key `mycounter` every time some event occurs, but from time to time we need to get the value of the counter and -reset it to zero atomically. This can be done using `GETSET mycounter "0"`: +reset it to zero atomically. This can be done using `GETSET mycounter "0"`: @cli INCR mycounter @@ -23,4 +23,3 @@ reset it to zero atomically. This can be done using `GETSET mycounter "0"`: SET mykey "Hello" GETSET mykey "World" GET mykey - diff --git a/commands/hdel.md b/commands/hdel.md index 3bec43f9c6..3b9241d873 100644 --- a/commands/hdel.md +++ b/commands/hdel.md @@ -1,7 +1,6 @@ Removes the specified fields from the hash stored at `key`. Specified fields that do not exist within this hash are ignored. If `key` does not exist, it is -treated as an empty hash and this command returns -`0`. +treated as an empty hash and this command returns `0`. 
@return @@ -10,10 +9,11 @@ including specified but non existing fields. @history -* `>= 2.4`: Accepts multiple `field` arguments. Redis versions older than 2.4 can only remove a field per call. +* `>= 2.4`: Accepts multiple `field` arguments. Redis versions older than 2.4 + can only remove a field per call. To remove multiple fields from a hash in an atomic fashion in earlier - versions, use a `MULTI`/`EXEC` block. + versions, use a `MULTI` / `EXEC` block. @examples @@ -21,4 +21,3 @@ including specified but non existing fields. HSET myhash field1 "foo" HDEL myhash field1 HDEL myhash field2 - diff --git a/commands/hexists.md b/commands/hexists.md index e52755e56f..0df9c222ec 100644 --- a/commands/hexists.md +++ b/commands/hexists.md @@ -13,4 +13,3 @@ Returns if `field` is an existing field in the hash stored at `key`. HSET myhash field1 "foo" HEXISTS myhash field1 HEXISTS myhash field2 - diff --git a/commands/hget.md b/commands/hget.md index ff5448b4fd..5eae9ea7bb 100644 --- a/commands/hget.md +++ b/commands/hget.md @@ -11,4 +11,3 @@ present in the hash or `key` does not exist. HSET myhash field1 "foo" HGET myhash field1 HGET myhash field2 - diff --git a/commands/hgetall.md b/commands/hgetall.md index d96c85cc4f..6a858f0035 100644 --- a/commands/hgetall.md +++ b/commands/hgetall.md @@ -13,4 +13,3 @@ empty list when `key` does not exist. HSET myhash field1 "Hello" HSET myhash field2 "World" HGETALL myhash - diff --git a/commands/hincrby.md b/commands/hincrby.md index 6514dadb0e..75c49aab02 100644 --- a/commands/hincrby.md +++ b/commands/hincrby.md @@ -1,6 +1,6 @@ Increments the number stored at `field` in the hash stored at `key` by -`increment`. If `key` does not exist, a new key holding a hash is created. If -`field` does not exist the value is set to `0` before the operation is +`increment`. If `key` does not exist, a new key holding a hash is created. +If `field` does not exist the value is set to `0` before the operation is performed. 
The range of values supported by `HINCRBY` is limited to 64 bit signed integers. @@ -19,4 +19,3 @@ operations can be performed: HINCRBY myhash field 1 HINCRBY myhash field -1 HINCRBY myhash field -10 - diff --git a/commands/hincrbyfloat.md b/commands/hincrbyfloat.md index 50d6fa9446..b80f7b619d 100644 --- a/commands/hincrbyfloat.md +++ b/commands/hincrbyfloat.md @@ -4,7 +4,8 @@ exist, it is set to `0` before performing the operation. An error is returned if one of the following conditions occur: * The field contains a value of the wrong type (not a string). -* The current field content or the specified increment are not parsable as a double precision floating point number. +* The current field content or the specified increment are not parsable as a + double precision floating point number. The exact behavior of this command is identical to the one of the `INCRBYFLOAT` command, please refer to the documentation of `INCRBYFLOAT` for further diff --git a/commands/hkeys.md b/commands/hkeys.md index 2f6b4001d7..6bc8bd3cf4 100644 --- a/commands/hkeys.md +++ b/commands/hkeys.md @@ -11,4 +11,3 @@ not exist. HSET myhash field1 "Hello" HSET myhash field2 "World" HKEYS myhash - diff --git a/commands/hlen.md b/commands/hlen.md index b068cd185f..df116704f7 100644 --- a/commands/hlen.md +++ b/commands/hlen.md @@ -10,4 +10,3 @@ Returns the number of fields contained in the hash stored at `key`. HSET myhash field1 "Hello" HSET myhash field2 "World" HLEN myhash - diff --git a/commands/hmget.md b/commands/hmget.md index 66839082cd..d8b470db54 100644 --- a/commands/hmget.md +++ b/commands/hmget.md @@ -14,4 +14,3 @@ order as they are requested. HSET myhash field1 "Hello" HSET myhash field2 "World" HMGET myhash field1 field2 nofield - diff --git a/commands/hmset.md b/commands/hmset.md index e2ad3e058f..fba775cfaa 100644 --- a/commands/hmset.md +++ b/commands/hmset.md @@ -12,4 +12,3 @@ not exist, a new key holding a hash is created. 
HMSET myhash field1 "Hello" field2 "World" HGET myhash field1 HGET myhash field2 - diff --git a/commands/hset.md b/commands/hset.md index 0fd56764f7..cfe15ef113 100644 --- a/commands/hset.md +++ b/commands/hset.md @@ -14,4 +14,3 @@ overwritten. @cli HSET myhash field1 "Hello" HGET myhash field1 - diff --git a/commands/hsetnx.md b/commands/hsetnx.md index 0bf86efe5f..49e98781e4 100644 --- a/commands/hsetnx.md +++ b/commands/hsetnx.md @@ -15,4 +15,3 @@ yet exist. If `key` does not exist, a new key holding a hash is created. If HSETNX myhash field "Hello" HSETNX myhash field "World" HGET myhash field - diff --git a/commands/hvals.md b/commands/hvals.md index d6793c300d..31ca894996 100644 --- a/commands/hvals.md +++ b/commands/hvals.md @@ -11,4 +11,3 @@ not exist. HSET myhash field1 "Hello" HSET myhash field2 "World" HVALS myhash - diff --git a/commands/incr.md b/commands/incr.md index 488974cdde..4f2d42087d 100644 --- a/commands/incr.md +++ b/commands/incr.md @@ -35,9 +35,15 @@ string representing the current date. This simple pattern can be extended in many ways: -* It is possible to use `INCR` and `EXPIRE` together at every page view to have a counter counting only the latest N page views separated by less than the specified amount of seconds. -* A client may use GETSET in order to atomically get the current counter value and reset it to zero. -* Using other atomic increment/decrement commands like `DECR` or `INCRBY` it is possible to handle values that may get bigger or smaller depending on the operations performed by the user. Imagine for instance the score of different users in an online game. +* It is possible to use `INCR` and `EXPIRE` together at every page view to have + a counter counting only the latest N page views separated by less than the + specified amount of seconds. +* A client may use GETSET in order to atomically get the current counter value + and reset it to zero. 
+* Using other atomic increment/decrement commands like `DECR` or `INCRBY` it + is possible to handle values that may get bigger or smaller depending on the + operations performed by the user. Imagine for instance the score of different + users in an online game. ## Pattern: Rate limiter @@ -94,12 +100,12 @@ to get it right without race conditions. We'll examine different variants. The counter is created in a way that it only will survive one second, starting from the first request performed in the current second. If there are more than -10 requests in the same second the counter will reach a value greater than -10, otherwise it will expire and start again from 0. +10 requests in the same second the counter will reach a value greater than 10, +otherwise it will expire and start again from 0. -**In the above code there is a race condition**. If for some reason the -client performs the `INCR` command but does not perform the `EXPIRE` the -key will be leaked until we'll see the same IP address again. +**In the above code there is a race condition**. If for some reason the client +performs the `INCR` command but does not perform the `EXPIRE` the key will be +leaked until we see the same IP address again. This can be fixed easily turning the `INCR` with optional `EXPIRE` into a Lua script that is send using the `EVAL` command (only available since Redis version @@ -137,6 +143,5 @@ The `RPUSHX` command only pushes the element if the key already exists. Note that we have a race here, but it is not a problem: `EXISTS` may return false but the key may be created by another client before we create it inside -the -`MULTI`/`EXEC` block. However this race will just miss an API call under rare -conditions, so the rate limiting will still work correctly. +the `MULTI` / `EXEC` block. However this race will just miss an API call under +rare conditions, so the rate limiting will still work correctly.
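The rate-limiter pattern that the incr.md hunks above reflow can be sketched outside Redis as well. The following is a minimal pure-Python model of the one-counter-per-(ip, second) logic — `CounterStore`, `allow_request`, the `rate:` key prefix, and the 10-request limit are illustrative assumptions standing in for a real Redis `INCR`/`EXPIRE` pair, not part of the patch:

```python
class CounterStore:
    """Tiny in-memory stand-in for the INCR / EXPIRE part of the pattern."""

    def __init__(self):
        self.data = {}                      # key -> (counter, expire_at or None)

    def incr(self, key, now):
        value, expire_at = self.data.get(key, (0, None))
        if expire_at is not None and now >= expire_at:
            value, expire_at = 0, None      # the key expired: restart from zero
        value += 1
        self.data[key] = (value, expire_at)
        return value

    def expire(self, key, seconds, now):
        value, _ = self.data.get(key, (0, None))
        self.data[key] = (value, now + seconds)


def allow_request(store, ip, now, limit=10):
    # One counter per (ip, current second), as in the documented pattern.
    key = "rate:%s:%d" % (ip, int(now))
    current = store.incr(key, now)
    if current == 1:
        # Without this EXPIRE the counter key would leak -- exactly the
        # race condition the incr.md patch describes.
        store.expire(key, 10, now)
    return current <= limit


store = CounterStore()
t = 1000.0                                  # a fixed "current second" for the demo
accepted = [allow_request(store, "1.2.3.4", t) for _ in range(12)]
print(accepted.count(True))                 # -> 10: requests 11 and 12 are rejected
```

In real Redis the `INCR` and `EXPIRE` are two separate commands, which is why the documentation recommends wrapping them in a Lua script or a `MULTI` / `EXEC` block.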
diff --git a/commands/incrby.md b/commands/incrby.md index 4e121d9be5..e60e45fe9d 100644 --- a/commands/incrby.md +++ b/commands/incrby.md @@ -14,4 +14,3 @@ See `INCR` for extra information on increment/decrement operations. @cli SET mykey "10" INCRBY mykey 5 - diff --git a/commands/incrbyfloat.md b/commands/incrbyfloat.md index 4e8d47986f..8dcea20f13 100644 --- a/commands/incrbyfloat.md +++ b/commands/incrbyfloat.md @@ -4,7 +4,8 @@ before performing the operation. An error is returned if one of the following conditions occur: * The key contains a value of the wrong type (not a string). -* The current key content or the specified increment are not parsable as a double precision floating point number. +* The current key content or the specified increment are not parsable as a + double precision floating point number. If the command is successful the new incremented value is stored as the new value of the key (replacing the old one), and returned to the caller as a diff --git a/commands/info.md b/commands/info.md index a679501a87..a1b6fffe66 100644 --- a/commands/info.md +++ b/commands/info.md @@ -23,24 +23,24 @@ All the fields are in the form of `field:value` terminated by `\r\n`. ## Notes * `used_memory` is the total number of bytes allocated by Redis using its - allocator (either standard `libc` `malloc`, or an alternative allocator such as - [`tcmalloc`][1] + allocator (either standard `libc` `malloc`, or an alternative allocator such + as [`tcmalloc`][hcgcpgp] * `used_memory_rss` is the number of bytes that Redis allocated as seen by the operating system. Optimally, this number is close to `used_memory` and there is little memory fragmentation. This is the number reported by tools such as - `top` and `ps`. A large difference between these numbers means there is - memory fragmentation. Because Redis does not have control over how its - allocations are mapped to memory pages, `used_memory_rss` is often the result - of a spike in memory usage. 
The ratio between `used_memory_rss` and - `used_memory` is given as `mem_fragmentation_ratio`. + `top` and `ps`. A large difference between these numbers means there is memory + fragmentation. Because Redis does not have control over how its allocations + are mapped to memory pages, `used_memory_rss` is often the result of a spike + in memory usage. The ratio between `used_memory_rss` and `used_memory` is + given as `mem_fragmentation_ratio`. * `changes_since_last_save` refers to the number of operations that produced some kind of change in the dataset since the last time either `SAVE` or `BGSAVE` was called. -* `allocation_stats` holds a histogram containing the number of allocations of - a certain size (up to 256). This provides a means of introspection for the - type of allocations performed by Redis at run time. +* `allocation_stats` holds a histogram containing the number of allocations of a + certain size (up to 256). This provides a means of introspection for the type + of allocations performed by Redis at run time. -[1]: http://code.google.com/p/google-perftools/ +[hcgcpgp]: http://code.google.com/p/google-perftools/ diff --git a/commands/keys.md b/commands/keys.md index 234fb4def8..8499438d3a 100644 --- a/commands/keys.md +++ b/commands/keys.md @@ -4,14 +4,14 @@ While the time complexity for this operation is O(N), the constant times are fairly low. For example, Redis running on an entry level laptop can scan a 1 million key database in 40 milliseconds. -**Warning**: consider `KEYS` as a command that should only be used in -production environments with extreme care. It may ruin performance when it is -executed against large databases. This command is intended for debugging and -special operations, such as changing your keyspace layout. Don't use `KEYS` -in your regular application code. If you're looking for a way to find keys in -a subset of your keyspace, consider using [sets][sets]. 
+**Warning**: consider `KEYS` as a command that should only be used in production +environments with extreme care. It may ruin performance when it is executed +against large databases. This command is intended for debugging and special +operations, such as changing your keyspace layout. Don't use `KEYS` in your +regular application code. If you're looking for a way to find keys in a subset +of your keyspace, consider using [sets][tdts]. -[sets]: /topics/data-types#sets +[tdts]: /topics/data-types#sets Supported glob-style patterns: @@ -32,4 +32,3 @@ Use `\` to escape special characters if you want to match them verbatim. KEYS *o* KEYS t?? KEYS * - diff --git a/commands/lastsave.md b/commands/lastsave.md index 8a4dea1ee9..93c5cbcce5 100644 --- a/commands/lastsave.md +++ b/commands/lastsave.md @@ -1,6 +1,6 @@ Return the UNIX TIME of the last DB save executed with success. A client may -check if a `BGSAVE` command succeeded reading the `LASTSAVE` value, then issuing -a `BGSAVE` command and checking at regular intervals every N seconds if +check if a `BGSAVE` command succeeded reading the `LASTSAVE` value, then +issuing a `BGSAVE` command and checking at regular intervals every N seconds if `LASTSAVE` changed. @return diff --git a/commands/lindex.md b/commands/lindex.md index 7432ba5851..c9bd85ccc3 100644 --- a/commands/lindex.md +++ b/commands/lindex.md @@ -18,4 +18,3 @@ When the value at `key` is not a list, an error is returned. LINDEX mylist 0 LINDEX mylist -1 LINDEX mylist 3 - diff --git a/commands/linsert.md b/commands/linsert.md index 9fb680f003..5efcc71091 100644 --- a/commands/linsert.md +++ b/commands/linsert.md @@ -18,4 +18,3 @@ the value `pivot` was not found. RPUSH mylist "World" LINSERT mylist BEFORE "World" "There" LRANGE mylist 0 -1 - diff --git a/commands/llen.md b/commands/llen.md index d61808551f..b4d7e16319 100644 --- a/commands/llen.md +++ b/commands/llen.md @@ -12,4 +12,3 @@ value stored at `key` is not a list. 
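The glob-style patterns listed for `KEYS` behave much like shell globbing; Python's `fnmatch` gives a rough stand-in for experimenting with them (an approximation for illustration only, not Redis' exact matcher):

```python
from fnmatch import fnmatch

keys = ["one", "two", "three", "four"]

# KEYS *o*  -> every key containing an "o"
print([k for k in keys if fnmatch(k, "*o*")])  # ['one', 'two', 'four']

# KEYS t??  -> three-character keys starting with "t"
print([k for k in keys if fnmatch(k, "t??")])  # ['two']
```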
LPUSH mylist "World" LPUSH mylist "Hello" LLEN mylist - diff --git a/commands/lpop.md b/commands/lpop.md index 056fea6efd..6b68c3eb41 100644 --- a/commands/lpop.md +++ b/commands/lpop.md @@ -12,4 +12,3 @@ Removes and returns the first element of the list stored at `key`. RPUSH mylist "three" LPOP mylist LRANGE mylist 0 -1 - diff --git a/commands/lpush.md b/commands/lpush.md index 513b82f811..f4125de5e4 100644 --- a/commands/lpush.md +++ b/commands/lpush.md @@ -15,7 +15,8 @@ third element. @history -* `>= 2.4`: Accepts multiple `value` arguments. In Redis versions older than 2.4 it was possible to push a single value per command. +* `>= 2.4`: Accepts multiple `value` arguments. In Redis versions older than 2.4 + it was possible to push a single value per command. @examples @@ -23,4 +24,3 @@ third element. LPUSH mylist "world" LPUSH mylist "hello" LRANGE mylist 0 -1 - diff --git a/commands/lpushx.md b/commands/lpushx.md index eeda9bc025..22da91e651 100644 --- a/commands/lpushx.md +++ b/commands/lpushx.md @@ -14,4 +14,3 @@ when `key` does not yet exist. LPUSHX myotherlist "Hello" LRANGE mylist 0 -1 LRANGE myotherlist 0 -1 - diff --git a/commands/lrange.md b/commands/lrange.md index c6468110c9..9a6f9c9a85 100644 --- a/commands/lrange.md +++ b/commands/lrange.md @@ -1,6 +1,6 @@ -Returns the specified elements of the list stored at `key`. The offsets -`start` and `stop` are zero-based indexes, with `0` being the first element of -the list (the head of the list), `1` being the next element and so on. +Returns the specified elements of the list stored at `key`. The offsets `start` +and `stop` are zero-based indexes, with `0` being the first element of the list +(the head of the list), `1` being the next element and so on. These offsets can also be negative numbers indicating offsets starting at the end of the list. For example, `-1` is the last element of the list, `-2` the @@ -34,4 +34,3 @@ end of the list, Redis will treat it like the last element of the list. 
LRANGE mylist -3 2 LRANGE mylist -100 100 LRANGE mylist 5 10 - diff --git a/commands/lrem.md b/commands/lrem.md index 0ba5010b6a..baa2328857 100644 --- a/commands/lrem.md +++ b/commands/lrem.md @@ -25,4 +25,3 @@ exist, the command will always return `0`. RPUSH mylist "hello" LREM mylist -2 "hello" LRANGE mylist 0 -1 - diff --git a/commands/lset.md b/commands/lset.md index 94331dd492..12fa6e7621 100644 --- a/commands/lset.md +++ b/commands/lset.md @@ -1,5 +1,5 @@ -Sets the list element at `index` to `value`. For more information on the -`index` argument, see `LINDEX`. +Sets the list element at `index` to `value`. For more information on the `index` +argument, see `LINDEX`. An error is returned for out of range indexes. @@ -16,4 +16,3 @@ An error is returned for out of range indexes. LSET mylist 0 "four" LSET mylist -2 "five" LRANGE mylist 0 -1 - diff --git a/commands/ltrim.md b/commands/ltrim.md index 7d4e98a5cd..08b8f87d40 100644 --- a/commands/ltrim.md +++ b/commands/ltrim.md @@ -14,7 +14,7 @@ end of the list, or `start > end`, the result will be an empty list (which causes `key` to be removed). If `end` is larger than the end of the list, Redis will treat it like the last element of the list. -A common use of `LTRIM` is together with `LPUSH`/`RPUSH`. For example: +A common use of `LTRIM` is together with `LPUSH` / `RPUSH`. For example: LPUSH mylist someelement LTRIM mylist 0 99 @@ -37,4 +37,3 @@ element is removed from the tail of the list. RPUSH mylist "three" LTRIM mylist 1 -1 LRANGE mylist 0 -1 - diff --git a/commands/mget.md b/commands/mget.md index fbd49197f8..eda675894e 100644 --- a/commands/mget.md +++ b/commands/mget.md @@ -12,4 +12,3 @@ this, the operation never fails. 
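The `LPUSH` + `LTRIM` capped-list idiom from the `LTRIM` hunk above can be modeled with a plain Python list (a sketch of the trimming logic only, not the server implementation):

```python
def lpush_and_trim(lst, element, maxlen=100):
    """Model of: LPUSH mylist element, then LTRIM mylist 0 maxlen-1."""
    lst.insert(0, element)  # LPUSH adds at the head of the list
    del lst[maxlen:]        # LTRIM 0 99 keeps only indexes 0..99
    return lst

log = []
for i in range(150):
    lpush_and_trim(log, i, maxlen=100)

# Only the 100 most recent elements survive, newest first.
print(len(log), log[0], log[-1])  # 100 149 50
```

This mirrors the documented use case of keeping only the latest N items of a log while older elements fall off the tail.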
SET key1 "Hello" SET key2 "World" MGET key1 key2 nonexisting
-
diff --git a/commands/migrate.md b/commands/migrate.md
index 3fe1a682a0..d7ff3f4a3f 100644
--- a/commands/migrate.md
+++ b/commands/migrate.md
@@ -18,7 +18,10 @@ does not need to be completed within the specified amount of milliseconds, but that the transfer should make progresses without blocking for more than the specified amount of milliseconds.
-`MIGRATE` needs to perform I/O operations and to honor the specified timeout. When there is an I/O error during the transfer or if the timeout is reached the operation is aborted and the special error
-`IOERR` returned. When this happens the following two cases are possible:
+`MIGRATE` needs to perform I/O operations and to honor the specified timeout.
+When there is an I/O error during the transfer or if the timeout is reached the
+operation is aborted and the special error `IOERR` is returned. When this
+happens the following two cases are possible:
* The key may be on both the instances. * The key may be only in the source instance.
diff --git a/commands/monitor.md b/commands/monitor.md
index d1e84c56f8..93ebd1af0a 100644
--- a/commands/monitor.md
+++ b/commands/monitor.md
@@ -1,7 +1,6 @@
-`MONITOR` is a debugging command that streams back every command
-processed by the Redis server. It can help in understanding what is
-happening to the database. This command can both be used via `redis-cli`
-and via `telnet`.
+`MONITOR` is a debugging command that streams back every command processed
+by the Redis server. It can help in understanding what is happening to the
+database. This command can both be used via `redis-cli` and via `telnet`.
The ability to see all the requests processed by the server is useful in order to spot bugs in an application both when using Redis as a database and as a
@@ -15,8 +14,7 @@ distributed caching system.

1339518099.363765 [0 127.0.0.1:60866] "del" "x" 1339518100.544926 [0 127.0.0.1:60866] "get" "x" -Use `SIGINT` (Ctrl-C) to stop a `MONITOR` stream running via -`redis-cli`. +Use `SIGINT` (Ctrl-C) to stop a `MONITOR` stream running via `redis-cli`. $ telnet localhost 6379 Trying 127.0.0.1... @@ -52,8 +50,7 @@ Benchmark result **without** `MONITOR` running: GET: 104275.29 requests per second INCR: 93283.58 requests per second -Benchmark result **with** `MONITOR` running (`redis-cli monitor > -/dev/null`): +Benchmark result **with** `MONITOR` running (`redis-cli monitor > /dev/null`): $ src/redis-benchmark -c 10 -n 100000 -q PING_INLINE: 58479.53 requests per second diff --git a/commands/move.md b/commands/move.md index 864aeaecf0..f63a045bdb 100644 --- a/commands/move.md +++ b/commands/move.md @@ -9,4 +9,3 @@ it does not exist in the source database, it does nothing. It is possible to use * `1` if `key` was moved. * `0` if `key` was not moved. - diff --git a/commands/mset.md b/commands/mset.md index a0e5cbaa86..4a45c03637 100644 --- a/commands/mset.md +++ b/commands/mset.md @@ -15,4 +15,3 @@ clients to see that some of the keys were updated while others are unchanged. MSET key1 "Hello" key2 "World" GET key1 GET key2 - diff --git a/commands/msetnx.md b/commands/msetnx.md index 7359438971..e9b656bcfb 100644 --- a/commands/msetnx.md +++ b/commands/msetnx.md @@ -21,4 +21,3 @@ clients to see that some of the keys were updated while others are unchanged. MSETNX key1 "Hello" key2 "there" MSETNX key2 "there" key3 "world" MGET key1 key2 key3 - diff --git a/commands/multi.md b/commands/multi.md index 0a5bf7138a..f6c69be303 100644 --- a/commands/multi.md +++ b/commands/multi.md @@ -1,8 +1,7 @@ -Marks the start of a [transaction][transactions] block. Subsequent commands will -be queued for atomic execution using -`EXEC`. +Marks the start of a [transaction][tt] block. Subsequent commands will be queued +for atomic execution using `EXEC`. 
-[transactions]: /topics/transactions +[tt]: /topics/transactions @return diff --git a/commands/object.md b/commands/object.md index 7b94fd4b15..35379b01bc 100644 --- a/commands/object.md +++ b/commands/object.md @@ -6,17 +6,30 @@ key eviction policies when using Redis as a Cache. The `OBJECT` command supports multiple sub commands: -* `OBJECT REFCOUNT ` returns the number of references of the value associated with the specified key. This command is mainly useful for debugging. -* `OBJECT ENCODING ` returns the kind of internal representation used in order to store the value associated with a key. -* `OBJECT IDLETIME ` returns the number of seconds since the object stored at the specified key is idle (not requested by read or write operations). While the value is returned in seconds the actual resolution of this timer is 10 seconds, but may vary in future implementations. +* `OBJECT REFCOUNT ` returns the number of references of the value + associated with the specified key. This command is mainly useful for + debugging. +* `OBJECT ENCODING ` returns the kind of internal representation used in + order to store the value associated with a key. +* `OBJECT IDLETIME ` returns the number of seconds since the object stored + at the specified key is idle (not requested by read or write operations). + While the value is returned in seconds the actual resolution of this timer is + 10 seconds, but may vary in future implementations. Objects can be encoded in different ways: -* Strings can be encoded as `raw` (normal string encoding) or `int` (strings representing integers in a 64 bit signed interval are encoded in this way in order to save space). -* Lists can be encoded as `ziplist` or `linkedlist`. The `ziplist` is the special representation that is used to save space for small lists. -* Sets can be encoded as `intset` or `hashtable`. The `intset` is a special encoding used for small sets composed solely of integers. -* Hashes can be encoded as `zipmap` or `hashtable`. 
The `zipmap` is a special encoding used for small hashes. -* Sorted Sets can be encoded as `ziplist` or `skiplist` format. As for the List type small sorted sets can be specially encoded using `ziplist`, while the `skiplist` encoding is the one that works with sorted sets of any size. +* Strings can be encoded as `raw` (normal string encoding) or `int` (strings + representing integers in a 64 bit signed interval are encoded in this way in + order to save space). +* Lists can be encoded as `ziplist` or `linkedlist`. The `ziplist` is the + special representation that is used to save space for small lists. +* Sets can be encoded as `intset` or `hashtable`. The `intset` is a special + encoding used for small sets composed solely of integers. +* Hashes can be encoded as `zipmap` or `hashtable`. The `zipmap` is a special + encoding used for small hashes. +* Sorted Sets can be encoded as `ziplist` or `skiplist` format. As for the List + type small sorted sets can be specially encoded using `ziplist`, while the + `skiplist` encoding is the one that works with sorted sets of any size. All the specially encoded types are automatically converted to the general type once you perform an operation that makes it no possible for Redis to retain the @@ -55,4 +68,3 @@ longer able to use the space saving encoding. "1000bar" redis> object encoding foo "raw" - diff --git a/commands/persist.md b/commands/persist.md index 5c8e63e32a..f0c4ccd14c 100644 --- a/commands/persist.md +++ b/commands/persist.md @@ -1,5 +1,5 @@ -Remove the existing timeout on `key`, turning the key from _volatile_ (a key -with an expire set) to _persistent_ (a key that will never expire as no timeout +Remove the existing timeout on `key`, turning the key from *volatile* (a key +with an expire set) to *persistent* (a key that will never expire as no timeout is associated). @return @@ -17,4 +17,3 @@ is associated). 
TTL mykey PERSIST mykey TTL mykey - diff --git a/commands/pexpireat.md b/commands/pexpireat.md index 9ffd8bb3fb..febff0daca 100644 --- a/commands/pexpireat.md +++ b/commands/pexpireat.md @@ -2,8 +2,8 @@ O(1) - -`PEXPIREAT` has the same effect and semantic as `EXPIREAT`, but the Unix time at which the key will expire is specified in milliseconds instead of seconds. +`PEXPIREAT` has the same effect and semantic as `EXPIREAT`, but the Unix time at +which the key will expire is specified in milliseconds instead of seconds. @return diff --git a/commands/ping.md b/commands/ping.md index a04eb4408f..f80ca6a931 100644 --- a/commands/ping.md +++ b/commands/ping.md @@ -9,4 +9,3 @@ alive, or to measure latency. @cli PING - diff --git a/commands/psetex.md b/commands/psetex.md index add3901b3a..fd0f0e731c 100644 --- a/commands/psetex.md +++ b/commands/psetex.md @@ -2,7 +2,8 @@ O(1) -`PSETEX` works exactly like `SETEX` with the sole difference that the expire time is specified in milliseconds instead of seconds. +`PSETEX` works exactly like `SETEX` with the sole difference that the expire +time is specified in milliseconds instead of seconds. @examples diff --git a/commands/pttl.md b/commands/pttl.md index dde4e65673..7d3f6b4c21 100644 --- a/commands/pttl.md +++ b/commands/pttl.md @@ -2,7 +2,6 @@ O(1) - Like `TTL` this command returns the remaining time to live of a key that has an expire set, with the sole difference that `TTL` returns the amount of remaining time in seconds while `PTTL` returns it in milliseconds. @@ -18,4 +17,3 @@ or does not have a timeout. SET mykey "Hello" EXPIRE mykey 1 PTTL mykey - diff --git a/commands/quit.md b/commands/quit.md index 69cf085214..f36b86a1ce 100644 --- a/commands/quit.md +++ b/commands/quit.md @@ -4,4 +4,3 @@ pending replies have been written to the client. @return @status-reply: always OK. 
- diff --git a/commands/randomkey.md b/commands/randomkey.md index 9517bfaad5..94a9b800a2 100644 --- a/commands/randomkey.md +++ b/commands/randomkey.md @@ -3,4 +3,3 @@ Return a random key from the currently selected database. @return @bulk-reply: the random key, or `nil` when the database is empty. - diff --git a/commands/rename.md b/commands/rename.md index 5e1d5fc191..1dd586018a 100644 --- a/commands/rename.md +++ b/commands/rename.md @@ -12,4 +12,3 @@ is overwritten. SET mykey "Hello" RENAME mykey myotherkey GET myotherkey - diff --git a/commands/renamenx.md b/commands/renamenx.md index 5f95e76c45..1128bf6757 100644 --- a/commands/renamenx.md +++ b/commands/renamenx.md @@ -15,4 +15,3 @@ under the same conditions as `RENAME`. SET myotherkey "World" RENAMENX mykey myotherkey GET myotherkey - diff --git a/commands/restore.md b/commands/restore.md index 55a1e5027b..9b04f820fe 100644 --- a/commands/restore.md +++ b/commands/restore.md @@ -4,7 +4,8 @@ provided serialized value (obtained via `DUMP`). If `ttl` is 0 the key is created without any expire, otherwise the specified expire time (in milliseconds) is set. -`RESTORE` checks the RDB version and data checksum. If they don't match an error is returned. +`RESTORE` checks the RDB version and data checksum. If they don't match an error +is returned. @return diff --git a/commands/rpop.md b/commands/rpop.md index ec65f946dc..d28fcc7e24 100644 --- a/commands/rpop.md +++ b/commands/rpop.md @@ -12,4 +12,3 @@ Removes and returns the last element of the list stored at `key`. RPUSH mylist "three" RPOP mylist LRANGE mylist 0 -1 - diff --git a/commands/rpoplpush.md b/commands/rpoplpush.md index 98602df295..f48fb4a45e 100644 --- a/commands/rpoplpush.md +++ b/commands/rpoplpush.md @@ -29,20 +29,18 @@ element of the list, so it can be considered as a list rotation command. Redis is often used as a messaging server to implement processing of background jobs or other kinds of messaging tasks. 
A simple form of queue is often obtained -pushing values into a list in the producer side, and waiting for this values in -the consumer side using `RPOP` -(using polling), or `BRPOP` if the client is better served -by a blocking operation. +pushing values into a list in the producer side, and waiting for this values +in the consumer side using `RPOP` (using polling), or `BRPOP` if the client is +better served by a blocking operation. However in this context the obtained queue is not *reliable* as messages can be lost, for example in the case there is a network problem or if the consumer crashes just after the message is received but it is still to process. -`RPOPLPUSH` (or `BRPOPLPUSH` for the blocking variant) -offers a way to avoid this problem: the consumer fetches the message and -at the same time pushes it into a *processing* list. It will use the -`LREM` command in order to remove the message from the -*processing* list once the message has been processed. +`RPOPLPUSH` (or `BRPOPLPUSH` for the blocking variant) offers a way to avoid +this problem: the consumer fetches the message and at the same time pushes it +into a *processing* list. It will use the `LREM` command in order to remove the +message from the *processing* list once the message has been processed. An additional client may monitor the *processing* list for items that remain there for too much time, and will push those timed out items into the queue @@ -52,12 +50,13 @@ again if needed. Using `RPOPLPUSH` with the same source and destination key, a client can visit all the elements of an N-elements list, one after the other, in O(N) without -transferring the full list from the server to the client using a single -`LRANGE` operation. +transferring the full list from the server to the client using a single `LRANGE` +operation. 
-The above pattern works even if the following two conditions:
-* There are multiple clients rotating the list: they'll fetch different elements, until all the elements of the list are visited, and the process restarts.
-* Even if other clients are actively pushing new items at the end of the list.
+The above pattern works even if the following two conditions:
+* There are multiple clients rotating the list: they'll fetch different
+  elements, until all the elements of the list are visited, and the process restarts.
+* Even if other clients are actively pushing new items at the end of the list.
The above makes it very simple to implement a system where a set of items must be processed by N workers continuously as fast as possible. An example is a
diff --git a/commands/rpush.md b/commands/rpush.md
index fd9123ad2d..a655598e0a 100644
--- a/commands/rpush.md
+++ b/commands/rpush.md
@@ -15,7 +15,8 @@ third element. @history
-* `>= 2.4`: Accepts multiple `value` arguments. In Redis versions older than 2.4 it was possible to push a single value per command.
+* `>= 2.4`: Accepts multiple `value` arguments. In Redis versions older than 2.4
+  it was possible to push a single value per command.
@examples
@@ -23,4 +24,3 @@ third element. RPUSH mylist "hello" RPUSH mylist "world" LRANGE mylist 0 -1
-
diff --git a/commands/rpushx.md b/commands/rpushx.md
index 3095141e3b..a7f8d04a73 100644
--- a/commands/rpushx.md
+++ b/commands/rpushx.md
@@ -14,4 +14,3 @@ when `key` does not yet exist. RPUSHX myotherlist "World" LRANGE mylist 0 -1 LRANGE myotherlist 0 -1
-
diff --git a/commands/sadd.md b/commands/sadd.md
index b698b0ecce..e6ad0cf3d4 100644
--- a/commands/sadd.md
+++ b/commands/sadd.md
@@ -11,7 +11,8 @@ all the elements already present into the set. @history
-* `>= 2.4`: Accepts multiple `member` arguments. Redis versions before 2.4 are only able to add a single member per call.
+* `>= 2.4`: Accepts multiple `member` arguments.
Redis versions before 2.4 are + only able to add a single member per call. @examples @@ -20,4 +21,3 @@ all the elements already present into the set. SADD myset "World" SADD myset "World" SMEMBERS myset - diff --git a/commands/save.md b/commands/save.md index ba9a7878d2..961a6a2d5d 100644 --- a/commands/save.md +++ b/commands/save.md @@ -8,10 +8,9 @@ of issues preventing Redis to create the background saving child (for instance errors in the fork(2) system call), the `SAVE` command can be a good last resort to perform the dump of the latest dataset. -Please refer to the [persistence documentation][persistence] for detailed -information. +Please refer to the [persistence documentation][tp] for detailed information. -[persistence]: /topics/persistence +[tp]: /topics/persistence @return diff --git a/commands/scard.md b/commands/scard.md index e54ca5be14..59f0926a9f 100644 --- a/commands/scard.md +++ b/commands/scard.md @@ -2,8 +2,8 @@ Returns the set cardinality (number of elements) of the set stored at `key`. @return -@integer-reply: the cardinality (number of elements) of the set, or `0` if -`key` does not exist. +@integer-reply: the cardinality (number of elements) of the set, or `0` if `key` +does not exist. @examples @@ -11,4 +11,3 @@ Returns the set cardinality (number of elements) of the set stored at `key`. SADD myset "Hello" SADD myset "World" SCARD myset - diff --git a/commands/sdiff.md b/commands/sdiff.md index 272097b437..e1ae603dd4 100644 --- a/commands/sdiff.md +++ b/commands/sdiff.md @@ -24,4 +24,3 @@ Keys that do not exist are considered to be empty sets. SADD key2 "d" SADD key2 "e" SDIFF key1 key2 - diff --git a/commands/select.md b/commands/select.md index 0d78628da7..29825ad3d7 100644 --- a/commands/select.md +++ b/commands/select.md @@ -4,4 +4,3 @@ connections always use DB 0. 
@return @status-reply - diff --git a/commands/set.md b/commands/set.md index f5be816656..34d9a8c284 100644 --- a/commands/set.md +++ b/commands/set.md @@ -10,4 +10,3 @@ overwritten, regardless of its type. @cli SET mykey "Hello" GET mykey - diff --git a/commands/setbit.md b/commands/setbit.md index 3e6d801774..0c511be7b2 100644 --- a/commands/setbit.md +++ b/commands/setbit.md @@ -1,25 +1,24 @@ -Sets or clears the bit at _offset_ in the string value stored at _key_. +Sets or clears the bit at *offset* in the string value stored at *key*. -The bit is either set or cleared depending on _value_, which can be either 0 or -1. When _key_ does not exist, a new string value is created. The string is -grown to make sure it can hold a bit at _offset_. The _offset_ argument is -required to be greater than or equal to 0, and smaller than 2^32 (this -limits bitmaps to 512MB). When the string at _key_ is grown, added -bits are set to 0. +The bit is either set or cleared depending on *value*, which can be either 0 or +1. When *key* does not exist, a new string value is created. The string is grown +to make sure it can hold a bit at *offset*. The *offset* argument is required +to be greater than or equal to 0, and smaller than 2^32 (this limits bitmaps to +512MB). When the string at *key* is grown, added bits are set to 0. -**Warning**: When setting the last possible bit (_offset_ equal to 2^32 -1) and -the string value stored at _key_ does not yet hold a string value, or holds a -small string value, Redis needs to allocate all intermediate memory which can -block the server for some time. On a 2010 MacBook Pro, setting bit number +**Warning**: When setting the last possible bit (*offset* equal to 2^32 -1) and +the string value stored at *key* does not yet hold a string value, or holds +a small string value, Redis needs to allocate all intermediate memory which +can block the server for some time. 
On a 2010 MacBook Pro, setting bit number 2^32 -1 (512MB allocation) takes ~300ms, setting bit number 2^30 -1 (128MB allocation) takes ~80ms, setting bit number 2^28 -1 (32MB allocation) takes -~30ms and setting bit number 2^26 -1 (8MB allocation) takes ~8ms. Note that -once this first allocation is done, subsequent calls to `SETBIT` for the same -_key_ will not have the allocation overhead. +~30ms and setting bit number 2^26 -1 (8MB allocation) takes ~8ms. Note that once +this first allocation is done, subsequent calls to `SETBIT` for the same *key* +will not have the allocation overhead. @return -@integer-reply: the original bit value stored at _offset_. +@integer-reply: the original bit value stored at *offset*. @examples @@ -27,4 +26,3 @@ _key_ will not have the allocation overhead. SETBIT mykey 7 1 SETBIT mykey 7 0 GET mykey - diff --git a/commands/setex.md b/commands/setex.md index 465a2bc28a..d29221df96 100644 --- a/commands/setex.md +++ b/commands/setex.md @@ -6,7 +6,7 @@ commands: EXPIRE mykey seconds `SETEX` is atomic, and can be reproduced by using the previous two commands -inside an `MULTI`/`EXEC` block. It is provided as a faster alternative to the +inside an `MULTI` / `EXEC` block. It is provided as a faster alternative to the given sequence of operations, because this operation is very common when Redis is used as a cache. @@ -22,4 +22,3 @@ An error is returned when `seconds` is invalid. SETEX mykey 10 "Hello" TTL mykey GET mykey - diff --git a/commands/setnx.md b/commands/setnx.md index b437f68c87..37b4b64484 100644 --- a/commands/setnx.md +++ b/commands/setnx.md @@ -1,6 +1,6 @@ Set `key` to hold string `value` if `key` does not exist. In that case, it is equal to `SET`. When `key` already holds a value, no operation is performed. -`SETNX` is short for "**SET** if **N**ot e**X**ists". +`SETNX` is short for "**SET** if **N** ot e **X** ists". @return @@ -18,8 +18,8 @@ equal to `SET`. When `key` already holds a value, no operation is performed. 
## Design pattern: Locking with `!SETNX` -`SETNX` can be used as a locking primitive. For example, to acquire -the lock of the key `foo`, the client could try the following: +`SETNX` can be used as a locking primitive. For example, to acquire the lock of +the key `foo`, the client could try the following: SETNX lock.foo @@ -63,16 +63,17 @@ Let's see how C4, our sane client, uses the good algorithm: GETSET lock.foo -* Because of the `GETSET` semantic, C4 can check if the old value stored - at `key` is still an expired timestamp. If it is, the lock was acquired. -* If another client, for instance C5, was faster than C4 and acquired - the lock with the `GETSET` operation, the C4 `GETSET` operation will return a non +* Because of the `GETSET` semantic, C4 can check if the old value stored at + `key` is still an expired timestamp. If it is, the lock was acquired. + +* If another client, for instance C5, was faster than C4 and acquired the lock + with the `GETSET` operation, the C4 `GETSET` operation will return a non expired timestamp. C4 will simply restart from the first step. Note that even if C4 set the key a bit a few seconds in the future this is not a problem. -**Important note**: In order to make this locking algorithm more robust, a client -holding a lock should always check the timeout didn't expire before unlocking -the key with `DEL` because client failures can be complex, not just crashing -but also blocking a lot of time against some operations and trying to issue -`DEL` after a lot of time (when the LOCK is already held by another client). - +**Important note**: In order to make this locking algorithm more robust, a +client holding a lock should always check the timeout didn't expire before +unlocking the key with `DEL` because client failures can be complex, not just +crashing but also blocking a lot of time against some operations and trying +to issue `DEL` after a lot of time (when the LOCK is already held by another +client). 
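The `SETNX`/`GETSET` locking algorithm described in the hunk above can be sketched in plain Python, with a dict standing in for the Redis keyspace (a sketch of the algorithm only; a real deployment must run these as atomic commands on an actual Redis server, and the lock value is the current Unix time plus the lock timeout plus one, as in the doc):

```python
import time

store = {}  # stands in for the Redis keyspace in this sketch

def setnx(key, value):
    """SET if Not eXists: True when the key was actually set."""
    if key in store:
        return False
    store[key] = value
    return True

def getset(key, value):
    """Set `key` and return the previous value (None if it was absent)."""
    old = store.get(key)
    store[key] = value
    return old

def acquire_lock(key, timeout=10, now=None):
    """One attempt at the lock; returns True if this client now holds it."""
    now = now if now is not None else time.time()
    expiry = now + timeout + 1       # timestamp stored in the lock key
    if setnx(key, expiry):
        return True                  # the lock was free and is now ours
    if float(store[key]) < now:      # the current holder's timestamp expired
        old = getset(key, expiry)    # try to take the lock over
        # We won only if the value we displaced was still the expired one;
        # otherwise a faster client beat us and we must retry.
        return old is not None and float(old) < now
    return False                     # lock is validly held by someone else
```

For example, a free lock is acquired, a second attempt while it is live is refused, and an expired lock is taken over through the `GETSET` check, mirroring the C4/C5 scenario in the text.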
diff --git a/commands/setrange.md b/commands/setrange.md index 7a6f476503..8a7204cad2 100644 --- a/commands/setrange.md +++ b/commands/setrange.md @@ -1,23 +1,23 @@ -Overwrites part of the string stored at _key_, starting at the specified offset, -for the entire length of _value_. If the offset is larger than the current -length of the string at _key_, the string is padded with zero-bytes to make -_offset_ fit. Non-existing keys are considered as empty strings, so this command -will make sure it holds a string large enough to be able to set _value_ at -_offset_. +Overwrites part of the string stored at *key*, starting at the specified offset, +for the entire length of *value*. If the offset is larger than the current +length of the string at *key*, the string is padded with zero-bytes to make +*offset* fit. Non-existing keys are considered as empty strings, so this command +will make sure it holds a string large enough to be able to set *value* at +*offset*. Note that the maximum offset that you can set is 2^29 -1 (536870911), as Redis Strings are limited to 512 megabytes. If you need to grow beyond this size, you can use multiple keys. **Warning**: When setting the last possible byte and the string value stored at -_key_ does not yet hold a string value, or holds a small string value, Redis +*key* does not yet hold a string value, or holds a small string value, Redis needs to allocate all intermediate memory which can block the server for some -time. On a 2010 MacBook Pro, setting byte number 536870911 (512MB allocation) +time. On a 2010 MacBook Pro, setting byte number 536870911 (512MB allocation) takes ~300ms, setting byte number 134217728 (128MB allocation) takes ~80ms, -setting bit number 33554432 (32MB allocation) takes ~30ms and setting bit -number 8388608 (8MB allocation) takes ~8ms. Note that once this first -allocation is done, subsequent calls to `SETRANGE` for the same _key_ will not -have the allocation overhead. 
+setting bit number 33554432 (32MB allocation) takes ~30ms and setting bit number +8388608 (8MB allocation) takes ~8ms. Note that once this first allocation is +done, subsequent calls to `SETRANGE` for the same *key* will not have the +allocation overhead. ## Patterns @@ -43,4 +43,3 @@ Example of zero padding: @cli SETRANGE key2 6 "Redis" GET key2 - diff --git a/commands/shutdown.md b/commands/shutdown.md index 860d8899c7..dd7c239de2 100644 --- a/commands/shutdown.md +++ b/commands/shutdown.md @@ -10,18 +10,21 @@ without the lost of any data. This is not guaranteed if the client uses simply `SAVE` and then `QUIT` because other clients may alter the DB data between the two commands. -Note: A Redis instance that is configured for not persisting on disk -(no AOF configured, nor "save" directive) will not dump the RDB file on -`SHUTDOWN`, as usually you don't want Redis instances used only for caching -to block on when shutting down. +Note: A Redis instance that is configured for not persisting on disk (no AOF +configured, nor "save" directive) will not dump the RDB file on `SHUTDOWN`, as +usually you don't want Redis instances used only for caching to block on when +shutting down. ## SAVE and NOSAVE modifiers It is possible to specify an optional modifier to alter the behavior of the command. Specifically: -* **SHUTDOWN SAVE** will force a DB saving operation even if no save points are configured. -* **SHUTDOWN NOSAVE** will prevent a DB saving operation even if one or more save points are configured. (You can think at this variant as an hypothetical **ABORT** command that just stops the server). +* **SHUTDOWN SAVE** will force a DB saving operation even if no save points are + configured. +* **SHUTDOWN NOSAVE** will prevent a DB saving operation even if one or more + save points are configured. (You can think at this variant as an hypothetical + **ABORT** command that just stops the server). 
@return diff --git a/commands/sinter.md b/commands/sinter.md index e6fdf56b22..70c848aba6 100644 --- a/commands/sinter.md +++ b/commands/sinter.md @@ -26,4 +26,3 @@ an empty set always results in an empty set). SADD key2 "d" SADD key2 "e" SINTER key1 key2 - diff --git a/commands/sismember.md b/commands/sismember.md index bfe474b58c..109b7bfbcb 100644 --- a/commands/sismember.md +++ b/commands/sismember.md @@ -13,4 +13,3 @@ Returns if `member` is a member of the set stored at `key`. SADD myset "one" SISMEMBER myset "one" SISMEMBER myset "two" - diff --git a/commands/slowlog.md b/commands/slowlog.md index 19239b722e..5c996d4886 100644 --- a/commands/slowlog.md +++ b/commands/slowlog.md @@ -8,15 +8,13 @@ with the client, sending the reply and so forth, but just the time needed to actually execute the command (this is the only stage of command execution where the thread is blocked and can not serve other requests in the meantime). -You can configure the slow log with two parameters: -*slowlog-log-slower-than* tells Redis -what is the execution time, in microseconds, to exceed in order for the -command to get logged. Note that a negative number disables the slow log, -while a value of zero forces the logging of every command. -*slowlog-max-len* is the length of the slow log. The minimum value is zero. -When a new command is logged and the slow log is already at its -maximum length, the oldest one is removed from the queue of logged commands -in order to make space. +You can configure the slow log with two parameters: *slowlog-log-slower-than* +tells Redis what is the execution time, in microseconds, to exceed in order for +the command to get logged. Note that a negative number disables the slow log, +while a value of zero forces the logging of every command. *slowlog-max-len* +is the length of the slow log. The minimum value is zero. 
When a new command +is logged and the slow log is already at its maximum length, the oldest one is +removed from the queue of logged commands in order to make space. The configuration can be done by editing `redis.conf` or while the server is running using the `CONFIG GET` and `CONFIG SET` commands. @@ -26,8 +24,7 @@ running using the `CONFIG GET` and `CONFIG SET` commands. The slow log is accumulated in memory, so no file is written with information about the slow command executions. This makes the slow log remarkably fast at the point that you can enable the logging of all the commands (setting the -*slowlog-log-slower-than* config parameter to zero) with minor performance -hit. +*slowlog-log-slower-than* config parameter to zero) with minor performance hit. To read the slow log the **SLOWLOG GET** command is used, that returns every entry in the slow log. It is possible to return only the N most recent entries @@ -52,6 +49,7 @@ implemented in redis-cli (deeply nested multi bulk replies). 3) "100" Every entry is composed of four fields: + * A unique progressive identifier for every slow log entry. * The unix timestamp at which the logged command was processed. * The amount of time needed for its execution, in microseconds. diff --git a/commands/smembers.md b/commands/smembers.md index a5f74eaa73..f278c0eed0 100644 --- a/commands/smembers.md +++ b/commands/smembers.md @@ -12,4 +12,3 @@ This has the same effect as running `SINTER` with one argument `key`. SADD myset "Hello" SADD myset "World" SMEMBERS myset - diff --git a/commands/smove.md b/commands/smove.md index bfbcfb033a..7a4b689dd4 100644 --- a/commands/smove.md +++ b/commands/smove.md @@ -25,4 +25,3 @@ An error is returned if `source` or `destination` does not hold a set value. 
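The interaction between the two slow log parameters and the four entry fields described in the slowlog.md hunk above can be sketched with a simple ring buffer. This is a simplified in-memory model of the documented behavior, not the server's actual implementation:

```python
from collections import deque

class SlowLog:
    """Minimal model of the slow log: a capped queue of
    (id, timestamp, duration, command) entries."""

    def __init__(self, log_slower_than_us, max_len):
        self.threshold = log_slower_than_us
        self.entries = deque(maxlen=max_len)  # a full queue drops the oldest entry
        self.next_id = 0                      # unique progressive identifier

    def record(self, command, duration_us, timestamp):
        if self.threshold < 0:
            return                            # negative threshold disables the log
        if duration_us >= self.threshold:     # zero therefore logs every command
            # Same four fields as a real slow log entry, newest first.
            self.entries.appendleft((self.next_id, timestamp, duration_us, command))
            self.next_id += 1
```

With `max_len=2`, logging three slow commands leaves only the two most recent entries, mirroring the eviction rule described above.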
SMOVE myset myotherset "two" SMEMBERS myset SMEMBERS myotherset - diff --git a/commands/sort.md b/commands/sort.md index 795d081c74..6f53952dbe 100644 --- a/commands/sort.md +++ b/commands/sort.md @@ -1,14 +1,13 @@ -Returns or stores the elements contained in the -[list][lists], [set][sets] or [sorted set][sorted-sets] - at `key`. By default, sorting is numeric -and elements are compared by their value interpreted as double precision -floating point number. This is `SORT` in its simplest form: +Returns or stores the elements contained in the [list][tdtl], [set][tdts] or +[sorted set][tdtss] at `key`. By default, sorting is numeric and elements are +compared by their value interpreted as double precision floating point number. +This is `SORT` in its simplest form: - SORT mylist +[tdtl]: /topics/data-types#lists +[tdts]: /topics/data-types#set +[tdtss]: /topics/data-types#sorted-sets -[lists]: /topics/data-types#lists -[sets]: /topics/data-types#set -[sorted-sets]: /topics/data-types#sorted-sets + SORT mylist Assuming `mylist` is a list of numbers, this command will return the same list with the elements sorted from small to large. In order to sort the numbers from @@ -41,9 +40,9 @@ first 5 elements, lexicographically sorted in descending order: Sometimes you want to sort elements using external keys as weights to compare instead of comparing the actual elements in the list, set or sorted set. Let's -say the list `mylist` contains the elements `1`, `2` and `3` representing unique -IDs of objects stored in `object_1`, `object_2` and `object_3`. When these -objects have associated weights stored in `weight_1`, `weight_2` and +say the list `mylist` contains the elements `1`, `2` and `3` representing +unique IDs of objects stored in `object_1`, `object_2` and `object_3`. 
When +these objects have associated weights stored in `weight_1`, `weight_2` and `weight_3`, `SORT` can be instructed to use these weights to sort `mylist` with the following statement: @@ -57,8 +56,8 @@ element in the list (`1`, `2` and `3` in this example). ## Skip sorting the elements The `!BY` option can also take a non-existent key, which causes `SORT` to skip -the sorting operation. This is useful if you want to retrieve external keys -(see the `!GET` option below) without the overhead of sorting. +the sorting operation. This is useful if you want to retrieve external keys (see +the `!GET` option below) without the overhead of sorting. SORT mylist BY nosort @@ -90,12 +89,12 @@ An interesting pattern using `SORT ... STORE` consists in associating an `EXPIRE` timeout to the resulting key so that in applications where the result of a `SORT` operation can be cached for some time. Other clients will use the cached list instead of calling `SORT` for every request. When the key will -timeout, an updated version of the cache can be created by calling `SORT ... STORE` again. +timeout, an updated version of the cache can be created by calling `SORT ... +STORE` again. Note that for correctly implementing this pattern it is important to avoid multiple clients rebuilding the cache at the same time. Some kind of locking is -needed here -(for instance using `SETNX`). +needed here (for instance using `SETNX`). ## Using hashes in `!BY` and `!GET` @@ -111,4 +110,3 @@ is accessed to retrieve the specified hash field. @return @multi-bulk-reply: list of sorted elements. - diff --git a/commands/spop.md b/commands/spop.md index c50e332c8f..0f9b230430 100644 --- a/commands/spop.md +++ b/commands/spop.md @@ -15,4 +15,3 @@ set but does not remove it. 
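The `BY weight_*` pattern from the sort.md hunk above boils down to a key substitution before comparing. A hedged in-memory sketch, with hypothetical data mirroring the example's `mylist` and `weight_<id>` keys:

```python
# Hypothetical keyspace: mylist holds object IDs, weight_<id> holds weights.
store = {
    "mylist": ["1", "2", "3"],
    "weight_1": "9.5",
    "weight_2": "1.0",
    "weight_3": "4.2",
}

def sort_by(list_key, by_pattern):
    # SORT substitutes each element for the first "*" in the pattern and
    # compares the values found at the resulting keys.
    return sorted(
        store[list_key],
        key=lambda elem: float(store[by_pattern.replace("*", elem, 1)]),
    )
```

Under this model, `sort_by("mylist", "weight_*")` orders the IDs by their external weights rather than by their own values.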
SADD myset "three" SPOP myset SMEMBERS myset - diff --git a/commands/srandmember.md b/commands/srandmember.md index 9c4ab5b643..196c5a790a 100644 --- a/commands/srandmember.md +++ b/commands/srandmember.md @@ -15,4 +15,3 @@ element without altering the original set in any way. SADD myset "two" SADD myset "three" SRANDMEMBER myset - diff --git a/commands/srem.md b/commands/srem.md index c33da4fbfc..759c8b853c 100644 --- a/commands/srem.md +++ b/commands/srem.md @@ -11,7 +11,8 @@ including non existing members. @history -* `>= 2.4`: Accepts multiple `member` arguments. Redis versions older than 2.4 can only remove a set member per call. +* `>= 2.4`: Accepts multiple `member` arguments. Redis versions older than 2.4 + can only remove a set member per call. @examples @@ -22,4 +23,3 @@ including non existing members. SREM myset "one" SREM myset "four" SMEMBERS myset - diff --git a/commands/strlen.md b/commands/strlen.md index b785fe9d1d..6700b71d0d 100644 --- a/commands/strlen.md +++ b/commands/strlen.md @@ -12,4 +12,3 @@ exist. SET mykey "Hello world" STRLEN mykey STRLEN nonexisting - diff --git a/commands/subscribe.md b/commands/subscribe.md index 6709664d69..eb05702462 100644 --- a/commands/subscribe.md +++ b/commands/subscribe.md @@ -1,5 +1,5 @@ Subscribes the client to the specified channels. Once the client enters the subscribed state it is not supposed to issue any -other commands, except for additional `SUBSCRIBE`, `PSUBSCRIBE`, -`UNSUBSCRIBE` and `PUNSUBSCRIBE` commands. +other commands, except for additional `SUBSCRIBE`, `PSUBSCRIBE`, `UNSUBSCRIBE` +and `PUNSUBSCRIBE` commands. diff --git a/commands/sunion.md b/commands/sunion.md index 98c30cd5fb..2cdaaadd84 100644 --- a/commands/sunion.md +++ b/commands/sunion.md @@ -23,4 +23,3 @@ Keys that do not exist are considered to be empty sets. 
SADD key2 "d" SADD key2 "e" SUNION key1 key2 - diff --git a/commands/time.md b/commands/time.md index a7c5ce454d..422f1fdf6e 100644 --- a/commands/time.md +++ b/commands/time.md @@ -2,7 +2,6 @@ O(1) - The `TIME` command returns the current server time as a two items lists: a Unix timestamp and the amount of microseconds already elapsed in the current second. Basically the interface is very similar to the one of the `gettimeofday` system diff --git a/commands/ttl.md b/commands/ttl.md index 409063e98b..3c31fcec5f 100644 --- a/commands/ttl.md +++ b/commands/ttl.md @@ -13,4 +13,3 @@ have a timeout. SET mykey "Hello" EXPIRE mykey 10 TTL mykey - diff --git a/commands/type.md b/commands/type.md index 4c68019a76..9a0225f7e8 100644 --- a/commands/type.md +++ b/commands/type.md @@ -15,4 +15,3 @@ different types that can be returned are: `string`, `list`, `set`, `zset` and TYPE key1 TYPE key2 TYPE key3 - diff --git a/commands/unwatch.md b/commands/unwatch.md index 853766dfcb..1a0b20e600 100644 --- a/commands/unwatch.md +++ b/commands/unwatch.md @@ -1,6 +1,6 @@ -Flushes all the previously watched keys for a [transaction][transactions]. +Flushes all the previously watched keys for a [transaction][tt]. -[transactions]: /topics/transactions +[tt]: /topics/transactions If you call `EXEC` or `DISCARD`, there's no need to manually call `UNWATCH`. diff --git a/commands/watch.md b/commands/watch.md index 1e3e01e67a..082ff1b348 100644 --- a/commands/watch.md +++ b/commands/watch.md @@ -1,7 +1,7 @@ Marks the given keys to be watched for conditional execution of a -[transaction][transactions]. +[transaction][tt]. -[transactions]: /topics/transactions +[tt]: /topics/transactions @return diff --git a/commands/zadd.md b/commands/zadd.md index 0f694783e1..65664f4901 100644 --- a/commands/zadd.md +++ b/commands/zadd.md @@ -10,19 +10,21 @@ The score values should be the string representation of a numeric value, and accepts double precision floating point numbers. 
For an introduction to sorted sets, see the data types page on [sorted -sets][sorted-sets]. +sets][tdtss]. -[sorted-sets]: /topics/data-types#sorted-sets +[tdtss]: /topics/data-types#sorted-sets @return @integer-reply, specifically: -* The number of elements added to the sorted sets, not including elements already existing for which the score was updated. +* The number of elements added to the sorted sets, not including elements + already existing for which the score was updated. @history -* `>= 2.4`: Accepts multiple elements. In Redis versions older than 2.4 it was possible to add or update a single member per call. +* `>= 2.4`: Accepts multiple elements. In Redis versions older than 2.4 it was + possible to add or update a single member per call. @examples @@ -32,4 +34,3 @@ sets][sorted-sets]. ZADD myzset 2 "two" ZADD myzset 3 "two" ZRANGE myzset 0 -1 WITHSCORES - diff --git a/commands/zcard.md b/commands/zcard.md index 89d81c21a6..01331eccd2 100644 --- a/commands/zcard.md +++ b/commands/zcard.md @@ -12,4 +12,3 @@ if `key` does not exist. ZADD myzset 1 "one" ZADD myzset 2 "two" ZCARD myzset - diff --git a/commands/zcount.md b/commands/zcount.md index b554a6b630..ee56a53137 100644 --- a/commands/zcount.md +++ b/commands/zcount.md @@ -16,4 +16,3 @@ The `min` and `max` arguments have the same semantic as described for ZADD myzset 3 "three" ZCOUNT myzset -inf +inf ZCOUNT myzset (1 3 - diff --git a/commands/zincrby.md b/commands/zincrby.md index b7c1ce16de..1623b6b6dd 100644 --- a/commands/zincrby.md +++ b/commands/zincrby.md @@ -1,6 +1,6 @@ Increments the score of `member` in the sorted set stored at `key` by -`increment`. If `member` does not exist in the sorted set, it is added with -`increment` as its score (as if its previous score was `0.0`). If `key` does +`increment`. If `member` does not exist in the sorted set, it is added with +`increment` as its score (as if its previous score was `0.0`). 
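The return-value rule in the zadd.md hunk above — count only newly added members, not score updates — can be illustrated over a plain dict standing in for a sorted set (a sketch of the semantics, not the server's data structure):

```python
def zadd(zset, *score_member_pairs):
    """Add score/member pairs to a dict-backed 'sorted set' and return the
    number of members that were new, excluding mere score updates."""
    added = 0
    for score, member in zip(score_member_pairs[::2], score_member_pairs[1::2]):
        if member not in zset:
            added += 1
        zset[member] = float(score)       # update the score either way
    return added
```

Re-adding `"two"` with a different score updates the score but contributes nothing to the return value, exactly as documented.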
If `key` does not exist, a new sorted set with the specified `member` as its sole member is created. @@ -22,4 +22,3 @@ number), represented as string. ZADD myzset 2 "two" ZINCRBY myzset 2 "one" ZRANGE myzset 0 -1 WITHSCORES - diff --git a/commands/zinterstore.md b/commands/zinterstore.md index 87cb375dd1..dd7e71dfe4 100644 --- a/commands/zinterstore.md +++ b/commands/zinterstore.md @@ -1,7 +1,7 @@ Computes the intersection of `numkeys` sorted sets given by the specified keys, and stores the result in `destination`. It is mandatory to provide the number of -input keys (`numkeys`) before passing the input keys and the other -(optional) arguments. +input keys (`numkeys`) before passing the input keys and the other (optional) +arguments. By default, the resulting score of an element is the sum of its scores in the sorted sets where it exists. Because intersection requires an element to be a @@ -27,4 +27,3 @@ If `destination` already exists, it is overwritten. ZADD zset2 3 "three" ZINTERSTORE out 2 zset1 zset2 WEIGHTS 2 3 ZRANGE out 0 -1 WITHSCORES - diff --git a/commands/zrange.md b/commands/zrange.md index 046ced56b4..42275f6c77 100644 --- a/commands/zrange.md +++ b/commands/zrange.md @@ -15,9 +15,9 @@ largest index in the sorted set, or `start > stop`, an empty list is returned. If `stop` is larger than the end of the sorted set Redis will treat it like it is the last element of the sorted set. -It is possible to pass the `WITHSCORES` option in order to return the scores of -the elements together with the elements. The returned list will contain -`value1,score1,...,valueN,scoreN` instead of `value1,...,valueN`. Client +It is possible to pass the `WITHSCORES` option in order to return the scores +of the elements together with the elements. The returned list will contain +`value1,score1,...,valueN,scoreN` instead of `value1,...,valueN`. Client libraries are free to return a more appropriate data type (suggestion: an array with (value, score) arrays/tuples). 
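The zrange.md hunk above notes that the flat `value1,score1,...` reply of `WITHSCORES` may be reshaped by client libraries into `(value, score)` pairs. A sketch of that reply post-processing (the sample reply is hypothetical):

```python
def pair_withscores(flat_reply):
    """Turn a flat value1,score1,... WITHSCORES reply into (value, score)
    tuples, as the documentation suggests client libraries may do."""
    values = flat_reply[::2]
    scores = [float(s) for s in flat_reply[1::2]]
    return list(zip(values, scores))
```

For example, a reply of `["one", "1", "two", "2"]` would become `[("one", 1.0), ("two", 2.0)]` under this transformation.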
@@ -35,4 +35,3 @@ their scores). ZRANGE myzset 0 -1 ZRANGE myzset 2 3 ZRANGE myzset -2 -1 - diff --git a/commands/zrangebyscore.md b/commands/zrangebyscore.md index afdb08102d..df47b50573 100644 --- a/commands/zrangebyscore.md +++ b/commands/zrangebyscore.md @@ -7,7 +7,7 @@ follows from a property of the sorted set implementation in Redis and does not involve further computation). The optional `LIMIT` argument can be used to only get a range of the matching -elements (similar to _SELECT LIMIT offset, count_ in SQL). Keep in mind that if +elements (similar to *SELECT LIMIT offset, count* in SQL). Keep in mind that if `offset` is large, the sorted set needs to be traversed for `offset` elements before getting to the elements to return, which can add up to O(N) time complexity. @@ -49,4 +49,3 @@ with their scores). ZRANGEBYSCORE myzset 1 2 ZRANGEBYSCORE myzset (1 2 ZRANGEBYSCORE myzset (1 (2 - diff --git a/commands/zrank.md b/commands/zrank.md index 63be4d676e..3703b5bee1 100644 --- a/commands/zrank.md +++ b/commands/zrank.md @@ -9,7 +9,7 @@ to low. * If `member` exists in the sorted set, @integer-reply: the rank of `member`. * If `member` does not exist in the sorted set or `key` does not exist, -@bulk-reply: `nil`. + @bulk-reply: `nil`. @examples @@ -19,4 +19,3 @@ to low. ZADD myzset 3 "three" ZRANK myzset "three" ZRANK myzset "four" - diff --git a/commands/zrem.md b/commands/zrem.md index 9d0f051c1a..8733647153 100644 --- a/commands/zrem.md +++ b/commands/zrem.md @@ -7,11 +7,13 @@ An error is returned when `key` exists and does not hold a sorted set. @integer-reply, specifically: -* The number of members removed from the sorted set, not including non existing members. +* The number of members removed from the sorted set, not including non existing + members. @history -* `>= 2.4`: Accepts multiple elements. In Redis versions older than 2.4 it was possible to remove a single member per call. +* `>= 2.4`: Accepts multiple elements. 
In Redis versions older than 2.4 it was + possible to remove a single member per call. @examples @@ -21,4 +23,3 @@ An error is returned when `key` exists and does not hold a sorted set. ZADD myzset 3 "three" ZREM myzset "two" ZRANGE myzset 0 -1 WITHSCORES - diff --git a/commands/zremrangebyrank.md b/commands/zremrangebyrank.md index c9f97419e6..e0458d21de 100644 --- a/commands/zremrangebyrank.md +++ b/commands/zremrangebyrank.md @@ -1,9 +1,9 @@ -Removes all elements in the sorted set stored at `key` with rank between -`start` and `stop`. Both `start` and `stop` are `0`-based indexes with `0` -being the element with the lowest score. These indexes can be negative numbers, -where they indicate offsets starting at the element with the highest score. For -example: `-1` is the element with the highest score, `-2` the element with the -second highest score and so forth. +Removes all elements in the sorted set stored at `key` with rank between `start` +and `stop`. Both `start` and `stop` are `0` -based indexes with `0` being the +element with the lowest score. These indexes can be negative numbers, where they +indicate offsets starting at the element with the highest score. For example: +`-1` is the element with the highest score, `-2` the element with the second +highest score and so forth. @return @@ -17,4 +17,3 @@ second highest score and so forth. ZADD myzset 3 "three" ZREMRANGEBYRANK myzset 0 1 ZRANGE myzset 0 -1 WITHSCORES - diff --git a/commands/zremrangebyscore.md b/commands/zremrangebyscore.md index d575d8d61e..253b37098a 100644 --- a/commands/zremrangebyscore.md +++ b/commands/zremrangebyscore.md @@ -16,4 +16,3 @@ Since version 2.1.6, `min` and `max` can be exclusive, following the syntax of ZADD myzset 3 "three" ZREMRANGEBYSCORE myzset -inf (2 ZRANGE myzset 0 -1 WITHSCORES - diff --git a/commands/zrevrange.md b/commands/zrevrange.md index ddf0a404a0..772be477e2 100644 --- a/commands/zrevrange.md +++ b/commands/zrevrange.md @@ -18,4 +18,3 @@ their scores). 
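The index rules in the zremrangebyrank.md hunk above — zero-based ranks, with negative numbers counting back from the highest-scored element — reduce to a small normalization step. A sketch of just the index arithmetic (not the removal itself):

```python
def resolve_rank_range(start, stop, n):
    """Map ZREMRANGEBYRANK-style start/stop indexes onto a sorted set of n
    elements; negative indexes are offsets from the highest-scored end."""
    if start < 0:
        start = max(n + start, 0)
    if stop < 0:
        stop = n + stop
    stop = min(stop, n - 1)
    return start, stop  # inclusive range; empty whenever start > stop
```

With three elements, `(0, 1)` selects the two lowest-scored members while `(-2, -1)` selects the two highest-scored ones.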
ZREVRANGE myzset 0 -1 ZREVRANGE myzset 2 3 ZREVRANGE myzset -2 -1 - diff --git a/commands/zrevrangebyscore.md b/commands/zrevrangebyscore.md index c20bd61915..bc2257cea2 100644 --- a/commands/zrevrangebyscore.md +++ b/commands/zrevrangebyscore.md @@ -24,4 +24,3 @@ with their scores). ZREVRANGEBYSCORE myzset 2 1 ZREVRANGEBYSCORE myzset 2 (1 ZREVRANGEBYSCORE myzset (2 (1 - diff --git a/commands/zrevrank.md b/commands/zrevrank.md index 8cc820ab9c..90f86352c4 100644 --- a/commands/zrevrank.md +++ b/commands/zrevrank.md @@ -9,7 +9,7 @@ high. * If `member` exists in the sorted set, @integer-reply: the rank of `member`. * If `member` does not exist in the sorted set or `key` does not exist, -@bulk-reply: `nil`. + @bulk-reply: `nil`. @examples @@ -19,4 +19,3 @@ high. ZADD myzset 3 "three" ZREVRANK myzset "one" ZREVRANK myzset "four" - diff --git a/commands/zscore.md b/commands/zscore.md index 7240b21add..03d178e5b9 100644 --- a/commands/zscore.md +++ b/commands/zscore.md @@ -1,7 +1,7 @@ Returns the score of `member` in the sorted set at `key`. -If `member` does not exist in the sorted set, or `key` does not exist, -`nil` is returned. +If `member` does not exist in the sorted set, or `key` does not exist, `nil` is +returned. @return @@ -13,4 +13,3 @@ represented as string. @cli ZADD myzset 1 "one" ZSCORE myzset "one" - diff --git a/commands/zunionstore.md b/commands/zunionstore.md index 872c2ac4c7..f2710f61e6 100644 --- a/commands/zunionstore.md +++ b/commands/zunionstore.md @@ -35,4 +35,3 @@ If `destination` already exists, it is overwritten. 
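As with `ZINTERSTORE` above, `ZUNIONSTORE`'s default aggregation sums each member's scores across the inputs, optionally scaled by `WEIGHTS`. A hedged sketch over plain dicts, reproducing the `WEIGHTS 2 3` example from the docs:

```python
def zunionstore(store, dest, keys, weights=None):
    """Model ZUNIONSTORE's default SUM aggregation: each member's destination
    score is the weighted sum of its scores in the input sets. Illustrative
    only, not the server's implementation."""
    weights = weights or [1] * len(keys)
    result = {}
    for key, weight in zip(keys, weights):
        for member, score in store.get(key, {}).items():
            result[member] = result.get(member, 0) + score * weight
    store[dest] = result          # the destination key is overwritten
    return len(result)            # the command returns the result cardinality
```

Feeding it the documented `zset1`/`zset2` contents with weights 2 and 3 yields the same `out` scores as the example transcript.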
ZADD zset2 3 "three" ZUNIONSTORE out 2 zset1 zset2 WEIGHTS 2 3 ZRANGE out 0 -1 WITHSCORES - diff --git a/remarkdown.rb b/remarkdown.rb new file mode 100644 index 0000000000..68074dd7b1 --- /dev/null +++ b/remarkdown.rb @@ -0,0 +1,204 @@ +require "rdiscount" +require "nokogiri" + +class ReMarkdown + + attr_reader :xml + + def initialize(input) + html = RDiscount.new(input).to_html + @xml = Nokogiri::XML::Document.parse("#{html}") + + @links = [] + @indent = 0 + + @ol_depth = 0 + @ol_index = [0] * 10 + end + + def to_s + parts = [] + + @xml.at("/doc").children.each do |node| + parts << format_block_node(node) + parts << flush_links + end + + parts.compact.join("\n") + "\n" + end + + private + + def flush_links + return if @links.empty? + + rv = @links.map do |id, href| + "[%s]: %s\n" % [id, href] + end.join + + @links = [] + + rv + end + + def format_nodes(nodes) + if nodes.any? { |node| block_nodes.include?(node.name) } + format_block_nodes(nodes) + else + format_inline_nodes(nodes) + "\n" + end + end + + def block_nodes + ["p", "pre"] + end + + def format_block_nodes(nodes) + nodes.map do |node| + format_block_node(node) + end.join("\n") + "\n" + end + + def format_block_node(node) + case node.name.downcase + when "h1", "h2", "h3", "h4", "h5", "h6" + format_header(node) + "\n" + when "p" + format_inline_nodes(node.children) + "\n" + when "pre" + indent(node.child.content.chomp, 4) + "\n" + when "ul" + format_ul(node) + "\n" + when "ol" + format_ol(node) + "\n" + when "text" + # skip + else + raise "don't know what to do for block node #{node.name}" + end + end + + def inline_nodes + ["em", "strong", "sub", "sup", "a"] + end + + def format_inline_nodes(nodes) + result = nodes.inject("") do |sum, node| + text = format_inline_node(node) + + if sum.empty? || sum =~ /["(\[]$/ || text =~ /^[.,:")\]\^]/ + sum + text + else + sum + " " + text + end + end + + par(result).chomp + end + + def format_inline_node(node) + if node.text? 
+ node.content.strip + else + case node.name + when "em" + "*" + format_inline_node(node.child) + "*" + when "strong" + "**" + format_inline_node(node.child) + "**" + when "code" + "`" + format_inline_node(node.child) + "`" + when "sub" + format_inline_node(node.child) + when "sup" + "^" + format_inline_node(node.child) + when "a" + href = node["href"] + + id = href. + gsub(/[^\w]/, " "). + split(/\s+/). + map { |e| e.to_s[0] }. + join. + downcase + + @links << [id, href] + + "[%s][%s]" % [format_inline_nodes(node.children).chomp, id] + else + raise "don't know what to do for inline node #{node.name}" + end + end + end + + def format_header(node) + level = node.name[/h([1-6])/, 1].to_i + str = "#" * level + str += " " + str += format_inline_nodes(node.children) + str += "\n" + str + end + + def format_ul(node) + children = node.children.map do |child| + next unless child.name.downcase == "li" + + @indent += 2 + txt = format_nodes(child.children) + @indent -= 2 + + txt = indent(txt, 2) + txt[0] = "*" + + txt + end.compact + + children.map do |child| + child + end.join + end + + def format_ol(node) + @ol_depth += 1 + @ol_index[@ol_depth] = 0 + + children = node.children.map do |child| + next unless child.name.downcase == "li" + + @ol_index[@ol_depth] += 1 + + @indent += 3 + txt = format_nodes(child.children) + @indent -= 3 + + txt = indent(txt, 3) + txt[0, 2] = "%d." 
% @ol_index[@ol_depth] + + txt + end.compact + + @ol_depth -= 1 + + children.map do |child| + child + end.join + end + + def indent(text, level = 0) + text.gsub(/^.*$/) do |match| + (" " * level) + match + end + end + + def par(input) + formatted = nil + + IO.popen("par p0s0w%d" % [80 - @indent], "r+") do |io| + io.puts input + io.close_write + formatted = io.read + end + + formatted + end +end From 99169cf45fcb1d6d5cdcd6aab13872615587eb95 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 18 Jun 2012 18:24:09 -0700 Subject: [PATCH 0170/2880] Emphasis with underscore --- commands/append.md | 2 +- commands/bgrewriteaof.md | 2 +- commands/bitcount.md | 4 ++-- commands/bitop.md | 10 +++++----- commands/eval.md | 20 ++++++++++---------- commands/expire.md | 12 ++++++------ commands/getbit.md | 10 +++++----- commands/incr.md | 2 +- commands/migrate.md | 4 ++-- commands/persist.md | 4 ++-- commands/rpoplpush.md | 8 ++++---- commands/save.md | 2 +- commands/setbit.md | 18 +++++++++--------- commands/setrange.md | 16 ++++++++-------- commands/slowlog.md | 6 +++--- commands/zrangebyscore.md | 2 +- remarkdown.rb | 2 +- 17 files changed, 62 insertions(+), 62 deletions(-) diff --git a/commands/append.md b/commands/append.md index c4c6ecca9b..b353b27602 100644 --- a/commands/append.md +++ b/commands/append.md @@ -17,7 +17,7 @@ string, so `APPEND` will be similar to `SET` in this special case. ## Pattern: Time series the `APPEND` command can be used to create a very compact representation of a -list of fixed-size samples, usually referred as *time series*. Every time a new +list of fixed-size samples, usually referred as _time series_. 
Every time a new sample arrives we can store it using the command APPEND timeseries "fixed-size sample" diff --git a/commands/bgrewriteaof.md b/commands/bgrewriteaof.md index 51e97ad4c0..c69a801e6f 100644 --- a/commands/bgrewriteaof.md +++ b/commands/bgrewriteaof.md @@ -9,7 +9,7 @@ The rewrite will be only triggered by Redis if there is not already a background process doing persistence. Specifically: * If a Redis child is creating a snapshot on disk, the AOF rewrite is - *scheduled* but not started until the saving child producing the RDB file + _scheduled_ but not started until the saving child producing the RDB file terminates. In this case the `BGREWRITEAOF` will still return an OK code, but with an appropriate message. You can check if an AOF rewrite is scheduled looking at the `INFO` command starting from Redis 2.6. diff --git a/commands/bitcount.md b/commands/bitcount.md index e7fee5733d..1d193cafc8 100644 --- a/commands/bitcount.md +++ b/commands/bitcount.md @@ -2,7 +2,7 @@ Count the number of set bits (population counting) in a string. By default all the bytes contained in the string are examined. It is possible to specify the counting operation only in an interval passing the additional -arguments *start* and *end*. +arguments _start_ and _end_. Like for the `GETRANGE` command start and end can contain negative values in order to index bytes starting from the end of the string, where -1 is the last @@ -59,6 +59,6 @@ When the bitmap is big, there are two alternatives: * Taking a separated key that is incremented every time the bitmap is modified. This can be very efficient and atomic using a small Redis Lua script. -* Running the bitmap incrementally using the `BITCOUNT` *start* and *end* +* Running the bitmap incrementally using the `BITCOUNT` _start_ and _end_ optional parameters, accumulating the results client-side, and optionally caching the result into a key. 
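The incremental pattern in the bitcount.md hunk above relies on the _start_/_end_ byte-range arguments, where negative indexes count from the end of the string. A sketch of those semantics (a model of the documented behavior, not the server's popcount code):

```python
def bitcount(value, start=0, end=-1):
    """Count set bits in value[start..end] (inclusive byte indexes);
    negative indexes address bytes from the end, -1 being the last byte."""
    n = len(value)
    if start < 0:
        start = max(n + start, 0)
    if end < 0:
        end = n + end
    return sum(bin(byte).count("1") for byte in value[start:end + 1])
```

On `b"foobar"` this gives 26 set bits overall, 4 for the first byte alone (`"f"`), and 6 for the second (`"o"`), matching the counts in the BITCOUNT documentation.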
diff --git a/commands/bitop.md b/commands/bitop.md index ad918bf9fe..b9e5f3a9f9 100644 --- a/commands/bitop.md +++ b/commands/bitop.md @@ -4,15 +4,15 @@ store the result in the destination key. The `BITOP` command supports four bitwise operations: **AND**, **OR**, **XOR** and **NOT**, thus the valid forms to call the command are: -* BITOP AND *destkey srckey1 srckey2 srckey3 ... srckeyN* -* BITOP OR *destkey srckey1 srckey2 srckey3 ... srckeyN* -* BITOP XOR *destkey srckey1 srckey2 srckey3 ... srckeyN* -* BITOP NOT *destkey srckey* +* BITOP AND _destkey srckey1 srckey2 srckey3 ... srckeyN_ +* BITOP OR _destkey srckey1 srckey2 srckey3 ... srckeyN_ +* BITOP XOR _destkey srckey1 srckey2 srckey3 ... srckeyN_ +* BITOP NOT _destkey srckey_ As you can see **NOT** is special as it only takes an input key, because it performs inversion of bits so it only makes sense as an unary operator. -The result of the operation is always stored at *destkey*. +The result of the operation is always stored at _destkey_. ## Handling of strings with different lengths diff --git a/commands/eval.md b/commands/eval.md index 6bff2b985e..277e46cf4e 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -212,7 +212,7 @@ means that if an `EVAL` is performed against a Redis instance all the subsequent `EVALSHA` calls will succeed. The only way to flush the script cache is by explicitly calling the SCRIPT FLUSH -command, that will *completely flush* the scripts cache removing all the scripts +command, that will _completely flush_ the scripts cache removing all the scripts executed so far. This is usually needed only when the instance is going to be instantiated for another customer or application in a cloud environment. @@ -240,13 +240,13 @@ subsystem. SCRIPT currently accepts three different commands: can be reassigned to a different user. It is also useful for testing client libraries implementations of the scripting feature. -* SCRIPT EXISTS *sha1* *sha2*... *shaN*. 
Given a list of SHA1 digests as +* SCRIPT EXISTS _sha1_ _sha2_... _shaN_. Given a list of SHA1 digests as arguments this command returns an array of 1 or 0, where 1 means the specific SHA1 is recognized as a script already present in the scripting cache, while 0 means that a script with this SHA1 was never seen before (or at least never seen after the latest SCRIPT FLUSH command). -* SCRIPT LOAD *script*. This command registers the specified script in the +* SCRIPT LOAD _script_. This command registers the specified script in the Redis script cache. The command is useful in all the contexts where we want to make sure that `EVALSHA` will not fail (for instance during a pipeline or MULTI/EXEC operation), without the need to actually execute the script. @@ -274,7 +274,7 @@ scripts). The only drawback with this approach is that scripts are required to have the following property: -* The script always evaluates the same Redis *write* commands with the same +* The script always evaluates the same Redis _write_ commands with the same arguments given the same input data set. Operations performed by the script cannot depend on any hidden (non explicit) information or state that may change as script execution proceeds or between different executions of the @@ -290,15 +290,15 @@ In order to enforce this behavior in scripts Redis does the following: state. * Redis will block the script with an error if a script will call a Redis - command able to alter the data set **after** a Redis *random* command like + command able to alter the data set **after** a Redis _random_ command like `RANDOMKEY`, `SRANDMEMBER`, `TIME`. This means that if a script is read only and does not modify the data set it is free to call those commands. 
Note that - a *random command* does not necessarily identifies a command that uses random + a _random command_ does not necessarily identifies a command that uses random numbers: any non deterministic command is considered a random command (the best example in this regard is the `TIME` command). * Redis commands that may return elements in random order, like `SMEMBERS` - (because Redis Sets are *unordered*) have a different behavior when called + (because Redis Sets are _unordered_) have a different behavior when called from Lua, and undergone a silent lexicographical sorting filter before returning data to Lua scripts. So `redis.call("smembers",KEYS[1])` will always return the Set elements in the same order, while the same command invoked from @@ -394,7 +394,7 @@ returns with an error: redis 127.0.0.1:6379> eval 'a=10' 0 (error) ERR Error running script (call to f_933044db579a2f8fd45d8065f04a8d0249383e57): user_script:1: Script attempted to create global variable 'a' -Accessing a *non existing* global variable generates a similar error. +Accessing a _non existing_ global variable generates a similar error. Using Lua debugging functionalities or other approaches like altering the meta table used to implement global protections, in order to circumvent globals @@ -403,7 +403,7 @@ If the user messes with the Lua global state, the consistency of AOF and replication is not guaranteed: don't do it. Note for Lua newbies: in order to avoid using global variables in your scripts -simply declare every variable you are going to use using the *local* keyword. +simply declare every variable you are going to use using the _local_ keyword. ## Available libraries @@ -417,7 +417,7 @@ The Redis Lua interpreter loads the following Lua libraries: * cjson lib. * cmsgpack lib. 
-Every Redis instance is *guaranteed* to have all the above libraries so you can +Every Redis instance is _guaranteed_ to have all the above libraries so you can be sure that the environment for your Redis scripts is always the same. The CJSON library allows to manipulate JSON data in a very fast way from Lua. diff --git a/commands/expire.md b/commands/expire.md index fe17727887..78d283f190 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -1,10 +1,10 @@ Set a timeout on `key`. After the timeout has expired, the key will automatically be deleted. A key with an associated timeout is often said to be -*volatile* in Redis terminology. +_volatile_ in Redis terminology. The timeout is cleared only when the key is removed using the `DEL` command or overwritten using the `SET` or `GETSET` commands. This means that all the -operations that conceptually *alter* the value stored at the key without +operations that conceptually _alter_ the value stored at the key without replacing it with a new one will leave the timeout untouched. For instance, incrementing the value of a key with `INCR`, pushing a new value into a list with `LPUSH`, or altering the field value of a hash with `HSET` are all @@ -24,9 +24,9 @@ inherit all the characteristics of `Key_B`. ## Refreshing expires It is possible to call `EXPIRE` using as argument a key that already has an -existing expire set. In this case the time to live of a key is *updated* to the +existing expire set. In this case the time to live of a key is _updated_ to the new value. There are many useful applications for this, an example is documented -in the *Navigation session* pattern section below. +in the _Navigation session_ pattern section below. ## Differences in Redis prior 2.1.3 @@ -54,9 +54,9 @@ now fixed. 
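The `EXPIRE` hunks above describe two semantics worth pinning down: writes like `SET` clear an existing timeout, and calling `EXPIRE` on an already-volatile key *updates* its time to live. A toy in-memory model in plain Python (not a Redis client; the `TTLStore` name and the lazy-deletion detail are illustrative assumptions):

```python
import time

class TTLStore:
    """Toy in-memory model of Redis EXPIRE semantics (illustrative only)."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._data = {}      # key -> value
        self._expires = {}   # key -> absolute expiry deadline

    def set(self, key, value):
        # Like SET: writing a new value removes any previous timeout.
        self._data[key] = value
        self._expires.pop(key, None)

    def expire(self, key, seconds):
        # Like EXPIRE: on a volatile key, the TTL is *updated*, not rejected.
        if key not in self._data:
            return 0
        self._expires[key] = self._clock() + seconds
        return 1

    def get(self, key):
        deadline = self._expires.get(key)
        if deadline is not None and self._clock() >= deadline:
            # Expired keys are removed lazily on access in this sketch.
            del self._data[key], self._expires[key]
        return self._data.get(key)
```

Refreshing with a second `expire()` call pushes the deadline forward, which is exactly what the "Refreshing expires" section relies on for the navigation-session pattern.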
## Pattern: Navigation session Imagine you have a web service and you are interested in the latest N pages -*recently* visited by your users, such that each adiacent page view was not +_recently_ visited by your users, such that each adiacent page view was not performed more than 60 seconds after the previous. Conceptually you may think at -this set of page views as a *Navigation session* if your user, that may contain +this set of page views as a _Navigation session_ if your user, that may contain interesting information about what kind of products he or she is looking for currently, so that you can recommend related products. diff --git a/commands/getbit.md b/commands/getbit.md index c74d66f9ef..18cd7ff06c 100644 --- a/commands/getbit.md +++ b/commands/getbit.md @@ -1,13 +1,13 @@ -Returns the bit value at *offset* in the string value stored at *key*. +Returns the bit value at _offset_ in the string value stored at _key_. -When *offset* is beyond the string length, the string is assumed to be a -contiguous space with 0 bits. When *key* does not exist it is assumed to be an -empty string, so *offset* is always out of range and the value is also assumed +When _offset_ is beyond the string length, the string is assumed to be a +contiguous space with 0 bits. When _key_ does not exist it is assumed to be an +empty string, so _offset_ is always out of range and the value is also assumed to be a contiguous space with 0 bits. @return -@integer-reply: the bit value stored at *offset*. +@integer-reply: the bit value stored at _offset_. @examples diff --git a/commands/incr.md b/commands/incr.md index 4f2d42087d..feefe6fcfa 100644 --- a/commands/incr.md +++ b/commands/incr.md @@ -54,7 +54,7 @@ public API. We provide two implementations of this pattern using `INCR`, where we assume that the problem to solve is limiting the number of API calls to a maximum of -*ten requests per second per IP address*. +_ten requests per second per IP address_. 
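The "ten requests per second per IP address" limit that the `INCR` patterns below enforce can be sketched in plain Python: one counter per (ip, second) pair, incremented on every request. This is a toy model, not the Redis implementation — in Redis the counter would live at a key like `rate:<ip>:<second>`, incremented with `INCR` and cleaned up with `EXPIRE`:

```python
from collections import defaultdict

class RateLimiter:
    """Toy model of the INCR-based rate limiter (illustrative only)."""

    def __init__(self, limit=10):
        self.limit = limit
        self.counters = defaultdict(int)   # (ip, second) -> request count

    def allow(self, ip, now):
        key = (ip, int(now))    # stand-in for the per-second Redis key
        self.counters[key] += 1  # stand-in for INCR
        return self.counters[key] <= self.limit
```

Because the key changes every second, each counter naturally stops growing after its second has passed; the real pattern sets a short `EXPIRE` on the key so stale counters also get deleted.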
## Pattern: Rate limiter 1 diff --git a/commands/migrate.md b/commands/migrate.md index d7ff3f4a3f..7c48941885 100644 --- a/commands/migrate.md +++ b/commands/migrate.md @@ -28,11 +28,11 @@ the following two cases are possible: It is not possible for the key to get lost in the event of a timeout, but the client calling `MIGRATE`, in the event of a timeout error, should check if the -key is *also* present in the target instance and act accordingly. +key is _also_ present in the target instance and act accordingly. When any other error is returned (starting with `ERR`) `MIGRATE` guarantees that the key is still only present in the originating instance (unless a key with the -same name was also *already* present on the target instance). +same name was also _already_ present on the target instance). On success OK is returned. diff --git a/commands/persist.md b/commands/persist.md index f0c4ccd14c..6f236afa2e 100644 --- a/commands/persist.md +++ b/commands/persist.md @@ -1,5 +1,5 @@ -Remove the existing timeout on `key`, turning the key from *volatile* (a key -with an expire set) to *persistent* (a key that will never expire as no timeout +Remove the existing timeout on `key`, turning the key from _volatile_ (a key +with an expire set) to _persistent_ (a key that will never expire as no timeout is associated). @return diff --git a/commands/rpoplpush.md b/commands/rpoplpush.md index f48fb4a45e..a6644658a5 100644 --- a/commands/rpoplpush.md +++ b/commands/rpoplpush.md @@ -33,16 +33,16 @@ pushing values into a list in the producer side, and waiting for this values in the consumer side using `RPOP` (using polling), or `BRPOP` if the client is better served by a blocking operation. 
-However in this context the obtained queue is not *reliable* as messages can +However in this context the obtained queue is not _reliable_ as messages can be lost, for example in the case there is a network problem or if the consumer crashes just after the message is received but it is still to process. `RPOPLPUSH` (or `BRPOPLPUSH` for the blocking variant) offers a way to avoid this problem: the consumer fetches the message and at the same time pushes it -into a *processing* list. It will use the `LREM` command in order to remove the -message from the *processing* list once the message has been processed. +into a _processing_ list. It will use the `LREM` command in order to remove the +message from the _processing_ list once the message has been processed. -An additional client may monitor the *processing* list for items that remain +An additional client may monitor the _processing_ list for items that remain there for too much time, and will push those timed out items into the queue again if needed. diff --git a/commands/save.md b/commands/save.md index 961a6a2d5d..6a20aad3d1 100644 --- a/commands/save.md +++ b/commands/save.md @@ -1,5 +1,5 @@ The `SAVE` commands performs a **synchronous** save of the dataset producing a -*point in time* snapshot of all the data inside the Redis instance, in the form +_point in time_ snapshot of all the data inside the Redis instance, in the form of an RDB file. You almost never what to call `SAVE` in production environments where it will diff --git a/commands/setbit.md b/commands/setbit.md index 0c511be7b2..1cf7f1b914 100644 --- a/commands/setbit.md +++ b/commands/setbit.md @@ -1,24 +1,24 @@ -Sets or clears the bit at *offset* in the string value stored at *key*. +Sets or clears the bit at _offset_ in the string value stored at _key_. -The bit is either set or cleared depending on *value*, which can be either 0 or -1. When *key* does not exist, a new string value is created. 
The string is grown -to make sure it can hold a bit at *offset*. The *offset* argument is required +The bit is either set or cleared depending on _value_, which can be either 0 or +1. When _key_ does not exist, a new string value is created. The string is grown +to make sure it can hold a bit at _offset_. The _offset_ argument is required to be greater than or equal to 0, and smaller than 2^32 (this limits bitmaps to -512MB). When the string at *key* is grown, added bits are set to 0. +512MB). When the string at _key_ is grown, added bits are set to 0. -**Warning**: When setting the last possible bit (*offset* equal to 2^32 -1) and -the string value stored at *key* does not yet hold a string value, or holds +**Warning**: When setting the last possible bit (_offset_ equal to 2^32 -1) and +the string value stored at _key_ does not yet hold a string value, or holds a small string value, Redis needs to allocate all intermediate memory which can block the server for some time. On a 2010 MacBook Pro, setting bit number 2^32 -1 (512MB allocation) takes ~300ms, setting bit number 2^30 -1 (128MB allocation) takes ~80ms, setting bit number 2^28 -1 (32MB allocation) takes ~30ms and setting bit number 2^26 -1 (8MB allocation) takes ~8ms. Note that once -this first allocation is done, subsequent calls to `SETBIT` for the same *key* +this first allocation is done, subsequent calls to `SETBIT` for the same _key_ will not have the allocation overhead. @return -@integer-reply: the original bit value stored at *offset*. +@integer-reply: the original bit value stored at _offset_. @examples diff --git a/commands/setrange.md b/commands/setrange.md index 8a7204cad2..3b99b658eb 100644 --- a/commands/setrange.md +++ b/commands/setrange.md @@ -1,22 +1,22 @@ -Overwrites part of the string stored at *key*, starting at the specified offset, -for the entire length of *value*. 
If the offset is larger than the current -length of the string at *key*, the string is padded with zero-bytes to make -*offset* fit. Non-existing keys are considered as empty strings, so this command -will make sure it holds a string large enough to be able to set *value* at -*offset*. +Overwrites part of the string stored at _key_, starting at the specified offset, +for the entire length of _value_. If the offset is larger than the current +length of the string at _key_, the string is padded with zero-bytes to make +_offset_ fit. Non-existing keys are considered as empty strings, so this command +will make sure it holds a string large enough to be able to set _value_ at +_offset_. Note that the maximum offset that you can set is 2^29 -1 (536870911), as Redis Strings are limited to 512 megabytes. If you need to grow beyond this size, you can use multiple keys. **Warning**: When setting the last possible byte and the string value stored at -*key* does not yet hold a string value, or holds a small string value, Redis +_key_ does not yet hold a string value, or holds a small string value, Redis needs to allocate all intermediate memory which can block the server for some time. On a 2010 MacBook Pro, setting byte number 536870911 (512MB allocation) takes ~300ms, setting byte number 134217728 (128MB allocation) takes ~80ms, setting bit number 33554432 (32MB allocation) takes ~30ms and setting bit number 8388608 (8MB allocation) takes ~8ms. Note that once this first allocation is -done, subsequent calls to `SETRANGE` for the same *key* will not have the +done, subsequent calls to `SETRANGE` for the same _key_ will not have the allocation overhead. 
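The `SETRANGE` growth rule above — pad with zero-bytes until _value_ fits at _offset_ — is easy to model on an immutable byte string. A minimal plain-Python sketch (not redis-py; the function name is illustrative):

```python
def setrange(buf: bytes, offset: int, value: bytes) -> bytes:
    """Toy model of SETRANGE: grow the string with zero-bytes so that
    `value` fits at `offset`, then overwrite in place."""
    if offset < 0:
        raise ValueError("offset must be >= 0")
    out = bytearray(buf)
    end = offset + len(value)
    if end > len(out):
        out.extend(b"\x00" * (end - len(out)))  # zero-byte padding
    out[offset:end] = value
    return bytes(out)
```

A non-existing key behaves like the empty string here, so writing at offset 5 of an empty buffer produces five zero-bytes followed by the value — the same picture the warning about large allocations is describing.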
## Patterns diff --git a/commands/slowlog.md b/commands/slowlog.md index 5c996d4886..943764ebfc 100644 --- a/commands/slowlog.md +++ b/commands/slowlog.md @@ -8,10 +8,10 @@ with the client, sending the reply and so forth, but just the time needed to actually execute the command (this is the only stage of command execution where the thread is blocked and can not serve other requests in the meantime). -You can configure the slow log with two parameters: *slowlog-log-slower-than* +You can configure the slow log with two parameters: _slowlog-log-slower-than_ tells Redis what is the execution time, in microseconds, to exceed in order for the command to get logged. Note that a negative number disables the slow log, -while a value of zero forces the logging of every command. *slowlog-max-len* +while a value of zero forces the logging of every command. _slowlog-max-len_ is the length of the slow log. The minimum value is zero. When a new command is logged and the slow log is already at its maximum length, the oldest one is removed from the queue of logged commands in order to make space. @@ -24,7 +24,7 @@ running using the `CONFIG GET` and `CONFIG SET` commands. The slow log is accumulated in memory, so no file is written with information about the slow command executions. This makes the slow log remarkably fast at the point that you can enable the logging of all the commands (setting the -*slowlog-log-slower-than* config parameter to zero) with minor performance hit. +_slowlog-log-slower-than_ config parameter to zero) with minor performance hit. To read the slow log the **SLOWLOG GET** command is used, that returns every entry in the slow log. 
It is possible to return only the N most recent entries diff --git a/commands/zrangebyscore.md b/commands/zrangebyscore.md index df47b50573..2a355904b5 100644 --- a/commands/zrangebyscore.md +++ b/commands/zrangebyscore.md @@ -7,7 +7,7 @@ follows from a property of the sorted set implementation in Redis and does not involve further computation). The optional `LIMIT` argument can be used to only get a range of the matching -elements (similar to *SELECT LIMIT offset, count* in SQL). Keep in mind that if +elements (similar to _SELECT LIMIT offset, count_ in SQL). Keep in mind that if `offset` is large, the sorted set needs to be traversed for `offset` elements before getting to the elements to return, which can add up to O(N) time complexity. diff --git a/remarkdown.rb b/remarkdown.rb index 68074dd7b1..50f5e518b0 100644 --- a/remarkdown.rb +++ b/remarkdown.rb @@ -102,7 +102,7 @@ def format_inline_node(node) else case node.name when "em" - "*" + format_inline_node(node.child) + "*" + "_" + format_inline_node(node.child) + "_" when "strong" "**" + format_inline_node(node.child) + "**" when "code" From 339e5130b2f930a5986b8927ad8bb39a261e59ad Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 18 Jun 2012 21:20:45 -0700 Subject: [PATCH 0171/2880] Start every sentence on a new line --- commands/append.md | 21 +-- commands/auth.md | 9 +- commands/bgrewriteaof.md | 16 +- commands/bgsave.md | 10 +- commands/bitcount.md | 25 +-- commands/bitop.md | 9 +- commands/blpop.md | 51 ++++--- commands/brpop.md | 9 +- commands/brpoplpush.md | 9 +- commands/config get.md | 17 ++- commands/config set.md | 17 ++- commands/debug object.md | 4 +- commands/debug segfault.md | 4 +- commands/decr.md | 9 +- commands/decrby.md | 9 +- commands/del.md | 3 +- commands/dump.md | 15 +- commands/eval.md | 285 +++++++++++++++++++---------------- commands/expire.md | 82 +++++----- commands/expireat.md | 5 +- commands/flushall.md | 3 +- commands/flushdb.md | 3 +- commands/get.md | 7 +- 
commands/getbit.md | 7 +- commands/getrange.md | 7 +- commands/getset.md | 9 +- commands/hdel.md | 11 +- commands/hgetall.md | 6 +- commands/hincrby.md | 3 +- commands/hincrbyfloat.md | 8 +- commands/hmset.md | 5 +- commands/hset.md | 6 +- commands/hsetnx.md | 5 +- commands/incr.md | 69 +++++---- commands/incrby.md | 9 +- commands/incrbyfloat.md | 11 +- commands/info.md | 22 +-- commands/keys.md | 17 ++- commands/lastsave.md | 8 +- commands/lindex.md | 11 +- commands/llen.md | 6 +- commands/lpush.md | 22 +-- commands/lpushx.md | 5 +- commands/lrange.md | 29 ++-- commands/lrem.md | 4 +- commands/lset.md | 4 +- commands/ltrim.md | 22 +-- commands/mget.md | 7 +- commands/migrate.md | 25 +-- commands/monitor.md | 17 ++- commands/move.md | 7 +- commands/mset.md | 11 +- commands/msetnx.md | 10 +- commands/multi.md | 4 +- commands/object.md | 35 +++-- commands/ping.md | 5 +- commands/punsubscribe.md | 5 +- commands/quit.md | 5 +- commands/rename.md | 7 +- commands/renamenx.md | 4 +- commands/restore.md | 4 +- commands/rpoplpush.md | 38 ++--- commands/rpush.md | 22 +-- commands/rpushx.md | 5 +- commands/sadd.md | 11 +- commands/save.md | 9 +- commands/script exists.md | 6 +- commands/script kill.md | 13 +- commands/script load.md | 8 +- commands/select.md | 4 +- commands/set.md | 4 +- commands/setbit.md | 27 ++-- commands/setex.md | 10 +- commands/setnx.md | 36 +++-- commands/setrange.md | 31 ++-- commands/shutdown.md | 19 ++- commands/sinter.md | 6 +- commands/slaveof.md | 15 +- commands/slowlog.md | 38 +++-- commands/smove.md | 15 +- commands/sort.md | 78 +++++----- commands/srem.md | 11 +- commands/strlen.md | 4 +- commands/ttl.md | 6 +- commands/type.md | 6 +- commands/unsubscribe.md | 5 +- commands/zadd.md | 17 ++- commands/zincrby.md | 13 +- commands/zinterstore.md | 13 +- commands/zrange.md | 29 ++-- commands/zrangebyscore.md | 23 +-- commands/zrank.md | 5 +- commands/zrem.md | 9 +- commands/zremrangebyrank.md | 12 +- commands/zrevrange.md | 4 +- 
commands/zrevrangebyscore.md | 6 +- commands/zrevrank.md | 5 +- commands/zunionstore.md | 23 +-- remarkdown.rb | 7 +- 99 files changed, 922 insertions(+), 745 deletions(-) diff --git a/commands/append.md b/commands/append.md index b353b27602..3b673eb0e9 100644 --- a/commands/append.md +++ b/commands/append.md @@ -1,6 +1,7 @@ If `key` already exists and is a string, this command appends the `value` at the -end of the string. If `key` does not exist it is created and set as an empty -string, so `APPEND` will be similar to `SET` in this special case. +end of the string. +If `key` does not exist it is created and set as an empty string, so `APPEND` +will be similar to `SET` in this special case. @return @@ -17,24 +18,24 @@ string, so `APPEND` will be similar to `SET` in this special case. ## Pattern: Time series the `APPEND` command can be used to create a very compact representation of a -list of fixed-size samples, usually referred as _time series_. Every time a new -sample arrives we can store it using the command +list of fixed-size samples, usually referred as _time series_. +Every time a new sample arrives we can store it using the command APPEND timeseries "fixed-size sample" Accessing individual elements in the time series is not hard: * `STRLEN` can be used in order to obtain the number of samples. -* `GETRANGE` allows for random access of elements. If our time series have an - associated time information we can easily implement a binary search to get - range combining `GETRANGE` with the Lua scripting engine available in Redis - 2.6. +* `GETRANGE` allows for random access of elements. + If our time series have an associated time information we can easily implement + a binary search to get range combining `GETRANGE` with the Lua scripting + engine available in Redis 2.6. * `SETRANGE` can be used to overwrite an existing time serie. 
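The time-series pattern in the `APPEND` hunk above relies on every sample having the same width, so that `STRLEN` gives the count and `GETRANGE` gives random access by index. A plain-Python sketch of that arithmetic (the 8-byte width and the function names are assumptions of this sketch, not part of the pattern):

```python
WIDTH = 8  # fixed sample size in bytes, chosen for this sketch

def append_sample(series: bytes, sample: bytes) -> bytes:
    """Model of `APPEND timeseries sample`: samples must be fixed-size."""
    assert len(sample) == WIDTH
    return series + sample

def sample_count(series: bytes) -> int:
    return len(series) // WIDTH        # STRLEN divided by the sample width

def get_sample(series: bytes, i: int) -> bytes:
    start = i * WIDTH                  # GETRANGE start offset for sample i
    return series[start:start + WIDTH]
```

With timestamped samples the same index arithmetic supports the binary search mentioned above, since sample _i_ always lives at byte offset `i * WIDTH`.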
The limitations of this pattern is that we are forced into an append-only mode of operation, there is no way to cut the time series to a given size easily -because Redis currently lacks a command able to trim string objects. However the -space efficiency of time series stored in this way is remarkable. +because Redis currently lacks a command able to trim string objects. +However the space efficiency of time series stored in this way is remarkable. Hint: it is possible to switch to a different key based on the current Unix time, in this way it is possible to have just a relatively small amount of diff --git a/commands/auth.md b/commands/auth.md index 661ecd8016..63d185dbf6 100644 --- a/commands/auth.md +++ b/commands/auth.md @@ -1,10 +1,11 @@ -Request for authentication in a password protected Redis server. Redis can be -instructed to require a password before allowing clients to execute commands. +Request for authentication in a password protected Redis server. +Redis can be instructed to require a password before allowing clients to execute +commands. This is done using the `requirepass` directive in the configuration file. If `password` matches the password in the configuration file, the server replies -with the `OK` status code and starts accepting commands. Otherwise, an error is -returned and the clients needs to try a new password. +with the `OK` status code and starts accepting commands. +Otherwise, an error is returned and the clients needs to try a new password. **Note**: because of the high performance nature of Redis, it is possible to try a lot of passwords in parallel in very short time, so make sure to generate a diff --git a/commands/bgrewriteaof.md b/commands/bgrewriteaof.md index c69a801e6f..c9b73882b1 100644 --- a/commands/bgrewriteaof.md +++ b/commands/bgrewriteaof.md @@ -1,18 +1,22 @@ -Instruct Redis to start an [Append Only File][tpaof] rewrite process. The -rewrite will create a small optimized version of the current Append Only File. 
+Instruct Redis to start an [Append Only File][tpaof] rewrite process. +The rewrite will create a small optimized version of the current Append Only +File. [tpaof]: /topics/persistence#append-only-file If `BGREWRITEAOF` fails, no data gets lost as the old AOF will be untouched. The rewrite will be only triggered by Redis if there is not already a background -process doing persistence. Specifically: +process doing persistence. +Specifically: * If a Redis child is creating a snapshot on disk, the AOF rewrite is _scheduled_ but not started until the saving child producing the RDB file - terminates. In this case the `BGREWRITEAOF` will still return an OK code, but - with an appropriate message. You can check if an AOF rewrite is scheduled - looking at the `INFO` command starting from Redis 2.6. + terminates. + In this case the `BGREWRITEAOF` will still return an OK code, but with an + appropriate message. + You can check if an AOF rewrite is scheduled looking at the `INFO` command + starting from Redis 2.6. * If an AOF rewrite is already in progress the command returns an error and no AOF rewrite will be scheduled for a later time. diff --git a/commands/bgsave.md b/commands/bgsave.md index 25a5f1f696..7a65e2e3b7 100644 --- a/commands/bgsave.md +++ b/commands/bgsave.md @@ -1,7 +1,9 @@ -Save the DB in background. The OK code is immediately returned. Redis forks, -the parent continues to server the clients, the child saves the DB on disk -then exit. A client my be able to check if the operation succeeded using the -`LASTSAVE` command. +Save the DB in background. +The OK code is immediately returned. +Redis forks, the parent continues to server the clients, the child saves the DB +on disk then exit. +A client my be able to check if the operation succeeded using the `LASTSAVE` +command. Please refer to the [persistence documentation][tp] for detailed information. 
diff --git a/commands/bitcount.md b/commands/bitcount.md index 1d193cafc8..fdac855364 100644 --- a/commands/bitcount.md +++ b/commands/bitcount.md @@ -1,8 +1,8 @@ Count the number of set bits (population counting) in a string. -By default all the bytes contained in the string are examined. It is possible -to specify the counting operation only in an interval passing the additional -arguments _start_ and _end_. +By default all the bytes contained in the string are examined. +It is possible to specify the counting operation only in an interval passing the +additional arguments _start_ and _end_. Like for the `GETRANGE` command start and end can contain negative values in order to index bytes starting from the end of the string, where -1 is the last @@ -27,13 +27,15 @@ The number of bits set to 1. ## Pattern: real time metrics using bitmaps Bitmaps are a very space efficient representation of certain kinds of -information. One example is a web application that needs the history of user -visits, so that for instance it is possible to determine what users are good -targets of beta features, or for any other purpose. +information. +One example is a web application that needs the history of user visits, so that +for instance it is possible to determine what users are good targets of beta +features, or for any other purpose. -Using the `SETBIT` command this is trivial to accomplish, identifying every -day with a small progressive integer. For instance day 0 is the first day the -application was put online, day 1 the next day, and so forth. +Using the `SETBIT` command this is trivial to accomplish, identifying every day +with a small progressive integer. +For instance day 0 is the first day the application was put online, day 1 the +next day, and so forth. Every time an user performs a page view, the application can register that in the current day the user visited the web site using the `SETBIT` command setting @@ -52,8 +54,9 @@ bitmaps][hbgc212fermurb]". 
In the above example of counting days, even after 10 years the application is online we still have just `365*10` bits of data per user, that is just 456 bytes -per user. With this amount of data `BITCOUNT` is still as fast as any other O(1) -Redis command like `GET` or `INCR`. +per user. +With this amount of data `BITCOUNT` is still as fast as any other O(1) Redis +command like `GET` or `INCR`. When the bitmap is big, there are two alternatives: diff --git a/commands/bitop.md b/commands/bitop.md index b9e5f3a9f9..948cce7856 100644 --- a/commands/bitop.md +++ b/commands/bitop.md @@ -41,8 +41,9 @@ size of the longest input string. ## Pattern: real time metrics using bitmaps `BITOP` is a good complement to the pattern documented in the `BITCOUNT` command -documentation. Different bitmaps can be combined in order to obtain a target -bitmap where to perform the population counting operation. +documentation. +Different bitmaps can be combined in order to obtain a target bitmap where to +perform the population counting operation. See the article called "[Fast easy realtime metrics using Redis bitmaps][hbgc212fermurb]" for an interesting use cases. @@ -51,8 +52,8 @@ bitmaps][hbgc212fermurb]" for an interesting use cases. ## Performance considerations -`BITOP` is a potentially slow command as it runs in O(N) time. Care should be -taken when running it against long input strings. +`BITOP` is a potentially slow command as it runs in O(N) time. +Care should be taken when running it against long input strings. For real time metrics and statistics involving large inputs a good approach is to use a slave (with read-only option disabled) where to perform the bit-wise diff --git a/commands/blpop.md b/commands/blpop.md index c500782eee..24ed8f24ec 100644 --- a/commands/blpop.md +++ b/commands/blpop.md @@ -1,7 +1,8 @@ -`BLPOP` is a blocking list pop primitive. 
It is the blocking version of `LPOP` -because it blocks the connection when there are no elements to pop from any of -the given lists. An element is popped from the head of the first list that is -non-empty, with the given keys being checked in the order that they are given. +`BLPOP` is a blocking list pop primitive. +It is the blocking version of `LPOP` because it blocks the connection when there +are no elements to pop from any of the given lists. +An element is popped from the head of the first list that is non-empty, with the +given keys being checked in the order that they are given. ## Non-blocking behavior @@ -9,9 +10,10 @@ When `BLPOP` is called, if at least one of the specified keys contain a non-empty list, an element is popped from the head of the list and returned to the caller together with the `key` it was popped from. -Keys are checked in the order that they are given. Let's say that the key -`list1` doesn't exist and `list2` and `list3` hold non-empty lists. Consider the -following command: +Keys are checked in the order that they are given. +Let's say that the key `list1` doesn't exist and `list2` and `list3` hold +non-empty lists. +Consider the following command: BLPOP list1 list2 list3 0 @@ -32,27 +34,29 @@ the client will unblock returning a `nil` multi-bulk value when the specified timeout has expired without a push operation against at least one of the specified keys. -The timeout argument is interpreted as an integer value. A timeout of zero can -be used to block indefinitely. +The timeout argument is interpreted as an integer value. +A timeout of zero can be used to block indefinitely. ## Multiple clients blocking for the same keys -Multiple clients can block for the same key. They are put into a queue, so the -first to be served will be the one that started to wait earlier, in a first- -`!BLPOP` first-served fashion. +Multiple clients can block for the same key. 
+They are put into a queue, so the first to be served will be the one that +started to wait earlier, in a first- `!BLPOP` first-served fashion. ## `!BLPOP` inside a `!MULTI` / `!EXEC` transaction `BLPOP` can be used with pipelining (sending multiple commands and reading the replies in batch), but it does not make sense to use `BLPOP` inside a `MULTI` / -`EXEC` block. This would require blocking the entire server in order to execute -the block atomically, which in turn does not allow other clients to perform a -push operation. +`EXEC` block. +This would require blocking the entire server in order to execute the block +atomically, which in turn does not allow other clients to perform a push +operation. The behavior of `BLPOP` inside `MULTI` / `EXEC` when the list is empty is to -return a `nil` multi-bulk reply, which is the same thing that happens when -the timeout is reached. If you like science fiction, think of time flowing at -infinite speed inside a `MULTI` / `EXEC` block. +return a `nil` multi-bulk reply, which is the same thing that happens when the +timeout is reached. +If you like science fiction, think of time flowing at infinite speed inside a +`MULTI` / `EXEC` block. @return @@ -76,11 +80,12 @@ infinite speed inside a `MULTI` / `EXEC` block. ## Pattern: Event notification Using blocking list operations it is possible to mount different blocking -primitives. For instance for some application you may need to block waiting for -elements into a Redis Set, so that as far as a new element is added to the Set, -it is possible to retrieve it without resort to polling. This would require -a blocking version of `SPOP` that is not available, but using blocking list -operations we can easily accomplish this task. +primitives. +For instance for some application you may need to block waiting for elements +into a Redis Set, so that as far as a new element is added to the Set, it is +possible to retrieve it without resort to polling. 
+This would require a blocking version of `SPOP` that is not available, but using +blocking list operations we can easily accomplish this task. The consumer will do: diff --git a/commands/brpop.md b/commands/brpop.md index da44a3ae4e..fffc65fc55 100644 --- a/commands/brpop.md +++ b/commands/brpop.md @@ -1,7 +1,8 @@ -`BRPOP` is a blocking list pop primitive. It is the blocking version of `RPOP` -because it blocks the connection when there are no elements to pop from any of -the given lists. An element is popped from the tail of the first list that is -non-empty, with the given keys being checked in the order that they are given. +`BRPOP` is a blocking list pop primitive. +It is the blocking version of `RPOP` because it blocks the connection when there +are no elements to pop from any of the given lists. +An element is popped from the tail of the first list that is non-empty, with the +given keys being checked in the order that they are given. See the [BLPOP documentation][cb] for the exact semantics, since `BRPOP` is identical to `BLPOP` with the only difference being that it pops elements from diff --git a/commands/brpoplpush.md b/commands/brpoplpush.md index 73b726db62..025dc3749c 100644 --- a/commands/brpoplpush.md +++ b/commands/brpoplpush.md @@ -1,7 +1,8 @@ -`BRPOPLPUSH` is the blocking variant of `RPOPLPUSH`. When `source` contains -elements, this command behaves exactly like `RPOPLPUSH`. When `source` is empty, -Redis will block the connection until another client pushes to it or until -`timeout` is reached. A `timeout` of zero can be used to block indefinitely. +`BRPOPLPUSH` is the blocking variant of `RPOPLPUSH`. +When `source` contains elements, this command behaves exactly like `RPOPLPUSH`. +When `source` is empty, Redis will block the connection until another client +pushes to it or until `timeout` is reached. +A `timeout` of zero can be used to block indefinitely. See `RPOPLPUSH` for more information. 
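The non-blocking path of `BLPOP` described above — keys checked in the order given, head popped from the first non-empty list — can be sketched in a few lines of plain Python (a toy model only; a real `BLPOP` would block up to the timeout instead of returning `None`):

```python
def blpop_once(store: dict, keys: list):
    """Toy model of BLPOP's non-blocking path: keys are checked in the
    order that they are given, and the head of the first non-empty
    list is popped and returned together with its key."""
    for key in keys:
        lst = store.get(key)
        if lst:                       # first key holding a non-empty list
            return key, lst.pop(0)    # pop from the head (left side)
    return None                       # real BLPOP would block here
```

This mirrors the `BLPOP list1 list2 list3 0` example: with `list1` missing and `list2` non-empty, the element comes from `list2` even if `list3` also has elements.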
diff --git a/commands/config get.md b/commands/config get.md index 588d46fd76..c35cd8af1f 100644 --- a/commands/config get.md +++ b/commands/config get.md @@ -1,14 +1,15 @@ The `CONFIG GET` command is used to read the configuration parameters of a -running Redis server. Not all the configuration parameters are supported in -Redis 2.4, while Redis 2.6 can read the whole configuration of a server using -this command. +running Redis server. +Not all the configuration parameters are supported in Redis 2.4, while Redis 2.6 +can read the whole configuration of a server using this command. The symmetric command used to alter the configuration at run time is `CONFIG SET`. -`CONFIG GET` takes a single argument, that is glob style pattern. All the -configuration parameters matching this parameter are reported as a list of -key-value pairs. Example: +`CONFIG GET` takes a single argument, that is glob style pattern. +All the configuration parameters matching this parameter are reported as a list +of key-value pairs. +Example: redis> config get *max-*-entries* 1) "hash-max-zipmap-entries" @@ -31,8 +32,8 @@ following important differences: the `redis.conf` abbreviated form (10k 2gb ... and so forth), everything should be specified as a well formed 64 bit integer, in the base unit of the configuration directive. -* The save parameter is a single string of space separated integers. Every pair - of integers represent a seconds/modifications threshold. +* The save parameter is a single string of space separated integers. + Every pair of integers represent a seconds/modifications threshold. For instance what in `redis.conf` looks like: diff --git a/commands/config set.md b/commands/config set.md index 084c88aabc..dc845853c7 100644 --- a/commands/config set.md +++ b/commands/config set.md @@ -1,6 +1,7 @@ The `CONFIG SET` command is used in order to reconfigure the server at run time -without the need to restart Redis. 
You can change both trivial parameters or -switch from one to another persistence option using this command. +without the need to restart Redis. +You can change both trivial parameters or switch from one to another persistence +option using this command. The list of configuration parameters supported by `CONFIG SET` can be obtained issuing a `CONFIG GET *` command, that is the symmetrical command used to obtain @@ -20,8 +21,8 @@ following important differences: the `redis.conf` abbreviated form (10k 2gb ... and so forth), everything should be specified as a well formed 64 bit integer, in the base unit of the configuration directive. -* The save parameter is a single string of space separated integers. Every pair - of integers represent a seconds/modifications threshold. +* The save parameter is a single string of space separated integers. + Every pair of integers represent a seconds/modifications threshold. For instance what in `redis.conf` looks like: @@ -33,8 +34,8 @@ and after 300 seconds if there are at least 10 changes to the datasets, should be set using `CONFIG SET` as "900 1 300 10". It is possible to switch persistence from RDB snapshotting to append only file -(and the other way around) using the `CONFIG SET` command. For more information -about how to do that please check [persistence page][tp]. +(and the other way around) using the `CONFIG SET` command. +For more information about how to do that please check [persistence page][tp]. [tp]: /topics/persistence @@ -49,5 +50,5 @@ options are not mutually exclusive. @return -@status-reply: `OK` when the configuration was set properly. Otherwise an error -is returned. +@status-reply: `OK` when the configuration was set properly. +Otherwise an error is returned. diff --git a/commands/debug object.md b/commands/debug object.md index 4d3cf6de22..ffc969d8ab 100644 --- a/commands/debug object.md +++ b/commands/debug object.md @@ -1,4 +1,4 @@ -`DEBUG OBJECT` is a debugging command that should not be used by clients. 
Check -the `OBJECT` command instead. +`DEBUG OBJECT` is a debugging command that should not be used by clients. +Check the `OBJECT` command instead. @status-reply diff --git a/commands/debug segfault.md b/commands/debug segfault.md index c01d06a38c..7524c166f2 100644 --- a/commands/debug segfault.md +++ b/commands/debug segfault.md @@ -1,4 +1,4 @@ -`DEBUG SEGFAULT` performs an invalid memory access that crashes Redis. It is -used to simulate bugs during the development. +`DEBUG SEGFAULT` performs an invalid memory access that crashes Redis. +It is used to simulate bugs during the development. @status-reply diff --git a/commands/decr.md b/commands/decr.md index 038ba06b56..875d553b15 100644 --- a/commands/decr.md +++ b/commands/decr.md @@ -1,7 +1,8 @@ -Decrements the number stored at `key` by one. If the key does not exist, it -is set to `0` before performing the operation. An error is returned if the -key contains a value of the wrong type or contains a string that can not be -represented as integer. This operation is limited to **64 bit signed integers**. +Decrements the number stored at `key` by one. +If the key does not exist, it is set to `0` before performing the operation. +An error is returned if the key contains a value of the wrong type or contains a +string that can not be represented as integer. +This operation is limited to **64 bit signed integers**. See `INCR` for extra information on increment/decrement operations. diff --git a/commands/decrby.md b/commands/decrby.md index 16a77dc814..d2493dc9d0 100644 --- a/commands/decrby.md +++ b/commands/decrby.md @@ -1,7 +1,8 @@ -Decrements the number stored at `key` by `decrement`. If the key does not exist, -it is set to `0` before performing the operation. An error is returned if the -key contains a value of the wrong type or contains a string that can not be -represented as integer. This operation is limited to 64 bit signed integers. +Decrements the number stored at `key` by `decrement`. 
+If the key does not exist, it is set to `0` before performing the operation. +An error is returned if the key contains a value of the wrong type or contains a +string that can not be represented as integer. +This operation is limited to 64 bit signed integers. See `INCR` for extra information on increment/decrement operations. diff --git a/commands/del.md b/commands/del.md index 6d20c0c314..c37b3f3e1d 100644 --- a/commands/del.md +++ b/commands/del.md @@ -1,4 +1,5 @@ -Removes the specified keys. A key is ignored if it does not exist. +Removes the specified keys. +A key is ignored if it does not exist. @return diff --git a/commands/dump.md b/commands/dump.md index 58ec12a22e..fd92bb9bd6 100644 --- a/commands/dump.md +++ b/commands/dump.md @@ -1,20 +1,23 @@ Serialize the value stored at key in a Redis-specific format and return it to -the user. The returned value can be synthesized back into a Redis key using the -`RESTORE` command. +the user. +The returned value can be synthesized back into a Redis key using the `RESTORE` +command. The serialization format is opaque and non-standard, however it has a few semantical characteristics: * It contains a 64bit checksum that is used to make sure errors will be - detected. The `RESTORE` command makes sure to check the checksum before - synthesizing a key using the serialized value. + detected. + The `RESTORE` command makes sure to check the checksum before synthesizing a + key using the serialized value. * Values are encoded in the same format used by RDB. * An RDB version is encoded inside the serialized value, so that different Redis versions with incompatible RDB formats will refuse to process the serialized value. -The serialized value does NOT contain expire information. In order to capture -the time to live of the current value the `PTTL` command should be used. +The serialized value does NOT contain expire information. +In order to capture the time to live of the current value the `PTTL` command +should be used. 
If `key` does not exist a nil bulk reply is returned.

diff --git a/commands/eval.md b/commands/eval.md
index 277e46cf4e..602c30df6e 100644
--- a/commands/eval.md
+++ b/commands/eval.md
@@ -3,14 +3,14 @@
`EVAL` and `EVALSHA` are used to evaluate scripts using the Lua interpreter
built into Redis starting from version 2.6.0.

-The first argument of `EVAL` is a Lua 5.1 script. The script does not need to
-define a Lua function (and should not). It is just a Lua program that will run
-in the context of the Redis server.
+The first argument of `EVAL` is a Lua 5.1 script.
+The script does not need to define a Lua function (and should not).
+It is just a Lua program that will run in the context of the Redis server.

-The second argument of `EVAL` is the number of arguments that follows the
-script (starting from the third argument) that represent Redis key names. This
-arguments can be accessed by Lua using the `KEYS` global variable in the form of
-a one-based array (so `KEYS[1]`, `KEYS[2]`, ...).
+The second argument of `EVAL` is the number of arguments that follow the script
+(starting from the third argument) that represent Redis key names.
+These arguments can be accessed by Lua using the `KEYS` global variable in the
+form of a one-based array (so `KEYS[1]`, `KEYS[2]`, ...).

All the additional arguments should not represent key names and can be accessed
by Lua using the `ARGV` global variable, very similarly to what happens with
@@ -46,9 +46,9 @@ the arguments of a well formed Redis command:

    > eval "return redis.call('set','foo','bar')" 0
    OK

-The above script actually sets the key `foo` to the string `bar`. However it
-violates the `EVAL` command semantics as all the keys that the script uses
-should be passed using the KEYS array, in the following way:
+The above script actually sets the key `foo` to the string `bar`.
+However it violates the `EVAL` command semantics as all the keys that the script +uses should be passed using the KEYS array, in the following way: > eval "return redis.call('set',KEYS[1],'bar')" 1 foo OK @@ -57,11 +57,12 @@ The reason for passing keys in the proper way is that, before of `EVAL` all the Redis commands could be analyzed before execution in order to establish what are the keys the command will operate on. -In order for this to be true for `EVAL` also keys must be explicit. This is -useful in many ways, but especially in order to make sure Redis Cluster is able -to forward your request to the appropriate cluster node (Redis Cluster is a -work in progress, but the scripting feature was designed in order to play well -with it). However this rule is not enforced in order to provide the user with +In order for this to be true for `EVAL` also keys must be explicit. +This is useful in many ways, but especially in order to make sure Redis Cluster +is able to forward your request to the appropriate cluster node (Redis Cluster +is a work in progress, but the scripting feature was designed in order to play +well with it). +However this rule is not enforced in order to provide the user with opportunities to abuse the Redis single instance configuration, at the cost of writing scripts not compatible with Redis Cluster. @@ -71,16 +72,17 @@ protocol using a set of conversion rules. ## Conversion between Lua and Redis data types Redis return values are converted into Lua data types when Lua calls a Redis -command using call() or pcall(). Similarly Lua data types are converted into -Redis protocol when a Lua script returns some value, so that scripts can control -what `EVAL` will reply to the client. +command using call() or pcall(). +Similarly Lua data types are converted into Redis protocol when a Lua script +returns some value, so that scripts can control what `EVAL` will reply to the +client. 
This conversion between data types is designed in a way that if a Redis type is converted into a Lua type, and then the result is converted back into a Redis type, the result is the same as of the initial value. -In other words there is a one to one conversion between Lua and Redis types. The -following table shows you all the conversions rules: +In other words there is a one to one conversion between Lua and Redis types. +The following table shows you all the conversions rules: **Redis to Lua** conversion table. @@ -125,17 +127,17 @@ what the called command would return if called directly. ## Atomicity of scripts -Redis uses the same Lua interpreter to run all the commands. Also Redis -guarantees that a script is executed in an atomic way: no other script or Redis -command will be executed while a script is being executed. This semantics is -very similar to the one of `MULTI` / `EXEC`. From the point of view of all the -other clients the effects of a script are either still not visible or already -completed. +Redis uses the same Lua interpreter to run all the commands. +Also Redis guarantees that a script is executed in an atomic way: no other +script or Redis command will be executed while a script is being executed. +This semantics is very similar to the one of `MULTI` / `EXEC`. +From the point of view of all the other clients the effects of a script are +either still not visible or already completed. -However this also means that executing slow scripts is not a good idea. It is -not hard to create fast scripts, as the script overhead is very low, but if -you are going to use slow scripts you should be aware that while the script is -running no other client can execute commands since the server is busy. +However this also means that executing slow scripts is not a good idea. 
+It is not hard to create fast scripts, as the script overhead is very low, but +if you are going to use slow scripts you should be aware that while the script +is running no other client can execute commands since the server is busy. ## Error handling @@ -157,10 +159,10 @@ object returned by `redis.pcall()`. ## Bandwidth and EVALSHA -The `EVAL` command forces you to send the script body again and again. Redis -does not need to recompile the script every time as it uses an internal caching -mechanism, however paying the cost of the additional bandwidth may not be -optimal in many contexts. +The `EVAL` command forces you to send the script body again and again. +Redis does not need to recompile the script every time as it uses an internal +caching mechanism, however paying the cost of the additional bandwidth may not +be optimal in many contexts. On the other hand defining commands using a special command or via `redis.conf` would be a problem for a few reasons: @@ -177,7 +179,8 @@ In order to avoid the above three problems and at the same time don't incur in the bandwidth penalty, Redis implements the `EVALSHA` command. `EVALSHA` works exactly as `EVAL`, but instead of having a script as first -argument it has the SHA1 sum of a script. The behavior is the following: +argument it has the SHA1 sum of a script. +The behavior is the following: * If the server still remembers a script whose SHA1 sum was the one specified, the script is executed. @@ -198,8 +201,8 @@ Example: The client library implementation can always optimistically send `EVALSHA` under the hoods even when the client actually called `EVAL`, in the hope the script -was already seen by the server. If the `NOSCRIPT` error is returned `EVAL` will -be used instead. +was already seen by the server. +If the `NOSCRIPT` error is returned `EVAL` will be used instead. 
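The optimistic `EVALSHA`-with-fallback strategy described above can be sketched client-side as follows. The `conn` object and its `execute` method are hypothetical stand-ins for a real client connection; the SHA1 hex digest of the script body is the part that matches how `EVALSHA` identifies scripts:

```python
import hashlib

def eval_with_fallback(conn, script, numkeys, *args):
    """Optimistically send EVALSHA with the SHA1 hex digest of the
    script body; if the server replies with a NOSCRIPT error, fall
    back to a plain EVAL (which also caches the script server-side)."""
    digest = hashlib.sha1(script.encode()).hexdigest()
    try:
        return conn.execute("EVALSHA", digest, numkeys, *args)
    except Exception as err:
        if "NOSCRIPT" not in str(err):
            raise
        # Server has not seen this script yet: send the full body once.
        return conn.execute("EVAL", script, numkeys, *args)
```

With this wrapper the full script body crosses the wire at most once per server-side cache miss, as the paragraph above describes.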
Passing keys and arguments as `EVAL` additional arguments is also very useful in this context as the script string remains constant and can be efficiently cached @@ -207,64 +210,74 @@ by Redis. ## Script cache semantics -Executed scripts are guaranteed to be in the script cache **forever**. This -means that if an `EVAL` is performed against a Redis instance all the subsequent -`EVALSHA` calls will succeed. +Executed scripts are guaranteed to be in the script cache **forever**. +This means that if an `EVAL` is performed against a Redis instance all the +subsequent `EVALSHA` calls will succeed. The only way to flush the script cache is by explicitly calling the SCRIPT FLUSH command, that will _completely flush_ the scripts cache removing all the scripts -executed so far. This is usually needed only when the instance is going to be -instantiated for another customer or application in a cloud environment. +executed so far. +This is usually needed only when the instance is going to be instantiated for +another customer or application in a cloud environment. The reason why scripts can be cached for long time is that it is unlikely for a well written application to have so many different scripts to create memory -problems. Every script is conceptually like the implementation of a new command, -and even a large application will likely have just a few hundreds of that. Even -if the application is modified many times and scripts will change, still the -memory used is negligible. +problems. +Every script is conceptually like the implementation of a new command, and even +a large application will likely have just a few hundreds of that. +Even if the application is modified many times and scripts will change, still +the memory used is negligible. The fact that the user can count on Redis not removing scripts is semantically a -very good thing. 
For instance an application taking a persistent connection to -Redis can stay sure that if a script was sent once it is still in memory, thus -for instance can use EVALSHA against those scripts in a pipeline without the -chance that an error will be generated since the script is not known (we'll see -this problem in its details later). +very good thing. +For instance an application taking a persistent connection to Redis can stay +sure that if a script was sent once it is still in memory, thus for instance can +use EVALSHA against those scripts in a pipeline without the chance that an error +will be generated since the script is not known (we'll see this problem in its +details later). ## The SCRIPT command Redis offers a SCRIPT command that can be used in order to control the scripting -subsystem. SCRIPT currently accepts three different commands: - -* SCRIPT FLUSH. This command is the only way to force Redis to flush the scripts - cache. It is mostly useful in a cloud environment where the same instance - can be reassigned to a different user. It is also useful for testing client - libraries implementations of the scripting feature. - -* SCRIPT EXISTS _sha1_ _sha2_... _shaN_. Given a list of SHA1 digests as - arguments this command returns an array of 1 or 0, where 1 means the specific - SHA1 is recognized as a script already present in the scripting cache, while - 0 means that a script with this SHA1 was never seen before (or at least never - seen after the latest SCRIPT FLUSH command). - -* SCRIPT LOAD _script_. This command registers the specified script in the - Redis script cache. The command is useful in all the contexts where we want - to make sure that `EVALSHA` will not fail (for instance during a pipeline or - MULTI/EXEC operation), without the need to actually execute the script. - -* SCRIPT KILL. This command is the only wait to interrupt a long running script - that reached the configured maximum execution time for scripts. 
The SCRIPT
-  KILL command can only be used with scripts that did not modified the dataset
-  during their execution (since stopping a read only script does not violate
-  the scripting engine guaranteed atomicity). See the next sections for more
-  information about long running scripts.
+subsystem.
+SCRIPT currently accepts four different commands:
+
+* SCRIPT FLUSH.
+  This command is the only way to force Redis to flush the scripts cache.
+  It is mostly useful in a cloud environment where the same instance can be
+  reassigned to a different user.
+  It is also useful for testing client libraries implementations of the
+  scripting feature.
+
+* SCRIPT EXISTS _sha1_ _sha2_... _shaN_.
+  Given a list of SHA1 digests as arguments this command returns an array of
+  1 or 0, where 1 means the specific SHA1 is recognized as a script already
+  present in the scripting cache, while 0 means that a script with this SHA1
+  was never seen before (or at least never seen after the latest SCRIPT FLUSH
+  command).
+
+* SCRIPT LOAD _script_.
+  This command registers the specified script in the Redis script cache.
+  The command is useful in all the contexts where we want to make sure that
+  `EVALSHA` will not fail (for instance during a pipeline or MULTI/EXEC
+  operation), without the need to actually execute the script.
+
+* SCRIPT KILL.
+  This command is the only way to interrupt a long running script that reached
+  the configured maximum execution time for scripts.
+  The SCRIPT KILL command can only be used with scripts that did not modify
+  the dataset during their execution (since stopping a read only script does not
+  violate the scripting engine guaranteed atomicity).
+  See the next sections for more information about long running scripts.

## Scripts as pure functions

A very important part of scripting is writing scripts that are pure functions.

Scripts executed in a Redis instance are replicated on slaves sending the same
-script, instead of the resulting commands. 
The same happens for the Append Only -File. The reason is that scripts are much faster than sending commands one after -the other to a Redis instance, so if the client is taking the master very busy +script, instead of the resulting commands. +The same happens for the Append Only File. +The reason is that scripts are much faster than sending commands one after the +other to a Redis instance, so if the client is taking the master very busy sending scripts, turning this scripts into single commands for the slave / AOF would result in too much bandwidth for the replication link or the Append Only File (and also too much CPU since dispatching a command received via network @@ -275,10 +288,11 @@ The only drawback with this approach is that scripts are required to have the following property: * The script always evaluates the same Redis _write_ commands with the same - arguments given the same input data set. Operations performed by the script - cannot depend on any hidden (non explicit) information or state that may - change as script execution proceeds or between different executions of the - script, nor can it depend on any external input from I/O devices. + arguments given the same input data set. + Operations performed by the script cannot depend on any hidden (non explicit) + information or state that may change as script execution proceeds or between + different executions of the script, nor can it depend on any external input + from I/O devices. Things like using the system time, calling Redis random commands like `RANDOMKEY`, or using Lua random number generator, could result into scripts @@ -291,29 +305,31 @@ In order to enforce this behavior in scripts Redis does the following: * Redis will block the script with an error if a script will call a Redis command able to alter the data set **after** a Redis _random_ command like - `RANDOMKEY`, `SRANDMEMBER`, `TIME`. 
This means that if a script is read only - and does not modify the data set it is free to call those commands. Note that - a _random command_ does not necessarily identifies a command that uses random - numbers: any non deterministic command is considered a random command (the - best example in this regard is the `TIME` command). + `RANDOMKEY`, `SRANDMEMBER`, `TIME`. + This means that if a script is read only and does not modify the data set it + is free to call those commands. + Note that a _random command_ does not necessarily identifies a command that + uses random numbers: any non deterministic command is considered a random + command (the best example in this regard is the `TIME` command). * Redis commands that may return elements in random order, like `SMEMBERS` (because Redis Sets are _unordered_) have a different behavior when called from Lua, and undergone a silent lexicographical sorting filter before - returning data to Lua scripts. So `redis.call("smembers",KEYS[1])` will always - return the Set elements in the same order, while the same command invoked from - normal clients may return different results even if the key contains exactly - the same elements. + returning data to Lua scripts. + So `redis.call("smembers",KEYS[1])` will always return the Set elements in + the same order, while the same command invoked from normal clients may return + different results even if the key contains exactly the same elements. * Lua pseudo random number generation functions `math.random` and `math.randomseed` are modified in order to always have the same seed every - time a new script is executed. This means that calling `math.random` will - always generate the same sequence of numbers every time a script is executed - if `math.randomseed` is not used. + time a new script is executed. + This means that calling `math.random` will always generate the same sequence + of numbers every time a script is executed if `math.randomseed` is not used. 
-However the user is still able to write commands with random behaviors using -the following simple trick. Imagine I want to write a Redis script that will -populate a list with N random integers. +However the user is still able to write commands with random behaviors using the +following simple trick. +Imagine I want to write a Redis script that will populate a list with N random +integers. I can start writing the following script, using a small Ruby program: @@ -353,7 +369,8 @@ following elements: In order to make it a pure function, but still making sure that every invocation of the script will result in different random elements, we can simply add an additional argument to the script, that will be used in order to seed the Lua -pseudo random number generator. The new script will be like the following: +pseudo random number generator. +The new script will be like the following: RandomPushScript = <= 2.4`: Accepts multiple `field` arguments. Redis versions older than 2.4 - can only remove a field per call. +* `>= 2.4`: Accepts multiple `field` arguments. + Redis versions older than 2.4 can only remove a field per call. To remove multiple fields from a hash in an atomic fashion in earlier versions, use a `MULTI` / `EXEC` block. diff --git a/commands/hgetall.md b/commands/hgetall.md index 6a858f0035..84ea9a3604 100644 --- a/commands/hgetall.md +++ b/commands/hgetall.md @@ -1,6 +1,6 @@ -Returns all fields and values of the hash stored at `key`. In the returned -value, every field name is followed by its value, so the length of the reply is -twice the size of the hash. +Returns all fields and values of the hash stored at `key`. +In the returned value, every field name is followed by its value, so the length +of the reply is twice the size of the hash. 
@return diff --git a/commands/hincrby.md b/commands/hincrby.md index 75c49aab02..22a4124060 100644 --- a/commands/hincrby.md +++ b/commands/hincrby.md @@ -1,5 +1,6 @@ Increments the number stored at `field` in the hash stored at `key` by -`increment`. If `key` does not exist, a new key holding a hash is created. +`increment`. +If `key` does not exist, a new key holding a hash is created. If `field` does not exist the value is set to `0` before the operation is performed. diff --git a/commands/hincrbyfloat.md b/commands/hincrbyfloat.md index b80f7b619d..cd86669750 100644 --- a/commands/hincrbyfloat.md +++ b/commands/hincrbyfloat.md @@ -1,7 +1,7 @@ -Increment the specified `field` of an hash stored at `key`, and representing -a floating point number, by the specified `increment`. If the field does not -exist, it is set to `0` before performing the operation. An error is returned if -one of the following conditions occur: +Increment the specified `field` of an hash stored at `key`, and representing a +floating point number, by the specified `increment`. +If the field does not exist, it is set to `0` before performing the operation. +An error is returned if one of the following conditions occur: * The field contains a value of the wrong type (not a string). * The current field content or the specified increment are not parsable as a diff --git a/commands/hmset.md b/commands/hmset.md index fba775cfaa..9444092e6a 100644 --- a/commands/hmset.md +++ b/commands/hmset.md @@ -1,6 +1,7 @@ Sets the specified fields to their respective values in the hash stored at -`key`. This command overwrites any existing fields in the hash. If `key` does -not exist, a new key holding a hash is created. +`key`. +This command overwrites any existing fields in the hash. +If `key` does not exist, a new key holding a hash is created. 
@return diff --git a/commands/hset.md b/commands/hset.md index cfe15ef113..8a2c299dbd 100644 --- a/commands/hset.md +++ b/commands/hset.md @@ -1,6 +1,6 @@ -Sets `field` in the hash stored at `key` to `value`. If `key` does not exist, a -new key holding a hash is created. If `field` already exists in the hash, it is -overwritten. +Sets `field` in the hash stored at `key` to `value`. +If `key` does not exist, a new key holding a hash is created. +If `field` already exists in the hash, it is overwritten. @return diff --git a/commands/hsetnx.md b/commands/hsetnx.md index 49e98781e4..b24200370b 100644 --- a/commands/hsetnx.md +++ b/commands/hsetnx.md @@ -1,6 +1,7 @@ Sets `field` in the hash stored at `key` to `value`, only if `field` does not -yet exist. If `key` does not exist, a new key holding a hash is created. If -`field` already exists, this operation has no effect. +yet exist. +If `key` does not exist, a new key holding a hash is created. +If `field` already exists, this operation has no effect. @return diff --git a/commands/incr.md b/commands/incr.md index feefe6fcfa..f59b0d6088 100644 --- a/commands/incr.md +++ b/commands/incr.md @@ -1,11 +1,13 @@ -Increments the number stored at `key` by one. If the key does not exist, it -is set to `0` before performing the operation. An error is returned if the -key contains a value of the wrong type or contains a string that can not be -represented as integer. This operation is limited to 64 bit signed integers. +Increments the number stored at `key` by one. +If the key does not exist, it is set to `0` before performing the operation. +An error is returned if the key contains a value of the wrong type or contains a +string that can not be represented as integer. +This operation is limited to 64 bit signed integers. **Note**: this is a string operation because Redis does not have a dedicated -integer type. The the string stored at the key is interpreted as a base-10 **64 -bit signed integer** to execute the operation. 
+integer type.
+The string stored at the key is interpreted as a base-10 **64 bit signed
+integer** to execute the operation.

Redis stores integers in their integer representation, so for string values
that actually hold an integer, there is no overhead for storing the string
@@ -25,9 +27,11 @@ representation of the integer.

## Pattern: Counter

The counter pattern is the most obvious thing you can do with Redis atomic
-increment operations. The idea is simply send an `INCR` command to Redis every
-time an operation occurs. For instance in a web application we may want to know
-how many page views this user did every day of the year.
+increment operations.
+The idea is to simply send an `INCR` command to Redis every time an operation
+occurs.
+For instance in a web application we may want to know how many page views this
+user did every day of the year.

To do so the web application may simply increment a key every time the user
performs a page view, creating the key name concatenating the User ID and a
@@ -42,15 +46,15 @@ This simple pattern can be extended in many ways:
  and reset it to zero.
* Using other atomic increment/decrement commands like `DECR` or `INCRBY` it
  is possible to handle values that may get bigger or smaller depending on the
-  operations performed by the user. Imagine for instance the score of different
-  users in an online game.
+  operations performed by the user.
+  Imagine for instance the score of different users in an online game.

## Pattern: Rate limiter

-The rate limiter pattern is a special counter that is used to limit the rate
-at which an operation can be performed. The classical materialization of this
-pattern involves limiting the number of requests that can be performed against a
-public API.
+The rate limiter pattern is a special counter that is used to limit the rate at
+which an operation can be performed. 
+The classical materialization of this pattern involves limiting the number of
+requests that can be performed against a public API.

We provide two implementations of this pattern using `INCR`, where we assume
that the problem to solve is limiting the number of API calls to a maximum of
@@ -74,9 +78,10 @@ The more simple and direct implementation of this pattern is the following:
        PERFORM_API_CALL()
    END

-Basically we have a counter for every IP, for every different second. But this
-counters are always incremented setting an expire of 10 seconds so that they'll
-be removed by Redis automatically when the current second is a different one.
+Basically we have a counter for every IP, for every different second.
+But these counters are always incremented setting an expire of 10 seconds so
+that they'll be removed by Redis automatically when the current second is a
+different one.

Note the use of `MULTI` and `EXEC` in order to make sure that we'll both
increment and set the expire at every API call.
@@ -84,7 +89,8 @@ increment and set the expire at every API call.

## Pattern: Rate limiter 2

An alternative implementation uses a single counter, but is a bit more complex
-to get it right without race conditions. We'll examine different variants.
+to get it right without race conditions.
+We'll examine different variants.

    FUNCTION LIMIT_API_CALL(ip):
        current = GET(ip)
@@ -99,13 +105,13 @@ to get it right without race conditions. We'll examine different variants.
        END

The counter is created in a way that it only will survive one second, starting
-from the first request performed in the current second. If there are more than
-10 requests in the same second the counter will reach a value greater than 10,
-otherwise it will expire and start again from 0.
+from the first request performed in the current second.
+If there are more than 10 requests in the same second the counter will reach a
+value greater than 10, otherwise it will expire and start again from 0.
-**In the above code there is a race condition**. If for some reason the client
-performs the `INCR` command but does not perform the `EXPIRE` the key will be
-leaked until we'll see the same IP address again.
+**In the above code there is a race condition**.
+If for some reason the client performs the `INCR` command but does not perform
+the `EXPIRE`, the key will be leaked until we see the same IP address again.

This can be fixed easily turning the `INCR` with optional `EXPIRE` into a Lua
script that is sent using the `EVAL` command (only available since Redis version
@@ -118,10 +124,10 @@ script that is send using the `EVAL` command (only available since Redis version
    end

There is a different way to fix this issue without using scripting, but using
-Redis lists instead of counters. The implementation is more complex and uses
-more advanced features but has the advantage of remembering the IP addresses
-of the clients currently performing an API call, that may be useful or not
-depending on the application.
+Redis lists instead of counters.
+The implementation is more complex and uses more advanced features but has the
+advantage of remembering the IP addresses of the clients currently performing an
+API call, that may be useful or not depending on the application.

    FUNCTION LIMIT_API_CALL(ip)
        current = LLEN(ip)
@@ -143,5 +149,6 @@ The `RPUSHX` command only pushes the element if the key already exists.

Note that we have a race here, but it is not a problem: `EXISTS` may return
false but the key may be created by another client before we create it inside
-the `MULTI` / `EXEC` block. However this race will just miss an API call under
-rare conditions, so the rate limiting will still work correctly.
+the `MULTI` / `EXEC` block.
+However this race will just miss an API call under rare conditions, so the rate
+limiting will still work correctly.
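The rate-limiter hunks above all reduce to the same idea: one counter per (IP, second), capped at 10, with expiry so stale counters vanish. The following is a minimal, self-contained Python sketch of that logic; it substitutes an in-memory dict for Redis (so there is no real `INCR`, `EXPIRE`, or `MULTI`/`EXEC`), and the `RateLimiter` name, `MAX_PER_SECOND` constant, and injectable clock are illustrative choices, not part of the documented pattern.

```python
import time

MAX_PER_SECOND = 10  # same limit as the pseudocode above


class RateLimiter:
    """In-memory stand-in for the per-(ip, second) Redis counters."""

    def __init__(self, now=time.time):
        self.now = now        # injectable clock, so the sketch is testable
        self.counters = {}    # {(ip, second): count} stands in for Redis keys

    def allow(self, ip):
        key = (ip, int(self.now()))
        count = self.counters.get(key, 0)
        if count >= MAX_PER_SECOND:
            return False      # over the limit for this second
        # Equivalent of INCR; in Redis an EXPIRE of 10 seconds would also be
        # set (inside MULTI/EXEC) so old per-second counters are evicted.
        self.counters[key] = count + 1
        return True


# usage: 12 calls within one (frozen) second — 10 pass, 2 are rejected
limiter = RateLimiter(now=lambda: 1000.0)
results = [limiter.allow("1.2.3.4") for _ in range(12)]
```

Note that the dict never expires entries; that eviction is exactly what the `EXPIRE` (and the Lua/`EVAL` fix against the `INCR`-without-`EXPIRE` race) provides in the real pattern.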
diff --git a/commands/incrby.md b/commands/incrby.md index e60e45fe9d..8f4d049023 100644 --- a/commands/incrby.md +++ b/commands/incrby.md @@ -1,7 +1,8 @@ -Increments the number stored at `key` by `increment`. If the key does not exist, -it is set to `0` before performing the operation. An error is returned if the -key contains a value of the wrong type or contains a string that can not be -represented as integer. This operation is limited to 64 bit signed integers. +Increments the number stored at `key` by `increment`. +If the key does not exist, it is set to `0` before performing the operation. +An error is returned if the key contains a value of the wrong type or contains a +string that can not be represented as integer. +This operation is limited to 64 bit signed integers. See `INCR` for extra information on increment/decrement operations. diff --git a/commands/incrbyfloat.md b/commands/incrbyfloat.md index 8dcea20f13..cfe606a741 100644 --- a/commands/incrbyfloat.md +++ b/commands/incrbyfloat.md @@ -1,7 +1,7 @@ -Increment the string representing a floating point number stored at `key` -by the specified `increment`. If the key does not exist, it is set to `0` -before performing the operation. An error is returned if one of the following -conditions occur: +Increment the string representing a floating point number stored at `key` by the +specified `increment`. +If the key does not exist, it is set to `0` before performing the operation. +An error is returned if one of the following conditions occur: * The key contains a value of the wrong type (not a string). 
* The current key content or the specified increment are not parsable as a @@ -15,7 +15,8 @@ Both the value already contained in the string key and the increment argument can be optionally provided in exponential notation, however the value computed after the increment is stored consistently in the same format, that is, an integer number followed (if needed) by a dot, and a variable number of digits -representing the decimal part of the number. Trailing zeroes are always removed. +representing the decimal part of the number. +Trailing zeroes are always removed. The precision of the output is fixed at 17 digits after the decimal point regardless of the actual internal precision of the computation. diff --git a/commands/info.md b/commands/info.md index a1b6fffe66..1b65229439 100644 --- a/commands/info.md +++ b/commands/info.md @@ -27,20 +27,24 @@ All the fields are in the form of `field:value` terminated by `\r\n`. as [`tcmalloc`][hcgcpgp] * `used_memory_rss` is the number of bytes that Redis allocated as seen by the - operating system. Optimally, this number is close to `used_memory` and there - is little memory fragmentation. This is the number reported by tools such as - `top` and `ps`. A large difference between these numbers means there is memory - fragmentation. Because Redis does not have control over how its allocations - are mapped to memory pages, `used_memory_rss` is often the result of a spike - in memory usage. The ratio between `used_memory_rss` and `used_memory` is - given as `mem_fragmentation_ratio`. + operating system. + Optimally, this number is close to `used_memory` and there is little memory + fragmentation. + This is the number reported by tools such as `top` and `ps`. + A large difference between these numbers means there is memory fragmentation. + Because Redis does not have control over how its allocations are mapped to + memory pages, `used_memory_rss` is often the result of a spike in memory + usage. 
+ The ratio between `used_memory_rss` and `used_memory` is given as + `mem_fragmentation_ratio`. * `changes_since_last_save` refers to the number of operations that produced some kind of change in the dataset since the last time either `SAVE` or `BGSAVE` was called. * `allocation_stats` holds a histogram containing the number of allocations of a - certain size (up to 256). This provides a means of introspection for the type - of allocations performed by Redis at run time. + certain size (up to 256). + This provides a means of introspection for the type of allocations performed + by Redis at run time. [hcgcpgp]: http://code.google.com/p/google-perftools/ diff --git a/commands/keys.md b/commands/keys.md index 8499438d3a..ef31b7bd86 100644 --- a/commands/keys.md +++ b/commands/keys.md @@ -1,15 +1,18 @@ Returns all keys matching `pattern`. While the time complexity for this operation is O(N), the constant times are -fairly low. For example, Redis running on an entry level laptop can scan a 1 -million key database in 40 milliseconds. +fairly low. +For example, Redis running on an entry level laptop can scan a 1 million key +database in 40 milliseconds. **Warning**: consider `KEYS` as a command that should only be used in production -environments with extreme care. It may ruin performance when it is executed -against large databases. This command is intended for debugging and special -operations, such as changing your keyspace layout. Don't use `KEYS` in your -regular application code. If you're looking for a way to find keys in a subset -of your keyspace, consider using [sets][tdts]. +environments with extreme care. +It may ruin performance when it is executed against large databases. +This command is intended for debugging and special operations, such as changing +your keyspace layout. +Don't use `KEYS` in your regular application code. +If you're looking for a way to find keys in a subset of your keyspace, consider +using [sets][tdts]. 
[tdts]: /topics/data-types#sets diff --git a/commands/lastsave.md b/commands/lastsave.md index 93c5cbcce5..cfec6253ab 100644 --- a/commands/lastsave.md +++ b/commands/lastsave.md @@ -1,7 +1,7 @@ -Return the UNIX TIME of the last DB save executed with success. A client may -check if a `BGSAVE` command succeeded reading the `LASTSAVE` value, then -issuing a `BGSAVE` command and checking at regular intervals every N seconds if -`LASTSAVE` changed. +Return the UNIX TIME of the last DB save executed with success. +A client may check if a `BGSAVE` command succeeded reading the `LASTSAVE` value, +then issuing a `BGSAVE` command and checking at regular intervals every N +seconds if `LASTSAVE` changed. @return diff --git a/commands/lindex.md b/commands/lindex.md index c9bd85ccc3..c96ccfde64 100644 --- a/commands/lindex.md +++ b/commands/lindex.md @@ -1,8 +1,9 @@ -Returns the element at index `index` in the list stored at `key`. The index -is zero-based, so `0` means the first element, `1` the second element and so -on. Negative indices can be used to designate elements starting at the tail of -the list. Here, `-1` means the last element, `-2` means the penultimate and so -forth. +Returns the element at index `index` in the list stored at `key`. +The index is zero-based, so `0` means the first element, `1` the second element +and so on. +Negative indices can be used to designate elements starting at the tail of the +list. +Here, `-1` means the last element, `-2` means the penultimate and so forth. When the value at `key` is not a list, an error is returned. diff --git a/commands/llen.md b/commands/llen.md index b4d7e16319..a41f2ae6a9 100644 --- a/commands/llen.md +++ b/commands/llen.md @@ -1,6 +1,6 @@ -Returns the length of the list stored at `key`. If `key` does not exist, it is -interpreted as an empty list and `0` is returned. An error is returned when the -value stored at `key` is not a list. +Returns the length of the list stored at `key`. 
+If `key` does not exist, it is interpreted as an empty list and `0` is returned. +An error is returned when the value stored at `key` is not a list. @return diff --git a/commands/lpush.md b/commands/lpush.md index f4125de5e4..e29b151417 100644 --- a/commands/lpush.md +++ b/commands/lpush.md @@ -1,13 +1,14 @@ -Insert all the specified values at the head of the list stored at `key`. If -`key` does not exist, it is created as empty list before performing the push -operations. When `key` holds a value that is not a list, an error is returned. +Insert all the specified values at the head of the list stored at `key`. +If `key` does not exist, it is created as empty list before performing the push +operations. +When `key` holds a value that is not a list, an error is returned. It is possible to push multiple elements using a single command call just -specifying multiple arguments at the end of the command. Elements are inserted -one after the other to the head of the list, from the leftmost element to the -rightmost element. So for instance the command `LPUSH mylist a b c` will result -into a list containing `c` as first element, `b` as second element and `a` as -third element. +specifying multiple arguments at the end of the command. +Elements are inserted one after the other to the head of the list, from the +leftmost element to the rightmost element. +So for instance the command `LPUSH mylist a b c` will result into a list +containing `c` as first element, `b` as second element and `a` as third element. @return @@ -15,8 +16,9 @@ third element. @history -* `>= 2.4`: Accepts multiple `value` arguments. In Redis versions older than 2.4 - it was possible to push a single value per command. +* `>= 2.4`: Accepts multiple `value` arguments. + In Redis versions older than 2.4 it was possible to push a single value per + command. 
@examples

diff --git a/commands/lpushx.md b/commands/lpushx.md
index 22da91e651..8376b5de40 100644
--- a/commands/lpushx.md
+++ b/commands/lpushx.md
@@ -1,6 +1,7 @@
Inserts `value` at the head of the list stored at `key`, only if `key` already
-exists and holds a list. In contrary to `LPUSH`, no operation will be performed
-when `key` does not yet exist.
+exists and holds a list.
+Contrary to `LPUSH`, no operation will be performed when `key` does not yet
+exist.

@return

diff --git a/commands/lrange.md b/commands/lrange.md
index 9a6f9c9a85..25936d567f 100644
--- a/commands/lrange.md
+++ b/commands/lrange.md
@@ -1,24 +1,27 @@
-Returns the specified elements of the list stored at `key`. The offsets `start`
-and `stop` are zero-based indexes, with `0` being the first element of the list
-(the head of the list), `1` being the next element and so on.
+Returns the specified elements of the list stored at `key`.
+The offsets `start` and `stop` are zero-based indexes, with `0` being the first
+element of the list (the head of the list), `1` being the next element and so
+on.

These offsets can also be negative numbers indicating offsets starting at the
-end of the list. For example, `-1` is the last element of the list, `-2` the
-penultimate, and so on.
+end of the list.
+For example, `-1` is the last element of the list, `-2` the penultimate, and so
+on.

## Consistency with range functions in various programming languages

-Note that if you have a list of numbers from 0 to 100, `LRANGE list 0 10`
-will return 11 elements, that is, the rightmost item is included. This **may
-or may not** be consistent with behavior of range-related functions in your
-programming language of choice (think Ruby's `Range.new`, `Array#slice` or
-Python's `range()` function).
+Note that if you have a list of numbers from 0 to 100, `LRANGE list 0 10` will
+return 11 elements, that is, the rightmost item is included.
+This **may or may not** be consistent with behavior of range-related functions +in your programming language of choice (think Ruby's `Range.new`, `Array#slice` +or Python's `range()` function). ## Out-of-range indexes -Out of range indexes will not produce an error. If `start` is larger than the -end of the list, an empty list is returned. If `stop` is larger than the actual -end of the list, Redis will treat it like the last element of the list. +Out of range indexes will not produce an error. +If `start` is larger than the end of the list, an empty list is returned. +If `stop` is larger than the actual end of the list, Redis will treat it like +the last element of the list. @return diff --git a/commands/lrem.md b/commands/lrem.md index baa2328857..4fd9b50efc 100644 --- a/commands/lrem.md +++ b/commands/lrem.md @@ -1,6 +1,6 @@ Removes the first `count` occurrences of elements equal to `value` from the list -stored at `key`. The `count` argument influences the operation in the following -ways: +stored at `key`. +The `count` argument influences the operation in the following ways: * `count > 0`: Remove elements equal to `value` moving from head to tail. * `count < 0`: Remove elements equal to `value` moving from tail to head. diff --git a/commands/lset.md b/commands/lset.md index 12fa6e7621..6a4703f416 100644 --- a/commands/lset.md +++ b/commands/lset.md @@ -1,5 +1,5 @@ -Sets the list element at `index` to `value`. For more information on the `index` -argument, see `LINDEX`. +Sets the list element at `index` to `value`. +For more information on the `index` argument, see `LINDEX`. An error is returned for out of range indexes. diff --git a/commands/ltrim.md b/commands/ltrim.md index 08b8f87d40..45c6accf75 100644 --- a/commands/ltrim.md +++ b/commands/ltrim.md @@ -1,6 +1,7 @@ Trim an existing list so that it will contain only the specified range of -elements specified. 
Both `start` and `stop` are zero-based indexes, where `0` is -the first element of the list (the head), `1` the next element and so on. +elements specified. +Both `start` and `stop` are zero-based indexes, where `0` is the first element +of the list (the head), `1` the next element and so on. For example: `LTRIM foobar 0 2` will modify the list stored at `foobar` so that only the first three elements of the list will remain. @@ -11,19 +12,22 @@ element and so on. Out of range indexes will not produce an error: if `start` is larger than the end of the list, or `start > end`, the result will be an empty list (which -causes `key` to be removed). If `end` is larger than the end of the list, Redis -will treat it like the last element of the list. +causes `key` to be removed). +If `end` is larger than the end of the list, Redis will treat it like the last +element of the list. -A common use of `LTRIM` is together with `LPUSH` / `RPUSH`. For example: +A common use of `LTRIM` is together with `LPUSH` / `RPUSH`. +For example: LPUSH mylist someelement LTRIM mylist 0 99 This pair of commands will push a new element on the list, while making sure -that the list will not grow larger than 100 elements. This is very useful when -using Redis to store logs for example. It is important to note that when used -in this way `LTRIM` is an O(1) operation because in the average case just one -element is removed from the tail of the list. +that the list will not grow larger than 100 elements. +This is very useful when using Redis to store logs for example. +It is important to note that when used in this way `LTRIM` is an O(1) operation +because in the average case just one element is removed from the tail of the +list. @return diff --git a/commands/mget.md b/commands/mget.md index eda675894e..fb9c79d286 100644 --- a/commands/mget.md +++ b/commands/mget.md @@ -1,6 +1,7 @@ -Returns the values of all specified keys. 
For every key that does not hold a -string value or does not exist, the special value `nil` is returned. Because of -this, the operation never fails. +Returns the values of all specified keys. +For every key that does not hold a string value or does not exist, the special +value `nil` is returned. +Because of this, the operation never fails. @return diff --git a/commands/migrate.md b/commands/migrate.md index 7c48941885..69736e1f21 100644 --- a/commands/migrate.md +++ b/commands/migrate.md @@ -1,27 +1,28 @@ Atomically transfer a key from a source Redis instance to a destination Redis -instance. On success the key is deleted from the original instance and is -guaranteed to exist in the target instance. +instance. +On success the key is deleted from the original instance and is guaranteed to +exist in the target instance. The command is atomic and blocks the two instances for the time required to transfer the key, at any given time the key will appear to exist in a given instance or in the other instance, unless a timeout error occurs. The command internally uses `DUMP` to generate the serialized version of the key -value, and `RESTORE` in order to synthesize the key in the target instance. The -source instance acts as a client for the target instance. If the target instance -returns OK to the `RESTORE` command, the source instance deletes the key using -`DEL`. +value, and `RESTORE` in order to synthesize the key in the target instance. +The source instance acts as a client for the target instance. +If the target instance returns OK to the `RESTORE` command, the source instance +deletes the key using `DEL`. The timeout specifies the maximum idle time in any moment of the communication -with the destination instance in milliseconds. This means that the operation -does not need to be completed within the specified amount of milliseconds, but -that the transfer should make progresses without blocking for more than the -specified amount of milliseconds. 
+with the destination instance in milliseconds.
+This means that the operation does not need to be completed within the specified
+amount of milliseconds, but that the transfer should make progress without
+blocking for more than the specified amount of milliseconds.

`MIGRATE` needs to perform I/O operations and to honor the specified timeout.
When there is an I/O error during the transfer or if the timeout is reached the
-operation is aborted and the special error - `IOERR` returned. When this happens
-the following two cases are possible:
+operation is aborted and the special error `IOERR` is returned.
+When this happens the following two cases are possible:

* The key may be on both the instances.
* The key may be only in the source instance.

diff --git a/commands/monitor.md b/commands/monitor.md
index 93ebd1af0a..3bd2b1c29e 100644
--- a/commands/monitor.md
+++ b/commands/monitor.md
@@ -1,6 +1,7 @@
-`MONITOR` is a debugging command that streams back every command processed
-by the Redis server. It can help in understanding what is happening to the
-database. This command can both be used via `redis-cli` and via `telnet`.
+`MONITOR` is a debugging command that streams back every command processed by
+the Redis server.
+It can help in understanding what is happening to the database.
+This command can both be used via `redis-cli` and via `telnet`.

The ability to see all the requests processed by the server is useful in order
to spot bugs in an application both when using Redis as a database and as a
@@ -37,9 +38,9 @@ Manually issue the `QUIT` command to stop a `MONITOR` stream running via

## Cost of running `MONITOR`

-Because `MONITOR` streams back **all** commands, its use comes at a cost. The
-following (totally unscientific) benchmark numbers illustrate what the cost of
-running `MONITOR` can be.
+Because `MONITOR` streams back **all** commands, its use comes at a cost.
+The following (totally unscientific) benchmark numbers illustrate what the cost +of running `MONITOR` can be. Benchmark result **without** `MONITOR` running: @@ -60,8 +61,8 @@ Benchmark result **with** `MONITOR` running (`redis-cli monitor > /dev/null`): INCR: 41771.09 requests per second In this particular case, running a single `MONITOR` client can reduce the -throughput by more than 50%. Running more `MONITOR` clients will reduce -throughput even more. +throughput by more than 50%. +Running more `MONITOR` clients will reduce throughput even more. @return diff --git a/commands/move.md b/commands/move.md index f63a045bdb..ceb212caac 100644 --- a/commands/move.md +++ b/commands/move.md @@ -1,7 +1,8 @@ Move `key` from the currently selected database (see `SELECT`) to the specified -destination database. When `key` already exists in the destination database, or -it does not exist in the source database, it does nothing. It is possible to use -`MOVE` as a locking primitive because of this. +destination database. +When `key` already exists in the destination database, or it does not exist in +the source database, it does nothing. +It is possible to use `MOVE` as a locking primitive because of this. @return diff --git a/commands/mset.md b/commands/mset.md index 4a45c03637..76e81d959a 100644 --- a/commands/mset.md +++ b/commands/mset.md @@ -1,9 +1,10 @@ -Sets the given keys to their respective values. `MSET` replaces existing values -with new values, just as regular `SET`. See `MSETNX` if you don't want to -overwrite existing values. +Sets the given keys to their respective values. +`MSET` replaces existing values with new values, just as regular `SET`. +See `MSETNX` if you don't want to overwrite existing values. -`MSET` is atomic, so all given keys are set at once. It is not possible for -clients to see that some of the keys were updated while others are unchanged. +`MSET` is atomic, so all given keys are set at once. 
+It is not possible for clients to see that some of the keys were updated while +others are unchanged. @return diff --git a/commands/msetnx.md b/commands/msetnx.md index e9b656bcfb..1b1c060b0f 100644 --- a/commands/msetnx.md +++ b/commands/msetnx.md @@ -1,12 +1,14 @@ -Sets the given keys to their respective values. `MSETNX` will not perform any -operation at all even if just a single key already exists. +Sets the given keys to their respective values. +`MSETNX` will not perform any operation at all even if just a single key already +exists. Because of this semantic `MSETNX` can be used in order to set different keys representing different fields of an unique logic object in a way that ensures that either all the fields or none at all are set. -`MSETNX` is atomic, so all given keys are set at once. It is not possible for -clients to see that some of the keys were updated while others are unchanged. +`MSETNX` is atomic, so all given keys are set at once. +It is not possible for clients to see that some of the keys were updated while +others are unchanged. @return diff --git a/commands/multi.md b/commands/multi.md index f6c69be303..437a3e1f93 100644 --- a/commands/multi.md +++ b/commands/multi.md @@ -1,5 +1,5 @@ -Marks the start of a [transaction][tt] block. Subsequent commands will be queued -for atomic execution using `EXEC`. +Marks the start of a [transaction][tt] block. +Subsequent commands will be queued for atomic execution using `EXEC`. [tt]: /topics/transactions diff --git a/commands/object.md b/commands/object.md index 35379b01bc..a560dbefd0 100644 --- a/commands/object.md +++ b/commands/object.md @@ -1,14 +1,16 @@ The `OBJECT` command allows to inspect the internals of Redis Objects associated -with keys. It is useful for debugging or to understand if your keys are using -the specially encoded data types to save space. 
Your application may also use -the information reported by the `OBJECT` command to implement application level -key eviction policies when using Redis as a Cache. +with keys. +It is useful for debugging or to understand if your keys are using the specially +encoded data types to save space. +Your application may also use the information reported by the `OBJECT` command +to implement application level key eviction policies when using Redis as a +Cache. The `OBJECT` command supports multiple sub commands: * `OBJECT REFCOUNT ` returns the number of references of the value - associated with the specified key. This command is mainly useful for - debugging. + associated with the specified key. + This command is mainly useful for debugging. * `OBJECT ENCODING ` returns the kind of internal representation used in order to store the value associated with a key. * `OBJECT IDLETIME ` returns the number of seconds since the object stored @@ -21,15 +23,18 @@ Objects can be encoded in different ways: * Strings can be encoded as `raw` (normal string encoding) or `int` (strings representing integers in a 64 bit signed interval are encoded in this way in order to save space). -* Lists can be encoded as `ziplist` or `linkedlist`. The `ziplist` is the - special representation that is used to save space for small lists. -* Sets can be encoded as `intset` or `hashtable`. The `intset` is a special - encoding used for small sets composed solely of integers. -* Hashes can be encoded as `zipmap` or `hashtable`. The `zipmap` is a special - encoding used for small hashes. -* Sorted Sets can be encoded as `ziplist` or `skiplist` format. As for the List - type small sorted sets can be specially encoded using `ziplist`, while the - `skiplist` encoding is the one that works with sorted sets of any size. +* Lists can be encoded as `ziplist` or `linkedlist`. + The `ziplist` is the special representation that is used to save space for + small lists. 
+* Sets can be encoded as `intset` or `hashtable`. + The `intset` is a special encoding used for small sets composed solely of + integers. +* Hashes can be encoded as `zipmap` or `hashtable`. + The `zipmap` is a special encoding used for small hashes. +* Sorted Sets can be encoded as `ziplist` or `skiplist` format. + As for the List type small sorted sets can be specially encoded using + `ziplist`, while the `skiplist` encoding is the one that works with sorted + sets of any size. All the specially encoded types are automatically converted to the general type once you perform an operation that makes it no possible for Redis to retain the diff --git a/commands/ping.md b/commands/ping.md index f80ca6a931..f2405ce93d 100644 --- a/commands/ping.md +++ b/commands/ping.md @@ -1,5 +1,6 @@ -Returns `PONG`. This command is often used to test if a connection is still -alive, or to measure latency. +Returns `PONG`. +This command is often used to test if a connection is still alive, or to measure +latency. @return diff --git a/commands/punsubscribe.md b/commands/punsubscribe.md index 449fddcfec..2dffcd53cd 100644 --- a/commands/punsubscribe.md +++ b/commands/punsubscribe.md @@ -2,5 +2,6 @@ Unsubscribes the client from the given patterns, or from all of them if none is given. When no patters are specified, the client is unsubscribed from all the -previously subscribed patterns. In this case, a message for every unsubscribed -pattern will be sent to the client. +previously subscribed patterns. +In this case, a message for every unsubscribed pattern will be sent to the +client. diff --git a/commands/quit.md b/commands/quit.md index f36b86a1ce..9ade700edc 100644 --- a/commands/quit.md +++ b/commands/quit.md @@ -1,5 +1,6 @@ -Ask the server to close the connection. The connection is closed as soon as all -pending replies have been written to the client. +Ask the server to close the connection. +The connection is closed as soon as all pending replies have been written to the +client. 
@return diff --git a/commands/rename.md b/commands/rename.md index 1dd586018a..0f80d67b81 100644 --- a/commands/rename.md +++ b/commands/rename.md @@ -1,6 +1,7 @@ -Renames `key` to `newkey`. It returns an error when the source and destination -names are the same, or when `key` does not exist. If `newkey` already exists it -is overwritten. +Renames `key` to `newkey`. +It returns an error when the source and destination names are the same, or when +`key` does not exist. +If `newkey` already exists it is overwritten. @return diff --git a/commands/renamenx.md b/commands/renamenx.md index 1128bf6757..737c9dfdf0 100644 --- a/commands/renamenx.md +++ b/commands/renamenx.md @@ -1,5 +1,5 @@ -Renames `key` to `newkey` if `newkey` does not yet exist. It returns an error -under the same conditions as `RENAME`. +Renames `key` to `newkey` if `newkey` does not yet exist. +It returns an error under the same conditions as `RENAME`. @return diff --git a/commands/restore.md b/commands/restore.md index 9b04f820fe..861dcbfbad 100644 --- a/commands/restore.md +++ b/commands/restore.md @@ -4,8 +4,8 @@ provided serialized value (obtained via `DUMP`). If `ttl` is 0 the key is created without any expire, otherwise the specified expire time (in milliseconds) is set. -`RESTORE` checks the RDB version and data checksum. If they don't match an error -is returned. +`RESTORE` checks the RDB version and data checksum. +If they don't match an error is returned. @return diff --git a/commands/rpoplpush.md b/commands/rpoplpush.md index a6644658a5..34c393bac2 100644 --- a/commands/rpoplpush.md +++ b/commands/rpoplpush.md @@ -3,13 +3,15 @@ Atomically returns and removes the last element (tail) of the list stored at at `destination`. For example: consider `source` holding the list `a,b,c`, and `destination` -holding the list `x,y,z`. Executing `RPOPLPUSH` results in `source` holding -`a,b` and `destination` holding `c,x,y,z`. +holding the list `x,y,z`. 
+Executing `RPOPLPUSH` results in `source` holding `a,b` and `destination` +holding `c,x,y,z`. -If `source` does not exist, the value `nil` is returned and no operation -is performed. If `source` and `destination` are the same, the operation is -equivalent to removing the last element from the list and pushing it as first -element of the list, so it can be considered as a list rotation command. +If `source` does not exist, the value `nil` is returned and no operation is +performed. +If `source` and `destination` are the same, the operation is equivalent to +removing the last element from the list and pushing it as first element of the +list, so it can be considered as a list rotation command. @return @@ -28,10 +30,11 @@ element of the list, so it can be considered as a list rotation command. ## Pattern: Reliable queue Redis is often used as a messaging server to implement processing of background -jobs or other kinds of messaging tasks. A simple form of queue is often obtained -pushing values into a list in the producer side, and waiting for this values -in the consumer side using `RPOP` (using polling), or `BRPOP` if the client is -better served by a blocking operation. +jobs or other kinds of messaging tasks. +A simple form of queue is often obtained by pushing values into a list on the +producer side, and waiting for these values on the consumer side using `RPOP` +(using polling), or `BRPOP` if the client is better served by a blocking +operation. However in this context the obtained queue is not _reliable_ as messages can be lost, for example in the case there is a network problem or if the consumer @@ -39,8 +42,9 @@ crashes just after the message is received but it is still to process. `RPOPLPUSH` (or `BRPOPLPUSH` for the blocking variant) offers a way to avoid this problem: the consumer fetches the message and at the same time pushes it 
It will use the `LREM` command in order to remove the -message from the _processing_ list once the message has been processed. +into a _processing_ list. +It will use the `LREM` command in order to remove the message from the +_processing_ list once the message has been processed. An additional client may monitor the _processing_ list for items that remain there for too much time, and will push those timed out items into the queue @@ -55,13 +59,13 @@ operation. The above pattern works even if the following two conditions: * There are multiple clients rotating the list: they'll fetch different elements, until all -the elements of the list are visited, and the process restarts. * Even if other -clients are actively pushing new items at the end of the list. +the elements of the list are visited, and the process restarts. +* Even if other clients are actively pushing new items at the end of the list. The above makes it very simple to implement a system where a set of items must -be processed by N workers continuously as fast as possible. An example is a -monitoring system that must check that a set of web sites are reachable, with -the smallest delay possible, using a number of parallel workers. +be processed by N workers continuously as fast as possible. +An example is a monitoring system that must check that a set of web sites are +reachable, with the smallest delay possible, using a number of parallel workers. Note that this implementation of workers is trivially scalable and reliable, because even if a message is lost the item is still in the queue and will be diff --git a/commands/rpush.md b/commands/rpush.md index a655598e0a..6ed764f650 100644 --- a/commands/rpush.md +++ b/commands/rpush.md @@ -1,13 +1,14 @@ -Insert all the specified values at the tail of the list stored at `key`. If -`key` does not exist, it is created as empty list before performing the push -operation. When `key` holds a value that is not a list, an error is returned. 
+Insert all the specified values at the tail of the list stored at `key`. +If `key` does not exist, it is created as an empty list before performing the push +operation. +When `key` holds a value that is not a list, an error is returned. It is possible to push multiple elements using a single command call just -specifying multiple arguments at the end of the command. Elements are inserted -one after the other to the tail of the list, from the leftmost element to the -rightmost element. So for instance the command `RPUSH mylist a b c` will result -into a list containing `a` as first element, `b` as second element and `c` as -third element. +specifying multiple arguments at the end of the command. +Elements are inserted one after the other to the tail of the list, from the +leftmost element to the rightmost element. +So for instance the command `RPUSH mylist a b c` will result in a list +containing `a` as first element, `b` as second element and `c` as third element. @return @history -* `>= 2.4`: Accepts multiple `value` arguments. In Redis versions older than 2.4 - it was possible to push a single value per command. +* `>= 2.4`: Accepts multiple `value` arguments. + In Redis versions older than 2.4 it was possible to push a single value per + command. @examples diff --git a/commands/rpushx.md b/commands/rpushx.md index a7f8d04a73..5375485707 100644 --- a/commands/rpushx.md +++ b/commands/rpushx.md @@ -1,6 +1,7 @@ Inserts `value` at the tail of the list stored at `key`, only if `key` already -exists and holds a list. In contrary to `RPUSH`, no operation will be performed -when `key` does not yet exist. +exists and holds a list. +Contrary to `RPUSH`, no operation will be performed when `key` does not yet +exist. @return diff --git a/commands/sadd.md b/commands/sadd.md index e6ad0cf3d4..92de4c30ad 100644 --- a/commands/sadd.md +++ b/commands/sadd.md @@ -1,6 +1,7 @@ -Add the specified members to the set stored at `key`. 
Specified members that are -already a member of this set are ignored. If `key` does not exist, a new set is -created before adding the specified members. +Add the specified members to the set stored at `key`. +Specified members that are already a member of this set are ignored. +If `key` does not exist, a new set is created before adding the specified +members. An error is returned when the value stored at `key` is not a set. @@ -11,8 +12,8 @@ all the elements already present into the set. @history -* `>= 2.4`: Accepts multiple `member` arguments. Redis versions before 2.4 are - only able to add a single member per call. +* `>= 2.4`: Accepts multiple `member` arguments. + Redis versions before 2.4 are only able to add a single member per call. @examples diff --git a/commands/save.md b/commands/save.md index 6a20aad3d1..783c9d117c 100644 --- a/commands/save.md +++ b/commands/save.md @@ -3,10 +3,11 @@ _point in time_ snapshot of all the data inside the Redis instance, in the form of an RDB file. You almost never want to call `SAVE` in production environments where it will -block all the other clients. Instead usually `BGSAVE` is used. However in case -of issues preventing Redis to create the background saving child (for instance -errors in the fork(2) system call), the `SAVE` command can be a good last resort -to perform the dump of the latest dataset. +block all the other clients. +Instead, `BGSAVE` is usually used. +However in case of issues preventing Redis from creating the background saving child +(for instance errors in the fork(2) system call), the `SAVE` command can be a +good last resort to perform the dump of the latest dataset. Please refer to the [persistence documentation][tp] for detailed information. diff --git a/commands/script exists.md b/commands/script exists.md index 17aac20883..b1f7cf7e37 100644 --- a/commands/script exists.md +++ b/commands/script exists.md @@ -13,9 +13,9 @@ Lua scripting. 
@return @multi-bulk-reply The command returns an array of integers that correspond to -the specified SHA1 sum arguments. For every corresponding SHA1 sum of a script -that actually exists in the script cache, an 1 is returned, otherwise 0 is -returned. +the specified SHA1 sum arguments. +For every corresponding SHA1 sum of a script that actually exists in the script +cache, a 1 is returned, otherwise 0 is returned. @example diff --git a/commands/script kill.md b/commands/script kill.md index 2be073997b..4fea76f7d9 100644 --- a/commands/script kill.md +++ b/commands/script kill.md @@ -2,14 +2,15 @@ Kills the currently executing Lua script, assuming no write operation was yet performed by the script. This command is mainly useful to kill a script that is running for too much -time(for instance because it entered an infinite loop because of a bug). The -script will be killed and the client currently blocked into EVAL will see the -command returning with an error. +time (for instance because it entered an infinite loop because of a bug). +The script will be killed and the client currently blocked into EVAL will see +the command returning with an error. If the script already performed write operations it can not be killed in this -way because it would violate Lua script atomicity contract. In such a case only -`SHUTDOWN NOSAVE` is able to kill the script, killing the Redis process in an -hard way preventing it to persist with half-written information. +way because it would violate the Lua script atomicity contract. +In such a case only `SHUTDOWN NOSAVE` is able to kill the script, killing +the Redis process in a hard way, preventing it from persisting half-written +information. Please refer to the `EVAL` documentation for detailed information about Redis Lua scripting. 
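The SHA1 digests that `SCRIPT EXISTS` checks against the script cache (and that `EVALSHA` accepts) are simply the hex SHA1 of the script source, so a client can compute them locally without a round trip. A minimal sketch, using a hypothetical helper name `script_sha1`:

```python
import hashlib

def script_sha1(script: str) -> str:
    # Redis identifies a cached script by the SHA1 hex digest of its source text,
    # which is what SCRIPT LOAD returns and what EVALSHA / SCRIPT EXISTS consume.
    return hashlib.sha1(script.encode()).hexdigest()

# The digest a client would pass to EVALSHA for the script "return 1":
print(script_sha1("return 1"))
```

A client library can use this to try `EVALSHA` first and fall back to `EVAL` (which also populates the cache) on a `NOSCRIPT` error.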
diff --git a/commands/script load.md b/commands/script load.md index 15bd451c86..4709695bec 100644 --- a/commands/script load.md +++ b/commands/script load.md @@ -1,7 +1,7 @@ -Load a script into the scripts cache, without executing it. After the specified -command is loaded into the script cache it will be callable using `EVALSHA` with -the correct SHA1 digest of the script, exactly like after the first successful -invocation of `EVAL`. +Load a script into the scripts cache, without executing it. +After the specified command is loaded into the script cache it will be callable +using `EVALSHA` with the correct SHA1 digest of the script, exactly like after +the first successful invocation of `EVAL`. The script is guaranteed to stay in the script cache forever (unless `SCRIPT FLUSH` is called). diff --git a/commands/select.md b/commands/select.md index 29825ad3d7..b0a4cd565a 100644 --- a/commands/select.md +++ b/commands/select.md @@ -1,5 +1,5 @@ -Select the DB with having the specified zero-based numeric index. New -connections always use DB 0. +Select the DB with having the specified zero-based numeric index. +New connections always use DB 0. @return diff --git a/commands/set.md b/commands/set.md index 34d9a8c284..547f4c0824 100644 --- a/commands/set.md +++ b/commands/set.md @@ -1,5 +1,5 @@ -Set `key` to hold the string `value`. If `key` already holds a value, it is -overwritten, regardless of its type. +Set `key` to hold the string `value`. +If `key` already holds a value, it is overwritten, regardless of its type. @return diff --git a/commands/setbit.md b/commands/setbit.md index 1cf7f1b914..3541cddfb6 100644 --- a/commands/setbit.md +++ b/commands/setbit.md @@ -1,20 +1,23 @@ Sets or clears the bit at _offset_ in the string value stored at _key_. The bit is either set or cleared depending on _value_, which can be either 0 or -1. When _key_ does not exist, a new string value is created. The string is grown -to make sure it can hold a bit at _offset_. 
The _offset_ argument is required -to be greater than or equal to 0, and smaller than 2^32 (this limits bitmaps to -512MB). When the string at _key_ is grown, added bits are set to 0. +1. +When _key_ does not exist, a new string value is created. +The string is grown to make sure it can hold a bit at _offset_. +The _offset_ argument is required to be greater than or equal to 0, and smaller +than 2^32 (this limits bitmaps to 512MB). +When the string at _key_ is grown, added bits are set to 0. **Warning**: When setting the last possible bit (_offset_ equal to 2^32 -1) and -the string value stored at _key_ does not yet hold a string value, or holds -a small string value, Redis needs to allocate all intermediate memory which -can block the server for some time. On a 2010 MacBook Pro, setting bit number -2^32 -1 (512MB allocation) takes ~300ms, setting bit number 2^30 -1 (128MB -allocation) takes ~80ms, setting bit number 2^28 -1 (32MB allocation) takes -~30ms and setting bit number 2^26 -1 (8MB allocation) takes ~8ms. Note that once -this first allocation is done, subsequent calls to `SETBIT` for the same _key_ -will not have the allocation overhead. +the string value stored at _key_ does not yet hold a string value, or holds a +small string value, Redis needs to allocate all intermediate memory which can +block the server for some time. +On a 2010 MacBook Pro, setting bit number 2^32 -1 (512MB allocation) takes +~300ms, setting bit number 2^30 -1 (128MB allocation) takes ~80ms, setting bit +number 2^28 -1 (32MB allocation) takes ~30ms and setting bit number 2^26 -1 (8MB +allocation) takes ~8ms. +Note that once this first allocation is done, subsequent calls to `SETBIT` for +the same _key_ will not have the allocation overhead. 
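The allocation sizes quoted in the warning above follow directly from how a bit offset maps to string length: the string must be `offset // 8 + 1` bytes long to hold the addressed bit. A small sketch (hypothetical helper name `bytes_needed`) reproduces the documented 512MB figure for the maximum offset:

```python
def bytes_needed(offset: int) -> int:
    # A string must be (offset // 8) + 1 bytes long to hold a bit at `offset`.
    # SETBIT restricts offsets to [0, 2^32), which caps bitmaps at 512MB.
    if not 0 <= offset < 2**32:
        raise ValueError("offset must be in [0, 2^32)")
    return offset // 8 + 1

print(bytes_needed(2**32 - 1) // (1024 * 1024))  # 512 (MB), the documented maximum
print(bytes_needed(2**30 - 1) // (1024 * 1024))  # 128 (MB)
```

This is why setting a single high bit on a fresh key can trigger a large one-off allocation, while later `SETBIT` calls on the same key are cheap.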
@return diff --git a/commands/setex.md b/commands/setex.md index d29221df96..53221baaee 100644 --- a/commands/setex.md +++ b/commands/setex.md @@ -1,14 +1,14 @@ Set `key` to hold the string `value` and set `key` to timeout after a given -number of seconds. This command is equivalent to executing the following -commands: +number of seconds. +This command is equivalent to executing the following commands: SET mykey value EXPIRE mykey seconds `SETEX` is atomic, and can be reproduced by using the previous two commands -inside an `MULTI` / `EXEC` block. It is provided as a faster alternative to the -given sequence of operations, because this operation is very common when Redis -is used as a cache. +inside an `MULTI` / `EXEC` block. +It is provided as a faster alternative to the given sequence of operations, +because this operation is very common when Redis is used as a cache. An error is returned when `seconds` is invalid. diff --git a/commands/setnx.md b/commands/setnx.md index 37b4b64484..df607d33c8 100644 --- a/commands/setnx.md +++ b/commands/setnx.md @@ -1,5 +1,6 @@ -Set `key` to hold string `value` if `key` does not exist. In that case, it is -equal to `SET`. When `key` already holds a value, no operation is performed. +Set `key` to hold string `value` if `key` does not exist. +In that case, it is equal to `SET`. +When `key` already holds a value, no operation is performed. `SETNX` is short for "**SET** if **N** ot e **X** ists". @return @@ -18,25 +19,27 @@ equal to `SET`. When `key` already holds a value, no operation is performed. ## Design pattern: Locking with `!SETNX` -`SETNX` can be used as a locking primitive. For example, to acquire the lock of -the key `foo`, the client could try the following: +`SETNX` can be used as a locking primitive. 
+For example, to acquire the lock of the key `foo`, the client could try the +following: SETNX lock.foo If `SETNX` returns `1` the client acquired the lock, setting the `lock.foo` key -to the Unix time at which the lock should no longer be considered valid. The -client will later use `DEL lock.foo` in order to release the lock. +to the Unix time at which the lock should no longer be considered valid. +The client will later use `DEL lock.foo` in order to release the lock. -If `SETNX` returns `0` the key is already locked by some other client. We -can either return to the caller if it's a non blocking lock, or enter a loop +If `SETNX` returns `0` the key is already locked by some other client. +We can either return to the caller if it's a non blocking lock, or enter a loop retrying to hold the lock until we succeed or some kind of timeout expires. ### Handling deadlocks In the above locking algorithm there is a problem: what happens if a client fails, crashes, or is otherwise not able to release the lock? It's possible to -detect this condition because the lock key contains a UNIX timestamp. If such a -timestamp is equal to the current Unix time the lock is no longer valid. +detect this condition because the lock key contains a UNIX timestamp. +If such a timestamp is equal to the current Unix time the lock is no longer +valid. When this happens we can't just call `DEL` against the key to remove the lock and then try to issue a `SETNX`, as there is a race condition here, when @@ -56,20 +59,23 @@ Let's see how C4, our sane client, uses the good algorithm: * C4 sends `SETNX lock.foo` in order to acquire the lock * The crashed client C3 still holds it, so Redis will reply with `0` to C4. -* C4 sends `GET lock.foo` to check if the lock expired. If it is not, it will - sleep for some time and retry from the start. +* C4 sends `GET lock.foo` to check if the lock expired. + If it is not, it will sleep for some time and retry from the start. 
* Instead, if the lock is expired because the Unix time at `lock.foo` is older than the current Unix time, C4 tries to perform: GETSET lock.foo * Because of the `GETSET` semantic, C4 can check if the old value stored at - `key` is still an expired timestamp. If it is, the lock was acquired. + `key` is still an expired timestamp. + If it is, the lock was acquired. * If another client, for instance C5, was faster than C4 and acquired the lock with the `GETSET` operation, the C4 `GETSET` operation will return a non - expired timestamp. C4 will simply restart from the first step. Note that even - if C4 set the key a bit a few seconds in the future this is not a problem. + expired timestamp. + C4 will simply restart from the first step. + Note that even if C4 set the key a bit a few seconds in the future this is not + a problem. **Important note**: In order to make this locking algorithm more robust, a client holding a lock should always check the timeout didn't expire before diff --git a/commands/setrange.md b/commands/setrange.md index 3b99b658eb..38821216cb 100644 --- a/commands/setrange.md +++ b/commands/setrange.md @@ -1,29 +1,30 @@ Overwrites part of the string stored at _key_, starting at the specified offset, -for the entire length of _value_. If the offset is larger than the current -length of the string at _key_, the string is padded with zero-bytes to make -_offset_ fit. Non-existing keys are considered as empty strings, so this command -will make sure it holds a string large enough to be able to set _value_ at -_offset_. +for the entire length of _value_. +If the offset is larger than the current length of the string at _key_, the +string is padded with zero-bytes to make _offset_ fit. +Non-existing keys are considered as empty strings, so this command will make +sure it holds a string large enough to be able to set _value_ at _offset_. Note that the maximum offset that you can set is 2^29 -1 (536870911), as Redis -Strings are limited to 512 megabytes. 
If you need to grow beyond this size, you -can use multiple keys. +Strings are limited to 512 megabytes. +If you need to grow beyond this size, you can use multiple keys. **Warning**: When setting the last possible byte and the string value stored at _key_ does not yet hold a string value, or holds a small string value, Redis needs to allocate all intermediate memory which can block the server for some -time. On a 2010 MacBook Pro, setting byte number 536870911 (512MB allocation) -takes ~300ms, setting byte number 134217728 (128MB allocation) takes ~80ms, -setting bit number 33554432 (32MB allocation) takes ~30ms and setting bit number -8388608 (8MB allocation) takes ~8ms. Note that once this first allocation is -done, subsequent calls to `SETRANGE` for the same _key_ will not have the -allocation overhead. +time. +On a 2010 MacBook Pro, setting byte number 536870911 (512MB allocation) takes +~300ms, setting byte number 134217728 (128MB allocation) takes ~80ms, setting +bit number 33554432 (32MB allocation) takes ~30ms and setting bit number 8388608 +(8MB allocation) takes ~8ms. +Note that once this first allocation is done, subsequent calls to `SETRANGE` for +the same _key_ will not have the allocation overhead. ## Patterns Thanks to `SETRANGE` and the analogous `GETRANGE` commands, you can use Redis -strings as a linear array with O(1) random access. This is a very fast and -efficient storage in many real world use cases. +strings as a linear array with O(1) random access. +This is a very fast and efficient storage in many real world use cases. @return diff --git a/commands/shutdown.md b/commands/shutdown.md index dd7c239de2..e659d86349 100644 --- a/commands/shutdown.md +++ b/commands/shutdown.md @@ -6,9 +6,9 @@ The command behavior is the following: * Quit the server. If persistence is enabled this commands makes sure that Redis is switched off -without the lost of any data. 
This is not guaranteed if the client uses simply -`SAVE` and then `QUIT` because other clients may alter the DB data between the -two commands. +without the lost of any data. +This is not guaranteed if the client uses simply `SAVE` and then `QUIT` because +other clients may alter the DB data between the two commands. Note: A Redis instance that is configured for not persisting on disk (no AOF configured, nor "save" directive) will not dump the RDB file on `SHUTDOWN`, as @@ -18,15 +18,18 @@ shutting down. ## SAVE and NOSAVE modifiers It is possible to specify an optional modifier to alter the behavior of the -command. Specifically: +command. +Specifically: * **SHUTDOWN SAVE** will force a DB saving operation even if no save points are configured. * **SHUTDOWN NOSAVE** will prevent a DB saving operation even if one or more - save points are configured. (You can think at this variant as an hypothetical - **ABORT** command that just stops the server). + save points are configured. + (You can think at this variant as an hypothetical **ABORT** command that just + stops the server). @return -@status-reply on error. On success nothing is returned since the server quits -and the connection is closed. +@status-reply on error. +On success nothing is returned since the server quits and the connection is +closed. diff --git a/commands/sinter.md b/commands/sinter.md index 70c848aba6..d7bdceb551 100644 --- a/commands/sinter.md +++ b/commands/sinter.md @@ -8,9 +8,9 @@ For example: key3 = {a,c,e} SINTER key1 key2 key3 = {c} -Keys that do not exist are considered to be empty sets. With one of the keys -being an empty set, the resulting set is also empty (since set intersection with -an empty set always results in an empty set). +Keys that do not exist are considered to be empty sets. +With one of the keys being an empty set, the resulting set is also empty (since +set intersection with an empty set always results in an empty set). 
@return diff --git a/commands/slaveof.md b/commands/slaveof.md index 3ff3091476..f3d20e6b24 100644 --- a/commands/slaveof.md +++ b/commands/slaveof.md @@ -1,18 +1,19 @@ The `SLAVEOF` command can change the replication settings of a slave on the fly. If a Redis server is already acting as slave, the command `SLAVEOF` NO ONE will -turn off the replication, turning the Redis server into a MASTER. In the proper -form `SLAVEOF` hostname port will make the server a slave of another server -listening at the specified hostname and port. +turn off the replication, turning the Redis server into a MASTER. +In the proper form `SLAVEOF` hostname port will make the server a slave of +another server listening at the specified hostname and port. If a server is already a slave of some master, `SLAVEOF` hostname port will stop the replication against the old server and start the synchronization against the new one, discarding the old dataset. The form `SLAVEOF` NO ONE will stop replication, turning the server into a -MASTER, but will not discard the replication. So, if the old master stops -working, it is possible to turn the slave into a master and set the application -to use this new master in read/write. Later when the other Redis server is -fixed, it can be reconfigured to work as a slave. +MASTER, but will not discard the replication. +So, if the old master stops working, it is possible to turn the slave into a +master and set the application to use this new master in read/write. +Later when the other Redis server is fixed, it can be reconfigured to work as a +slave. @return diff --git a/commands/slowlog.md b/commands/slowlog.md index 943764ebfc..55306cdda1 100644 --- a/commands/slowlog.md +++ b/commands/slowlog.md @@ -3,18 +3,22 @@ This command is used in order to read and reset the Redis slow queries log. ## Redis slow log overview The Redis Slow Log is a system to log queries that exceeded a specified -execution time. 
The execution time does not include I/O operations like talking -with the client, sending the reply and so forth, but just the time needed to -actually execute the command (this is the only stage of command execution where -the thread is blocked and can not serve other requests in the meantime). +execution time. +The execution time does not include I/O operations like talking with the client, +sending the reply and so forth, but just the time needed to actually execute the +command (this is the only stage of command execution where the thread is blocked +and can not serve other requests in the meantime). You can configure the slow log with two parameters: _slowlog-log-slower-than_ tells Redis what is the execution time, in microseconds, to exceed in order for -the command to get logged. Note that a negative number disables the slow log, -while a value of zero forces the logging of every command. _slowlog-max-len_ -is the length of the slow log. The minimum value is zero. When a new command -is logged and the slow log is already at its maximum length, the oldest one is -removed from the queue of logged commands in order to make space. +the command to get logged. +Note that a negative number disables the slow log, while a value of zero forces +the logging of every command. +_slowlog-max-len_ is the length of the slow log. +The minimum value is zero. +When a new command is logged and the slow log is already at its maximum length, +the oldest one is removed from the queue of logged commands in order to make +space. The configuration can be done by editing `redis.conf` or while the server is running using the `CONFIG GET` and `CONFIG SET` commands. @@ -22,13 +26,15 @@ running using the `CONFIG GET` and `CONFIG SET` commands. ## Reading the slow log The slow log is accumulated in memory, so no file is written with information -about the slow command executions. 
This makes the slow log remarkably fast at -the point that you can enable the logging of all the commands (setting the -_slowlog-log-slower-than_ config parameter to zero) with minor performance hit. +about the slow command executions. +This makes the slow log remarkably fast at the point that you can enable the +logging of all the commands (setting the _slowlog-log-slower-than_ config +parameter to zero) with minor performance hit. To read the slow log the **SLOWLOG GET** command is used, that returns every -entry in the slow log. It is possible to return only the N most recent entries -passing an additional argument to the command (for instance **SLOWLOG GET 10**). +entry in the slow log. +It is possible to return only the N most recent entries passing an additional +argument to the command (for instance **SLOWLOG GET 10**). Note that you need a recent version of redis-cli in order to read the slow log output, since it uses some features of the protocol that were not formerly @@ -69,5 +75,5 @@ It is possible to get just the length of the slow log using the command ## Resetting the slow log. -You can reset the slow log using the **SLOWLOG RESET** command. Once deleted the -information is lost forever. +You can reset the slow log using the **SLOWLOG RESET** command. +Once deleted the information is lost forever. diff --git a/commands/smove.md b/commands/smove.md index 7a4b689dd4..e2dfccb14c 100644 --- a/commands/smove.md +++ b/commands/smove.md @@ -1,11 +1,14 @@ -Move `member` from the set at `source` to the set at `destination`. This -operation is atomic. In every given moment the element will appear to be a -member of `source` **or** `destination` for other clients. +Move `member` from the set at `source` to the set at `destination`. +This operation is atomic. +In every given moment the element will appear to be a member of `source` **or** +`destination` for other clients. 
If the source set does not exist or does not contain the specified element, no -operation is performed and `0` is returned. Otherwise, the element is removed -from the source set and added to the destination set. When the specified element -already exists in the destination set, it is only removed from the source set. +operation is performed and `0` is returned. +Otherwise, the element is removed from the source set and added to the +destination set. +When the specified element already exists in the destination set, it is only +removed from the source set. An error is returned if `source` or `destination` does not hold a set value. diff --git a/commands/sort.md b/commands/sort.md index 6f53952dbe..c2c7b5e533 100644 --- a/commands/sort.md +++ b/commands/sort.md @@ -1,6 +1,7 @@ Returns or stores the elements contained in the [list][tdtl], [set][tdts] or -[sorted set][tdtss] at `key`. By default, sorting is numeric and elements are -compared by their value interpreted as double precision floating point number. +[sorted set][tdtss] at `key`. +By default, sorting is numeric and elements are compared by their value +interpreted as double precision floating point number. This is `SORT` in its simplest form: [tdtl]: /topics/data-types#lists @@ -10,8 +11,8 @@ This is `SORT` in its simplest form: SORT mylist Assuming `mylist` is a list of numbers, this command will return the same list -with the elements sorted from small to large. In order to sort the numbers from -large to small, use the `!DESC` modifier: +with the elements sorted from small to large. +In order to sort the numbers from large to small, use the `!DESC` modifier: SORT mylist DESC @@ -24,49 +25,53 @@ Redis is UTF-8 aware, assuming you correctly set the `!LC_COLLATE` environment variable. The number of returned elements can be limited using the `!LIMIT` modifier. 
-This modifier takes the `offset` argument, specifying the number of elements -to skip and the `count` argument, specifying the number of elements to return -from starting at `offset`. The following example will return 10 elements of the -sorted version of `mylist`, starting at element 0 (`offset` is zero-based): +This modifier takes the `offset` argument, specifying the number of elements to +skip and the `count` argument, specifying the number of elements to return from +starting at `offset`. +The following example will return 10 elements of the sorted version of `mylist`, +starting at element 0 (`offset` is zero-based): SORT mylist LIMIT 0 10 -Almost all modifiers can be used together. The following example will return the -first 5 elements, lexicographically sorted in descending order: +Almost all modifiers can be used together. +The following example will return the first 5 elements, lexicographically sorted +in descending order: SORT mylist LIMIT 0 5 ALPHA DESC ## Sorting by external keys Sometimes you want to sort elements using external keys as weights to compare -instead of comparing the actual elements in the list, set or sorted set. Let's -say the list `mylist` contains the elements `1`, `2` and `3` representing -unique IDs of objects stored in `object_1`, `object_2` and `object_3`. When -these objects have associated weights stored in `weight_1`, `weight_2` and +instead of comparing the actual elements in the list, set or sorted set. +Let's say the list `mylist` contains the elements `1`, `2` and `3` representing +unique IDs of objects stored in `object_1`, `object_2` and `object_3`. +When these objects have associated weights stored in `weight_1`, `weight_2` and `weight_3`, `SORT` can be instructed to use these weights to sort `mylist` with the following statement: SORT mylist BY weight_* -The `BY` option takes a pattern (equal to `weight_*` in this example) that -is used to generate the keys that are used for sorting. 
These key names are -obtained substituting the first occurrence of `*` with the actual value of the -element in the list (`1`, `2` and `3` in this example). +The `BY` option takes a pattern (equal to `weight_*` in this example) that is +used to generate the keys that are used for sorting. +These key names are obtained substituting the first occurrence of `*` with the +actual value of the element in the list (`1`, `2` and `3` in this example). ## Skip sorting the elements The `!BY` option can also take a non-existent key, which causes `SORT` to skip -the sorting operation. This is useful if you want to retrieve external keys (see -the `!GET` option below) without the overhead of sorting. +the sorting operation. +This is useful if you want to retrieve external keys (see the `!GET` option +below) without the overhead of sorting. SORT mylist BY nosort ## Retrieving external keys -Our previous example returns just the sorted IDs. In some cases, it is more -useful to get the actual objects instead of their IDs (`object_1`, `object_2` -and `object_3`). Retrieving external keys based on the elements in a list, set -or sorted set can be done with the following command: +Our previous example returns just the sorted IDs. +In some cases, it is more useful to get the actual objects instead of their IDs +(`object_1`, `object_2` and `object_3`). +Retrieving external keys based on the elements in a list, set or sorted set can +be done with the following command: SORT mylist BY weight_* GET object_* @@ -79,22 +84,23 @@ It is also possible to `!GET` the element itself using the special pattern `#`: ## Storing the result of a SORT operation -By default, `SORT` returns the sorted elements to the client. With the `!STORE` -option, the result will be stored as a list at the specified key instead of -being returned to the client. +By default, `SORT` returns the sorted elements to the client. 
+With the `!STORE` option, the result will be stored as a list at the specified
+key instead of being returned to the client.

     SORT mylist BY weight_* STORE resultkey

An interesting pattern using `SORT ... STORE` consists in associating an
`EXPIRE` timeout to the resulting key so that in applications where the result
-of a `SORT` operation can be cached for some time. Other clients will use the
-cached list instead of calling `SORT` for every request. When the key will
-timeout, an updated version of the cache can be created by calling `SORT ...
-STORE` again.
+of a `SORT` operation can be cached for some time.
+Other clients will use the cached list instead of calling `SORT` for every
+request.
+When the key times out, an updated version of the cache can be created by
+calling `SORT ... STORE` again.

Note that for correctly implementing this pattern it is important to avoid
-multiple clients rebuilding the cache at the same time. Some kind of locking is
-needed here (for instance using `SETNX`).
+multiple clients rebuilding the cache at the same time.
+Some kind of locking is needed here (for instance using `SETNX`).

## Using hashes in `!BY` and `!GET`

@@ -103,9 +109,9 @@ following syntax:

    SORT mylist BY weight_*->fieldname GET object_*->fieldname

-The string `->` is used to separate the key name from the hash field name. The
-key is substituted as documented above, and the hash stored at the resulting key
-is accessed to retrieve the specified hash field.
+The string `->` is used to separate the key name from the hash field name.
+The key is substituted as documented above, and the hash stored at the resulting
+key is accessed to retrieve the specified hash field.

@return

diff --git a/commands/srem.md b/commands/srem.md
index 759c8b853c..f92a1d0d92 100644
--- a/commands/srem.md
+++ b/commands/srem.md
@@ -1,6 +1,7 @@
-Remove the specified members from the set stored at `key`. Specified members
-that are not a member of this set are ignored. 
If `key` does not exist, it is -treated as an empty set and this command returns `0`. +Remove the specified members from the set stored at `key`. +Specified members that are not a member of this set are ignored. +If `key` does not exist, it is treated as an empty set and this command returns +`0`. An error is returned when the value stored at `key` is not a set. @@ -11,8 +12,8 @@ including non existing members. @history -* `>= 2.4`: Accepts multiple `member` arguments. Redis versions older than 2.4 - can only remove a set member per call. +* `>= 2.4`: Accepts multiple `member` arguments. + Redis versions older than 2.4 can only remove a set member per call. @examples diff --git a/commands/strlen.md b/commands/strlen.md index 6700b71d0d..4b36ab8b80 100644 --- a/commands/strlen.md +++ b/commands/strlen.md @@ -1,5 +1,5 @@ -Returns the length of the string value stored at `key`. An error is returned -when `key` holds a non-string value. +Returns the length of the string value stored at `key`. +An error is returned when `key` holds a non-string value. @return diff --git a/commands/ttl.md b/commands/ttl.md index 3c31fcec5f..1e1914cad8 100644 --- a/commands/ttl.md +++ b/commands/ttl.md @@ -1,6 +1,6 @@ -Returns the remaining time to live of a key that has a timeout. This -introspection capability allows a Redis client to check how many seconds a given -key will continue to be part of the dataset. +Returns the remaining time to live of a key that has a timeout. +This introspection capability allows a Redis client to check how many seconds a +given key will continue to be part of the dataset. @return diff --git a/commands/type.md b/commands/type.md index 9a0225f7e8..9c5fefb048 100644 --- a/commands/type.md +++ b/commands/type.md @@ -1,6 +1,6 @@ -Returns the string representation of the type of the value stored at `key`. The -different types that can be returned are: `string`, `list`, `set`, `zset` and -`hash`. 
+Returns the string representation of the type of the value stored at `key`. +The different types that can be returned are: `string`, `list`, `set`, `zset` +and `hash`. @return diff --git a/commands/unsubscribe.md b/commands/unsubscribe.md index 78c4d0c759..7bdf1d15e5 100644 --- a/commands/unsubscribe.md +++ b/commands/unsubscribe.md @@ -2,5 +2,6 @@ Unsubscribes the client from the given channels, or from all of them if none is given. When no channels are specified, the client is unsubscribed from all the -previously subscribed channels. In this case, a message for every unsubscribed -channel will be sent to the client. +previously subscribed channels. +In this case, a message for every unsubscribed channel will be sent to the +client. diff --git a/commands/zadd.md b/commands/zadd.md index 65664f4901..081b926b3f 100644 --- a/commands/zadd.md +++ b/commands/zadd.md @@ -1,10 +1,12 @@ Adds all the specified members with the specified scores to the sorted set -stored at `key`. It is possible to specify multiple score/member pairs. If a -specified member is already a member of the sorted set, the score is updated and -the element reinserted at the right position to ensure the correct ordering. +stored at `key`. +It is possible to specify multiple score/member pairs. +If a specified member is already a member of the sorted set, the score is +updated and the element reinserted at the right position to ensure the correct +ordering. If `key` does not exist, a new sorted set with the specified members as sole -members is created, like if the sorted set was empty. If the key exists but does -not hold a sorted set, an error is returned. +members is created, like if the sorted set was empty. +If the key exists but does not hold a sorted set, an error is returned. The score values should be the string representation of a numeric value, and accepts double precision floating point numbers. @@ -23,8 +25,9 @@ sets][tdtss]. @history -* `>= 2.4`: Accepts multiple elements. 
In Redis versions older than 2.4 it was - possible to add or update a single member per call. +* `>= 2.4`: Accepts multiple elements. + In Redis versions older than 2.4 it was possible to add or update a single + member per call. @examples diff --git a/commands/zincrby.md b/commands/zincrby.md index 1623b6b6dd..4e30a49ab2 100644 --- a/commands/zincrby.md +++ b/commands/zincrby.md @@ -1,14 +1,15 @@ Increments the score of `member` in the sorted set stored at `key` by -`increment`. If `member` does not exist in the sorted set, it is added with -`increment` as its score (as if its previous score was `0.0`). If `key` does -not exist, a new sorted set with the specified `member` as its sole member is -created. +`increment`. +If `member` does not exist in the sorted set, it is added with `increment` as +its score (as if its previous score was `0.0`). +If `key` does not exist, a new sorted set with the specified `member` as its +sole member is created. An error is returned when `key` exists but does not hold a sorted set. The `score` value should be the string representation of a numeric value, and -accepts double precision floating point numbers. It is possible to provide a -negative value to decrement the score. +accepts double precision floating point numbers. +It is possible to provide a negative value to decrement the score. @return diff --git a/commands/zinterstore.md b/commands/zinterstore.md index dd7e71dfe4..7b7f1afe2e 100644 --- a/commands/zinterstore.md +++ b/commands/zinterstore.md @@ -1,12 +1,13 @@ Computes the intersection of `numkeys` sorted sets given by the specified keys, -and stores the result in `destination`. It is mandatory to provide the number of -input keys (`numkeys`) before passing the input keys and the other (optional) -arguments. +and stores the result in `destination`. +It is mandatory to provide the number of input keys (`numkeys`) before passing +the input keys and the other (optional) arguments. 
By default, the resulting score of an element is the sum of its scores in the
-sorted sets where it exists. Because intersection requires an element to be a
-member of every given sorted set, this results in the score of every element in
-the resulting sorted set to be equal to the number of input sorted sets.
+sorted sets where it exists.
+Because intersection requires an element to be a member of every given sorted
+set, this results in the score of every element in the resulting sorted set
+being equal to the number of input sorted sets.

For a description of the `WEIGHTS` and `AGGREGATE` options, see `ZUNIONSTORE`.

diff --git a/commands/zrange.md b/commands/zrange.md
index 42275f6c77..3ec95281b7 100644
--- a/commands/zrange.md
+++ b/commands/zrange.md
@@ -1,25 +1,28 @@
-Returns the specified range of elements in the sorted set stored at `key`. The
-elements are considered to be ordered from the lowest to the highest score.
+Returns the specified range of elements in the sorted set stored at `key`.
+The elements are considered to be ordered from the lowest to the highest score.
 Lexicographical order is used for elements with equal score.

See `ZREVRANGE` when you need the elements ordered from highest to lowest score
(and descending lexicographical order for elements with equal score).

Both `start` and `stop` are zero-based indexes, where `0` is the first element,
-`1` is the next element and so on. They can also be negative numbers indicating
-offsets from the end of the sorted set, with `-1` being the last element of the
-sorted set, `-2` the penultimate element and so on.
-
-Out of range indexes will not produce an error. If `start` is larger than the
-largest index in the sorted set, or `start > stop`, an empty list is returned.
+`1` is the next element and so on.
+They can also be negative numbers indicating offsets from the end of the sorted
+set, with `-1` being the last element of the sorted set, `-2` the penultimate
+element and so on. 
+ +Out of range indexes will not produce an error. +If `start` is larger than the largest index in the sorted set, or `start > +stop`, an empty list is returned. If `stop` is larger than the end of the sorted set Redis will treat it like it is the last element of the sorted set. -It is possible to pass the `WITHSCORES` option in order to return the scores -of the elements together with the elements. The returned list will contain -`value1,score1,...,valueN,scoreN` instead of `value1,...,valueN`. Client -libraries are free to return a more appropriate data type (suggestion: an array -with (value, score) arrays/tuples). +It is possible to pass the `WITHSCORES` option in order to return the scores of +the elements together with the elements. +The returned list will contain `value1,score1,...,valueN,scoreN` instead of +`value1,...,valueN`. +Client libraries are free to return a more appropriate data type (suggestion: an +array with (value, score) arrays/tuples). @return diff --git a/commands/zrangebyscore.md b/commands/zrangebyscore.md index 2a355904b5..92c7be524b 100644 --- a/commands/zrangebyscore.md +++ b/commands/zrangebyscore.md @@ -1,20 +1,20 @@ Returns all the elements in the sorted set at `key` with a score between `min` -and `max` (including elements with score equal to `min` or `max`). The elements -are considered to be ordered from low to high scores. +and `max` (including elements with score equal to `min` or `max`). +The elements are considered to be ordered from low to high scores. The elements having the same score are returned in lexicographical order (this follows from a property of the sorted set implementation in Redis and does not involve further computation). The optional `LIMIT` argument can be used to only get a range of the matching -elements (similar to _SELECT LIMIT offset, count_ in SQL). 
Keep in mind that if -`offset` is large, the sorted set needs to be traversed for `offset` elements -before getting to the elements to return, which can add up to O(N) time -complexity. +elements (similar to _SELECT LIMIT offset, count_ in SQL). +Keep in mind that if `offset` is large, the sorted set needs to be traversed for +`offset` elements before getting to the elements to return, which can add up to +O(N) time complexity. The optional `WITHSCORES` argument makes the command return both the element and -its score, instead of the element alone. This option is available since Redis -2.0. +its score, instead of the element alone. +This option is available since Redis 2.0. ## Exclusive intervals and infinity @@ -22,9 +22,10 @@ its score, instead of the element alone. This option is available since Redis the highest or lowest score in the sorted set to get all elements from or up to a certain score. -By default, the interval specified by `min` and `max` is closed (inclusive). It -is possible to specify an open interval (exclusive) by prefixing the score with -the character `(`. For example: +By default, the interval specified by `min` and `max` is closed (inclusive). +It is possible to specify an open interval (exclusive) by prefixing the score +with the character `(`. +For example: ZRANGEBYSCORE zset (1 5 diff --git a/commands/zrank.md b/commands/zrank.md index 3703b5bee1..243af3c338 100644 --- a/commands/zrank.md +++ b/commands/zrank.md @@ -1,6 +1,7 @@ Returns the rank of `member` in the sorted set stored at `key`, with the scores -ordered from low to high. The rank (or index) is 0-based, which means that the -member with the lowest score has rank `0`. +ordered from low to high. +The rank (or index) is 0-based, which means that the member with the lowest +score has rank `0`. Use `ZREVRANK` to get the rank of an element with the scores ordered from high to low. 
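The interval and `!LIMIT` rules documented in the `ZRANGEBYSCORE` hunk above can be sketched in plain Ruby. This is only a model of the documented semantics against an in-memory `member => score` Hash, not Redis code; `parse_bound` and `zrangebyscore` are illustrative names.

```ruby
# Parse a "min"/"max" argument: a leading "(" makes the bound exclusive,
# and "-inf"/"+inf" stand for the infinities.
def parse_bound(arg)
  exclusive = arg.start_with?("(")
  arg = arg[1..-1] if exclusive
  value =
    case arg
    when "-inf"        then -Float::INFINITY
    when "+inf", "inf" then Float::INFINITY
    else Float(arg)
    end
  [value, exclusive]
end

def zrangebyscore(zset, min_arg, max_arg, offset: 0, count: nil)
  min, min_excl = parse_bound(min_arg)
  max, max_excl = parse_bound(max_arg)

  members = zset.select do |_member, score|
    above = min_excl ? score > min : score >= min
    below = max_excl ? score < max : score <= max
    above && below
  end

  # Order by score, then lexicographically for equal scores.
  sorted = members.sort_by { |member, score| [score, member] }.map(&:first)

  # LIMIT: skip `offset` elements, then return at most `count` of them.
  count.nil? ? sorted.drop(offset) : sorted.drop(offset).take(count)
end

zset = { "a" => 1.0, "b" => 2.0, "c" => 3.0 }
p zrangebyscore(zset, "-inf", "+inf")          # => ["a", "b", "c"]
p zrangebyscore(zset, "(1", "3")               # => ["b", "c"]
p zrangebyscore(zset, "-inf", "+inf", offset: 1, count: 1)  # => ["b"]
```

Note the linear `drop(offset)` here mirrors the caveat above: a large `offset` still costs time proportional to the skipped elements.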
diff --git a/commands/zrem.md b/commands/zrem.md index 8733647153..9461fd7eeb 100644 --- a/commands/zrem.md +++ b/commands/zrem.md @@ -1,5 +1,5 @@ -Removes the specified members from the sorted set stored at `key`. Non existing -members are ignored. +Removes the specified members from the sorted set stored at `key`. +Non existing members are ignored. An error is returned when `key` exists and does not hold a sorted set. @@ -12,8 +12,9 @@ An error is returned when `key` exists and does not hold a sorted set. @history -* `>= 2.4`: Accepts multiple elements. In Redis versions older than 2.4 it was - possible to remove a single member per call. +* `>= 2.4`: Accepts multiple elements. + In Redis versions older than 2.4 it was possible to remove a single member per + call. @examples diff --git a/commands/zremrangebyrank.md b/commands/zremrangebyrank.md index e0458d21de..7a77d12900 100644 --- a/commands/zremrangebyrank.md +++ b/commands/zremrangebyrank.md @@ -1,9 +1,11 @@ Removes all elements in the sorted set stored at `key` with rank between `start` -and `stop`. Both `start` and `stop` are `0` -based indexes with `0` being the -element with the lowest score. These indexes can be negative numbers, where they -indicate offsets starting at the element with the highest score. For example: -`-1` is the element with the highest score, `-2` the element with the second -highest score and so forth. +and `stop`. +Both `start` and `stop` are `0` -based indexes with `0` being the element with +the lowest score. +These indexes can be negative numbers, where they indicate offsets starting at +the element with the highest score. +For example: `-1` is the element with the highest score, `-2` the element with +the second highest score and so forth. 
@return

diff --git a/commands/zrevrange.md b/commands/zrevrange.md
index 772be477e2..c77c66001e 100644
--- a/commands/zrevrange.md
+++ b/commands/zrevrange.md
@@ -1,5 +1,5 @@
-Returns the specified range of elements in the sorted set stored at `key`. The
-elements are considered to be ordered from the highest to the lowest score.
+Returns the specified range of elements in the sorted set stored at `key`.
+The elements are considered to be ordered from the highest to the lowest score.
 Descending lexicographical order is used for elements with equal score.

Apart from the reversed ordering, `ZREVRANGE` is similar to `ZRANGE`.

diff --git a/commands/zrevrangebyscore.md b/commands/zrevrangebyscore.md
index bc2257cea2..84aaead246 100644
--- a/commands/zrevrangebyscore.md
+++ b/commands/zrevrangebyscore.md
@@ -1,7 +1,7 @@
 Returns all the elements in the sorted set at `key` with a score between `max`
-and `min` (including elements with score equal to `max` or `min`). In contrary
-to the default ordering of sorted sets, for this command the elements are
-considered to be ordered from high to low scores.
+and `min` (including elements with score equal to `max` or `min`).
+Contrary to the default ordering of sorted sets, for this command the
+elements are considered to be ordered from high to low scores.

The elements having the same score are returned in reverse lexicographical
order.

diff --git a/commands/zrevrank.md b/commands/zrevrank.md
index 90f86352c4..928dea02bd 100644
--- a/commands/zrevrank.md
+++ b/commands/zrevrank.md
@@ -1,6 +1,7 @@
 Returns the rank of `member` in the sorted set stored at `key`, with the scores
-ordered from high to low. The rank (or index) is 0-based, which means that the
-member with the highest score has rank `0`.
+ordered from high to low.
+The rank (or index) is 0-based, which means that the member with the highest
+score has rank `0`.

Use `ZRANK` to get the rank of an element with the scores ordered from low to
high.
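The negative-index rule from the `ZREMRANGEBYRANK` hunk above (where `-1` is the member with the highest score) can be sketched in plain Ruby. This models only the documented normalization over an in-memory `member => score` Hash; `zremrangebyrank` is an illustrative name and out-of-range clamping is simplified.

```ruby
def zremrangebyrank(zset, start, stop)
  # Rank 0 is the member with the lowest score; ties break lexicographically.
  ranked = zset.sort_by { |member, score| [score, member] }.map(&:first)

  # Negative indexes count back from the member with the highest score:
  # -1 is the highest, -2 the second highest, and so forth.
  start += ranked.size if start < 0
  stop  += ranked.size if stop < 0

  doomed = ranked[start..stop] || []
  doomed.each { |member| zset.delete(member) }
  doomed.size # the command returns the number of members removed
end

zset = { "a" => 1, "b" => 2, "c" => 3 }
zremrangebyrank(zset, -2, -1) # removes the two highest-scored members
p zset.keys # => ["a"]
```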
diff --git a/commands/zunionstore.md b/commands/zunionstore.md index f2710f61e6..196c7f14de 100644 --- a/commands/zunionstore.md +++ b/commands/zunionstore.md @@ -1,22 +1,23 @@ Computes the union of `numkeys` sorted sets given by the specified keys, and -stores the result in `destination`. It is mandatory to provide the number of -input keys (`numkeys`) before passing the input keys and the other (optional) -arguments. +stores the result in `destination`. +It is mandatory to provide the number of input keys (`numkeys`) before passing +the input keys and the other (optional) arguments. By default, the resulting score of an element is the sum of its scores in the sorted sets where it exists. Using the `WEIGHTS` option, it is possible to specify a multiplication factor -for each input sorted set. This means that the score of every element in every -input sorted set is multiplied by this factor before being passed to the -aggregation function. When `WEIGHTS` is not given, the multiplication factors -default to `1`. +for each input sorted set. +This means that the score of every element in every input sorted set is +multiplied by this factor before being passed to the aggregation function. +When `WEIGHTS` is not given, the multiplication factors default to `1`. With the `AGGREGATE` option, it is possible to specify how the results of the -union are aggregated. This option defaults to `SUM`, where the score of an -element is summed across the inputs where it exists. When this option is set to -either `MIN` or `MAX`, the resulting set will contain the minimum or maximum -score of an element across the inputs where it exists. +union are aggregated. +This option defaults to `SUM`, where the score of an element is summed across +the inputs where it exists. +When this option is set to either `MIN` or `MAX`, the resulting set will contain +the minimum or maximum score of an element across the inputs where it exists. If `destination` already exists, it is overwritten. 
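The `WEIGHTS` and `AGGREGATE` behavior documented in the `ZUNIONSTORE` hunk above can be modeled in a few lines of plain Ruby over in-memory `member => score` Hashes. This is a sketch of the documented semantics, not Redis internals; `zunionstore` is an illustrative name.

```ruby
def zunionstore(inputs, weights: nil, aggregate: :sum)
  weights ||= Array.new(inputs.size, 1) # WEIGHTS default to 1 per input
  result = {}

  inputs.each_with_index do |zset, i|
    zset.each do |member, score|
      weighted = score * weights[i] # applied before aggregation
      result[member] =
        if result.key?(member)
          case aggregate
          when :sum then result[member] + weighted
          when :min then [result[member], weighted].min
          when :max then [result[member], weighted].max
          end
        else
          weighted
        end
    end
  end
  result
end

a = { "x" => 1, "y" => 2 }
b = { "y" => 3, "z" => 4 }
p zunionstore([a, b])                           # => {"x"=>1, "y"=>5, "z"=>4}
p zunionstore([a, b], weights: [2, 1], aggregate: :max)
```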
diff --git a/remarkdown.rb b/remarkdown.rb index 50f5e518b0..3a18e1b9a1 100644 --- a/remarkdown.rb +++ b/remarkdown.rb @@ -93,7 +93,12 @@ def format_inline_nodes(nodes) end end - par(result).chomp + sentences = result.gsub(/\s*\r?\n\s*/, " ").split(/(?<=[^.]\.)\s+/) + sentences = sentences.map do |e| + par(e).chomp + end + + sentences.join("\n") end def format_inline_node(node) From b340d58f786e2c572bf4a99900bc218de5011e13 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 18 Jun 2012 21:49:28 -0700 Subject: [PATCH 0172/2880] Add task to format files that have been cached --- Rakefile | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/Rakefile b/Rakefile index 9711d24ede..5ee65eeaaa 100644 --- a/Rakefile +++ b/Rakefile @@ -65,6 +65,12 @@ namespace :format do format(args[:path]) end + task :cached do + `git diff --cached --name-only -- commands/`.split.each do |path| + format(path) + end + end + task :all do Dir["commands/*.md"].each do |path| format(path) From 5bba00e7b0f7d0fa14a249aa0c059b5cd8f42084 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 18 Jun 2012 21:49:35 -0700 Subject: [PATCH 0173/2880] Update styling guidelines --- README.md | 30 ++++++++++++++++++++++++++++-- 1 file changed, 28 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index bcb88384fa..4982236c64 100644 --- a/README.md +++ b/README.md @@ -63,9 +63,35 @@ keyword: Styling guidelines --- -Please wrap your text to 80 characters. You can easily accomplish this -using a CLI tool called `par`. +Please use the following formatting rules: +* Wrap lines to 80 characters. +* Start every sentence on a new line. + +Luckily, this repository comes with an automated Markdown formatter. 
+To only reformat the files you have modified, first stage them using +`git add` (this makes sure that your changes won't be lost in case of an +error), then run the formatter: + + $ rake format:cached + +The formatter has the following dependencies: + +* RDiscount +* Nokogiri +* The `par` tool + +Installation of the Ruby gems: + + gem install rdiscount nokogiri + +Installation of par (OSX): + + brew install par + +Installation of par (Ubuntu): + + sudo apt-get install par Checking your work --- From 520f8b3e7242992e40fd12166924487dc8da7072 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 18 Jun 2012 21:49:49 -0700 Subject: [PATCH 0174/2880] Format README.md --- README.md | 62 ++++++++++++++++++++++++++----------------------------- 1 file changed, 29 insertions(+), 33 deletions(-) diff --git a/README.md b/README.md index 4982236c64..08e01f3a8b 100644 --- a/README.md +++ b/README.md @@ -1,12 +1,10 @@ -Redis documentation -=== +# Redis documentation +## Clients -Clients ---- - -All clients are listed in the `clients.json` file. Each key in the JSON -object represents a single client library. For example: +All clients are listed in the `clients.json` file. +Each key in the JSON object represents a single client library. +For example: "Rediska": { @@ -32,26 +30,25 @@ object represents a single client library. For example: } - -Commands ---- +## Commands Redis commands are described in the `commands.json` file. For each command there's a Markdown file with a complete, human-readable -description. We process this Markdown to provide a better experience, so -some things to take into account: +description. +We process this Markdown to provide a better experience, so some things to take +into account: -* Inside text, all commands should be written in all caps, in between -backticks. For example: `INCR`. +* Inside text, all commands should be written in all caps, in between backticks. + For example: ``INCR``. 
-* You can use some magic keywords to name common elements in Redis. For -example: `@multi-bulk-reply`. These keywords will get expanded and -auto-linked to relevant parts of the documentation. +* You can use some magic keywords to name common elements in Redis. + For example: `@multi-bulk-reply`. + These keywords will get expanded and auto-linked to relevant parts of the + documentation. -There should be at least two predefined sections: description and -return value. The return value section is marked using the @return -keyword: +There should be at least two predefined sections: description and return value. +The return value section is marked using the @return keyword: Returns all keys matching the given pattern. @@ -59,9 +56,7 @@ keyword: @multi-bulk-reply: all the keys that matched the pattern. - -Styling guidelines ---- +## Styling guidelines Please use the following formatting rules: @@ -69,9 +64,9 @@ Please use the following formatting rules: * Start every sentence on a new line. Luckily, this repository comes with an automated Markdown formatter. -To only reformat the files you have modified, first stage them using -`git add` (this makes sure that your changes won't be lost in case of an -error), then run the formatter: +To only reformat the files you have modified, first stage them using `git add` +(this makes sure that your changes won't be lost in case of an error), then run +the formatter: $ rake format:cached @@ -93,17 +88,18 @@ Installation of par (Ubuntu): sudo apt-get install par -Checking your work ---- +## Checking your work -Once you're done, the very least you should do is make sure that all -files compile properly. You can do this by running Rake inside your -working directory. +Once you're done, the very least you should do is make sure that all files +compile properly. +You can do this by running Rake inside your working directory. 
    $ rake parse

Additionally, if you have [Aspell][han] installed, you can spell check the
documentation:

[han]: http://aspell.net/

    $ rake spellcheck

From cea30f884089d58c63d705941b61d897817ba3ca Mon Sep 17 00:00:00 2001
From: Pieter Noordhuis
Date: Mon, 18 Jun 2012 22:39:22 -0700
Subject: [PATCH 0175/2880] Some small grammatical changes (by @ShawnMilo)

Reapplied to the reformatted tree. Fixes #119.
---
 commands/bgsave.md | 4 +-
 commands/blpop.md | 2 +-
 commands/config get.md | 8 +-
 commands/config set.md | 16 +--
 commands/dbsize.md | 2 +-
 commands/dump.md | 2 +-
 commands/eval.md | 198 ++++++++++++++++++-------------------
 commands/script exists.md | 11 ++-
 commands/script load.md | 2 +-
 topics/data-types-intro.md | 4 +-
 topics/persistence.md | 2 +-
 11 files changed, 125 insertions(+), 126 deletions(-)

diff --git a/commands/bgsave.md b/commands/bgsave.md
index 7a65e2e3b7..4db91e539b 100644
--- a/commands/bgsave.md
+++ b/commands/bgsave.md
@@ -1,7 +1,7 @@
 Save the DB in background.
 The OK code is immediately returned.
-Redis forks, the parent continues to server the clients, the child saves the DB
-on disk then exit.
+Redis forks, the parent continues to serve the clients, the child saves the DB
+on disk then exits.
 A client may be able to check if the operation succeeded using the `LASTSAVE`
 command.

diff --git a/commands/blpop.md b/commands/blpop.md
index 24ed8f24ec..6ae9f1d926 100644
--- a/commands/blpop.md
+++ b/commands/blpop.md
@@ -6,7 +6,7 @@ given keys being checked in the order that they are given.

 ## Non-blocking behavior

-When `BLPOP` is called, if at least one of the specified keys contain a
+When `BLPOP` is called, if at least one of the specified keys contains a
 non-empty list, an element is popped from the head of the list and returned to
 the caller together with the `key` it was popped from.
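The non-blocking `BLPOP` path described in the hunk above (keys checked in the order given, head element popped from the first non-empty list) can be sketched in plain Ruby. The `blpop_nonblocking` helper and the `store` Hash are illustrative stand-ins, not part of Redis.

```ruby
def blpop_nonblocking(store, *keys)
  # Keys are checked in the order that they are given.
  keys.each do |key|
    list = store[key]
    next unless list.is_a?(Array) && !list.empty?
    return [key, list.shift] # pop from the head, return it with its key
  end
  nil # every list is empty or missing: the real command would block here
end

store = { "jobs:low" => [], "jobs:high" => ["a", "b"] }
p blpop_nonblocking(store, "jobs:low", "jobs:high") # => ["jobs:high", "a"]
```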
diff --git a/commands/config get.md b/commands/config get.md index c35cd8af1f..5e248879f5 100644 --- a/commands/config get.md +++ b/commands/config get.md @@ -6,7 +6,7 @@ can read the whole configuration of a server using this command. The symmetric command used to alter the configuration at run time is `CONFIG SET`. -`CONFIG GET` takes a single argument, that is glob style pattern. +`CONFIG GET` takes a single argument, which is a glob-style pattern. All the configuration parameters matching this parameter are reported as a list of key-value pairs. Example: @@ -19,7 +19,7 @@ Example: 5) "set-max-intset-entries" 6) "512" -You can obtain a list of all the supported configuration parameters typing +You can obtain a list of all the supported configuration parameters by typing `CONFIG GET *` in an open `redis-cli` prompt. All the supported parameters have the same meaning of the equivalent @@ -30,9 +30,9 @@ following important differences: * Where bytes or other quantities are specified, it is not possible to use the `redis.conf` abbreviated form (10k 2gb ... and so forth), everything - should be specified as a well formed 64 bit integer, in the base unit of the + should be specified as a well-formed 64-bit integer, in the base unit of the configuration directive. -* The save parameter is a single string of space separated integers. +* The save parameter is a single string of space-separated integers. Every pair of integers represent a seconds/modifications threshold. For instance what in `redis.conf` looks like: diff --git a/commands/config set.md b/commands/config set.md index dc845853c7..b74244e23e 100644 --- a/commands/config set.md +++ b/commands/config set.md @@ -8,8 +8,7 @@ issuing a `CONFIG GET *` command, that is the symmetrical command used to obtain information about the configuration of a running Redis instance. 
All the configuration parameters set using `CONFIG SET` are immediately loaded -by Redis that will start acting as specified starting from the next command -executed. +by Redis and will take effect starting with the next command executed. All the supported parameters have the same meaning of the equivalent configuration parameter used in the [redis.conf][hgcarr22rc] file, with the @@ -19,9 +18,9 @@ following important differences: * Where bytes or other quantities are specified, it is not possible to use the `redis.conf` abbreviated form (10k 2gb ... and so forth), everything - should be specified as a well formed 64 bit integer, in the base unit of the + should be specified as a well-formed 64-bit integer, in the base unit of the configuration directive. -* The save parameter is a single string of space separated integers. +* The save parameter is a single string of space-separated integers. Every pair of integers represent a seconds/modifications threshold. For instance what in `redis.conf` looks like: @@ -33,16 +32,17 @@ that means, save after 900 seconds if there is at least 1 change to the dataset, and after 300 seconds if there are at least 10 changes to the datasets, should be set using `CONFIG SET` as "900 1 300 10". -It is possible to switch persistence from RDB snapshotting to append only file +It is possible to switch persistence from RDB snapshotting to append-only file (and the other way around) using the `CONFIG SET` command. -For more information about how to do that please check [persistence page][tp]. +For more information about how to do that please check the [persistence +page][tp]. 
[tp]: /topics/persistence In general what you should know is that setting the `appendonly` parameter to -`yes` will start a background process to save the initial append only file +`yes` will start a background process to save the initial append-only file (obtained from the in memory data set), and will append all the subsequent -commands on the append only file, thus obtaining exactly the same effect of a +commands on the append-only file, thus obtaining exactly the same effect of a Redis server that started with AOF turned on since the start. You can have both the AOF enabled with RDB snapshotting if you want, the two diff --git a/commands/dbsize.md b/commands/dbsize.md index 8818785166..fe82aa78cb 100644 --- a/commands/dbsize.md +++ b/commands/dbsize.md @@ -1,4 +1,4 @@ -Return the number of keys in the currently selected database. +Return the number of keys in the currently-selected database. @return diff --git a/commands/dump.md b/commands/dump.md index fd92bb9bd6..c07589b481 100644 --- a/commands/dump.md +++ b/commands/dump.md @@ -6,7 +6,7 @@ command. The serialization format is opaque and non-standard, however it has a few semantical characteristics: -* It contains a 64bit checksum that is used to make sure errors will be +* It contains a 64-bit checksum that is used to make sure errors will be detected. The `RESTORE` command makes sure to check the checksum before synthesizing a key using the serialized value. diff --git a/commands/eval.md b/commands/eval.md index 602c30df6e..15df5f6cb1 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -53,9 +53,9 @@ uses should be passed using the KEYS array, in the following way: > eval "return redis.call('set',KEYS[1],'bar')" 1 foo OK -The reason for passing keys in the proper way is that, before of `EVAL` all the -Redis commands could be analyzed before execution in order to establish what are -the keys the command will operate on. 
+The reason for passing keys in the proper way is that, before `EVAL` all the +Redis commands could be analyzed before execution in order to establish what +keys the command will operate on. In order for this to be true for `EVAL` also keys must be explicit. This is useful in many ways, but especially in order to make sure Redis Cluster @@ -73,15 +73,15 @@ protocol using a set of conversion rules. Redis return values are converted into Lua data types when Lua calls a Redis command using call() or pcall(). -Similarly Lua data types are converted into Redis protocol when a Lua script -returns some value, so that scripts can control what `EVAL` will reply to the +Similarly Lua data types are converted into the Redis protocol when a Lua script +returns a value, so that scripts can control what `EVAL` will return to the client. This conversion between data types is designed in a way that if a Redis type is converted into a Lua type, and then the result is converted back into a Redis type, the result is the same as of the initial value. -In other words there is a one to one conversion between Lua and Redis types. +In other words there is a one-to-one conversion between Lua and Redis types. The following table shows you all the conversions rules: **Redis to Lua** conversion table. @@ -102,12 +102,12 @@ The following table shows you all the conversions rules: * Lua table with a single `err` field -> Redis error reply * Lua boolean false -> Redis Nil bulk reply. -There is an additional Lua to Redis conversion rule that has no corresponding +There is an additional Lua-to-Redis conversion rule that has no corresponding Redis to Lua conversion rule: * Lua boolean true -> Redis integer reply with value of 1. 
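The Redis-to-Lua conversion rules can be made concrete with a toy model. The sketch below is only a restatement of the conversion table in plain Python (RESP replies as `(kind, value)` tuples, Lua tables as lists/dicts); it is not how Redis implements the conversion.

```python
def redis_to_lua(kind, value=None):
    """Model the Redis-to-Lua conversion table with Python values."""
    if kind == "integer":
        return value                     # -> Lua number
    if kind == "bulk":
        return value                     # -> Lua string
    if kind == "multi-bulk":             # -> Lua table, converted recursively
        return [redis_to_lua(*item) for item in value]
    if kind == "status":
        return {"ok": value}             # -> table with a single ok field
    if kind == "error":
        return {"err": value}            # -> table with a single err field
    if kind == "nil-bulk":
        return False                     # -> Lua boolean false
    raise ValueError(kind)

print(redis_to_lua("multi-bulk", [("integer", 10), ("nil-bulk",)]))
# [10, False]
```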
-The followings are a few conversion examples: +Here are a few conversion examples: > eval "return 10" 0 (integer) 10 @@ -121,9 +121,9 @@ The followings are a few conversion examples: > eval "return redis.call('get','foo')" 0 "bar" -The last example shows how it is possible to directly return from Lua the return -value of `redis.call()` and `redis.pcall()` with the result of returning exactly -what the called command would return if called directly. +The last example shows how it is possible to receive the exact return value of +`redis.call()` or `redis.pcall()` from Lua that would be returned if the command +was called directly. ## Atomicity of scripts @@ -141,9 +141,9 @@ is running no other client can execute commands since the server is busy. ## Error handling -As already stated calls to `redis.call()` resulting into a Redis command error -will stop the execution of the script and will return that error back, in a way -that makes it obvious that the error was generated by a script: +As already stated, calls to `redis.call()` resulting in a Redis command error +will stop the execution of the script and will return the error, in a way that +makes it obvious that the error was generated by a script: > del foo (integer) 1 @@ -154,8 +154,8 @@ that makes it obvious that the error was generated by a script: Using the `redis.pcall()` command no error is raised, but an error object is returned in the format specified above (as a Lua table with an `err` field). -The user can later return this exact error to the user just returning the error -object returned by `redis.pcall()`. +The script can pass the exact error to the user by returning the error object +returned by `redis.pcall()`. ## Bandwidth and EVALSHA @@ -164,7 +164,7 @@ Redis does not need to recompile the script every time as it uses an internal caching mechanism, however paying the cost of the additional bandwidth may not be optimal in many contexts. 
-On the other hand defining commands using a special command or via `redis.conf` +On the other hand, defining commands using a special command or via `redis.conf` would be a problem for a few reasons: * Different instances may have different versions of a command implementation. @@ -175,18 +175,18 @@ would be a problem for a few reasons: * Reading an application code the full semantic could not be clear since the application would call commands defined server side. -In order to avoid the above three problems and at the same time don't incur in -the bandwidth penalty, Redis implements the `EVALSHA` command. +In order to avoid these problems while avoiding the bandwidth penalty, Redis +implements the `EVALSHA` command. -`EVALSHA` works exactly as `EVAL`, but instead of having a script as first -argument it has the SHA1 sum of a script. +`EVALSHA` works exactly like `EVAL`, but instead of having a script as the first +argument it has the SHA1 digest of a script. The behavior is the following: -* If the server still remembers a script whose SHA1 sum was the one specified, - the script is executed. +* If the server still remembers a script with a matching SHA1 digest, the script + is executed. -* If the server does not remember a script with this SHA1 sum, a special error - is returned that will tell the client to use `EVAL` instead. +* If the server does not remember a script with this SHA1 digest, a special + error is returned telling the client to use `EVAL` instead. Example: @@ -200,11 +200,11 @@ Example: (error) `NOSCRIPT` No matching script. Please use `EVAL`. The client library implementation can always optimistically send `EVALSHA` under -the hoods even when the client actually called `EVAL`, in the hope the script -was already seen by the server. +the hood even when the client actually calls `EVAL`, in the hope the script was +already seen by the server. If the `NOSCRIPT` error is returned `EVAL` will be used instead. 
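The optimistic-`EVALSHA` strategy a client library can use is easy to sketch with a toy in-memory stand-in for the server's script cache. Everything here (`FakeServer`, the return strings) is invented for the illustration; a real client would talk RESP to Redis and catch the `NOSCRIPT` error instead.

```python
import hashlib

class FakeServer:
    """Toy stand-in for the server-side script cache (not real Redis)."""
    def __init__(self):
        self.cache = {}

    def eval(self, script):
        sha = hashlib.sha1(script.encode()).hexdigest()
        self.cache[sha] = script           # EVAL also populates the cache
        return "ran:" + sha[:6]

    def evalsha(self, sha):
        if sha not in self.cache:
            raise KeyError("NOSCRIPT No matching script. Please use EVAL.")
        return "ran:" + sha[:6]

def optimistic_eval(server, script):
    """Try EVALSHA first; fall back to EVAL on a NOSCRIPT error."""
    sha = hashlib.sha1(script.encode()).hexdigest()
    try:
        return server.evalsha(sha)
    except KeyError:
        return server.eval(script)

server = FakeServer()
optimistic_eval(server, "return 1")   # first call falls back to EVAL
optimistic_eval(server, "return 1")   # second call succeeds via EVALSHA
```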
-Passing keys and arguments as `EVAL` additional arguments is also very useful in +Passing keys and arguments as additional `EVAL` arguments is also very useful in this context as the script string remains constant and can be efficiently cached by Redis. @@ -214,27 +214,26 @@ Executed scripts are guaranteed to be in the script cache **forever**. This means that if an `EVAL` is performed against a Redis instance all the subsequent `EVALSHA` calls will succeed. -The only way to flush the script cache is by explicitly calling the SCRIPT FLUSH -command, that will _completely flush_ the scripts cache removing all the scripts -executed so far. +The only way to flush the script cache is by explicitly calling the SCRIPT +FLUSH command, which will _completely flush_ the scripts cache removing all the +scripts executed so far. This is usually needed only when the instance is going to be instantiated for another customer or application in a cloud environment. The reason why scripts can be cached for long time is that it is unlikely for -a well written application to have so many different scripts to create memory +a well written application to have enough different scripts to cause memory problems. Every script is conceptually like the implementation of a new command, and even -a large application will likely have just a few hundreds of that. -Even if the application is modified many times and scripts will change, still -the memory used is negligible. +a large application will likely have just a few hundred of them. +Even if the application is modified many times and scripts will change, the +memory used is negligible. The fact that the user can count on Redis not removing scripts is semantically a very good thing. 
-For instance an application taking a persistent connection to Redis can stay -sure that if a script was sent once it is still in memory, thus for instance can -use EVALSHA against those scripts in a pipeline without the chance that an error -will be generated since the script is not known (we'll see this problem in its -details later). +For instance an application with a persistent connection to Redis can be sure +that if a script was sent once it is still in memory, so EVALSHA can be used +against those scripts in a pipeline without the chance of an error being +generated due to an unknown script (we'll see this problem in detail later). ## The SCRIPT command @@ -244,9 +243,9 @@ SCRIPT currently accepts three different commands: * SCRIPT FLUSH. This command is the only way to force Redis to flush the scripts cache. - It is mostly useful in a cloud environment where the same instance can be + It is most useful in a cloud environment where the same instance can be reassigned to a different user. - It is also useful for testing client libraries implementations of the + It is also useful for testing client libraries' implementations of the scripting feature. * SCRIPT EXISTS _sha1_ _sha2_... _shaN_. @@ -263,59 +262,59 @@ SCRIPT currently accepts three different commands: operation), without the need to actually execute the script. * SCRIPT KILL. - This command is the only wait to interrupt a long running script that reached + This command is the only way to interrupt a long-running script that reaches the configured maximum execution time for scripts. - The SCRIPT KILL command can only be used with scripts that did not modified - the dataset during their execution (since stopping a read only script does not - violate the scripting engine guaranteed atomicity). 
+ The SCRIPT KILL command can only be used with scripts that did not modify the + dataset during their execution (since stopping a read-only script does not + violate the scripting engine's guaranteed atomicity). See the next sections for more information about long running scripts. ## Scripts as pure functions A very important part of scripting is writing scripts that are pure functions. -Scripts executed in a Redis instance are replicated on slaves sending the same -script, instead of the resulting commands. +Scripts executed in a Redis instance are replicated on slaves by sending the +script -- not the resulting commands. The same happens for the Append Only File. -The reason is that scripts are much faster than sending commands one after the -other to a Redis instance, so if the client is taking the master very busy -sending scripts, turning this scripts into single commands for the slave / AOF -would result in too much bandwidth for the replication link or the Append Only -File (and also too much CPU since dispatching a command received via network -is a lot more work for Redis compared to dispatching a command invoked by Lua -scripts). +The reason is that sending a script to another Redis instance is much +faster than sending the multiple commands the script generates, so if the +client is sending many scripts to the master, converting the scripts into +individual commands for the slave / AOF would result in too much bandwidth +for the replication link or the Append Only File (and also too much CPU since +dispatching a command received via network is a lot more work for Redis compared +to dispatching a command invoked by Lua scripts). The only drawback with this approach is that scripts are required to have the following property: * The script always evaluates the same Redis _write_ commands with the same arguments given the same input data set. 
- Operations performed by the script cannot depend on any hidden (non explicit) + Operations performed by the script cannot depend on any hidden (non-explicit) information or state that may change as script execution proceeds or between different executions of the script, nor can it depend on any external input from I/O devices. Things like using the system time, calling Redis random commands like `RANDOMKEY`, or using Lua random number generator, could result into scripts -that will not evaluate always in the same way. +that will not always evaluate in the same way. In order to enforce this behavior in scripts Redis does the following: * Lua does not export commands to access the system time or other external state. -* Redis will block the script with an error if a script will call a Redis +* Redis will block the script with an error if a script calls a Redis command able to alter the data set **after** a Redis _random_ command like `RANDOMKEY`, `SRANDMEMBER`, `TIME`. - This means that if a script is read only and does not modify the data set it + This means that if a script is read-only and does not modify the data set it is free to call those commands. - Note that a _random command_ does not necessarily identifies a command that - uses random numbers: any non deterministic command is considered a random - command (the best example in this regard is the `TIME` command). + Note that a _random command_ does not necessarily mean a command that uses + random numbers: any non-deterministic command is considered a random command + (the best example in this regard is the `TIME` command). * Redis commands that may return elements in random order, like `SMEMBERS` (because Redis Sets are _unordered_) have a different behavior when called - from Lua, and undergone a silent lexicographical sorting filter before - returning data to Lua scripts. + from Lua, and undergo a silent lexicographical sorting filter before returning + data to Lua scripts. 
So `redis.call("smembers",KEYS[1])` will always return the Set elements in the same order, while the same command invoked from normal clients may return different results even if the key contains exactly the same elements. @@ -326,12 +325,12 @@ In order to enforce this behavior in scripts Redis does the following: This means that calling `math.random` will always generate the same sequence of numbers every time a script is executed if `math.randomseed` is not used. -However the user is still able to write commands with random behaviors using the +However the user is still able to write commands with random behavior using the following simple trick. Imagine I want to write a Redis script that will populate a list with N random integers. -I can start writing the following script, using a small Ruby program: +I can start with this small Ruby program: require 'rubygems' require 'redis' @@ -366,11 +365,11 @@ following elements: 9) "0.74990198051087" 10) "0.17082803611217" -In order to make it a pure function, but still making sure that every invocation +In order to make it a pure function, but still be sure that every invocation of the script will result in different random elements, we can simply add an -additional argument to the script, that will be used in order to seed the Lua -pseudo random number generator. -The new script will be like the following: +additional argument to the script that will be used in order to seed the Lua +pseudo-random number generator. +The new script is as follows: RandomPushScript = < eval 'a=10' 0 @@ -415,10 +414,10 @@ returns with an error: Accessing a _non existing_ global variable generates a similar error. -Using Lua debugging functionalities or other approaches like altering the meta -table used to implement global protections, in order to circumvent globals -protection, is not hard. -However it is hardly possible to do it accidentally. 
+Using Lua debugging functionality or other approaches like altering the meta
+table used to implement global protections in order to circumvent globals
+protection is not hard.
+However it is difficult to do it accidentally.
 
 If the user messes with the Lua global state, the consistency of AOF and
 replication is not guaranteed: don't do it.
@@ -440,7 +439,7 @@ The Redis Lua interpreter loads the following Lua libraries:
 
 Every Redis instance is _guaranteed_ to have all the above libraries so you can
 be sure that the environment for your Redis scripts is always the same.
 
-The CJSON library allows to manipulate JSON data in a very fast way from Lua.
+The CJSON library provides extremely fast JSON manipulation within Lua.
 All the other libraries are standard Lua libraries.
 
 ## Emitting Redis logs from scripts
@@ -457,7 +456,7 @@ It is possible to write to the Redis log file from Lua scripts using the
 
 * `redis.LOG_NOTICE`
 * `redis.LOG_WARNING`
 
-They exactly correspond to the normal Redis log levels.
+They correspond directly to the normal Redis log levels.
 
 Only logs emitted by scripting using a log level that is equal or greater than
 the currently configured Redis instance log level will be emitted.
@@ -472,42 +471,41 @@ Will generate the following:
 
 ## Sandbox and maximum execution time
 
-Scripts should never try to access the external system, like the file system,
-nor calling any other system call.
-A script should just do its work operating on Redis data and passed arguments.
+Scripts should never try to access the external system, like the file system or
+any other system call.
+A script should only operate on Redis data and passed arguments.
 
 Scripts are also subject to a maximum execution time (five seconds by default).
-This default timeout is huge since a script should run usually in a sub
-millisecond amount of time.
-The limit is mostly needed in order to avoid problems when developing scripts
-that may loop forever for a programming error.
+This default timeout is huge since a script should usually run in under a +millisecond. +The limit is mostly to handle accidental infinite loops created during +development. It is possible to modify the maximum time a script can be executed with -milliseconds precision, either via `redis.conf` or using the CONFIG GET / CONFIG +millisecond precision, either via `redis.conf` or using the CONFIG GET / CONFIG SET command. The configuration parameter affecting max execution time is called `lua-time-limit`. When a script reaches the timeout it is not automatically terminated by Redis since this violates the contract Redis has with the scripting engine to ensure -that scripts are atomic in nature. -Stopping a script half-way means to possibly leave the dataset with half-written -data inside. +that scripts are atomic. +Interrupting a script means potentially leaving the dataset with half-written +data. For this reasons when a script executes for more than the specified time the following happens: -* Redis logs that a script that is running for too much time is still in - execution. +* Redis logs that a script is running too long. * It starts accepting commands again from other clients, but will reply with a BUSY error to all the clients sending normal commands. The only allowed commands in this status are `SCRIPT KILL` and `SHUTDOWN NOSAVE`. -* It is possible to terminate a script that executed only read-only commands +* It is possible to terminate a script that executes only read-only commands using the `SCRIPT KILL` command. - This does not violate the scripting semantic as no data was yet written on the + This does not violate the scripting semantic as no data was yet written to the dataset by the script. 
* If the script already called write commands the only allowed command becomes - `SHUTDOWN NOSAVE` that stops the server not saving the current data set on + `SHUTDOWN NOSAVE` that stops the server without saving the current data set on disk (basically the server is aborted). ## EVALSHA in the context of pipelining @@ -525,5 +523,5 @@ The client library implementation should take one of the following approaches: * Accumulate all the commands to send into the pipeline, then check for `EVAL` commands and use the `SCRIPT EXISTS` command to check if all the scripts are already defined. - If not add `SCRIPT LOAD` commands on top of the pipeline as required, and use + If not, add `SCRIPT LOAD` commands on top of the pipeline as required, and use `EVALSHA` for all the `EVAL` calls. diff --git a/commands/script exists.md b/commands/script exists.md index b1f7cf7e37..435b4150a0 100644 --- a/commands/script exists.md +++ b/commands/script exists.md @@ -1,7 +1,8 @@ Returns information about the existence of the scripts in the script cache. -This command accepts one or more SHA1 sums and returns a list of ones or zeros -to signal if the scripts are already defined or not inside the script cache. +This command accepts one or more SHA1 digests and returns a list of ones or +zeros to signal if the scripts are already defined or not inside the script +cache. This can be useful before a pipelining operation to ensure that scripts are loaded (and if not, to load them using `SCRIPT LOAD`) so that the pipelining operation can be performed solely using `EVALSHA` instead of `EVAL` to save @@ -13,9 +14,9 @@ Lua scripting. @return @multi-bulk-reply The command returns an array of integers that correspond to -the specified SHA1 sum arguments. -For every corresponding SHA1 sum of a script that actually exists in the script -cache, an 1 is returned, otherwise 0 is returned. +the specified SHA1 digest arguments. 
+For every corresponding SHA1 digest of a script that actually exists in the
+script cache, a 1 is returned, otherwise 0 is returned.
 
 @example
 
diff --git a/commands/script load.md b/commands/script load.md
index 4709695bec..7adbd4e890 100644
--- a/commands/script load.md
+++ b/commands/script load.md
@@ -14,5 +14,5 @@ Lua scripting.
 
 @return
 
-@bulk-reply This command returns the SHA1 sum of the script added into the
+@bulk-reply This command returns the SHA1 digest of the script added into the
 script cache.
diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md
index 5be69f87bb..3feb1024f2 100644
--- a/topics/data-types-intro.md
+++ b/topics/data-types-intro.md
@@ -293,7 +293,7 @@ Our first attempt (that is broken) can be the following. Let's suppose we want
 to get a unique ID for the tag "redis":
 
 * In order to make this algorithm binary safe (they are just tags but think to
-  utf8, spaces and so forth) we start performing the SHA1 sum of the tag.
+  utf8, spaces and so forth) we start performing the SHA1 digest of the tag.
   SHA1(redis) = b840fc02d524045429941cc15f59e41cb7be6c52.
 * Let's check if this tag is already associated with a unique ID with the
   command *GET tag:b840fc02d524045429941cc15f59e41cb7be6c52:id*.
@@ -313,7 +313,7 @@ return the wrong ID to the caller.
 
 To fix the algorithm is not hard fortunately, and this is the sane version:
 
 * In order to make this algorithm binary safe (they are just tags but think to
-  utf8, spaces and so forth) we start performing the SHA1 sum of the tag.
+  utf8, spaces and so forth) we start performing the SHA1 digest of the tag.
   SHA1(redis) = b840fc02d524045429941cc15f59e41cb7be6c52.
 * Let's check if this tag is already associated with a unique ID with the
   command *GET tag:b840fc02d524045429941cc15f59e41cb7be6c52:id*.
diff --git a/topics/persistence.md b/topics/persistence.md
index 8d6df85ec5..2e4dd67b33 100644
--- a/topics/persistence.md
+++ b/topics/persistence.md
@@ -276,7 +276,7 @@ for best results.
It is important to understand that this systems can easily fail if not coded in the right way. At least make absolutely sure that after the transfer is completed you are able to verify the file size (that should match the one of -the file you copied) and possibly the SHA1 sum if you are using a VPS. +the file you copied) and possibly the SHA1 digest if you are using a VPS. You also need some kind of independent alert system if the transfer of fresh backups is not working for some reason. From 815c0cc5d75623614796f18416eb489f232ca871 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 25 Jun 2012 09:00:02 -0700 Subject: [PATCH 0176/2880] Add descriptions to reformat tasks --- Rakefile | 3 +++ 1 file changed, 3 insertions(+) diff --git a/Rakefile b/Rakefile index 5ee65eeaaa..14b7f7b12b 100644 --- a/Rakefile +++ b/Rakefile @@ -61,16 +61,19 @@ namespace :format do STDOUT.puts end + desc "Reformat single file" task :file, :path do |t, args| format(args[:path]) end + desc "Reformat changes staged for commit" task :cached do `git diff --cached --name-only -- commands/`.split.each do |path| format(path) end end + desc "Reformat everything" task :all do Dir["commands/*.md"].each do |path| format(path) From 02077873aaeae35ebcceddd096c6dd6c9472a0d0 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 25 Jun 2012 09:04:22 -0700 Subject: [PATCH 0177/2880] Add tiny EVALSHA doc --- commands/evalsha.md | 3 +++ 1 file changed, 3 insertions(+) create mode 100644 commands/evalsha.md diff --git a/commands/evalsha.md b/commands/evalsha.md new file mode 100644 index 0000000000..a87de0de98 --- /dev/null +++ b/commands/evalsha.md @@ -0,0 +1,3 @@ +Evaluates a script cached on the server side by its SHA1 digest. +Scripts are cached on the server side using the `SCRIPT LOAD` command. +The command is otherwise identical to `EVAL`. 
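The digest `EVALSHA` expects is the plain SHA1 of the script body, which a client can compute locally. A short sketch using Python's `hashlib`; the digest printed for the string "redis" is the same one quoted in the data-types-intro tag example, and the script body is an arbitrary illustration.

```python
import hashlib

def sha1_hex(data: str) -> str:
    """Hex-encoded SHA1 digest, the form EVALSHA and SCRIPT LOAD work with."""
    return hashlib.sha1(data.encode("utf8")).hexdigest()

# Same digest data-types-intro quotes for the tag "redis":
print(sha1_hex("redis"))
# b840fc02d524045429941cc15f59e41cb7be6c52

# For a script, this 40-character digest is what EVALSHA takes as its
# first argument in place of the script body.
script = "return redis.call('get', KEYS[1])"  # arbitrary example script
print(len(sha1_hex(script)))  # 40
```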
From 2342c59fff821d82692cf46817f5baee9a19ace1 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 25 Jun 2012 10:44:40 -0700 Subject: [PATCH 0178/2880] Indent lists with multi-paragraph items 4 spaces This is in line with the Markdown specification. Switch to RedCarpet parser. --- commands/eval.md | 134 +++++++++++++++++++++++----------------------- commands/hdel.md | 8 +-- commands/info.md | 49 ++++++++--------- commands/setnx.md | 41 +++++++------- remarkdown.rb | 89 ++++++++++++++++++++---------- 5 files changed, 178 insertions(+), 143 deletions(-) diff --git a/commands/eval.md b/commands/eval.md index 15df5f6cb1..7a5312c9bb 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -167,13 +167,13 @@ be optimal in many contexts. On the other hand, defining commands using a special command or via `redis.conf` would be a problem for a few reasons: -* Different instances may have different versions of a command implementation. +* Different instances may have different versions of a command implementation. -* Deployment is hard if there is to make sure all the instances contain a given - command, especially in a distributed environment. +* Deployment is hard if there is to make sure all the instances contain a + given command, especially in a distributed environment. -* Reading an application code the full semantic could not be clear since the - application would call commands defined server side. +* Reading an application code the full semantic could not be clear since the + application would call commands defined server side. In order to avoid these problems while avoiding the bandwidth penalty, Redis implements the `EVALSHA` command. @@ -182,11 +182,11 @@ implements the `EVALSHA` command. argument it has the SHA1 digest of a script. The behavior is the following: -* If the server still remembers a script with a matching SHA1 digest, the script - is executed. +* If the server still remembers a script with a matching SHA1 digest, the + script is executed. 
-* If the server does not remember a script with this SHA1 digest, a special - error is returned telling the client to use `EVAL` instead. +* If the server does not remember a script with this SHA1 digest, a special + error is returned telling the client to use `EVAL` instead. Example: @@ -241,33 +241,33 @@ Redis offers a SCRIPT command that can be used in order to control the scripting subsystem. SCRIPT currently accepts three different commands: -* SCRIPT FLUSH. - This command is the only way to force Redis to flush the scripts cache. - It is most useful in a cloud environment where the same instance can be - reassigned to a different user. - It is also useful for testing client libraries' implementations of the - scripting feature. - -* SCRIPT EXISTS _sha1_ _sha2_... _shaN_. - Given a list of SHA1 digests as arguments this command returns an array of - 1 or 0, where 1 means the specific SHA1 is recognized as a script already - present in the scripting cache, while 0 means that a script with this SHA1 - was never seen before (or at least never seen after the latest SCRIPT FLUSH - command). - -* SCRIPT LOAD _script_. - This command registers the specified script in the Redis script cache. - The command is useful in all the contexts where we want to make sure that - `EVALSHA` will not fail (for instance during a pipeline or MULTI/EXEC - operation), without the need to actually execute the script. - -* SCRIPT KILL. - This command is the only way to interrupt a long-running script that reaches - the configured maximum execution time for scripts. - The SCRIPT KILL command can only be used with scripts that did not modify the - dataset during their execution (since stopping a read-only script does not - violate the scripting engine's guaranteed atomicity). - See the next sections for more information about long running scripts. +* SCRIPT FLUSH. + This command is the only way to force Redis to flush the scripts cache. 
+ It is most useful in a cloud environment where the same instance can be + reassigned to a different user. + It is also useful for testing client libraries' implementations of the + scripting feature. + +* SCRIPT EXISTS _sha1_ _sha2_... _shaN_. + Given a list of SHA1 digests as arguments this command returns an array of + 1 or 0, where 1 means the specific SHA1 is recognized as a script already + present in the scripting cache, while 0 means that a script with this SHA1 + was never seen before (or at least never seen after the latest SCRIPT FLUSH + command). + +* SCRIPT LOAD _script_. + This command registers the specified script in the Redis script cache. + The command is useful in all the contexts where we want to make sure that + `EVALSHA` will not fail (for instance during a pipeline or MULTI/EXEC + operation), without the need to actually execute the script. + +* SCRIPT KILL. + This command is the only way to interrupt a long-running script that reaches + the configured maximum execution time for scripts. + The SCRIPT KILL command can only be used with scripts that did not modify + the dataset during their execution (since stopping a read-only script does + not violate the scripting engine's guaranteed atomicity). + See the next sections for more information about long running scripts. ## Scripts as pure functions @@ -299,31 +299,31 @@ that will not always evaluate in the same way. In order to enforce this behavior in scripts Redis does the following: -* Lua does not export commands to access the system time or other external - state. - -* Redis will block the script with an error if a script calls a Redis - command able to alter the data set **after** a Redis _random_ command like - `RANDOMKEY`, `SRANDMEMBER`, `TIME`. - This means that if a script is read-only and does not modify the data set it - is free to call those commands. 
- Note that a _random command_ does not necessarily mean a command that uses - random numbers: any non-deterministic command is considered a random command - (the best example in this regard is the `TIME` command). - -* Redis commands that may return elements in random order, like `SMEMBERS` - (because Redis Sets are _unordered_) have a different behavior when called - from Lua, and undergo a silent lexicographical sorting filter before returning - data to Lua scripts. - So `redis.call("smembers",KEYS[1])` will always return the Set elements in - the same order, while the same command invoked from normal clients may return - different results even if the key contains exactly the same elements. - -* Lua pseudo random number generation functions `math.random` and - `math.randomseed` are modified in order to always have the same seed every - time a new script is executed. - This means that calling `math.random` will always generate the same sequence - of numbers every time a script is executed if `math.randomseed` is not used. +* Lua does not export commands to access the system time or other external + state. + +* Redis will block the script with an error if a script calls a Redis + command able to alter the data set **after** a Redis _random_ command like + `RANDOMKEY`, `SRANDMEMBER`, `TIME`. + This means that if a script is read-only and does not modify the data set it + is free to call those commands. + Note that a _random command_ does not necessarily mean a command that uses + random numbers: any non-deterministic command is considered a random command + (the best example in this regard is the `TIME` command). + +* Redis commands that may return elements in random order, like `SMEMBERS` + (because Redis Sets are _unordered_) have a different behavior when called + from Lua, and undergo a silent lexicographical sorting filter before + returning data to Lua scripts. 
+ So `redis.call("smembers",KEYS[1])` will always return the Set elements + in the same order, while the same command invoked from normal clients may + return different results even if the key contains exactly the same elements. + +* Lua pseudo random number generation functions `math.random` and + `math.randomseed` are modified in order to always have the same seed every + time a new script is executed. + This means that calling `math.random` will always generate the same sequence + of numbers every time a script is executed if `math.randomseed` is not used. However the user is still able to write commands with random behavior using the following simple trick. @@ -518,10 +518,10 @@ later otherwise the order of execution is violated. The client library implementation should take one of the following approaches: -* Always use plain `EVAL` when in the context of a pipeline. +* Always use plain `EVAL` when in the context of a pipeline. -* Accumulate all the commands to send into the pipeline, then check for `EVAL` - commands and use the `SCRIPT EXISTS` command to check if all the scripts are - already defined. - If not, add `SCRIPT LOAD` commands on top of the pipeline as required, and use - `EVALSHA` for all the `EVAL` calls. +* Accumulate all the commands to send into the pipeline, then check for `EVAL` + commands and use the `SCRIPT EXISTS` command to check if all the scripts are + already defined. + If not, add `SCRIPT LOAD` commands on top of the pipeline as required, and + use `EVALSHA` for all the `EVAL` calls. diff --git a/commands/hdel.md b/commands/hdel.md index 5581851dce..2ce85be703 100644 --- a/commands/hdel.md +++ b/commands/hdel.md @@ -10,11 +10,11 @@ including specified but non existing fields. @history -* `>= 2.4`: Accepts multiple `field` arguments. - Redis versions older than 2.4 can only remove a field per call. +* `>= 2.4`: Accepts multiple `field` arguments. + Redis versions older than 2.4 can only remove a field per call. 
- To remove multiple fields from a hash in an atomic fashion in earlier - versions, use a `MULTI` / `EXEC` block. + To remove multiple fields from a hash in an atomic fashion in earlier + versions, use a `MULTI` / `EXEC` block. @examples diff --git a/commands/info.md b/commands/info.md index 1b65229439..b234350d88 100644 --- a/commands/info.md +++ b/commands/info.md @@ -22,29 +22,30 @@ All the fields are in the form of `field:value` terminated by `\r\n`. ## Notes -* `used_memory` is the total number of bytes allocated by Redis using its - allocator (either standard `libc` `malloc`, or an alternative allocator such - as [`tcmalloc`][hcgcpgp] - -* `used_memory_rss` is the number of bytes that Redis allocated as seen by the - operating system. - Optimally, this number is close to `used_memory` and there is little memory - fragmentation. - This is the number reported by tools such as `top` and `ps`. - A large difference between these numbers means there is memory fragmentation. - Because Redis does not have control over how its allocations are mapped to - memory pages, `used_memory_rss` is often the result of a spike in memory - usage. - The ratio between `used_memory_rss` and `used_memory` is given as - `mem_fragmentation_ratio`. - -* `changes_since_last_save` refers to the number of operations that produced - some kind of change in the dataset since the last time either `SAVE` or - `BGSAVE` was called. - -* `allocation_stats` holds a histogram containing the number of allocations of a - certain size (up to 256). - This provides a means of introspection for the type of allocations performed - by Redis at run time. +* `used_memory` is the total number of bytes allocated by Redis using its + allocator (either standard `libc` `malloc`, or an alternative allocator such + as [`tcmalloc`][hcgcpgp] + +* `used_memory_rss` is the number of bytes that Redis allocated as seen by the + operating system. 
+ Optimally, this number is close to `used_memory` and there is little memory + fragmentation. + This is the number reported by tools such as `top` and `ps`. + A large difference between these numbers means there is memory + fragmentation. + Because Redis does not have control over how its allocations are mapped to + memory pages, `used_memory_rss` is often the result of a spike in memory + usage. + The ratio between `used_memory_rss` and `used_memory` is given as + `mem_fragmentation_ratio`. + +* `changes_since_last_save` refers to the number of operations that produced + some kind of change in the dataset since the last time either `SAVE` or + `BGSAVE` was called. + +* `allocation_stats` holds a histogram containing the number of allocations of + a certain size (up to 256). + This provides a means of introspection for the type of allocations performed + by Redis at run time. [hcgcpgp]: http://code.google.com/p/google-perftools/ diff --git a/commands/setnx.md b/commands/setnx.md index df607d33c8..d3437d3c9d 100644 --- a/commands/setnx.md +++ b/commands/setnx.md @@ -57,25 +57,28 @@ multiple clients detected an expired lock and are trying to release it. Fortunately, it's possible to avoid this issue using the following algorithm. Let's see how C4, our sane client, uses the good algorithm: -* C4 sends `SETNX lock.foo` in order to acquire the lock -* The crashed client C3 still holds it, so Redis will reply with `0` to C4. -* C4 sends `GET lock.foo` to check if the lock expired. - If it is not, it will sleep for some time and retry from the start. -* Instead, if the lock is expired because the Unix time at `lock.foo` is older - than the current Unix time, C4 tries to perform: - - GETSET lock.foo - -* Because of the `GETSET` semantic, C4 can check if the old value stored at - `key` is still an expired timestamp. - If it is, the lock was acquired. 
- -* If another client, for instance C5, was faster than C4 and acquired the lock - with the `GETSET` operation, the C4 `GETSET` operation will return a non - expired timestamp. - C4 will simply restart from the first step. - Note that even if C4 set the key a bit a few seconds in the future this is not - a problem. +* C4 sends `SETNX lock.foo` in order to acquire the lock + +* The crashed client C3 still holds it, so Redis will reply with `0` to C4. + +* C4 sends `GET lock.foo` to check if the lock expired. + If it is not, it will sleep for some time and retry from the start. + +* Instead, if the lock is expired because the Unix time at `lock.foo` is older + than the current Unix time, C4 tries to perform: + + GETSET lock.foo + +* Because of the `GETSET` semantic, C4 can check if the old value stored at + `key` is still an expired timestamp. + If it is, the lock was acquired. + +* If another client, for instance C5, was faster than C4 and acquired the lock + with the `GETSET` operation, the C4 `GETSET` operation will return a non + expired timestamp. + C4 will simply restart from the first step. + Note that even if C4 set the key a few seconds in the future this is + not a problem.
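The C4 steps above translate almost directly into code. The following is a hypothetical sketch — `FakeRedis`, `acquire_lock`, and the numeric timestamps are all invented for illustration — that runs the same `SETNX` / `GET` / `GETSET` sequence against a tiny in-memory stand-in rather than a real Redis server:

```ruby
# Minimal in-memory stand-in for the three Redis commands the locking
# pattern uses (SETNX, GET, GETSET). A real client would issue these
# against a Redis server; the class exists only so the control flow
# below can run on its own.
class FakeRedis
  def initialize
    @db = {}
  end

  # SET-if-Not-eXists: returns 1 when the key was set, 0 when it existed.
  def setnx(key, value)
    return 0 if @db.key?(key)
    @db[key] = value
    1
  end

  def get(key)
    @db[key]
  end

  # Atomically store a new value and return the previous one.
  def getset(key, value)
    old = @db[key]
    @db[key] = value
    old
  end
end

# C4's algorithm from the steps above: try SETNX first; if the key exists,
# check whether the stored expiry time has passed, and if so race for the
# lock with GETSET, winning only when the displaced value was still expired.
def acquire_lock(redis, key, now, ttl = 10)
  return true if redis.setnx(key, now + ttl) == 1
  old = redis.get(key)
  return false if old.nil? || old.to_i > now  # lock is still held and valid
  displaced = redis.getset(key, now + ttl)
  displaced.to_i <= now                       # false means another client won the GETSET race
end

r = FakeRedis.new
r.setnx("lock.foo", 50)                # crashed client C3 left a lock that expired at time 50
puts acquire_lock(r, "lock.foo", 100)  # prints true: the expired lock is taken over
puts acquire_lock(r, "lock.foo", 100)  # prints false: the lock is now valid until 110
```

The detail the sketch preserves is the `GETSET` race check: a client may consider the lock acquired only if the value it displaced was itself an expired timestamp.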
**Important note**: In order to make this locking algorithm more robust, a client holding a lock should always check the timeout didn't expire before diff --git a/remarkdown.rb b/remarkdown.rb index 3a18e1b9a1..d2a5ec66bb 100644 --- a/remarkdown.rb +++ b/remarkdown.rb @@ -1,4 +1,4 @@ -require "rdiscount" +require "redcarpet" require "nokogiri" class ReMarkdown @@ -6,8 +6,15 @@ class ReMarkdown attr_reader :xml def initialize(input) - html = RDiscount.new(input).to_html - @xml = Nokogiri::XML::Document.parse("#{html}") + render = Redcarpet::Render::HTML.new \ + :filter_html => true + + markdown = Redcarpet::Markdown.new render, \ + :no_intra_emphasis => true, + :fenced_code_blocks => true, + :superscript => true + + @xml = Nokogiri::XML::Document.parse("#{markdown.render(input)}") @links = [] @indent = 0 @@ -41,24 +48,6 @@ def flush_links rv end - def format_nodes(nodes) - if nodes.any? { |node| block_nodes.include?(node.name) } - format_block_nodes(nodes) - else - format_inline_nodes(nodes) + "\n" - end - end - - def block_nodes - ["p", "pre"] - end - - def format_block_nodes(nodes) - nodes.map do |node| - format_block_node(node) - end.join("\n") + "\n" - end - def format_block_node(node) case node.name.downcase when "h1", "h2", "h3", "h4", "h5", "h6" @@ -144,15 +133,49 @@ def format_header(node) str end + def block_nodes + ["p", "pre"] + end + + def detect_block_in_li(nodes) + nodes.detect do |node| + node.name.downcase == "li" && + node.children.any? { |node| block_nodes.include?(node.name) } + end + end + + def format_li_children(nodes, has_block) + if nodes.any? 
{ |node| block_nodes.include?(node.name) } + result = nodes.map do |node| + format_block_node(node) + end.join("\n") + "\n" + else + result = format_inline_nodes(nodes) + "\n" + + # Add extra newline when ul/ol contains a multi-line li + result += "\n" if has_block + + result + end + end + def format_ul(node) + has_block = detect_block_in_li(node.children) + children = node.children.map do |child| next unless child.name.downcase == "li" - @indent += 2 - txt = format_nodes(child.children) - @indent -= 2 + if has_block + indent = 4 + else + indent = 2 + end + + @indent += indent + txt = format_li_children(child.children, has_block) + @indent -= indent - txt = indent(txt, 2) + txt = indent(txt, indent) txt[0] = "*" txt @@ -167,16 +190,24 @@ def format_ol(node) @ol_depth += 1 @ol_index[@ol_depth] = 0 + has_block = detect_block_in_li(node.children) + children = node.children.map do |child| next unless child.name.downcase == "li" @ol_index[@ol_depth] += 1 - @indent += 3 - txt = format_nodes(child.children) - @indent -= 3 + if has_block + indent = 4 + else + indent = 3 + end + + @indent += indent + txt = format_li_children(child.children, has_block) + @indent -= indent - txt = indent(txt, 3) + txt = indent(txt, indent) txt[0, 2] = "%d." 
% @ol_index[@ol_depth] txt From 66b17c2ab00ca9fa27b751c73a4d7ef05e740d09 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 25 Jun 2012 11:42:16 -0700 Subject: [PATCH 0179/2880] Format code blocks with triple backtick --- commands/append.md | 26 +++--- commands/bitcount.md | 11 ++- commands/bitop.md | 11 ++- commands/blpop.md | 42 +++++---- commands/brpop.md | 16 ++-- commands/config get.md | 22 +++-- commands/config set.md | 6 +- commands/decr.md | 11 ++- commands/decrby.md | 7 +- commands/del.md | 9 +- commands/dump.md | 7 +- commands/echo.md | 5 +- commands/eval.md | 176 ++++++++++++++++++++--------------- commands/exists.md | 9 +- commands/expire.md | 23 +++-- commands/expireat.md | 11 ++- commands/get.md | 9 +- commands/getbit.md | 11 ++- commands/getrange.md | 13 +-- commands/getset.md | 18 ++-- commands/hdel.md | 9 +- commands/hexists.md | 9 +- commands/hget.md | 9 +- commands/hgetall.md | 9 +- commands/hincrby.md | 11 ++- commands/hincrbyfloat.md | 11 ++- commands/hkeys.md | 9 +- commands/hlen.md | 9 +- commands/hmget.md | 9 +- commands/hmset.md | 9 +- commands/hset.md | 7 +- commands/hsetnx.md | 9 +- commands/hvals.md | 9 +- commands/incr.md | 99 +++++++++++--------- commands/incrby.md | 7 +- commands/incrbyfloat.md | 11 ++- commands/info.md | 26 +++--- commands/keys.md | 11 ++- commands/lindex.md | 13 +-- commands/linsert.md | 11 ++- commands/llen.md | 9 +- commands/lpop.md | 13 +-- commands/lpush.md | 9 +- commands/lpushx.md | 13 +-- commands/lrange.md | 17 ++-- commands/lrem.md | 15 +-- commands/lset.md | 15 +-- commands/ltrim.md | 19 ++-- commands/mget.md | 9 +- commands/monitor.md | 76 ++++++++------- commands/mset.md | 9 +- commands/msetnx.md | 9 +- commands/object.md | 40 ++++---- commands/persist.md | 13 +-- commands/pexpire.md | 11 ++- commands/pexpireat.md | 11 ++- commands/ping.md | 5 +- commands/psetex.md | 9 +- commands/pttl.md | 9 +- commands/rename.md | 9 +- commands/renamenx.md | 11 ++- commands/restore.md | 26 +++--- 
commands/rpop.md | 13 +-- commands/rpoplpush.md | 15 +-- commands/rpush.md | 9 +- commands/rpushx.md | 13 +-- commands/sadd.md | 11 ++- commands/scard.md | 9 +- commands/script exists.md | 7 +- commands/sdiff.md | 27 +++--- commands/set.md | 7 +- commands/setbit.md | 9 +- commands/setex.md | 15 +-- commands/setnx.md | 17 ++-- commands/setrange.md | 16 ++-- commands/sinter.md | 27 +++--- commands/sismember.md | 9 +- commands/slowlog.md | 24 ++--- commands/smembers.md | 9 +- commands/smove.md | 15 +-- commands/sort.md | 44 ++++++--- commands/spop.md | 13 +-- commands/srandmember.md | 11 ++- commands/srem.md | 15 +-- commands/strlen.md | 9 +- commands/sunion.md | 27 +++--- commands/time.md | 7 +- commands/ttl.md | 9 +- commands/type.md | 15 +-- commands/zadd.md | 13 +-- commands/zcard.md | 9 +- commands/zcount.md | 13 +-- commands/zincrby.md | 11 ++- commands/zinterstore.md | 17 ++-- commands/zrange.md | 15 +-- commands/zrangebyscore.md | 25 +++-- commands/zrank.md | 13 +-- commands/zrem.md | 13 +-- commands/zremrangebyrank.md | 13 +-- commands/zremrangebyscore.md | 13 +-- commands/zrevrange.md | 15 +-- commands/zrevrangebyscore.md | 17 ++-- commands/zrevrank.md | 13 +-- commands/zscore.md | 7 +- commands/zunionstore.md | 17 ++-- remarkdown.rb | 15 ++- 106 files changed, 980 insertions(+), 758 deletions(-) diff --git a/commands/append.md b/commands/append.md index 3b673eb0e9..a73fc9b8ca 100644 --- a/commands/append.md +++ b/commands/append.md @@ -9,11 +9,12 @@ will be similar to `SET` in this special case. @examples - @cli - EXISTS mykey - APPEND mykey "Hello" - APPEND mykey " World" - GET mykey +```cli +EXISTS mykey +APPEND mykey "Hello" +APPEND mykey " World" +GET mykey +``` ## Pattern: Time series @@ -21,7 +22,9 @@ the `APPEND` command can be used to create a very compact representation of a list of fixed-size samples, usually referred as _time series_. 
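Fixed-size samples are what make this representation both compact and addressable: because every sample has the same length, sample *n* always lives at byte range `n*size` to `n*size+size-1` and can be read back directly (the `GETRANGE` calls in the temperature example further down do exactly this). A hypothetical Ruby sketch of the layout, using a plain string in place of the Redis value — `SIZE`, `sample`, and the data are invented for the example:

```ruby
# Plain Ruby string standing in for the Redis value: `<<` plays the role
# of APPEND, and string slicing the role of GETRANGE.
SIZE = 4
ts = ""
ts << "0043"  # APPEND ts "0043"
ts << "0035"  # APPEND ts "0035"

# Sample n occupies bytes n*size .. n*size+size-1,
# i.e. GETRANGE ts n*size (n+1)*size-1 in Redis terms.
def sample(ts, n, size = SIZE)
  ts[n * size, size]
end

puts sample(ts, 0)  # prints 0043
puts sample(ts, 1)  # prints 0035
```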
Every time a new sample arrives we can store it using the command - APPEND timeseries "fixed-size sample" +``` +APPEND timeseries "fixed-size sample" +``` Accessing individual elements in the time series is not hard: @@ -45,8 +48,9 @@ more friendly to be distributed across many Redis instances. An example sampling the temperature of a sensor using fixed-size strings (using a binary format is better in real implementations). - @cli - APPEND ts "0043" - APPEND ts "0035" - GETRANGE ts 0 3 - GETRANGE ts 4 7 +```cli +APPEND ts "0043" +APPEND ts "0035" +GETRANGE ts 0 3 +GETRANGE ts 4 7 +``` diff --git a/commands/bitcount.md b/commands/bitcount.md index fdac855364..d4ebb452b5 100644 --- a/commands/bitcount.md +++ b/commands/bitcount.md @@ -18,11 +18,12 @@ The number of bits set to 1. @examples - @cli - SET mykey "foobar" - BITCOUNT mykey - BITCOUNT mykey 0 0 - BITCOUNT mykey 1 1 +```cli +SET mykey "foobar" +BITCOUNT mykey +BITCOUNT mykey 0 0 +BITCOUNT mykey 1 1 +``` ## Pattern: real time metrics using bitmaps diff --git a/commands/bitop.md b/commands/bitop.md index 948cce7856..a3db670607 100644 --- a/commands/bitop.md +++ b/commands/bitop.md @@ -32,11 +32,12 @@ size of the longest input string. @examples - @cli - SET key1 "foobar" - SET key2 "abcdef" - BITOP AND dest key1 key2 - GET dest +```cli +SET key1 "foobar" +SET key2 "abcdef" +BITOP AND dest key1 key2 +GET dest +``` ## Pattern: real time metrics using bitmaps diff --git a/commands/blpop.md b/commands/blpop.md index 6ae9f1d926..d428d367d8 100644 --- a/commands/blpop.md +++ b/commands/blpop.md @@ -15,7 +15,9 @@ Let's say that the key `list1` doesn't exist and `list2` and `list3` hold non-empty lists. 
Consider the following command: - BLPOP list1 list2 list3 0 +``` +BLPOP list1 list2 list3 0 +``` `BLPOP` guarantees to return an element from the list stored at `list2` (since it is the first non empty list when checking `list1`, `list2` and `list3` in @@ -69,13 +71,15 @@ If you like science fiction, think of time flowing at infinite speed inside a @examples - redis> DEL list1 list2 - (integer) 0 - redis> RPUSH list1 a b c - (integer) 3 - redis> BLPOP list1 list2 0 - 1) "list1" - 2) "a" +``` +redis> DEL list1 list2 +(integer) 0 +redis> RPUSH list1 a b c +(integer) 3 +redis> BLPOP list1 list2 0 +1) "list1" +2) "a" +``` ## Pattern: Event notification @@ -89,16 +93,20 @@ blocking list operations we can easily accomplish this task. The consumer will do: - LOOP forever - WHILE SPOP(key) returns elements - ... process elements ... - END - BRPOP helper_key +``` +LOOP forever + WHILE SPOP(key) returns elements + ... process elements ... END + BRPOP helper_key +END +``` While in the producer side we'll use simply: - MULTI - SADD key element - LPUSH helper_key x - EXEC +``` +MULTI +SADD key element +LPUSH helper_key x +EXEC +``` diff --git a/commands/brpop.md b/commands/brpop.md index fffc65fc55..36fed4ed10 100644 --- a/commands/brpop.md +++ b/commands/brpop.md @@ -21,10 +21,12 @@ the tail of a list instead of popping from the head. @examples - redis> DEL list1 list2 - (integer) 0 - redis> RPUSH list1 a b c - (integer) 3 - redis> BRPOP list1 list2 0 - 1) "list1" - 2) "c" +``` +redis> DEL list1 list2 +(integer) 0 +redis> RPUSH list1 a b c +(integer) 3 +redis> BRPOP list1 list2 0 +1) "list1" +2) "c" +``` diff --git a/commands/config get.md b/commands/config get.md index 5e248879f5..d4f57b25ba 100644 --- a/commands/config get.md +++ b/commands/config get.md @@ -11,13 +11,15 @@ All the configuration parameters matching this parameter are reported as a list of key-value pairs. 
Example: - redis> config get *max-*-entries* - 1) "hash-max-zipmap-entries" - 2) "512" - 3) "list-max-ziplist-entries" - 4) "512" - 5) "set-max-intset-entries" - 6) "512" +``` +redis> config get *max-*-entries* +1) "hash-max-zipmap-entries" +2) "512" +3) "list-max-ziplist-entries" +4) "512" +5) "set-max-intset-entries" +6) "512" +``` You can obtain a list of all the supported configuration parameters by typing `CONFIG GET *` in an open `redis-cli` prompt. @@ -37,8 +39,10 @@ following important differences: For instance what in `redis.conf` looks like: - save 900 1 - save 300 10 +``` +save 900 1 +save 300 10 +``` that means, save after 900 seconds if there is at least 1 change to the dataset, and after 300 seconds if there are at least 10 changes to the datasets, will be diff --git a/commands/config set.md b/commands/config set.md index b74244e23e..fb75449e82 100644 --- a/commands/config set.md +++ b/commands/config set.md @@ -25,8 +25,10 @@ following important differences: For instance what in `redis.conf` looks like: - save 900 1 - save 300 10 +``` +save 900 1 +save 300 10 +``` that means, save after 900 seconds if there is at least 1 change to the dataset, and after 300 seconds if there are at least 10 changes to the datasets, should diff --git a/commands/decr.md b/commands/decr.md index 875d553b15..cda121a932 100644 --- a/commands/decr.md +++ b/commands/decr.md @@ -12,8 +12,9 @@ See `INCR` for extra information on increment/decrement operations. @examples - @cli - SET mykey "10" - DECR mykey - SET mykey "234293482390480948029348230948" - DECR mykey +```cli +SET mykey "10" +DECR mykey +SET mykey "234293482390480948029348230948" +DECR mykey +``` diff --git a/commands/decrby.md b/commands/decrby.md index d2493dc9d0..b4dead90fd 100644 --- a/commands/decrby.md +++ b/commands/decrby.md @@ -12,6 +12,7 @@ See `INCR` for extra information on increment/decrement operations. 
@examples - @cli - SET mykey "10" - DECRBY mykey 5 +```cli +SET mykey "10" +DECRBY mykey 5 +``` diff --git a/commands/del.md b/commands/del.md index c37b3f3e1d..d5fcbaced5 100644 --- a/commands/del.md +++ b/commands/del.md @@ -7,7 +7,8 @@ A key is ignored if it does not exist. @examples - @cli - SET key1 "Hello" - SET key2 "World" - DEL key1 key2 key3 +```cli +SET key1 "Hello" +SET key2 "World" +DEL key1 key2 key3 +``` diff --git a/commands/dump.md b/commands/dump.md index c07589b481..417caa4c42 100644 --- a/commands/dump.md +++ b/commands/dump.md @@ -27,6 +27,7 @@ If `key` does not exist a nil bulk reply is returned. @examples - @cli - SET mykey 10 - DUMP mykey +```cli +SET mykey 10 +DUMP mykey +``` diff --git a/commands/echo.md b/commands/echo.md index 3e1767eb20..b3e0257b00 100644 --- a/commands/echo.md +++ b/commands/echo.md @@ -6,5 +6,6 @@ Returns `message`. @examples - @cli - ECHO "Hello World!" +```cli +ECHO "Hello World!" +``` diff --git a/commands/eval.md b/commands/eval.md index 7a5312c9bb..31522f78b9 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -18,11 +18,13 @@ keys (so `ARGV[1]`, `ARGV[2]`, ...). The following example should clarify what stated above: - > eval "return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}" 2 key1 key2 first second - 1) "key1" - 2) "key2" - 3) "first" - 4) "second" +``` +> eval "return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}" 2 key1 key2 first second +1) "key1" +2) "key2" +3) "first" +4) "second" +``` Note: as you can see Lua arrays are returned as Redis multi bulk replies, that is a Redis return type that your client library will likely convert into an @@ -43,15 +45,19 @@ error. The arguments of the `redis.call()` and `redis.pcall()` functions are simply all the arguments of a well formed Redis command: - > eval "return redis.call('set','foo','bar')" 0 - OK +``` +> eval "return redis.call('set','foo','bar')" 0 +OK +``` The above script actually sets the key `foo` to the string `bar`. 
However it violates the `EVAL` command semantics as all the keys that the script uses should be passed using the KEYS array, in the following way: - > eval "return redis.call('set',KEYS[1],'bar')" 1 foo - OK +``` +> eval "return redis.call('set',KEYS[1],'bar')" 1 foo +OK +``` The reason for passing keys in the proper way is that, before `EVAL` all the Redis commands could be analyzed before execution in order to establish what @@ -109,17 +115,19 @@ Redis to Lua conversion rule: Here are a few conversion examples: - > eval "return 10" 0 - (integer) 10 +``` +> eval "return 10" 0 +(integer) 10 - > eval "return {1,2,{3,'Hello World!'}}" 0 - 1) (integer) 1 - 2) (integer) 2 - 3) 1) (integer) 3 - 2) "Hello World!" +> eval "return {1,2,{3,'Hello World!'}}" 0 +1) (integer) 1 +2) (integer) 2 +3) 1) (integer) 3 + 2) "Hello World!" - > eval "return redis.call('get','foo')" 0 - "bar" +> eval "return redis.call('get','foo')" 0 +"bar" +``` The last example shows how it is possible to receive the exact return value of `redis.call()` or `redis.pcall()` from Lua that would be returned if the command @@ -145,12 +153,14 @@ As already stated, calls to `redis.call()` resulting in a Redis command error will stop the execution of the script and will return the error, in a way that makes it obvious that the error was generated by a script: - > del foo - (integer) 1 - > lpush foo a - (integer) 1 - > eval "return redis.call('get','foo')" 0 - (error) ERR Error running script (call to f_6b1bf486c81ceb7edf3c093f4c48582e38c0e791): ERR Operation against a key holding the wrong kind of value +``` +> del foo +(integer) 1 +> lpush foo a +(integer) 1 +> eval "return redis.call('get','foo')" 0 +(error) ERR Error running script (call to f_6b1bf486c81ceb7edf3c093f4c48582e38c0e791): ERR Operation against a key holding the wrong kind of value +``` Using the `redis.pcall()` command no error is raised, but an error object is returned in the format specified above (as a Lua table with an `err` field). 
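The `f_6b1bf486c81ceb7edf3c093f4c48582e38c0e791` name in the error output above is built from the SHA1 digest of the script body — the same digest that `EVALSHA` accepts. A client library can compute it locally with nothing but a SHA1 implementation; this small Ruby sketch (illustrative only, no Redis connection involved) shows the digest step:

```ruby
require "digest/sha1"

# The digest is computed over the exact script source text: change a single
# byte and EVALSHA will no longer find the cached script.
script = "return redis.call('get','foo')"
sha = Digest::SHA1.hexdigest(script)
puts sha  # the 40-character hex digest a client would pass to EVALSHA
```

This is what makes the optimistic `EVALSHA`-first strategy possible: hash the script once on the client, try `EVALSHA`, and fall back to `EVAL` (or `SCRIPT LOAD`) when the server answers with a `NOSCRIPT` error.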
@@ -190,14 +200,16 @@ The behavior is the following: Example: - > set foo bar - OK - > eval "return redis.call('get','foo')" 0 - "bar" - > evalsha 6b1bf486c81ceb7edf3c093f4c48582e38c0e791 0 - "bar" - > evalsha ffffffffffffffffffffffffffffffffffffffff 0 - (error) `NOSCRIPT` No matching script. Please use `EVAL`. +``` +> set foo bar +OK +> eval "return redis.call('get','foo')" 0 +"bar" +> evalsha 6b1bf486c81ceb7edf3c093f4c48582e38c0e791 0 +"bar" +> evalsha ffffffffffffffffffffffffffffffffffffffff 0 +(error) `NOSCRIPT` No matching script. Please use `EVAL`. +``` The client library implementation can always optimistically send `EVALSHA` under the hood even when the client actually calls `EVAL`, in the hope the script was @@ -332,38 +344,42 @@ integers. I can start with this small Ruby program: - require 'rubygems' - require 'redis' +``` +require 'rubygems' +require 'redis' - r = Redis.new +r = Redis.new - RandomPushScript = < 0) do - res = redis.call('lpush',KEYS[1],math.random()) - i = i-1 - end - return res - EOF +RandomPushScript = < 0) do + res = redis.call('lpush',KEYS[1],math.random()) + i = i-1 + end + return res +EOF - r.del(:mylist) - puts r.eval(RandomPushScript,1,:mylist,10) +r.del(:mylist) +puts r.eval(RandomPushScript,1,:mylist,10) +``` Every time this script executed the resulting list will have exactly the following elements: - > lrange mylist 0 -1 - 1) "0.74509509873814" - 2) "0.87390407681181" - 3) "0.36876626981831" - 4) "0.6921941534114" - 5) "0.7857992587545" - 6) "0.57730350670279" - 7) "0.87046522734243" - 8) "0.09637165539729" - 9) "0.74990198051087" - 10) "0.17082803611217" +``` +> lrange mylist 0 -1 + 1) "0.74509509873814" + 2) "0.87390407681181" + 3) "0.36876626981831" + 4) "0.6921941534114" + 5) "0.7857992587545" + 6) "0.57730350670279" + 7) "0.87046522734243" + 8) "0.09637165539729" + 9) "0.74990198051087" +10) "0.17082803611217" +``` In order to make it a pure function, but still be sure that every invocation of the script will result in 
different random elements, we can simply add an @@ -371,19 +387,21 @@ additional argument to the script that will be used in order to seed the Lua pseudo-random number generator. The new script is as follows: - RandomPushScript = < 0) do - res = redis.call('lpush',KEYS[1],math.random()) - i = i-1 - end - return res - EOF - - r.del(:mylist) - puts r.eval(RandomPushScript,1,:mylist,10,rand(2**32)) +``` +RandomPushScript = < 0) do + res = redis.call('lpush',KEYS[1],math.random()) + i = i-1 + end + return res +EOF + +r.del(:mylist) +puts r.eval(RandomPushScript,1,:mylist,10,rand(2**32)) +``` What we are doing here is sending the seed of the PRNG as one of the arguments. This way the script output will be the same given the same arguments, but we are @@ -409,8 +427,10 @@ should use Redis keys instead. When global variable access is attempted the script is terminated and EVAL returns with an error: - redis 127.0.0.1:6379> eval 'a=10' 0 - (error) ERR Error running script (call to f_933044db579a2f8fd45d8065f04a8d0249383e57): user_script:1: Script attempted to create global variable 'a' +``` +redis 127.0.0.1:6379> eval 'a=10' 0 +(error) ERR Error running script (call to f_933044db579a2f8fd45d8065f04a8d0249383e57): user_script:1: Script attempted to create global variable 'a' +``` Accessing a _non existing_ global variable generates a similar error. @@ -447,7 +467,9 @@ All the other libraries are standard Lua libraries. It is possible to write to the Redis log file from Lua scripts using the `redis.log` function. - redis.log(loglevel,message) +``` +redis.log(loglevel,message) +``` `loglevel` is one of: @@ -463,11 +485,15 @@ the currently configured Redis instance log level will be emitted. The `message` argument is simply a string. 
Example: - redis.log(redis.LOG_WARNING,"Something is wrong with this script.") +``` +redis.log(redis.LOG_WARNING,"Something is wrong with this script.") +``` Will generate the following: - [32343] 22 Mar 15:21:39 # Something is wrong with this script. +``` +[32343] 22 Mar 15:21:39 # Something is wrong with this script. +``` ## Sandbox and maximum execution time diff --git a/commands/exists.md b/commands/exists.md index 8df55a66e1..1963109ce8 100644 --- a/commands/exists.md +++ b/commands/exists.md @@ -9,7 +9,8 @@ Returns if `key` exists. @examples - @cli - SET key1 "Hello" - EXISTS key1 - EXISTS key2 +```cli +SET key1 "Hello" +EXISTS key1 +EXISTS key2 +``` diff --git a/commands/expire.md b/commands/expire.md index a8082c37fc..141d7412e8 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -46,12 +46,13 @@ are now fixed. @examples - @cli - SET mykey "Hello" - EXPIRE mykey 10 - TTL mykey - SET mykey "Hello World" - TTL mykey +```cli +SET mykey "Hello" +EXPIRE mykey 10 +TTL mykey +SET mykey "Hello World" +TTL mykey +``` ## Pattern: Navigation session @@ -66,10 +67,12 @@ products. You can easily model this pattern in Redis using the following strategy: every time the user does a page view you call the following commands: - MULTI - RPUSH pagewviews.user: http://..... - EXPIRE pagewviews.user: 60 - EXEC +``` +MULTI +RPUSH pagewviews.user: http://..... +EXPIRE pagewviews.user: 60 +EXEC +``` If the user will be idle more than 60 seconds, the key will be deleted and only subsequent page views that have less than 60 seconds of difference will be diff --git a/commands/expireat.md b/commands/expireat.md index 005fa43534..7deecb5e9d 100644 --- a/commands/expireat.md +++ b/commands/expireat.md @@ -23,8 +23,9 @@ a given time in the future. 
@examples - @cli - SET mykey "Hello" - EXISTS mykey - EXPIREAT mykey 1293840000 - EXISTS mykey +```cli +SET mykey "Hello" +EXISTS mykey +EXPIREAT mykey 1293840000 +EXISTS mykey +``` diff --git a/commands/get.md b/commands/get.md index 32dd939b5e..92357897c4 100644 --- a/commands/get.md +++ b/commands/get.md @@ -9,7 +9,8 @@ only handles string values. @examples - @cli - GET nonexisting - SET mykey "Hello" - GET mykey +```cli +GET nonexisting +SET mykey "Hello" +GET mykey +``` diff --git a/commands/getbit.md b/commands/getbit.md index b0aa68116c..1506af304a 100644 --- a/commands/getbit.md +++ b/commands/getbit.md @@ -12,8 +12,9 @@ always out of range and the value is also assumed to be a contiguous space with @examples - @cli - SETBIT mykey 7 1 - GETBIT mykey 0 - GETBIT mykey 7 - GETBIT mykey 100 +```cli +SETBIT mykey 7 1 +GETBIT mykey 0 +GETBIT mykey 7 +GETBIT mykey 100 +``` diff --git a/commands/getrange.md b/commands/getrange.md index 208640434c..578827c53e 100644 --- a/commands/getrange.md +++ b/commands/getrange.md @@ -16,9 +16,10 @@ the actual length of the string. @examples - @cli - SET mykey "This is a string" - GETRANGE mykey 0 3 - GETRANGE mykey -3 -1 - GETRANGE mykey 0 -1 - GETRANGE mykey 10 100 +```cli +SET mykey "This is a string" +GETRANGE mykey 0 3 +GETRANGE mykey -3 -1 +GETRANGE mykey 0 -1 +GETRANGE mykey 10 100 +``` diff --git a/commands/getset.md b/commands/getset.md index a3a2a6dcce..c347ba6e29 100644 --- a/commands/getset.md +++ b/commands/getset.md @@ -9,10 +9,11 @@ some event occurs, but from time to time we need to get the value of the counter and reset it to zero atomically. 
This can be done using `GETSET mycounter "0"`: - @cli - INCR mycounter - GETSET mycounter "0" - GET mycounter +```cli +INCR mycounter +GETSET mycounter "0" +GET mycounter +``` @return @@ -20,7 +21,8 @@ This can be done using `GETSET mycounter "0"`: @examples - @cli - SET mykey "Hello" - GETSET mykey "World" - GET mykey +```cli +SET mykey "Hello" +GETSET mykey "World" +GET mykey +``` diff --git a/commands/hdel.md b/commands/hdel.md index 2ce85be703..559db701d6 100644 --- a/commands/hdel.md +++ b/commands/hdel.md @@ -18,7 +18,8 @@ including specified but non existing fields. @examples - @cli - HSET myhash field1 "foo" - HDEL myhash field1 - HDEL myhash field2 +```cli +HSET myhash field1 "foo" +HDEL myhash field1 +HDEL myhash field2 +``` diff --git a/commands/hexists.md b/commands/hexists.md index 0df9c222ec..f27678a67a 100644 --- a/commands/hexists.md +++ b/commands/hexists.md @@ -9,7 +9,8 @@ Returns if `field` is an existing field in the hash stored at `key`. @examples - @cli - HSET myhash field1 "foo" - HEXISTS myhash field1 - HEXISTS myhash field2 +```cli +HSET myhash field1 "foo" +HEXISTS myhash field1 +HEXISTS myhash field2 +``` diff --git a/commands/hget.md b/commands/hget.md index 5eae9ea7bb..ff28b8a765 100644 --- a/commands/hget.md +++ b/commands/hget.md @@ -7,7 +7,8 @@ present in the hash or `key` does not exist. @examples - @cli - HSET myhash field1 "foo" - HGET myhash field1 - HGET myhash field2 +```cli +HSET myhash field1 "foo" +HGET myhash field1 +HGET myhash field2 +``` diff --git a/commands/hgetall.md b/commands/hgetall.md index 84ea9a3604..7b8dbac011 100644 --- a/commands/hgetall.md +++ b/commands/hgetall.md @@ -9,7 +9,8 @@ empty list when `key` does not exist. 
@examples - @cli - HSET myhash field1 "Hello" - HSET myhash field2 "World" - HGETALL myhash +```cli +HSET myhash field1 "Hello" +HSET myhash field2 "World" +HGETALL myhash +``` diff --git a/commands/hincrby.md b/commands/hincrby.md index 22a4124060..3d24c254d8 100644 --- a/commands/hincrby.md +++ b/commands/hincrby.md @@ -15,8 +15,9 @@ The range of values supported by `HINCRBY` is limited to 64 bit signed integers. Since the `increment` argument is signed, both increment and decrement operations can be performed: - @cli - HSET myhash field 5 - HINCRBY myhash field 1 - HINCRBY myhash field -1 - HINCRBY myhash field -10 +```cli +HSET myhash field 5 +HINCRBY myhash field 1 +HINCRBY myhash field -1 +HINCRBY myhash field -10 +``` diff --git a/commands/hincrbyfloat.md b/commands/hincrbyfloat.md index cd86669750..4dea0f42f2 100644 --- a/commands/hincrbyfloat.md +++ b/commands/hincrbyfloat.md @@ -17,11 +17,12 @@ information. @examples - @cli - HSET mykey field 10.50 - HINCRBYFLOAT mykey field 0.1 - HSET mykey field 5.0e3 - HINCRBYFLOAT mykey field 2.0e2 +```cli +HSET mykey field 10.50 +HINCRBYFLOAT mykey field 0.1 +HSET mykey field 5.0e3 +HINCRBYFLOAT mykey field 2.0e2 +``` ## Implementation details diff --git a/commands/hkeys.md b/commands/hkeys.md index 6bc8bd3cf4..58269c8a72 100644 --- a/commands/hkeys.md +++ b/commands/hkeys.md @@ -7,7 +7,8 @@ not exist. @examples - @cli - HSET myhash field1 "Hello" - HSET myhash field2 "World" - HKEYS myhash +```cli +HSET myhash field1 "Hello" +HSET myhash field2 "World" +HKEYS myhash +``` diff --git a/commands/hlen.md b/commands/hlen.md index df116704f7..2c18193435 100644 --- a/commands/hlen.md +++ b/commands/hlen.md @@ -6,7 +6,8 @@ Returns the number of fields contained in the hash stored at `key`. 
@examples - @cli - HSET myhash field1 "Hello" - HSET myhash field2 "World" - HLEN myhash +```cli +HSET myhash field1 "Hello" +HSET myhash field2 "World" +HLEN myhash +``` diff --git a/commands/hmget.md b/commands/hmget.md index d8b470db54..3de8073171 100644 --- a/commands/hmget.md +++ b/commands/hmget.md @@ -10,7 +10,8 @@ a non-existing `key` will return a list of `nil` values. @multi-bulk-reply: list of values associated with the given fields, in the same order as they are requested. - @cli - HSET myhash field1 "Hello" - HSET myhash field2 "World" - HMGET myhash field1 field2 nofield +```cli +HSET myhash field1 "Hello" +HSET myhash field2 "World" +HMGET myhash field1 field2 nofield +``` diff --git a/commands/hmset.md b/commands/hmset.md index 9444092e6a..35e5919e1e 100644 --- a/commands/hmset.md +++ b/commands/hmset.md @@ -9,7 +9,8 @@ If `key` does not exist, a new key holding a hash is created. @examples - @cli - HMSET myhash field1 "Hello" field2 "World" - HGET myhash field1 - HGET myhash field2 +```cli +HMSET myhash field1 "Hello" field2 "World" +HGET myhash field1 +HGET myhash field2 +``` diff --git a/commands/hset.md b/commands/hset.md index 8a2c299dbd..b4e871ec8d 100644 --- a/commands/hset.md +++ b/commands/hset.md @@ -11,6 +11,7 @@ If `field` already exists in the hash, it is overwritten. @examples - @cli - HSET myhash field1 "Hello" - HGET myhash field1 +```cli +HSET myhash field1 "Hello" +HGET myhash field1 +``` diff --git a/commands/hsetnx.md b/commands/hsetnx.md index b24200370b..c60eaa071b 100644 --- a/commands/hsetnx.md +++ b/commands/hsetnx.md @@ -12,7 +12,8 @@ If `field` already exists, this operation has no effect. 
@examples - @cli - HSETNX myhash field "Hello" - HSETNX myhash field "World" - HGET myhash field +```cli +HSETNX myhash field "Hello" +HSETNX myhash field "World" +HGET myhash field +``` diff --git a/commands/hvals.md b/commands/hvals.md index 31ca894996..7f4ca2377a 100644 --- a/commands/hvals.md +++ b/commands/hvals.md @@ -7,7 +7,8 @@ not exist. @examples - @cli - HSET myhash field1 "Hello" - HSET myhash field2 "World" - HVALS myhash +```cli +HSET myhash field1 "Hello" +HSET myhash field2 "World" +HVALS myhash +``` diff --git a/commands/incr.md b/commands/incr.md index f59b0d6088..04671007e0 100644 --- a/commands/incr.md +++ b/commands/incr.md @@ -19,10 +19,11 @@ representation of the integer. @examples - @cli - SET mykey "10" - INCR mykey - GET mykey +```cli +SET mykey "10" +INCR mykey +GET mykey +``` ## Pattern: Counter @@ -64,19 +65,21 @@ _ten requests per second per IP address_. The more simple and direct implementation of this pattern is the following: - FUNCTION LIMIT_API_CALL(ip) - ts = CURRENT_UNIX_TIME() - keyname = ip+":"+ts - current = GET(keyname) - IF current != NULL AND current > 10 THEN - ERROR "too many requests per second" - ELSE - MULTI - INCR(keyname,1) - EXPIRE(keyname,10) - EXEC - PERFORM_API_CALL() - END +``` +FUNCTION LIMIT_API_CALL(ip) +ts = CURRENT_UNIX_TIME() +keyname = ip+":"+ts +current = GET(keyname) +IF current != NULL AND current > 10 THEN + ERROR "too many requests per second" +ELSE + MULTI + INCR(keyname,1) + EXPIRE(keyname,10) + EXEC + PERFORM_API_CALL() +END +``` Basically we have a counter for every IP, for every different second. But this counters are always incremented setting an expire of 10 seconds so that @@ -92,17 +95,19 @@ An alternative implementation uses a single counter, but is a bit more complex to get it right without race conditions. We'll examine different variants. 
- FUNCTION LIMIT_API_CALL(ip): - current = GET(ip) - IF current != NULL AND current > 10 THEN - ERROR "too many requests per second" - ELSE - value = INCR(ip) - IF value == 1 THEN - EXPIRE(value,1) - END - PERFORM_API_CALL() +``` +FUNCTION LIMIT_API_CALL(ip): +current = GET(ip) +IF current != NULL AND current > 10 THEN + ERROR "too many requests per second" +ELSE + value = INCR(ip) + IF value == 1 THEN + EXPIRE(value,1) END + PERFORM_API_CALL() +END +``` The counter is created in a way that it only will survive one second, starting from the first request performed in the current second. @@ -117,11 +122,13 @@ This can be fixed easily turning the `INCR` with optional `EXPIRE` into a Lua script that is send using the `EVAL` command (only available since Redis version 2.6). - local current - current = redis.call("incr",KEYS[1]) - if tonumber(current) == 1 then - redis.call("expire",KEYS[1],1) - end +``` +local current +current = redis.call("incr",KEYS[1]) +if tonumber(current) == 1 then + redis.call("expire",KEYS[1],1) +end +``` There is a different way to fix this issue without using scripting, but using Redis lists instead of counters. @@ -129,21 +136,23 @@ The implementation is more complex and uses more advanced features but has the advantage of remembering the IP addresses of the clients currently performing an API call, that may be useful or not depending on the application. - FUNCTION LIMIT_API_CALL(ip) - current = LLEN(ip) - IF current > 10 THEN - ERROR "too many requests per second" +``` +FUNCTION LIMIT_API_CALL(ip) +current = LLEN(ip) +IF current > 10 THEN + ERROR "too many requests per second" +ELSE + IF EXISTS(ip) == FALSE + MULTI + RPUSH(ip,ip) + EXPIRE(ip,1) + EXEC ELSE - IF EXISTS(ip) == FALSE - MULTI - RPUSH(ip,ip) - EXPIRE(ip,1) - EXEC - ELSE - RPUSHX(ip,ip) - END - PERFORM_API_CALL() + RPUSHX(ip,ip) END + PERFORM_API_CALL() +END +``` The `RPUSHX` command only pushes the element if the key already exists. 
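Editorial note: the `INCR`-plus-conditional-`EXPIRE` variant discussed above can be illustrated end to end. The following is a minimal Python sketch of that idea against a tiny in-memory stand-in for the two Redis commands involved; `FakeStore` and `limit_api_call` are hypothetical names for illustration only, not part of redis-py or any real client library, and the stand-in ignores the concurrency concerns the text covers (a single process needs no `MULTI` or Lua).

```python
import time

class FakeStore:
    """In-memory stand-in for the INCR and EXPIRE commands (illustration only)."""
    def __init__(self):
        self.data = {}  # key -> (value, expire_at or None)

    def _alive(self, key):
        entry = self.data.get(key)
        if entry is None:
            return None
        if entry[1] is not None and time.time() >= entry[1]:
            del self.data[key]  # lazily drop expired keys, as Redis may
            return None
        return entry

    def incr(self, key):
        entry = self._alive(key)
        n = (entry[0] if entry else 0) + 1
        self.data[key] = (n, entry[1] if entry else None)
        return n

    def expire(self, key, seconds):
        entry = self._alive(key)
        if entry:
            self.data[key] = (entry[0], time.time() + seconds)

def limit_api_call(store, ip, limit=10):
    """Return True if the call is allowed, False if rate limited."""
    current = store.incr(ip)
    if current == 1:
        # First hit in this window: start the one-second TTL on the
        # counter keyed by the client IP.
        store.expire(ip, 1)
    return current <= limit

store = FakeStore()
allowed = [limit_api_call(store, "1.2.3.4") for _ in range(12)]
```

Incrementing first and setting the TTL only when the counter is created mirrors the ordering of the Lua variant; doing the `EXPIRE` unconditionally, or reading the counter before incrementing it, reintroduces the races the text describes.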
diff --git a/commands/incrby.md b/commands/incrby.md index 8f4d049023..9734351e80 100644 --- a/commands/incrby.md +++ b/commands/incrby.md @@ -12,6 +12,7 @@ See `INCR` for extra information on increment/decrement operations. @examples - @cli - SET mykey "10" - INCRBY mykey 5 +```cli +SET mykey "10" +INCRBY mykey 5 +``` diff --git a/commands/incrbyfloat.md b/commands/incrbyfloat.md index cfe606a741..efad1f0b41 100644 --- a/commands/incrbyfloat.md +++ b/commands/incrbyfloat.md @@ -27,11 +27,12 @@ regardless of the actual internal precision of the computation. @examples - @cli - SET mykey 10.50 - INCRBYFLOAT mykey 0.1 - SET mykey 5.0e3 - INCRBYFLOAT mykey 2.0e2 +```cli +SET mykey 10.50 +INCRBYFLOAT mykey 0.1 +SET mykey 5.0e3 +INCRBYFLOAT mykey 2.0e2 +``` ## Implementation details diff --git a/commands/info.md b/commands/info.md index b234350d88..2e184256a3 100644 --- a/commands/info.md +++ b/commands/info.md @@ -5,18 +5,20 @@ format that is simple to parse by computers and easy to read by humans. @bulk-reply: in the following format (compacted for brevity): - redis_version:2.2.2 - uptime_in_seconds:148 - used_cpu_sys:0.01 - used_cpu_user:0.03 - used_memory:768384 - used_memory_rss:1536000 - mem_fragmentation_ratio:2.00 - changes_since_last_save:118 - keyspace_hits:174 - keyspace_misses:37 - allocation_stats:4=56,8=312,16=1498,... - db0:keys=1240,expires=0 +``` +redis_version:2.2.2 +uptime_in_seconds:148 +used_cpu_sys:0.01 +used_cpu_user:0.03 +used_memory:768384 +used_memory_rss:1536000 +mem_fragmentation_ratio:2.00 +changes_since_last_save:118 +keyspace_hits:174 +keyspace_misses:37 +allocation_stats:4=56,8=312,16=1498,... +db0:keys=1240,expires=0 +``` All the fields are in the form of `field:value` terminated by `\r\n`. diff --git a/commands/keys.md b/commands/keys.md index ef31b7bd86..a05384cde8 100644 --- a/commands/keys.md +++ b/commands/keys.md @@ -30,8 +30,9 @@ Use `\` to escape special characters if you want to match them verbatim. 
@examples - @cli - MSET one 1 two 2 three 3 four 4 - KEYS *o* - KEYS t?? - KEYS * +```cli +MSET one 1 two 2 three 3 four 4 +KEYS *o* +KEYS t?? +KEYS * +``` diff --git a/commands/lindex.md b/commands/lindex.md index c96ccfde64..00ebee4d1a 100644 --- a/commands/lindex.md +++ b/commands/lindex.md @@ -13,9 +13,10 @@ When the value at `key` is not a list, an error is returned. @examples - @cli - LPUSH mylist "World" - LPUSH mylist "Hello" - LINDEX mylist 0 - LINDEX mylist -1 - LINDEX mylist 3 +```cli +LPUSH mylist "World" +LPUSH mylist "Hello" +LINDEX mylist 0 +LINDEX mylist -1 +LINDEX mylist 3 +``` diff --git a/commands/linsert.md b/commands/linsert.md index 5efcc71091..fb2edf2291 100644 --- a/commands/linsert.md +++ b/commands/linsert.md @@ -13,8 +13,9 @@ the value `pivot` was not found. @examples - @cli - RPUSH mylist "Hello" - RPUSH mylist "World" - LINSERT mylist BEFORE "World" "There" - LRANGE mylist 0 -1 +```cli +RPUSH mylist "Hello" +RPUSH mylist "World" +LINSERT mylist BEFORE "World" "There" +LRANGE mylist 0 -1 +``` diff --git a/commands/llen.md b/commands/llen.md index a41f2ae6a9..8c7c70fac1 100644 --- a/commands/llen.md +++ b/commands/llen.md @@ -8,7 +8,8 @@ An error is returned when the value stored at `key` is not a list. @examples - @cli - LPUSH mylist "World" - LPUSH mylist "Hello" - LLEN mylist +```cli +LPUSH mylist "World" +LPUSH mylist "Hello" +LLEN mylist +``` diff --git a/commands/lpop.md b/commands/lpop.md index 6b68c3eb41..cde5c051c1 100644 --- a/commands/lpop.md +++ b/commands/lpop.md @@ -6,9 +6,10 @@ Removes and returns the first element of the list stored at `key`. 
@examples - @cli - RPUSH mylist "one" - RPUSH mylist "two" - RPUSH mylist "three" - LPOP mylist - LRANGE mylist 0 -1 +```cli +RPUSH mylist "one" +RPUSH mylist "two" +RPUSH mylist "three" +LPOP mylist +LRANGE mylist 0 -1 +``` diff --git a/commands/lpush.md b/commands/lpush.md index e29b151417..fd15b5b8d4 100644 --- a/commands/lpush.md +++ b/commands/lpush.md @@ -22,7 +22,8 @@ containing `c` as first element, `b` as second element and `a` as third element. @examples - @cli - LPUSH mylist "world" - LPUSH mylist "hello" - LRANGE mylist 0 -1 +```cli +LPUSH mylist "world" +LPUSH mylist "hello" +LRANGE mylist 0 -1 +``` diff --git a/commands/lpushx.md b/commands/lpushx.md index 8376b5de40..fbaeed992e 100644 --- a/commands/lpushx.md +++ b/commands/lpushx.md @@ -9,9 +9,10 @@ exist. @examples - @cli - LPUSH mylist "World" - LPUSHX mylist "Hello" - LPUSHX myotherlist "Hello" - LRANGE mylist 0 -1 - LRANGE myotherlist 0 -1 +```cli +LPUSH mylist "World" +LPUSHX mylist "Hello" +LPUSHX myotherlist "Hello" +LRANGE mylist 0 -1 +LRANGE myotherlist 0 -1 +``` diff --git a/commands/lrange.md b/commands/lrange.md index 25936d567f..ce21bd41eb 100644 --- a/commands/lrange.md +++ b/commands/lrange.md @@ -29,11 +29,12 @@ the last element of the list. @examples - @cli - RPUSH mylist "one" - RPUSH mylist "two" - RPUSH mylist "three" - LRANGE mylist 0 0 - LRANGE mylist -3 2 - LRANGE mylist -100 100 - LRANGE mylist 5 10 +```cli +RPUSH mylist "one" +RPUSH mylist "two" +RPUSH mylist "three" +LRANGE mylist 0 0 +LRANGE mylist -3 2 +LRANGE mylist -100 100 +LRANGE mylist 5 10 +``` diff --git a/commands/lrem.md b/commands/lrem.md index 4fd9b50efc..573deae958 100644 --- a/commands/lrem.md +++ b/commands/lrem.md @@ -18,10 +18,11 @@ exist, the command will always return `0`. 
@examples - @cli - RPUSH mylist "hello" - RPUSH mylist "hello" - RPUSH mylist "foo" - RPUSH mylist "hello" - LREM mylist -2 "hello" - LRANGE mylist 0 -1 +```cli +RPUSH mylist "hello" +RPUSH mylist "hello" +RPUSH mylist "foo" +RPUSH mylist "hello" +LREM mylist -2 "hello" +LRANGE mylist 0 -1 +``` diff --git a/commands/lset.md b/commands/lset.md index 6a4703f416..e5c87cad78 100644 --- a/commands/lset.md +++ b/commands/lset.md @@ -9,10 +9,11 @@ An error is returned for out of range indexes. @examples - @cli - RPUSH mylist "one" - RPUSH mylist "two" - RPUSH mylist "three" - LSET mylist 0 "four" - LSET mylist -2 "five" - LRANGE mylist 0 -1 +```cli +RPUSH mylist "one" +RPUSH mylist "two" +RPUSH mylist "three" +LSET mylist 0 "four" +LSET mylist -2 "five" +LRANGE mylist 0 -1 +``` diff --git a/commands/ltrim.md b/commands/ltrim.md index 45c6accf75..3db296b402 100644 --- a/commands/ltrim.md +++ b/commands/ltrim.md @@ -19,8 +19,10 @@ element of the list. A common use of `LTRIM` is together with `LPUSH` / `RPUSH`. For example: - LPUSH mylist someelement - LTRIM mylist 0 99 +``` +LPUSH mylist someelement +LTRIM mylist 0 99 +``` This pair of commands will push a new element on the list, while making sure that the list will not grow larger than 100 elements. @@ -35,9 +37,10 @@ list. @examples - @cli - RPUSH mylist "one" - RPUSH mylist "two" - RPUSH mylist "three" - LTRIM mylist 1 -1 - LRANGE mylist 0 -1 +```cli +RPUSH mylist "one" +RPUSH mylist "two" +RPUSH mylist "three" +LTRIM mylist 1 -1 +LRANGE mylist 0 -1 +``` diff --git a/commands/mget.md b/commands/mget.md index fb9c79d286..4abbf6cd74 100644 --- a/commands/mget.md +++ b/commands/mget.md @@ -9,7 +9,8 @@ Because of this, the operation never fails. 
@examples - @cli - SET key1 "Hello" - SET key2 "World" - MGET key1 key2 nonexisting +```cli +SET key1 "Hello" +SET key2 "World" +MGET key1 key2 nonexisting +``` diff --git a/commands/monitor.md b/commands/monitor.md index 3bd2b1c29e..2fb465129c 100644 --- a/commands/monitor.md +++ b/commands/monitor.md @@ -7,31 +7,35 @@ The ability to see all the requests processed by the server is useful in order to spot bugs in an application both when using Redis as a database and as a distributed caching system. - $ redis-cli monitor - 1339518083.107412 [0 127.0.0.1:60866] "keys" "*" - 1339518087.877697 [0 127.0.0.1:60866] "dbsize" - 1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6" - 1339518096.506257 [0 127.0.0.1:60866] "get" "x" - 1339518099.363765 [0 127.0.0.1:60866] "del" "x" - 1339518100.544926 [0 127.0.0.1:60866] "get" "x" +``` +$ redis-cli monitor +1339518083.107412 [0 127.0.0.1:60866] "keys" "*" +1339518087.877697 [0 127.0.0.1:60866] "dbsize" +1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6" +1339518096.506257 [0 127.0.0.1:60866] "get" "x" +1339518099.363765 [0 127.0.0.1:60866] "del" "x" +1339518100.544926 [0 127.0.0.1:60866] "get" "x" +``` Use `SIGINT` (Ctrl-C) to stop a `MONITOR` stream running via `redis-cli`. - $ telnet localhost 6379 - Trying 127.0.0.1... - Connected to localhost. - Escape character is '^]'. - MONITOR - +OK - +1339518083.107412 [0 127.0.0.1:60866] "keys" "*" - +1339518087.877697 [0 127.0.0.1:60866] "dbsize" - +1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6" - +1339518096.506257 [0 127.0.0.1:60866] "get" "x" - +1339518099.363765 [0 127.0.0.1:60866] "del" "x" - +1339518100.544926 [0 127.0.0.1:60866] "get" "x" - QUIT - +OK - Connection closed by foreign host. +``` +$ telnet localhost 6379 +Trying 127.0.0.1... +Connected to localhost. +Escape character is '^]'. 
+MONITOR ++OK ++1339518083.107412 [0 127.0.0.1:60866] "keys" "*" ++1339518087.877697 [0 127.0.0.1:60866] "dbsize" ++1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6" ++1339518096.506257 [0 127.0.0.1:60866] "get" "x" ++1339518099.363765 [0 127.0.0.1:60866] "del" "x" ++1339518100.544926 [0 127.0.0.1:60866] "get" "x" +QUIT ++OK +Connection closed by foreign host. +``` Manually issue the `QUIT` command to stop a `MONITOR` stream running via `telnet`. @@ -44,21 +48,25 @@ of running `MONITOR` can be. Benchmark result **without** `MONITOR` running: - $ src/redis-benchmark -c 10 -n 100000 -q - PING_INLINE: 101936.80 requests per second - PING_BULK: 102880.66 requests per second - SET: 95419.85 requests per second - GET: 104275.29 requests per second - INCR: 93283.58 requests per second +``` +$ src/redis-benchmark -c 10 -n 100000 -q +PING_INLINE: 101936.80 requests per second +PING_BULK: 102880.66 requests per second +SET: 95419.85 requests per second +GET: 104275.29 requests per second +INCR: 93283.58 requests per second +``` Benchmark result **with** `MONITOR` running (`redis-cli monitor > /dev/null`): - $ src/redis-benchmark -c 10 -n 100000 -q - PING_INLINE: 58479.53 requests per second - PING_BULK: 59136.61 requests per second - SET: 41823.50 requests per second - GET: 45330.91 requests per second - INCR: 41771.09 requests per second +``` +$ src/redis-benchmark -c 10 -n 100000 -q +PING_INLINE: 58479.53 requests per second +PING_BULK: 59136.61 requests per second +SET: 41823.50 requests per second +GET: 45330.91 requests per second +INCR: 41771.09 requests per second +``` In this particular case, running a single `MONITOR` client can reduce the throughput by more than 50%. diff --git a/commands/mset.md b/commands/mset.md index 76e81d959a..a432859717 100644 --- a/commands/mset.md +++ b/commands/mset.md @@ -12,7 +12,8 @@ others are unchanged. 
@examples - @cli - MSET key1 "Hello" key2 "World" - GET key1 - GET key2 +```cli +MSET key1 "Hello" key2 "World" +GET key1 +GET key2 +``` diff --git a/commands/msetnx.md b/commands/msetnx.md index 1b1c060b0f..138450f655 100644 --- a/commands/msetnx.md +++ b/commands/msetnx.md @@ -19,7 +19,8 @@ others are unchanged. @examples - @cli - MSETNX key1 "Hello" key2 "there" - MSETNX key2 "there" key3 "world" - MGET key1 key2 key3 +```cli +MSETNX key1 "Hello" key2 "there" +MSETNX key2 "there" key3 "world" +MGET key1 key2 key3 +``` diff --git a/commands/object.md b/commands/object.md index a560dbefd0..c0b9a1f709 100644 --- a/commands/object.md +++ b/commands/object.md @@ -51,25 +51,29 @@ If the object you try to inspect is missing, a null bulk reply is returned. @examples - redis> lpush mylist "Hello World" - (integer) 4 - redis> object refcount mylist - (integer) 1 - redis> object encoding mylist - "ziplist" - redis> object idletime mylist - (integer) 10 +``` +redis> lpush mylist "Hello World" +(integer) 4 +redis> object refcount mylist +(integer) 1 +redis> object encoding mylist +"ziplist" +redis> object idletime mylist +(integer) 10 +``` In the following example you can see how the encoding changes once Redis is no longer able to use the space saving encoding. - redis> set foo 1000 - OK - redis> object encoding foo - "int" - redis> append foo bar - (integer) 7 - redis> get foo - "1000bar" - redis> object encoding foo - "raw" +``` +redis> set foo 1000 +OK +redis> object encoding foo +"int" +redis> append foo bar +(integer) 7 +redis> get foo +"1000bar" +redis> object encoding foo +"raw" +``` diff --git a/commands/persist.md b/commands/persist.md index 6f236afa2e..67a00147da 100644 --- a/commands/persist.md +++ b/commands/persist.md @@ -11,9 +11,10 @@ is associated). 
@examples - @cli - SET mykey "Hello" - EXPIRE mykey 10 - TTL mykey - PERSIST mykey - TTL mykey +```cli +SET mykey "Hello" +EXPIRE mykey 10 +TTL mykey +PERSIST mykey +TTL mykey +``` diff --git a/commands/pexpire.md b/commands/pexpire.md index 9b59fa952e..d5bb40bf63 100644 --- a/commands/pexpire.md +++ b/commands/pexpire.md @@ -12,8 +12,9 @@ specified in milliseconds instead of seconds. @examples - @cli - SET mykey "Hello" - PEXPIRE mykey 1500 - TTL mykey - PTTL mykey +```cli +SET mykey "Hello" +PEXPIRE mykey 1500 +TTL mykey +PTTL mykey +``` diff --git a/commands/pexpireat.md b/commands/pexpireat.md index febff0daca..bfd7005552 100644 --- a/commands/pexpireat.md +++ b/commands/pexpireat.md @@ -14,8 +14,9 @@ which the key will expire is specified in milliseconds instead of seconds. @examples - @cli - SET mykey "Hello" - PEXPIREAT mykey 1555555555005 - TTL mykey - PTTL mykey +```cli +SET mykey "Hello" +PEXPIREAT mykey 1555555555005 +TTL mykey +PTTL mykey +``` diff --git a/commands/ping.md b/commands/ping.md index f2405ce93d..203eafe813 100644 --- a/commands/ping.md +++ b/commands/ping.md @@ -8,5 +8,6 @@ latency. @examples - @cli - PING +```cli +PING +``` diff --git a/commands/psetex.md b/commands/psetex.md index fd0f0e731c..f6ee05815b 100644 --- a/commands/psetex.md +++ b/commands/psetex.md @@ -7,7 +7,8 @@ time is specified in milliseconds instead of seconds. @examples - @cli - PSETEX mykey 1000 "Hello" - PTTL mykey - GET mykey +```cli +PSETEX mykey 1000 "Hello" +PTTL mykey +GET mykey +``` diff --git a/commands/pttl.md b/commands/pttl.md index 7d3f6b4c21..e80f10f53e 100644 --- a/commands/pttl.md +++ b/commands/pttl.md @@ -13,7 +13,8 @@ or does not have a timeout. 
@examples - @cli - SET mykey "Hello" - EXPIRE mykey 1 - PTTL mykey +```cli +SET mykey "Hello" +EXPIRE mykey 1 +PTTL mykey +``` diff --git a/commands/rename.md b/commands/rename.md index 0f80d67b81..6317706540 100644 --- a/commands/rename.md +++ b/commands/rename.md @@ -9,7 +9,8 @@ If `newkey` already exists it is overwritten. @examples - @cli - SET mykey "Hello" - RENAME mykey myotherkey - GET myotherkey +```cli +SET mykey "Hello" +RENAME mykey myotherkey +GET myotherkey +``` diff --git a/commands/renamenx.md b/commands/renamenx.md index 737c9dfdf0..4823887fa1 100644 --- a/commands/renamenx.md +++ b/commands/renamenx.md @@ -10,8 +10,9 @@ It returns an error under the same conditions as `RENAME`. @examples - @cli - SET mykey "Hello" - SET myotherkey "World" - RENAMENX mykey myotherkey - GET myotherkey +```cli +SET mykey "Hello" +SET myotherkey "World" +RENAMENX mykey myotherkey +GET myotherkey +``` diff --git a/commands/restore.md b/commands/restore.md index 861dcbfbad..7f6ab7f4c8 100644 --- a/commands/restore.md +++ b/commands/restore.md @@ -13,15 +13,17 @@ If they don't match an error is returned. @examples - redis> DEL mykey - 0 - redis> RESTORE mykey 0 "\n\x17\x17\x00\x00\x00\x12\x00\x00\x00\x03\x00\ - x00\xc0\x01\x00\x04\xc0\x02\x00\x04\xc0\x03\x00\ - xff\x04\x00u#<\xc0;.\xe9\xdd" - OK - redis> TYPE mykey - list - redis> LRANGE mykey 0 -1 - 1) "1" - 2) "2" - 3) "3" +``` +redis> DEL mykey +0 +redis> RESTORE mykey 0 "\n\x17\x17\x00\x00\x00\x12\x00\x00\x00\x03\x00\ + x00\xc0\x01\x00\x04\xc0\x02\x00\x04\xc0\x03\x00\ + xff\x04\x00u#<\xc0;.\xe9\xdd" +OK +redis> TYPE mykey +list +redis> LRANGE mykey 0 -1 +1) "1" +2) "2" +3) "3" +``` diff --git a/commands/rpop.md b/commands/rpop.md index d28fcc7e24..2d6c29ef5c 100644 --- a/commands/rpop.md +++ b/commands/rpop.md @@ -6,9 +6,10 @@ Removes and returns the last element of the list stored at `key`. 
@examples - @cli - RPUSH mylist "one" - RPUSH mylist "two" - RPUSH mylist "three" - RPOP mylist - LRANGE mylist 0 -1 +```cli +RPUSH mylist "one" +RPUSH mylist "two" +RPUSH mylist "three" +RPOP mylist +LRANGE mylist 0 -1 +``` diff --git a/commands/rpoplpush.md b/commands/rpoplpush.md index 34c393bac2..b42e70c533 100644 --- a/commands/rpoplpush.md +++ b/commands/rpoplpush.md @@ -19,13 +19,14 @@ list, so it can be considered as a list rotation command. @examples - @cli - RPUSH mylist "one" - RPUSH mylist "two" - RPUSH mylist "three" - RPOPLPUSH mylist myotherlist - LRANGE mylist 0 -1 - LRANGE myotherlist 0 -1 +```cli +RPUSH mylist "one" +RPUSH mylist "two" +RPUSH mylist "three" +RPOPLPUSH mylist myotherlist +LRANGE mylist 0 -1 +LRANGE myotherlist 0 -1 +``` ## Pattern: Reliable queue diff --git a/commands/rpush.md b/commands/rpush.md index 6ed764f650..182ec88a38 100644 --- a/commands/rpush.md +++ b/commands/rpush.md @@ -22,7 +22,8 @@ containing `a` as first element, `b` as second element and `c` as third element. @examples - @cli - RPUSH mylist "hello" - RPUSH mylist "world" - LRANGE mylist 0 -1 +```cli +RPUSH mylist "hello" +RPUSH mylist "world" +LRANGE mylist 0 -1 +``` diff --git a/commands/rpushx.md b/commands/rpushx.md index 5375485707..5748a35fcb 100644 --- a/commands/rpushx.md +++ b/commands/rpushx.md @@ -9,9 +9,10 @@ exist. @examples - @cli - RPUSH mylist "Hello" - RPUSHX mylist "World" - RPUSHX myotherlist "World" - LRANGE mylist 0 -1 - LRANGE myotherlist 0 -1 +```cli +RPUSH mylist "Hello" +RPUSHX mylist "World" +RPUSHX myotherlist "World" +LRANGE mylist 0 -1 +LRANGE myotherlist 0 -1 +``` diff --git a/commands/sadd.md b/commands/sadd.md index 92de4c30ad..63b3c0945c 100644 --- a/commands/sadd.md +++ b/commands/sadd.md @@ -17,8 +17,9 @@ all the elements already present into the set. 
@examples - @cli - SADD myset "Hello" - SADD myset "World" - SADD myset "World" - SMEMBERS myset +```cli +SADD myset "Hello" +SADD myset "World" +SADD myset "World" +SMEMBERS myset +``` diff --git a/commands/scard.md b/commands/scard.md index 59f0926a9f..85d3c01059 100644 --- a/commands/scard.md +++ b/commands/scard.md @@ -7,7 +7,8 @@ does not exist. @examples - @cli - SADD myset "Hello" - SADD myset "World" - SCARD myset +```cli +SADD myset "Hello" +SADD myset "World" +SCARD myset +``` diff --git a/commands/script exists.md b/commands/script exists.md index 435b4150a0..199d752012 100644 --- a/commands/script exists.md +++ b/commands/script exists.md @@ -20,6 +20,7 @@ script cache, an 1 is returned, otherwise 0 is returned. @example - @cli - SCRIPT LOAD "return 1" - SCRIPT EXISTS e0e1f9fabfc9d4800c877a703b823ac0578ff8db ffffffffffffffffffffffffffffffffffffffff +```cli +SCRIPT LOAD "return 1" +SCRIPT EXISTS e0e1f9fabfc9d4800c877a703b823ac0578ff8db ffffffffffffffffffffffffffffffffffffffff +``` diff --git a/commands/sdiff.md b/commands/sdiff.md index e1ae603dd4..f36e1fe696 100644 --- a/commands/sdiff.md +++ b/commands/sdiff.md @@ -3,10 +3,12 @@ set and all the successive sets. For example: - key1 = {a,b,c,d} - key2 = {c} - key3 = {a,c,e} - SDIFF key1 key2 key3 = {b,d} +``` +key1 = {a,b,c,d} +key2 = {c} +key3 = {a,c,e} +SDIFF key1 key2 key3 = {b,d} +``` Keys that do not exist are considered to be empty sets. @@ -16,11 +18,12 @@ Keys that do not exist are considered to be empty sets. @examples - @cli - SADD key1 "a" - SADD key1 "b" - SADD key1 "c" - SADD key2 "c" - SADD key2 "d" - SADD key2 "e" - SDIFF key1 key2 +```cli +SADD key1 "a" +SADD key1 "b" +SADD key1 "c" +SADD key2 "c" +SADD key2 "d" +SADD key2 "e" +SDIFF key1 key2 +``` diff --git a/commands/set.md b/commands/set.md index 547f4c0824..b93d618525 100644 --- a/commands/set.md +++ b/commands/set.md @@ -7,6 +7,7 @@ If `key` already holds a value, it is overwritten, regardless of its type. 
@examples - @cli - SET mykey "Hello" - GET mykey +```cli +SET mykey "Hello" +GET mykey +``` diff --git a/commands/setbit.md b/commands/setbit.md index 3541cddfb6..9163c90b54 100644 --- a/commands/setbit.md +++ b/commands/setbit.md @@ -25,7 +25,8 @@ the same _key_ will not have the allocation overhead. @examples - @cli - SETBIT mykey 7 1 - SETBIT mykey 7 0 - GET mykey +```cli +SETBIT mykey 7 1 +SETBIT mykey 7 0 +GET mykey +``` diff --git a/commands/setex.md b/commands/setex.md index 53221baaee..a57661016b 100644 --- a/commands/setex.md +++ b/commands/setex.md @@ -2,8 +2,10 @@ Set `key` to hold the string `value` and set `key` to timeout after a given number of seconds. This command is equivalent to executing the following commands: - SET mykey value - EXPIRE mykey seconds +``` +SET mykey value +EXPIRE mykey seconds +``` `SETEX` is atomic, and can be reproduced by using the previous two commands inside an `MULTI` / `EXEC` block. @@ -18,7 +20,8 @@ An error is returned when `seconds` is invalid. @examples - @cli - SETEX mykey 10 "Hello" - TTL mykey - GET mykey +```cli +SETEX mykey 10 "Hello" +TTL mykey +GET mykey +``` diff --git a/commands/setnx.md b/commands/setnx.md index d3437d3c9d..4ee0e1bb80 100644 --- a/commands/setnx.md +++ b/commands/setnx.md @@ -12,10 +12,11 @@ When `key` already holds a value, no operation is performed. @examples - @cli - SETNX mykey "Hello" - SETNX mykey "World" - GET mykey +```cli +SETNX mykey "Hello" +SETNX mykey "World" +GET mykey +``` ## Design pattern: Locking with `!SETNX` @@ -23,7 +24,9 @@ When `key` already holds a value, no operation is performed. For example, to acquire the lock of the key `foo`, the client could try the following: - SETNX lock.foo +``` +SETNX lock.foo +``` If `SETNX` returns `1` the client acquired the lock, setting the `lock.foo` key to the Unix time at which the lock should no longer be considered valid. 
@@ -67,7 +70,9 @@ Let's see how C4, our sane client, uses the good algorithm: * Instead, if the lock is expired because the Unix time at `lock.foo` is older than the current Unix time, C4 tries to perform: - GETSET lock.foo + ``` + GETSET lock.foo + ``` * Because of the `GETSET` semantic, C4 can check if the old value stored at `key` is still an expired timestamp. diff --git a/commands/setrange.md b/commands/setrange.md index 38821216cb..617e3d5dc3 100644 --- a/commands/setrange.md +++ b/commands/setrange.md @@ -34,13 +34,15 @@ This is a very fast and efficient storage in many real world use cases. Basic usage: - @cli - SET key1 "Hello World" - SETRANGE key1 6 "Redis" - GET key1 +```cli +SET key1 "Hello World" +SETRANGE key1 6 "Redis" +GET key1 +``` Example of zero padding: - @cli - SETRANGE key2 6 "Redis" - GET key2 +```cli +SETRANGE key2 6 "Redis" +GET key2 +``` diff --git a/commands/sinter.md b/commands/sinter.md index d7bdceb551..b6212ee654 100644 --- a/commands/sinter.md +++ b/commands/sinter.md @@ -3,10 +3,12 @@ sets. For example: - key1 = {a,b,c,d} - key2 = {c} - key3 = {a,c,e} - SINTER key1 key2 key3 = {c} +``` +key1 = {a,b,c,d} +key2 = {c} +key3 = {a,c,e} +SINTER key1 key2 key3 = {c} +``` Keys that do not exist are considered to be empty sets. With one of the keys being an empty set, the resulting set is also empty (since @@ -18,11 +20,12 @@ set intersection with an empty set always results in an empty set). @examples - @cli - SADD key1 "a" - SADD key1 "b" - SADD key1 "c" - SADD key2 "c" - SADD key2 "d" - SADD key2 "e" - SINTER key1 key2 +```cli +SADD key1 "a" +SADD key1 "b" +SADD key1 "c" +SADD key2 "c" +SADD key2 "d" +SADD key2 "e" +SINTER key1 key2 +``` diff --git a/commands/sismember.md b/commands/sismember.md index 109b7bfbcb..219cd6e3e0 100644 --- a/commands/sismember.md +++ b/commands/sismember.md @@ -9,7 +9,8 @@ Returns if `member` is a member of the set stored at `key`. 
@examples - @cli - SADD myset "one" - SISMEMBER myset "one" - SISMEMBER myset "two" +```cli +SADD myset "one" +SISMEMBER myset "one" +SISMEMBER myset "two" +``` diff --git a/commands/slowlog.md b/commands/slowlog.md index 55306cdda1..0cf113f34a 100644 --- a/commands/slowlog.md +++ b/commands/slowlog.md @@ -42,17 +42,19 @@ implemented in redis-cli (deeply nested multi bulk replies). ## Output format - redis 127.0.0.1:6379> slowlog get 2 - 1) 1) (integer) 14 - 2) (integer) 1309448221 - 3) (integer) 15 - 4) 1) "ping" - 2) 1) (integer) 13 - 2) (integer) 1309448128 - 3) (integer) 30 - 4) 1) "slowlog" - 2) "get" - 3) "100" +``` +redis 127.0.0.1:6379> slowlog get 2 +1) 1) (integer) 14 + 2) (integer) 1309448221 + 3) (integer) 15 + 4) 1) "ping" +2) 1) (integer) 13 + 2) (integer) 1309448128 + 3) (integer) 30 + 4) 1) "slowlog" + 2) "get" + 3) "100" +``` Every entry is composed of four fields: diff --git a/commands/smembers.md b/commands/smembers.md index f278c0eed0..4672e31cdc 100644 --- a/commands/smembers.md +++ b/commands/smembers.md @@ -8,7 +8,8 @@ This has the same effect as running `SINTER` with one argument `key`. @examples - @cli - SADD myset "Hello" - SADD myset "World" - SMEMBERS myset +```cli +SADD myset "Hello" +SADD myset "World" +SMEMBERS myset +``` diff --git a/commands/smove.md b/commands/smove.md index e2dfccb14c..6b2400b6b0 100644 --- a/commands/smove.md +++ b/commands/smove.md @@ -21,10 +21,11 @@ An error is returned if `source` or `destination` does not hold a set value. 
@examples - @cli - SADD myset "one" - SADD myset "two" - SADD myotherset "three" - SMOVE myset myotherset "two" - SMEMBERS myset - SMEMBERS myotherset +```cli +SADD myset "one" +SADD myset "two" +SADD myotherset "three" +SMOVE myset myotherset "two" +SMEMBERS myset +SMEMBERS myotherset +``` diff --git a/commands/sort.md b/commands/sort.md index c2c7b5e533..418e6c332d 100644 --- a/commands/sort.md +++ b/commands/sort.md @@ -8,18 +8,24 @@ This is `SORT` in its simplest form: [tdts]: /topics/data-types#set [tdtss]: /topics/data-types#sorted-sets - SORT mylist +``` +SORT mylist +``` Assuming `mylist` is a list of numbers, this command will return the same list with the elements sorted from small to large. In order to sort the numbers from large to small, use the `!DESC` modifier: - SORT mylist DESC +``` +SORT mylist DESC +``` When `mylist` contains string values and you want to sort them lexicographically, use the `!ALPHA` modifier: - SORT mylist ALPHA +``` +SORT mylist ALPHA +``` Redis is UTF-8 aware, assuming you correctly set the `!LC_COLLATE` environment variable. @@ -31,13 +37,17 @@ starting at `offset`. The following example will return 10 elements of the sorted version of `mylist`, starting at element 0 (`offset` is zero-based): - SORT mylist LIMIT 0 10 +``` +SORT mylist LIMIT 0 10 +``` Almost all modifiers can be used together. The following example will return the first 5 elements, lexicographically sorted in descending order: - SORT mylist LIMIT 0 5 ALPHA DESC +``` +SORT mylist LIMIT 0 5 ALPHA DESC +``` ## Sorting by external keys @@ -49,7 +59,9 @@ When these objects have associated weights stored in `weight_1`, `weight_2` and `weight_3`, `SORT` can be instructed to use these weights to sort `mylist` with the following statement: - SORT mylist BY weight_* +``` +SORT mylist BY weight_* +``` The `BY` option takes a pattern (equal to `weight_*` in this example) that is used to generate the keys that are used for sorting. @@ -63,7 +75,9 @@ the sorting operation. 
This is useful if you want to retrieve external keys (see the `!GET` option below) without the overhead of sorting. - SORT mylist BY nosort +``` +SORT mylist BY nosort +``` ## Retrieving external keys @@ -73,14 +87,18 @@ In some cases, it is more useful to get the actual objects instead of their IDs Retrieving external keys based on the elements in a list, set or sorted set can be done with the following command: - SORT mylist BY weight_* GET object_* +``` +SORT mylist BY weight_* GET object_* +``` The `!GET` option can be used multiple times in order to get more keys for every element of the original list, set or sorted set. It is also possible to `!GET` the element itself using the special pattern `#`: - SORT mylist BY weight_* GET object_* GET # +``` +SORT mylist BY weight_* GET object_* GET # +``` ## Storing the result of a SORT operation @@ -88,7 +106,9 @@ By default, `SORT` returns the sorted elements to the client. With the `!STORE` option, the result will be stored as a list at the specified key instead of being returned to the client. - SORT mylist BY weight_* STORE resultkey +``` +SORT mylist BY weight_* STORE resultkey +``` An interesting pattern using `SORT ... STORE` consists in associating an `EXPIRE` timeout to the resulting key so that in applications where the result @@ -107,7 +127,9 @@ Some kind of locking is needed here (for instance using `SETNX`). It is possible to use `!BY` and `!GET` options against hash fields with the following syntax: - SORT mylist BY weight_*->fieldname GET object_*->fieldname +``` +SORT mylist BY weight_*->fieldname GET object_*->fieldname +``` The string `->` is used to separate the key name from the hash field name. The key is substituted as documented above, and the hash stored at the resulting diff --git a/commands/spop.md b/commands/spop.md index 0f9b230430..ddaf36ff9c 100644 --- a/commands/spop.md +++ b/commands/spop.md @@ -9,9 +9,10 @@ set but does not remove it. 
@examples - @cli - SADD myset "one" - SADD myset "two" - SADD myset "three" - SPOP myset - SMEMBERS myset +```cli +SADD myset "one" +SADD myset "two" +SADD myset "three" +SPOP myset +SMEMBERS myset +``` diff --git a/commands/srandmember.md b/commands/srandmember.md index 196c5a790a..c6db17ac39 100644 --- a/commands/srandmember.md +++ b/commands/srandmember.md @@ -10,8 +10,9 @@ element without altering the original set in any way. @examples - @cli - SADD myset "one" - SADD myset "two" - SADD myset "three" - SRANDMEMBER myset +```cli +SADD myset "one" +SADD myset "two" +SADD myset "three" +SRANDMEMBER myset +``` diff --git a/commands/srem.md b/commands/srem.md index f92a1d0d92..b1863974a0 100644 --- a/commands/srem.md +++ b/commands/srem.md @@ -17,10 +17,11 @@ including non existing members. @examples - @cli - SADD myset "one" - SADD myset "two" - SADD myset "three" - SREM myset "one" - SREM myset "four" - SMEMBERS myset +```cli +SADD myset "one" +SADD myset "two" +SADD myset "three" +SREM myset "one" +SREM myset "four" +SMEMBERS myset +``` diff --git a/commands/strlen.md b/commands/strlen.md index 4b36ab8b80..e504180f01 100644 --- a/commands/strlen.md +++ b/commands/strlen.md @@ -8,7 +8,8 @@ exist. @examples - @cli - SET mykey "Hello world" - STRLEN mykey - STRLEN nonexisting +```cli +SET mykey "Hello world" +STRLEN mykey +STRLEN nonexisting +``` diff --git a/commands/sunion.md b/commands/sunion.md index 2cdaaadd84..b39ccf32ca 100644 --- a/commands/sunion.md +++ b/commands/sunion.md @@ -2,10 +2,12 @@ Returns the members of the set resulting from the union of all the given sets. For example: - key1 = {a,b,c,d} - key2 = {c} - key3 = {a,c,e} - SUNION key1 key2 key3 = {a,b,c,d,e} +``` +key1 = {a,b,c,d} +key2 = {c} +key3 = {a,c,e} +SUNION key1 key2 key3 = {a,b,c,d,e} +``` Keys that do not exist are considered to be empty sets. @@ -15,11 +17,12 @@ Keys that do not exist are considered to be empty sets. 
@examples - @cli - SADD key1 "a" - SADD key1 "b" - SADD key1 "c" - SADD key2 "c" - SADD key2 "d" - SADD key2 "e" - SUNION key1 key2 +```cli +SADD key1 "a" +SADD key1 "b" +SADD key1 "c" +SADD key2 "c" +SADD key2 "d" +SADD key2 "e" +SUNION key1 key2 +``` diff --git a/commands/time.md b/commands/time.md index 422f1fdf6e..5d22ba3e12 100644 --- a/commands/time.md +++ b/commands/time.md @@ -18,6 +18,7 @@ A multi bulk reply containing two elements: @examples - @cli - TIME - TIME +```cli +TIME +TIME +``` diff --git a/commands/ttl.md b/commands/ttl.md index 1e1914cad8..0d478c520a 100644 --- a/commands/ttl.md +++ b/commands/ttl.md @@ -9,7 +9,8 @@ have a timeout. @examples - @cli - SET mykey "Hello" - EXPIRE mykey 10 - TTL mykey +```cli +SET mykey "Hello" +EXPIRE mykey 10 +TTL mykey +``` diff --git a/commands/type.md b/commands/type.md index 9c5fefb048..5332dfd5f1 100644 --- a/commands/type.md +++ b/commands/type.md @@ -8,10 +8,11 @@ and `hash`. @examples - @cli - SET key1 "value" - LPUSH key2 "value" - SADD key3 "value" - TYPE key1 - TYPE key2 - TYPE key3 +```cli +SET key1 "value" +LPUSH key2 "value" +SADD key3 "value" +TYPE key1 +TYPE key2 +TYPE key3 +``` diff --git a/commands/zadd.md b/commands/zadd.md index 081b926b3f..497eaa7f14 100644 --- a/commands/zadd.md +++ b/commands/zadd.md @@ -31,9 +31,10 @@ sets][tdtss]. @examples - @cli - ZADD myzset 1 "one" - ZADD myzset 1 "uno" - ZADD myzset 2 "two" - ZADD myzset 3 "two" - ZRANGE myzset 0 -1 WITHSCORES +```cli +ZADD myzset 1 "one" +ZADD myzset 1 "uno" +ZADD myzset 2 "two" +ZADD myzset 3 "two" +ZRANGE myzset 0 -1 WITHSCORES +``` diff --git a/commands/zcard.md b/commands/zcard.md index 01331eccd2..5ad504335d 100644 --- a/commands/zcard.md +++ b/commands/zcard.md @@ -8,7 +8,8 @@ if `key` does not exist. 
@examples - @cli - ZADD myzset 1 "one" - ZADD myzset 2 "two" - ZCARD myzset +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZCARD myzset +``` diff --git a/commands/zcount.md b/commands/zcount.md index ee56a53137..468b0dcbbe 100644 --- a/commands/zcount.md +++ b/commands/zcount.md @@ -10,9 +10,10 @@ The `min` and `max` arguments have the same semantic as described for @examples - @cli - ZADD myzset 1 "one" - ZADD myzset 2 "two" - ZADD myzset 3 "three" - ZCOUNT myzset -inf +inf - ZCOUNT myzset (1 3 +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZADD myzset 3 "three" +ZCOUNT myzset -inf +inf +ZCOUNT myzset (1 3 +``` diff --git a/commands/zincrby.md b/commands/zincrby.md index 4e30a49ab2..1d9da13441 100644 --- a/commands/zincrby.md +++ b/commands/zincrby.md @@ -18,8 +18,9 @@ number), represented as string. @examples - @cli - ZADD myzset 1 "one" - ZADD myzset 2 "two" - ZINCRBY myzset 2 "one" - ZRANGE myzset 0 -1 WITHSCORES +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZINCRBY myzset 2 "one" +ZRANGE myzset 0 -1 WITHSCORES +``` diff --git a/commands/zinterstore.md b/commands/zinterstore.md index 7b7f1afe2e..0ecda0ddd7 100644 --- a/commands/zinterstore.md +++ b/commands/zinterstore.md @@ -20,11 +20,12 @@ If `destination` already exists, it is overwritten. @examples - @cli - ZADD zset1 1 "one" - ZADD zset1 2 "two" - ZADD zset2 1 "one" - ZADD zset2 2 "two" - ZADD zset2 3 "three" - ZINTERSTORE out 2 zset1 zset2 WEIGHTS 2 3 - ZRANGE out 0 -1 WITHSCORES +```cli +ZADD zset1 1 "one" +ZADD zset1 2 "two" +ZADD zset2 1 "one" +ZADD zset2 2 "two" +ZADD zset2 3 "three" +ZINTERSTORE out 2 zset1 zset2 WEIGHTS 2 3 +ZRANGE out 0 -1 WITHSCORES +``` diff --git a/commands/zrange.md b/commands/zrange.md index 3ec95281b7..753c83e2e3 100644 --- a/commands/zrange.md +++ b/commands/zrange.md @@ -31,10 +31,11 @@ their scores). 
@examples - @cli - ZADD myzset 1 "one" - ZADD myzset 2 "two" - ZADD myzset 3 "three" - ZRANGE myzset 0 -1 - ZRANGE myzset 2 3 - ZRANGE myzset -2 -1 +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZADD myzset 3 "three" +ZRANGE myzset 0 -1 +ZRANGE myzset 2 3 +ZRANGE myzset -2 -1 +``` diff --git a/commands/zrangebyscore.md b/commands/zrangebyscore.md index 92c7be524b..33b6cdcd63 100644 --- a/commands/zrangebyscore.md +++ b/commands/zrangebyscore.md @@ -27,11 +27,15 @@ It is possible to specify an open interval (exclusive) by prefixing the score with the character `(`. For example: - ZRANGEBYSCORE zset (1 5 +``` +ZRANGEBYSCORE zset (1 5 +``` Will return all elements with `1 < score <= 5` while: - ZRANGEBYSCORE zset (5 (10 +``` +ZRANGEBYSCORE zset (5 (10 +``` Will return all the elements with `5 < score < 10` (5 and 10 excluded). @@ -42,11 +46,12 @@ with their scores). @examples - @cli - ZADD myzset 1 "one" - ZADD myzset 2 "two" - ZADD myzset 3 "three" - ZRANGEBYSCORE myzset -inf +inf - ZRANGEBYSCORE myzset 1 2 - ZRANGEBYSCORE myzset (1 2 - ZRANGEBYSCORE myzset (1 (2 +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZADD myzset 3 "three" +ZRANGEBYSCORE myzset -inf +inf +ZRANGEBYSCORE myzset 1 2 +ZRANGEBYSCORE myzset (1 2 +ZRANGEBYSCORE myzset (1 (2 +``` diff --git a/commands/zrank.md b/commands/zrank.md index 243af3c338..bbd9dedad3 100644 --- a/commands/zrank.md +++ b/commands/zrank.md @@ -14,9 +14,10 @@ to low. @examples - @cli - ZADD myzset 1 "one" - ZADD myzset 2 "two" - ZADD myzset 3 "three" - ZRANK myzset "three" - ZRANK myzset "four" +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZADD myzset 3 "three" +ZRANK myzset "three" +ZRANK myzset "four" +``` diff --git a/commands/zrem.md b/commands/zrem.md index 9461fd7eeb..10433c2418 100644 --- a/commands/zrem.md +++ b/commands/zrem.md @@ -18,9 +18,10 @@ An error is returned when `key` exists and does not hold a sorted set. 
@examples - @cli - ZADD myzset 1 "one" - ZADD myzset 2 "two" - ZADD myzset 3 "three" - ZREM myzset "two" - ZRANGE myzset 0 -1 WITHSCORES +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZADD myzset 3 "three" +ZREM myzset "two" +ZRANGE myzset 0 -1 WITHSCORES +``` diff --git a/commands/zremrangebyrank.md b/commands/zremrangebyrank.md index 7a77d12900..edd3cf39a5 100644 --- a/commands/zremrangebyrank.md +++ b/commands/zremrangebyrank.md @@ -13,9 +13,10 @@ the second highest score and so forth. @examples - @cli - ZADD myzset 1 "one" - ZADD myzset 2 "two" - ZADD myzset 3 "three" - ZREMRANGEBYRANK myzset 0 1 - ZRANGE myzset 0 -1 WITHSCORES +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZADD myzset 3 "three" +ZREMRANGEBYRANK myzset 0 1 +ZRANGE myzset 0 -1 WITHSCORES +``` diff --git a/commands/zremrangebyscore.md b/commands/zremrangebyscore.md index 253b37098a..3665bd0f1f 100644 --- a/commands/zremrangebyscore.md +++ b/commands/zremrangebyscore.md @@ -10,9 +10,10 @@ Since version 2.1.6, `min` and `max` can be exclusive, following the syntax of @examples - @cli - ZADD myzset 1 "one" - ZADD myzset 2 "two" - ZADD myzset 3 "three" - ZREMRANGEBYSCORE myzset -inf (2 - ZRANGE myzset 0 -1 WITHSCORES +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZADD myzset 3 "three" +ZREMRANGEBYSCORE myzset -inf (2 +ZRANGE myzset 0 -1 WITHSCORES +``` diff --git a/commands/zrevrange.md b/commands/zrevrange.md index c77c66001e..dd4fc654f0 100644 --- a/commands/zrevrange.md +++ b/commands/zrevrange.md @@ -11,10 +11,11 @@ their scores). 
@examples - @cli - ZADD myzset 1 "one" - ZADD myzset 2 "two" - ZADD myzset 3 "three" - ZREVRANGE myzset 0 -1 - ZREVRANGE myzset 2 3 - ZREVRANGE myzset -2 -1 +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZADD myzset 3 "three" +ZREVRANGE myzset 0 -1 +ZREVRANGE myzset 2 3 +ZREVRANGE myzset -2 -1 +``` diff --git a/commands/zrevrangebyscore.md b/commands/zrevrangebyscore.md index 84aaead246..e7ca6e094b 100644 --- a/commands/zrevrangebyscore.md +++ b/commands/zrevrangebyscore.md @@ -16,11 +16,12 @@ with their scores). @examples - @cli - ZADD myzset 1 "one" - ZADD myzset 2 "two" - ZADD myzset 3 "three" - ZREVRANGEBYSCORE myzset +inf -inf - ZREVRANGEBYSCORE myzset 2 1 - ZREVRANGEBYSCORE myzset 2 (1 - ZREVRANGEBYSCORE myzset (2 (1 +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZADD myzset 3 "three" +ZREVRANGEBYSCORE myzset +inf -inf +ZREVRANGEBYSCORE myzset 2 1 +ZREVRANGEBYSCORE myzset 2 (1 +ZREVRANGEBYSCORE myzset (2 (1 +``` diff --git a/commands/zrevrank.md b/commands/zrevrank.md index 928dea02bd..5bfbd8ffa7 100644 --- a/commands/zrevrank.md +++ b/commands/zrevrank.md @@ -14,9 +14,10 @@ high. @examples - @cli - ZADD myzset 1 "one" - ZADD myzset 2 "two" - ZADD myzset 3 "three" - ZREVRANK myzset "one" - ZREVRANK myzset "four" +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZADD myzset 3 "three" +ZREVRANK myzset "one" +ZREVRANK myzset "four" +``` diff --git a/commands/zscore.md b/commands/zscore.md index 03d178e5b9..e6aa1e9af7 100644 --- a/commands/zscore.md +++ b/commands/zscore.md @@ -10,6 +10,7 @@ represented as string. @examples - @cli - ZADD myzset 1 "one" - ZSCORE myzset "one" +```cli +ZADD myzset 1 "one" +ZSCORE myzset "one" +``` diff --git a/commands/zunionstore.md b/commands/zunionstore.md index 196c7f14de..49e2d506e9 100644 --- a/commands/zunionstore.md +++ b/commands/zunionstore.md @@ -28,11 +28,12 @@ If `destination` already exists, it is overwritten. 
@examples - @cli - ZADD zset1 1 "one" - ZADD zset1 2 "two" - ZADD zset2 1 "one" - ZADD zset2 2 "two" - ZADD zset2 3 "three" - ZUNIONSTORE out 2 zset1 zset2 WEIGHTS 2 3 - ZRANGE out 0 -1 WITHSCORES +```cli +ZADD zset1 1 "one" +ZADD zset1 2 "two" +ZADD zset2 1 "one" +ZADD zset2 2 "two" +ZADD zset2 3 "three" +ZUNIONSTORE out 2 zset1 zset2 WEIGHTS 2 3 +ZRANGE out 0 -1 WITHSCORES +``` diff --git a/remarkdown.rb b/remarkdown.rb index d2a5ec66bb..373e88ddb2 100644 --- a/remarkdown.rb +++ b/remarkdown.rb @@ -55,7 +55,20 @@ def format_block_node(node) when "p" format_inline_nodes(node.children) + "\n" when "pre" - indent(node.child.content.chomp, 4) + "\n" + code = node.child + content = code.content.chomp + + if code["class"] + klass = code["class"] + else + # Test for @cli clause + if content =~ /\A@cli\n/ + content = content.gsub(/\A@cli\n/, "") + klass = "cli" + end + end + + "```#{klass}\n" + content + "\n```\n" when "ul" format_ul(node) + "\n" when "ol" From 0a4fbbfa794e78d14e8a19154f9083783605ff1b Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 25 Jun 2012 11:52:15 -0700 Subject: [PATCH 0180/2880] Mention Redcarpet in README --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 08e01f3a8b..e9dbb1add3 100644 --- a/README.md +++ b/README.md @@ -72,13 +72,13 @@ the formatter: The formatter has the following dependencies: -* RDiscount +* Redcarpet * Nokogiri * The `par` tool Installation of the Ruby gems: - gem install rdiscount nokogiri + gem install redcarpet nokogiri Installation of par (OSX): From 847d75ff900f345b4c16d159c8bddf05ef902379 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 25 Jun 2012 11:52:48 -0700 Subject: [PATCH 0181/2880] Format README --- README.md | 81 +++++++++++++++++++++++++++++++++---------------------- 1 file changed, 49 insertions(+), 32 deletions(-) diff --git a/README.md b/README.md index e9dbb1add3..2ee41c8696 100644 --- a/README.md +++ b/README.md @@ -6,29 
+6,31 @@ All clients are listed in the `clients.json` file. Each key in the JSON object represents a single client library. For example: - "Rediska": { +``` +"Rediska": { - # A programming language should be specified. - "language": "PHP", + # A programming language should be specified. + "language": "PHP", - # If the project has a website of its own, put it here. - # Otherwise, lose the "url" key. - "url": "http://rediska.geometria-lab.net", + # If the project has a website of its own, put it here. + # Otherwise, lose the "url" key. + "url": "http://rediska.geometria-lab.net", - # A URL pointing to the repository where users can - # find the code. - "repository": "http://github.com/Shumkov/Rediska", + # A URL pointing to the repository where users can + # find the code. + "repository": "http://github.com/Shumkov/Rediska", - # A short, free-text description of the client. - # Should be objective. The goal is to help users - # choose the correct client they need. - "description": "A PHP client", + # A short, free-text description of the client. + # Should be objective. The goal is to help users + # choose the correct client they need. + "description": "A PHP client", - # An array of Twitter usernames for the authors - # and maintainers of the library. - "authors": ["shumkov"] + # An array of Twitter usernames for the authors + # and maintainers of the library. + "authors": ["shumkov"] - } +} +``` ## Commands @@ -39,22 +41,25 @@ description. We process this Markdown to provide a better experience, so some things to take into account: -* Inside text, all commands should be written in all caps, in between backticks. - For example: ``INCR``. +* Inside text, all commands should be written in all caps, in between + backticks. + For example: `INCR`. -* You can use some magic keywords to name common elements in Redis. - For example: `@multi-bulk-reply`. - These keywords will get expanded and auto-linked to relevant parts of the - documentation. 
+* You can use some magic keywords to name common elements in Redis. + For example: `@multi-bulk-reply`. + These keywords will get expanded and auto-linked to relevant parts of the + documentation. There should be at least two predefined sections: description and return value. The return value section is marked using the @return keyword: - Returns all keys matching the given pattern. +``` +Returns all keys matching the given pattern. - @return +@return - @multi-bulk-reply: all the keys that matched the pattern. +@multi-bulk-reply: all the keys that matched the pattern. +``` ## Styling guidelines @@ -68,7 +73,9 @@ To only reformat the files you have modified, first stage them using `git add` (this makes sure that your changes won't be lost in case of an error), then run the formatter: - $ rake format:cached +``` +$ rake format:cached +``` The formatter has the following dependencies: @@ -78,15 +85,21 @@ The formatter has the following dependencies: Installation of the Ruby gems: - gem install redcarpet nokogiri +``` +gem install redcarpet nokogiri +``` Installation of par (OSX): - brew install par +``` +brew install par +``` Installation of par (Ubuntu): - sudo apt-get install par +``` +sudo apt-get install par +``` ## Checking your work @@ -94,13 +107,17 @@ Once you're done, the very least you should do is make sure that all files compile properly. You can do this by running Rake inside your working directory. - $ rake parse +``` +$ rake parse +``` Additionally, if you have [Aspell][han] installed, you can spell check the documentation: [han]: http://aspell.net/ - $ rake spellcheck +``` +$ rake spellcheck +``` Exceptions can be added to `./wordlist`. 
From 64b8269f0b368657c6160477b21e2dfbdde48120 Mon Sep 17 00:00:00 2001 From: Eike Herzbach Date: Wed, 4 Jul 2012 15:01:19 +0200 Subject: [PATCH 0182/2880] Typos - remove extra "the"s --- commands/incr.md | 2 +- topics/latency.md | 2 +- topics/persistence.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/commands/incr.md b/commands/incr.md index 04671007e0..17a127b48d 100644 --- a/commands/incr.md +++ b/commands/incr.md @@ -6,7 +6,7 @@ This operation is limited to 64 bit signed integers. **Note**: this is a string operation because Redis does not have a dedicated integer type. -The the string stored at the key is interpreted as a base-10 **64 bit signed +The string stored at the key is interpreted as a base-10 **64 bit signed integer** to execute the operation. Redis stores integers in their integer representation, so for string values diff --git a/topics/latency.md b/topics/latency.md index db2a6315f7..73ab98e837 100644 --- a/topics/latency.md +++ b/topics/latency.md @@ -408,7 +408,7 @@ file you can use the strace command under Linux: The above command will show all the fdatasync(2) system calls performed by Redis in the main thread. With the above command you'll not see the fdatasync system calls performed by the background thread when the -the appendfsync config option is set to **everysec**. In order to do so +appendfsync config option is set to **everysec**. In order to do so just add the -f switch to strace. If you wish you can also see both fdatasync and write system calls with the diff --git a/topics/persistence.md b/topics/persistence.md index 2e4dd67b33..26a7aef03c 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -225,7 +225,7 @@ Interactions between AOF and RDB persistence Redis >= 2.4 makes sure to avoid triggering an AOF rewrite when an RDB snapshotting operation is already in progress, or allowing a BGSAVE while the -the AOF rewrite is in progress. 
This prevents two Redis background processes
+AOF rewrite is in progress. This prevents two Redis background processes
 from doing heavy disk I/O at the same time.
 
 When snapshotting is in progress and the user explicitly requests a log

From 8ef256189ba6bee026f4273e5347e2a2797b0702 Mon Sep 17 00:00:00 2001
From: antirez
Date: Sun, 8 Jul 2012 21:36:19 +0200
Subject: [PATCH 0183/2880] Redis sentinel major update.

---
 topics/sentinel-spec.md | 370 ++++++++++++++++++++++++++++------------
 1 file changed, 258 insertions(+), 112 deletions(-)

diff --git a/topics/sentinel-spec.md b/topics/sentinel-spec.md
index 8cd34f952e..7aa169666c 100644
--- a/topics/sentinel-spec.md
+++ b/topics/sentinel-spec.md
@@ -6,6 +6,7 @@ Changelog:
 * 1.0 first version.
 * 1.1 fail over steps modified: slaves are pointed to new master one after the other and not simultaneously. New section about monitoring slaves to ensure they are replicating correctly.
 * 1.2 Fixed a typo in the fail over section about: critical error is in step 5 and not 6. Added TODO section.
+* 1.3 Document updated to reflect the actual implementation of the monitoring and leader election.
 
 Introduction
 ===
@@ -17,15 +18,21 @@ a way to perform automatic fail over when a master instance is not functioning
 correctly.
 
 The plan is to provide a usable beta implementation of Redis Sentinel in a
-short time, preferrably in mid June 2012.
+short time, preferably in mid July 2012.
 
 In short this is what Redis Sentinel will be able to do:
 
-* Monitor master instances to see if they are available.
+* Monitor master and slave instances to see if they are available.
 * Promote a slave to master when the master fails.
 * Modify client configurations when a slave is elected.
 * Inform the system administrator about incidents using notifications.
 
+So the three different roles of Redis Sentinel can be summarized in the following three aspects:
+
+* Monitoring.
+* Notification.
+* Automatic failover.
+
 The following document explains the design of Redis Sentinel
 and how it accomplishes these goals.
 
@@ -38,23 +45,22 @@ different places of your network, monitoring the Redis master instance.
 
 However these independent devices can't act without agreement with other
 sentinels.
 
-Once a Redis master instance is detected as failing, for the fail over process
-to start the sentinel must verify that there is a given level of agreement.
+Once a Redis master instance is detected as failing, for the failover process
+to start, the sentinel must verify that there is a given level of agreement.
 
 The number of sentinels, their location in the network, and the
-"minimal agreement" configured, select the desired behavior among many
-possibilities.
+configured quorum, select the desired behavior among many possibilities.
 
-Redis Sentinel does not use any proxy: client reconfiguration are performed
+Redis Sentinel does not use any proxy: client reconfiguration is performed
 running user-provided executables (for instance a shell script or a
-Python program) in a user setup specific way.
+Python program) in a user-specific way.
 
 In what form it will be shipped
 ===
 
-Redis Sentinel will just be a special mode of the redis-server executable.
+Redis Sentinel is just a special mode of the redis-server executable.
 
-If the redis-server is called with "redis-sentinel" as argv[0] (for instance
+If the redis-server is called with "redis-sentinel" as `argv[0]` (for instance
 using a symbolic link or copying the file), or if the --sentinel option is passed,
 the Redis instance starts in sentinel mode and will only understand sentinel
 related commands. All the other commands will be refused.
 
@@ -67,47 +73,43 @@ to reimplement them or to maintain a separate code base for Redis Sentinel.
 
 Sentinels networking
 ===
 
-All the sentinels take a connection with the monitored master.
+All the sentinels maintain persistent connections with:
 
+* The monitored masters.
+* All its slaves, which are discovered using the master's INFO output.
+* All the other Sentinels connected to this master, discovered via Pub/Sub.
 
-Sentinels use the Redis protocol to talk with each other when needed.
+Sentinels use the Redis protocol to talk with each other, and to reply to
+external clients.
 
-Redis Sentinels export a single SENTINEL command. Subcommands of the SENTINEL
+Redis Sentinels export a SENTINEL command. Subcommands of the SENTINEL
 command are used in order to perform different actions.
 
-For instance to check what a sentinel thinks about the state of the master
-it is possible to send the "SENTINEL STATUS" command using redis-cli.
-
-There is no gossip going on between sentinels. A sentinel instance will query
-other instances only when an agreement is needed about the state of the
-master or slaves.
+For instance the `SENTINEL masters` command enumerates all the monitored
+masters and their states. However Sentinels can also reply to the PING command
+as a normal Redis instance, so that it is possible to monitor a Sentinel
+considering it a normal Redis instance.
 
 The list of networking tasks performed by every sentinel is the following:
 
-* A Sentinel PUBLISH its presence using the master Pub/Sub every minute.
-* A Sentinel accepts commands using a TCP port.
-* A Sentinel constantly monitors master and slaves sending PING commands.
-* A Sentinel sends INFO commands to the master every minute in order to take a fresh list of connected slaves.
-* A Sentinel monitors the snetinels Pub/SUb channel in order to discover newly connected setninels.
+* A Sentinel PUBLISHes its presence using the master Pub/Sub multiple times every five seconds.
+* A Sentinel accepts commands using a TCP port. By default the port is 26379.
+* A Sentinel constantly monitors masters, slaves, and other sentinels by sending PING commands.
+* A Sentinel sends INFO commands to the masters and slaves every ten seconds in order to take a fresh list of connected slaves, the state of the master, and so forth. +* A Sentinel monitors the sentinel Pub/Sub "hello" channel in order to discover newly connected Sentinels, or to detect no longer connected Sentinels. The channel used is `__sentinel__:hello`. Sentinels discovering === -While sentinels don't use some kind of bus interconnecting every Redis Sentinel -instance to each other, they still need to know the IP address and port of -each other sentinel instance, because this is useful to run the agreement -protocol needed to perform the slave election. - To make the configuration of sentinels as simple as possible every sentinel broadcasts its presence using the Redis master Pub/Sub functionality. Every sentinel is subscribed to the same channel, and broadcast information -about its existence to the same channel, including the "Run ID" of the Sentinel, +about its existence to the same channel, including the Run ID of the Sentinel, and the IP address and port where it is listening for commands. -Every sentinel maintain a list of other sentinels ID, IP and port. +Every sentinel maintains a list of other sentinels Run ID, IP and port. A sentinel that does no longer announce its presence using Pub/Sub for too -long time is removed from the list. In that case, optionally, a notification -is delivered to the system administrator. +long time is removed from the list, assuming the Master appears to be working well. In that case a notification is delivered to the system administrator. Detection of failing masters === @@ -140,122 +142,259 @@ configured script time limit. When this happens before triggering a fail over Redis Sentinel will try to send a "SCRIPT KILL" command, that will only succeed if the script was read-only. 
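The hello-channel bookkeeping described above can be sketched in Python. This is a sketch only: the actual Sentinel is not implemented in Python, and the payload layout, the field names, and the 30-minute staleness threshold used here are assumptions for illustration (the spec only says a Sentinel is removed after it stops announcing itself "for too long").

```python
import time

HELLO_CHANNEL = "__sentinel__:hello"
STALE_AFTER = 30 * 60  # assumed removal threshold in seconds; the spec only says "too long"

def make_hello(ip, port, runid):
    # Payload layout (ip,port,runid) is an assumption for illustration;
    # the spec only says the message carries the Run ID, IP and port.
    return f"{ip},{port},{runid}"

def process_hello(table, payload, now=None):
    """Update the per-master table of known sentinels from a hello message."""
    now = time.time() if now is None else now
    ip, port, runid = payload.split(",")
    table[runid] = {"ip": ip, "port": int(port), "last_hello": now}

def expire_stale(table, now=None):
    """Drop sentinels that stopped announcing themselves for too long."""
    now = time.time() if now is None else now
    for runid in [r for r, s in table.items()
                  if now - s["last_hello"] > STALE_AFTER]:
        del table[runid]

table = {}
process_hello(table, make_hello("10.0.0.1", 26379, "runid-a"), now=0)
process_hello(table, make_hello("10.0.0.2", 26379, "runid-b"), now=1000)
# At t=1801 runid-a (age 1801s) exceeds the threshold; runid-b (age 801s) survives.
expire_stale(table, now=STALE_AFTER + 1)
```

The same table is what the duplicate-removal logic described later operates on: a fresh hello with a known address but a new Run ID would evict the stale entry.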
-Agreement with other sentinels
+Subjectively down and Objectively down
+===
+
+From the point of view of a Sentinel there are two different error conditions for a master:
+
+* *Subjectively Down* (aka `S_DOWN`) means that a master is down from the point of view of a Sentinel.
+* *Objectively Down* (aka `O_DOWN`) means that a master is subjectively down from the point of view of enough Sentinels to reach the configured quorum for that master.
+
+How Sentinels agree to mark a master `O_DOWN`
+===
+
+Once a Sentinel detects that a master is in the `S_DOWN` condition it starts to
+send other sentinels a `SENTINEL is-master-down-by-addr` request every second.
+The reply is stored inside the state that every Sentinel keeps in memory.
+
+Ten times every second a Sentinel scans the state and checks if there are
+enough Sentinels thinking that a master is down (this is not specific to
+this operation, most state checks are performed with this frequency).
+
+If this Sentinel already has an `S_DOWN` condition for this master, and there
+are enough other sentinels that recently reported this condition
+(the validity time is currently set to 5 seconds), then the master is marked
+as `O_DOWN` (Objectively Down).
+
+Note that the `O_DOWN` state is not propagated among Sentinels. Every single
+Sentinel can reach this state independently.
+
+The SENTINEL is-master-down-by-addr command
+===
+
+Sentinels ask other Sentinels the state of a master from their local point
+of view using the `SENTINEL is-master-down-by-addr` command. This command
+replies with a boolean value (in the form of a 0 or 1 integer reply, as
+the first element of a multi bulk reply).
+
+However in order to avoid false positives, the command acts in the following
+way:
+
+* If the specified ip and port is not known, 0 is returned.
+* If the specified ip and port are found but don't belong to a Master instance, 0 is returned.
+* If the Sentinel is in TILT mode (see later in this document) 0 is returned.
+* The value of 1 is returned only if the instance is known, is a master, is flagged `S_DOWN` and the Sentinel is not in TILT mode.
+
+Duplicate Sentinels removal
+===
+
+In order to reach the configured quorum we absolutely want to make sure that
+the quorum is reached by different physical Sentinel instances. Under
+no circumstances should we get agreement from the same instance that for some
+reason appears to be two or more distinct Sentinel instances.
+
+This is enforced by an aggressive removal of duplicated Sentinels: every time
+a Sentinel sends a message in the Hello Pub/Sub channel with its address
+and runid, if we can't find a perfect match (same runid and address) inside
+the Sentinels table for that master, we remove any other Sentinel with the same
+runid OR the same address. And later add the new Sentinel.
+
+For instance if a Sentienl instance is restarted, the Run ID will be different,
+and the old Sentinel with the same IP address and port pair will be removed.
+
+Starting the failover: Leaders and Observers
+===
+
+The fact that a master is marked as `O_DOWN` is not enough to start the
+failover process. Which Sentinel should start the failover must also be
+decided.
+
+Sentinels can also be configured in two ways: only as monitors that can't
+perform the fail over, or as Sentinels that can start the failover.
+
+What is desirable is that only one Sentinel will start the failover process,
+and this Sentinel should be selected among the Sentinels that are allowed
+to perform the failover.
+
+In Sentinel there are two roles during a fail over:
+
+* The Leader Sentinel is the one selected to perform the failover.
+* The Observer Sentinels are the other sentinels just following the failover process without doing active operations.
+
+So the condition to start the failover is:
+
+* A Master in `O_DOWN` condition.
+* A Sentinel that is elected Leader.
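The transition from `S_DOWN` to `O_DOWN` described above is essentially a counting rule over recent `SENTINEL is-master-down-by-addr` replies. A hypothetical sketch follows; whether the local Sentinel counts toward the quorum is an assumption of this sketch, as is the exact bookkeeping.

```python
def is_odown(local_sdown, remote_reports, quorum, now, validity=5.0):
    """Evaluate the O_DOWN rule sketched above.

    remote_reports maps each other Sentinel's Run ID to the timestamp of
    its last positive is-master-down-by-addr reply; reports older than
    the 5-second validity window quoted in this spec are ignored.
    """
    if not local_sdown:
        return False  # the local S_DOWN condition is required
    fresh = sum(1 for t in remote_reports.values() if now - t <= validity)
    return 1 + fresh >= quorum  # count the local Sentinel itself
```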
+
+Leader Sentinel election
===
-Once a Sentinel detects that the master is failing, in order to perform the
-fail over, it must make sure that the required number of other sentinels
-are agreeing as well.
+The election process works as follows:
+
+* Every Sentinel with a master in `O_DOWN` condition updates its internal state with a frequency of 10 HZ to refresh what is the *Subjective Leader* from its point of view.
+
+A Subjective Leader is selected in this way by every sentinel.
-To do so one sentinel after the other is checked to see if the needed
-quorum is reached, as configured by the user.
+* Every Sentinel we know about for a given master, that is reachable (no `S_DOWN` state), that is allowed to perform the failover (this Sentinel-specific configuration is propagated using the Hello channel), is a possible candidate.
+* Among all the possible candidates, the one with lexicographically smaller Run ID is selected.
-If the needed level of agreement is reached, the sentinel schedules the
-fail over after DELAY seconds, where:
+Every time a Sentinel replies to the `SENTINEL is-master-down-by-addr` command it also replies with the Run ID of its Subjective Leader.
-    DELAY = SENTINEL_CARDINALITY * 60
+Every Sentinel with a failing master (`O_DOWN`) checks its subjective leader
+and the subjective leaders of all the other Sentinels with a frequency of
+10 HZ, and will flag itself as the Leader if the following conditions happen:
-The cardinality of a sentinel is obtained by the sentinel ordering all the
-known sentinels, including itself, lexicographically by ID. The first sentinel
-has cardinality 0, the second 1, and so forth.
+* It is the Subjective Leader of itself.
+* At least N-1 other Sentinels that see the master as down, and are reachable, also thing that it is the Leader. With N being the quorum configured for this master.
+* At least 50% + 1 of all the Sentinels involved in the voting process (that are reachable and that also see the master as failing) should agree on the Leader.
-This is useful in order to avoid that multiple sentinels will try to perform
-the fail over at the same time.
+So for instance if there are a total of three sentinels, the master is failing,
+and all the three sentinels are able to communicate (no Sentinel is failing)
+and the configured quorum for this master is 2, a Sentinel will feel itself
+an Objective Leader if it and at least one other Sentinel agree that
+it is the subjective leader.
-However if a sentinel will fail for some reason, within 60 seconds the next
-one will try to perform the fail over.
+Once a Sentinel detects that it is the objective leader, it flags the master
+with `FAILOVER_IN_PROGRESS` and `IM_THE_LEADER` flags, and starts the failover
+process in `SENTINEL_FAILOVER_DELAY` (5 seconds currently) plus a random
+additional time between 0 milliseconds and 10000 milliseconds.
-Anyway once the delay has elapsed, before performing the fail over, sentinels
-make sure using the INFO command that none of the slaves was already switched
-into a master by some other sentinel or any other external software
-component (or the system administrator itself).
+During that time we ask INFO to all the slaves with an increased frequency
+of once per second (usually the period is 10 seconds). If a slave is
+turned into a master in the meantime the failover is suspended and the
+Leader clears the `IM_THE_LEADER` flag to turn itself into an observer.
-Also the "SENTINEL NEWMASTER" command is send to all the other sentinels
-by the sentinel that performed the failover (see later for details).
+Guarantees of the Leader election process
+===
-Slave sanity checks before election
+As you can see for a Sentinel to become a leader the majority is not strictly
+required.
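The leadership conditions just listed (being one's own Subjective Leader, reaching the quorum, and reaching 50% + 1 of the voters) can be checked mechanically. A toy sketch, with `subjective_votes` mapping each participating Sentinel's Run ID to the Run ID it currently names as its Subjective Leader:

```python
def is_leader(my_runid, subjective_votes, quorum):
    """Check the three leadership conditions described in this spec.

    subjective_votes should include only Sentinels that are reachable
    and see the master as down, including ourselves.
    """
    if subjective_votes.get(my_runid) != my_runid:
        return False  # must be the Subjective Leader of itself
    votes = sum(1 for v in subjective_votes.values() if v == my_runid)
    voters = len(subjective_votes)
    # quorum votes in total (self plus N-1 others), and a strict majority
    return votes >= quorum and votes >= voters // 2 + 1
```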
An user can force the majority to be needed just setting the master +quorum to, for instance, the value of 5 if there are a total of 9 sentinels. + +However it is also possible to set the quorum to the value of 2 with 9 +sentinels in order to improve the resistance to netsplits or failing Sentinels +or other error conditions. In such a case the protection against race +conditions (multiple Sentinels starting to perform the fail over at the same +time) is given by the random delay used to start the fail over, and the +continuous monitor of the slave instances to detect if another Sentinel +(or an human) started the failover process. + +Morehover the slave to promote is selected using a deterministic process to +minimize the chance that two different Sentinels with full vision of the +working slaves may pick two different slaves to promote. + +However it is possible to easily imagine netsplits and specific configurations +where two Sentinels may start to act as a leader at the same time, electing two +different slaves as masters, in two different parts of the net that can't +communicate. The Redis Sentinel user should evaluate the network topology and +select an appropriate quorum considering his or her goals and the different +trade offs. + +How observers understand that the failover started === -Once the fail over process starts, the sentinel performing the slave election -must be sure that the slave is functioning correctly. +An observer is just a Sentinel that does not believe to be the Leader, but +still sees a master in `O_DOWN` condition. + +The observer is still able to follow and update the internal state based on +what is happening with the failover, but does not directly rely on the +Leader to communicate with it to be informed by progresses. It simply observes +the state of the slaves to understand what is happening. -A master may have multiple slaves. A suitable candidate must be found. 
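The random start delay mentioned above is the protection against two Sentinels beginning the failover simultaneously. As a tiny sketch, using the 5-second base and 0 to 10000 milliseconds of jitter quoted in this spec:

```python
import random

SENTINEL_FAILOVER_DELAY = 5.0  # seconds, the value quoted in this spec

def failover_start_delay(rng=random.random):
    """Fixed delay plus up to 10 seconds of random jitter."""
    return SENTINEL_FAILOVER_DELAY + rng() * 10.0
```

Because each Sentinel draws its own jitter, ties between would-be leaders become unlikely, and the leader that starts first is visible to the others through the state of the slaves.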
+Specifically the observers flag the master as `FAILOVER_IN_PROGRESS` if a slave
+attached to a master turns into a master (observers can see it in the INFO output). An observer will also consider the failover complete once all the other
+reachable slaves appear to be slaves of this slave that was turned into a
+master.
-To do this, a sentinel will check all the salves in the order listed by
-Redis in the INFO output (however it is likely that we'll introduce some way
-to indicate that a slave is to be preferred to another).
+If a master is in `FAILOVER_IN_PROGRESS` and the failover is not progressing for
+too much time, and at the same time the other Sentinels start claiming that
+this Sentinel is the objective leader (because for example the old leader
+is no longer reachable), the Sentinel will flag itself as `IM_THE_LEADER` and
+will proceed with the failover.
-The slave must be functioning correctly (able to reply to PING with one of
-the accepted replies), and the INFO command should show that it has been
-disconnected by the master for no more than the specified number of seconds
-in the Sentinel configuration.
+Note: all the Sentinel state, including the subjective and objective leadership,
+is a dynamic process that is continuously refreshed with a frequency of 10 HZ.
+There is no "one time decision" step in Sentinel.
-The first slave found to meet this conditions is selected as the candidate
-to be elected to master. However to really be selected as a candidate the
-configured number of sentinels must also agree on the reachability of the
-slave (the sentinel will check this sending SENTINEL STATUS commands).
+Selection of the Slave to promote
+===
+
+If a master has multiple slaves, the slave to promote to master is selected
+checking the slave priority (a new configuration option of Redis instances
+that is propagated via INFO output), and picking the one with the lower priority
+value (it is an integer similar to the one of the MX field of the DNS system).
+All the slaves that appears to be disconnected from the master for a long +time are discarded (stale data). + +If slaves with the same priority exist, the one with the lexicographically +smaller Run ID is selected. + +If there is no Slave to select because all the salves are failing the failover +is not started at all. Instead if there is no Slave to select because the +master *never* used to have slaves in the monitoring session, then the +failover is performed nonetheless just calling the user scripts. + +This is useful because there are configurations where a new Instance can be +provisioned at IP protocol level by the script, but there are no attached +slaves. Fail over process === The fail over process consists of the following steps: -* 1) Check that no slave was already elected. -* 2) Find suitable slave. -* 3) Turn the slave into a master using the SLAVEOF NO ONE command. -* 4) Verify the state of the new master again using INFO. -* 5) Call an user script to inform the clients that the configuration changed. -* 6) Call an user script to notify the system administrator. -* 7) Send a SENTINEL NEWMASTER command to all the reachable sentinels. -* 8) Turn all the remaining slaves, if any, to slaves of the new master. This is done incrementally, one slave after the other, waiting for the previous slave to complete the synchronization process before starting with the next one. -* 9) Start monitoring the new master. +* 1) Turn the selected slave into a master using the SLAVEOF NO ONE command. +* 2) Turn all the remaining slaves, if any, to slaves of the new master. This is done incrementally, one slave after the other, waiting for the previous slave to complete the synchronization process before starting with the next one. +* 3) Call an user script to inform the clients that the configuration changed. +* 4) Completely remove the old failing master from the table, and add the new master with the same name. -If Steps "1","2" or "3" fail, the fail over is aborted. 
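The slave-selection rules above (discard slaves disconnected for too long, then order by priority with the Run ID as tie-breaker) reduce to one deterministic sort. A sketch with made-up field names, not Redis's actual INFO keys:

```python
def select_slave(slaves, max_lag):
    """Pick the slave to promote, per the deterministic rules above.

    Each entry is a dict with illustrative keys: priority, run_id and
    master_link_down_time (seconds since the master link went down).
    """
    # Discard stale slaves: their data is too far behind the master.
    candidates = [s for s in slaves if s["master_link_down_time"] <= max_lag]
    if not candidates:
        return None  # nothing suitable to promote
    # Lowest priority wins; ties broken by lexicographically smaller Run ID.
    return min(candidates, key=lambda s: (s["priority"], s["run_id"]))
```

Because every Sentinel applies the same ordering, two Sentinels with the same view of the slaves will pick the same one, which is exactly the property the spec relies on.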
-If Step "5" fails (the script returns non zero) the new master is contacted again and turned back into a slave of the previous master, and the fail over aborted. +If Steps "1" fails, the fail over is aborted. All the other errors are considered to be non-fatal. -SENTINEL NEWMASTER command -== +TILT mode +=== -The SENTINEL NEWMASTER command reconfigures a sentinel to monitor a new master. -The effect is similar of completely restarting a sentinel against a new master. -If a fail over was scheduled by the sentinel it is cancelled as well. +Redis Sentinel is heavily dependent on the computer time: for instance in +order to understand if an instance is available it remembers the time of the +latest successful reply to the PING command, and compares it with the current +time to understand how old it is. -Slaves monitoring -=== +However if the computer time changes in an unexpected way, or if the computer +is very busy, or the process blocked for some reason, Sentinel may start to +behave in an unexpected way. -A successful fail over can be only performed if there is at least one slave -that contains a reasonably update version of the master dataset. -We perform this check before electing the slave using the INFO command -to check how many seconds elapsed since master and slave disconnected. +The TILT mode is a special "protection" mode that a Sentinel can enter when +something odd is detected that can lower the reliability of the system. +The Sentinel timer interrupt is normally called 10 times per second, so we +expect that more or less 100 milliseconds will elapse between two calls +to the timer interrupt. -However if there is a problem in the replication process (networking problem, -redis bug, a problem with the slave operating system, ...), when the master -fail we can be in the unhappy condition of not having a slave that's good -enough for the fail over. 
+What a Sentinel does is to register the previous time the timer interrupt
+was called, and compare it with the current call: if the time difference
+is negative or unexpectedly big (2 seconds or more) the TILT mode is entered
+(or if it was already entered, the exit from the TILT mode is postponed).
-For this reason every sentinel also continuously monitors slaves as well,
-checking if the replication is up. If the replication appears to be failing
-for too long time (configurable), a notification is sent to the system
-administrator that should make sure that slaves are correctly configured
-and operational.
+When in TILT mode the Sentinel will continue to monitor everything, but:
+
+* It stops acting at all.
+* It starts to reply negatively to `SENTINEL is-master-down-by-addr` requests as the ability to detect a failure is no longer trusted.
+
+If everything appears to be normal for 30 seconds, the TILT mode is exited.
Sentinels monitoring other sentinels
===
When a sentinel no longer advertises itself using the Pub/Sub channel for too
-much time (configurable), the other sentinels can send (if configured) a
-notification to the system administrator to notify that a sentinel may be down.
-At the same time the sentinel is removed from the list of sentinels (but it
-will be automatically re-added to this list once it starts advertising itself
-again using Pub/Sub).
+much time (30 minutes more than the configured timeout for the master), but at the
+same time the master appears to work correctly, the Sentinel is removed from
+the table of Sentinels for this master, and a notification is sent to the
+system administrator.
User provided scripts
===
-Sentinels call user-provided scripts to perform two tasks:
+Sentinels can optionally call user-provided scripts to perform two tasks:
* Inform clients that the configuration changed.
* Notify the system administrator of problems.
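The TILT bookkeeping described above is easy to model: remember the previous timer tick, compare the delta, and hold the mode for 30 seconds of normality before leaving it. An illustrative sketch, not the actual implementation:

```python
class TiltDetector:
    """Toy model of the Sentinel timer-interrupt check described above."""

    MAX_DELTA = 2.0     # seconds; a bigger (or negative) gap enters TILT
    TILT_PERIOD = 30.0  # seconds of normality required before exiting

    def __init__(self):
        self.prev = None
        self.tilt_until = 0.0

    def tick(self, now):
        """Call once per timer interrupt; returns True while in TILT mode."""
        if self.prev is not None:
            delta = now - self.prev
            if delta < 0 or delta > self.MAX_DELTA:
                # Enter TILT, or postpone the exit if already in it.
                self.tilt_until = now + self.TILT_PERIOD
        self.prev = now
        return now < self.tilt_until
```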
@@ -275,12 +414,11 @@ Using the ip:port of the calling sentinel, scripts may call SENTINEL subcommands to get more info if needed. Concrete implementations of notification scripts will likely use the "mail" -command or some other command to deliver SMS messages, emails, twitter direct -messages. +command or some other command to deliver SMS messages, emails, tweets. Implementations of the script to modify the configuration in web applications are likely to use HTTP GET requests to force clients to update the -configuration. +configuration, or any other sensible mechanism for the specific setup in use. Setup examples === @@ -316,11 +454,19 @@ In general if a complex network topology is present, the minimal agreement should be set to the max number of sentinels existing at the same time in the same network arm, plus one. +SENTINEL SUBCOMMANDS +=== + +* `SENTINEL masters`, provides a list of configured masters. +* `SENTINEL slaves `, provides a list of slaves for the master with the specified name. +* `SENTINEL sentinels `, provides a list of sentinels for the master with the specified name. +* `SENTINEL is-master-down-by-addr `, returns a two elements multi bulk reply where the first element is :0 or :1, and the second is the Subjective Leader for the failover. + TODO === * More detailed specification of user script error handling, including what return codes may mean, like 0: try again. 1: fatal error. 2: try again, and so forth. * More detailed specification of what happens when an user script does not return in a given amount of time. * Add a "push" notification system for configuration changes. -* Consider adding a "name" to every set of slaves / masters, so that clients can identify services by name. +* Document that for every master monitored the configuration specifies a name for the master that is reported by all the SENTINEL commands. * Make clear that we handle a single Sentinel monitoring multiple masters. 
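The sizing advice above, setting the minimal agreement to the maximum number of Sentinels that can end up in the same network arm plus one, is simple arithmetic; a sketch:

```python
def minimal_quorum(arm_sizes):
    """Quorum per the rule above: the largest group of Sentinels that a
    netsplit can isolate together, plus one, so no single arm can reach
    agreement on its own."""
    return max(arm_sizes) + 1
```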
From 06cde77a7ac6188b8ad509988853ac2ad0c17a7c Mon Sep 17 00:00:00 2001 From: antirez Date: Sun, 8 Jul 2012 21:37:48 +0200 Subject: [PATCH 0184/2880] Actually bump the version in the Sentinel spec title. --- topics/sentinel-spec.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/sentinel-spec.md b/topics/sentinel-spec.md index 7aa169666c..ae64ab5d75 100644 --- a/topics/sentinel-spec.md +++ b/topics/sentinel-spec.md @@ -1,4 +1,4 @@ -Redis Sentinel design draft 1.1 +Redis Sentinel design draft 1.3 === Changelog: From 550021e7cf6bded29388fe0e0471d047de82d96e Mon Sep 17 00:00:00 2001 From: Jan-Erik Rediger Date: Mon, 9 Jul 2012 02:04:40 +0300 Subject: [PATCH 0185/2880] Fixed a few typos. --- topics/sentinel-spec.md | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/topics/sentinel-spec.md b/topics/sentinel-spec.md index ae64ab5d75..f9c5d2b220 100644 --- a/topics/sentinel-spec.md +++ b/topics/sentinel-spec.md @@ -58,7 +58,7 @@ Python program) in an user setup specific way. In what form it will be shipped === -Redis Sentinel is just be a special mode of the redis-server executable. +Redis Sentinel is just a special mode of the redis-server executable. If the redis-server is called with "redis-sentinel" as `argv[0]` (for instance using a symbolic link or copying the file), or if --sentinel option is passed, @@ -74,6 +74,7 @@ Sentinels networking === All the sentinels take persistent connections with: + * The monitored masters. * All its slaves, that are discovered using the master's INFO output. * All the other Sentinels connected to this master, discovered via Pub/Sub. @@ -84,7 +85,7 @@ external clients. Redis Sentinels export a SENTINEL command. Subcommands of the SENTINEL command are used in order to perform different actions. -For instnace the `SENTINEL masters` command enumerates all the monitored +For instance the `SENTINEL masters` command enumerates all the monitored masters and their states. 
However Sentinels can also reply to the PING command as a normal Redis instance, so that it is possible to monitor a Sentinel considering it a normal Redis instance. @@ -172,7 +173,7 @@ Sentinel can reach independently this state. The SENTINEL is-master-down-by-addr command === -Sentinels ask other Sentinels the state of a master from their local point +Sentinels ask other Sentinels for the state of a master from their local point of view using the `SENTINEL is-master-down-by-addr` command. This command replies with a boolean value (in the form of a 0 or 1 integer reply, as a first element of a multi bulk reply). @@ -199,7 +200,7 @@ and runid, if we can't find a perfect match (same runid and address) inside the Sentinels table for that master, we remove any other Sentinel with the same runid OR the same address. And later add the new Sentinel. -For instance if a Sentienl instance is restarted, the Run ID will be different, +For instance if a Sentinel instance is restarted, the Run ID will be different, and the old Sentinel with the same IP address and port pair will be removed. Starting the failover: Leaders and Observers @@ -245,7 +246,7 @@ and the subjective leaders of all the other Sentinels with a frequency of 10 HZ, and will flag itself as the Leader if the following conditions happen: * It is the Subjective Leader of itself. -* At least N-1 other Sentinels that see the master as down, and are reachable, also thing that it is the Leader. With N being the quorum configured for this master. +* At least N-1 other Sentinels that see the master as down, and are reachable, also think that it is the Leader. With N being the quorum configured for this master. * At least 50% + 1 of all the Sentinels involved in the voting process (that are reachable and that also see the master as failing) should agree on the Leader. 
So for instance if there are a total of three sentinels, the master is failing, @@ -279,7 +280,7 @@ time) is given by the random delay used to start the fail over, and the continuous monitor of the slave instances to detect if another Sentinel (or an human) started the failover process. -Morehover the slave to promote is selected using a deterministic process to +Moreover the slave to promote is selected using a deterministic process to minimize the chance that two different Sentinels with full vision of the working slaves may pick two different slaves to promote. From 34ff4c11dbe6fce10860d44d1fe7ef085e1810d9 Mon Sep 17 00:00:00 2001 From: Shawn Milochik Date: Sun, 8 Jul 2012 19:56:19 -0400 Subject: [PATCH 0186/2880] Minor editing updates. --- commands/append.md | 8 ++++---- commands/auth.md | 2 +- commands/bgrewriteaof.md | 2 +- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/commands/append.md b/commands/append.md index a73fc9b8ca..2c8bd7432f 100644 --- a/commands/append.md +++ b/commands/append.md @@ -18,7 +18,7 @@ GET mykey ## Pattern: Time series -the `APPEND` command can be used to create a very compact representation of a +The `APPEND` command can be used to create a very compact representation of a list of fixed-size samples, usually referred as _time series_. Every time a new sample arrives we can store it using the command @@ -30,12 +30,12 @@ Accessing individual elements in the time series is not hard: * `STRLEN` can be used in order to obtain the number of samples. * `GETRANGE` allows for random access of elements. - If our time series have an associated time information we can easily implement + If our time series have associated time information we can easily implement a binary search to get range combining `GETRANGE` with the Lua scripting engine available in Redis 2.6. -* `SETRANGE` can be used to overwrite an existing time serie. +* `SETRANGE` can be used to overwrite an existing time series. 
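The fixed-size-sample pattern above can be modeled locally to see how the offsets work: `APPEND` concatenates records, `STRLEN` divided by the record size counts them, and `GETRANGE` is a byte-offset slice. A sketch with an assumed 12-byte record layout (timestamp plus a double); the real layout is up to the application:

```python
import struct

SAMPLE = struct.Struct(">Id")  # (unix time, value): an assumed fixed-size record

def append_sample(buf, ts, value):
    """What APPEND does on the string key: concatenate one more record."""
    return buf + SAMPLE.pack(ts, value)

def get_sample(buf, i):
    """Random access as GETRANGE would do it: slice by byte offset."""
    start = i * SAMPLE.size
    return SAMPLE.unpack(buf[start:start + SAMPLE.size])

def sample_count(buf):
    """What STRLEN gives you, divided by the record size."""
    return len(buf) // SAMPLE.size
```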
-The limitations of this pattern is that we are forced into an append-only mode +The limitation of this pattern is that we are forced into an append-only mode of operation, there is no way to cut the time series to a given size easily because Redis currently lacks a command able to trim string objects. However the space efficiency of time series stored in this way is remarkable. diff --git a/commands/auth.md b/commands/auth.md index 63d185dbf6..a25a9f49be 100644 --- a/commands/auth.md +++ b/commands/auth.md @@ -1,4 +1,4 @@ -Request for authentication in a password protected Redis server. +Request for authentication in a password-protected Redis server. Redis can be instructed to require a password before allowing clients to execute commands. This is done using the `requirepass` directive in the configuration file. diff --git a/commands/bgrewriteaof.md b/commands/bgrewriteaof.md index c9b73882b1..e8a071ab40 100644 --- a/commands/bgrewriteaof.md +++ b/commands/bgrewriteaof.md @@ -16,7 +16,7 @@ Specifically: In this case the `BGREWRITEAOF` will still return an OK code, but with an appropriate message. You can check if an AOF rewrite is scheduled looking at the `INFO` command - starting from Redis 2.6. + as of Redis 2.6. * If an AOF rewrite is already in progress the command returns an error and no AOF rewrite will be scheduled for a later time. From 6e49a938ee98540fe6a09bf00a7acd36a5166d05 Mon Sep 17 00:00:00 2001 From: Shawn Milochik Date: Sun, 8 Jul 2012 20:48:37 -0400 Subject: [PATCH 0187/2880] More grammar editing. 
--- commands/bitcount.md | 10 +++++----- commands/bitop.md | 16 ++++++++-------- 2 files changed, 13 insertions(+), 13 deletions(-) diff --git a/commands/bitcount.md b/commands/bitcount.md index d4ebb452b5..ad0ff50560 100644 --- a/commands/bitcount.md +++ b/commands/bitcount.md @@ -8,7 +8,7 @@ Like for the `GETRANGE` command start and end can contain negative values in order to index bytes starting from the end of the string, where -1 is the last byte, -2 is the penultimate, and so forth. -Non existing keys are treated as empty strings, so the command will return zero. +Non-existent keys are treated as empty strings, so the command will return zero. @return @@ -25,13 +25,13 @@ BITCOUNT mykey 0 0 BITCOUNT mykey 1 1 ``` -## Pattern: real time metrics using bitmaps +## Pattern: real-time metrics using bitmaps -Bitmaps are a very space efficient representation of certain kinds of +Bitmaps are a very space-efficient representation of certain kinds of information. -One example is a web application that needs the history of user visits, so that +One example is a Web application that needs the history of user visits, so that for instance it is possible to determine what users are good targets of beta -features, or for any other purpose. +features. Using the `SETBIT` command this is trivial to accomplish, identifying every day with a small progressive integer. diff --git a/commands/bitop.md b/commands/bitop.md index a3db670607..53b2638cf0 100644 --- a/commands/bitop.md +++ b/commands/bitop.md @@ -20,14 +20,14 @@ When an operation is performed between strings having different lengths, all the strings shorter than the longest string in the set are treated as if they were zero-padded up to the length of the longest string. -The same holds true for non-existing keys, that are considered as a stream of +The same holds true for non-existent keys, that are considered as a stream of zero bytes up to the length of the longest string. 
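The SETBIT/BITCOUNT visit-tracking pattern above can be modeled on a local bytearray to see how offsets and counts work out. A sketch only; it mirrors Redis's convention of addressing bit 0 as the most significant bit of the first byte:

```python
def setbit(bitmap, offset):
    """Local model of SETBIT: grow the buffer as Redis grows the string."""
    byte, bit = divmod(offset, 8)
    if byte >= len(bitmap):
        bitmap.extend(b"\x00" * (byte - len(bitmap) + 1))
    bitmap[byte] |= 0x80 >> bit  # bit 0 is the most significant bit
    return bitmap

def bitcount(bitmap):
    """Local model of BITCOUNT over the whole string: population count."""
    return sum(bin(b).count("1") for b in bitmap)
```

Identifying every day with a small progressive integer, as the text suggests, each user's whole visit history fits in a string a few bytes long.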
@return
@integer-reply
-The size of the string stored into the destination key, that is equal to the
+The size of the string stored in the destination key, that is equal to the
size of the longest input string.
@examples
@@ -43,11 +43,11 @@ GET dest
`BITOP` is a good complement to the pattern documented in the `BITCOUNT` command
documentation.
-Different bitmaps can be combined in order to obtain a target bitmap where to
-perform the population counting operation.
+Different bitmaps can be combined in order to obtain a target bitmap where
+the population counting operation is performed.
See the article called "[Fast easy realtime metrics using Redis
-bitmaps][hbgc212fermurb]" for an interesting use cases.
+bitmaps][hbgc212fermurb]" for an interesting use case.
[hbgc212fermurb]: http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps
## Performance considerations
`BITOP` is a potentially slow command as it runs in O(N) time.
Care should be taken when running it against long input strings.
-For real time metrics and statistics involving large inputs a good approach is
-to use a slave (with read-only option disabled) where to perform the bit-wise
-operations without blocking the master instance.
+For real-time metrics and statistics involving large inputs a good approach is
+to use a slave (with read-only option disabled) where the bit-wise
+operations are performed to avoid blocking the master instance.
From 4b82ee5326d9a35b91a1c24c019d20316fc5c0cd Mon Sep 17 00:00:00 2001
From: Pieter Noordhuis
Date: Sun, 8 Jul 2012 21:25:34 -0700
Subject: [PATCH 0188/2880] Use HTML parser to correctly parse unclosed tags
For example `` can cause problems.
--- remarkdown.rb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/remarkdown.rb b/remarkdown.rb index 373e88ddb2..01195b1cf9 100644 --- a/remarkdown.rb +++ b/remarkdown.rb @@ -14,7 +14,7 @@ def initialize(input) :fenced_code_blocks => true, :superscript => true - @xml = Nokogiri::XML::Document.parse("#{markdown.render(input)}") + @xml = Nokogiri::HTML::Document.parse("#{markdown.render(input)}") @links = [] @indent = 0 @@ -26,7 +26,7 @@ def initialize(input) def to_s parts = [] - @xml.at("/doc").children.each do |node| + @xml.at("//doc").children.each do |node| parts << format_block_node(node) parts << flush_links end From 978f8489b3fa7133dcd68383a7389702a2684a40 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Sun, 8 Jul 2012 21:26:28 -0700 Subject: [PATCH 0189/2880] Support inline images --- remarkdown.rb | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/remarkdown.rb b/remarkdown.rb index 01195b1cf9..75d0b9ec0a 100644 --- a/remarkdown.rb +++ b/remarkdown.rb @@ -131,6 +131,19 @@ def format_inline_node(node) @links << [id, href] "[%s][%s]" % [format_inline_nodes(node.children).chomp, id] + when "img" + src = node["src"] + + id = src. + gsub(/[^\w]/, " "). + split(/\s+/). + map { |e| e.to_s[0] }. + join. + downcase + + @links << [id, src] + + "![%s][%s]" % [node["alt"].chomp, id] else raise "don't know what to do for inline node #{node.name}" end From 99a5d237d62e0999904bb68ab400adb18aa3d504 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Sun, 8 Jul 2012 21:52:36 -0700 Subject: [PATCH 0190/2880] Split on question mark and exclamation mark --- commands/setnx.md | 5 +++-- remarkdown.rb | 2 +- 2 files changed, 4 insertions(+), 3 deletions(-) diff --git a/commands/setnx.md b/commands/setnx.md index 4ee0e1bb80..72c33798d0 100644 --- a/commands/setnx.md +++ b/commands/setnx.md @@ -39,8 +39,9 @@ retrying to hold the lock until we succeed or some kind of timeout expires. 
### Handling deadlocks In the above locking algorithm there is a problem: what happens if a client -fails, crashes, or is otherwise not able to release the lock? It's possible to -detect this condition because the lock key contains a UNIX timestamp. +fails, crashes, or is otherwise not able to release the lock? +It's possible to detect this condition because the lock key contains a UNIX +timestamp. If such a timestamp is equal to the current Unix time the lock is no longer valid. diff --git a/remarkdown.rb b/remarkdown.rb index 75d0b9ec0a..fefa1e2838 100644 --- a/remarkdown.rb +++ b/remarkdown.rb @@ -95,7 +95,7 @@ def format_inline_nodes(nodes) end end - sentences = result.gsub(/\s*\r?\n\s*/, " ").split(/(?<=[^.]\.)\s+/) + sentences = result.gsub(/\s*\r?\n\s*/, " ").split(/(?<=(?:[^.]\.)|[?!])\s+/) sentences = sentences.map do |e| par(e).chomp end From 55faf47f2c77716fc4fbcc0197fb383845292d06 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 10 Jul 2012 10:25:40 +0200 Subject: [PATCH 0191/2880] Minor change to Sentinel spec: force-failover-without-slaves --- topics/sentinel-spec.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/topics/sentinel-spec.md b/topics/sentinel-spec.md index ae64ab5d75..157c20ed85 100644 --- a/topics/sentinel-spec.md +++ b/topics/sentinel-spec.md @@ -333,6 +333,8 @@ If there is no Slave to select because all the salves are failing the failover is not started at all. Instead if there is no Slave to select because the master *never* used to have slaves in the monitoring session, then the failover is performed nonetheless just calling the user scripts. +However for this to happen a special configuration option must be set for +that master (force-failover-without-slaves). 
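The timestamp-based lock above can be modeled with an in-memory store to illustrate the expiry check. A deliberately simplified sketch: it omits the GETSET step the real pattern needs to close the race where two clients steal the same stale lock at once.

```python
def acquire_lock(store, key, now, timeout=10):
    """In-memory model of the SETNX locking pattern described above.

    The value stored under the lock key is its expiry timestamp, so a
    crashed holder leaves behind a lock that becomes visibly stale.
    """
    expires = now + timeout + 1
    if key not in store:      # the SETNX case: key absent, we get the lock
        store[key] = expires
        return True
    if store[key] < now:      # holder crashed: the timestamp has expired
        store[key] = expires  # real code must use GETSET here, not SET
        return True
    return False              # lock held and still valid
```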
This is useful because there are configurations where a new Instance can be provisioned at IP protocol level by the script, but there are no attached From 81f59d58a7a003925cef725a8dcc904932c8e4d5 Mon Sep 17 00:00:00 2001 From: Mark Sonnabaum Date: Tue, 17 Jul 2012 14:14:44 -0500 Subject: [PATCH 0192/2880] Fixed a bug in the Ruby EVAL example. --- commands/eval.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/eval.md b/commands/eval.md index 31522f78b9..c81cce43f6 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -361,7 +361,7 @@ RandomPushScript = < Date: Thu, 19 Jul 2012 12:17:23 +0200 Subject: [PATCH 0193/2880] Sentinel user documentation draft. --- topics/sentinel.md | 294 +++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 294 insertions(+) create mode 100644 topics/sentinel.md diff --git a/topics/sentinel.md b/topics/sentinel.md new file mode 100644 index 0000000000..5c4a9045fc --- /dev/null +++ b/topics/sentinel.md @@ -0,0 +1,294 @@ +Redis Sentinel Documentation +=== + +Redis Sentinel is a system designed to help manage Redis instances. +It performs the following three tasks: + +* **Monitoring**. Sentinel constantly checks if your master and slave instances are working as expected. +* **Notification**. Sentinel can notify the system administrator, or another computer program, via an API, that something is wrong with one of the monitored Redis instances. +* **Automatic failover**. If a master is not working as expected, Sentinel can start a failover process where a slave is promoted to master, the other additional slaves are reconfigured to use the new master, and the applications using the Redis server are informed about the new address to use when connecting. + +Redis Sentinel is a distributed system: this means that usually you want to run multiple Sentinel processes across your infrastructure, and these processes will use agreement protocols in order to understand if a master is down and to perform the failover.
+ +Redis Sentinel is shipped as a stand-alone executable called `redis-sentinel` +but actually it is a special execution mode of the Redis server itself, and +can also be invoked using the `--sentinel` option of the normal `redis-server` +executable. + +**WARNING:** Redis Sentinel is currently a work in progress. This document +describes how to use what we already have and may change as the Sentinel +implementation changes. + +Redis Sentinel is compatible with Redis 2.4.16 or greater, and redis 2.6.0-rc6 or greater. + +Obtaining Sentinel +=== + +Currently Sentinel is part of the Redis *unstable* branch at github. +To compile it you need to clone the *unstable* branch and compile Redis. +You'll see a `redis-sentinel` executable in your `src` directory. + +Alternatively you can use the `redis-server` executable itself directly, +starting it in Sentinel mode as specified in the next paragraph. + +Running Sentinel +=== + +If you are using the `redis-sentinel` executable (or if you have a symbolic +link with that name to the `redis-server` executable) you can run Sentinel +with the following command line: + + redis-sentinel /path/to/sentinel.conf + +Otherwise you can use the `redis-server` executable directly, starting it in +Sentinel mode: + + redis-server /path/to/sentinel.conf --sentinel + +Both ways work the same.
+ +Configuring Sentinel +=== + +In the root of the Redis source distribution you will find a `sentinel.conf` +file that is a self-documented example configuration file you can use to +configure Sentinel, however a typical minimal configuration file looks like the +following: + + sentinel monitor mymaster 127.0.0.1 6379 2 + sentinel down-after-milliseconds mymaster 60000 + sentinel can-failover mymaster yes + sentinel parallel-syncs mymaster 1 + + sentinel monitor mymaster 192.168.1.3 6380 4 + sentinel down-after-milliseconds mymaster 30000 + sentinel can-failover mymaster yes + sentinel parallel-syncs mymaster 5 + +The first line is used to tell Redis to monitor a master called *mymaster*, +that is at address 127.0.0.1 and port 6379, with a level of agreement needed +to detect this master as failing of 2 sentinels (if the agreement is not reached +the automatic failover does not start). + +The other options are almost always in the form: + + sentinel <option_name> <master_name> <option_value> + +And are used for the following purposes: + +* `down-after-milliseconds` is the time in milliseconds an instance should not be reachable (either does not reply to our PINGs or it is replying with an error) in order for a Sentinel to start thinking it is down. After this time has elapsed the Sentinel will mark an instance as **subjectively down** (also known as +`SDOWN`), which is not enough to +start the automatic failover. However if enough Sentinels think that there +is a subjectively down condition, then the instance is marked as +**objectively down**. The number of sentinels that need to agree depends on +the configured agreement for this master. +* `can-failover` tells this Sentinel if it should start a failover when an +instance is detected as objectively down (also called `ODOWN` for simplicity). +You may configure all the Sentinels to perform the failover if needed, or you +may have a few Sentinels used only to reach the agreement, and a few more +that are actually in charge of performing the failover.
+* `parallel-syncs` sets the number of slaves that can be reconfigured to use +the new master after a failover at the same time. The lower the number, the +more time it will take for the failover process to complete, however if the +slaves are configured to serve old data, you may not want all the slaves to +resync at the same time with the new master, as while the replication process +is mostly non blocking for a slave, there is a moment when it stops to load +the bulk data from the master during a resync. You may make sure only one +slave at a time is not reachable by setting this option to the value of 1. + +There are more options that are described in the rest of this document and +documented in the example sentinel.conf file. + +SDOWN and ODOWN +=== + +As already briefly mentioned in this document Redis Sentinel has two different +concepts of *being down*, one is called a *Subjectively Down* condition +(SDOWN) and is a down condition that is local to a given Sentinel instance. +Another is called *Objectively Down* condition (ODOWN) and is reached when +enough Sentinels (at least the number configured as the `quorum` parameter +of the monitored master) have an SDOWN condition, and get feedback from +other Sentinels using the `SENTINEL is-master-down-by-addr` command. + +From the point of view of a Sentinel an SDOWN condition is reached if we +don't receive a valid reply to PING requests for the amount of time +specified in the configuration by the `down-after-milliseconds` +parameter. + +An acceptable reply to PING is one of the following: + +* PING replied with +PONG. +* PING replied with -LOADING error. +* PING replied with -MASTERDOWN error. + +Any other reply (or no reply) is considered not valid.
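The SDOWN rule above can be expressed as a couple of pure functions. The following is an illustrative sketch only; the function names and arguments are invented for this example and are not Sentinel's actual internals:

```python
# Illustrative sketch of the SDOWN rule described above; names are
# invented for the example, not taken from Sentinel's implementation.

# The only PING replies Sentinel accepts as proof the instance is alive.
ACCEPTABLE_REPLIES = ("+PONG", "-LOADING", "-MASTERDOWN")

def is_acceptable_reply(reply):
    # Any other reply (or no reply at all) is considered not valid.
    return bool(reply) and reply.split()[0] in ACCEPTABLE_REPLIES

def is_sdown(last_valid_reply_at_ms, now_ms, down_after_milliseconds):
    # SDOWN requires that no acceptable reply was received for the
    # whole configured interval.
    return now_ms - last_valid_reply_at_ms > down_after_milliseconds
```

With `down-after-milliseconds` set to 30000, an acceptable reply arriving every 29 seconds keeps the instance considered up, matching the 29-second example in the text.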
+ +Note that SDOWN requires that no acceptable reply is received for the whole +interval configured, so for instance if the interval is 30000 milliseconds +(30 seconds) and we receive an acceptable ping reply every 29 seconds, the +instance is considered to be working. + +Sentinels and Slaves auto discovery +=== + +While Sentinels stay connected with other Sentinels in order to reciprocally +check the availability of each other, and to exchange messages, you don't +need to configure the other Sentinel addresses in every Sentinel instance you +run, as Sentinel uses the Redis master Pub/Sub capabilities in order to +discover the other Sentinels that are monitoring the same master. + +This is obtained by sending *Hello Messages* into the channel named +`__sentinel__:hello`. + +Similarly you don't need to configure what is the list of the slaves attached +to a master, as Sentinel will auto discover this list querying Redis. + +Sentinel commands +=== + +By default Sentinel runs using TCP port 26379 (note that 6379 is the normal +Redis port). Sentinels accept commands using the Redis protocol, so you can +use `redis-cli` or any other unmodified Redis client in order to talk with +Sentinel. + +The following is a list of accepted commands: +* **PING** this command simply returns PONG. +* **SENTINEL masters** show a list of monitored masters and their state. +* **SENTINEL slaves `<master-name>`** show a list of slaves for this master, and their state. +* **SENTINEL is-master-down-by-addr `<ip> <port>`** return a two-element multi bulk reply where the first element is 0 or 1 (1 if the master with that address is known and is in `SDOWN` state, 0 otherwise). The second element of the reply is the +*subjective leader* for this master, that is, the `runid` of the Redis +Sentinel instance that should perform the failover according to the queried +instance. +* **SENTINEL get-master-addr-by-name `<master-name>`** return the ip and port number of the master with that name.
If a failover is in progress or terminated successfully for this master it returns the address and port of the promoted slave. +* **SENTINEL reset `<pattern>`** this command will reset all the masters with matching name. The pattern argument is a glob-style pattern. The reset process clears any previous state in a master (including a failover in progress), and removes every slave and sentinel already discovered and associated with the master. + +The failover process +=== + +The failover process consists of the following steps: + +* Recognize that the master is in ODOWN state. +* Understand what's the Sentinel that should start the failover, called **The Leader**. All the other Sentinels will be **The Observers**. +* The leader selects a slave to promote to master. +* The promoted slave is turned into a master with the command **SLAVEOF NO ONE**. +* The observers see that a slave was turned into a master, so they know the failover started. +* All the other slaves attached to the original master are configured with the **SLAVEOF** command in order to start the replication process with the new master. +* The leader terminates the failover process when all the slaves are reconfigured. It removes the old master from the table of monitored masters and adds the new master, *under the same name* of the original master. +* The observers detect the end of the failover process when all the slaves are reconfigured. They remove the old master from the table and start monitoring the new master, exactly as the leader does. + +The election of the Leader is performed using the same mechanism used to reach +the ODOWN state, that is, the **SENTINEL is-master-down-by-addr** command. +It returns the leader from the point of view of the queried Sentinel, we call +it the **Subjective Leader**, and is selected using the following rule: + +* We remove all the Sentinels that can't failover because of their configuration (this information is propagated using the Hello Channel to all the Sentinels).
+* We remove all the Sentinels in SDOWN, disconnected, or with the last ping reply received more than `SENTINEL_INFO_VALIDITY_TIME` milliseconds ago (currently defined as 5 seconds). +* Of all the remaining instances, we get the one with the lowest `runid`, lexicographically (every Redis instance has a Run ID, that is an identifier of every single execution). + +For a Sentinel to sense that it is the **Objective Leader**, that is, the Sentinel that should start the failove process, the following conditions are needed. + +* It thinks it is the subjective leader itself. +* It reaches acknowledges from other Sentinels about the fact it is the leader: at least 50% plus one of all the Sentinels that were able to reply to the `SENTINEL is-master-down-by-addr` request shoudl agree it is the leader, and additionally we need a total level of agreement at least equal to the configured quorum of the master instance that we are going to failover. + +Once a Sentinel things it is the Leader, the failover starts, but there is always a delay of five seconds plus an additional random delay. This is an additional layer of protection because if during this period we see another instance turning a slave into a master, we detect it as another instance staring the failover and turn ourselves as an observer instead. + +This is needed because when configuring Sentinel the user is free to select a level of agreement needed that is lower than the majority of instances. This plus a complex netsplit may create the condition for multiple instances to start the failover as a leader at the same time. So the random delay and the detection of another leader are designed to make the process more robust. + +End of failover +=== + +The failover process is considered terminated from the point of view of a +single Sentinel if: + +* The promoted slave is not in SDOWN condition. +* A slave was promoted as new master. +* All the other slaves are configured to use the new master. 
+ +Note: Slaves that are in SDOWN state are ignored. + +Also the failover state is considered terminated if: + +* The promoted slave is not in SDOWN condition. +* A slave was promoted as new master. +* At least `failover-timeout` milliseconds elapsed since the last progress. + +The `failover-timeout` value can be configured in sentinel.conf for every +different master. + +Note that when a leader terminates a failover for timeout, it sends a +`SLAVEOF` command in a best-effort way to all the slaves yet to be +configured, in the hope that they'll receive the command and replicate +with the new master eventually. + +Leader failing during failover +=== + +If the leader fails when it has yet to promote the slave into a master, and it +fails in a way that puts it in SDOWN state from the point of view of the other +Sentinels, if enough Sentinels remained to reach the quorum the failover +will automatically continue using a new leader (the subjective leader of +all the remaining Sentinels will change because of the SDOWN state of the +previous leader). + +If the failover was already in progress and the slave +was already promoted, and possibly a few other slaves were already reconfigured, +an observer that is the new objective leader will continue the failover in +case no progress is made for more than 25% of the time specified by the +`failover-timeout` configuration option. + +Note that this is safe as multiple Sentinels trying to reconfigure slaves +with duplicated SLAVEOF commands do not create any race condition, but at the +same time we want to be sure that all the slaves are reconfigured in the +case the original leader is no longer ok. + +Promoted slave failing during failover +=== + +If the promoted slave has an active SDOWN condition, a Sentinel will never +sense the failover as terminated.
+ +Additionally if there is an *extended SDOWN condition* (that is an SDOWN that +lasts for more than `down-after-milliseconds` milliseconds) the failover is +aborted (this happens for leaders and observers), and the master starts to +be monitored again as usually, so that a new failover can start with a different +slave. + +Note that when this happens it is possible that there are a few slaves already +configured to replicate from the (now failing) promoted slave, so when the +leader sentinel aborts a failover it sends a `SLAVEOF` command to all the +slaves already reconfigured or in the process of being reconfigured to switch +the configuration back to the original master. + +Manual interactions +=== + +TODO: + +* TODO: Manually triggering a failover with SENTINEL FAILOVER. +* Pausing Sentinels with PAUSE, GPAUSE, RESUME, GRESUME. +* Using REDIS SENTINEL + +The failback process +=== + +TODO: + +* Sentinel does not perform automatic Failback. +* Step for the failback: attach the old master as slave, run the failover. + +Clients configuration update +=== + +Notifications +=== + +Suggested setup +=== + +TILT mode +=== + + From 8487142eac61cdb8e8dc7451e989157727f8c6e5 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 23 Jul 2012 14:29:01 +0200 Subject: [PATCH 0194/2880] Sentinel doc updated --- topics/sentinel-spec.md | 2 + topics/sentinel.md | 232 +++++++++++++++++++++++++++++++++------- 2 files changed, 194 insertions(+), 40 deletions(-) diff --git a/topics/sentinel-spec.md b/topics/sentinel-spec.md index 06402a6ad3..0b70e52303 100644 --- a/topics/sentinel-spec.md +++ b/topics/sentinel-spec.md @@ -1,3 +1,5 @@ +**WARNING:** this document is no longer in sync with the implementation of Redis Sentinel and will be removed in the next weeks. + Redis Sentinel design draft 1.3 === diff --git a/topics/sentinel.md b/topics/sentinel.md index 5c4a9045fc..dd2dbfe87a 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -25,7 +25,7 @@ implementation changes. 
Redis Sentinel is compatible with Redis 2.4.16 or greater, and redis 2.6.0-rc6 or greater. Obtaining Sentinel -=== +--- Currently Sentinel is part of the Redis *unstable* branch at github. To compile it you need to clone the *unstable* branch and compile Redis. @@ -35,7 +35,7 @@ Alternatively you can use directly the `redis-server` executable itself, starting it in Sentinel mode as specified in the next paragraph. Running Sentinel -=== +--- If you are using the `redis-sentinel` executable (or if you have a symbolic link with that name to the `redis-server` executable) you can run Sentinel @@ -51,7 +51,7 @@ Sentinel mode: Both ways work the same. Configuring Sentinel -=== +--- In the root of the Redis source distribution you will find a `sentinel.conf` file that is a self-documented example configuration file you can use to @@ -60,13 +60,15 @@ following: sentinel monitor mymaster 127.0.0.1 6379 2 sentinel down-after-milliseconds mymaster 60000 + sentinel failover-timeout mymaster 900000 sentinel can-failover mymaster yes sentinel parallel-syncs mymaster 1 - sentinel monitor mymaster 192.168.1.3 6380 4 - sentinel down-after-milliseconds mymaster 30000 - sentinel can-failover mymaster yes - sentinel parallel-syncs mymaster 5 + sentinel monitor resque 192.168.1.3 6380 4 + sentinel down-after-milliseconds resque 10000 + sentinel failover-timeout resque 900000 + sentinel can-failover resque yes + sentinel parallel-syncs resque 5 The first line is used to tell Redis to monitor a master called *mymaster*, that is at address 127.0.0.1 and port 6379, with a level of agreement needed @@ -99,11 +101,12 @@ is mostly non blocking for a slave, there is a moment when it stops to load the bulk data from the master during a resync. You may make sure only one slave at a time is not reachable by setting this option to the value of 1. -There are more options that are described in the rest of this document and -documented in the example sentinel.conf file. 
+The other options are described in the rest of this document and +documented in the example sentinel.conf file shipped with the Redis +distribution. SDOWN and ODOWN -=== +--- As already briefly mentioned in this document Redis Sentinel has two different concepts of *being down*, one is called a *Subjectively Down* condition @@ -131,8 +134,12 @@ interval configured, so for instance if the interval is 30000 milliseconds (30 seconds) and we receive an acceptable ping reply every 29 seconds, the instance is considered to be working. +The ODOWN condition **only applies to masters**. For other kinds of instances +Sentinel doesn't require any agreement, so the ODOWN state is never reached +for slaves and other sentinels. + Sentinels and Slaves auto discovery -=== +--- While Sentinels stay connected with other Sentinels in order to reciprocally check the availability of each other, and to exchange messages, you don't @@ -146,7 +153,7 @@ This is obtained by sending *Hello Messages* into the channel named Similarly you don't need to configure what is the list of the slaves attached to a master, as Sentinel will auto discover this list querying Redis. -Sentinel commands +Sentinel API === By default Sentinel runs using TCP port 26379 (note that 6379 is the normal @@ -154,6 +161,17 @@ Redis port). Sentinels accept commands using the Redis protocol, so you can use `redis-cli` or any other unmodified Redis client in order to talk with Sentinel. +There are two ways to talk with Sentinel: it is possible to directly query +it to check what is the state of the monitored Redis instances from its point +of view, to see what other Sentinels it knows, and so forth. + +An alternative is to use Pub/Sub to receive *push style* notifications from +Sentinels, every time some event happens, like a failover, or an instance +entering an error condition, and so forth. + +Sentinel commands +--- + The following is a list of accepted commands: * **PING** this command simply returns PONG.
* **SENTINEL masters** show a list of monitored masters and their state. @@ -165,16 +183,72 @@ instance. * **SENTINEL get-master-addr-by-name ``** return the ip and port number of the master with that name. If a failover is in progress or terminated successfully for this master it returns the address and port of the promoted slave. * **SENTINEL reset ``** this command will reset all the masters with matching name. The pattern argument is a glob-style pattern. The reset process clears any previous state in a master (including a failover in progress), and removes every slave and sentinel already discovered and associated with the master. -The failover process +Pub/Sub Messages +--- + +A client can use a Sentinel as it was a Redis compatible Pub/Sub server +(but you can't use `PUBLISH`) in order to `SUBSCRIBE` or `PSUBSCRIBE` to +channels and get notified about specific events. + +The channel name is the same as the name of the event. For instance the +channel named `+sdown` will receive all the notifications related to instances +entering an `SDOWN` condition. + +To get all the messages simply subscribe using `PSUBSCRIBE *`. + +The following is a list of channels and message formats you can receive using +this API. The first word is the channel / event name, the rest is the format of the data. + +Note: where *instance details* is specified it means that the following arguments are provided to identify the target instance: + + @ + +The part identifying the master (from the @ argument to the end) is optional +and is only specified if the instance is not a master itself. + +* **+reset-master** `` -- The master was reset. +* **+slave** `` -- A new slave was detected and attached. +* **+failover-state-reconf-slaves** `` -- Failover state changed to `reconf-slaves` state. +* **+failover-detected** `` -- A failover started by another Sentinel or any other external entity was detected (An attached slave turned into a master). 
+* **+slave-reconf-sent** `` -- The leader sentinel sent the `SLAVEOF` command to this instance in order to reconfigure it to replicate with the new master. +* **+slave-reconf-inprog** `` -- The slave being reconfigured turned out to be a slave of the new master ip:port pair, but the synchronization process is not yet complete. +* **+slave-reconf-done** `` -- The slave is now synchronized with the new master. +* **-dup-sentinel** `` -- One or more sentinels for the specified master were removed as duplicated (this happens for instance when a Sentinel instance is restarted). +* **+sentinel** `` -- A new sentinel for this master was detected and attached. +* **+sdown** `` -- The specified instance is now in Subjectively Down state. +* **-sdown** `` -- The specified instance is no longer in Subjectively Down state. +* **+odown** `` -- The specified instance is now in Objectively Down state. +* **-odown** `` -- The specified instance is no longer in Objectively Down state. +* **+failover-takedown** `` -- 25% of the configured failover timeout has elapsed, but this sentinel can't see any progress, and is the new leader. It starts to act as the new leader reconfiguring the remaining slaves to replicate with the new master. +* **+failover-triggered** `` -- We are starting a new failover as the leader sentinel. +* **+failover-state-wait-start** `` -- New failover state is `wait-start`: we are waiting for a fixed number of seconds, plus a random number of seconds before starting the failover. +* **+failover-state-select-slave** `` -- New failover state is `select-slave`: we are trying to find a suitable slave for promotion. +* **no-good-slave** `` -- There is no good slave to promote. Currently we'll retry after some time, but probably this will change and the state machine will abort the failover entirely in this case. +* **selected-slave** `` -- We found the specified good slave to promote.
+* **failover-state-send-slaveof-noone** `` -- We are trying to reconfigure the promoted slave as master, waiting for it to switch. +* **failover-end-for-timeout** `` -- The failover terminated due to a timeout. If we are the failover leader, we sent a *best effort* `SLAVEOF` command to all the slaves yet to reconfigure. +* **failover-end** `` -- The failover terminated with success. All the slaves appear to be reconfigured to replicate with the new master. +* **switch-master** ` ` -- We are starting to monitor the new master, using the same name as the old one. The old master will be completely removed from our tables. +* **failover-abort-x-sdown** `` -- The failover was undone (aborted) because the promoted slave appears to be in extended SDOWN state. +* **-slave-reconf-undo** `` -- The failover was aborted, so we sent a `SLAVEOF` command to the specified instance to reconfigure it back to the original master instance. +* **+tilt** -- Tilt mode entered. +* **-tilt** -- Tilt mode exited. + +The Redis CLIENT SENTINELS command +--- + +* Work in progress, not yet implemented in Redis instances. + +Sentinel failover === The failover process consists of the following steps: * Recognize that the master is in ODOWN state. -* Understand what's the Sentinel that should start the failover, called **The Leader**. All the other Sentinels will be **The Observers**. +* Understand who is the Sentinel that should start the failover, called **The Leader**. All the other Sentinels will be **The Observers**. * The leader selects a slave to promote to master. * The promoted slave is turned into a master with the command **SLAVEOF NO ONE**. -* The observers see that a slave was turned into a master, so they know the failover started. +* The observers see that a slave was turned into a master, so they know the failover started.
**Note:** this means that any event that turns one of the slaves of a monitored master into a master (`SLAVEOF NO ONE` command) will be sensed as the start of a failover process. * All the other slaves attached to the original master are configured with the **SLAVEOF** command in order to start the replication process with the new master. * The leader terminates the failover process when all the slaves are reconfigured. It removes the old master from the table of monitored masters and adds the new master, *under the same name* of the original master. * The observers detect the end of the failover process when all the slaves are reconfigured. They remove the old master from the table and start monitoring the new master, exactly as the leader does. @@ -188,17 +262,15 @@ it the **Subjective Leader**, and is selected using the following rule: * We remove all the Sentinels in SDOWN, disconnected, or with the last ping reply received more than `SENTINEL_INFO_VALIDITY_TIME` milliseconds ago (currently defined as 5 seconds). * Of all the remaining instances, we get the one with the lowest `runid`, lexicographically (every Redis instance has a Run ID, that is an identifier of every single execution). -For a Sentinel to sense that it is the **Objective Leader**, that is, the Sentinel that should start the failove process, the following conditions are needed. +For a Sentinel to sense that it is the **Objective Leader**, that is, the Sentinel that should start the failover process, the following conditions are needed. * It thinks it is the subjective leader itself. -* It reaches acknowledges from other Sentinels about the fact it is the leader: at least 50% plus one of all the Sentinels that were able to reply to the `SENTINEL is-master-down-by-addr` request shoudl agree it is the leader, and additionally we need a total level of agreement at least equal to the configured quorum of the master instance that we are going to failover.
+It receives acknowledgments from other Sentinels about the fact it is the leader: at least 50% plus one of all the Sentinels that were able to reply to the `SENTINEL is-master-down-by-addr` request should agree it is the leader, and additionally we need a total level of agreement at least equal to the configured quorum of the master instance that we are going to failover. -Once a Sentinel things it is the Leader, the failover starts, but there is always a delay of five seconds plus an additional random delay. This is an additional layer of protection because if during this period we see another instance turning a slave into a master, we detect it as another instance staring the failover and turn ourselves as an observer instead. - -This is needed because when configuring Sentinel the user is free to select a level of agreement needed that is lower than the majority of instances. This plus a complex netsplit may create the condition for multiple instances to start the failover as a leader at the same time. So the random delay and the detection of another leader are designed to make the process more robust. +Once a Sentinel thinks it is the Leader, the failover starts, but there is always a delay of five seconds plus an additional random delay. This is an additional layer of protection because if during this period we see another instance turning a slave into a master, we detect it as another instance starting the failover and turn ourselves into an observer instead. End of failover -=== +--- The failover process is considered terminated from the point of view of a single Sentinel if: @@ -224,7 +296,7 @@ configured, in the hope that they'll receive the command and replicate with the new master eventually.
Leader failing during failover -=== +--- If the leader fails when it has yet to promote the slave into a master, and it fails in a way that puts it in SDOWN state from the point of view of the other @@ -242,19 +314,19 @@ case no progress is made for more than 25% of the time specified by the Note that this is safe as multiple Sentinels trying to reconfigure slaves with duplicated SLAVEOF commands do not create any race condition, but at the same time we want to be sure that all the slaves are reconfigured in the -case the original leader is no longer ok. +case the original leader is no longer working. Promoted slave failing during failover -=== +--- If the promoted slave has an active SDOWN condition, a Sentinel will never sense the failover as terminated. Additionally if there is an *extended SDOWN condition* (that is an SDOWN that -lasts for more than `down-after-milliseconds` milliseconds) the failover is -aborted (this happens for leaders and observers), and the master starts to -be monitored again as usually, so that a new failover can start with a different -slave. +lasts for more than ten times `down-after-milliseconds` milliseconds) the +failover is aborted (this happens for leaders and observers), and the master +starts to be monitored again as usual, so that a new failover can start with +a different slave in case the master is still failing. Note that when this happens it is possible that there are a few slaves already configured to replicate from the (now failing) promoted slave, so when the @@ -263,32 +335,112 @@ slaves already reconfigured or in the process of being reconfigured to switch the configuration back to the original master. Manual interactions -=== - -TODO: +--- * TODO: Manually triggering a failover with SENTINEL FAILOVER. -* Pausing Sentinels with PAUSE, GPAUSE, RESUME, GRESUME. -* Using REDIS SENTINEL +* TODO: Pausing Sentinels with SENTINEL PAUSE, RESUME.
The failback process -=== - -TODO: +--- -* Sentinel does not perform automatic Failback. -* Step for the failback: attach the old master as slave, run the failover. +* TODO: Sentinel does not perform automatic Failback. +* TODO: Document correct steps for the failback.

Clients configuration update +--- + +Work in progress. + +TILT mode +--- + +Redis Sentinel is heavily dependent on the computer time: for instance in +order to understand if an instance is available it remembers the time of the +latest successful reply to the PING command, and compares it with the current +time to understand how old it is. + +However if the computer time changes in an unexpected way, or if the computer +is very busy, or the process is blocked for some reason, Sentinel may start to +behave in an unexpected way. + +The TILT mode is a special "protection" mode that a Sentinel can enter when +something odd is detected that can lower the reliability of the system. +The Sentinel timer interrupt is normally called 10 times per second, so we +expect that more or less 100 milliseconds will elapse between two calls +to the timer interrupt. + +What a Sentinel does is to register the previous time the timer interrupt +was called, and compare it with the current call: if the time difference +is negative or unexpectedly big (2 seconds or more) the TILT mode is entered +(or, if it was already entered, the exit from the TILT mode is postponed). + +When in TILT mode the Sentinel will continue to monitor everything, but: + +* It stops acting at all. +* It starts to reply negatively to `SENTINEL is-master-down-by-addr` requests as the ability to detect a failure is no longer trusted. + +If everything appears to be normal for 30 seconds, the TILT mode is exited. + +Handling of -BUSY state === -Notifications +(Warning: not yet implemented) + +The -BUSY error is returned when a script is running for longer than the +configured script time limit.
When this happens before triggering a failover, +Redis Sentinel will try to send a "SCRIPT KILL" command, which will only +succeed if the script was read-only. + +Notifications via user script === +Work in progress. + Suggested setup === -TILT mode +Work in progress. + +APPENDIX A - Get started with Sentinel in five minutes === +Work in progress. + +APPENDIX B - Implementation and algorithms +=== + +Duplicate Sentinels removal +--- + +In order to reach the configured quorum we absolutely want to make sure that +the quorum is reached by different physical Sentinel instances. Under +no circumstances should we get agreement from the same instance that for some +reason appears to be two or multiple distinct Sentinel instances. + +This is enforced by an aggressive removal of duplicated Sentinels: every time +a Sentinel sends a message in the Hello Pub/Sub channel with its address +and runid, if we can't find a perfect match (same runid and address) inside +the Sentinels table for that master, we remove any other Sentinel with the same +runid OR the same address, and then add the new Sentinel. + +For instance if a Sentinel instance is restarted, the Run ID will be different, +and the old Sentinel with the same IP address and port pair will be removed. + +Selection of the Slave to promote +=== + +If a master has multiple slaves, the slave to promote to master is selected +checking the slave priority (a new configuration option of Redis instances +that is propagated via INFO output, still not implemented), and picking the +one with lower priority value (it is an integer similar to the one of the +MX field of the DNS system). + +All the slaves that appear to be disconnected from the master for a long +time are discarded. + +If slaves with the same priority exist, the one with the lexicographically +smaller Run ID is selected.
+Note: because currently slave priority is not implemented, the selection is
+performed only discarding unreachable slaves and picking the one with the
+lower Run ID.

From 52b751450ef5f83fbb948fcb5295d6168ad364be Mon Sep 17 00:00:00 2001
From: antirez
Date: Mon, 23 Jul 2012 14:32:32 +0200
Subject: [PATCH 0195/2880] Markdown fixes

---
 topics/sentinel.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/topics/sentinel.md b/topics/sentinel.md
index dd2dbfe87a..72a933219a 100644
--- a/topics/sentinel.md
+++ b/topics/sentinel.md
@@ -382,7 +382,7 @@ When in TILT mode the Sentinel will continue to monitor everything, but: If everything appears to be normal for 30 seconds, the TILT mode is exited. Handling of -BUSY state -=== +--- (Warning: not yet implemented) @@ -392,12 +392,12 @@ Redis Sentinel will try to send a "SCRIPT KILL" command, which will only succeed if the script was read-only. Notifications via user script -=== +--- Work in progress. Suggested setup -=== +--- Work in progress. @@ -427,7 +427,7 @@ For instance if a Sentinel instance is restarted, the Run ID will be different, and the old Sentinel with the same IP address and port pair will be removed. Selection of the Slave to promote -=== +--- If a master has multiple slaves, the slave to promote to master is selected checking the slave priority (a new configuration option of Redis instances

From 9366a975c13cc91838a69cb173e797e6697f8ee7 Mon Sep 17 00:00:00 2001
From: antirez
Date: Mon, 23 Jul 2012 14:51:30 +0200
Subject: [PATCH 0196/2880] Sentinel HOWTO

---
 topics/sentinel.md | 62 +++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 56 insertions(+), 6 deletions(-)

diff --git a/topics/sentinel.md b/topics/sentinel.md
index 72a933219a..ae2de8f088 100644
--- a/topics/sentinel.md
+++ b/topics/sentinel.md
@@ -401,12 +401,7 @@ Suggested setup

 Work in progress.

-APPENDIX A - Get started with Sentinel in five minutes
-===
-
-Work in progress.
- -APPENDIX A - Implementation and algorithms... wait
+
+At this point you should see something like the following in every Sentinel you are running:

    [4747] 23 Jul 14:49:15.883 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
    [4747] 23 Jul 14:49:19.645 * +sentinel sentinel 127.0.0.1:26379 127.0.0.1 26379 @ mymaster 127.0.0.1 6379
    [4747] 23 Jul 14:49:21.659 * +sentinel sentinel 127.0.0.1:26381 127.0.0.1 26381 @ mymaster 127.0.0.1 6379

    redis-cli -p 26379 sentinel masters
    1)  1) "name"
        2) "mymaster"
        3) "ip"
        4) "127.0.0.1"
        5) "port"
        6) "6379"
        7) "runid"
        8) "66215809eede5c0fdd20680cfb3dbd3bdf70a6f8"
        9) "flags"
       10) "master"
       11) "pending-commands"
       12) "0"
       13) "last-ok-ping-reply"
       14) "515"
       15) "last-ping-reply"
       16) "515"
       17) "info-refresh"
       18) "5116"
       19) "num-slaves"
       20) "1"
       21) "num-other-sentinels"
       22) "2"
       23) "quorum"
       24) "2"

+To see how the failover works, just put down your master (for instance sending `DEBUG SEGFAULT` to crash it) and see what happens.
+
+This HOWTO is a work in progress, more information will be added in the near future.

From 7e9214316a559d29e3d7147ae2b85c8d42b5d389 Mon Sep 17 00:00:00 2001
From: antirez
Date: Mon, 23 Jul 2012 14:53:06 +0200
Subject: [PATCH 0197/2880] Markup fix

---
 topics/sentinel.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/topics/sentinel.md b/topics/sentinel.md
index ae2de8f088..730b956671 100644
--- a/topics/sentinel.md
+++ b/topics/sentinel.md
@@ -450,6 +450,7 @@ If you want to try Redis Sentinel, please follow these steps: * Start a few normal Redis instances, using the `redis-server` compiled in the *unstable* branch. One master and one slave is enough. * Use the `redis-sentinel` executable to start three instances of Sentinel, with `redis-sentinel /path/to/config`.
To create the three configurations just create three files where you put something like that:
+
    port 26379
    sentinel monitor mymaster 127.0.0.1 6379 2
    sentinel down-after-milliseconds mymaster 5000

From d0a2d142ac048b251c2f08f4e6129b12a645c9d9 Mon Sep 17 00:00:00 2001
From: antirez
Date: Mon, 23 Jul 2012 14:54:18 +0200
Subject: [PATCH 0198/2880] More markdown crazyness.

---
 topics/sentinel.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/topics/sentinel.md b/topics/sentinel.md
index 730b956671..e4d4ca09f3 100644
--- a/topics/sentinel.md
+++ b/topics/sentinel.md
@@ -448,8 +448,7 @@ If you want to try Redis Sentinel, please follow these steps: * Clone the *unstable* branch of the Redis repository at github (it is the default branch). * Compile it with "make". * Start a few normal Redis instances, using the `redis-server` compiled in the *unstable* branch. One master and one slave is enough. -* Use the `redis-sentinel` executable to start three instances of Sentinel, with `redis-sentinel /path/to/config`. To create the three configurations just create three files where you put something like that: - +* Use the `redis-sentinel` executable to start three instances of Sentinel, with `redis-sentinel /path/to/config`. To create the three configurations just create three files where you put something like that.

    port 26379
    sentinel monitor mymaster 127.0.0.1 6379 2

From f2fdd9ee9d7c126891263db7e205e9c63fdbda15 Mon Sep 17 00:00:00 2001
From: antirez
Date: Mon, 23 Jul 2012 14:55:19 +0200
Subject: [PATCH 0199/2880] typo

---
 topics/sentinel.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/topics/sentinel.md b/topics/sentinel.md
index e4d4ca09f3..5adaec288d 100644
--- a/topics/sentinel.md
+++ b/topics/sentinel.md
@@ -448,7 +448,9 @@ If you want to try Redis Sentinel, please follow these steps: * Clone the *unstable* branch of the Redis repository at github (it is the default branch). * Compile it with "make".
* Start a few normal Redis instances, using the `redis-server` compiled in the *unstable* branch. One master and one slave is enough. -* Use the `redis-sentinel` executable to start three instances of Sentinel, with `redis-sentinel /path/to/config`. To create the three configurations just create three files where you put something like that. +* Use the `redis-sentinel` executable to start three instances of Sentinel, with `redis-sentinel /path/to/config`. + +To create the three configurations just create three files where you put something like that: port 26379 sentinel monitor mymaster 127.0.0.1 6379 2 From d018671c0905d5f121dc7174e0c6b4f4081447b9 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 23 Jul 2012 15:20:52 +0200 Subject: [PATCH 0200/2880] Appendix A -> Appendix B. --- topics/sentinel.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/sentinel.md b/topics/sentinel.md index 5adaec288d..d9c127b41c 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -440,7 +440,7 @@ Note: because currently slave priority is not implemented, the selection is performed only discarding unreachable slaves and picking the one with the lower Run ID. 
-APPENDIX A - Get started with Sentinel in five minutes +APPENDIX B - Get started with Sentinel in five minutes === If you want to try Redis Sentinel, please follow this steps: From 0a850a445b1b9e618bc05621706ccc148d8b4e5a Mon Sep 17 00:00:00 2001 From: Stefan Kjartansson Date: Mon, 23 Jul 2012 17:54:36 +0000 Subject: [PATCH 0201/2880] Fixed 3 typos: SENTIENL > SENTINEL --- topics/sentinel.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/sentinel.md b/topics/sentinel.md index d9c127b41c..d667c3c3bc 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -116,7 +116,7 @@ enough Sentinels (at least the number configured as the `quorum` parameter of the monitored master) have an SDOWN condition, and get feedbacks from other Sentinels using the `SENTINEL is-master-down-by-addr` command. -From the point of view of a Sentienl an SDOWN condition is reached if we +From the point of view of a Sentinel an SDOWN condition is reached if we don't receive a valid reply to PING requests for the number of seconds specified in the configuration as `is-master-down-after-milliseconds` parameter. @@ -175,7 +175,7 @@ Sentinel commands The following is a list of accepted commands: * **PING** this command simply returns PONG. * **SENTINEL masters** show a list of monitored masters and their state. -* **SENTIENL slaves ``** show a list of slaves for this master, and their state. +* **SENTINEL slaves ``** show a list of slaves for this master, and their state. * **SENTINEL is-master-down-by-addr ` `** return a two elements multi bulk reply where the first is 0 or 1 (0 if the master with that address is known and is in `SDOWN` state, 1 otherwise). 
The second element of the reply is the *subjective leader* for this master, that is, the `runid` of the Redis Sentinel instance that should perform the failover accordingly to the queried @@ -459,7 +459,7 @@ To create the three configurations just create three files where you put somethi sentinel can-failover mymaster yes sentinel parallel-syncs mymaster 1 -Note: where you see `port 26379`, use 26380 for the second Sentinel, and 26381 for the third Sentinel (any other different non colliding port will do of course). Also note that the `down-after-milliseconds` configuration option is set to just five seconds, that is a good value to play with Sentienl, but not good for production environments. +Note: where you see `port 26379`, use 26380 for the second Sentinel, and 26381 for the third Sentinel (any other different non colliding port will do of course). Also note that the `down-after-milliseconds` configuration option is set to just five seconds, that is a good value to play with Sentinel, but not good for production environments.
At this point you should see something like the following in every Sentinel you are running: From 77c9594e628cee76b3b6ed1618f7b11c104578f6 Mon Sep 17 00:00:00 2001 From: Matt Boehlig Date: Fri, 27 Jul 2012 16:18:43 -0500 Subject: [PATCH 0202/2880] Handle multibyte values in sample protocol generator * Length needs to be number of bytes not number of characters * Fixes 'RR Protocol error: expected '$', got ' error * Ruby 1.8.7+ compatible --- topics/mass-insert.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/mass-insert.md b/topics/mass-insert.md index 641cf8c98d..9ce8f7d96d 100644 --- a/topics/mass-insert.md +++ b/topics/mass-insert.md @@ -99,7 +99,7 @@ The following Ruby function generates valid protocol: proto = "" proto << "*"+cmd.length.to_s+"\r\n" cmd.each{|arg| - proto << "$"+arg.length.to_s+"\r\n" + proto << "$"+arg.to_s.bytesize.to_s+"\r\n" proto << arg.to_s+"\r\n" } proto From eb48c2e6e206538c5bdd503260d5c4d1efc5cf1e Mon Sep 17 00:00:00 2001 From: Didier Spezia Date: Sun, 29 Jul 2012 11:18:39 +0200 Subject: [PATCH 0203/2880] Added CLIENT LIST and CLIENT KILL commands. 
--- commands.json | 18 +++++++++++++ commands/client kill.md | 15 +++++++++++ commands/client list.md | 59 +++++++++++++++++++++++++++++++++++++++++ 3 files changed, 92 insertions(+) create mode 100644 commands/client kill.md create mode 100644 commands/client list.md diff --git a/commands.json b/commands.json index 66ad7af953..7f8950521f 100644 --- a/commands.json +++ b/commands.json @@ -133,6 +133,24 @@ "since": "2.2.0", "group": "list" }, + "CLIENT KILL": { + "summary": "Kill the connection of a client", + "complexity": "O(N) where N is the number of client connections", + "arguments": [ + { + "name": "ip:port", + "type": "string" + } + ], + "since": "2.4.0", + "group": "server" + }, + "CLIENT LIST": { + "summary": "Get the list of client connections", + "complexity": "O(N) where N is the number of client connections", + "since": "2.4.0", + "group": "server" + }, "CONFIG GET": { "summary": "Get the value of a configuration parameter", "arguments": [ diff --git a/commands/client kill.md b/commands/client kill.md new file mode 100644 index 0000000000..7b70f45ced --- /dev/null +++ b/commands/client kill.md @@ -0,0 +1,15 @@ +The `CLIENT KILL` command closes a given client connection identified +by ip:port. + +The ip:port should match a line returned by the `CLIENT LIST` command. + +Due to the single-threaded nature of Redis, it is not possible to +kill a client connection while it is executing a command. From +the client's point of view, the connection can never be closed +in the middle of the execution of a command. However, the client +will notice the connection has been closed only when the +next command is sent (and results in a network error).
+
+@return
+
+@status-reply: `OK` if the connection exists and has been closed

diff --git a/commands/client list.md b/commands/client list.md
new file mode 100644
index 0000000000..ec9a294ec4
--- /dev/null
+++ b/commands/client list.md
@@ -0,0 +1,59 @@
+The `CLIENT LIST` command returns information and statistics about the client
+connections to the server in a mostly human readable format.
+
+@return
+
+@bulk-reply: a unique string, formatted as follows:
+
+* One client connection per line (separated by LF)
+* Each line is composed of a succession of property=value fields separated
+  by a space character.
+
+Here is the meaning of the fields:
+
+* addr: address/port of the client
+* fd: file descriptor corresponding to the socket
+* age: total duration of the connection in seconds
+* idle: idle time of the connection in seconds
+* flags: client flags (see below)
+* db: current database ID
+* sub: number of channel subscriptions
+* psub: number of pattern matching subscriptions
+* multi: number of commands in a MULTI/EXEC context
+* qbuf: query buffer length (0 means no query pending)
+* qbuf-free: free space of the query buffer (0 means the buffer is full)
+* obl: output buffer length
+* oll: output list length (replies are queued in this list when the buffer is full)
+* omem: output buffer memory usage
+* events: file descriptor events (see below)
+* cmd: last command played
+
+The client flags can be a combination of:
+
+```
+O: the client is a slave in MONITOR mode
+S: the client is a normal slave server
+M: the client is a master
+x: the client is in a MULTI/EXEC context
+b: the client is waiting in a blocking operation
+i: the client is waiting for a VM I/O (deprecated)
+d: a watched key has been modified - EXEC will fail
+c: connection to be closed after writing entire reply
+u: the client is unblocked
+A: connection to be closed ASAP
+N: no specific flag set
+```
+
+The file descriptor events can be:
+
+```
+r: the client socket is readable (event loop)
+w: the
client socket is writable (event loop)
+```
+
+## Notes
+
+New fields are regularly added for debugging purposes. Some could be removed
+in the future. A version-safe Redis client using this command should parse
+the output accordingly (i.e. gracefully handling missing fields and skipping
+unknown fields).

From dbea3461764576f40da9b0625bd661559d232fd0 Mon Sep 17 00:00:00 2001
From: Didier Spezia
Date: Sun, 29 Jul 2012 11:36:44 +0200
Subject: [PATCH 0204/2880] Fixed complexity display glitches for some commands.

---
 commands.json | 1 + commands/pexpire.md | 4 ---- commands/pexpireat.md | 4 ---- commands/psetex.md | 4 ---- commands/pttl.md | 4 ---- commands/time.md | 4 ---- 6 files changed, 1 insertion(+), 20 deletions(-)

diff --git a/commands.json b/commands.json
index 7f8950521f..094bca80fd 100644
--- a/commands.json
+++ b/commands.json
@@ -1746,6 +1746,7 @@ }, "TIME": { "summary": "Return the current server time", + "complexity": "O(1)", "since": "2.6.0", "group": "server" }, diff --git a/commands/pexpire.md b/commands/pexpire.md index d5bb40bf63..b01c937931 100644 --- a/commands/pexpire.md +++ b/commands/pexpire.md @@ -1,7 +1,3 @@ -@complexity - -O(1) - This command works exactly like `EXPIRE` but the time to live of the key is specified in milliseconds instead of seconds. diff --git a/commands/pexpireat.md b/commands/pexpireat.md index bfd7005552..f61f25038b 100644 --- a/commands/pexpireat.md +++ b/commands/pexpireat.md @@ -1,7 +1,3 @@ -@complexity - -O(1) - `PEXPIREAT` has the same effect and semantic as `EXPIREAT`, but the Unix time at which the key will expire is specified in milliseconds instead of seconds. diff --git a/commands/psetex.md b/commands/psetex.md index f6ee05815b..3e9988eff9 100644 --- a/commands/psetex.md +++ b/commands/psetex.md @@ -1,7 +1,3 @@ -@complexity - -O(1) - `PSETEX` works exactly like `SETEX` with the sole difference that the expire time is specified in milliseconds instead of seconds.
diff --git a/commands/pttl.md b/commands/pttl.md index e80f10f53e..a3d66431a1 100644 --- a/commands/pttl.md +++ b/commands/pttl.md @@ -1,7 +1,3 @@ -@complexity - -O(1) - Like `TTL` this command returns the remaining time to live of a key that has an expire set, with the sole difference that `TTL` returns the amount of remaining time in seconds while `PTTL` returns it in milliseconds. diff --git a/commands/time.md b/commands/time.md index 5d22ba3e12..0aca44a597 100644 --- a/commands/time.md +++ b/commands/time.md @@ -1,7 +1,3 @@ -@complexity - -O(1) - The `TIME` command returns the current server time as a two items lists: a Unix timestamp and the amount of microseconds already elapsed in the current second. Basically the interface is very similar to the one of the `gettimeofday` system From 65a3c8c2ca43c3f9bca01f7c0e796befdeae14b5 Mon Sep 17 00:00:00 2001 From: Didier Spezia Date: Sun, 29 Jul 2012 21:04:01 +0200 Subject: [PATCH 0205/2880] Refreshed benchmark page. (mentioned new pipeline option, ethernet packet size impact, etc ...) 
--- topics/Data_size.png | Bin 0 -> 15724 bytes topics/benchmarks.md | 69 +++++++++++++++++++++++++++---------------- 2 files changed, 44 insertions(+), 25 deletions(-) create mode 100644 topics/Data_size.png diff --git a/topics/Data_size.png b/topics/Data_size.png new file mode 100644 index 0000000000000000000000000000000000000000..1acff3f2b57bc821f51aef7c9f8aae65399cad5a GIT binary patch literal 15724 zcmch;c{r5oA3v@XN|F?nP;^qsma>hdvR0N#QPxnDeVrM`QX$!sO7^0tgb;&aWGq94 zWM>v*UuUrmGsgU$A)PwsbN#;OI^T1B|8kkc=`_kNCzO-=rs)|o5NlxeofB z;ge?luv5Df`1jI-O?RC!1~T>^5GPHA{2r3hh}mKzK&Bw znL8Yf&ymg-Sn+g=fB80zUW-gA`)+UEI;iJbKxZ%6pL0w2DgCyET!L4&>latNJDPIX zdIzX}?(NftHsSs9){}>kbz?8H{V^b;UVEhF*=_r@OOHgW@})5@c>5MVl{8gJbLB5< zmhz4um-L<2fg-WdQA#CUDj^%Rp$mq6<|Ug7@#JRn2vXs6^0U+oO6$&P-V!YR8{x)` zE))&F@bJv)Lgi6yRhQ-E4-}V6h9!TQ(S|za+CO<+(pI-YM#@EMa^}r(2MI4Y$fv^Q zoLiR=5tAv&J4n*8zff9fhwRe1jx)5#4+ue6P158g2d8)1emjEFzT*v(R;`pxIzq8d zPNjA9Jt*?Z9TqoWZZx`Oj$@A@%FOF*JBBI(CsnpwnQ@ptL2qhMG3h!p#8YoL+S|s5 zTYAB-`v^hT*(?1ZM9kue&ON@g-m?fiy>VEr1ZC{;4Znd3zOpw-Zf5#q&Lk*xHZiqs z@ASxA0a*3mBf@j$&HHuCW%u_TCQX)EWjN`B9MQla^=#r5v!p@kn!>j5)?Qs`nB+n{ z=d-7mQvFV=q;t3M5tgj_hz-v?PUV2+`n)j#3kSI_9f@$iv~T`SC}O87+KpzW1GQ%_ zL~xP5ZH4%+l!3l`Av+DG+P*@L9V%<}cT3n2tOMAA-6KBS(UQxSLwO6=3h7EM=Prd02W{dNPiWdy+TaDlw zyqr^xiC~{Ajqrj@tC&o8MCn1hn{WFPxcZpM5pAtkVaOFr6D1ZKt#x%lM(I* zRr+I^(>P8(J+bw?W_}2HH{dnG4JkdD2#YI#$8QDvNlQY-;8B_=@F3$f5DbQ@`3BXt znO*+z^gj+2|35)yuVaV|&Qm$CWuVjYx%ihhZm+z~j|L+aWCn z#axS(g87y?R7#Fd@kOqKA#ap5iF&cxi)UZNd_9zQUS|K^#$q#5?$8})AC?Y~U3Ooku3wdAe|1hwjTUb(LW^pira^?5Cmw0QYbZ+9^lKVwH#Pd@G zC-4yZVa%%C4SY#JGm;!S=UkAI8TUH3-owwPxue|dIZs(>0r8p#JHoCmdYZ_S)LZwL zUn0nRCiHkeSw>!?uuaA6=53E7ZjNwQ92%%Ex2xj;eo-tzC>}v!A4LjH0{?sfnP9q! 
zxEEXyO=;Knz(1dW(d`hhU@_GP&7qwAMZrJD@s`GL?;^N1)J6NCBZPpxx;jJLE`}fu z3M&4xt22DW2NsxXz!vXe3kK@JR1R(Ewp4E3PY#>~YZ4l>C>8uC(_{X=FBL(NyDi&o z>)3%st9~nUm-#2tmkN1$*a2(d00YbJ!wUVAJ&0(Ee`hD}v3w6@1HR(i_ish`w>|tj zJ>&uMGyS~2Zy$oy|IpUIo{G+!R+>Add4%UZsHB%=Qme*dUmmpBxWp50TE(n(?%}+= z@g~H+Sn&YME?6@q66rB+Ty|4EHPnQF1Am+bJD9+LnES)7#zYh}6-i4NyEet;o3wiV zY)I8?Ret*%rRze_cu#;cb@VQe$BkmegX_cHBihJ&dk@fL>vgj8xcf5G)Jm$zxemNR zAPST)PpJul9`j$Gsj9+x<7w8^SIpyNP}jW}ETGBbt>axu(rNVx&z1V;n&ENHmj;@vAjX9}hZ{_>3^_=@uZfbP% z-3kB+ftS%%KCv`h@sZ(PQ%1*@vk;TReg+XPdm5+p{ysDDH$d94(CQ=!je!f?=; z0<7X;jBa)wb_K0=y*8FZU*Io}%OpwLW(YLwCFqjFgbA)c{XJ7!MivWWaQ|V0LfYBW&fGyY8o|rLQET(<+a(ZnHd35jI_F-(fUsZM`xu zmX9UIPFIv4r}C=Pj0Z_~WuuaNs_3u7YMzVs%`T;ci{$Oi6Gf7vA-vwsS$3lz<-RLo z?N(i^$%EN5A}Rm)K-9gRvD28)CA*^0eX-*CRm|B@!^0v3)X3H*-IZz69<|{5bN6J^ ztf2v_4iZ6&y63ueGy)w@EYlAZoSI2d(4l3)qE^+UOl%2o9L7mXd#R(cIpBDERX41L z&;v^OG`dO|e8?a35sb&<6tT>~v{p?*XMGmF+h@$XG+`++;oYD*7b*+f-R~OW2exAf zaw7f6R685~D$3cvOwl8^FrZk4YZ+|lVlI8cQm5#jvHVPTQ}Qq(t!zoO2K`J3Iaf59 zAW}VS8Ui~yLHUy7>g=jayT*8Zl`x1WU1P+%E%!B(@`6@c(YXEinT5Pmb#3VV55?HA zF#LR9hejXF36Ztr7D!Juh?Lm=+^cubcrsCL)$2w|9njCIu_;8~6)WYX#SyYh@D%#3 zwTnj3)E}dEFVa1qkD-kW3;{g`)M4wxorJu4Rek#GM;u^?hs`eA45hS41R(vX z7kI;R7tRFk`lDv)Pm_Y>mls+d)3*ToK5q)2EG;3XyF?w*@LI_%jIFdT6ki5RiPJj3 zZBcigk|+;I6D8|v&!mJQh3fsEw}l?^>s#`_?H(?pzD#Tj6zMhgcZGwRHIgITdJf8vB%@4xLQQMsnc=X z<84DWkGeiOrCN{yROBY7u8yp)qs>pU?|kXm6LxX?HOh;4x_pZvW=XW*!jvnGIM?0A zuL-Ky*cd}57to3M`p%0U^#UcF$)f2Mk_un?hJ1_+RTq=Pt$8nG?Ab*Sxmd>)ONFgt z!yh{Y0MoFS24LhRq9RcoOWkuqUZU3fQH0%kXf8L^?YNmGc!~Pc)+2yr7B>oZ9hTMr z1v+{@mku~94Rw2C5#pp|K7Ip0Ii1mTU|(JD6YrctcM(A|G&;LliEZOvLK7M=S{Pxd z5p2#KyxdcRH^gkql`i`Dm>Va=QnH>)!Tvk|XbO0-fM&0h_haIP?D-7N{*{~Wq2FG! 
zcDy`TQxG_xuXbX7GIf5;6z>&k7v(p|r+$jkZrp~aC8(JOs!-aA2;y^0;cKh{+`zf} zvyv4NKU9&1$&N+F-;U`TephtFkaA&mHYg=6V!N%I zs+ASvv`*x5w>gME9x)x{^YODdwqY}A4V6pS>R7=PZ$qdh@T6t?=vUOz9=&&1B#95lwP1oilZdeV{tFyaN+Z;xT%dNAm zPUy)K^-@((guqf*wvlodOGCBVLst>^Q&uf#9al%Vr&mGmoP~`Nf!l zDz+Fon$)777tl~P<90ZZG23{}%inos!FsdCH0cxf>E9N8ktH%#s_cEM9x#`hpd5vgeuRD5$mtXDc{J|;3;4_h7i2+9fH~T)HY_@S79mN@9;NBxo zbvgYy_LlVnKu<)ore`Dhjr4n{tOeDL0WI`#^zUfT9K5yNT_|R;eS$NcK8M#1D>HCj zEbwZ|4i=3u#B}RZ(1$hPF19syW6u9-N-kqWKyt7;c39FGB%`0X&5LyD!QmAkMTYy{ z002k8=qX&HLf{OR7x{sBTE31I$3J6aGzFN0b64hOC}ccg=V(%P zGjg%060Re0PdU)Buse87_dlvt_+w0q3pelPyhZ?ZP`CE9!eIS1)-mZmB*TV;7)9H&rY_s1B1Now71q@TCaJ;wtC3X_Jb z4ny}>>s1~H?oXNJ9|q3!5OWmkI7)fEIN&GhAXCp{(uMI}bPavgS$)4c{3cL(3eRYtiRiLD!42^!mxG@Sc_dHEtURf)2kQ1v8RIWUkoZ7Xr_DAsZ= zb`{#P%rfTzy`aadbuZ?Dynf}@frOD!{`gNHK_J(+=WYa+P&mTw zr&lQ{&VJ5Rv-UAL18J4|GcZ0T+O@fzG+qYW+z?5xKCp24%>z{HL!~Cq$W=_=jVVB$ipb)Ep$xT}&o#L;pK>Bd7hd|=*LVuifY_`w zi>l%EDPd`WWp7{^S1c;dnL3~~mWTXppC=btGdPVf3XAN5!9a@0lD3{Ya~J54m>#i{daiV|Zqu*5{1a>Vi!x{WbmzQCx-!YV%SP%oi-IX>(HTGh!( zfd$+ffL0K^yi9@^zH_B}Hf!P!k=K1Ai;1)OjG2Bt`8q{I3n87wv38#ypU?se=$x!i z_LYZ4lg8980kC=61ysy$^vE?Poz}4sH6NRvgsGsDgw!UY(IaZ7Ozu5LC694_CiK3y*K$J_afLkh7Cb4cw)CZSc-ac7kNZGcd|)H;;m8g^ zgS^OYwP2QPU9f;`7n=&iTWux)*a;~PO^V1qcM?st@?A1FPd}f(07z&rj5yNzRxdbAMg?}j`|`s_j#bW7VW z6q#2DGz@gIU86?e`v!`2_=xq%RZTE9*2ZZF-LSXT0M-r8M<&dM`Un~UFeWS(z_~~z zMF*Fp{xwMnF_GO%EVVXqQ-RVKU&E%zTif=kB}Np;570b4xn^?U$YgUq&1m8^-9WVsYuJ4c&L^s%4MPZIwBWKz9MB1(5EIL3b>-uX%wg?OhV}A)A^A5`y=^?#t&f3x@Oi! 
z6ss^b&mguUo$;i=Z?3n}Yy0Gah?wxOs(dc$;}tRggGHvUNtQ9L+NFXlcsY^@v%TAw zryMZ%Jf+k)?GTVtYk$+ZqdwYCju%}CZ&7| z7f!CaC5n9nEFw${6HTpZJ@E-pM)YbH$}o+_7ZNq{={TfOey*LTo`LOcRvg4p}82{xt+B&St0<+{mG390^x?G>7G@(g-8YqtGIyk=8eT^PkiEawZx0H`>Ak~Xoh^U({Y}n4*97dE2&{_Q2 zaHBa?##K@MQ)O{Z!^~=j8YrkT#z4;i34uzvPjQ9rwj!Gy( z^vBFy3qdm#+*hhUd|0iwd{3mTO?eO~8dnlI1um=F(FCnG1dgq=lFG=#dKMG!q+LA| zZib){bqR&aYmM<@P(6i0IVIhhZ!P+W>*YS%GLOzEvK^$`L(h;raQ6;@O~oRp$H@Hv+z*j0AUap9shcN-wOaPp59El zB7h(998l5y`dVi2u>7hlx=5vzh*Vyu1p=sz4{62k7c#SSFC|BAnX=%pYbi=>@NocFaZq@T1zW8dQz#L2asFH+F65&RK%u9 zTqu84fK_I^ zNRC#>=cNwLxPyP38pxQa%Eg-hd>%7L7PZnUeNeP{Ur)%UNvqPu7|UwKz+FOvU{Zaf zyUk{n?X)Ti8DPXdmaLz#(C^mrQ;${7>b9h-gA^YM&$F{su~GL`OH@cTwqY)*q+0yv zyZx*00<}pW?@G(vg$^OsCcQ5u{yfJJ{kS;S3IXJgy2zUu@B;Rgmo*wvT($S96`eRM zux_a8nu(H4B!F^NfE{o7iRn1NGM3A8J!OSiS7lg`niaFxU%$Eo z@Png3&Am9dLs~iKK4Od?IFYivA}~f1J9LdY@z4!wJ21hS4Twptz~uw%%KD+e+9{jq zd4J+yBw!&k-+-^5AjWupnKl2#sv;6ncwEiqx0Xk`{s|jzP#q)Em6?~k&E{oWd66_L z0_<%TVl3jUn7_^4a$UI6^}B0<_8vCpILi*bqxt~3MOF^I|4bB<_%oYKS$VKret+fA z6!+xzkP8vL&^oNP$WMqa)Lls%m=%;!D;(4scwBV0#3BIF%9qN$-gfvz^x@M0+CYlA zSiw{i1|TJM_;o=@+`XT52RoumZyD&V!r=R(tvowNKTe$T6a@12&yziBZ#YTCTSX<; zPa`3vquKJ+r4aPDDd#g!M{@R^V4V(H=~5(0HpAb$G!261gN=pKo{!mpfL70#49XY| zqS#*)TKBmwQ&qWKpja{V2Icw%V723|#8n5aZdq+o{`rR4a_zWAkPY@85MCE*f-NmQ zO`IUDFW>;<F@%D-zBu^5@9=#b$*^tR`MBHcnTc$LtkjF@Q`S82$7jfl37SpEN^kb2jN(K$JJv@ zHGX_~kb+-hkET&oid=5F15qEqDoFS9swYaD;&(`_pMNrIStY$;PGDPa0NTDqxofUP zSdW=YmPH9B^?x~+$Cad91(>k z{w>L3OJ+lj6R;q@6?i-Q zF2;9bN&h=jNv0B2{ELjbW)2qI`v5w_3ZJbPnu%CX@5vR#{Gi+TY}4s2%$!~Mr_>;? 
zva=j!B|mSmo&vL=8V5M+Az;R>K)iou>_0&w;)O>bgD-(s1FQv!cb;e>yA4@22RkCO z7EVm(nWY=YE$2ZasQ&y4+d~becSg?8cpWu=06z|}Zg`O0EFs54BLP4_gT1HZ2}QR= zZ$xpXEn=JS%of$QuYWWov8nQ1jTUeSi|eL<9qabJ4Q**w_*CE;!<*$}qF$(4`K`?C z@EaNM#@#o1|3HgjL?N* zMq(i^O|T|+EDvAv36PD`NHt^X%XsH?Zs`>SZCyLYegPG$lT5}(*G#I{x%i?Uw-1!a zCxlGONX15XxDG$a=fkZ$%M2cOei$H2iM1PlJBNznLpus@2rNimlbAzU;Nl!gHS6w#qM$ooqT=f0yzXm#JXwBjQ#CG%q_Y)zkP z4B1mXH&Xd9wjr_R?tJM$Bj+R)C|v(b9_ z!7zc{GQ@lL|Ku$w#@w-fXPBQ$UPy@V(7}eX8pB;;YXS~-guIGO+2wBu^)aGBwyejc zHM)gQVya8`dZq>{f>q{3?8}%rHcauu-2j@rMma4!{x8qx*BYFl^>Ic~l`oi)pZ!Y1 z^mq=e2bneM2$jptE4`IwIJq-;JxZ;TEgocuW8ydI{{Hf}81LBMzduvcV|!c4oK2)Z z_{W-SdMaM@D}S#XKld}7v1S6bGIMp3&G$@MUfBH`_2a4!lOz*BDKpB|*FX)YT21VE z(Xs7Sr`Ltiz#_NfW}#fxN~oCp;~&K>1)ut^R&p%X%he7w{mDvGi)Vo$UuIp!b|-N! z32&LLd%l(3Q#sGp2;QUB)h>qW*h>8o+JIihI~awJ7=zLa0hy)88EEw`H05e#xZOOQ z6r9Vb74vVKIaV+lFXQO)2CLIH7D#Y?cO6s1YG$nRT0VOPF&hZMaDQBnIS%?U(tT7M z(=M8wZ41)VDn-DQtR+?bIKE_&<- z@8XD?mLE6}B1GVxwD$FN=JXX;&73Z)2=C5%Cg0I>Dc$)5Lgjlh9|_|;`0)D|`Ed}c z2)(IYfZ?)z(;DaJ>F5g3>yC&fcZpbv_MPbx9>4QUfiGzMyHVqG@ru?%_4wU{=BAR8 zk_=5d4WLv{+qYrtf>AlL`}BuZ@{0LYET?RWOTo)TxfCfo{#Fr^eGp8{JQ)!=Em5?a zh;bfr7j~If*yS#*8u$Po{{VU_5~j57r{%M!D_8z>PYXO#}1v)Y(&wJrn*cX7vaO|?TZn=y@UCytf z1{~qyq7Na0+{!i)7XY27pb1VL+@CY@J5K@DJzED{!z z;9Ou2vAbK&Ddq7yPiU)998%WU@3)d+EeJ4l7Y8g2_bKXImb<*;t!ZDcJ%N4&K#4l? zM!eF4x0t3|diWa8Cu75P+RmmNtBap}2+}8rHDV(MDPGk)? 
zL`bKE=O*Y`clnsaAz-Hjbx^VvotOsHyIdZ$)vo9koR=}st^8c4c6%-F1073MD$ttV zj}M|hXv~op5t7Icbf(eJS#I6W%gh$J6z74?@-uj!s`Z%3p)Lr=-e7*BS(c3!%w36QugOth0woqbqOYOt+C>O zU^I9fDqkhULs@pH%`8i#9TZ0;B$1}eADibt1iw;*lK?8=aaZ}dN@irsNinR!Upz3q zI|%l13l17Jz2r}PIX?X${FXLUvTX3WkmxIcp3CoAd+n9Xmp{U2rOyPOol0r4{whVz@Ts6h4Lbu-rs}SzbZa)1T|*EZ~Z@P8+Sb=bNWI^!pG;UWivtd5ksw062a&N&ll=w8nf?Z@_=G-80v- zHTEmWbYG}8bl2}*8S#+(IRDreqMi{(ui4Y^j_x|zUvqFyxbjg?CG&Eb)Xqqey9j|F zoWnQbSZ+tPYR6@nhhtI}#z97<+2t<0f4Ufp_EsE2-te<|usWJ%6tJB1R|$w*LBrXBcQ;(y~r3PGk zrToqD|A_VZ4``R@Zx83#RCNS@tXb#P|5)=cg)orR{~>dA{d{(vB>O)Gu^Y=MWb;qt zS;KtnmRCmpo|gU*=uzM z{Y8*qv+WZ1Uf!vR?Xh1l2p%=KYgQvZfh%lkiocCNb&8JPb>d7Y=3?z0ic z`$BY*@jA|8(XJi%toHoAEP(j{bZXfkQa_NyMcwj$r)u(^|A;{SKcd&q6X~Dy@qb{{ ze=m3dzRJ(`%$mY}X}A4z{`TJnfo_WiIPrGgKk~tCt$!)P|HNC!d&+_6Z+$fKhfQ?- zuVDRuc86aZuWR=Iw{ZU746@3wW@|AG{juJnBHzC!m;Yv}|3tq3#UM|BcH{K|+q%yN zMZR6X)x?$eb`ZEJYA5(t;3-`AGB1Zql5kjO8E}uP;1JDj4*$9oVo@YYJr)VEP#)}w z5%usp_Zl<*l1g?yrQv-^AcV{9qFD%}`0dowKD9}UG^^*=b}|aERrEv5ES0>TSlH7# zP+khf86B%s`RX!}_6-ItNfIqT!VR3%`vny2CiHOU9LISK_c9c^!f?#b#q@95*N>pg zS?waevs~LWY15J%55zDA_v)}6S%s#q)c0P#euSj0@C*NX_#I-aXUtMF!_a$mjIqpE zfnraDLn?5~hW|o_h7Qv*0){?C_sB1VqMypAO)v6gCtUe#DeA}DVlt~Kcg)BblN%f5Yr%>cJa3>fg+=PP{b$@_0;x57+ zf%6|jnROEj(Z}bfWQfxKl~8`#OfCU*vBIqwA98TXgl^EE!6gXX&vXnIvtZqH`UAMp z1zd=Fb_B(`-DLLf9_q=-f2Zqzi}X4tgU8;HHJ*~$Z-wOHCGPo*P1s=zT-JQEn*vk0 zIpQp;Y|(IGR>zDN^o7B?E@}nbVk{PW#viTfCU}c+@-^&mVcP~BD6tDz4jJH~yxAR| za+^`Q$1yj{_hT{cnp(kqCqz?mf})VqtfEzWZRVs35GZ6LZWmn+oh05v$(H$!y{u;Q_eTfN!lgZOkX=# z(4m)yrhca#ElfiQ@3-vLtLU6G3%Y(h@0-ObkCYh8L%jLZ#J(y^zRXU!bM(D<(@w;pkRx2IB~Oc~w!YBU&9n>U`jT6Jn^wQCE*Cr!v0(0@1%0nA z!l=n8wx3eMBJ}oK&1hc=Ui^Lya`ydnC0SD*l{z>$05T}18vC^4KB<;g(*hRl5R8+@`PoU2$89gRGhG z!MuPi2RQ&Ow?`<{^q2*}qf6}G(R+%|I_DI$mF2BCmljZjL# zajF#}&7p_~J!R!BeWa7PIOI`AU_JHqHbGl1;BIKH-rV=Y-+eq^CsFip2#{K^fTBfkT literal 0 HcmV?d00001 diff --git a/topics/benchmarks.md b/topics/benchmarks.md index 10338ced85..88ba11e687 100644 --- a/topics/benchmarks.md +++ b/topics/benchmarks.md @@ -9,23 +9,27 @@ 
The following options are supported:

    Usage: redis-benchmark [-h <host>] [-p <port>] [-c <clients>] [-n <requests>] [-k <boolean>]

- -h <hostname>      Server hostname (default 127.0.0.1)
- -p <port>          Server port (default 6379)
- -s <socket>        Server socket (overrides host and port)
- -c <clients>       Number of parallel connections (default 50)
- -n <requests>      Total number of requests (default 10000)
- -d <size>          Data size of SET/GET value in bytes (default 2)
- -k <boolean>       1=keep alive 0=reconnect (default 1)
- -r <keyspacelen>   Use random keys for SET/GET/INCR, random values for SADD
+ -h <hostname>      Server hostname (default 127.0.0.1)
+ -p <port>          Server port (default 6379)
+ -s <socket>        Server socket (overrides host and port)
+ -c <clients>       Number of parallel connections (default 50)
+ -n <requests>      Total number of requests (default 10000)
+ -d <size>          Data size of SET/GET value in bytes (default 2)
+ -k <boolean>       1=keep alive 0=reconnect (default 1)
+ -r <keyspacelen>   Use random keys for SET/GET/INCR, random values for SADD
   Using this option the benchmark will get/set keys
- in the form mykey_rand000000012456 instead of constant
+ in the form mykey_rand:000000012456 instead of constant
   keys, the argument determines the max
   number of values for the random number. For instance
- if set to 10 only rand000000000000 - rand000000000009
+ if set to 10 only rand:000000000000 - rand:000000000009
   range will be allowed.
- -q                 Quiet. Just show query/sec values
- -l                 Loop. Run the tests forever
- -I                 Idle mode. Just open N idle connections and wait.
+ -P <numreq>        Pipeline <numreq> requests. Default 1 (no pipeline).
+ -q                 Quiet. Just show query/sec values
+ --csv              Output in CSV format
+ -l                 Loop. Run the tests forever
+ -t <tests>         Only run the comma separated list of tests. The test
                     names are the same as the ones produced as output.
+ -I                 Idle mode. Just open N idle connections and wait.

You need to have a running Redis instance before launching the benchmark.
A typical example would be:

@@ -65,11 +69,6 @@ multiple CPU cores. People are supposed to launch several Redis instances to
scale out on several cores if needed. It is not really fair to compare one
single Redis instance to a multi-threaded data store.
-Then the benchmark should do the same operations, and work in the same way with -the multiple data stores you want to compare. It is absolutely pointless to -compare the result of redis-benchmark to the result of another benchmark -program and extrapolate. - A common misconception is that redis-benchmark is designed to make Redis performances look stellar, the throughput achieved by redis-benchmark being somewhat artificial, and not achievable by a real application. This is @@ -77,13 +76,23 @@ actually plain wrong. The redis-benchmark program is a quick and useful way to get some figures and evaluate the performance of a Redis instance on a given hardware. However, -it does not represent the maximum throughput a Redis instance can sustain. -Actually, by using pipelining and a fast client (hiredis), it is fairly easy -to write a program generating more throughput than redis-benchmark. The current -version of redis-benchmark achieves throughput by exploiting concurrency only -(i.e. it creates several connections to the server). It does not use pipelining -or any parallelism at all (one pending query per connection at most, and -no multi-threading). +by default, it does not represent the maximum throughput a Redis instance can +sustain. Actually, by using pipelining and a fast client (hiredis), it is fairly +easy to write a program generating more throughput than redis-benchmark. The +default behavior of redis-benchmark is to achieve throughput by exploiting +concurrency only (i.e. it creates several connections to the server). +It does not use pipelining or any parallelism at all (one pending query per +connection at most, and no multi-threading). + +To run a benchmark using pipelining mode (and achieve higher throughputs), +you need to explicitly use the -P option. Please note that it is still a +realistic behavior since a lot of Redis based applications actively use +pipelining to improve performance. 
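For illustration, the difference between the default mode and the pipelined mode described above boils down to the `-P` option (assuming a Redis server running on the default host and port):

    $ redis-benchmark -q -n 100000 -t get,set
    $ redis-benchmark -q -n 100000 -t get,set -P 16

The second invocation batches 16 commands per round trip instead of one, and typically reports a much higher throughput figure.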
+ +Finally, the benchmark should apply the same operations, and work in the same way +with the multiple data stores you want to compare. It is absolutely pointless to +compare the result of redis-benchmark to the result of another benchmark +program and extrapolate. For instance, Redis and memcached in single-threaded mode can be compared on GET/SET operations. Both are in-memory data stores, working mostly in the same @@ -153,6 +162,16 @@ the TCP/IP loopback and unix domain sockets can be used. It depends on the platform, but unix domain sockets can achieve around 50% more throughput than the TCP/IP loopback (on Linux for instance). The default behavior of redis-benchmark is to use the TCP/IP loopback. ++ The performance benefit of unix domain sockets compared to TCP/IP loopback +tends to decrease when pipelining is heavily used (i.e. long pipelines). ++ When an ethernet network is used to access Redis, aggregating commands using +pipelining is especially efficient when the size of the data is kept under +the ethernet packet size (about 1500 bytes). Actually, processing 10 bytes, +100 bytes, or 1000 bytes queries almost result in the same throughput. +See the graph below. + +![Data size impact](https://github.com/dspezia/redis-doc/raw/6374a07f93e867353e5e946c1e39a573dfc83f6c/topics/Data_size.png.gif) + + On multi CPU sockets servers, Redis performance becomes dependant on the NUMA configuration and process location. 
The most visible effect is that redis-benchmark results seem non deterministic because client and server From d8d44b6480384d1a07c0a5002e4c436162e9ec2e Mon Sep 17 00:00:00 2001 From: Didier Spezia Date: Sun, 29 Jul 2012 22:14:45 +0300 Subject: [PATCH 0206/2880] Fixed data size graph --- topics/benchmarks.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/benchmarks.md b/topics/benchmarks.md index 88ba11e687..1a25bc6c58 100644 --- a/topics/benchmarks.md +++ b/topics/benchmarks.md @@ -170,7 +170,7 @@ the ethernet packet size (about 1500 bytes). Actually, processing 10 bytes, 100 bytes, or 1000 bytes queries almost result in the same throughput. See the graph below. -![Data size impact](https://github.com/dspezia/redis-doc/raw/6374a07f93e867353e5e946c1e39a573dfc83f6c/topics/Data_size.png.gif) +![Data size impact](https://github.com/dspezia/redis-doc/raw/client_command/topics/Data_size.png) + On multi CPU sockets servers, Redis performance becomes dependant on the NUMA configuration and process location. The most visible effect is that From f0bac6fbd89e9fac3aab7151a4ecbe32d99715ac Mon Sep 17 00:00:00 2001 From: Didier Spezia Date: Mon, 30 Jul 2012 22:45:19 +0200 Subject: [PATCH 0207/2880] First drop of info command documentation --- commands/info.md | 141 +++++++++++++++++++++++++++++++++++++++++------ 1 file changed, 125 insertions(+), 16 deletions(-) diff --git a/commands/info.md b/commands/info.md index 2e184256a3..336fce7df7 100644 --- a/commands/info.md +++ b/commands/info.md @@ -1,6 +1,26 @@ The `INFO` command returns information and statistics about the server in a format that is simple to parse by computers and easy to read by humans. 
+The optional parameter can be used to select a specific section of information:
+
+* `server`: General information about the Redis server
+* `clients`: Client connections section
+* `memory`: Memory consumption related information
+* `persistence`: RDB and AOF related information
+* `stats`: General statistics
+* `replication`: Master/slave replication information
+* `cpu`: CPU consumption statistics
+* `commandstats`: Redis command statistics
+* `cluster`: Redis Cluster section
+* `keyspace`: Database related statistics
+
+It can also take the following values:
+
+* `all`: Return all sections
+* `default`: Return only the default set of sections
+
+When no parameter is provided, the `default` option is assumed.
+
 @return

@bulk-reply: in the following format (compacted for brevity):

@@ -24,22 +44,111 @@ All the fields are in the form of `field:value` terminated by `\r\n`.

## Notes

-* `used_memory` is the total number of bytes allocated by Redis using its
-  allocator (either standard `libc` `malloc`, or an alternative allocator such
-  as [`tcmalloc`][hcgcpgp]
-
-* `used_memory_rss` is the number of bytes that Redis allocated as seen by the
-  operating system.
-  Optimally, this number is close to `used_memory` and there is little memory
-  fragmentation.
-  This is the number reported by tools such as `top` and `ps`.
-  A large difference between these numbers means there is memory
-  fragmentation.
-  Because Redis does not have control over how its allocations are mapped to
-  memory pages, `used_memory_rss` is often the result of a spike in memory
-  usage.
-  The ratio between `used_memory_rss` and `used_memory` is given as
-  `mem_fragmentation_ratio`.
+Please note that, depending on the version of Redis, some of the fields have
+been added or removed. A robust client application should therefore parse the
+result of this command by skipping unknown properties, and gracefully handle
+missing fields.
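The robust-parsing advice above can be sketched in Python; the helper name and the sample payload are illustrative only, not part of any Redis client library:

```python
# Sketch of a forward-compatible INFO parser: unknown sections and
# properties are stored generically, missing fields are read with .get().
def parse_info(payload):
    info = {}
    section = "default"
    for line in payload.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):            # section header, e.g. "# Memory"
            section = line[1:].strip().lower()
            info.setdefault(section, {})
            continue
        if ":" not in line:                 # skip anything we don't understand
            continue
        field, _, value = line.partition(":")
        info.setdefault(section, {})[field] = value
    return info

sample = ("# Server\r\nredis_version:2.5.13\r\n"
          "# Memory\r\nused_memory:1014928\r\nsome_future_field:42\r\n")
parsed = parse_info(sample)
print(parsed["memory"]["used_memory"])    # -> 1014928
print(parsed["memory"].get("not_there"))  # -> None (graceful handling)
```

An unknown field such as `some_future_field` is simply carried along, and a field that is absent in an older server yields `None` instead of an error.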
+ +Here is the meaning of all fields in the **server** section: + +* `redis_version`: Version of the Redis server +* `redis_git_sha1`: Git SHA1 +* `redis_git_dirty`: Git dirty flag +* `os`: Operating system hosting the Redis server +* `arch_bits`: Architecture (32 or 64 bits) +* `multiplexing_api`: event loop mechanism used by Redis +* `gcc_version`: Version of the GCC compiler used to compile the Redis server +* `process_id`: PID of the server process +* `run_id`: Random value identifying the Redis server (to be used by Sentinel and Cluster) +* `tcp_port`: TCP/IP listen port +* `uptime_in_seconds`: Number of seconds since Redis server start +* `uptime_in_days`: Same value expressed in days +* `lru_clock`: Clock incrementing every minute, for LRU management + +Here is the meaning of all fields in the **clients** section: + +* `connected_clients`: Number of client connections (excluding connections from slaves) +* `client_longest_output_list`: longest output list among current client connections +* `client_biggest_input_buf`: biggest input buffer among current client connections +* `blocked_clients`: Number of clients pending on a blocking call (BLPOP, BRPOP, BRPOPLPUSH) + +Here is the meaning of all fields in the **memory** section: + +* `used_memory`: total number of bytes allocated by Redis using its + allocator (either standard `libc` `jemalloc`, or an alternative allocator such + as [`tcmalloc`][hcgcpgp] +* `used_memory_human`: Human readable representation of previous value +* `used_memory_rss`: Number of bytes that Redis allocated as seen by the + operating system (a.k.a resident set size). This is the number reported by tools + such as `top` and `ps`. 
+* `used_memory_peak`: Peak memory consumed by Redis (in bytes) +* `used_memory_peak_human`: Human readable representation of previous value +* `used_memory_lua`: Number of bytes used by the Lua engine +* `mem_fragmentation_ratio`: Ratio between `used_memory_rss` and `used_memory` +* `mem_allocator`: Memory allocator, chosen at compile time. + +Ideally, the resident set size (rss) value should be close to `used_memory`. +A large difference between these numbers means there is memory fragmentation +(internal or external), represented by `mem_fragmentation_ratio`. + +Because Redis does not have control over how its allocations are mapped to +memory pages, high `used_memory_rss` is often the result of a spike in memory +usage. + +Here is the meaning of all fields in the **perstence** section: + +* `loading:0 +* `rdb_changes_since_last_save:0 +* `rdb_bgsave_in_progress:0 +* `rdb_last_save_time:1343589517 +* `rdb_last_bgsave_status:ok +* `rdb_last_bgsave_time_sec:-1 +* `rdb_current_bgsave_time_sec:-1 +* `aof_enabled:0 +* `aof_rewrite_in_progress:0 +* `aof_rewrite_scheduled:0 +* `aof_last_rewrite_time_sec:-1 +* `aof_current_rewrite_time_sec:-1 +* `aof_last_bgrewrite_status:ok + +Here is the meaning of all fields in the **stats** section: + +* `total_connections_received:1 +* `total_commands_processed:0 +* `instantaneous_ops_per_sec:0 +* `rejected_connections:0 +* `expired_keys:0 +* `evicted_keys:0 +* `keyspace_hits:0 +* `keyspace_misses:0 +* `pubsub_channels:0 +* `pubsub_patterns:0 +* `latest_fork_usec:0 + +Here is the meaning of all fields in the **replication** section: + +* `role:master +* `connected_slaves:0 + +Here is the meaning of all fields in the **cpu** section: + +* `used_cpu_sys:0.06 +* `used_cpu_user:0.08 +* `used_cpu_sys_children:0.00 +* `used_cpu_user_children:0.00 + +Here is the meaning of all fields in the **commandstats** section: + + +Here is the meaning of all fields in the **cluster** section: + +* `cluster_enabled:0 + +Here is the meaning of all 
fields in the **keyspace** section:
+
+db0:keys=3,expires=0
+
+

 * `changes_since_last_save` refers to the number of operations that produced
   some kind of change in the dataset since the last time either `SAVE` or

From f74c613a46fc29da4fc0602a259d63f7ba3671ea Mon Sep 17 00:00:00 2001
From: antirez
Date: Tue, 31 Jul 2012 10:42:19 +0200
Subject: [PATCH 0208/2880] A few improvements to the FAQ.

---
 topics/faq.md | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/topics/faq.md b/topics/faq.md
index 177557e2ba..9bd4ea7fb8 100644
--- a/topics/faq.md
+++ b/topics/faq.md
@@ -62,11 +62,11 @@ you'll probably notice there is something wrong.

The INFO command will report the amount of memory Redis is using so you
can write scripts that monitor your Redis servers checking
for critical conditions.

-You can also use the "maxmemory" option in the config file to put a limit to
-the memory Redis can use. If this limit is reached Redis will start to reply
+Alternatively you can use the "maxmemory" option in the config file to put a limit
+to the memory Redis can use. If this limit is reached Redis will start to reply
 with an error to write commands (but will continue to accept read-only
 commands), or you can configure it to evict keys when the max memory limit
-is reached.
+is reached, in case you are using Redis for caching.

## Background saving is failing with a fork() error under Linux even if I've a lot of free RAM!

@@ -97,7 +97,7 @@ from Red Hat Magazine, ["Understanding Virtual Memory"][redhatvm].

[redhatvm]: http://www.redhat.com/magazine/001nov04/features/vm/

-## Are Redis on disk snapshots atomic?
+## Are Redis on-disk-snapshots atomic?

Yes, redis background saving process is always fork(2)ed when the server is
outside of the execution of a command, so every command reported to be atomic
## Redis is single threaded, how can I exploit multiple CPU / cores? -Simply start multiple instances of Redis in the same box and -treat them as different servers. At some point a single box may not be -enough anyway, so if you want to use multiple CPUs you can start thinking -at some way to shard earlier. However note that using pipelining Redis running +It's very unlikely that CPU becomes your bottleneck with Redis, as usually Redis is either memory or network bound. For instance using pipelining Redis running on an average Linux system can deliver even 500k requests per second, so if your application mainly uses O(N) or O(log(N)) commands it is hardly going to use too much CPU. +However to maximize CPU usage you can start multiple instances of Redis in +the same box and treat them as different servers. At some point a single +box may not be enough anyway, so if you want to use multiple CPUs you can +start thinking at some way to shard earlier. + In Redis there are client libraries such Redis-rb (the Ruby client) and Predis (one of the most used PHP clients) that are able to handle multiple servers automatically using _consistent hashing_. -## What is the maximum number of keys a single Redis instance can hold? and what the max number of elements in a List, Set, Ordered Set? +## What is the maximum number of keys a single Redis instance can hold? and what the max number of elements in a List, Set, Sorted Set? In theory Redis can handle up to 2^32 keys, and was tested in practice to handle at least 250 million of keys per instance. We are working in order to experiment with larger values. -Every list, set, and ordered set, can hold 2^32 elements. +Every list, set, and sorted set, can hold 2^32 elements. In other words your limit is likely the available memory in your system. 
From 7731a698935d72e253d5d0d418e70ab350d7517c Mon Sep 17 00:00:00 2001 From: antoniojrod Date: Tue, 31 Jul 2012 16:37:33 -0300 Subject: [PATCH 0209/2880] Update commands/save.md Fixed typo --- commands/save.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/save.md b/commands/save.md index 783c9d117c..4dbc6d5cde 100644 --- a/commands/save.md +++ b/commands/save.md @@ -2,7 +2,7 @@ The `SAVE` commands performs a **synchronous** save of the dataset producing a _point in time_ snapshot of all the data inside the Redis instance, in the form of an RDB file. -You almost never what to call `SAVE` in production environments where it will +You almost never want to call `SAVE` in production environments where it will block all the other clients. Instead usually `BGSAVE` is used. However in case of issues preventing Redis to create the background saving child From f7e3ebe805ef1ee866ae57ab87e8ff25beecde17 Mon Sep 17 00:00:00 2001 From: Didier Spezia Date: Tue, 31 Jul 2012 23:33:22 +0200 Subject: [PATCH 0210/2880] INFO command documentation drop 2 --- commands/info.md | 114 +++++++++++++++++++++++++++++------------------ 1 file changed, 71 insertions(+), 43 deletions(-) diff --git a/commands/info.md b/commands/info.md index 336fce7df7..083339cb84 100644 --- a/commands/info.md +++ b/commands/info.md @@ -75,85 +75,113 @@ Here is the meaning of all fields in the **clients** section: Here is the meaning of all fields in the **memory** section: * `used_memory`: total number of bytes allocated by Redis using its - allocator (either standard `libc` `jemalloc`, or an alternative allocator such - as [`tcmalloc`][hcgcpgp] + allocator (either standard **libc**, **jemalloc**, or an alternative allocator such + as [**tcmalloc**][hcgcpgp] * `used_memory_human`: Human readable representation of previous value * `used_memory_rss`: Number of bytes that Redis allocated as seen by the operating system (a.k.a resident set size). 
This is the number reported by tools - such as `top` and `ps`. + such as **top** and **ps**. * `used_memory_peak`: Peak memory consumed by Redis (in bytes) * `used_memory_peak_human`: Human readable representation of previous value * `used_memory_lua`: Number of bytes used by the Lua engine * `mem_fragmentation_ratio`: Ratio between `used_memory_rss` and `used_memory` * `mem_allocator`: Memory allocator, chosen at compile time. -Ideally, the resident set size (rss) value should be close to `used_memory`. -A large difference between these numbers means there is memory fragmentation -(internal or external), represented by `mem_fragmentation_ratio`. +Ideally, the `used_memory_rss` value should be only slightly higher than `used_memory`. +When rss >> used, a large difference means there is memory fragmentation +(internal or external), which can be evaluated by checking `mem_fragmentation_ratio`. +When used >> rss, it means part of Redis memory has been swapped off by the operating +system: expect some significant latencies. Because Redis does not have control over how its allocations are mapped to memory pages, high `used_memory_rss` is often the result of a spike in memory usage. -Here is the meaning of all fields in the **perstence** section: - -* `loading:0 -* `rdb_changes_since_last_save:0 -* `rdb_bgsave_in_progress:0 -* `rdb_last_save_time:1343589517 -* `rdb_last_bgsave_status:ok -* `rdb_last_bgsave_time_sec:-1 -* `rdb_current_bgsave_time_sec:-1 -* `aof_enabled:0 -* `aof_rewrite_in_progress:0 -* `aof_rewrite_scheduled:0 -* `aof_last_rewrite_time_sec:-1 -* `aof_current_rewrite_time_sec:-1 -* `aof_last_bgrewrite_status:ok +When Redis frees memory, the memory is given back to the allocator, and the +allocator may or may not give the memory back to the system. There may be +a discrepancy between the `used_memory` value and memory consumption as +reported by the operating system. 
It may be due to the fact that memory has been
+used and released by Redis, but not given back to the system. The `used_memory_peak`
+value is generally useful to check this point.
+
+Here is the meaning of all fields in the **persistence** section:
+
+* `loading`: Flag indicating if the load of a dump file is on-going
+* `rdb_changes_since_last_save`: Number of changes since the last dump
+* `rdb_bgsave_in_progress`: Flag indicating an RDB save is on-going
+* `rdb_last_save_time`: Epoch-based timestamp of last successful RDB save
+* `rdb_last_bgsave_status`: Status of the last RDB save operation
+* `rdb_last_bgsave_time_sec`: Duration of the last RDB save operation in seconds
+* `rdb_current_bgsave_time_sec`: Duration of the on-going RDB save operation if any
+* `aof_enabled`: Flag indicating AOF logging is activated
+* `aof_rewrite_in_progress`: Flag indicating an AOF rewrite operation is on-going
+* `aof_rewrite_scheduled`: Flag indicating an AOF rewrite operation
+  will be scheduled once the on-going RDB save is complete.
+* `aof_last_rewrite_time_sec`: Duration of the last AOF rewrite operation in seconds
+* `aof_current_rewrite_time_sec`: Duration of the on-going AOF rewrite operation if any
+* `aof_last_bgrewrite_status`: Status of the last AOF rewrite operation
+
+`changes_since_last_save` refers to the number of operations that produced
+some kind of change in the dataset since the last time either `SAVE` or
+`BGSAVE` was called.
+
+If AOF is activated, these additional fields will be added:
+
+* `aof_current_size`: AOF current file size
+* `aof_base_size`: AOF file size on latest startup or rewrite
+* `aof_pending_rewrite`: Flag indicating an AOF rewrite operation
+  will be scheduled once the on-going RDB save is complete.
+* `aof_buffer_length`: Size of the AOF buffer +* `aof_rewrite_buffer_length`: Size of the AOF rewrite buffer +* `aof_pending_bio_fsync`: Number of fsync pending jobs in background I/O queue +* `aof_delayed_fsync`: Delayed fsync counter + +If a load operation is on-going, these additional fields will be added: + +* `loading_start_time`: Epoch-based timestamp of the start of the load operation +* `loading_total_bytes`: Total file size +* `loading_loaded_bytes`: Number of bytes already loaded +* `loading_loaded_perc`: Same value expressed as a percentage +* `loading_eta_seconds`: ETA in seconds for the load to be complete Here is the meaning of all fields in the **stats** section: -* `total_connections_received:1 -* `total_commands_processed:0 -* `instantaneous_ops_per_sec:0 -* `rejected_connections:0 -* `expired_keys:0 -* `evicted_keys:0 -* `keyspace_hits:0 -* `keyspace_misses:0 -* `pubsub_channels:0 -* `pubsub_patterns:0 -* `latest_fork_usec:0 +* `total_connections_received`: Total number of connections accepted by the server +* `total_commands_processed`: Total number of commands processed by the server +* `instantaneous_ops_per_sec`: Number of commands processed per second +* `rejected_connections`: Number of connections rejected because of maxclients limit +* `expired_keys`: Total number of key expiration events +* `evicted_keys`: Number of evicted keys due to maxmemory limit +* `keyspace_hits`: Number of successful lookup of keys in the main dictionary +* `keyspace_misses`: Number of failed lookup of keys in the main dictionary +* `pubsub_channels`: Global number of pub/sub channels with client subscriptions +* `pubsub_patterns`: Global number of pub/sub pattern with client subscriptions +* `latest_fork_usec`: Duration of the latest fork operation in microseconds Here is the meaning of all fields in the **replication** section: * `role:master -* `connected_slaves:0 +* `connected_slaves: Here is the meaning of all fields in the **cpu** section: -* 
`used_cpu_sys:0.06 -* `used_cpu_user:0.08 -* `used_cpu_sys_children:0.00 -* `used_cpu_user_children:0.00 +* `used_cpu_sys`: +* `used_cpu_user`: +* `used_cpu_sys_children`: +* `used_cpu_user_children`: Here is the meaning of all fields in the **commandstats** section: Here is the meaning of all fields in the **cluster** section: -* `cluster_enabled:0 +* `cluster_enabled`: Here is the meaning of all fields in the **keyspace** section: db0:keys=3,expires=0 - -* `changes_since_last_save` refers to the number of operations that produced - some kind of change in the dataset since the last time either `SAVE` or - `BGSAVE` was called. - * `allocation_stats` holds a histogram containing the number of allocations of a certain size (up to 256). This provides a means of introspection for the type of allocations performed From 2b4c50c9de59d96ad1c39b39b4fbae12d2441f5b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ce=CC=81dric=20Deltheil?= Date: Wed, 1 Aug 2012 15:46:00 +0200 Subject: [PATCH 0211/2880] topics/latency.md: typo --- topics/latency.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/latency.md b/topics/latency.md index 73ab98e837..261a5ea2dd 100644 --- a/topics/latency.md +++ b/topics/latency.md @@ -327,7 +327,7 @@ and **so**, that counts the amount of memory swapped from/to the swap file. If you see non zero counts in those two columns then there is swapping activity in your system. -Finally, the **iostat** command be be used to check the global I/O activity of +Finally, the **iostat** command can be used to check the global I/O activity of the system. $ iostat -xk 1 From f08a408f2127dd5a4464f29511e318c4a4b4dc7e Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 2 Aug 2012 15:05:02 +0200 Subject: [PATCH 0212/2880] Sentinel Rules. 
---
 topics/sentinel.md | 85 +++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 76 insertions(+), 9 deletions(-)

diff --git a/topics/sentinel.md b/topics/sentinel.md
index d667c3c3bc..9ce0ac4e70 100644
--- a/topics/sentinel.md
+++ b/topics/sentinel.md
@@ -19,8 +19,8 @@ can be also invoked using the `--sentinel` option of the normal `redis-server`
executable.

**WARNING:** Redis Sentinel is currently a work in progress. This document
-describes how to use what we already have and may change as the Sentinel
-implementation changes.
+describes how to use what is already implemented, and may change as the
+Sentinel implementation evolves.

Redis Sentinel is compatible with Redis 2.4.16 or greater, and redis 2.6.0-rc6
or greater.

@@ -53,8 +53,8 @@ Both ways work the same.

Configuring Sentinel
---

-In the root of the Redis source distribution you will find a `sentinel.conf`
-file that is a self-documented example configuration file you can use to
+The Redis source distribution contains a file called `sentinel.conf`
+that is a self-documented example configuration file you can use to
 configure Sentinel, however a typical minimal configuration file looks like
 the following:

@@ -138,6 +138,27 @@ The ODOWN condition **only applies to masters**. For other kind of
instances Sentinel don't require any agreement, so the ODOWN state is never
reached for slaves and other sentinels.

+The behavior of Redis Sentinel can be described by a set of rules that every
+Sentinel follows. The complete behavior of Sentinel as a distributed system
+composed of multiple Sentinels only results from these rules being followed by
+every single Sentinel instance. The following is the first set of rules.
+In the course of this document more rules will be added in the appropriate
+sections.
+
+**Sentinel Rule #1**: Every Sentinel sends a **PING** request to every known master, slave, and sentinel instance, every second.
+
+**Sentinel Rule #2**: An instance is Subjectively Down (**SDOWN**) if the latest valid reply to **PING** was received more than `down-after-milliseconds` milliseconds ago. Acceptable PING replies are: +PONG, -LOADING, -MASTERDOWN.
+
+**Sentinel Rule #3**: Every Sentinel is able to reply to the command **SENTINEL is-master-down-by-addr `<ip> <port>`**. This command replies true if the specified address is the one of a master instance, and the master is not in **SDOWN** state.
+
+**Sentinel Rule #4**: If a master is in **SDOWN** condition, every other Sentinel also monitoring this master is queried for confirmation of this state, every second, using the **SENTINEL is-master-down-by-addr** command.
+
+**Sentinel Rule #5**: If a master is in **SDOWN** condition, and enough other Sentinels (to reach the configured quorum) agree about the condition, with a reply to **SENTINEL is-master-down-by-addr** that is no older than five seconds, then the master is marked as Objectively Down (**ODOWN**).
+
+**Sentinel Rule #6**: Every Sentinel sends an **INFO** request to every known master and slave instance, one time every 10 seconds. If a master is in **ODOWN** condition, its slaves are asked for **INFO** every second instead of being asked every 10 seconds.
+
+**Sentinel Rule #7**: If the **first** INFO reply a Sentinel receives about a master shows that it is actually a slave, Sentinel will update the configuration to actually monitor the master reported by the INFO output instead. So it is safe to start Sentinel against slaves.
+
 Sentinels and Slaves auto discovery
 ---

@@ -153,6 +174,12 @@ This is obtained by sending *Hello Messages* into the channel named
Similarly you don't need to configure what is the list of the slaves
attached to a master, as Sentinel will auto discover this list querying
Redis.
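Rules #1 and #2 above amount to a simple timeout check on the latest valid PING reply; a minimal sketch in Python (names and data shapes are illustrative, not Sentinel's actual internals):

```python
import time

DOWN_AFTER_MILLISECONDS = 30000          # per-master configuration value

def is_sdown(last_valid_ping_reply_at, now=None):
    """True if the latest valid PING reply (PONG, -LOADING or -MASTERDOWN)
    is older than the configured down-after-milliseconds limit."""
    now = time.time() if now is None else now
    elapsed_ms = (now - last_valid_ping_reply_at) * 1000
    return elapsed_ms > DOWN_AFTER_MILLISECONDS

now = time.time()
print(is_sdown(now - 5, now=now))    # replied 5 seconds ago  -> False
print(is_sdown(now - 60, now=now))   # replied 60 seconds ago -> True
```

Note that per Rule #2 the timer is reset by any acceptable reply, including error replies like `-LOADING`, not only by `PONG`.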
+**Sentinel Rule #8**: Every Sentinel publishes a message to every monitored master Pub/Sub channel `__sentinel__:hello`, every five seconds, announcing its presence with ip, port, runid, and ability to failover (according to the `can-failover` configuration directive in `sentinel.conf`).
+
+**Sentinel Rule #9**: Every Sentinel is subscribed to the Pub/Sub channel `__sentinel__:hello` of every master, looking for unknown sentinels. When new sentinels are detected, we add them as sentinels of this master.
+
+**Sentinel Rule #10**: Before adding a new sentinel to a master a Sentinel always checks if there is already a sentinel with the same runid or the same address (ip and port pair). In that case all the matching sentinels are removed, and the new one added.
+
 Sentinel API
 ===

@@ -234,11 +261,6 @@ and is only specified if the instance is not a master itself.
* **+tilt** -- Tilt mode entered.
* **-tilt** -- Tilt mode exited.

-The Redis CLIENT SENTINELS command
----
-
-* Work in progress, not yet implemented in Redis instances.
-
 Sentinel failover
 ===

@@ -269,6 +291,33 @@ For a Sentinel to sense to be the **Objective Leader**, that is, the Sentinel th
Once a Sentinel thinks it is the Leader, the failover starts, but there is always a delay of five seconds plus an additional random delay. This is an additional layer of protection because if during this period we see another instance turning a slave into a master, we detect it as another instance starting the failover and turn ourselves into an observer instead.

+**Sentinel Rule #11**: A **Good Slave** is a slave with the following requirements:
+* It is not in SDOWN nor in ODOWN condition.
+* We have a valid connection to it currently (not in DISCONNECTED state).
+* Latest PING reply we received from it is not older than five seconds.
+* Latest INFO reply we received from it is not older than five seconds.
+* The latest INFO reply reported that the link with the master is down for no more than the time elapsed since we saw the master entering SDOWN state, plus ten times the configured `down_after_milliseconds` parameter. So for instance if a Sentinel is configured to sense the SDOWN condition after 10 seconds, and the master is down since 50 seconds, we accept a slave as a Good Slave only if the replication link was disconnected less than `50+(10*10)` seconds (two minutes and half more or less). + +**Sentinel Rule #12**: A **Subjective Leader** from the point of view of a Sentinel, is the Sentinel (including itself) with the lower runid monitoring a given master, that also replied to PING less than 5 seconds ago, reported to be able to do the failover via Pub/Sub hello channel, and is not in DISCONNECTED state. + +**Sentinel Rule #12**: If a master is down we ask `SENTINEL is-master-down-by-addr` to every other connected Sentinel as explained in Sentinel Rule #4. This command will also reply with the runid of the **Subjective Leader** from the point of view of the asked Sentinel. A given Sentinel believes to be the **Objective Leader** of a master if it is reported to be the subjective leader by N Sentinels (including itself), where: +* N must be equal or greater to the configured quorum for this master. +* N mast be equal or greater to the majority of the voters (`num_votres/2+1`), considering only the Sentinels that also reported the master to be down. + +**Sentinel Rule #13**: A Sentinel starts the failover as a **Leader** (that is, the Sentinel actually sending the commands to reconfigure the Redis servers) if the following conditions are true at the same time: +* The master is in ODOWN condition. +* The Sentinel is configured to perform the failover with `can-failover` set to yes. +* There is at least a Good Slave from the point of view of the Sentinel. +* The Sentinel believes to be the Objective Leader. 
+* There is no failover in progress already detected for this master.
+
+**Sentinel Rule #14**: A Sentinel detects a failover as an **Observer** (that is, the Sentinel just follows the failover generating the appropriate events in the log file and Pub/Sub interface, but without actively reconfiguring instances) if the following conditions are true at the same time:
+* There is no failover already in progress.
+* A slave instance of the monitored master turned into a master.
+However the failover **will NOT be sensed as started if the slave instance turns into a master and at the same time the runid has changed** from the previous one. This means the instance turned into a master because of a restart, and this is not a valid condition to consider it a slave election.
+
+**Sentinel Rule #15**: A Sentinel starting a failover as leader does not immediately start it. It enters a state called **wait-start**, that lasts a random amount of time between 5 seconds and 15 seconds. During this time **Sentinel Rule #14** still applies: if a valid slave promotion is detected, the failover as leader is aborted and a failover as observer is detected instead.
+
 End of failover
 ---
@@ -295,6 +344,11 @@ Note that when a leader terminates a failover for timeout, it sends a
 configured, in the hope that they'll receive the command and replicate
 with the new master eventually.
 
+**Sentinel Rule #16**: A failover is considered complete, for both a leader and an observer, if:
+* One slave was promoted to master (and the Sentinel can detect that this actually happened via INFO output), and all the additional slaves are configured to replicate with the promoted slave (again, the Sentinel needs to sense it using the INFO output).
+* There is already a correctly promoted slave, but the configured `failover-timeout` time has already elapsed without any progress in the reconfiguration of the additional slaves. In this case the leader sends a best-effort `SLAVEOF` command to all the not yet configured slaves.
+In both of the above conditions the promoted slave **must be reachable** (not in SDOWN state), otherwise the failover is never considered to be complete.
+
 Leader failing during failover
 ---
@@ -316,6 +370,13 @@ with duplicated SLAVEOF commands do not create any race condition, but
 at the same time we want to be sure that all the slaves are reconfigured
 in the case the original leader is no longer working.
 
+**Sentinel Rule #17**: A Sentinel that is an observer for a failover in progress
+will turn itself into a failover leader, continuing the configuration of the
+additional slaves, if all the following conditions are true:
+* A failover is in progress, and this Sentinel is an observer.
+* It detects that it is an objective leader (so likely the previous leader is no longer reachable by the other sentinels).
+* At least 25% of the configured `failover-timeout` has elapsed without any progress in the observed failover process.
+
 Promoted slave failing during failover
 ---
@@ -334,6 +395,10 @@ leader sentinel aborts a failover it sends a `SLAVEOF` command to all the
 slaves already reconfigured or in the process of being reconfigured to
 switch the configuration back to the original master.
 
+**Sentinel Rule #18**: A Sentinel will consider the failover process aborted, both when acting as leader and when acting as observer, if the following conditions are true:
+* A failover is in progress and a slave to promote was already selected (or, in the case of the observer, was already detected as master).
+* The promoted slave is in **Extended SDOWN** condition (continuously in SDOWN condition for at least ten times the configured `down-after-milliseconds`).
+
 Manual interactions
 ---
@@ -440,6 +505,8 @@ Note: because currently slave priority is not implemented, the selection
 is performed only discarding unreachable slaves and picking the one with
 the lower Run ID.
+**Sentinel Rule #19**: A Sentinel performing the failover as leader will select the slave to promote among the existing **Good Slaves** (see Rule #11), taking the one with the lowest slave priority. When the priority is the same, the slave with the lexicographically smaller runid is preferred.
+
 APPENDIX B - Get started with Sentinel in five minutes
 ===

From 513c74f1a3aa209f6113826d28257eef6c0fc25b Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 2 Aug 2012 15:07:41 +0200
Subject: [PATCH 0213/2880] Fixed typo

---
 topics/sentinel.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/topics/sentinel.md b/topics/sentinel.md
index 9ce0ac4e70..a7578599a2 100644
--- a/topics/sentinel.md
+++ b/topics/sentinel.md
@@ -149,7 +149,7 @@ sections.
 
 **Sentinel Rule #2**: An instance is Subjectively Down (**SDOWN**) if the latest valid reply to **PING** was received more than `down-after-milliseconds` milliseconds ago. Acceptable PING replies are: +PONG, -LOADING, -MASTERDOWN.
 
-**Sentinel Rule #3**: Every Sentinel is able to reply to the command **SENTINEL is-master-down-by-addr ` `**. This command replies true if the specified address is the one of a master instance, and the master is not in **SDOWN** state.
+**Sentinel Rule #3**: Every Sentinel is able to reply to the command **SENTINEL is-master-down-by-addr ` `**. This command replies true if the specified address is the one of a master instance, and the master is in **SDOWN** state.
 
 **Sentinel Rule #4**: If a master is in **SDOWN** condition, every other Sentinel also monitoring this master, is queried for confirmation of this state, every second, using the **SENTINEL is-master-down-by-addr** command.
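The Good Slave filter (Rule #11) and the selection criteria (Rule #19) above can be sketched in a few lines of Python. This is only an illustration of the rules, not Sentinel's actual implementation; the field names on the slave records are hypothetical, chosen for readability:

```python
# Sketch of the leader's slave-selection step, per Sentinel Rules #11 and #19.
# Slave records are hypothetical dicts; times are in seconds since an epoch.

def is_good_slave(slave, now, master_sdown_since, down_after_ms):
    """A 'Good Slave' per Rule #11 (simplified sketch)."""
    # Rule #11, last bullet: link may be down at most for the time elapsed
    # since the master entered SDOWN, plus 10x down-after-milliseconds.
    max_link_down = (now - master_sdown_since) + 10 * (down_after_ms / 1000.0)
    return (not slave["sdown"] and not slave["odown"]
            and slave["connected"]
            and now - slave["last_ping_reply"] <= 5
            and now - slave["last_info_reply"] <= 5
            and slave["master_link_down_seconds"] <= max_link_down)

def select_slave_to_promote(slaves, now, master_sdown_since, down_after_ms):
    """Rule #19: lowest priority wins; ties broken by smallest runid."""
    good = [s for s in slaves
            if is_good_slave(s, now, master_sdown_since, down_after_ms)]
    if not good:
        return None
    return min(good, key=lambda s: (s["priority"], s["runid"]))
```

Note how a slave in SDOWN is excluded regardless of its priority: the Good Slave filter runs before the priority/runid comparison.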
From 54585f8d3feee524c1e5f2b3175c5d96585f1335 Mon Sep 17 00:00:00 2001 From: Joffrey JAFFEUX Date: Thu, 2 Aug 2012 16:15:23 +0300 Subject: [PATCH 0214/2880] Update topics/sentinel.md Typo --- topics/sentinel.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/sentinel.md b/topics/sentinel.md index a7578599a2..41726f6018 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -46,7 +46,7 @@ with the following command line: Otherwise you can use directly the `redis-server` executable starting it in Sentinel mode: - redis-server /path/to/sentine.conf --sentinel + redis-server /path/to/sentinel.conf --sentinel Both ways work the same. From 717c8ac4132388501164e706fdca482bc2221051 Mon Sep 17 00:00:00 2001 From: Justin Case Date: Thu, 2 Aug 2012 17:26:17 +0200 Subject: [PATCH 0215/2880] rm trailing whitespace --- topics/sentinel.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/sentinel.md b/topics/sentinel.md index a7578599a2..afa58b49ad 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -534,7 +534,7 @@ At this point you should see something like the following in every Sentinel you [4747] 23 Jul 14:49:19.645 * +sentinel sentinel 127.0.0.1:26379 127.0.0.1 26379 @ mymaster 127.0.0.1 6379 [4747] 23 Jul 14:49:21.659 * +sentinel sentinel 127.0.0.1:26381 127.0.0.1 26381 @ mymaster 127.0.0.1 6379 - redis-cli -p 26379 sentinel masters + redis-cli -p 26379 sentinel masters 1) 1) "name" 2) "mymaster" 3) "ip" From 94fa43bd80e1c9bd49de89f8f1eb6a92d8a15eee Mon Sep 17 00:00:00 2001 From: Justin Case Date: Thu, 2 Aug 2012 17:28:05 +0200 Subject: [PATCH 0216/2880] typos --- topics/sentinel.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/sentinel.md b/topics/sentinel.md index afa58b49ad..09a793da1a 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -526,7 +526,7 @@ To create the three configurations just create three files where you put somethi sentinel can-failover mymaster yes 
sentinel parallel-syncs mymaster 1 -Note: where you see `port 26379`, use 26380 for the second Sentinel, and 26381 for the third Sentinel (any other differnet non colliding port will do of course). Also note that the `down-after-milliseconds` configuration option is set to just five seconds, that is a good value to play with Sentinel, but not good for production environments. +Note: where you see `port 26379`, use 26380 for the second Sentinel, and 26381 for the third Sentinel (any other different non colliding port will do of course). Also note that the `down-after-milliseconds` configuration option is set to just five seconds, that is a good value to play with Sentinel, but not good for production environments. At this point you should see something like the following in every Sentinel you are running: @@ -560,6 +560,6 @@ At this point you should see something like the following in every Sentinel you 23) "quorum" 24) "2" -To see how the failover works, just put down your slave (for instance sending `DEUBG SEGFAULT` to crash it) and see what happens. +To see how the failover works, just put down your slave (for instance sending `DEBUG SEGFAULT` to crash it) and see what happens. This HOWTO is a work in progress, more information will be added in the near future. From 84b427a821dc3982cffca4c97954690ce633c828 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Leandro=20L=C3=B3pez=20=28inkel=29?= Date: Thu, 2 Aug 2012 15:28:55 -0300 Subject: [PATCH 0217/2880] Fixed typo --- topics/sentinel.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/topics/sentinel.md b/topics/sentinel.md index a7578599a2..a590f499ab 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -237,9 +237,9 @@ and is only specified if the instance is not a master itself. * **+slave** `` -- A new slave was detected and attached. * **+failover-state-reconf-slaves** `` -- Failover state changed to `reconf-slaves` state. 
* **+failover-detected** `` -- A failover started by another Sentinel or any other external entity was detected (An attached slave turned into a master). -* **+salve-reconf-sent** `` -- The leader sentinel sent the `SLAVEOF` command to this instance in order to reconfigure it for the new slave. -* **+salve-reconf-inprog** `` -- The slave being reconfigured showed to be a slave of the new master ip:port pair, but the synchronization process is not yet complete. -* **+salve-reconf-done** `` -- The slave is now synchronized with the new master. +* **+slave-reconf-sent** `` -- The leader sentinel sent the `SLAVEOF` command to this instance in order to reconfigure it for the new slave. +* **+slave-reconf-inprog** `` -- The slave being reconfigured showed to be a slave of the new master ip:port pair, but the synchronization process is not yet complete. +* **+slave-reconf-done** `` -- The slave is now synchronized with the new master. * **-dup-sentinel** `` -- One or more sentinels for the specified master were removed as duplicated (this happens for instance when a Sentinel instance is restarted). * **+sentinel** `` -- A new sentinel for this master was detected and attached. * **+sdown** `` -- The specified instance is now in Subjectively Down state. 
@@ -534,7 +534,7 @@ At this point you should see something like the following in every Sentinel you [4747] 23 Jul 14:49:19.645 * +sentinel sentinel 127.0.0.1:26379 127.0.0.1 26379 @ mymaster 127.0.0.1 6379 [4747] 23 Jul 14:49:21.659 * +sentinel sentinel 127.0.0.1:26381 127.0.0.1 26381 @ mymaster 127.0.0.1 6379 - redis-cli -p 26379 sentinel masters + redis-cli -p 26379 sentinel masters 1) 1) "name" 2) "mymaster" 3) "ip" From 104315d1cefda3ccd240fedc80a64cca30295de9 Mon Sep 17 00:00:00 2001 From: Didier Spezia Date: Thu, 2 Aug 2012 21:39:39 +0200 Subject: [PATCH 0218/2880] INFO command final drop --- commands/info.md | 85 ++++++++++++++++++++++++++++++------------------ 1 file changed, 53 insertions(+), 32 deletions(-) diff --git a/commands/info.md b/commands/info.md index 083339cb84..7ea82fc41e 100644 --- a/commands/info.md +++ b/commands/info.md @@ -23,32 +23,25 @@ When no parameter is provided, the `default` option is assumed. @return -@bulk-reply: in the following format (compacted for brevity): +@bulk-reply: as a collection of text lines. -``` -redis_version:2.2.2 -uptime_in_seconds:148 -used_cpu_sys:0.01 -used_cpu_user:0.03 -used_memory:768384 -used_memory_rss:1536000 -mem_fragmentation_ratio:2.00 -changes_since_last_save:118 -keyspace_hits:174 -keyspace_misses:37 -allocation_stats:4=56,8=312,16=1498,... -db0:keys=1240,expires=0 -``` +Lines can contain a section name (starting with a # character) or a property. +All the properties are in the form of `field:value` terminated by `\r\n`. -All the fields are in the form of `field:value` terminated by `\r\n`. +```cli +INFO +``` ## Notes Please note depending on the version of Redis some of the fields have been added or removed. A robust client application should therefore parse the -result of this command by skipping unknown property, and gracefully handle +result of this command by skipping unknown properties, and gracefully handle missing fields. +Here is the description of fields for Redis >= 2.4. 
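Following the robustness advice above (skip unknown properties, tolerate missing fields), a client-side parser for INFO output can be sketched like this. This is a minimal illustration, not code from any real client library, and the sample fields in the usage below are only examples:

```python
def parse_info(payload):
    """Parse INFO output into {section: {field: value}}.

    Lines starting with '#' open a new section; 'field:value' lines are
    stored under the current section; anything else is silently skipped,
    so unknown or malformed lines never break the parser.
    """
    sections, current = {}, None
    for line in payload.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):
            current = line[1:].strip()
            sections[current] = {}
        elif ":" in line:
            field, _, value = line.partition(":")
            # Values stay as strings; the caller decides how to convert them.
            sections.setdefault(current, {})[field] = value
    return sections
```

A quick use, with an abbreviated (hypothetical) payload: `parse_info("# Replication\r\nrole:master\r\n")["Replication"]["role"]` yields `"master"`.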
+ + Here is the meaning of all fields in the **server** section: * `redis_version`: Version of the Redis server @@ -160,31 +153,59 @@ Here is the meaning of all fields in the **stats** section: Here is the meaning of all fields in the **replication** section: -* `role:master -* `connected_slaves: +* `role`: The role is slave if the instance is slave to another instance. + Otherwise it is master. + Note that a slave can be master of another slave (daisy chaining). + +If the instance is a slave, these additional fields are provided: + +* `master_host`: Host or IP address of the master +* `master_port`: Master listening TCP port +* `master_link_status`: Status of the link (up/down) +* `master_last_io_seconds_ago`: Number of seconds since the last interaction with master +* `master_sync_in_progress`: Indicate the master is SYNCing to the slave + +If a SYNC operation is on-going, these additional fields are provided: + +* `master_sync_left_bytes`: Number of bytes left before SYNCing is complete +* `master_sync_last_io_seconds_ago`: Number of seconds since last transfer I/O during a SYNC operation + +IF the link between master and slave is down, an additional field is provided: + +* `master_link_down_since_seconds`: Number of seconds since the link is down + +The following field is always provided: + +* `connected_slaves`: Number of connected slaves + +For each slave, the following line is added: + +* `slaveXXX`: id, ip address, port, state Here is the meaning of all fields in the **cpu** section: -* `used_cpu_sys`: -* `used_cpu_user`: -* `used_cpu_sys_children`: -* `used_cpu_user_children`: +* `used_cpu_sys`: System CPU consumed by the Redis server +* `used_cpu_user`:User CPU consumed by the Redis server +* `used_cpu_sys_children`: System CPU consumed by the background processes +* `used_cpu_user_children`: User CPU consumed by the background processes -Here is the meaning of all fields in the **commandstats** section: +The **commandstats** section provides statistics 
based on the command type, +including the number of calls, the total CPU time consumed by these commands, +and the average CPU consumed per command execution. +For each command type, the following line is added: -Here is the meaning of all fields in the **cluster** section: +* `cmdstat_XXX`:calls=XXX,usec=XXX,usec_per_call=XXX -* `cluster_enabled`: +The **cluster** section currently only contains a unique field: -Here is the meaning of all fields in the **keyspace** section: +* `cluster_enabled`: Indicate Redis cluster is enabled -db0:keys=3,expires=0 +The **keyspace** section provides statistics on the main dictionary of each database. +The statistics are the number of keys, and the number of keys with an expiration. +For each database, the following line is added: -* `allocation_stats` holds a histogram containing the number of allocations of - a certain size (up to 256). - This provides a means of introspection for the type of allocations performed - by Redis at run time. +* `dbXXX`:keys=XXX,expires=XXX [hcgcpgp]: http://code.google.com/p/google-perftools/ From 96083426f6f2aea3159978cc739e9b35f5c9ff39 Mon Sep 17 00:00:00 2001 From: Didier Spezia Date: Thu, 2 Aug 2012 21:53:02 +0200 Subject: [PATCH 0219/2880] Minor INFO fixes --- commands/info.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/commands/info.md b/commands/info.md index 7ea82fc41e..f041881503 100644 --- a/commands/info.md +++ b/commands/info.md @@ -153,8 +153,7 @@ Here is the meaning of all fields in the **stats** section: Here is the meaning of all fields in the **replication** section: -* `role`: The role is slave if the instance is slave to another instance. - Otherwise it is master. +* `role`: Value is "master" if the instance is slave of no one, or "slave" if the instance is enslaved to a master. Note that a slave can be master of another slave (daisy chaining). 
If the instance is a slave, these additional fields are provided: @@ -170,7 +169,7 @@ If a SYNC operation is on-going, these additional fields are provided: * `master_sync_left_bytes`: Number of bytes left before SYNCing is complete * `master_sync_last_io_seconds_ago`: Number of seconds since last transfer I/O during a SYNC operation -IF the link between master and slave is down, an additional field is provided: +If the link between master and slave is down, an additional field is provided: * `master_link_down_since_seconds`: Number of seconds since the link is down From 7d7f5f0792c86c25576606e75f5578ea023f5dec Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 6 Sep 2012 12:43:00 +0200 Subject: [PATCH 0220/2880] Clarifications about Lua -> Redis protocol conversions. --- commands/eval.md | 22 +++++++++++++++++++--- 1 file changed, 19 insertions(+), 3 deletions(-) diff --git a/commands/eval.md b/commands/eval.md index 31522f78b9..40d1014da0 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -101,9 +101,9 @@ The following table shows you all the conversions rules: **Lua to Redis** conversion table. -* Lua number -> Redis integer reply +* Lua number -> Redis integer reply (the number is converted into an integer) * Lua string -> Redis bulk reply -* Lua table (array) -> Redis multi bulk reply +* Lua table (array) -> Redis multi bulk reply (truncated to the first nil inside the Lua array if any) * Lua table with a single `ok` field -> Redis status reply * Lua table with a single `err` field -> Redis error reply * Lua boolean false -> Redis Nil bulk reply. @@ -113,6 +113,11 @@ Redis to Lua conversion rule: * Lua boolean true -> Redis integer reply with value of 1. +Also there are two important rules to note: + +* Lua has a single numerical type, Lua numbers. There is no distinction between integers and floats. So we always convert Lua numbers into integer replies, removing the decimal part of the number if any. 
If you want to return a float from Lua you should return it as a string, exactly like Redis itself does (see for instance the `ZSCORE` command).
+* There is [no simple way to have nils inside Lua arrays](http://www.lua.org/pil/19.1.html): this is a result of Lua table semantics, so when Redis converts a Lua array into Redis protocol the conversion is stopped if a nil is encountered.
+
 Here are a few conversion examples:
 
 ```
@@ -128,11 +133,22 @@ Here are a few conversion examples:
 > eval "return redis.call('get','foo')" 0
 "bar"
 ```
-
 The last example shows how it is possible to receive the exact return value of
 `redis.call()` or `redis.pcall()` from Lua that would be returned if the command
 was called directly.
 
+In the following example we can see how floats and arrays with nils are handled:
+
+```
+> eval "return {1,2,3.3333,'foo',nil,'bar'}" 0
+1) (integer) 1
+2) (integer) 2
+3) (integer) 3
+4) "foo"
+```
+
+As you can see 3.3333 is converted into 3, and the *bar* string is never returned as there is a nil before it.
+
 ## Atomicity of scripts
 
 Redis uses the same Lua interpreter to run all the commands.

From 70ae0c5c77097f7af21accc05ea5e130d0d0ca09 Mon Sep 17 00:00:00 2001
From: antirez
Date: Tue, 18 Sep 2012 12:54:45 +0200
Subject: [PATCH 0221/2880] tools.json describes the tools section in the clients page.
---
 tools.json | 262 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 262 insertions(+)
 create mode 100644 tools.json

diff --git a/tools.json b/tools.json
new file mode 100644
index 0000000000..20b03cb693
--- /dev/null
+++ b/tools.json
@@ -0,0 +1,262 @@
+[
+  {
+    "name": "Resque",
+    "language": "Ruby",
+    "repository": "https://github.com/defunkt/resque",
+    "description": "Resque is a Redis-backed Ruby library for creating background jobs, placing them on multiple queues, and processing them later.",
+    "authors": ["defunkt"]
+  },
+  {
+    "name": "Rq",
+    "language": "Python",
+    "repository": "https://github.com/nvie/rq",
+    "description": "Minimalistic Python task queue. Supports only Redis.",
+    "authors": ["nvie"]
+  },
+  {
+    "name": "Celery",
+    "language": "Python",
+    "repository": "https://github.com/ask/celery",
+    "description": "Python task queue. Supports multiple backends.",
+    "authors": ["asksolem"]
+  },
+  {
+    "name": "Fnordmetric",
+    "language": "Ruby",
+    "repository": "https://github.com/paulasmuth/fnordmetric",
+    "description": "Redis/ruby-based realtime Event-Tracking app.",
+    "authors": ["paulasmuth"]
+  },
+  {
+    "name": "Ohm",
+    "language": "Ruby",
+    "repository": "https://github.com/soveran/ohm",
+    "description": "Object-hash mapping library for Redis.",
+    "authors": ["soveran"]
+  },
+  {
+    "name": "Kombu",
+    "language": "Python",
+    "repository": "https://github.com/ask/kombu",
+    "description": "Python AMQP Framework with Redis support",
+    "authors": []
+  },
+  {
+    "name": "Sider",
+    "language": "Python",
+    "repository": "https://bitbucket.org/dahlia/sider",
+    "description": "Python persistent object library based on Redis.",
+    "authors": ["hongminhee"]
+  },
+  {
+    "name": "Redis-objects",
+    "language": "Ruby",
+    "repository": "https://github.com/nateware/redis-objects",
+    "description": "Map Redis types directly to Ruby objects.",
+
"authors": ["nateware"] + }, + { + "name": "Redisco", + "language": "Python", + "repository": "https://github.com/iamteem/redisco", + "description": "Loose implementation of Ohm in Python (see above for Ohm project) - Warning: Not actively maintained at the moment.", + "authors": ["iamteem"] + }, + { + "name": "Redis-rdb-tools", + "language": "Python", + "repository": "https://github.com/sripathikrishnan/redis-rdb-tools", + "description": "Parse Redis dump.rdb files, Analyze Memory, and Export Data to JSON.", + "authors": ["srithedabbler"] + }, + { + "name": "Rdb-parser", + "language": "Javascript", + "repository": "https://github.com/pconstr/rdb-parser", + "description": "node.js asynchronous streaming parser for redis RDB database dumps.", + "authors": ["pconstr"] + }, + { + "name": "Redis-sync", + "language": "Javascript", + "repository": "https://github.com/pconstr/redis-sync", + "description": "A node.js redis replication slave toolkit", + "authors": ["pconstr"] + }, + { + "name": "Ost", + "language": "Ruby", + "repository": "https://github.com/soveran/ost", + "description": "Redis based queues and workers.", + "authors": ["soveran"] + }, + { + "name": "Meerkat", + "language": "Ruby", + "repository": "http://carlhoerberg.github.com/meerkat/", + "description": "Rack middleware for Server Sent Events with multiple backends.", + "authors": ["carlhoerberg"] + }, + { + "name": "Redis-sampler", + "language": "Ruby", + "repository": "https://github.com/antirez/redis-sampler", + "description": "Sample a Redis DB to understand dataset composition.", + "authors": ["antirez"] + }, + { + "name": "Recommendify", + "language": "Ruby", + "repository": "https://github.com/paulasmuth/recommendify", + "description": "Ruby/Redis based recommendation engine (collaborative filtering).", + "authors": ["paulasmuth"] + }, + { + "name": "Redis-store", + "language": "Ruby", + 
"repository": "https://github.com/jodosha/redis-store", + "description": "Namespaced Rack::Session, Rack::Cache, I18n and cache Redis stores for Ruby web frameworks.", + "authors": ["jodosha"] + }, + { + "name": "Redmon", + "language": "Ruby", + "repository": "https://github.com/steelThread/redmon", + "description": "A web interface for managing redis: cli, admin, and live monitoring.", + "authors": ["steel_thread"] + }, + { + "name": "Rollout", + "language": "Ruby", + "repository": "https://github.com/jamesgolick/rollout", + "description": "Conditionally roll out features with redis.", + "authors": ["jamesgolick"] + }, + { + "name": "Webdis", + "language": "C", + "url": "http://webd.is/", + "repository": "https://github.com/nicolasff/webdis", + "description": "A Redis HTTP interface with JSON output.", + "authors": ["yowgi"] + }, + { + "name": "Soulmate", + "language": "Ruby", + "repository": "https://github.com/seatgeek/soulmate", + "description": "Redis-backed service for fast autocompleting.", + "authors": ["seatgeek"] + }, + { + "name": "Redis_failover", + "language": "Ruby", + "repository": "https://github.com/ryanlecompte/redis_failover", + "description": "Redis Failover is a ZooKeeper-based automatic master/slave failover solution for Ruby.", + "authors": ["ryanlecompte"] + }, + { + "name": "Redis-dump", + "language": "Ruby", + "repository": "https://github.com/delano/redis-dump", + "description": "Backup and restore your Redis data to and from JSON. 
Warning: alpha code.", + "authors": ["solutious"] + }, + { + "name": "Sidekiq", + "language": "Ruby", + "repository": "http://mperham.github.com/sidekiq/", + "description": "Simple, efficient message processing for your Rails 3 application.", + "authors": ["mperham"] + }, + { + "name": "Omhiredis", + "language": "C", + "repository": "http://www.rsyslog.com/doc/build_from_repo.html", + "description": "redis output plugin for rsyslog (rsyslog dev, and rsyslog head).", + "authors": ["taotetek"] + }, + { + "name": "Mod_redis", + "language": "C", + "repository": "https://github.com/sneakybeaky/mod_redis", + "description": "An Apache HTTPD module for speaking to redis via HTTP", + "authors": [] + }, + { + "name": "leaderboard", + "language": "Ruby", + "repository": "https://github.com/agoragames/leaderboard", + "description": "Leaderboards backed by Redis.", + "authors": ["czarneckid"] + }, + { + "name": "Redis-rdb", + "language": "Ruby", + "repository": "https://github.com/nrk/redis-rdb", + "description": "A set of utilities to handle Redis .rdb files with Ruby.", + "authors": ["JoL1hAHN"] + }, + { + "name": "Lua-ohm", + "language": "Lua", + "repository": "https://github.com/slact/lua-ohm", + "description": "Lua Redis Object-hash-mapping and more", + "authors": [] + }, + { + "name": "PHP-Resque", + "language": "PHP", + "repository": "https://github.com/chrisboulton/php-resque", + "description": "Port of Resque to PHP.", + "authors": ["surfichris"] + }, + { + "name": "phpRedisAdmin", + "language": "PHP", + "repository": "https://github.com/ErikDubbelboer/phpRedisAdmin", + "description": "phpRedisAdmin is a simple web interface to manage Redis databases.", + "authors": [] + }, + { + "name": "HighcoTimelineBundle", + "language": "PHP", + "repository": "https://github.com/stephpy/TimelineBundle", + "description": "TimelineBundle is a Bundle which works with 
Symfony 2.* which provides a timeline for a subject as Facebook can do.", + "authors": ["stephpy"] + }, + { + "name": "Stdnet", + "language": "Python", + "repository": "https://github.com/lsbardel/python-stdnet", + "description": "Redis data manager with advanced query and search API.", + "authors": ["lsbardel"] + }, + { + "name": "Retools", + "language": "Python", + "repository": "https://github.com/bbangert/retools", + "description": "Caching and locking helper library.", + "authors": ["benbangert"] + }, + { + "name": "Redback", + "language": "Javascript", + "repository": "http://github.com/chriso/redback", + "description": "Higher-level Redis constructs - social graph, full text search, rate limiting, key pairs.", + "authors": ["chris6F"] + }, + { + "name": "Recurrent", + "language": "Javascript", + "repository": "https://github.com/pconstr/recurrent", + "description": "A redis-backed manager of recurrent jobs, for node.js", + "authors": ["pconstr"] + }, + { + "name": "Amico", + "language": "Ruby", + "repository": "https://github.com/agoragames/amico", + "description": "Relationships (e.g. friendships) backed by Redis.", + "authors": ["czarneckid"] + } +] From 776826c5c2a099c15025bb96915a76824768a868 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 18 Sep 2012 13:08:23 +0200 Subject: [PATCH 0222/2880] Mark Eredis as recommended client for Erlang. 
---
 clients.json | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/clients.json b/clients.json
index a41948ebf2..e2e4ccfdd6 100644
--- a/clients.json
+++ b/clients.json
@@ -47,7 +47,8 @@
     "language": "Erlang",
     "repository": "https://github.com/wooga/eredis",
     "description": "Redis client with a focus on performance",
-    "authors": ["wooga"]
+    "authors": ["wooga"],
+    "recommended": true
   },
 
   {

From a2eb2dd41c8ae77e88cbf12e8fc1dddda9290509 Mon Sep 17 00:00:00 2001
From: antirez
Date: Tue, 18 Sep 2012 16:06:22 +0200
Subject: [PATCH 0223/2880] Configuration page added.

---
 topics/config.md | 100 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 100 insertions(+)
 create mode 100644 topics/config.md

diff --git a/topics/config.md b/topics/config.md
new file mode 100644
index 0000000000..4e5229f224
--- /dev/null
+++ b/topics/config.md
@@ -0,0 +1,100 @@
+Redis configuration
+===
+
+Redis is able to start without a configuration file using a built-in default
+configuration; however, this setup is only recommended for testing and
+development purposes.
+
+The proper way to configure Redis is by providing a Redis configuration file,
+usually called `redis.conf`.
+
+The `redis.conf` file contains a number of directives that have a very simple
+format:
+
+    keyword argument1 argument2 ... argumentN
+
+This is an example of a configuration directive:
+
+    slaveof 127.0.0.1 6380
+
+It is possible to provide strings containing spaces as arguments using
+quotes, as in the following example:
+
+    requirepass "hello world"
+
+The list of configuration directives, with their meaning and intended usage,
+is available in the self-commented example redis.conf shipped with the
+Redis distribution.
+
+* The self-commented [redis.conf for Redis 2.6](https://raw.github.com/antirez/redis/2.6/redis.conf).
+* The self-commented [redis.conf for Redis 2.4](https://raw.github.com/antirez/redis/2.4/redis.conf).
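The directive format described above (a keyword followed by arguments, with optional double-quoted strings) can be tokenized with a few lines of Python. This is only a sketch built on the standard `shlex` module, not Redis's actual config parser:

```python
import shlex

def parse_directive(line):
    """Split a redis.conf-style line into (keyword, [arguments]).

    Double-quoted arguments such as: requirepass "hello world"
    are kept as a single argument; '#' comments and blank lines
    yield None.
    """
    parts = shlex.split(line, comments=True)
    if not parts:
        return None
    return parts[0], parts[1:]
```

For example, `parse_directive('requirepass "hello world"')` yields the keyword `requirepass` with the single argument `hello world`.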
+
+Passing arguments via command line
+---
+
+Since Redis 2.6 it is possible to also pass Redis configuration parameters
+using the command line directly. This is very useful for testing purposes.
+The following is an example that starts a new Redis instance using port 6380
+as a slave of the instance running at 127.0.0.1 port 6379.
+
+    ./redis-server --port 6380 --slaveof 127.0.0.1 6379
+
+The format of the arguments passed via the command line is exactly the same
+as the one used in the redis.conf file, with the exception that the keyword
+is prefixed with `--`.
+
+Note that internally this generates an in-memory temporary config file
+(possibly concatenating the config file passed by the user if any) where
+arguments are translated into the format of redis.conf.
+
+Changing Redis configuration while the server is running
+---
+
+It is possible to reconfigure Redis on the fly without stopping and restarting
+the service, or to query the current configuration programmatically, using the
+special commands `CONFIG SET` and `CONFIG GET`.
+
+Not all the configuration directives are supported in this way, but most
+are supported as expected. Please refer to the `CONFIG SET` and `CONFIG GET`
+pages for more information.
+
+Note that modifying the configuration on the fly **has no effect on the
+redis.conf file** so at the next restart of Redis the old configuration will
+be used instead.
+
+Make sure to also modify the `redis.conf` file according to the configuration
+you set using `CONFIG SET`. There are plans to provide a `CONFIG REWRITE`
+command that will be able to rewrite the `redis.conf` file, updating the
+configuration according to the current server configuration, without modifying
+the comments and the structure of the current file.
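`CONFIG GET` accepts a glob-style pattern and returns every matching parameter. The behavior can be approximated with Python's `fnmatch` over a hypothetical in-memory snapshot of the configuration; this is a sketch of the matching semantics, not the server's real pattern-matching code:

```python
from fnmatch import fnmatchcase

def config_get(config, pattern):
    """Approximate CONFIG GET: return every parameter whose name
    matches the glob pattern, e.g. 'maxmemory*' matches both
    'maxmemory' and 'maxmemory-policy'."""
    return {k: v for k, v in config.items() if fnmatchcase(k, pattern)}
```

Usage with an illustrative snapshot: `config_get({"maxmemory": "2mb", "port": "6380"}, "maxmemory*")` returns only the `maxmemory` entry.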
+ +Configuring Redis as a cache +--- + +If you plan to use Redis just as a cache where every key will have an +expire set, you may consider using the following configuration instead +(assuming a max memory limit of 2 megabytes as an example): + + maxmemory 2mb + maxmemory-policy allkeys-lru + +In this configuration there is no need for the application to set a +time to live for keys using the `EXPIRE` command (or equivalent) since +all the keys will be evicted using an approximated LRU algorithm every time +we hit the 2 megabyte memory limit. + +This is more memory efficient since setting expires on keys uses additional +memory. Also an LRU behavior is usually preferable to a fixed expire +for every key, so that the *working set* of your data (the keys that are +used more frequently) will likely last longer. + +Basically in this configuration Redis acts in a similar way to memcached. + +When Redis is used as a cache in this way, if the application also requires +the use of Redis as a store, it is strongly suggested to create two Redis +instances, one as a cache, configured in this way, and one as a store, +configured according to your persistence needs and only holding keys +that are not about cached data. + +*Note:* The user is advised to read the example redis.conf to check how the +other available maxmemory policies work. From 303d4a9b478a1078c1b0c3963a87a7bf2bdbb55d Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 18 Sep 2012 16:09:37 +0200 Subject: [PATCH 0224/2880] Explicit links to CONFIG GET/SET commands. Apparently redis.io code does not handle well auto-linking to commands composed of multiple tokens. Added explicit links to fix the issue for now.
--- topics/config.md | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/topics/config.md b/topics/config.md index 4e5229f224..e9b092ca0f 100644 --- a/topics/config.md +++ b/topics/config.md @@ -52,10 +52,12 @@ Changing Redis configuration while the server is running It is possible to reconfigure Redis on the fly without stopping and restarting the service, or querying the current configuration programmatically using the -special commands `CONFIG SET` and `CONFIG GET`. +special commands [CONFIG SET](/commands/config-set) and +[CONFIG GET](/commands/config-get) Not all the configuration directives are supported in this way, but most -are supported as expected. Please refer to the `CONFIG SET` and `CONFIG GET` +are supported as expected. Please refer to the +[CONFIG SET](/commands/config-set) and [CONFIG GET](/commands/config-get) pages for more information. Note that modifying the configuration on the fly **has no effects on the @@ -63,7 +65,8 @@ redis.conf file** so at the next restart of Redis the old configuration will be used instead. Make sure to also modify the `redis.conf` file accordingly to the configuration -you set using `CONFIG SET`. There are plans to provide a `CONFIG REWRITE` +you set using [CONFIG SET](/commands/config-set). +There are plans to provide a `CONFIG REWRITE` command that will be able to run the `redis.conf` file rewriting the configuration accordingly to the current server configuration, without modifying the comments and the structure of the current file. From db4ec13ff896d2753246b9643a9ec89b3a84e62c Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 18 Sep 2012 17:08:39 +0200 Subject: [PATCH 0225/2880] Grammar fix for Config page. 
--- topics/config.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/config.md b/topics/config.md index e9b092ca0f..a8b2558b6d 100644 --- a/topics/config.md +++ b/topics/config.md @@ -23,11 +23,11 @@ quotes, as in the following example: requirepass "hello world" The list of configuration directives, and their meaning and intended usage -is available in the self-commented example redis.conf shipped into the +is available in the self documented example redis.conf shipped into the Redis distribution. -* The self commented [redis.conf for Redis 2.6](https://raw.github.com/antirez/redis/2.6/redis.conf). -* The self commented [redis.conf for Redis 2.4](https://raw.github.com/antirez/redis/2.4/redis.conf). +* The self documented [redis.conf for Redis 2.6](https://raw.github.com/antirez/redis/2.6/redis.conf). +* The self documented [redis.conf for Redis 2.4](https://raw.github.com/antirez/redis/2.4/redis.conf). Passing arguments via command line --- From 516b9d05e1d99b3ea6a9047d534df38fefd8331d Mon Sep 17 00:00:00 2001 From: Rob Sanheim Date: Fri, 21 Sep 2012 13:00:54 -0500 Subject: [PATCH 0226/2880] some small grammar tweaks --- topics/data-types-intro.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md index 3feb1024f2..43cfb6913a 100644 --- a/topics/data-types-intro.md +++ b/topics/data-types-intro.md @@ -17,7 +17,7 @@ supported as values: score, but where elements are always taken in order without requiring a sorting operation. -It's not always trivial to grasp how this data types work and what to use in +It's not always trivial to grasp how these data types work and what to use in order to solve a given problem from the [command reference](/commands), so this document is a crash course to Redis data types and their most used patterns. 
@@ -59,14 +59,14 @@ Let's play a bit with the string type: my binary safe value As you can see using the [SET command](/commands/set) and the [GET -command](/commands/get) is trivial to set values to strings and have this +command](/commands/get) is trivial to set values to strings and have the strings returned back. Values can be strings (including binary data) of every kind, for instance you can store a jpeg image inside a key. A value can't be bigger than 512 MB. Even if strings are the basic values of Redis, there are interesting operations -you can perform against them. For instance one is atomic increment: +you can perform against them. For instance, one is atomic increment: $ redis-cli set counter 100 OK From 504028dcf47f6d418ebcb0afaa76bfa10934c3ce Mon Sep 17 00:00:00 2001 From: Rob Sanheim Date: Fri, 21 Sep 2012 13:04:49 -0500 Subject: [PATCH 0227/2880] more tweaks, a bit more concise --- topics/data-types-intro.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md index 43cfb6913a..5831a0de7b 100644 --- a/topics/data-types-intro.md +++ b/topics/data-types-intro.md @@ -80,23 +80,23 @@ you can perform against them. For instance, one is atomic increment: The [INCR](/commands/incr) command parses the string value as an integer, increments it by one, and finally sets the obtained value as the new string value. There are other similar commands like [INCRBY](/commands/incrby), -[DECR](commands/decr) and [DECRBY](/commands/decrby). Actually internally it's +[DECR](commands/decr) and [DECRBY](/commands/decrby). Internally it's always the same command, acting in a slightly different way. -What means that INCR is atomic? That even multiple clients issuing INCR against +What does it mean that INCR is atomic? That even multiple clients issuing INCR against the same key will never incur into a race condition. 
For instance it can never happen that client 1 read "10", client 2 read "10" at the same time, both -increment to 11, and set the new value of 11. The final value will always be of +increment to 11, and set the new value of 11. The final value will always be 12 and the read-increment-set operation is performed while all the other clients are not executing a command at the same time. Another interesting operation on string is the [GETSET](/commands/getset) command, that does just what its name suggests: Set a key to a new value, -returning the old value, as result. Why this is useful? Example: you have a +returning the old value as a result. Why is this useful? Example: you have a system that increments a Redis key using the [INCR](/commands/incr) command every time your web site receives a new visit. You want to collect this information one time every hour, without losing a single key. You can GETSET -the key assigning it the new value of "0" and reading the old value back. +the key, assigning it the new value of "0" and reading the old value back. The List type --- @@ -151,7 +151,7 @@ element of the range to return. Both the indexes can be negative to tell Redis to start to count from the end, so -1 is the last element, -2 is the penultimate element of the list, and so forth. -As you can guess from the example above, lists can be used, for instance, in +As you can guess from the example above, lists could be used in order to implement a chat system. Another use is as queues in order to route messages between different processes.
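The queue usage just mentioned can be sketched with a stand-in for a Redis list. This Python illustration uses a `deque` and invented helper names to show how pushing on one side (LPUSH) and popping from the other (RPOP) yields first-in, first-out delivery; it does not talk to Redis:

```python
from collections import deque

queue = deque()

def lpush(q, value):
    """LPUSH: insert the value at the head of the list."""
    q.appendleft(value)

def rpop(q):
    """RPOP: remove and return the element at the tail, or None if empty."""
    return q.pop() if q else None

for msg in ["job-1", "job-2", "job-3"]:
    lpush(queue, msg)

# LPUSH on one side plus RPOP on the other behaves as a FIFO queue:
print([rpop(queue) for _ in range(3)])  # ['job-1', 'job-2', 'job-3']
```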
But the key point is that *you can use Redis lists every time you require to access data in the same order they are From caeeaa91967b4d96ccbcb76a3c240e38916e9937 Mon Sep 17 00:00:00 2001 From: Rob Sanheim Date: Fri, 21 Sep 2012 13:12:18 -0500 Subject: [PATCH 0228/2880] clarify the tag example --- topics/data-types-intro.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md index 5831a0de7b..ef9442f64a 100644 --- a/topics/data-types-intro.md +++ b/topics/data-types-intro.md @@ -174,7 +174,7 @@ can be referenced in multiple times: in a list to preserve their chronological order, in a Set to remember they are about a specific category, in another list but only if this object matches some kind of requisite, and so forth. -Let's return back to the reddit.com example. A more credible pattern for adding +Let's return back to the reddit.com example. A better pattern for adding submitted links (news) to the list is the following: $ redis-cli incr next.news.id @@ -230,12 +230,12 @@ Now let's check if a given element exists: expressing relations between objects. For instance we can easily use Redis Sets in order to implement tags. -A simple way to model this is to have, for every object you want to tag, a Set -with all the IDs of the tags associated with the object, and for every tag that -exists, a Set of of all the objects tagged with this tag. +A simple way to model this is to have a Set for every object containing its associated +tag IDs, and a Set for every tag containing the object IDs that have that tag. 
For instance if our news ID 1000 is tagged with tag 1,2,5 and 77, we can -specify the following two Sets: +specify the following five Sets - one Set for the object's tags, and four Sets +for the four tags: $ redis-cli sadd news:1000:tags 1 (integer) 1 From 0870f900222256be27c6917701935332969e148f Mon Sep 17 00:00:00 2001 From: Rob Sanheim Date: Fri, 21 Sep 2012 13:19:41 -0500 Subject: [PATCH 0229/2880] more concise --- topics/data-types-intro.md | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md index ef9442f64a..0c55e3e7b0 100644 --- a/topics/data-types-intro.md +++ b/topics/data-types-intro.md @@ -378,9 +378,8 @@ all, it's already all sorted: Didn't know that Linus was younger than Yukihiro btw ;) -Anyway I want to order these elements the other way around, using -[ZREVRANGE](/commands/zrevrange) instead of [ZRANGE](/commands/zrange) this -time: +What if I want to order them the opposite way, youngest to oldest? +Use [ZREVRANGE](/commands/zrevrange) instead of [ZRANGE](/commands/zrange): $ redis-cli zrevrange hackers 0 -1 1. Linus Torvalds @@ -399,8 +398,8 @@ the same time. Operating on ranges --- -Sorted sets are more powerful than this. They can operate on ranges. For -instance let's try to get all the individuals that were born up to the 1950. We +Sorted sets are more powerful than this. They can operate on ranges. +Let's get all the individuals that were born up to 1950, inclusive. We use the [ZRANGEBYSCORE](/commands/zrangebyscore) command to do it: $ redis-cli zrangebyscore hackers -inf 1950 @@ -411,7 +410,7 @@ use the [ZRANGEBYSCORE](/commands/zrangebyscore) command to do it: We asked Redis to return all the elements with a score between negative infinity and 1950 (both extremes are included). -It's also possible to remove ranges of elements. For instance let's remove all +It's also possible to remove ranges of elements.
Let's remove all the hackers born between 1940 and 1960 from the sorted set: $ redis-cli zremrangebyscore hackers 1940 1960 From 47ef20c5a8e45f9c69b33d9c94a579dd3e8fd9df Mon Sep 17 00:00:00 2001 From: Rob Sanheim Date: Fri, 21 Sep 2012 13:22:05 -0500 Subject: [PATCH 0230/2880] clarification --- topics/data-types-intro.md | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md index 0c55e3e7b0..374e33796b 100644 --- a/topics/data-types-intro.md +++ b/topics/data-types-intro.md @@ -427,13 +427,12 @@ populate a sorted set in order to generate the home page. A sorted set can contain all the news that are not older than a few days (we remove old entries from time to time using ZREMRANGEBYSCORE). A background job gets all the elements from this sorted set, get the user votes and the time of the news, and -compute the score to populate the *reddit.home.page* sorted set with the news +computes the score to populate the *reddit.home.page* sorted set with the news IDs and associated scores. To show the home page we just have to perform a blazingly fast call to ZRANGE. -From time to time we'll remove too old news from the *reddit.home.page* sorted -set as well in order for our system to work always against a limited set of -news. +From time to time we'll remove very old news from the *reddit.home.page* sorted +set to keep our system working with fresh news only. Updating the scores of a sorted set --- From f665db2447a4848fec0ef25692fa608b6ba36888 Mon Sep 17 00:00:00 2001 From: Rob Sanheim Date: Fri, 21 Sep 2012 13:23:45 -0500 Subject: [PATCH 0231/2880] bit more tweaks --- topics/data-types-intro.md | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md index 374e33796b..e2800211c1 100644 --- a/topics/data-types-intro.md +++ b/topics/data-types-intro.md @@ -442,7 +442,8 @@ updated at any time. 
Just calling again ZADD against an element already included in the sorted set will update its score (and position) in O(log(N)), so sorted sets are suitable even when there are tons of updates. -This tutorial is in no way complete, this is just the basics to get started -with Redis, read the [command reference](/commands) to discover a lot more. +This tutorial is in no way complete and has covered just the basics. +Read the [command reference](/commands) to discover a lot more. -Thanks for reading. Salvatore. +Thanks for reading, +Salvatore. From 5ef098be35d702bb5900620468093df4153e84f5 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 21 Sep 2012 22:29:55 +0200 Subject: [PATCH 0232/2880] Added Redigo client for Go. --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index e2e4ccfdd6..049d830d69 100644 --- a/clients.json +++ b/clients.json @@ -67,6 +67,15 @@ "authors": ["SunOf27"] }, + { + "name": "Redigo", + "language": "Go", + "repository": "https://github.com/garyburd/redigo", + "description": "Redigo is a Go client for the Redis database with support for Print-alike API, Pipelining (including transactions), Pub/Sub, Connection pooling, scripting.", + "authors": ["gburd"], + "recommended": true + }, + { "name": "Tideland RDC", "language": "Go", From cab7b14800e8eada9a77638b4e921e0c083da243 Mon Sep 17 00:00:00 2001 From: antirez Date: Sat, 22 Sep 2012 16:26:08 +0200 Subject: [PATCH 0233/2880] clients.json annotated with active status of clients. --- clients.json | 114 +++++++++++++++++++++++++++++++++------------------ 1 file changed, 73 insertions(+), 41 deletions(-) diff --git a/clients.json b/clients.json index 3442769ddd..a213f29e8c 100644 --- a/clients.json +++ b/clients.json @@ -6,7 +6,8 @@ "repository": "https://github.com/redis/redis-rb", "description": "Very stable and mature client. 
Install and require the hiredis gem before redis-rb for maximum performances.", "authors": ["ezmobius", "soveran", "djanowski", "pnoordhuis"], - "recommended": true + "recommended": true, + "active": true }, { @@ -18,11 +19,13 @@ }, { - "name": "redis-clojure", + "name": "carmine", "language": "Clojure", - "repository": "https://github.com/tavisrudd/redis-clojure", - "description": "", - "authors": ["tavisrudd"] + "repository": "https://github.com/ptaoussanis/carmine", + "description": "Simple, high-performance Redis (2.0+) client for Clojure.", + "authors": ["ptaoussanis"], + "recommended": true, + "active": true }, { @@ -31,14 +34,15 @@ "url": "http://www.cliki.net/cl-redis", "repository": "https://github.com/vseloved/cl-redis", "description": "", - "authors": ["BigThingist"] + "authors": ["BigThingist"], + "active": true }, { "name": "Erldis", "language": "Erlang", "repository": "https://github.com/japerk/erldis", - "description": "", + "description": "A Redis erlang client library.", "authors": ["dialtone_","japerk"] }, @@ -48,7 +52,8 @@ "repository": "https://github.com/wooga/eredis", "description": "Redis client with a focus on performance", "authors": ["wooga"], - "recommended": true + "recommended": true, + "active": true }, { @@ -56,15 +61,17 @@ "language": "Fancy", "repository": "https://github.com/bakkdoor/redis.fy", "description": "A Fancy Redis client library", - "authors": ["bakkdoor"] + "authors": ["bakkdoor"], + "active": true }, { "name": "Go-Redis", "language": "Go", "repository": "https://github.com/alphazero/Go-Redis", - "description": "", - "authors": ["SunOf27"] + "description": "Google Go Client and Connectors for Redis.", + "authors": ["SunOf27"], + "active": true }, { @@ -73,7 +80,8 @@ "repository": "https://github.com/garyburd/redigo", "description": "Redigo is a Go client for the Redis database with support for Print-alike API, 
Pipelining (including transactions), Pub/Sub, Connection pooling, scripting.", "authors": ["gburd"], - "recommended": true + "recommended": true, + "active": true }, { @@ -81,15 +89,17 @@ "language": "Go", "repository": "http://code.google.com/p/tcgl/", "description": "A flexible Go Redis client able to handle all commands", - "authors": ["themue"] + "authors": ["themue"], + "active": true }, { "name": "godis", "language": "Go", "repository": "https://github.com/simonz05/godis", - "description": "", - "authors": ["simonz05"] + "description": "A Redis client for Go.", + "authors": ["simonz05"], + "active": true }, { @@ -98,15 +108,17 @@ "url": "http://hackage.haskell.org/package/hedis", "repository": "https://github.com/informatikr/hedis", "description": "Supports the complete command set. Commands are automatically pipelined for high performance.", - "authors": [] + "authors": [], + "active": true }, { - "name": "redis", + "name": "redis package", "language": "Haskell", "url": "http://hackage.haskell.org/package/redis", - "description": "", - "authors": [] + "description": "This library is a Haskell driver for Redis. 
It's tested with current git version and with v2.4.6 of redis server.", + "authors": [], + "active": true }, { @@ -124,7 +136,8 @@ "repository": "https://github.com/xetorthio/jedis", "description": "", "authors": ["xetorthio"], - "recommended": true + "recommended": true, + "active": true }, { @@ -133,7 +146,8 @@ "url": "http://code.google.com/p/jredis", "repository": "https://github.com/alphazero/jredis", "description": "", - "authors": ["SunOf27"] + "authors": ["SunOf27"], + "active": true }, { @@ -159,7 +173,8 @@ "repository": "https://github.com/nrk/redis-lua", "description": "", "authors": ["JoL1hAHN"], - "recommended": true + "recommended": true, + "active": true }, { @@ -167,7 +182,8 @@ "language": "Lua", "repository": "https://github.com/agladysh/lua-hiredis", "description": "Lua bindings for the hiredis library", - "authors": ["agladysh"] + "authors": ["agladysh"], + "active": true }, { @@ -177,7 +193,8 @@ "repository": "https://github.com/melo/perl-redis", "description": "Perl binding for Redis database", "authors": ["pedromelo"], - "recommended": true + "recommended": true, + "active": true }, { @@ -185,7 +202,8 @@ "language": "Perl", "url": "http://search.cpan.org/dist/Redis-hiredis/", "description": "Perl binding for the hiredis C client", - "authors": ["neophenix"] + "authors": ["neophenix"], + "active": true }, { @@ -203,7 +221,8 @@ "url": "http://search.cpan.org/dist/MojoX-Redis", "repository": "https://github.com/und3f/mojox-redis", "description": "asynchronous Redis client for Mojolicious", - "authors": ["und3f"] + "authors": ["und3f"], + "active": true }, { @@ -220,7 +239,8 @@ "repository": "https://github.com/nrk/predis", "description": "Mature and supported", "authors": ["JoL1hAHN"], - "recommended": true + "recommended": true, + "active": true }, { @@ -229,7 +249,8 @@ "repository": "https://github.com/nicolasff/phpredis", 
"description": "This is a client written in C as a PHP module.", "authors": ["yowgi"], - "recommended": true + "recommended": true, + "active": true }, { @@ -238,7 +259,8 @@ "url": "http://rediska.geometria-lab.net", "repository": "https://github.com/Shumkov/Rediska", "description": "", - "authors": ["shumkov"] + "authors": ["shumkov"], + "active": true }, { @@ -254,7 +276,8 @@ "language": "PHP", "repository": "https://github.com/jdp/redisent", "description": "", - "authors": ["justinpoliey"] + "authors": ["justinpoliey"], + "active": true }, { @@ -263,7 +286,8 @@ "repository": "https://github.com/andymccurdy/redis-py", "description": "Mature and supported. Currently the way to go for Python.", "authors": ["andymccurdy"], - "recommended": true + "recommended": true, + "active": true }, { @@ -279,7 +303,8 @@ "language": "Python", "repository": "https://github.com/aallamaa/desir", "description": "", - "authors": ["aallamaa"] + "authors": ["aallamaa"], + "active": true }, { @@ -296,7 +321,8 @@ "repository": "https://github.com/debasishg/scala-redis", "description": "Apparently a fork of the original client from @alejandrocrosa", "authors": ["debasishg"], - "recommended": true + "recommended": true, + "active": true }, { @@ -318,7 +344,7 @@ "name": "Tcl Client", "language": "Tcl", "repository": "https://github.com/antirez/redis/blob/unstable/tests/support/redis.tcl", - "description": "The client used in the Redis test suite.", + "description": "The client used in the Redis test suite. 
Not really full featured nor designed to be used in the real world.", "authors": ["antirez"] }, @@ -328,7 +354,8 @@ "url": "https://github.com/ServiceStack/ServiceStack.Redis", "description": "This is a fork and improvement of the original C# client written by Miguel De Icaza.", "authors": ["demisbellot"], - "recommended": true + "recommended": true, + "active": true }, { @@ -337,7 +364,8 @@ "url": "http://code.google.com/p/booksleeve/", "description": "This client was developed by Stack Exchange for very high performance needs.", "authors": ["marcgravell"], - "recommended": true + "recommended": true, + "active": true }, { @@ -354,7 +382,8 @@ "url": "https://github.com/mythz/DartRedisClient", "description": "A high-performance async/non-blocking Redis client for Dart", "authors": ["demisbellot"], - "recommended": true + "recommended": true, + "active": true }, { @@ -370,7 +399,7 @@ "name": "em-redis", "language": "Ruby", "repository": "https://github.com/madsimian/em-redis", - "description": "", + "description": "An eventmachine-based implementation of the Redis protocol. No longer actively maintained.", "authors": ["madsimian"] }, @@ -380,7 +409,8 @@ "repository": "https://github.com/antirez/hiredis", "description": "This is the official C client. 
Support for the whole command set, pipelining, event driven programming.", "authors": ["antirez","pnoordhuis"], - "recommended": true + "recommended": true, + "active": true }, { @@ -388,7 +418,8 @@ "language": "C", "repository": "http://code.google.com/p/credis/source/browse", "description": "", - "authors": [""] + "authors": [""], + "active": true }, { @@ -397,7 +428,8 @@ "repository": "https://github.com/mranney/node_redis", "description": "Recommended client for node.", "authors": ["mranney"], - "recommended": true + "recommended": true, + "active": true }, { From ba7c547982586fb41284bdc84d87fec1f0d04e8d Mon Sep 17 00:00:00 2001 From: Sam Pullara Date: Sat, 22 Sep 2012 10:16:42 -0700 Subject: [PATCH 0234/2880] Added my redis-protocol client --- clients.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/clients.json b/clients.json index a213f29e8c..7f24804a07 100644 --- a/clients.json +++ b/clients.json @@ -167,6 +167,14 @@ "authors": ["e_mzungu"] }, + { + "name": "redis-protocol", + "language": "Java", + "repository": "https://github.com/spullara/redis-protocol", + "description": "Up to 2.6 compatible high-performance Java, Java w/Netty & Scala (finagle) client", + "authors": ["spullara"] + }, + { "name": "redis-lua", "language": "Lua", From 50ea21e7938f672fff0976cb6468349e33939b75 Mon Sep 17 00:00:00 2001 From: antirez Date: Sat, 22 Sep 2012 19:52:15 +0200 Subject: [PATCH 0235/2880] Added actively developed flag for redis-protocol Java client. 
--- clients.json | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 7f24804a07..92379735f8 100644 --- a/clients.json +++ b/clients.json @@ -172,7 +172,8 @@ "language": "Java", "repository": "https://github.com/spullara/redis-protocol", "description": "Up to 2.6 compatible high-performance Java, Java w/Netty & Scala (finagle) client", - "authors": ["spullara"] + "authors": ["spullara"], + "active": true }, { From 2996a532d17c97391ec2e8effb2cb199d26b0b7e Mon Sep 17 00:00:00 2001 From: MercuryRising Date: Sat, 22 Sep 2012 15:35:48 -0500 Subject: [PATCH 0236/2880] Update topics/security.md spelling change: While Redis does not [tries->try] to --- topics/security.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/security.md b/topics/security.md index f06229cfae..b44493dc12 100644 --- a/topics/security.md +++ b/topics/security.md @@ -53,7 +53,7 @@ can be used by an external attacker to delete the whole data set. Authentication feature --- -While Redis does not tries to implement Access Control, it provides +While Redis does not try to implement Access Control, it provides a tiny layer of authentication that is optionally turned on editing the **redis.conf** file. From 7d81c58cd07a4be3941d5631ffd1506ab1545459 Mon Sep 17 00:00:00 2001 From: antirez Date: Sun, 23 Sep 2012 11:49:31 +0200 Subject: [PATCH 0237/2880] SRANDMEMBER variant documented. --- commands/srandmember.md | 37 +++++++++++++++++++++++++++++-------- 1 file changed, 29 insertions(+), 8 deletions(-) diff --git a/commands/srandmember.md b/commands/srandmember.md index c6db17ac39..0e54f80bf9 100644 --- a/commands/srandmember.md +++ b/commands/srandmember.md @@ -1,18 +1,39 @@ -Return a random element from the set value stored at `key`. +When called with just the `key` argument, return a random element from the set value stored at `key`. 
-This operation is similar to `SPOP`, however while `SPOP` also removes the -randomly selected element from the set, `SRANDMEMBER` will just return a random -element without altering the original set in any way. +When called with the additional `count` argument, return an array of `count` **distinct elements** if `count` is positive. If called with a negative `count` the behavior changes and the command is allowed to return the **same element multiple times**. In this case the number of returned elements is the absolute value of the specified `count`. + +When called with just the key argument, the operation is similar to `SPOP`, however while `SPOP` also removes the randomly selected element from the set, `SRANDMEMBER` will just return a random element without altering the original set in any way. @return -@bulk-reply: the randomly selected element, or `nil` when `key` does not exist. +@bulk-reply: without the additional `count` argument the command returns a Bulk Reply with the randomly selected element, or `nil` when `key` does not exist. +@multi-bulk-reply: when the additional `count` argument is passed the command returns an array of elements, or an empty array when `key` does not exist. @examples ```cli -SADD myset "one" -SADD myset "two" -SADD myset "three" +SADD myset one two three SRANDMEMBER myset +SRANDMEMBER myset 2 +SRANDMEMBER myset -5 ``` + +## Specification of the behavior when count is passed + +When a count argument is passed and is positive, the elements are returned +as if every selected element is removed from the set (like the extraction +of numbers in the game of Bingo). However elements are **not removed** from +the Set. So basically: + +* No repeated elements are returned. +* If count is bigger than the number of elements inside the Set, the command will only return the whole set without additional elements.
+ +When instead the count is negative, the behavior changes and the extraction happens as if you put the extracted element inside the bag again after every extraction, so repeated elements are possible, and the number of elements requested is always returned as we can repeat the same elements again and again, with the exception of an empty Set (non existing key) that will always produce an empty array as a result. + +## Distribution of returned elements + +The distribution of the returned elements is far from perfect when the number of elements in the set is small; this is due to the fact that we used an approximated random element function that does not really guarantee good distribution. + +The algorithm used, that is implemented inside dict.c, samples the hash table buckets to find a non-empty one. Once a non-empty bucket is found, since we use chaining in our hash table implementation, the number of elements inside the bucket is checked and a random element is selected. + +This means that if you have two non-empty buckets in the entire hash table, and one has three elements while one has just one, the element that is alone in its bucket will be returned with much higher probability. From f13fbe3f375685cbf595935d6d8b6f9f976313e1 Mon Sep 17 00:00:00 2001 From: antirez Date: Sun, 23 Sep 2012 11:54:42 +0200 Subject: [PATCH 0238/2880] SRANDMEMBER doc update.
--- commands.json | 9 +++++++-- commands/srandmember.md | 2 +- 2 files changed, 8 insertions(+), 3 deletions(-) diff --git a/commands.json b/commands.json index 094bca80fd..9bd1ec155e 100644 --- a/commands.json +++ b/commands.json @@ -1656,12 +1656,17 @@ "group": "set" }, "SRANDMEMBER": { - "summary": "Get a random member from a set", - "complexity": "O(1)", + "summary": "Get one or multiple random members from a set", + "complexity": "Without the count argument O(1), otherwise O(N) where N is the absolute value of the passed count.", "arguments": [ { "name": "key", "type": "key" + }, + { + "name": "count", + "type": "integer", + "optional": true } ], "since": "1.0.0", diff --git a/commands/srandmember.md b/commands/srandmember.md index 0e54f80bf9..53526e48da 100644 --- a/commands/srandmember.md +++ b/commands/srandmember.md @@ -1,6 +1,6 @@ When called with just the `key` argument, return a random element from the set value stored at `key`. -When called with the additional `count` argument, return an array of `count` **distinct elements** if `count` is positive. If called with a negative `count` the behavior changes and the command is allowed to return the **same element multiple times**. In this case the number of returned elements is the absolute value of the specified `count`. +Starting from Redis version 2.6, when called with the additional `count` argument, return an array of `count` **distinct elements** if `count` is positive. If called with a negative `count` the behavior changes and the command is allowed to return the **same element multiple times**. In this case the number of returned elements is the absolute value of the specified `count`. When called with just the key argument, the operation is similar to `SPOP`, however while `SPOP` also removes the randomly selected element from the set, `SRANDMEMBER` will just return a random element without altering the original set in any way.
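The positive and negative `count` semantics documented in the patch above can be modeled with the standard `random` module. This Python sketch only illustrates the documented behavior on a plain set; it is not the bucket-sampling algorithm Redis implements in dict.c, and the function is an invented stand-in, not a real client call:

```python
import random

def srandmember(s, count):
    """Model SRANDMEMBER's count semantics on a plain Python set."""
    members = list(s)
    if count >= 0:
        # Positive count: distinct elements, capped at the set size.
        return random.sample(members, min(count, len(members)))
    # Negative count: repetitions allowed, abs(count) elements returned,
    # except that an empty set always yields an empty result.
    if not members:
        return []
    return [random.choice(members) for _ in range(-count)]

myset = {"one", "two", "three"}
assert len(set(srandmember(myset, 2))) == 2          # distinct elements
assert sorted(srandmember(myset, 5)) == sorted(myset)  # capped at set size
assert len(srandmember(myset, -5)) == 5              # repeats possible
assert srandmember(set(), -3) == []                  # empty set -> empty array
```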
From 74801562bd4cfb84b9b3b540a0324acd5c0a9892 Mon Sep 17 00:00:00 2001 From: "Whitney.Jackson" Date: Sun, 23 Sep 2012 12:35:50 -0500 Subject: [PATCH 0239/2880] Added AnyEvent::Hiredis client for Perl. --- clients.json | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/clients.json b/clients.json index 92379735f8..ca5eb2930a 100644 --- a/clients.json +++ b/clients.json @@ -224,6 +224,16 @@ "authors": ["miyagawa"] }, + { + "name": "AnyEvent::Hiredis", + "language": "Perl", + "url": "http://search.cpan.org/dist/AnyEvent-Hiredis", + "repository": "https://github.com/wjackson/AnyEvent-Hiredis", + "description": "Non-blocking client using the hiredis C library", + "authors": [], + "active": true + }, + { "name": "MojoX::Redis", "language": "Perl", From 3d784955b66e93524655e3cc71ecb3df54008beb Mon Sep 17 00:00:00 2001 From: Christian Froemmel Date: Mon, 24 Sep 2012 13:43:25 +0200 Subject: [PATCH 0240/2880] Added RedisDB-CPAN-Module. --- clients.json | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/clients.json b/clients.json index 92379735f8..fbde6127d0 100644 --- a/clients.json +++ b/clients.json @@ -206,6 +206,16 @@ "active": true }, + { + "name": "RedisDB", + "language": "Perl", + "url": "http://search.cpan.org/dist/RedisDB", + "repository": "https://github.com/trinitum/RedisDB", + "description": "Perl binding for Redis database with fast XS-based protocol parser", + "authors": ["trinitum"], + "active": true + }, + { "name": "Redis::hiredis", "language": "Perl", From 1cbb09b48c7db5531f7f82e956be773f255bfce3 Mon Sep 17 00:00:00 2001 From: Justin Heyes-Jones Date: Tue, 25 Sep 2012 08:23:29 -0700 Subject: [PATCH 0241/2880] Add emacs lisp eredis client --- clients.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/clients.json b/clients.json index 92379735f8..50917e4252 100644 --- a/clients.json +++ b/clients.json @@ -518,6 +518,14 @@ "repository": "https://github.com/toymachine/libredis",
"description": "Support for executing commands on multiple servers in parallel via poll(2), ketama hashing. Includes PHP bindings.", "authors": [] + }, + + { + "name": "eredis", + "language": "emacs lisp", + "repository": "http://code.google.com/p/eredis", + "description": "Full Redis API plus ways to pull Redis data into an org-mode table and push it back when edited", + "authors": ["justinhj"] } ] From 3bb163b63d98a2e1ab7352b7ba8a0c4a289e24b9 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 25 Sep 2012 21:19:58 +0200 Subject: [PATCH 0242/2880] BLPOP doc updated with more details about the exact behaviour. --- commands/blpop.md | 41 ++++++++++++++++++++++++++++++++--------- 1 file changed, 32 insertions(+), 9 deletions(-) diff --git a/commands/blpop.md b/commands/blpop.md index d428d367d8..5e4e044076 100644 --- a/commands/blpop.md +++ b/commands/blpop.md @@ -47,18 +47,41 @@ started to wait earlier, in a first- `!BLPOP` first-served fashion. ## `!BLPOP` inside a `!MULTI` / `!EXEC` transaction -`BLPOP` can be used with pipelining (sending multiple commands and reading the -replies in batch), but it does not make sense to use `BLPOP` inside a `MULTI` / -`EXEC` block. -This would require blocking the entire server in order to execute the block +`BLPOP` can be used with pipelining (sending multiple commands and +reading the replies in batch), however this setup makes sense almost solely +when it is the last command of the pipeline. + +Using `BLPOP` inside a `MULTI` / `EXEC` block does not make a lot of sense +as it would require blocking the entire server in order to execute the block atomically, which in turn does not allow other clients to perform a push -operation. +operation. For this reason the behavior of `BLPOP` inside `MULTI` / `EXEC` when the list is empty is to return a `nil` multi-bulk reply, which is the same +thing that happens when the timeout is reached. 
-The behavior of `BLPOP` inside `MULTI` / `EXEC` when the list is empty is to -return a `nil` multi-bulk reply, which is the same thing that happens when the -timeout is reached. If you like science fiction, think of time flowing at infinite speed inside a -`MULTI` / `EXEC` block. +`MULTI` / `EXEC` block... + +## Behavior of `!BLPOP` when multiple elements are pushed inside a list. + +There are times when a list can receive multiple elements in the context of the same conceptual command: + +* Variadic push operations such as `LPUSH mylist a b c`. +* After an `EXEC` of a `MULTI` block with multiple push operations against the same list. +* Executing a Lua Script with Redis 2.6 or newer. + +When multiple elements are pushed inside a list where there are clients blocking, the behavior is different for Redis 2.4 and Redis 2.6 or newer. + +For Redis 2.6 what happens is that the command performing multiple pushes is executed, and *only after* the execution of the command the blocked clients are served. Consider this sequence of commands. + + Client A: BLPOP foo 0 + Client B: LPUSH foo a b c + +If the above condition happens using a Redis 2.6 server or greater, Client **A** will be served with the `c` element, because after the `LPUSH` command the list contains `c,b,a`, so taking an element from the left means to return `c`. + +Instead Redis 2.4 works in a different way: clients are served *in the context* of the push operation, so as long as `LPUSH foo a b c` starts pushing the first element to the list, it will be delivered to the Client **B**, that will receive `a` (the first element pushed). + +The behavior of Redis 2.4 creates a lot of problems when replicating or persisting data into the AOF file, so the much more generic and semantically simpler behaviour was introduced into Redis 2.6 to prevent problems. + +Note that for the same reason a Lua script or a `MULTI/EXEC` block may push elements into a list and afterward **delete the list**. 
In this case the blocked clients will not be served at all and will continue to be blocked as long as no data is present on the list after the execution of a single command, transaction, or script. @return From 26acc7c7067c97b71404089b0553950df61626fd Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 26 Sep 2012 10:04:36 +0200 Subject: [PATCH 0243/2880] BLPOP priority ordering better specified. --- commands/blpop.md | 40 ++++++++++++++++++++-------------------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/commands/blpop.md b/commands/blpop.md index 5e4e044076..c024f58824 100644 --- a/commands/blpop.md +++ b/commands/blpop.md @@ -39,26 +39,11 @@ specified keys. The timeout argument is interpreted as an integer value. A timeout of zero can be used to block indefinitely. -## Multiple clients blocking for the same keys +## What key is served first? What client? What element? Priority ordering details. -Multiple clients can block for the same key. -They are put into a queue, so the first to be served will be the one that -started to wait earlier, in a first- `!BLPOP` first-served fashion. - -## `!BLPOP` inside a `!MULTI` / `!EXEC` transaction - -`BLPOP` can be used with pipelining (sending multiple commands and -reading the replies in batch), however this setup makes sense almost solely -when it is the last command of the pipeline. - -Using `BLPOP` inside a `MULTI` / `EXEC` block does not make a lot of sense -as it would require blocking the entire server in order to execute the block -atomically, which in turn does not allow other clients to perform a push -operation. For this reason the behavior of `BLPOP` inside `MULTI` / `EXEC` when the list is empty is to return a `nil` multi-bulk reply, which is the same -thing that happens when the timeout is reached. - -If you like science fiction, think of time flowing at infinite speed inside a -`MULTI` / `EXEC` block... 
+* If the client tries to block for multiple keys, but at least one key contains elements, the returned key / element pair is the first key from left to right that has one or more elements. In this case the client is not blocked. So for instance `BLPOP key1 key2 key3 key4 0`, assuming that both `key2` and `key4` are non-empty, will always return an element from `key2`.
+* If multiple clients are blocked for the same key, the first client to be served is the one that was waiting for more time (the first that blocked for the key). Once a client is unblocked it does not retain any priority; when it blocks again with the next call to `BLPOP` it will be served according to the number of clients already blocked for the same key, that will all be served before it (from the first to the last that blocked).
+* When a client is blocking for multiple keys at the same time, and elements are available at the same time in multiple keys (because a transaction or a Lua script added elements to multiple lists), the client will be unblocked using the first key that received a push operation (assuming it has enough elements to serve our client, as there may be other clients as well waiting for this key). Basically after the execution of every command Redis will run a list of all the keys that received data AND that have at least a client blocked. The list is ordered by new element arrival time, from the first key that received data to the last. For every key processed, Redis will serve all the clients waiting for that key in a FIFO fashion, as long as there are elements in this key. When the key is empty or there are no longer clients waiting for this key, the next key that received new data in the previous command / transaction / script is processed, and so forth. ## Behavior of `!BLPOP` when multiple elements are pushed inside a list.
@@ -77,12 +62,27 @@ For Redis 2.6 what happens is that the command performing multiple pushes is exe If the above condition happens using a Redis 2.6 server or greater, Client **A** will be served with the `c` element, because after the `LPUSH` command the list contains `c,b,a`, so taking an element from the left means to return `c`. -Instead Redis 2.4 works in a different way: clients are served *in the context* of the push operation, so as long as `LPUSH foo a b c` starts pushing the first element to the list, it will be delivered to the Client **B**, that will receive `a` (the first element pushed). +Instead Redis 2.4 works in a different way: clients are served *in the context* of the push operation, so as long as `LPUSH foo a b c` starts pushing the first element to the list, it will be delivered to the Client **A**, that will receive `a` (the first element pushed). The behavior of Redis 2.4 creates a lot of problems when replicating or persisting data into the AOF file, so the much more generic and semantically simpler behaviour was introduced into Redis 2.6 to prevent problems. Note that for the same reason a Lua script or a `MULTI/EXEC` block may push elements into a list and afterward **delete the list**. In this case the blocked clients will not be served at all and will continue to be blocked as long as no data is present on the list after the execution of a single command, transaction, or script. +## `!BLPOP` inside a `!MULTI` / `!EXEC` transaction + +`BLPOP` can be used with pipelining (sending multiple commands and +reading the replies in batch), however this setup makes sense almost solely +when it is the last command of the pipeline. + +Using `BLPOP` inside a `MULTI` / `EXEC` block does not make a lot of sense +as it would require blocking the entire server in order to execute the block +atomically, which in turn does not allow other clients to perform a push +operation. 
For this reason the behavior of `BLPOP` inside `MULTI` / `EXEC` when the list is empty is to return a `nil` multi-bulk reply, which is the same +thing that happens when the timeout is reached. + +If you like science fiction, think of time flowing at infinite speed inside a +`MULTI` / `EXEC` block... + @return @multi-bulk-reply: specifically: From 03e801a0450fac4f32ca8e7a6a8fe3c4ce420098 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 26 Sep 2012 10:08:56 +0200 Subject: [PATCH 0244/2880] Specifiy time unit for BLPOP. --- commands/blpop.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/commands/blpop.md b/commands/blpop.md index c024f58824..e5ed69d854 100644 --- a/commands/blpop.md +++ b/commands/blpop.md @@ -36,8 +36,7 @@ the client will unblock returning a `nil` multi-bulk value when the specified timeout has expired without a push operation against at least one of the specified keys. -The timeout argument is interpreted as an integer value. -A timeout of zero can be used to block indefinitely. +**The timeout argument is interpreted as an integer value specifying the maximum number of seconds to block**. A timeout of zero can be used to block indefinitely. ## What key is served first? What client? What element? Priority ordering details. From e9a818a2dfb4b59a32768285049759ec1c203b9b Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 26 Sep 2012 10:19:25 +0200 Subject: [PATCH 0245/2880] BRPOPLPUSH mentioned in the BLPOP doc. --- commands/blpop.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/commands/blpop.md b/commands/blpop.md index e5ed69d854..6213702bc2 100644 --- a/commands/blpop.md +++ b/commands/blpop.md @@ -103,6 +103,12 @@ redis> BLPOP list1 list2 0 2) "a" ``` +## Reliable queues + +When `BLPOP` returns an element to the client, it also removes the element from the list. This means that the element only exists in the context of the client: if the client crashes while processing the returned element, it is lost forever. 
+ +This can be a problem with some applications where we want a more reliable messaging system. When this is the case, please check the `BRPOPLPUSH` command, that is a variant of `BLPOP` that adds the returned element to a target list before returning it to the client. + ## Pattern: Event notification Using blocking list operations it is possible to mount different blocking From bbed7588523879ca67b5b491d73604c7cf4d65ac Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 1 Oct 2012 10:36:58 +0200 Subject: [PATCH 0246/2880] Lua scripting: section about helper functions. --- commands/eval.md | 14 +++++++++++++- 1 file changed, 13 insertions(+), 1 deletion(-) diff --git a/commands/eval.md b/commands/eval.md index 4949c482c7..606ea4a9c6 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -115,7 +115,7 @@ Redis to Lua conversion rule: Also there are two important rules to note: -* Lua has a single numerical type, Lua numbers. There is no distinction between integers and floats. So we always convert Lua numbers into integer replies, removing the decimal part of the number if any. If you want to return a float from Lua you should return it as a string, exactly like Redis itself does (see for instance the `ZSCORE` command). +* Lua has a single numerical type, Lua numbers. There is no distinction between integers and floats. So we always convert Lua numbers into integer replies, removing the decimal part of the number if any. **If you want to return a float from Lua you should return it as a string**, exactly like Redis itself does (see for instance the `ZSCORE` command). * There is [no simple way to have nils inside Lua arrays](http://www.lua.org/pil/19.1.html), this is a result of Lua table semantics, so when Redis converts a Lua array into Redis protocol the conversion is stopped if a nil is encountered.
Here are a few conversion examples: @@ -149,6 +149,18 @@ In the following example we can see how floats and arrays with nils are handled: As you can see 3.333 is converted into 3, and the *bar* string is never returned as there is a nil before. +## Helper functions to return Redis types + +There are two helper functions to return Redis types from Lua. + +* `redis.error_reply(error_string)` returns an error reply. This function simply returns the single field table with the `err` field set to the specified string for you. +* `redis.status_reply(status_string)` returns a status reply. This function simply returns the single field table with the `ok` field set to the specified string for you. + +There is no difference between using the helper functions or directly returning the table with the specified format, so the following two forms are equivalent: + + return {err="My Error"} + return redis.error_reply("My Error") + ## Atomicity of scripts Redis uses the same Lua interpreter to run all the commands. From 2144ad6f515fbcacd28e60037dcb81011c6b8ce7 Mon Sep 17 00:00:00 2001 From: Fabien Date: Wed, 3 Oct 2012 13:29:41 -0300 Subject: [PATCH 0247/2880] Update topics/data-types-intro.md Fix ambiguity in sentence about Redis values. --- topics/data-types-intro.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md index e2800211c1..193c87764e 100644 --- a/topics/data-types-intro.md +++ b/topics/data-types-intro.md @@ -3,7 +3,7 @@ As you already probably know Redis is not a plain key-value store, actually it is a *data structures server*, supporting different kind of values. That is, -you can't just set strings as values of keys. All the following data types are +you can set more than just strings as values of keys. All the following data types are supported as values: * Binary-safe strings. 
From 8a0ea07ba01922cd1265581dd44e0b3b452420aa Mon Sep 17 00:00:00 2001 From: Fabien Date: Wed, 3 Oct 2012 13:40:13 -0300 Subject: [PATCH 0248/2880] Improve the english --- topics/data-types-intro.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/data-types-intro.md b/topics/data-types-intro.md index e2800211c1..802d718cd8 100644 --- a/topics/data-types-intro.md +++ b/topics/data-types-intro.md @@ -263,8 +263,8 @@ To get all the tags for a given object is trivial: 4. 2 But there are other non trivial operations that are still easy to implement -using the right Redis commands. For instance we may want the list of all the -objects having as tags 1, 2, 10, and 27 at the same time. We can do this using +using the right Redis commands. For instance we may want a list of all the +objects with the tags 1, 2, 10, and 27 together. We can do this using the [SINTER](/commands/sinter) that performs the intersection between different sets. So in order to reach our goal we can just use: @@ -305,7 +305,7 @@ to get a unique ID for the tag "redis": tag:b840fc02d524045429941cc15f59e41cb7be6c52:id 123456* and return the new ID to the caller. -Nice. Or better.. broken! What about if two clients perform these commands at +Nice. Or rather.. broken! What about if two clients perform these commands at the same time trying to get the unique ID for the tag "redis"? If the timing is right they'll both get *nil* from the GET operation, will both increment the *next.tag.id* key and will set two times the key. 
One of the two clients will From 2223014548a238f7548ebac5e072a822ad423b7a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Juhani=20=C3=85hman?= Date: Thu, 4 Oct 2012 15:54:35 +0300 Subject: [PATCH 0249/2880] Update clients.json --- clients.json | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/clients.json b/clients.json index 92379735f8..024a989bc9 100644 --- a/clients.json +++ b/clients.json @@ -74,6 +74,16 @@ "active": true }, + { + "name": "Radix", + "language": "Go", + "repository": "https://github.com/fzzbt/radix", + "description": "MIT licensed Redis client.", + "authors": ["fzzbt"], + "recommended": true, + "active": true + }, + { "name": "Redigo", "language": "Go", From 3001b0c66315d400fc12189e0e3f0e7fc242d9ca Mon Sep 17 00:00:00 2001 From: Fabien Date: Thu, 4 Oct 2012 12:44:48 -0300 Subject: [PATCH 0250/2880] Fix English --- topics/virtual-memory.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/virtual-memory.md b/topics/virtual-memory.md index 3c00afbdb1..5dd69f9edb 100644 --- a/topics/virtual-memory.md +++ b/topics/virtual-memory.md @@ -1,4 +1,4 @@ -**IMPORTANT NOTE:** Redis VM is now deprecated. Redis 2.4 will be the latest Redis version featuring Virtual Memory (but it also warns you that Virtual Memory usage is discouraged). We found that using VM has several disadvantages and problems. In the future of Redis we want to simply provide the best in-memory database (but persistent on disk as usually) ever, without considering at least for now the support for databases bigger than RAM. Our future efforts are focused into providing scripting, cluster, and better persistence. +**IMPORTANT NOTE:** Redis VM is now deprecated. Redis 2.4 will be the latest Redis version featuring Virtual Memory (but it also warns you that Virtual Memory usage is discouraged). We found that using VM has several disadvantages and problems. 
In the future of Redis we want to simply provide the best in-memory database (but persistent on disk as usual) ever, without considering at least for now the support for databases bigger than RAM. Our future efforts are focused on providing scripting, cluster, and better persistence. Virtual Memory === From 2f40f207c7b40b7bacd12c8231d014fd7cea4396 Mon Sep 17 00:00:00 2001 From: Fabien Date: Thu, 4 Oct 2012 16:17:48 -0300 Subject: [PATCH 0251/2880] Update topics/persistence.md Correct english to create the intended sentence meaning --- topics/persistence.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/persistence.md b/topics/persistence.md index 26a7aef03c..7650eec954 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -55,7 +55,7 @@ There are many users using AOF alone, but we discourage it since to have an RDB snapshot from time to time is a great idea for doing database backups, for faster restarts, and in the event of bugs in the AOF engine. -Note: for all this reasons we'll likely end unifying AOF and RDB into a single persistence model in the future (long term plan). +Note: for all this reasons we'll likely end up unifying AOF and RDB into a single persistence model in the future (long term plan). The following sections will illustrate a few more details about the two persistence models.
From 24faa447a36735fa6d415c890b9526712d568366 Mon Sep 17 00:00:00 2001 From: Fabien Date: Thu, 4 Oct 2012 16:20:10 -0300 Subject: [PATCH 0252/2880] Update topics/persistence.md Correct english --- topics/persistence.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/persistence.md b/topics/persistence.md index 7650eec954..4ada98bc83 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -55,7 +55,7 @@ There are many users using AOF alone, but we discourage it since to have an RDB snapshot from time to time is a great idea for doing database backups, for faster restarts, and in the event of bugs in the AOF engine. -Note: for all this reasons we'll likely end up unifying AOF and RDB into a single persistence model in the future (long term plan). +Note: for all these reasons we'll likely end up unifying AOF and RDB into a single persistence model in the future (long term plan). The following sections will illustrate a few more details about the two persistence models. From 6a47b0c1514a1b39f04829ca8161a03bc891f227 Mon Sep 17 00:00:00 2001 From: Paul Merlin Date: Sat, 6 Oct 2012 03:33:46 +0300 Subject: [PATCH 0253/2880] Added Qi4j Redis EntityStore in tools.json --- tools.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/tools.json b/tools.json index 20b03cb693..7fa4a48e5f 100644 --- a/tools.json +++ b/tools.json @@ -258,5 +258,13 @@ "repository": "https://github.com/agoragames/amico", "description": "Relationships (e.g. 
friendships) backed by Redis.", "authors": ["czarneckid"] + }, + { + "name": "Redis Qi4j EntityStore", + "language": "Java", + "url": "http://qi4j.org/extension-es-redis.html", + "repository": "http://github.com/qi4j/qi4j-sdk", + "description": "Qi4j EntityStore backed by Redis", + "authors": ["eskatos"] } ] From 7573cb80ea777e18cc5b18a82ca1c7e5ffa8e288 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 18 Oct 2012 23:37:56 +0200 Subject: [PATCH 0254/2880] Hiredis client link updated. --- clients.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 92379735f8..13a7887877 100644 --- a/clients.json +++ b/clients.json @@ -415,7 +415,7 @@ { "name": "hiredis", "language": "C", - "repository": "https://github.com/antirez/hiredis", + "repository": "https://github.com/redis/hiredis", "description": "This is the official C client. Support for the whole command set, pipelining, event driven programming.", "authors": ["antirez","pnoordhuis"], "recommended": true, From 1596704185502c744c1e78381831f7e00a458f93 Mon Sep 17 00:00:00 2001 From: Eugene Ponizovsky Date: Sat, 27 Oct 2012 18:03:29 +0400 Subject: [PATCH 0255/2880] Added new Redis client AnyEvent::Redis::RipeRedis for Perl language --- clients.json | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/clients.json b/clients.json index 13a7887877..81fe791809 100644 --- a/clients.json +++ b/clients.json @@ -224,6 +224,16 @@ "authors": ["miyagawa"] }, + { + "name": "AnyEvent::Redis::RipeRedis", + "language": "Perl", + "url": "http://search.cpan.org/dist/AnyEvent-Redis-RipeRedis", + "repository": "https://github.com/iph0/AnyEvent-Redis-RipeRedis", + "description": "Non-blocking Redis client with reconnect feature", + "authors": ["iph"], + "active": true + }, + { "name": "MojoX::Redis", "language": "Perl", From 5b6bd01b99e19bf7983d3deb5c2bd1cc37ce4d07 Mon Sep 17 00:00:00 2001 From: Eugene Ponizovsky Date: Sat, 27 
Oct 2012 18:11:16 +0400 Subject: [PATCH 0256/2880] Edited description for Redis client AnyEvent::Redis::RipeRedis in clients.json --- clients.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 81fe791809..a58f59c5e1 100644 --- a/clients.json +++ b/clients.json @@ -229,7 +229,7 @@ "language": "Perl", "url": "http://search.cpan.org/dist/AnyEvent-Redis-RipeRedis", "repository": "https://github.com/iph0/AnyEvent-Redis-RipeRedis", - "description": "Non-blocking Redis client with reconnect feature", + "description": "Flexible non-blocking Redis client with reconnect feature", "authors": ["iph"], "active": true }, From 20f756bdfced12deda6fe5b0d8ecbc3bc6c118bc Mon Sep 17 00:00:00 2001 From: Ben Smith Date: Wed, 31 Oct 2012 17:39:53 +0000 Subject: [PATCH 0257/2880] adding scala-redis-client to clients.json --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index 13a7887877..8cb63701a4 100644 --- a/clients.json +++ b/clients.json @@ -349,6 +349,15 @@ "authors": ["pk11"] }, + { + "name": "scala-redis-client", + "language": "Scala", + "repository": "https://github.com/top10/scala-redis-client", + "description": "An idiomatic Scala client that keeps Jedis / Java hidden. 
Used in production at http://top10.com.", + "authors": ["thesmith", "heychinaski"], + "active": true + }, + { "name": "Tcl Client", "language": "Tcl", From e19e2071e64224f18ded7d4767bf3fa20f537ed2 Mon Sep 17 00:00:00 2001 From: Andrew Grigorev Date: Sun, 4 Nov 2012 21:52:10 +0400 Subject: [PATCH 0258/2880] commands.json: ZADD specification logic fix --- commands.json | 19 +++---------------- 1 file changed, 3 insertions(+), 16 deletions(-) diff --git a/commands.json b/commands.json index 9bd1ec155e..e0b7ffed49 100644 --- a/commands.json +++ b/commands.json @@ -1821,22 +1821,9 @@ "type": "key" }, { - "name": "score", - "type": "double" - }, - { - "name": "member", - "type": "string" - }, - { - "name": "score", - "type": "double", - "optional": true - }, - { - "name": "member", - "type": "string", - "optional": true + "name": ["score", "member"], + "type": ["double", "string"], + "multiple": true } ], "since": "1.2.0", From b9ef9a38755725290023de8c6e243f6dac6ebf92 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 22 Nov 2012 14:52:13 +0100 Subject: [PATCH 0259/2880] Transactions page updated with 2.6.5 behavior. Also there are other minor improvements to this page. --- topics/transactions.md | 39 ++++++++++++++++++++++++++++----------- 1 file changed, 28 insertions(+), 11 deletions(-) diff --git a/topics/transactions.md b/topics/transactions.md index f76712f256..a8f8066399 100644 --- a/topics/transactions.md +++ b/topics/transactions.md @@ -9,9 +9,10 @@ in a single step, with two important guarantees: sequentially. It can never happen that a request issued by another client is served **in the middle** of the execution of a Redis transaction. This guarantees that the commands are executed as a single -atomic operation. +isolated operation. -* Either all of the commands or none are processed. The `EXEC` command +* Either all of the commands or none are processed, so a Redis +transaction is also atomic.
The `EXEC` command triggers the execution of all the commands in the transaction, so if a client loses the connection to the server in the context of a transaction before calling the `MULTI` command none of the operations @@ -25,9 +26,10 @@ are registered. Redis will detect this condition at restart, and will exit with append only file that will remove the partial transaction so that the server can start again. -Redis 2.2 allows for an extra guarantee to the above two, in the form -of optimistic locking in a way very similar to a check-and-set (CAS) -operation. This is documented [later](#cas) on this page. +Starting with version 2.2, Redis allows for an extra guarantee to the +above two, in the form of optimistic locking in a way very similar to a +check-and-set (CAS) operation. +This is documented [later](#cas) on this page. ## Usage @@ -56,9 +58,24 @@ array of replies, where every element is the reply of a single command in the transaction, in the same order the commands were issued. When a Redis connection is in the context of a `MULTI` request, -all commands will reply with the string `QUEUED` unless they are -syntactically incorrect. Some commands are still allowed to fail during -execution time. +all commands will reply with the string `QUEUED` (sent as a Status Reply +from the point of view of the Redis protocol). A queued command is +simply scheduled for execution when `EXEC` is called. + +## Errors inside a transaction + +During a transaction it is possible to encounter two kinds of command errors: + +* A command may fail to be queued, so there may be an error before `EXEC` is called. For instance the command may be syntactically wrong (wrong number of arguments, wrong command name, ...), or there may be some critical condition like an out of memory condition (if the server is configured to have a memory limit using the `maxmemory` directive).
+* A command may fail *after* `EXEC` is called, for instance because we performed an operation against a key with the wrong value (like calling a list operation against a string value). + +Clients used to sense the first kind of errors, happening before the `EXEC` call, by checking the return value of the queued command: if the command replies with QUEUED it was queued correctly, otherwise Redis returns an error. If there is an error while queueing a command, most clients will abort the transaction discarding it. + +However starting with Redis 2.6.5, the server will remember that there was an error during the accumulation of commands, and will refuse to execute the transaction returning also an error during `EXEC`, and discarding the transaction automatically. + +Before Redis 2.6.5 the behavior was to execute the transaction with just the subset of commands queued successfully in case the client called `EXEC` regardless of previous errors. The new behavior makes it much simpler to mix transactions with pipelining, so that the whole transaction can be sent at once, reading all the replies later at once. + +Errors happening *after* `EXEC` instead are not handled in a special way: all the other commands will be executed even if some command fails during the transaction. This is more clear on the protocol level. In the following example one command will fail when executed even if the syntax is right: @@ -78,7 +95,7 @@ command will fail when executed even if the syntax is right: +OK -ERR Operation against a key holding the wrong kind of value -`MULTI` returned two-element @bulk-reply where one is an `OK` code and +`EXEC` returned two-element @bulk-reply where one is an `OK` code and the other an `-ERR` reply. It's up to the client library to find a sensible way to provide the error to the user. @@ -97,7 +114,7 @@ syntax errors are reported ASAP instead: This time due to the syntax error the bad `INCR` command is not queued at all.
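The difference between queueing errors and execution errors described in this patch can be sketched as a toy model. The `Transaction` class, its command table, and the reply strings below are hypothetical stand-ins for illustration only, not the server implementation:

```python
class Transaction:
    """Toy model of MULTI/EXEC command queueing as described above."""
    KNOWN = {"SET", "GET", "INCR", "LPUSH"}

    def __init__(self):
        self.queue = []
        self.dirty = False  # set when a command fails to queue

    def command(self, name, *args):
        if name not in self.KNOWN:
            # Queueing error: remembered, so that EXEC can later refuse
            # the whole transaction (the Redis >= 2.6.5 behavior).
            self.dirty = True
            return "-ERR unknown command"
        self.queue.append((name, args))
        return "+QUEUED"

    def exec(self):
        if self.dirty:
            self.queue.clear()
            return "-EXECABORT Transaction discarded because of previous errors."
        # Errors *after* EXEC are not special: every queued command runs
        # and reports its own result (or error) in the reply array.
        return ["+OK " + name for name, _ in self.queue]

t = Transaction()
t.command("SET", "foo", "bar")            # +QUEUED
t.command("NOSUCHCOMMAND")                # -ERR, transaction is now dirty
assert t.exec().startswith("-EXECABORT")  # refused as a whole
```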
-## Errors inside a transaction
+## Why Redis does not support roll backs?

If you have a relational database background, the fact that Redis commands
can fail during a transaction, but still Redis will execute the rest of the
@@ -105,7 +122,7 @@ transaction instead of rolling back, may look odd to you.

However there are good reasons for this behavior:

-* Redis commands can fail only if called with a wrong syntax, or against keys holding the wrong data type: this means that in practical terms a failing command is the result of a programming errors, and a kind of error that is very likely to be detected during development, and not in production.
+* Redis commands can fail only if called with a wrong syntax (and the problem is not detectable during the command queueing), or against keys holding the wrong data type: this means that in practical terms a failing command is the result of a programming error, and a kind of error that is very likely to be detected during development, and not in production.
* Redis is internally simplified and faster because it does not need the ability to roll back.

An argument against Redis' point of view is that bugs happen, however it should be noted that in general a roll back does not save you from programming errors. For instance if a query increments a key by 2 instead of 1, or increments the wrong key, there is no way for a rollback mechanism to help. Given that no one can save the programmer from their errors, and that the kind of errors required for a Redis command to fail are unlikely to enter production, we selected the simpler and faster approach of not supporting roll backs on errors.

From 77889bf94983b4dba925f0d34081e685f3c991c6 Mon Sep 17 00:00:00 2001
From: antirez
Date: Sat, 1 Dec 2012 19:13:27 +0100
Subject: [PATCH 0260/2880] First draft of Redis Sentinel and Redis clients interaction guidelines.
---
topics/sentinel-clients.md | 91 ++++++++++++++++++++++++++++++++++++++
1 file changed, 91 insertions(+)
create mode 100644 topics/sentinel-clients.md

diff --git a/topics/sentinel-clients.md b/topics/sentinel-clients.md
new file mode 100644
index 0000000000..a6934e6e5a
--- /dev/null
+++ b/topics/sentinel-clients.md
@@ -0,0 +1,91 @@
+**WARNING:** This document is a draft and the guidelines that it contains may change in the future as the Sentinel project evolves.
+
+Guidelines for Redis clients with support for Redis Sentinel
+===
+
+Redis Sentinel is a monitoring solution for Redis instances that handles different aspects of monitoring, including notification of events and automatic failover.
+Sentinel can also play the role of configuration source for Redis clients. This document is targeted at Redis client developers that want to support Sentinel in their client implementations, with the following goals:
+
+* Automatic configuration of clients via Sentinel.
+* Improved reliability of Redis Sentinel automatic failover, because of Sentinel-aware clients that will automatically reconnect to the new master.
+
+For details about how Redis Sentinel works, please check the [Redis Documentation](/topics/sentinel) itself, as this document only contains the information needed by Redis client developers.
+
+Redis service discovery via Sentinel
+===
+
+Redis Sentinel identifies every master with a name like "stats" or "cache".
+However the address of the Redis master that is used for a specific purpose inside a network may change after events like an automatic failover, a manually triggered failover (for instance in order to upgrade a Redis instance), and other reasons.
+
+Normally Redis clients have some kind of hard-coded configuration that specifies the address of a Redis master instance within a network as an IP address and port number. However if the master address changes, manual intervention in every client is needed.
+
+A Redis client supporting Sentinel can automatically discover the address of a Redis master from the master name using Redis Sentinel. So instead of a hard-coded IP address and port, a client supporting Sentinel should optionally be able to take as input:
+
+* A list of ip:port pairs pointing to known Sentinel instances.
+* The name of the service, like "cache" or "timelines".
+
+This is the procedure a client should follow in order to obtain the master address starting from the list of Sentinels and the service name.
+
+Step 1: connecting to the first Sentinel
+---
+
+The client should iterate the list of Sentinel addresses. For every address it should try to connect to the Sentinel, using a short timeout. On errors or timeouts the next Sentinel address should be tried.
+
+If all the Sentinel addresses were tried without success, an error should be returned to the client.
+
+Once a connection with a Sentinel is established, the client should try to execute the following command on the Sentinel:
+
+    SENTINEL get-master-addr-by-name master-name
+
+Where *master-name* should be replaced with the actual service name specified by the user.
+
+The result from this call can be one of the following three replies:
+
+* An ip:port pair.
+* A null reply.
+* An `-IDONTKNOW` error.
+
+If an ip:port pair is received, this address should be used to connect to the Redis master. Otherwise, if a null reply or `-IDONTKNOW` error is received, the client should try the next Sentinel in the list.
+
+When a correct ip:port pair is received, the replying Sentinel address should be put at the top of the list of Sentinel addresses, so that the next time we'll try the responding Sentinel before any other.
+
+IMPORTANT: The result of this procedure should not be cached by the Redis client. Every time a new connection to a master is needed, the full resolution procedure should be performed again.
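As a rough sketch (not a complete client), the resolution loop above might look like this in Python, with the actual `SENTINEL get-master-addr-by-name` round trip abstracted behind a `query` callable; all names here are illustrative assumptions, not part of any real library:

```python
def resolve_master(sentinels, service_name, query):
    """Resolve `service_name` to an (ip, port) master address.

    `sentinels` is a list of (host, port) pairs, tried in order.
    `query` is a hypothetical callable standing in for sending
    `SENTINEL get-master-addr-by-name` over a real connection with a
    short timeout: it returns an (ip, port) pair, returns None for a
    null or -IDONTKNOW reply, and raises OSError on errors/timeouts.
    """
    for i, addr in enumerate(sentinels):
        try:
            master = query(addr, service_name)
        except OSError:
            continue  # connection error or timeout: try the next Sentinel
        if master is not None:
            # Promote the replying Sentinel to the front of the list,
            # so it is tried first on the next resolution.
            sentinels.insert(0, sentinels.pop(i))
            return master
    raise RuntimeError("no Sentinel could resolve master %r" % service_name)
```

Matching the "do not cache" rule above, a client would call this function again for every new master connection rather than remembering the returned address.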
+
+Handling reconnections
+===
+
+Once the service name is resolved into the master address and a connection is established with the Redis master instance, every time a reconnection is needed the client should resolve the address again using Sentinels. For instance:
+
+* If the client reconnects after a timeout or socket error.
+* If the client reconnects because it was explicitly closed or reconnected in any way by the user.
+
+In these and any other cases the client should resolve the master address again.
+
+Connection pools
+===
+
+For clients implementing connection pools, on reconnection of a single connection, a Sentinel should be contacted again, and in case of a master address change all the existing connections should be closed and reconnected to the new address.
+
+Error reporting
+===
+
+The client should correctly return the information to the user in case of errors. Specifically:
+
+* If no Sentinel can be contacted (so that the client was never able to get the reply to `SENTINEL get-master-addr-by-name`), an error that clearly states that Redis Sentinel is unreachable should be returned.
+* If all the Sentinels in the pool replied with a null reply, the user should be informed with an error that the Sentinels don't know this master name.
+* If at least one Sentinel replies with `-IDONTKNOW`, the client should return an error like "Redis Sentinel doesn't know the specified master address", so that the user is informed that the service name is configured in at least one Sentinel instance, but apparently the master was never reached by the Sentinel.
+
+Sentinels list automatic refresh
+===
+
+Optionally, once a successful reply to `get-master-addr-by-name` is received, a client may update its internal list of Sentinel nodes following this procedure:
+
+* Obtain a list of other Sentinels for this master using the command `SENTINEL sentinels <master-name>`.
+* Add every ip:port pair not already present in our list at the end of the list.
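The refresh procedure above amounts to a de-duplicating merge of the two address lists; a minimal sketch (the function name is an assumption):

```python
def refresh_sentinels(known, discovered):
    """Append newly discovered (host, port) pairs to `known`,
    skipping pairs that are already present and preserving order,
    as in the procedure described above."""
    seen = set(known)
    for addr in discovered:
        if addr not in seen:
            known.append(addr)
            seen.add(addr)
    return known
```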
+
+A client is not required to persist the updated list in its own configuration. The ability to update the in-memory representation of the list of Sentinels is already useful to improve reliability.
+
+Additional information
+===
+
+For additional information or to discuss specific aspects of these guidelines, please drop a message to the [Redis Google Group](groups.google.com/group/redis-db).

From a830c4f80d0f961f76c901b99b3e9e709840c250 Mon Sep 17 00:00:00 2001
From: antirez
Date: Sat, 1 Dec 2012 19:15:30 +0100
Subject: [PATCH 0261/2880] Minor changes to sentinel-clients.md.

---
topics/sentinel-clients.md | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/topics/sentinel-clients.md b/topics/sentinel-clients.md
index a6934e6e5a..5d3f87dfdc 100644
--- a/topics/sentinel-clients.md
+++ b/topics/sentinel-clients.md
@@ -33,6 +33,9 @@ The client should iterate the list of Sentinel addresses. For every address it s

If all the Sentinel addresses were tried without success, an error should be returned to the client.

+Step 2: ask for the master address
+---
+
Once a connection with a Sentinel is established, the client should try to execute the following command on the Sentinel:

    SENTINEL get-master-addr-by-name master-name

@@ -47,6 +50,9 @@ The result from this call can be one of the following three replies:

If an ip:port pair is received, this address should be used to connect to the Redis master. Otherwise, if a null reply or `-IDONTKNOW` reply is received, the client should try the next Sentinel in the list.

+Step 3: give priority to the replying Sentinel
+---
+
When a correct ip:port pair is received, the replying Sentinel address should be put at the top of the list of Sentinel addresses, so that the next time we'll try the responding Sentinel before any other.

IMPORTANT: The result of this procedure should not be cached by the Redis client.
Every time a new connection to a master is needed, the full resolution procedure should be performed again.

From f52f9925d38c5d9b970026eddfaa6acb1b4dcf1c Mon Sep 17 00:00:00 2001
From: antirez
Date: Wed, 5 Dec 2012 11:53:59 +0100
Subject: [PATCH 0262/2880] Partitioning page added.

---
topics/partitioning.md | 116 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 116 insertions(+)
create mode 100644 topics/partitioning.md

diff --git a/topics/partitioning.md b/topics/partitioning.md
new file mode 100644
index 0000000000..2c95bed1c9
--- /dev/null
+++ b/topics/partitioning.md
@@ -0,0 +1,116 @@
+Partitioning: how to split data among multiple Redis instances.
+===
+
+Partitioning is the process of splitting your data across multiple Redis instances, so that every instance will only contain a subset of your keys. The first part of this document will introduce you to the concept of partitioning, the second part will show you the alternatives for Redis partitioning.
+
+Why partitioning is useful
+---
+
+Partitioning in Redis serves two main goals:
+
+* It allows for much larger databases, using the sum of the memory of many computers. Without partitioning you are limited to the amount of memory a single computer can support.
+* It allows scaling the computational power to multiple cores and multiple computers, and the network bandwidth to multiple computers and network adapters.
+
+Partitioning basics
+---
+
+There are different partitioning criteria. Imagine we have four Redis instances **R0**, **R1**, **R2**, **R3**, and many keys representing users like `user:1`, `user:2`, ... and so forth. We can find different ways to select in which instance we store a given key. In other words there are *different systems to map* a given key to a given Redis server.

+One of the simplest ways to perform partitioning is called **range partitioning**, and is accomplished by mapping ranges of objects into specific Redis instances.
For example I could say, users from ID 0 to ID 10000 will go into instance **R0**, while users from ID 10001 to ID 20000 will go into instance **R1**, and so forth.
+
+This system works and is actually used in practice, however it has the disadvantage that you need a table mapping ranges to instances. This table needs to be managed, and we need a table for every kind of object we have, so usually with Redis it is not a good idea.
+
+An alternative to range partitioning is **hash partitioning**. This scheme works with any key, with no need for a key in the form `object_name:<id>`, and is as simple as this:
+* Take the key name and use a hash function to turn it into a number. For instance I could use the `crc32` hash function. So if the key is `foobar` I do `crc32(foobar)`, which will output something like 93024922.
+* I use a modulo operation with this number in order to turn it into a number between 0 and 3, so that I can map this number to one of the four Redis instances I have. So `93024922 modulo 4` equals 2, so I know my key `foobar` should be stored into the **R2** instance. *Note: the modulo operation is just the remainder of the division, and is usually implemented by the `%` operator in many programming languages.*
+
+There are many other ways to perform partitioning, but with these two examples you should get the idea. One advanced form of hash partitioning is called **consistent hashing** and is implemented by a few Redis clients and proxies.
+
+Different implementations of partitioning
+---
+
+Partitioning can be the responsibility of different parts of a software stack.
+
+* **Client side partitioning** means that the clients directly select the right node where to write or read a given key. Many Redis clients implement client side partitioning.
+* **Proxy assisted partitioning** means that our clients send requests to a proxy that is able to speak the Redis protocol, instead of sending requests directly to the right Redis instance.
The proxy will make sure to forward our request to the right Redis instance according to the configured partitioning schema, and will send the replies back to the client. The Redis and Memcached proxy [Twemproxy](https://github.com/twitter/twemproxy) implements proxy assisted partitioning.
+* **Query routing** means that you can send your query to a random instance, and the instance will make sure to forward your query to the right node. Redis Cluster implements a hybrid form of query routing, with the help of the client (the request is not directly forwarded from one Redis instance to another, but the client gets *redirected* to the right node).
+
+Disadvantages of partitioning
+---
+
+Some features of Redis don't play very well with partitioning:
+
+* Operations involving multiple keys are usually not supported. For instance you can't perform the intersection between two sets if they are stored in keys that are mapped to different Redis instances (actually there are ways to do this, but not directly).
+* Redis transactions involving multiple keys cannot be used.
+* The partitioning granularity is the key, so it is not possible to shard a dataset with a single huge key, like a very big sorted set.
+* When partitioning is used, data handling is more complex: for instance you have to handle multiple RDB / AOF files, and to make a backup of your data you need to aggregate the persistence files from multiple instances and hosts.
+* Adding and removing capacity can be complex. For instance Redis Cluster plans to support mostly transparent rebalancing of data with the ability to add and remove nodes at runtime, but other systems like client side partitioning and proxies don't support this feature. However a technique called *Presharding* helps in this regard.
+
+Data store or cache?
+---
+
+Partitioning when using Redis as a data store or cache is conceptually the same, however there is a huge difference.
While when Redis is used as a data store you need to be sure that a given key always maps to the same instance, when Redis is used as a cache it is not a big problem if a given node is unavailable and we start using a different node, altering the key-instance map as we wish to improve the *availability* of the system (that is, the ability of the system to reply to our queries).
+
+Consistent hashing implementations are often able to switch to other nodes if the preferred node for a given key is not available. Similarly, if you add a new node, part of the new keys will start to be stored on the new node.
+
+The main concept here is the following:
+
+* If Redis is used as a cache, **scaling up and down** using consistent hashing is easy.
+* If Redis is used as a store, **we need to keep the map between keys and nodes fixed, and a fixed number of nodes**. Otherwise we need a system that is able to rebalance keys between nodes when we add or remove nodes, and currently only Redis Cluster is able to do this, but Redis Cluster is not production ready.
+
+Presharding
+---
+
+We learned that a problem with partitioning is that, unless we are using Redis as a cache, adding and removing nodes can be tricky, and it is much simpler to use a fixed keys-instances map.
+
+However data storage needs may vary over time. Today I can live with 10 Redis nodes (instances), but tomorrow I may need 50 nodes.
+
+Since Redis has an extremely small footprint and is lightweight (a spare instance uses 1 MB of memory), a simple approach to this problem is to start with a lot of instances from the start. Even if you start with just one server, you can decide to live in a distributed world from your first day, and run multiple Redis instances on your single server, using partitioning.
+
+And you can select this number of instances to be quite big from the start. For example, 32 or 64 instances could do the trick for most users, and will provide enough room for growth.
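As a rough sketch of the fixed keys-instances map this suggests: keys hash onto a fixed set of 64 logical instances (reusing the crc32-modulo idea from the hash partitioning example), while a separate, editable map records which server currently hosts each instance. The instance names and IP addresses below are made up for illustration:

```python
import zlib

N_INSTANCES = 64  # fixed from day one, as suggested above

def instance_for(key):
    """Map a key to one of N_INSTANCES fixed logical instances
    using crc32 and a modulo operation."""
    return "shard-%d" % (zlib.crc32(key.encode()) % N_INSTANCES)

# Which physical server currently hosts each logical instance.
# Moving an instance to a new server only edits this map; the
# key -> instance mapping never changes.
SERVER_FOR_INSTANCE = {
    "shard-%d" % i: ("10.0.0.1" if i < 32 else "10.0.0.2")
    for i in range(N_INSTANCES)
}

def server_for(key):
    return SERVER_FOR_INSTANCE[instance_for(key)]
```

When a second server is added, half of the `shard-*` entries simply change value from the first server's address to the second's; no key ever moves between logical instances.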
+
+In this way, as your data storage needs increase and you need more Redis servers, you simply move instances from one server to another. Once you add the first additional server, you will need to move half of the Redis instances from the first server to the second, and so forth.
+
+Using Redis replication you will likely be able to do the move with minimal or no downtime for your users:
+
+* Start empty instances in your new server.
+* Move data configuring these new instances as slaves for your source instances.
+* Stop your clients.
+* Update the configuration of the moved instances with the new server IP address.
+* Send the `SLAVEOF NO ONE` command to the slaves in the new server.
+* Restart your clients with the new updated configuration.
+* Finally shut down the no longer used instances in the old server.
+
+Implementations of Redis partitioning
+===
+
+So far we covered Redis partitioning in theory, but what about practice? What system should you use?
+
+Redis Cluster
+---
+
+Unfortunately Redis Cluster is currently not production ready, however you can get more information about it by [reading the specification](/topics/cluster-spec) or checking the partial implementation in the `unstable` branch of the Redis GitHub repository.
+
+Once Redis Cluster is available, and if a Redis Cluster compliant client is available for your language, Redis Cluster will be the de facto standard for Redis partitioning.
+
+Redis Cluster is a mix between *query routing* and *client side partitioning*.
+
+Twemproxy
+---
+
+[Twemproxy is a proxy developed at Twitter](https://github.com/twitter/twemproxy) for the Memcached ASCII and Redis protocols. It is single threaded, it is written in C, and is extremely fast. It is open source software released under the terms of the Apache 2.0 license.
+
+Twemproxy supports automatic partitioning among multiple Redis instances, with optional node ejection if a node is not available (this will change the keys-instances map, so you should use this feature only if you are using Redis as a cache).
+
+It is *not* a single point of failure, since you can start multiple proxies and instruct your clients to connect to the first that accepts the connection.
+
+Basically Twemproxy is an intermediate layer between clients and Redis instances that will reliably handle partitioning for us with minimal additional complexity. Currently it is the **suggested way to handle partitioning with Redis**.
+
+You can read more about Twemproxy [in this antirez blog post](http://antirez.com/news/44).
+
+Clients supporting consistent hashing
+---
+
+An alternative to Twemproxy is to use a client that implements client side partitioning via consistent hashing or other similar algorithms. There are multiple Redis clients with support for consistent hashing, notably [Redis-rb](https://github.com/redis/redis-rb) and [Predis](https://github.com/nrk/predis).
+
+Please check the [full list of Redis clients](http://redis.io/clients) to see if there is a mature client with a consistent hashing implementation for your language.

From 232343ca8d2ebe98132ac94056f3bf8218dcb3ab Mon Sep 17 00:00:00 2001
From: antirez
Date: Wed, 5 Dec 2012 11:58:59 +0100
Subject: [PATCH 0263/2880] Markdown fixes.

---
topics/partitioning.md | 1 +
1 file changed, 1 insertion(+)

diff --git a/topics/partitioning.md b/topics/partitioning.md
index 2c95bed1c9..c21f342036 100644
--- a/topics/partitioning.md
+++ b/topics/partitioning.md
@@ -21,6 +21,7 @@ One of the simplest ways to perform partitioning is called **range partitioning**

This system works and is actually used in practice, however it has the disadvantage that you need a table mapping ranges to instances.
This table needs to be managed, and we need a table for every kind of object we have, so usually with Redis it is not a good idea.

An alternative to range partitioning is **hash partitioning**. This scheme works with any key, with no need for a key in the form `object_name:<id>`, and is as simple as this:
+
* Take the key name and use a hash function to turn it into a number. For instance I could use the `crc32` hash function. So if the key is `foobar` I do `crc32(foobar)`, which will output something like 93024922.
* I use a modulo operation with this number in order to turn it into a number between 0 and 3, so that I can map this number to one of the four Redis instances I have. So `93024922 modulo 4` equals 2, so I know my key `foobar` should be stored into the **R2** instance. *Note: the modulo operation is just the remainder of the division, and is usually implemented by the `%` operator in many programming languages.*

From 3a19f74a6c3301d43de7242bae03c9e62fdd1d60 Mon Sep 17 00:00:00 2001
From: Justin Case
Date: Tue, 11 Dec 2012 04:49:12 +0100
Subject: [PATCH 0264/2880] EXPIRE typo

---
commands/expire.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/commands/expire.md b/commands/expire.md
index 141d7412e8..dd0142da03 100644
--- a/commands/expire.md
+++ b/commands/expire.md
@@ -57,7 +57,7 @@ TTL mykey

## Pattern: Navigation session

Imagine you have a web service and you are interested in the latest N pages
-_recently_ visited by your users, such that each adiacent page view was not
+_recently_ visited by your users, such that each adjacent page view was not
performed more than 60 seconds after the previous.
Conceptually you may think of this set of page views as a _Navigation session_ of your user, that may contain interesting information about what kind of

From f6259d586e4b5cd3f343ff2b2418a06c6126c079 Mon Sep 17 00:00:00 2001
From: Damian Janowski
Date: Thu, 13 Dec 2012 22:49:45 -0300
Subject: [PATCH 0265/2880] Close a few issues...
--- topics/whos-using-redis.md | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/topics/whos-using-redis.md b/topics/whos-using-redis.md index a8bbb2471f..978a20a569 100644 --- a/topics/whos-using-redis.md +++ b/topics/whos-using-redis.md @@ -88,6 +88,16 @@ And many others: * [Forrst](http://forrst.com) * [Surfingbird](http://surfingbird.com) * [mig33](http://www.mig33.com) +* [SeatGeek](http://seatgeek.com/) +* [Wikipedia Game](http://thewikigame.com) - [Redis architecture description](http://www.clemesha.org/blog/really-using-redis-to-build-fast-real-time-web-apps/) +* [Mogu](http://gomogu.org) +* [Ancestry.com](http://www.ancestry.com/) +* [SocialReviver](http://www.socialreviver.net/) by VittGam, for its Settings Cloud +* [Telefónica Digital](http://www.telefonica.com/es/digital/html/home/) +* [Pond](http://web.pond.pt/) +* [Topics.io](http://topics.io) +* [AngiesList.com](https://github.com/angieslist/al-redis) +* [GraphBug](http://graphbug.com/) This list is incomplete. If you're using Redis and would like to be listed, [send a pull request](https://github.com/antirez/redis-doc). From 8cd18732ebdc37f26c75ca2c5318c1d50f278e53 Mon Sep 17 00:00:00 2001 From: Damian Janowski Date: Fri, 14 Dec 2012 00:01:21 -0300 Subject: [PATCH 0266/2880] Remove unmaintained Haskell client. Closes #158. --- clients.json | 10 +--------- 1 file changed, 1 insertion(+), 9 deletions(-) diff --git a/clients.json b/clients.json index 050e4c6cd6..f52a5f0091 100644 --- a/clients.json +++ b/clients.json @@ -119,15 +119,7 @@ "repository": "https://github.com/informatikr/hedis", "description": "Supports the complete command set. Commands are automatically pipelined for high performance.", "authors": [], - "active": true - }, - - { - "name": "redis package", - "language": "Haskell", - "url": "http://hackage.haskell.org/package/redis", - "description": "This library is a Haskell driver for Redis. 
It's tested with current git version and with v2.4.6 of redis server.", - "authors": [], + "recommended": true, "active": true }, From 344590ebf5963fd50e5c58fb74f457dc03274152 Mon Sep 17 00:00:00 2001 From: Damian Janowski Date: Fri, 14 Dec 2012 00:10:04 -0300 Subject: [PATCH 0267/2880] Add D client. Closes #145. --- clients.json | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/clients.json b/clients.json index f52a5f0091..f61a161157 100644 --- a/clients.json +++ b/clients.json @@ -575,6 +575,14 @@ "repository": "http://code.google.com/p/eredis", "description": "Full Redis API plus ways to pull Redis data into an org-mode table and push it back when edited", "authors": ["justinhj"] - } + }, + { + "name": "Tiny Redis", + "language": "D", + "url": "http://adilbaig.github.com/Tiny-Redis/", + "repository": "https://github.com/adilbaig/Tiny-Redis", + "description": "", + "authors": ["adilbaig"] + } ] From a61cd6fa9f2a90e42c1d0c0a114fdd6505c11214 Mon Sep 17 00:00:00 2001 From: Damian Janowski Date: Fri, 14 Dec 2012 00:16:44 -0300 Subject: [PATCH 0268/2880] Add Scheme client. Closes #135. --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index f61a161157..6e33ba6598 100644 --- a/clients.json +++ b/clients.json @@ -584,5 +584,14 @@ "repository": "https://github.com/adilbaig/Tiny-Redis", "description": "", "authors": ["adilbaig"] + }, + + { + "name": "redis-client", + "language": "Scheme", + "url": "http://wiki.call-cc.org/eggref/4/redis-client", + "repository": "https://github.com/carld/redis-client.egg", + "description": "A Redis client for Chicken Scheme 4.7", + "authors": ["carld"] } ] From 0bc151a859d08c8c82d6e43a91fe340977caa7b9 Mon Sep 17 00:00:00 2001 From: Damian Janowski Date: Fri, 14 Dec 2012 00:27:17 -0300 Subject: [PATCH 0269/2880] Add Java client. Closes #85. 
--- clients.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/clients.json b/clients.json index 8fd53a874d..61c61f3c9b 100644 --- a/clients.json +++ b/clients.json @@ -601,5 +601,13 @@ "repository": "https://github.com/carld/redis-client.egg", "description": "A Redis client for Chicken Scheme 4.7", "authors": ["carld"] + }, + + { + "name": "lettuce", + "language": "Java", + "repository": "https://github.com/wg/lettuce", + "description": "Thread-safe client supporting async usage and key/value codecs", + "authors": ["ar3te"] } ] From 1358881db13a2f0d8d508117f19523c1943fae3d Mon Sep 17 00:00:00 2001 From: Damian Janowski Date: Fri, 14 Dec 2012 00:29:39 -0300 Subject: [PATCH 0270/2880] Close #62. --- topics/whos-using-redis.md | 1 + 1 file changed, 1 insertion(+) diff --git a/topics/whos-using-redis.md b/topics/whos-using-redis.md index 978a20a569..e42db2a362 100644 --- a/topics/whos-using-redis.md +++ b/topics/whos-using-redis.md @@ -98,6 +98,7 @@ And many others: * [Topics.io](http://topics.io) * [AngiesList.com](https://github.com/angieslist/al-redis) * [GraphBug](http://graphbug.com/) +* [SwarmIQ](http://www.swarmiq.com/) uses Redis as a caching / indexing layer for rapid lookups of chronological and ranked messages. This list is incomplete. If you're using Redis and would like to be listed, [send a pull request](https://github.com/antirez/redis-doc). From a3706c3290e1f995b35522e9bd52e56b78612994 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 17 Dec 2012 11:47:27 +0100 Subject: [PATCH 0271/2880] License page added. 
---
topics/license.md | 48 +++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 48 insertions(+)
create mode 100644 topics/license.md

diff --git a/topics/license.md b/topics/license.md
new file mode 100644
index 0000000000..e47d4e888e
--- /dev/null
+++ b/topics/license.md
@@ -0,0 +1,48 @@
+# Redis license information
+
+Redis is **open source software** released under the terms of the **three clause BSD license**. Most of the Redis source code was written and is copyrighted by Salvatore Sanfilippo and Pieter Noordhuis. A list of other contributors can be found in the git history.
+
+Every file in the Redis distribution, with the exception of the third party files specified in the list below, contains the following license:
+
Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+* Redistributions of source code must retain the above copyright notice,
+  this list of conditions and the following disclaimer.
+
+* Redistributions in binary form must reproduce the above copyright
+  notice, this list of conditions and the following disclaimer in the
+  documentation and/or other materials provided with the distribution.
+
+* Neither the name of Redis nor the names of its contributors may be used
+  to endorse or promote products derived from this software without
+  specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+# Third party files and licenses
+
+Redis uses source code from third parties. All this code contains a BSD or BSD-compatible license. The following is a list of third party files and information about their copyright.
+
+* Redis uses the [LHF compression library](http://oldhome.schmorp.de/marc/liblzf.html). LibLZF is copyright (c) 2000-2007 Marc Alexander Lehmann and is released under the terms of the **two clause BSD license**.
+
+* Redis uses the `sha1.c` file that is copyright by Steve Reid and released under the public domain. This file is extremely popular and used among open source and proprietary code.
+
+* When compiled on Linux Redis usees the [Jemalloc allocator](http://www.canonware.com/jemalloc/), that is copyright by Jason Evans, Mozilla Foundation and Facebook, Inc and is released under the **two clause BSD license**.
+
+* Inside Jemalloc the file `pprof` is copyright Google Inc and released under the **three clause BSD license**.
+
+* Inside Jemalloc the files `inttypes.h`, `stdbool.h`, `stdint.h`, `strings.h` under the `msvc_compat` directory are copyright Alexander Chemeris and released under the **three clause BSD license**.
+
+* The libraries **hiredis** and **linenoise** also included inside the Redis distribution are copyright Salvatore Sanfilippo and Pieter Noordhuis and released under the terms respectively of the **three clause BSD license** and **two clause BSD license**.
+ From b8b557f5ae66c587e4c8aec76607f9b119be7e05 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 17 Dec 2012 11:49:48 +0100 Subject: [PATCH 0272/2880] minor fixes in License.md --- topics/license.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/license.md b/topics/license.md index e47d4e888e..23dc86d941 100644 --- a/topics/license.md +++ b/topics/license.md @@ -34,9 +34,9 @@ POSSIBILITY OF SUCH DAMAGE. Redis uses source code from third parties. All this code contains a BSD or BSD-compatible license. The following is a list of third party files and information about their copyright. -* Redis uses the [LZF compression library](http://oldhome.schmorp.de/marc/liblzf.html). LibLZF is copyright (c) 2000-2007 Marc Alexander Lehmann and is released under the terms of the **two clause BSD license**. +* Redis uses the [LZF compression library](http://oldhome.schmorp.de/marc/liblzf.html). LibLZF is copyright Marc Alexander Lehmann and is released under the terms of the **two clause BSD license**. -* Redis uses the `sha1.c` file that is copyright by Steve Reid and released under the public domain. This file is extremely popular and used among open source and proprietary code. +* Redis uses the `sha1.c` file that is copyright by Steve Reid and released under the **public domain**. This file is extremely popular and used among open source and proprietary code. * When compiled on Linux Redis usees the [Jemalloc allocator](http://www.canonware.com/jemalloc/), that is copyright by Jason Evans, Mozilla Foundation and Facebook, Inc and is released under the **two clause BSD license**.
From 530ecca4e0643f345766007bed8a539c94035e77 Mon Sep 17 00:00:00 2001 From: Joffrey JAFFEUX Date: Mon, 17 Dec 2012 11:51:32 +0100 Subject: [PATCH 0273/2880] Typo --- topics/license.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/license.md b/topics/license.md index 23dc86d941..fcbc5168e1 100644 --- a/topics/license.md +++ b/topics/license.md @@ -38,7 +38,7 @@ Redis uses source code from third parties. All this code contains a BSD-c * Redis uses the `sha1.c` file that is copyright by Steve Reid and released under the **public domain**. This file is extremely popular and used among open source and proprietary code. -* When compiled on Linux Redis usees the [Jemalloc allocator](http://www.canonware.com/jemalloc/), that is copyright by Jason Evans, Mozilla Foundation and Facebook, Inc and is released under the **two clause BSD license**. +* When compiled on Linux Redis uses the [Jemalloc allocator](http://www.canonware.com/jemalloc/), that is copyright by Jason Evans, Mozilla Foundation and Facebook, Inc and is released under the **two clause BSD license**. * Inside Jemalloc the file `pprof` is copyright Google Inc and released under the **three clause BSD license**. From c08557c0ee2c5c79509a23dd302be98a7aa95fdc Mon Sep 17 00:00:00 2001 From: Kristian Glass Date: Thu, 27 Dec 2012 23:40:52 +0000 Subject: [PATCH 0274/2880] Added blank line so command list renders --- topics/sentinel.md | 1 + 1 file changed, 1 insertion(+) diff --git a/topics/sentinel.md b/topics/sentinel.md index 790cc632eb..d2bea6849e 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -200,6 +200,7 @@ Sentinel commands --- The following is a list of accepted commands: + * **PING** this command simply returns PONG. * **SENTINEL masters** show a list of monitored masters and their state. * **SENTINEL slaves `<master name>`** show a list of slaves for this master, and their state.
From 730ebd87ba9710deb9e0ab782266edcde0155745 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 15 Jan 2013 13:50:40 +0100 Subject: [PATCH 0275/2880] Documentation for CLIENT GETNAME and SETNAME. --- commands.json | 18 ++++++++++++++++++ commands/client getname.md | 5 +++++ commands/client setname.md | 19 +++++++++++++++++++ 3 files changed, 42 insertions(+) create mode 100644 commands/client getname.md create mode 100644 commands/client setname.md diff --git a/commands.json b/commands.json index e0b7ffed49..1fc881e944 100644 --- a/commands.json +++ b/commands.json @@ -151,6 +151,24 @@ "since": "2.4.0", "group": "server" }, + "CLIENT GETNAME": { + "summary": "Get the current connection name", + "complexity": "O(1)", + "since": "2.6.9", + "group": "server" + }, + "CLIENT SETNAME": { + "summary": "Set the current connection name", + "complexity": "O(1)", + "since": "2.6.9", + "arguments": [ + { + "name": "connection-name", + "type": "string" + } + ], + "group": "server" + }, "CONFIG GET": { "summary": "Get the value of a configuration parameter", "arguments": [ diff --git a/commands/client getname.md b/commands/client getname.md new file mode 100644 index 0000000000..455b1ea2fb --- /dev/null +++ b/commands/client getname.md @@ -0,0 +1,5 @@ +The `CLIENT GETNAME` returns the name of the current connection as set by `CLIENT SETNAME`. Since every new connection starts without an associated name, if no name was assigned a null bulk reply is returned. + +@return + +@bulk-reply: The connection name, or a null bulk reply if no name is set. diff --git a/commands/client setname.md b/commands/client setname.md new file mode 100644 index 0000000000..2a552f3e27 --- /dev/null +++ b/commands/client setname.md @@ -0,0 +1,19 @@ +The `CLIENT SETNAME` command assigns a name to the current connection. + +The assigned name is displayed in the output of `CLIENT LIST` so that it is possible to identify the client that performed a given connection. 
+ +For instance when Redis is used in order to implement a queue, producers and consumers of messages may want to set the name of the connection according to their role. + +There is no limit to the length of the name that can be assigned other than the usual limits of the Redis string type (512 MB). However it is not possible to use spaces in the connection name as this would violate the format of the `CLIENT LIST` reply. + +It is possible to entirely remove the connection name by setting it to the empty string, which is not a valid connection name since it serves this specific purpose. + +The connection name can be inspected using `CLIENT GETNAME`. + +Every new connection starts without an assigned name. + +Tip: setting names to connections is a good way to debug connection leaks due to bugs in the application using Redis. + +@return + +@status-reply: `OK` if the connection name was successfully set. From 73c1143de7701ad8501e729d406dd4c7ba14c964 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ronny=20Lo=CC=81pez?= Date: Tue, 15 Jan 2013 22:57:53 +0100 Subject: [PATCH 0276/2880] Fixed command BITCOUNT arguments.
--- commands.json | 11 +++-------- 1 file changed, 3 insertions(+), 8 deletions(-) diff --git a/commands.json b/commands.json index 1fc881e944..bff4fdbb2e 100644 --- a/commands.json +++ b/commands.json @@ -45,14 +45,9 @@ "type": "key" }, { - "name": "start", - "type": "integer", - "optional": true - }, - { - "name": "end", - "type": "integer", - "optional": true + "name": ["start", "end"], + "type": ["integer", "integer"], + "multiple": true } ], "since": "2.6.0", From 49da29860dae9730861ab1e1dd8774490fe53546 Mon Sep 17 00:00:00 2001 From: John Weir Date: Thu, 17 Jan 2013 14:49:32 -0500 Subject: [PATCH 0277/2880] Document Pub/Sub and db number scope --- topics/pubsub.md | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/topics/pubsub.md b/topics/pubsub.md index 9765f47efb..d8772db905 100644 --- a/topics/pubsub.md +++ b/topics/pubsub.md @@ -49,6 +49,16 @@ issued by another client. The second element is the name of the originating channel, and the third argument is the actual message payload. +## Database & Scoping + +Pub/Sub has no relation to the key space. It was made to not interfere with +it on any level, including database numbers. + +Publishing on db 10 will be heard by a subscriber on db 1. + +If you need scoping of some kind, prefix the channels with the name of the +environment (test, staging, production, ...). + ## Wire protocol example SUBSCRIBE first second From 0e4a48f890821bd09546cb41f2a147ecf0b770a9 Mon Sep 17 00:00:00 2001 From: antirez Date: Sat, 19 Jan 2013 13:38:46 +0100 Subject: [PATCH 0278/2880] Document Redis signal handling.
--- topics/signals.md | 81 +++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 81 insertions(+) create mode 100644 topics/signals.md diff --git a/topics/signals.md b/topics/signals.md new file mode 100644 index 0000000000..4306f71997 --- /dev/null +++ b/topics/signals.md @@ -0,0 +1,81 @@ +Redis Signals Handling +=== + +This document provides information about how Redis reacts to the reception +of different POSIX signals such as `SIGTERM`, `SIGSEGV` and so forth. + +The information contained in this document is **only applicable to Redis version 2.6 or greater**. + +Handling of SIGTERM +--- + +The `SIGTERM` signal tells Redis to shut down gracefully. When this signal is +received the server does not actually exit as a result, but it schedules +a shutdown very similar to the one performed when the `SHUTDOWN` command is +called. The scheduled shutdown starts ASAP, specifically as soon as the +current command in execution terminates (if any), with a possible additional +delay of 0.1 seconds or less. + +In case the server is blocked by a Lua script that is taking too much time, +if the script is killable with `SCRIPT KILL` the scheduled shutdown will be +executed just after the script is killed, or if it terminates spontaneously. + +The shutdown performed in this condition includes the following actions: + +* If there is a background child saving the RDB file or performing an AOF rewrite, the child is killed. +* If the AOF is active, Redis calls the `fsync` system call on the AOF file descriptor in order to flush the buffers on disk. +* If Redis is configured to persist on disk using RDB files, a synchronous (blocking) save is performed. Since the save is performed in a synchronous way no additional memory is used. +* If the server is daemonized, the pid file is removed. +* If the Unix domain socket is enabled, it gets removed. +* The server exits with an exit code of zero.
+ +In case the RDB file can't be saved, the shutdown fails, and the server continues to run in order to ensure no data loss. Since Redis 2.6.11 no further attempt to shut down will be made unless a new `SIGTERM` is received or the `SHUTDOWN` command issued. + +Handling of SIGSEGV, SIGBUS, SIGFPE and SIGILL +--- + +The following signals are handled as a Redis crash: + +* SIGSEGV +* SIGBUS +* SIGFPE +* SIGILL + +Once one of these signals is trapped, Redis aborts any current operation and performs the following actions: + +* A bug report is produced on the log file. This includes a stack trace, dump of registers, and information about the state of clients. +* Since Redis 2.8 (currently a development version) a fast memory test is performed as a first check of the reliability of the crashing system. +* If the server was daemonized, the pid file is removed. +* Finally the server unregisters its own signal handler for the received signal, and sends the same signal again to itself, in order to make sure that the default action is performed, for instance dumping the core on the file system. + +What happens when a child process gets killed +--- + +When the child performing the Append Only File rewrite gets killed by a signal, +Redis handles this as an error and discards the (probably partial or corrupted) +AOF file. The rewrite will be re-triggered again later. + +When the child performing an RDB save is killed Redis will handle the +condition as a more severe error, because while the effect of a lack of +AOF file rewrite is the AOF file enlargement, the effect of failed RDB file +creation is lack of durability. + +As a result of the child producing the RDB file being killed by a signal, +or when the child exits with an error (non zero exit code), Redis enters +a special error condition where no further write command is accepted. + +* Redis will continue to reply to read commands. +* Redis will reply to all write commands with a `MISCONFIG` error.
+ +This error condition is cleared only once it is again possible to create +an RDB file with success. + +Killing the RDB saving child without triggering an error condition +--- + +However sometimes the user may want to kill the RDB saving child without +generating an error. Since Redis version 2.6.10 this can be done using the +special signal `SIGUSR1` that is handled in a special way: +it kills the child process as any other signal, but the parent process will +not detect this as a critical error and will continue to serve write +requests as usual. From a5ab1b1c8f9e2d89e25d1c974b11d58034303ccc Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 21 Jan 2013 11:47:55 +0100 Subject: [PATCH 0279/2880] Protocol specification is now less embarassing. --- topics/protocol.md | 245 +++++++++++++++++++++++---------------------- 1 file changed, 125 insertions(+), 120 deletions(-) diff --git a/topics/protocol.md b/topics/protocol.md index 534cdf889b..792a9e0bce 100644 --- a/topics/protocol.md +++ b/topics/protocol.md @@ -24,6 +24,7 @@ The new unified request protocol The new unified protocol was introduced in Redis 1.2, but it became the standard way for talking with the Redis server in Redis 2.0. +This is the protocol you should implement in your Redis client. In the unified protocol all the arguments sent to the Redis server are binary safe. This is the general form: @@ -46,35 +47,37 @@ See the following example: myvalue This is how the above command looks as a quoted string, so that it is possible -to see the exact value of every byte in the query: +to see the exact value of every byte in the query, including newlines. "*3\r\n$3\r\nSET\r\n$5\r\nmykey\r\n$7\r\nmyvalue\r\n" As you will see in a moment this format is also used in Redis replies. The -format used for every argument `$6\r\nmydata\r\n` is called a Bulk Reply. -While the actual unified request protocol is what Redis uses to return list of -items, and is called a @multi-bulk-reply.
It is just the sum of N different Bulk -Replies prefixed by a `*<argc>\r\n` string where `<argc>` is the number of -arguments (Bulk Replies) that will follow. +format used for the single argument `$6\r\nmydata\r\n` is called a **Bulk Reply**. + +The unified request protocol is what Redis already uses in replies in order +to send list of items to clients, and is called a **Multi Bulk Reply**. +It is just the sum of `N` different Bulk Replies prefixed by a `*<argc>\r\n` +string where `<argc>` is the number of arguments (Bulk Replies) that +will follow. Replies ------- -Redis will reply to commands with different kinds of replies. It is possible to -check the kind of reply from the first byte sent by the server: +Redis will reply to commands with different kinds of replies. It is always +possible to detect the kind of reply from the first byte sent by the server: -* With a single line reply the first byte of the reply will be "+" -* With an error message the first byte of the reply will be "-" -* With an integer number the first byte of the reply will be ":" -* With bulk reply the first byte of the reply will be "$" -* With multi-bulk reply the first byte of the reply will be "`*`" +* In a Status Reply the first byte of the reply is "+" +* In an Error Reply the first byte of the reply is "-" +* In an Integer Reply the first byte of the reply is ":" +* In a Bulk Reply the first byte of the reply is "$" +* In a Multi Bulk Reply the first byte of the reply is "`*`" Status reply ----------------- -A status reply (or: single line reply) is in the form of a single line string +A Status Reply (or: single line reply) is in the form of a single line string starting with "+" terminated by "\r\n". For example: +OK @@ -82,22 +85,38 @@ starting with "+" terminated by "\r\n". For example: The client library should return everything after the "+", that is, the string "OK" in this example.
+Status replies are not binary safe and can't include newlines, and are usually +returned by commands that don't need to return data, but just some kind of +status. Status replies have a minimal overhead of just three bytes (the initial +"+" and the final CRLF). + Error reply ----------- -Errors are sent similar to status replies. The only difference is that the first -byte is "-" instead of "+". +Error Replies are very similar to Status Replies. The only difference is that +the first byte is "-" instead of "+". -Error replies are only sent when something strange happened, for instance if +Error replies are only sent when something wrong happens, for instance if you try to perform an operation against the wrong data type, or if the command does not exist and so forth. So an exception should be raised by the library client when an Error Reply is received. -The redis server usually precedes error messages with "ERR". Some client libraries -assume this, so you may wish to add "ERR" after the minus sign if you are -writing a server implementation. +A few examples of error replies are the following: + + -ERR unknown command 'foobar' + -WRONGTYPE Operation against a key holding the wrong kind of value + +The first word after the "-", up to the first space or newline, represents +the kind of error returned. + +`ERR` is the generic error, while `WRONGTYPE` is a more specific error. +A client implementation may return different kinds of exceptions for different +errors, or may provide a generic way to trap errors by directly providing +the error name to the caller as a string. + +However such a feature should not be considered vital as it is rarely useful, and a limited client implementation may simply return a generic error condition, such as `false`. Integer reply ------------- This type of reply is just a CRLF terminated string representing an integer, prefixed by a ":" byte. For example ":0\r\n", or ":1000\r\n" are integer replies.
-With commands like INCR or LASTSAVE using the integer reply to actually return -a value there is no special meaning for the returned integer. It is just an -incremental number for INCR, a UNIX time for LASTSAVE and so on. +Examples of commands returning an integer are `INCR` and `LASTSAVE`. +There is no special meaning for the returned integer, it is just an +incremental number for `INCR`, a UNIX time for `LASTSAVE` and so forth, however +the returned integer is guaranteed to be in the range of a signed 64 bit +integer. -Some commands like EXISTS will return 1 for true and 0 for false. +Integer replies are also extensively used in order to return true or false. +For instance commands like `EXISTS` or `SISMEMBER` will return 1 for true +and 0 for false. -Other commands like SADD, SREM and SETNX will return 1 if the operation was -actually done, 0 otherwise. +Other commands like `SADD`, `SREM` and `SETNX` will return 1 if the operation +was actually performed, 0 otherwise. -The following commands will reply with an integer reply: SETNX, DEL, EXISTS, -INCR, INCRBY, DECR, DECRBY, DBSIZE, LASTSAVE, RENAMENX, MOVE, LLEN, SADD, SREM, -SISMEMBER, SCARD +The following commands will reply with an integer reply: `SETNX`, `DEL`, +`EXISTS`, `INCR`, `INCRBY`, `DECR`, `DECRBY`, `DBSIZE`, `LASTSAVE`, +`RENAMENX`, `MOVE`, `LLEN`, `SADD`, `SREM`, `SISMEMBER`, `SCARD`. @@ -126,7 +149,7 @@ Bulk replies ------------ Bulk replies are used by the server in order to return a single binary safe -string. +string up to 512 MB in length. C: GET mykey S: $6\r\nfoobar\r\n @@ -144,6 +167,8 @@ value -1 as data length, example: C: GET nonexistingkey S: $-1 +This is called a **NULL Bulk Reply**. + The client library API should not return an empty string, but a nil object, when the requested object does not exist. For example a Ruby library should return 'nil' while a C library should return NULL (or set a special flag in the @@ -154,57 +179,77 @@ reply object), and so forth. 
Multi-bulk replies ------------------ -Commands like LRANGE need to return multiple values (every element of the list -is a value, and LRANGE needs to return more than a single element). This is -accomplished using multiple bulk writes, prefixed by an initial line indicating -how many bulk writes will follow. The first byte of a multi bulk reply is -always `*`. Example: +Commands like `LRANGE` need to return multiple values (every element of a list +is a value, and `LRANGE` needs to return more than a single element). This is +accomplished using Multiple Bulk Replies. + +A Multi bulk reply is used to return an array of other replies. Every element +of a Multi Bulk Reply can be of any kind, including a nested Multi Bulk Reply. + +Multi Bulk Replies are sent using `*` as the first byte, followed by a string +representing the number of replies (elements of the array) that will follow, +followed by CR LF. C: LRANGE mylist 0 3 - s: *4 - s: $3 - s: foo - s: $3 - s: bar - s: $5 - s: Hello - s: $5 - s: World + S: *4 + S: $3 + S: foo + S: $3 + S: bar + S: $5 + S: Hello + S: $5 + S: World + +(Note: in the above example every string sent by the server has a trailing +CR LF newline). As you can see the multi bulk reply is exactly the same format used in order -to send commands to the Redis server using the unified protocol. +to send commands to the Redis server using the unified protocol. THe sole +differene is that while for the unified protocol only Bulk Replies are sent +as elements, with Multi Bulk Replies sent by the server as response to a +command every kind of reply type is valid as element of the Multi Bulk Reply. + +For instance a list of four integers and a binary safe string can be sent as +a Multi Bulk Reply in the following format: -To send integers in a multibulk reply, just send a colon following by the -integer like you would for a regular integer reply. Do not send the size -before sending the integer. 
+ *5\r\n + :1\r\n + :2\r\n + :3\r\n + :4\r\n + $6\r\n + foobar\r\n -The first line the server sent is `*4\r\n` in order to specify that four bulk -replies will follow. Then every bulk write is transmitted. +The first line the server sent is `*5\r\n` in order to specify that five +replies will follow. Then every reply constituting the items of the +Multi Bulk reply is transmitted. -If the specified key does not exist, the key is considered to hold an empty -list and the value `0` is sent as multi bulk count. Example: +Empty Multi Bulk Reply are allowed, as in the following example: C: LRANGE nokey 0 1 - S: *0 + S: *0\r\n -When the `BLPOP` command times out, it returns the nil multi bulk reply. This -type of multi bulk has count `-1` and should be interpreted as a nil value. -Example: +Also the concept of Null Multi Bulk Reply exists. + +For instance when the `BLPOP` command times out, it returns a Null Multi Bulk +Reply, that has a count of `-1` as in the following example: C: BLPOP key 1 - S: *-1 + S: *-1\r\n -A client library API *SHOULD* return a nil object and not an empty list when this -happens. This is necessary to distinguish between an empty list and an error -condition (for instance the timeout condition of the `BLPOP` command). +A client library API should return a null object and not an empty Array when +Redis replies with a Null Multi Bulk Reply. This is necessary to distinguish +between an empty list and a different condition (for instance the timeout +condition of the `BLPOP` command). -Nil elements in Multi-Bulk replies +Null elements in Multi-Bulk replies ---------------------------------- Single elements of a multi bulk reply may have -1 length, in order to signal that this elements are missing and not empty strings. This can happen with the SORT command when used with the GET _pattern_ option when the specified key is -missing. Example of a multi bulk reply containing an empty element: +missing. 
Example of a multi bulk reply containing a null element: S: *3 S: $3 @@ -217,78 +262,38 @@ The second element is nul. The client library should return something like this: ["foo",nil,"bar"] +Note that this is not an exception to what said in the previous sections, but +just an example to further specify the protocol. + Multiple commands and pipelining -------------------------------- A client can use the same connection in order to issue multiple commands. Pipelining is supported so multiple commands can be sent with a single -write operation by the client, it is not needed to read the server reply -in order to issue the next command. All the replies can be read at the end. - -Usually Redis server and client will have a very fast link so this is not -very important to support this feature in a client implementation, still -if an application needs to issue a very large number of commands in short -time to use pipelining can be much faster. - -The old protocol for sending commands -------------------------------------- - -Before of the Unified Request Protocol Redis used a different protocol to send -commands, that is still supported since it is simpler to type by hand via -telnet. In this protocol there are two kind of commands: - -* Inline commands: simple commands where arguments are just space separated - strings. No binary safeness is possible. -* Bulk commands: bulk commands are exactly like inline commands, but the last - argument is handled in a special way in order to allow for a binary-safe last - argument. +write operation by the client, without the need to to read the server reply +of the previous command before issuing the next command. +All the replies can be read at the end. Inline Commands --------------- -The simplest way to send Redis a command is via **inline commands**. 
The -following is an example of a server/client chat using an inline command (the -server chat starts with S:, the client chat with C:) +Sometimes you have only `telnet` in your hands and you need to send a command +to the Redis server. While the Redis protocol is simple to implement it is +not ideal to use in interactive sessions, and `redis-cli` may not always be +available. For this reason Redis also accepts commands in a special way that +is designed for humans, and is called the **inline command** format. + +The following is an example of a server/client chat using an inline command +(the server chat starts with S:, the client chat with C:) C: PING S: +PONG -The following is another example of an INLINE command returning an integer: +The following is another example of an inline command returning an integer: C: EXISTS somekey S: :0 -Since 'somekey' does not exist the server returned ':0'. - -Note that the EXISTS command takes one argument. Arguments are separated -by spaces. - -Bulk commands -------------- - -Some commands when sent as inline commands require a special form in order to -support a binary safe last argument. This commands will use the last argument -for a "byte count", then the bulk data is sent (that can be binary safe since -the server knows how many bytes to read). - -See for instance the following example: - - C: SET mykey 6 - C: foobar - S: +OK - -The last argument of the command is '6'. This specify the number of DATA bytes -that will follow, that is, the string "foobar". Note that even this bytes are -terminated by two additional bytes of CRLF. - -All the bulk commands are in this exact form: instead of the last argument the -number of bytes that will follow is specified, followed by the bytes composing -the argument itself, and CRLF. 
In order to be more clear for the programmer -this is the string sent by the client in the above sample: - - "SET mykey 6\r\nfoobar\r\n" - -Redis has an internal list of what command is inline and what command is bulk, -so you have to send this commands accordingly. It is strongly suggested to use -the new Unified Request Protocol instead. - +Basically you simply write space-separated arguments in a telnet session. +Since no command starts with `*` that is instead used in the unified request +protocol, Redis is able to detect this condition and parse your command. From f8aacc29456c6f8c0f1f492eac3fb58be474f9c2 Mon Sep 17 00:00:00 2001 From: Jan-Erik Rediger Date: Mon, 21 Jan 2013 13:21:27 +0100 Subject: [PATCH 0280/2880] Fixed some typos. --- topics/protocol.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/topics/protocol.md b/topics/protocol.md index 792a9e0bce..fa90409007 100644 --- a/topics/protocol.md +++ b/topics/protocol.md @@ -205,8 +205,8 @@ followed by CR LF. CR LF newline). As you can see the multi bulk reply is exactly the same format used in order -to send commands to the Redis server using the unified protocol. THe sole -differene is that while for the unified protocol only Bulk Replies are sent +to send commands to the Redis server using the unified protocol. The sole +difference is that while for the unified protocol only Bulk Replies are sent as elements, with Multi Bulk Replies sent by the server as response to a command every kind of reply type is valid as element of the Multi Bulk Reply. @@ -258,7 +258,7 @@ missing. Example of a multi bulk reply containing a null element: S: $3 S: bar -The second element is nul. The client library should return something like this: +The second element is null. 
The client library should return something like this: ["foo",nil,"bar"] From 6c413ab6904a40a430d5c709c6ad063153ceb21b Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 21 Jan 2013 13:29:54 +0100 Subject: [PATCH 0281/2880] Added a new section in the protocol doc about high performance parsers. --- topics/protocol.md | 43 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 43 insertions(+) diff --git a/topics/protocol.md b/topics/protocol.md index 792a9e0bce..b4a909918c 100644 --- a/topics/protocol.md +++ b/topics/protocol.md @@ -297,3 +297,46 @@ The following is another example of an inline command returning an integer: Basically you simply write space-separated arguments in a telnet session. Since no command starts with `*` that is instead used in the unified request protocol, Redis is able to detect this condition and parse your command. + +High performance parser for the Redis protocol +---------------------------------------------- + +While the Redis protocol is very human readable and easy to implement, it can +be implemented with performance similar to that of a binary protocol. + +The Redis protocol uses prefixed lengths to transfer bulk data, so there is +never need to scan the payload for special characters like it happens for +instance with JSON, nor to quote the payload that needs to be sent to the +server. + +The Bulk and Multi Bulk lengths can be processed with code that performs +a single operation per character while at the same time scanning for the +CR character, like the following C code: + +``` +#include <stdio.h> + +int main(void) { + const char *p = "$123\r\n"; + int len = 0; + + p++; + while(*p != '\r') { + len = (len*10)+(*p - '0'); + p++; + } + + /* Now p points at '\r', and the parsed length is in len. */ + printf("%d\n", len); + return 0; +} +``` + +After the first CR is identified, it can be skipped along with the following +LF without any processing.
+ Then the bulk data can be read using a single +read operation that does not inspect the payload in any way. Finally +the remaining CR and LF characters are discarded without any processing. + +While comparable in performance to a binary protocol the Redis protocol is +significantly simpler to implement in most very high level languages, +reducing the number of bugs in client software. From 237a25ef56e27bd025f120a7bc0fa5d3f0aed6e9 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 23 Jan 2013 12:14:34 +0100 Subject: [PATCH 0282/2880] Clients handling documentation. --- topics/clients.md | 164 ++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 164 insertions(+) create mode 100644 topics/clients.md diff --git a/topics/clients.md b/topics/clients.md new file mode 100644 index 0000000000..18cf35ac20 --- /dev/null +++ b/topics/clients.md @@ -0,0 +1,164 @@ +Redis Clients Handling +=== + +This document provides information about how Redis handles clients from the +point of view of the network layer: connections, timeouts, buffers, and +other similar topics are covered here. + +The information contained in this document is **only applicable to Redis version 2.6 or greater**. + +How client connections are accepted +--- + +Redis accepts client connections on the configured listening TCP port and +on the Unix socket if enabled. When a new client connection is accepted +the following operations are performed: + +* The client socket is put in non-blocking state since Redis uses multiplexing and non-blocking I/O. +* The `TCP_NODELAY` option is set in order to ensure that we don't have delays in our connection. +* A *readable* file event is created so that Redis is able to collect the client queries as soon as new data is available to be read on the socket.
+ +After the client is initialized, Redis checks if we are already at the limit +of the number of clients that it is possible to handle simultaneously +(this is configured using the `maxclients` configuration directive, see the +next section of this document for further information). + +In case it can't accept the current client because the maximum number of clients +was already accepted, Redis tries to send an error to the client in order to +make it aware of this condition, and closes the connection immediately. +The error message will be able to reach the client even if the connection is +closed immediately by Redis because the new socket output buffer is usually +big enough to contain the error, so the kernel will handle the transmission +of the error. + +In what order clients are served +--- + +The order is determined by a combination of the client socket file descriptor +number and the order in which the kernel reports events, so the order is to be +considered as unspecified. + +However Redis does the following two things when serving clients: + +* It only performs a single `read()` system call every time there is something new to read from the client socket, in order to ensure that if we have multiple clients connected, and a few very demanding clients sending queries at a high rate, other clients are not penalized and will not experience a bad latency figure. +* However once new data is read from a client, all the queries contained in the current buffers are processed sequentially. This improves locality and avoids iterating a second time to see if there are clients that need some processing time. + +Maximum number of clients +--- + +In Redis 2.4 there was a hard-coded limit on the maximum number of clients +that it was possible to handle simultaneously. + +In Redis 2.6 this limit is dynamic: by default it is set to 10000 clients, unless +otherwise stated by the `maxclients` directive in redis.conf.
+ +However Redis checks with the kernel the maximum number of file +descriptors that we are able to open (the *soft limit* is checked). If the +limit is smaller than the maximum number of clients we want to handle, plus +32 (that is the number of file descriptors Redis reserves for internal uses), +then the maximum number of clients is modified by Redis to match the number +of clients we are *really able to handle* under the current operating system +limit. + +When the configured number of maximum clients cannot be honored, the condition +is logged at startup as in the following example: + +``` +$ ./redis-server --maxclients 100000 +[41422] 23 Jan 11:28:33.179 # Unable to set the max number of files limit to 10000032 (Invalid argument), setting the max clients configuration to 10112. +``` + +When Redis is configured in order to handle a specific number of clients it +is a good idea to make sure that the operating system limit to the maximum +number of file descriptors per process is also set accordingly. + +Under Linux these limits can be set both in the current session and as a +system-wide setting with the following commands: + +* ulimit -Sn 100000 # This will only work if hard limit is big enough. +* sysctl -w fs.file-max=100000 + +Output buffers limits +--- + +Redis needs to handle a variable-length output buffer for every client, since +a command can produce a big amount of data that needs to be transferred to the +client. + +However it is possible that a client sends more commands producing more output +to serve at a faster rate than that at which Redis can send the existing output to the +client. This is especially true with Pub/Sub clients in case a client is not +able to process new messages fast enough. + +Both conditions will cause the client output buffer to grow and consume +more and more memory. For this reason by default Redis sets limits to the +output buffer size for different kinds of clients.
When the limit is reached +the client connection is closed and the event logged in the Redis log file. + +There are two kinds of limits Redis uses: + +* The **hard limit** is a fixed limit that when reached will make Redis close the client connection as soon as possible. +* The **soft limit** instead is a limit that depends on the time, for instance a soft limit of 32 megabytes per 10 seconds means that if the client has an output buffer bigger than 32 megabytes for, continuously, 10 seconds, the connection gets closed. + +Different kinds of clients have different default limits: + +* **Normal clients** have a default limit of 0, that means, no limit at all, because most normal clients use blocking implementations sending a single command and waiting for the reply to be completely read before sending the next command, so it is almost never desirable to close the connection in case of a normal client. +* **Pub/Sub clients** have a default hard limit of 32 megabytes and a soft limit of 8 megabytes per 60 seconds. +* **Slaves** have a default hard limit of 256 megabytes and a soft limit of 64 megabytes per 60 seconds. + +It is possible to change the limit at runtime using the `CONFIG SET` command or in a permanent way using the Redis configuration file `redis.conf`. See the example `redis.conf` in the Redis distribution for more information about how to set the limit. + +Query buffer hard limit +--- + +Every client is also subject to a query buffer limit. This is a non-configurable hard limit that will close the connection when the client query buffer (that is the buffer we use to accumulate commands from the client) reaches 1 GB, and is actually only an extreme limit to avoid a server crash in case of client or server software bugs. + +Client timeouts +--- + +By default recent versions of Redis don't close the connection with the client +if the client is idle for many seconds: the connection will remain open forever.
+ +However if you don't like this behavior, you can configure a timeout, so that +if the client is idle for more than the specified number of seconds, the client connection will be closed. + +You can configure this limit via `redis.conf` or simply using `CONFIG SET timeout `. + +Note that the timeout only applies to normal clients and it **does not apply to Pub/Sub clients**, since a Pub/Sub connection is a *push style* connection, so a client that is idle is the norm. + +Even if by default connections are not subject to a timeout, there are two conditions when it makes sense to set a timeout: + +* Mission critical applications where a bug in the client software may saturate the Redis server with idle connections, causing service disruption. +* As a debugging mechanism in order to be able to connect with the server if a bug in the client software saturates the server with idle connections, making it impossible to interact with the server. + +Timeouts are not to be considered very precise: Redis avoids setting timer events or running O(N) algorithms in order to check idle clients, so the check is performed incrementally from time to time. This means that it is possible that while the timeout is set to 10 seconds, the client connection will be closed, for instance, after 12 seconds if many clients are connected at the same time. + +CLIENT command +--- + +The Redis `CLIENT` command allows you to inspect the state of every connected client, to kill a specific client, and to set names for connections. It is a very powerful debugging tool if you use Redis at scale.
+ +`CLIENT LIST` is used in order to obtain a list of connected clients and their state: + +``` +redis 127.0.0.1:6379> client list +addr=127.0.0.1:52555 fd=5 name= age=855 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 events=r cmd=client +addr=127.0.0.1:52787 fd=6 name= age=6 idle=5 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=ping +``` + +In the above example session two clients are connected to the Redis server. The meaning of a few of the most interesting fields is the following: + +* **addr**: The client address, that is, the client IP and the remote port number it used to connect with the Redis server. +* **fd**: The client socket file descriptor number. +* **name**: The client name as set by `CLIENT SETNAME`. +* **age**: The number of seconds the connection existed for. +* **idle**: The number of seconds the connection is idle. +* **flags**: The kind of client (N means normal client, check the [full list of flags](http://redis.io/commands/client-list)). +* **omem**: The amount of memory used by the client for the output buffer. +* **cmd**: The last executed command. + +See the [CLIENT LIST](http://redis.io/commands/client-list) documentation for the full list of fields and their meaning. + +Once you have the list of clients, you can easily close the connection with a client using the `CLIENT KILL` command specifying the client address as argument. + +The commands `CLIENT SETNAME` and `CLIENT GETNAME` can be used to set and get the connection name. From ed6735bc6037c017a3f0e30c4a9e635897d21742 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 23 Jan 2013 16:42:25 +0100 Subject: [PATCH 0283/2880] Fixed typo. 
--- topics/clients.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/clients.md b/topics/clients.md index 18cf35ac20..1cb603ce3c 100644 --- a/topics/clients.md +++ b/topics/clients.md @@ -65,7 +65,7 @@ is logged at startup as in the following example: ``` $ ./redis-server --maxclients 100000 -[41422] 23 Jan 11:28:33.179 # Unable to set the max number of files limit to 10000032 (Invalid argument), setting the max clients configuration to 10112. +[41422] 23 Jan 11:28:33.179 # Unable to set the max number of files limit to 100032 (Invalid argument), setting the max clients configuration to 10112. ``` When Redis is configured in order to handle a specific number of clients it From 1e6f1567f1c3de39eadb391837c87a5b51a0186a Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 29 Jan 2013 14:01:41 +0100 Subject: [PATCH 0284/2880] Notifications doc. --- topics/notifications.md | 164 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 164 insertions(+) create mode 100644 topics/notifications.md diff --git a/topics/notifications.md b/topics/notifications.md new file mode 100644 index 0000000000..1e42325c8b --- /dev/null +++ b/topics/notifications.md @@ -0,0 +1,164 @@ +Redis Keyspace Notifications +=== + +**IMPORTANT** Keyspace notifications are a feature only available in development +versions of Redis. This documentation and the implementation of the feature are +likely to change in the coming weeks. + +Feature overview +--- + +Keyspace notifications allow clients to subscribe to Pub/Sub channels in order +to receive events affecting the Redis data set in some way. + +Examples of the events that it is possible to receive are the following: + +* All the commands affecting a given key. +* All the keys receiving an LPUSH operation. +* All the keys expiring in database 0. + +Events are delivered using the normal Pub/Sub layer of Redis, so clients +implementing Pub/Sub are able to use this feature without modifications.
+ +Because Redis Pub/Sub is *fire and forget*, currently there is no way to use this +feature if your application demands **reliable notification** of events, that is, +if your Pub/Sub client disconnects, and reconnects later, all the events +delivered during the time the client was disconnected are lost. + +In the future there are plans to allow for more reliable delivery of +events, but probably this will be addressed at a more general level, either +bringing reliability to Pub/Sub itself, or allowing Lua scripts to intercept +Pub/Sub messages to perform operations like pushing the events into a list. + +Type of events +--- + +Keyspace notifications are implemented by sending two distinct types of events +for every operation affecting the Redis data space. For instance a `DEL` +operation targeting the key named `mykey` in database `0` will trigger +the delivery of two messages, exactly equivalent to the following two +`PUBLISH` commands: + + PUBLISH __keyspace@0__:mykey del + PUBLISH __keyevent@0__:del mykey + +It is easy to see how one channel allows listening to all the events targeting +the key `mykey` and the other channel allows obtaining information about +all the keys that are the target of a `del` operation. + +The first kind of event, with the `keyspace` prefix in the channel, is called +a **Key-space notification**, while the second, with the `keyevent` prefix, +is called a **Key-event notification**. + +In the above example a `del` event was generated for the key `mykey`. +What happens is that: + +* The Key-space channel receives as message the name of the event. +* The Key-event channel receives as message the name of the key. + +It is possible to enable only one kind of notification in order to deliver +just the subset of events we are interested in. + +Configuration +--- + +By default keyspace event notifications are disabled because, while not +very costly, the feature uses some CPU power.
Notifications are enabled +using the `notify-keyspace-events` parameter of redis.conf or via **CONFIG SET**. + +Setting the parameter to the empty string disables notifications. +In order to enable the feature a non-empty string is used, composed of multiple +characters, where every character has a special meaning according to the +following table: + + K Keyspace events, published with __keyspace@__ prefix. + E Keyevent events, published with __keyevent@__ prefix. + g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ... + $ String commands + l List commands + s Set commands + h Hash commands + z Sorted set commands + x Expired events (events generated every time a key expires) + e Evicted events (events generated when a key is evicted for maxmemory) + A Alias for g$lshzxe, so that the "AKE" string means all the events. + +At least `K` or `E` should be present in the string, otherwise no event +will be delivered regardless of the rest of the string. + +For instance to enable just Key-space events for lists, the configuration +parameter must be set to `Kl`, and so forth. + +The string `KEA` can be used to enable every possible event. + +Events generated by different commands +--- + +Different commands generate different kinds of events according to the following list. + +* `DEL` generates a `del` event for every deleted key. +* `RENAME` generates two events, a `rename_from` event for the source key, and a `rename_to` event for the destination key. +* `EXPIRE` generates an `expire` event when an expire is set to the key, or a `del` event every time setting an expire results in the key being deleted (see `EXPIRE` documentation for more info). +* `SORT` generates a `sortstore` event when `STORE` is used to set a new key. If the resulting list is empty, and the `STORE` option is used, and there was already an existing key with that name, the result is that the key is deleted, so a `del` event is generated in this condition.
+ +* `SET` and all its variants (`SETEX`, `SETNX`, `GETSET`) generate `set` events. However `SETEX` will also generate an `expire` event. +* `MSET` generates a separate `set` event for every key. +* `SETRANGE` generates a `setrange` event. +* `INCR`, `DECR`, `INCRBY`, `DECRBY` commands all generate `incrby` events. +* `INCRBYFLOAT` generates an `incrbyfloat` event. +* `APPEND` generates an `append` event. +* `LPUSH` and `LPUSHX` generate a single `lpush` event, even in the variadic case. +* `RPUSH` and `RPUSHX` generate a single `rpush` event, even in the variadic case. +* `RPOP` generates an `rpop` event. Additionally a `del` event is generated if the key is removed because the last element from the list was popped. +* `LPOP` generates an `lpop` event. Additionally a `del` event is generated if the key is removed because the last element from the list was popped. +* `LINSERT` generates an `linsert` event. +* `LSET` generates an `lset` event. +* `LTRIM` generates an `ltrim` event, and additionally a `del` event if the resulting list is empty and the key is removed. +* `RPOPLPUSH` and `BRPOPLPUSH` generate an `rpop` event and an `lpush` event. In both cases the order is guaranteed (the `lpush` event will always be delivered after the `rpop` event). Additionally a `del` event will be generated if the resulting list is zero length and the key is removed. +* `HSET`, `HSETNX` and `HMSET` all generate a single `hset` event. +* `HINCRBY` generates an `hincrby` event. +* `HINCRBYFLOAT` generates an `hincrbyfloat` event. +* `HDEL` generates a single `hdel` event, and an additional `del` event if the resulting hash is empty and the key is removed. +* `SADD` generates a single `sadd` event, even in the variadic case. +* `SREM` generates a single `srem` event, and an additional `del` event if the resulting set is empty and the key is removed. +* `SMOVE` generates an `srem` event for the source key, and an `sadd` event for the destination key.
+ +* `SPOP` generates an `spop` event, and an additional `del` event if the resulting set is empty and the key is removed. +* `SINTERSTORE`, `SUNIONSTORE`, `SDIFFSTORE` generate `sinterstore`, `sunionstore`, `sdiffstore` events respectively. In the special case where the resulting set is empty, and the key where the result is stored already exists, a `del` event is generated since the key is removed. +* `ZINCRBY` generates a `zincr` event. +* `ZADD` generates a single `zadd` event even when multiple elements are added. +* `ZREM` generates a single `zrem` event even when multiple elements are deleted. When the resulting sorted set is empty and the key is removed, an additional `del` event is generated. +* `ZREMRANGEBYSCORE` generates a single `zremrangebyscore` event. When the resulting sorted set is empty and the key is removed, an additional `del` event is generated. +* `ZREMRANGEBYRANK` generates a single `zremrangebyrank` event. When the resulting sorted set is empty and the key is removed, an additional `del` event is generated. +* `ZINTERSTORE` and `ZUNIONSTORE` respectively generate `zinterstore` and `zunionstore` events. In the special case where the resulting sorted set is empty, and the key where the result is stored already exists, a `del` event is generated since the key is removed. +* Every time a key with a time to live associated is removed from the data set because it expired, an `expired` event is generated. +* Every time a key is evicted from the data set in order to free memory as a result of the `maxmemory` policy, an `evicted` event is generated. + +**IMPORTANT** all the commands generate events only if the target key is really modified. For instance an `SREM` deleting a non-existing element from a Set will not actually change the value of the key, so no event will be generated.
+ +If in doubt about how events are generated for a given command, the simplest +thing to do is to try it yourself: + + $ redis-cli config set notify-keyspace-events KEA + $ redis-cli --csv psubscribe '__key*__:*' + Reading messages... (press Ctrl-C to quit) + "psubscribe","__key*__:*",1 + +At this point use `redis-cli` in another terminal to send commands to the +Redis server and watch the events generated: + + "pmessage","__key*__:*","__keyspace@0__:foo","set" + "pmessage","__key*__:*","__keyevent@0__:set","foo" + ... + +Timing of expired events +--- + +Keys with a time to live associated are expired by Redis in two ways: + +* When the key is accessed by a command and is found to be expired. +* Via a background system that looks for expired keys in the background, incrementally, in order to be able to also collect keys that are never accessed. + +The `expired` events are generated when a key is accessed and is found to be expired by one of the above systems; as a result there are no guarantees that the Redis server will be able to generate the `expired` event at the time the key time to live reaches the value of zero. + +If no command targets the key constantly, and there are many keys with a TTL associated, there can be a significant delay between the time the key time to live drops to zero, and the time the `expired` event is generated. + +Basically `expired` events **are generated when the Redis server deletes the key** and not when the time to live theoretically reaches the value of zero. From ee26e75bb4b8047308c68f427e0a679f8399c78f Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 29 Jan 2013 14:03:09 +0100 Subject: [PATCH 0285/2880] markdown fixes.
--- topics/notifications.md | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/topics/notifications.md b/topics/notifications.md index 1e42325c8b..a10932fcbd 100644 --- a/topics/notifications.md +++ b/topics/notifications.md @@ -71,17 +71,17 @@ In order to enable the feature a non-empty string is used, composed of multiple characters, where every character has a special meaning according to the following table: - K Keyspace events, published with __keyspace@__ prefix. - E Keyevent events, published with __keyevent@__ prefix. - g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ... - $ String commands - l List commands - s Set commands - h Hash commands - z Sorted set commands - x Expired events (events generated every time a key expires) - e Evicted events (events generated when a key is evicted for maxmemory) - A Alias for g$lshzxe, so that the "AKE" string means all the events. + K Keyspace events, published with __keyspace@__ prefix. + E Keyevent events, published with __keyevent@__ prefix. + g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ... + $ String commands + l List commands + s Set commands + h Hash commands + z Sorted set commands + x Expired events (events generated every time a key expires) + e Evicted events (events generated when a key is evicted for maxmemory) + A Alias for g$lshzxe, so that the "AKE" string means all the events. At least `K` or `E` should be present in the string, otherwise no event will be delivered regardless of the rest of the string. From e76f9979ef4a950348e3788cc52c7db35bfd2ac5 Mon Sep 17 00:00:00 2001 From: Jonathan Leibiusky Date: Tue, 29 Jan 2013 20:06:51 -0200 Subject: [PATCH 0286/2880] Update commands/ttl.md Since Redis 2.8 the response can be also -2. 
--- commands/ttl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/ttl.md b/commands/ttl.md index 0d478c520a..8be2d28b05 100644 --- a/commands/ttl.md +++ b/commands/ttl.md @@ -4,7 +4,7 @@ given key will continue to be part of the dataset. @return -@integer-reply: TTL in seconds or `-1` when `key` does not exist or does not +@integer-reply: TTL in seconds, `-2` when `key` does not exist or `-1` when `key` does not have a timeout. @examples From aa5e101285a8d6d077690550965b6957d56418c1 Mon Sep 17 00:00:00 2001 From: Sam Pullara Date: Fri, 1 Feb 2013 08:26:51 -0800 Subject: [PATCH 0287/2880] INFO now supports an optional section parameter A user of my client pointed out that my generated clients do not support the optional INFO parameter. This is because it was missing from the machine API description. --- commands.json | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/commands.json b/commands.json index 1fc881e944..28e2de99f5 100644 --- a/commands.json +++ b/commands.json @@ -723,6 +723,13 @@ }, "INFO": { "summary": "Get information and statistics about the server", + "arguments": [ + { + "name": "section", + "type": "string", + "optional": true + } + ], "since": "1.0.0", "group": "server" }, From c18fd4626e0856d2fec6ff3f98315e52a5d72192 Mon Sep 17 00:00:00 2001 From: Jeremy Ong Date: Sat, 2 Feb 2013 14:04:49 -0800 Subject: [PATCH 0288/2880] Add sharded_eredis as an Erlang client library Sharded_eredis features process pooling and consistent hashing, ideal for presharded setups. 
--- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index 61c61f3c9b..351179ce97 100644 --- a/clients.json +++ b/clients.json @@ -64,6 +64,15 @@ "active": true }, + { + "name": "sharded_eredis", + "language": "Erlang", + "repository": "https://github.com/jeremyong/sharded_eredis", + "description": "Wrapper around eredis providing process pools and consistent hashing.", + "authors": ["jeremyong", "hiroeorz"] + "active": true + }, + { "name": "redis.fy", "language": "Fancy", From 2635c59d0c664dee27f601ab50da1904731668a2 Mon Sep 17 00:00:00 2001 From: Jeremy Ong Date: Tue, 5 Feb 2013 12:04:01 -0800 Subject: [PATCH 0289/2880] Add missing comma to the sharded_eredis entry --- clients.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 351179ce97..20ec19c5be 100644 --- a/clients.json +++ b/clients.json @@ -69,7 +69,7 @@ "language": "Erlang", "repository": "https://github.com/jeremyong/sharded_eredis", "description": "Wrapper around eredis providing process pools and consistent hashing.", - "authors": ["jeremyong", "hiroeorz"] + "authors": ["jeremyong", "hiroeorz"], "active": true }, From 516fc3330d81e42eccf985e20a65bef5c2b54080 Mon Sep 17 00:00:00 2001 From: Dov Murik Date: Wed, 6 Feb 2013 17:36:32 +0200 Subject: [PATCH 0290/2880] markdown fixes for code parts in quickstart guide --- topics/quickstart.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/topics/quickstart.md b/topics/quickstart.md index eeb823f513..c0dc6f80b3 100644 --- a/topics/quickstart.md +++ b/topics/quickstart.md @@ -138,27 +138,27 @@ We assume you already copied **redis-server** and **redis-cli** executables unde * Create a directory where to store your Redis config files and your data: - sudo mkdir /etc/redis - sudo mkdir /var/redis + sudo mkdir /etc/redis + sudo mkdir /var/redis * Copy the init script that you'll find in the Redis 
distribution under the **utils** directory into /etc/init.d. We suggest calling it with the name of the port where you are running this instance of Redis. For example: - sudo cp utils/redis_init_script /etc/init.d/redis_6379 + sudo cp utils/redis_init_script /etc/init.d/redis_6379 * Edit the init script. - sudo vi /etc/init.d/redis_6379 + sudo vi /etc/init.d/redis_6379 Make sure to modify **REDIS_PORT** accordingly to the port you are using. Both the pid file path and the configuration file name depend on the port number. * Copy the template configuration file you'll find in the root directory of the Redis distribution into /etc/redis/ using the port number as name, for instance: - sudo cp redis.conf /etc/redis/6379.conf + sudo cp redis.conf /etc/redis/6379.conf * Create a directory inside /var/redis that will work as data and working directory for this Redis instance: - sudo mkdir /var/redis/6379 + sudo mkdir /var/redis/6379 * Edit the configuration file, making sure to perform the following changes: * Set **daemonize** to yes (by default it is set to no). @@ -167,9 +167,9 @@ Both the pid file path and the configuration file name depend on the port number * Set your preferred **loglevel**. * Set the **logfile** to /var/log/redis_6379.log * Set the **dir** to /var/redis/6379 (very important step!) - * Finally add the new Redis init script to all the default runlevels using the following command: +* Finally add the new Redis init script to all the default runlevels using the following command: - sudo update-rc.d redis_6379 defaults + sudo update-rc.d redis_6379 defaults You are done! 
Now you can try running your instance with: From ce4666c9cad210a5dbd084a6664e163f2dc2e404 Mon Sep 17 00:00:00 2001 From: Costin Leau Date: Wed, 6 Feb 2013 22:12:36 +0200 Subject: [PATCH 0291/2880] add Spring Data Redis --- tools.json | 10 +++++++++- topics/twitter-clone.md | 2 +- 2 files changed, 10 insertions(+), 2 deletions(-) diff --git a/tools.json b/tools.json index 7fa4a48e5f..ecb5fce981 100644 --- a/tools.json +++ b/tools.json @@ -267,4 +267,12 @@ "description": "Qi4j EntityStore backed by Redis", "authors": ["eskatos"] } -] + { + "name": "Spring Data Redis", + "language": "Java", + "url": "http://www.springsource.org/spring-data/redis", + "repository": "http://github.com/SpringSource/spring-data-redis", + "description": "Spring integration for Redis promoting POJO programming, portability and productivity", + "authors": ["costinl"] + } +] \ No newline at end of file diff --git a/topics/twitter-clone.md b/topics/twitter-clone.md index 590e5ae6b0..0453d23ce0 100644 --- a/topics/twitter-clone.md +++ b/topics/twitter-clone.md @@ -14,7 +14,7 @@ of this article targets PHP, but Ruby programmers can also check the other source code, it conceptually very similar. **Note:** [Retwis-J](http://retwisj.cloudfoundry.com/) is a port of Retwis to -Java, using the Spring Data Framework, written by Costin Leau. The source code +Java, using the Spring Data Framework, written by [Costin Leau](http://twitter.com/costinl). 
The source code can be found on [GitHub](https://github.com/SpringSource/spring-data-keyvalue-examples) and there is comprehensive documentation available at From f98c0653930b32d645cd2cbda0ca52de0208139d Mon Sep 17 00:00:00 2001 From: Nikolay Khodyunya Date: Thu, 7 Feb 2013 15:25:49 +0800 Subject: [PATCH 0292/2880] remove carmine client duplicate from clients list --- clients.json | 8 -------- 1 file changed, 8 deletions(-) diff --git a/clients.json b/clients.json index 20ec19c5be..9b45f98974 100644 --- a/clients.json +++ b/clients.json @@ -28,14 +28,6 @@ "active": true }, - { - "name": "Carmine", - "language": "Clojure", - "repository": "https://github.com/ptaoussanis/carmine", - "description": "Deliberately simple, high-performance Redis (2.0+) client for Clojure.", - "authors": ["ptaoussanis"] - }, - { "name": "CL-Redis", "language": "Common Lisp", From 65276b76ec8905bfa1771ec95a6ad870c7d93cfc Mon Sep 17 00:00:00 2001 From: Nikolay Khodyunya Date: Thu, 7 Feb 2013 15:32:31 +0800 Subject: [PATCH 0293/2880] Added aleph client for Clojure. 
--- clients.json | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 9b45f98974..16dcdcf560 100644 --- a/clients.json +++ b/clients.json @@ -27,7 +27,14 @@ "recommended": true, "active": true }, - + { + "name": "aleph", + "language": "Clojure", + "repository": "https://github.com/ztellman/aleph.git", + "description": "Redis client built on top of lamina", + "authors": ["Zach Tellman"], + "active": true + }, { "name": "CL-Redis", "language": "Common Lisp", From 3e00d2f5c1cda5a712e3d5d331990c108da3fad9 Mon Sep 17 00:00:00 2001 From: george Date: Fri, 8 Feb 2013 23:43:02 +0900 Subject: [PATCH 0294/2880] add examples for set-related store commands --- commands/sdiffstore.md | 13 +++++++++++++ commands/sinterstore.md | 13 +++++++++++++ commands/sunionstore.md | 13 +++++++++++++ 3 files changed, 39 insertions(+) diff --git a/commands/sdiffstore.md b/commands/sdiffstore.md index db95908556..e941016742 100644 --- a/commands/sdiffstore.md +++ b/commands/sdiffstore.md @@ -6,3 +6,16 @@ If `destination` already exists, it is overwritten. @return @integer-reply: the number of elements in the resulting set. + +@examples + +```cli +SADD key1 "a" +SADD key1 "b" +SADD key1 "c" +SADD key2 "c" +SADD key2 "d" +SADD key2 "e" +SDIFFSTORE key key1 key2 +SMEMBERS key +``` diff --git a/commands/sinterstore.md b/commands/sinterstore.md index 26d6e3f381..17dd0bf0b4 100644 --- a/commands/sinterstore.md +++ b/commands/sinterstore.md @@ -6,3 +6,16 @@ If `destination` already exists, it is overwritten. @return @integer-reply: the number of elements in the resulting set.
+ +@examples + +```cli +SADD key1 "a" +SADD key1 "b" +SADD key1 "c" +SADD key2 "c" +SADD key2 "d" +SADD key2 "e" +SINTERSTORE key key1 key2 +SMEMBERS key +``` diff --git a/commands/sunionstore.md b/commands/sunionstore.md index f3bf959c5d..74df06071f 100644 --- a/commands/sunionstore.md +++ b/commands/sunionstore.md @@ -6,3 +6,16 @@ If `destination` already exists, it is overwritten. @return @integer-reply: the number of elements in the resulting set. + +@examples + +```cli +SADD key1 "a" +SADD key1 "b" +SADD key1 "c" +SADD key2 "c" +SADD key2 "d" +SADD key2 "e" +SUNIONSTORE key key1 key2 +SMEMBERS key +``` From 8d981ab7f72974b7fd7b98ee436ee54e0637dcfd Mon Sep 17 00:00:00 2001 From: Costin Leau Date: Fri, 8 Feb 2013 17:07:11 +0200 Subject: [PATCH 0295/2880] add missing comma ... --- tools.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools.json b/tools.json index ecb5fce981..22f49e91f2 100644 --- a/tools.json +++ b/tools.json @@ -266,7 +266,7 @@ "repository": "http://github.com/qi4j/qi4j-sdk", "description": "Qi4j EntityStore backed by Redis", "authors": ["eskatos"] - } + }, { "name": "Spring Data Redis", "language": "Java", "url": "http://www.springsource.org/spring-data/redis", "repository": "http://github.com/SpringSource/spring-data-redis", "description": "Spring integration for Redis promoting POJO programming, portability and productivity", "authors": ["costinl"] } ] \ No newline at end of file From bd9b66812734436eb3fb1a674a7257e30ff0f842 Mon Sep 17 00:00:00 2001 From: Sasan Rose Date: Sat, 9 Feb 2013 15:35:12 +0330 Subject: [PATCH 0296/2880] PHPRedMin added --- tools.json | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/tools.json b/tools.json index 22f49e91f2..aa8f473b2c 100644 --- a/tools.json +++ b/tools.json @@ -274,5 +274,12 @@ "repository": "http://github.com/SpringSource/spring-data-redis", "description": "Spring integration for Redis promoting POJO programming, portability and productivity", "authors": ["costinl"] + }, + { + "name": "PHPRedMin", + "language": "PHP", + "repository": "https://github.com/sasanrose/phpredmin", + "description": "Yet another web interface for Redis", + "authors": ["sasanrose"] + } ] \ No newline at end of file From
3f585e96ed14dea8debff383ee7c051f7e3eb8fc Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 11 Feb 2013 18:29:12 -0800 Subject: [PATCH 0297/2880] Add Travis configuration --- .travis.yml | 6 ++++++ 1 file changed, 6 insertions(+) create mode 100644 .travis.yml diff --git a/.travis.yml b/.travis.yml new file mode 100644 index 0000000000..c2dcec4445 --- /dev/null +++ b/.travis.yml @@ -0,0 +1,6 @@ +language: ruby + +rvm: + - 1.9.3 + +script: rake From 974a50947d1c66197ad60bf7c396cab793be30d7 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 11 Feb 2013 18:39:08 -0800 Subject: [PATCH 0298/2880] Defer requiring remarkdown.rb from Rakefile --- Rakefile | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/Rakefile b/Rakefile index 14b7f7b12b..1c00b9962b 100644 --- a/Rakefile +++ b/Rakefile @@ -42,9 +42,9 @@ end namespace :format do - require "./remarkdown" - def format(file) + require "./remarkdown" + return unless File.exist?(file) STDOUT.print "formatting #{file}..." 
From 2d540a166ea8cc775432bd2d667b54575c95ad07 Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 11 Feb 2013 18:40:42 -0800 Subject: [PATCH 0299/2880] Add Gemfile specifically for Travis --- .travis.yml | 3 ++- .travis/Gemfile | 5 +++++ .travis/Gemfile.lock | 14 ++++++++++++++ 3 files changed, 21 insertions(+), 1 deletion(-) create mode 100644 .travis/Gemfile create mode 100644 .travis/Gemfile.lock diff --git a/.travis.yml b/.travis.yml index c2dcec4445..04cd72ca51 100644 --- a/.travis.yml +++ b/.travis.yml @@ -3,4 +3,5 @@ language: ruby rvm: - 1.9.3 -script: rake +gemfile: + - .travis/Gemfile diff --git a/.travis/Gemfile b/.travis/Gemfile new file mode 100644 index 0000000000..6c9643ddf3 --- /dev/null +++ b/.travis/Gemfile @@ -0,0 +1,5 @@ +source "https://rubygems.org" + +gem "rake" +gem "batch" +gem "rdiscount" diff --git a/.travis/Gemfile.lock b/.travis/Gemfile.lock new file mode 100644 index 0000000000..6b833d174d --- /dev/null +++ b/.travis/Gemfile.lock @@ -0,0 +1,14 @@ +GEM + remote: https://rubygems.org/ + specs: + batch (0.0.3) + rake (0.9.2.2) + rdiscount (1.6.8) + +PLATFORMS + ruby + +DEPENDENCIES + batch + rake + rdiscount From 4756cb17be46e4d4f729eb054a17284570adb2ac Mon Sep 17 00:00:00 2001 From: Pieter Noordhuis Date: Mon, 11 Feb 2013 18:46:34 -0800 Subject: [PATCH 0300/2880] Install aspell before running tests --- .travis.yml | 3 +++ 1 file changed, 3 insertions(+) diff --git a/.travis.yml b/.travis.yml index 04cd72ca51..243168dfbb 100644 --- a/.travis.yml +++ b/.travis.yml @@ -5,3 +5,6 @@ rvm: gemfile: - .travis/Gemfile + +before_install: + - sudo apt-get install -y aspell aspell-en From f9a57f39561cf83f03a6a4138b08f777c7b20fe8 Mon Sep 17 00:00:00 2001 From: Damian Janowski Date: Wed, 13 Feb 2013 13:46:29 -0200 Subject: [PATCH 0301/2880] Typos. 
--- topics/persistence.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/persistence.md b/topics/persistence.md index 4ada98bc83..731236e61d 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -267,8 +267,8 @@ Since many Redis users are in the startup scene and thus don't have plenty of money to spend we'll review the most interesting disaster recovery techniques that don't have too high costs. -* Amazon S3 and other similar services are a good way for mounting your disaster recovery system. Simply transfer your daily or hourly RDB snapshot to S3 in an encrypted form. You can encrypt your data using gpg -c (in symmetric encryption mode). Make sure to store your password in many differnet safe places (for instance give a copy to the most important guys of your organization). It is recommanded to use multiple storage services for improved data safety. -* Transfer your snapshots using scp (part of ssh) to far servers. This is a fairly simple and safe route: get a small VPS in a place that is very far from you, install ssh there, and greate an ssh client key without passphrase, then make +* Amazon S3 and other similar services are a good way for mounting your disaster recovery system. Simply transfer your daily or hourly RDB snapshot to S3 in an encrypted form. You can encrypt your data using `gpg -c` (in symmetric encryption mode). Make sure to store your password in many different safe places (for instance give a copy to the most important guys of your organization). It is recommended to use multiple storage services for improved data safety. +* Transfer your snapshots using SCP (part of SSH) to far servers. This is a fairly simple and safe route: get a small VPS in a place that is very far from you, install ssh there, and create an ssh client key without passphrase, then add it in the authorized_keys file of your small VPS. You are ready to transfer backups in an automated fashion.
Get at least two VPS in two different providers for best results. From c3c318f5f5c8dbff1376871896c5aa6763eabfdd Mon Sep 17 00:00:00 2001 From: michael-grunder Date: Wed, 13 Feb 2013 12:21:44 -0800 Subject: [PATCH 0302/2880] Fixed a typo --- topics/memory-optimization.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/memory-optimization.md b/topics/memory-optimization.md index d2caa9ac5e..5b5bea9621 100644 --- a/topics/memory-optimization.md +++ b/topics/memory-optimization.md @@ -8,7 +8,7 @@ Since Redis 2.2 many data types are optimized to use less space up to a certain This is completely transparent from the point of view of the user and API. Since this is a CPU / memory trade off it is possible to tune the maximum number of elements and maximum element size for special encoded types using the following redis.conf directives. - hash-max-zipmap-entries 64 (hahs-max-ziplist-entries for Redis >= 2.6) + hash-max-zipmap-entries 64 (hash-max-ziplist-entries for Redis >= 2.6) hash-max-zipmap-value 512 (hash-max-ziplist-value for Redis >= 2.6) list-max-ziplist-entries 512 list-max-ziplist-value 64 From dc42723a13661145b2d0557fd3f29315776fe381 Mon Sep 17 00:00:00 2001 From: michael-grunder Date: Wed, 13 Feb 2013 18:27:30 -0800 Subject: [PATCH 0303/2880] Another typo, minor editing --- topics/mass-insert.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/mass-insert.md b/topics/mass-insert.md index 9ce8f7d96d..892346b66b 100644 --- a/topics/mass-insert.md +++ b/topics/mass-insert.md @@ -2,7 +2,7 @@ Redis Mass Insertion === Sometimes Redis instances needs to be loaded with big amount of preexisting -or user generated data in a short amount of time, so that million of keys +or user generated data in a short amount of time, so that millions of keys will be created as fast as possible. 
This is called a *mass insertion*, and the goal of this document is to @@ -13,7 +13,7 @@ Use the protocol, Luke Using a normal Redis client to perform mass insertion is not a good idea for a few reasons: the naive approach of sending one command after the other -is slow because there is to pay the round trip time for every command. +is slow because you have to pay for the round trip time for every command. It is possible to use pipelining, but for mass insertion of many records you need to write new commands while you read replies at the same time to make sure you are inserting as fast as possible. From 715fe1c3d839795559611180ecfe2280d92c16f6 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 14 Feb 2013 18:15:53 +0100 Subject: [PATCH 0304/2880] Introduction page updated. --- topics/introduction.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/topics/introduction.md b/topics/introduction.md index bc66597cd7..f0f1ee0559 100644 --- a/topics/introduction.md +++ b/topics/introduction.md @@ -1,7 +1,7 @@ Introduction to Redis === -Redis is an open source, advanced **key-value store**. It +Redis is an open source (BSD licensed), advanced **key-value store**. It is often referred to as a **data structure server** since keys can contain [strings](/topics/data-types#strings), [hashes](/topics/data-types#hashes), [lists](/topics/data-types#lists), @@ -26,15 +26,15 @@ Redis also supports trivial-to-setup [master-slave replication](/topics/replication), with very fast non-blocking first synchronization, auto-reconnection on net split and so forth. -Other features include a simple [check-and-set -mechanism](/topics/transactions), [pub/sub](/topics/pubsub) -and configuration settings to make Redis behave like a -cache. 
+Other features include [Transactions](/topics/transactions), +[Pub/Bub](/topics/pubsub), +[Lua scripting](/commands/eval), +[Keys with a limited time-to-live](/commands/expire), +and configuration settings to make Redis behave like a cache. You can use Redis from [most programming languages](/clients) out there. Redis is written in **ANSI C** and works in most POSIX systems like Linux, \*BSD, OS X without external dependencies. Linux and OSX are the two operating systems where Redis is developed and more tested, and we **recommend using Linux for deploying**. Redis may work in Solaris-derived systems like SmartOS, but the support is *best effort*. There -is no official support for Windows builds, although you may -have [some](http://code.google.com/p/redis/issues/detail?id=34) -[options](https://github.com/dmajkic/redis). +is no official support for Windows builds, but Microsoft develops and +maintains a [Win32-64 experimental version of Redis](https://github.com/MSOpenTech/redis). From 8e6f499ec51d44d8877c5ce5e37dd11f0dcaddcd Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 14 Feb 2013 18:36:26 +0100 Subject: [PATCH 0305/2880] Minor formatting change. --- topics/introduction.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/introduction.md b/topics/introduction.md index f0f1ee0559..0bcae1666e 100644 --- a/topics/introduction.md +++ b/topics/introduction.md @@ -1,7 +1,7 @@ Introduction to Redis === -Redis is an open source (BSD licensed), advanced **key-value store**. It +Redis is an open source, BSD licensed, advanced **key-value store**. 
It is often referred to as a **data structure server** since keys can contain [strings](/topics/data-types#strings), [hashes](/topics/data-types#hashes), [lists](/topics/data-types#lists), From ef0e67c09af63dc8b808c620fccff24c0b11e661 Mon Sep 17 00:00:00 2001 From: george Date: Sat, 16 Feb 2013 12:04:44 +0900 Subject: [PATCH 0306/2880] fix typo - s/an hash/a hash --- topics/memory-optimization.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/topics/memory-optimization.md b/topics/memory-optimization.md index 5b5bea9621..e2efeee280 100644 --- a/topics/memory-optimization.md +++ b/topics/memory-optimization.md @@ -45,10 +45,10 @@ where values can just be just strings, that is not just more memory efficient than Redis plain keys but also much more memory efficient than memcached. Let's start with some fact: a few keys use a lot more memory than a single key -containing an hash with a few fields. How is this possible? We use a trick. +containing a hash with a few fields. How is this possible? We use a trick. In theory in order to guarantee that we perform lookups in constant time (also known as O(1) in big O notation) there is the need to use a data structure -with a constant time complexity in the average case, like an hash table. +with a constant time complexity in the average case, like a hash table. But many times hashes contain just a few fields. When hashes are small we can instead just encode them in an O(N) data structure, like a linear @@ -60,7 +60,7 @@ it contains will grow too much (you can configure the limit in redis.conf). This does not work well just from the point of view of time complexity, but also from the point of view of constant times, since a linear array of key value pairs happens to play very well with the CPU cache (it has a better -cache locality than an hash table). +cache locality than a hash table). 
However since hash fields and values are not (always) represented as full featured Redis objects, hash fields can't have an associated time to live @@ -168,7 +168,7 @@ of your keys and values: hash-max-zipmap-value 1024 -Every time an hash will exceed the number of elements or element size specified +Every time a hash will exceed the number of elements or element size specified it will be converted into a real hash table, and the memory saving will be lost. You may ask, why don't you do this implicitly in the normal key space so that From 56f5eed4799d811123c52421fb92e704d259ffdd Mon Sep 17 00:00:00 2001 From: Bruno Celeste Date: Tue, 19 Feb 2013 15:13:25 +0100 Subject: [PATCH 0307/2880] Updated HeyWatch URL --- topics/whos-using-redis.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/whos-using-redis.md b/topics/whos-using-redis.md index e42db2a362..1aa7ed1cae 100644 --- a/topics/whos-using-redis.md +++ b/topics/whos-using-redis.md @@ -70,7 +70,7 @@ And many others: * [OKNOtizie](http://oknotizie.virgilio.it) * [Moodstocks](http://www.moodstocks.com/2010/11/26/the-tech-behind-moodstocks-notes) uses Redis as its main database by means of [Ohm](http://ohm.keyvalue.org). * [Favstar](http://favstar.fm) -* [Heywatch](http://heywatch.com) +* [HeyWatch](http://www.heywatchencoding.com) * [Sharpcloud](http://www.sharpcloud.com) * [Wooga](http://www.wooga.com/games/) for the games _"Happy Hospital"_ and _"Monster World"_. * [Sina Weibo](http://t.sina.com.cn/) From 18b57881100be001d76f19be7bcd7bb3c6aeb141 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 19 Feb 2013 15:40:49 +0100 Subject: [PATCH 0308/2880] Who is using Redis: only link major brands and services. 
--- topics/whos-using-redis.md | 61 +++++++++----------------------------- 1 file changed, 14 insertions(+), 47 deletions(-) diff --git a/topics/whos-using-redis.md b/topics/whos-using-redis.md index e42db2a362..d693327e63 100644 --- a/topics/whos-using-redis.md +++ b/topics/whos-using-redis.md @@ -4,6 +4,18 @@ Who's using Redis? Logos are linked to the relevant story when available.
      +
    • + + Twitter + +
    • + +
    • + + Instagram + +
    • +
    • EngineYard
    • @@ -29,7 +41,7 @@ Logos are linked to the relevant story when available.
    • - Digg + Digg
    • @@ -58,49 +70,4 @@ Logos are linked to the relevant story when available.
    -And many others: - -* [Superfeedr](http://blog.superfeedr.com/redis/mysql/memcache/datastore/performance/redis-at-superfeedr) -* [Vidiowiki](http://vidiowiki.com) -* [Wish Internet Consulting](http://wish.hu) -* [Ruby Minds](http://rubyminds.com) -* [Boxcar](http://www.boxcar.io) -* [Zoombu](http://www.zoombu.co.uk) -* [Dark Curse](http://www.darkcurse.com) -* [OKNOtizie](http://oknotizie.virgilio.it) -* [Moodstocks](http://www.moodstocks.com/2010/11/26/the-tech-behind-moodstocks-notes) uses Redis as its main database by means of [Ohm](http://ohm.keyvalue.org). -* [Favstar](http://favstar.fm) -* [Heywatch](http://heywatch.com) -* [Sharpcloud](http://www.sharpcloud.com) -* [Wooga](http://www.wooga.com/games/) for the games _"Happy Hospital"_ and _"Monster World"_. -* [Sina Weibo](http://t.sina.com.cn/) -* [Engage](http://engage.calibreapps.com/) -* [PoraOra](http://www.poraora.com/) -* [Leatherbound](http://leatherbound.me/) -* [AuthorityLabs](http://authoritylabs.com/) -* [Fotolog](http://www.fotolog.com/) -* [TheMatchFixer](http://www.thematchfixer.com/) -* [Check-Host](http://check-host.net/) describes their architecture [here](http://showmetheco.de/articles/2011/1/using-perl-mojolicious-and-redis-in-a-real-world-asynchronous-application.html). 
-* [ShopSquad](http://shopsquad.com/) -* [localshow.tv](http://localshow.tv/) -* [PennyAce](http://pennyace.com/) -* [Nasza Klasa](http://nk.pl/) -* [Forrst](http://forrst.com) -* [Surfingbird](http://surfingbird.com) -* [mig33](http://www.mig33.com) -* [SeatGeek](http://seatgeek.com/) -* [Wikipedia Game](http://thewikigame.com) - [Redis architecture description](http://www.clemesha.org/blog/really-using-redis-to-build-fast-real-time-web-apps/) -* [Mogu](http://gomogu.org) -* [Ancestry.com](http://www.ancestry.com/) -* [SocialReviver](http://www.socialreviver.net/) by VittGam, for its Settings Cloud -* [Telefónica Digital](http://www.telefonica.com/es/digital/html/home/) -* [Pond](http://web.pond.pt/) -* [Topics.io](http://topics.io) -* [AngiesList.com](https://github.com/angieslist/al-redis) -* [GraphBug](http://graphbug.com/) -* [SwarmIQ](http://www.swarmiq.com/) uses Redis as a caching / indexing layer for rapid lookups of chronological and ranked messages. - -This list is incomplete. If you're using Redis and would like to be -listed, [send a pull request](https://github.com/antirez/redis-doc). - -**Note:** we'll use a logo for very big and recognized companies / sites, and a mention in the text-only list for all the other companies. +And many others... link policy: we only link major sites, we used to also link to small companies and services but this rapidly became impossible to maintain. From 41d4fe01916150302ad8748e0105cf5479082755 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 19 Feb 2013 15:43:29 +0100 Subject: [PATCH 0309/2880] Minor change to who is using Redis page. --- topics/whos-using-redis.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/whos-using-redis.md b/topics/whos-using-redis.md index d693327e63..32a068ac7e 100644 --- a/topics/whos-using-redis.md +++ b/topics/whos-using-redis.md @@ -70,4 +70,4 @@ Logos are linked to the relevant story when available.
  • -And many others... link policy: we only link major sites, we used to also link to small companies and services but this rapidly became impossible to maintain. +And many others! link policy: we only link major sites, we used to also link to small companies and services but this rapidly became impossible to maintain. From ecbaead436dde39211ffe94291412968632333d2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Jos=C3=A9=20Carlos=20Nieto?= Date: Tue, 19 Feb 2013 11:52:19 -0500 Subject: [PATCH 0310/2880] Adding gosexy/redis to the list of redis clients. --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index 16dcdcf560..6d7a7623de 100644 --- a/clients.json +++ b/clients.json @@ -127,6 +127,15 @@ "authors": ["simonz05"], "active": true }, + + { + "name": "gosexy/redis", + "language": "Go", + "repository": "https://github.com/gosexy/redis", + "description": "Go bindings for the official C redis client (hiredis), supports the whole command set of redis 2.6.10 and subscriptions with go channels.", + "authors": ["xiam"], + "active": true + }, { "name": "hedis", From 2be5d3e1cf8644a771e55a17ed7aec4f7b6e5d24 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 20 Feb 2013 11:11:35 +0100 Subject: [PATCH 0311/2880] Fixed typo in introduction. --- topics/introduction.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/introduction.md b/topics/introduction.md index 0bcae1666e..b0ee8f38a9 100644 --- a/topics/introduction.md +++ b/topics/introduction.md @@ -27,7 +27,7 @@ replication](/topics/replication), with very fast non-blocking first synchronization, auto-reconnection on net split and so forth. Other features include [Transactions](/topics/transactions), -[Pub/Bub](/topics/pubsub), +[Pub/Sub](/topics/pubsub), [Lua scripting](/commands/eval), [Keys with a limited time-to-live](/commands/expire), and configuration settings to make Redis behave like a cache. 
From e86c70f25482e188a97d22a6c21afe7e3fd0d76d Mon Sep 17 00:00:00 2001 From: Antonio Ognio Date: Sun, 24 Feb 2013 11:13:17 -0500 Subject: [PATCH 0312/2880] =?UTF-8?q?Adding=20br=C3=BCkva=20to=20the=20lis?= =?UTF-8?q?t=20of=20Python=20Redis=20clients?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index 6d7a7623de..75dfc54f21 100644 --- a/clients.json +++ b/clients.json @@ -381,6 +381,15 @@ "active": true }, + { + "name": "brukva", + "language": "Python", + "repository": "https://github.com/evilkost/brukva", + "description": "Asynchronous Redis client that works within Tornado IO loop", + "authors": ["evilkost"], + "active": true + }, + { "name": "scala-redis", "language": "Scala", From 9087dd36d3a0b355a0b658dabcc5d484be98ec35 Mon Sep 17 00:00:00 2001 From: Markus Rothe Date: Sun, 3 Mar 2013 09:19:50 +0000 Subject: [PATCH 0313/2880] URL of Radix changed --- clients.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 6d7a7623de..639c65f0da 100644 --- a/clients.json +++ b/clients.json @@ -93,7 +93,7 @@ { "name": "Radix", "language": "Go", - "repository": "https://github.com/fzzbt/radix", + "repository": "https://github.com/fzzy/radix", "description": "MIT licensed Redis client.", "authors": ["fzzbt"], "recommended": true, From d4bbbdc5e5ec92bf0ac400ac681866b3671a98fa Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 5 Mar 2013 20:07:16 +0100 Subject: [PATCH 0314/2880] Redis cluster spec updated: from 4096 to 16384 hash slots. 
--- topics/cluster-spec.md | 34 +++++++++++++++++----------------- 1 file changed, 17 insertions(+), 17 deletions(-) diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md index dc525e6d35..5831cd0c52 100644 --- a/topics/cluster-spec.md +++ b/topics/cluster-spec.md @@ -26,8 +26,8 @@ subset of the features available in the Redis stand alone server. In Redis cluster there are no central or proxy nodes, and one of the major design goals is linear scalability. -Redis cluster sacrifices fault tolerance for consistence, so the system -try to be as consistent as possible while guaranteeing limited resistance +Redis cluster sacrifices fault tolerance for consistency, so the system +tries to be as consistent as possible while guaranteeing limited resistance to net splits and node failures (we consider node failures as special cases of net splits). @@ -46,10 +46,10 @@ operations like Set type unions or intersections are not implemented, and in general all the operations where in theory keys are not available in the same node are not implemented. -In the future there is the possibility to add a new kind of node called a -Computation Node to perform multi-key read only operations in the cluster, -but it is not likely that the Redis cluster itself will be able -to perform complex multi key operations implementing some kind of +In the future it is possible that using the MIGRATE COPY command users will +be able to use *Computation Nodes* to perform multi-key read only operations +in the cluster, but it is not likely that the Redis Cluster itself will be +able to perform complex multi key operations implementing some kind of transparent way to move keys around. Redis Cluster does not support multiple databases like the stand alone version @@ -82,11 +82,11 @@ keys and nodes can improve the performance in a sensible way. 
Keys distribution model --- -The key space is split into 4096 slots, effectively setting an upper limit -for the cluster size of 4096 nodes (however the suggested max size of -nodes is in the order of a few hundreds). +The key space is split into 16384 slots, effectively setting an upper limit +for the cluster size of 16384 nodes (however the suggested max size of +nodes is in the order of ~ 1000 nodes). -All the master nodes will handle a percentage of the 4096 hash slots. +All the master nodes will handle a percentage of the 16384 hash slots. When the cluster is **stable**, that means that there is no a cluster reconfiguration in progress (where hash slots are moved from one node to another) a single hash slot will be served exactly by a single node @@ -95,7 +95,7 @@ it in the case of net splits or failures). The algorithm used to map keys to hash slots is the following: - HASH_SLOT = CRC16(key) mod 4096 + HASH_SLOT = CRC16(key) mod 16384 * Name: XMODEM (also known as ZMODEM or CRC-16/ACORN) * Width: 16 bit @@ -109,9 +109,9 @@ The algorithm used to map keys to hash slots is the following: A reference implementation of the CRC16 algorithm used is available in the Appendix A of this document. -12 out of 16 bit of the output of CRC16 are used. +14 out of 16 bit of the output of CRC16 are used. In our tests CRC16 behaved remarkably well in distributing different kind of -keys evenly across the 4096 slots. +keys evenly across the 16384 slots. Cluster nodes attributes --- @@ -320,7 +320,7 @@ only ask the next query to the specified node. This is needed because the next query about hash slot 8 can be about the key that is still in A, so we always want that the client will try A and -then B if needed. Since this happens only for one hash slot out of 4096 +then B if needed. Since this happens only for one hash slot out of 16384 available the performance hit on the cluster is acceptable. 
However we need to force that client behavior, so in order to make sure @@ -380,11 +380,11 @@ a node that is now in a failure state). Once the configuration is processed the node enters one of the following states: -* FAIL: the cluster can't work. When the node is in this state it will not serve queries at all and will return an error for every query. This state is entered when the node detects that the current nodes are not able to serve all the 4096 slots. -* OK: the cluster can work as all the 4096 slots are served by nodes that are not flagged as FAIL. +* FAIL: the cluster can't work. When the node is in this state it will not serve queries at all and will return an error for every query. This state is entered when the node detects that the current nodes are not able to serve all the 16384 slots. +* OK: the cluster can work as all the 16384 slots are served by nodes that are not flagged as FAIL. This means that the Redis Cluster is designed to stop accepting queries once even a subset of the hash slots are not available. However there is a portion of time in which an hash slot can't be accessed correctly since the associated node is experiencing problems, but the node is still not marked as failing. -In this range of time the cluster will only accept queries about a subset of the 4096 hash slots. +In this range of time the cluster will only accept queries about a subset of the 16384 hash slots. Since Redis cluster does not support MULTI/EXEC transactions the application developer should make sure the application can recover from only a subset of queries being accepted by the cluster. From 0318afe6b4aa94d99ef4b0b17674beebad777b1b Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 5 Mar 2013 20:15:30 +0100 Subject: [PATCH 0315/2880] Cluster specification failure detection updated to reflect the code. 
--- topics/cluster-spec.md | 22 +++++++++------------- 1 file changed, 9 insertions(+), 13 deletions(-) diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md index 5831cd0c52..ddf7a8fbfe 100644 --- a/topics/cluster-spec.md +++ b/topics/cluster-spec.md @@ -137,10 +137,11 @@ know: * A set of hash slots served by the node. * Last time we sent a PING packet using the cluster bus. * Last time we received a PONG packet in reply. +* The time at which we flagged the node as failing. * The number of slaves of this node. * The master node ID, if this node is a slave (or 0000000... if it is a master). -All this information is available using the `CLUSTER NODES` command that +Some of this information is available using the `CLUSTER NODES` command that can be sent to all the nodes in the cluster, both master and slave nodes. The following is an example of output of CLUSTER NODES sent to a master @@ -360,18 +361,16 @@ Node failure detection Failure detection is implemented in the following way: * A node marks another node setting the PFAIL flag (possible failure) if the node is not responding to our PING requests for a given time. -* Nodes broadcast information about other nodes (three random nodes taken at random) when pinging other nodes. The gossip section contains information about other nodes flags. -* If we have a node marked as PFAIL, and we receive a gossip message where another nodes also think the same node is PFAIL, we mark it as FAIL (failure). -* Once a node marks another node as FAIL as result of a PFAIL confirmed by another node, a message is send to all the other nodes to force all the reachable nodes in the cluster to set the specified not as FAIL. +* Nodes broadcast information about other nodes (three random nodes per packet) when pinging other nodes. The gossip section contains information about other nodes flags. +* Nodes remember if other nodes advertised some node as failing. This is called a failure report.
+* Once a node receives a new failure report, such that the majority of master nodes agree about the failure of a given node, the node is marked as FAIL. +* When a node is marked as FAIL, a message is broadcasted to the cluster in order to force all the reachable nodes to set the specified node as FAIL. -So basically a node is not able to mark another node as failing without external acknowledge. +So basically a node is not able to mark another node as failing without external acknowledge, and the majority of the master nodes are required to agree. -(still to implement:) -Once a node is marked as failing, any other node receiving a PING or -connection attempt from this node will send back a "MARK AS FAIL" message -in reply that will force the receiving node to set itself as failing. +Old failure reports are removed, so the majority of master nodes need to have a recent entry in the failure report table of a given node for it to mark another node as FAIL. -Cluster state detection (only partially implemented) +Cluster state detection --- Every cluster node scan the list of nodes every time a configuration change @@ -386,9 +385,6 @@ Once the configuration is processed the node enters one of the following states: This means that the Redis Cluster is designed to stop accepting queries once even a subset of the hash slots are not available. However there is a portion of time in which an hash slot can't be accessed correctly since the associated node is experiencing problems, but the node is still not marked as failing. In this range of time the cluster will only accept queries about a subset of the 16384 hash slots. -Since Redis cluster does not support MULTI/EXEC transactions the application -developer should make sure the application can recover from only a subset of queries being accepted by the cluster.
- Slave election (not implemented) --- From 8b74e49b1dc61347ce7dd3fdc094fd6e532dfdd5 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 6 Mar 2013 17:25:10 +0100 Subject: [PATCH 0316/2880] Cluster specification updated. --- topics/cluster-spec.md | 56 ++++++++++++------------------------------ 1 file changed, 16 insertions(+), 40 deletions(-) diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md index ddf7a8fbfe..6488e13f09 100644 --- a/topics/cluster-spec.md +++ b/topics/cluster-spec.md @@ -370,7 +370,7 @@ So basically a node is not able to mark another node as failing without external Old failure reports are removed, so the majority of master nodes need to have a recent entry in the failure report table of a given node for it to mark another node as FAIL. -Cluster state detection +Cluster state detection (partially implemented) --- Every cluster node scan the list of nodes every time a configuration change @@ -379,56 +379,32 @@ a node that is now in a failure state). Once the configuration is processed the node enters one of the following states: -* FAIL: the cluster can't work. When the node is in this state it will not serve queries at all and will return an error for every query. This state is entered when the node detects that the current nodes are not able to serve all the 16384 slots. +* FAIL: the cluster can't work. When the node is in this state it will not serve queries at all and will return an error for every query. * OK: the cluster can work as all the 16384 slots are served by nodes that are not flagged as FAIL. -This means that the Redis Cluster is designed to stop accepting queries once even a subset of the hash slots are not available. However there is a portion of time in which an hash slot can't be accessed correctly since the associated node is experiencing problems, but the node is still not marked as failing. -In this range of time the cluster will only accept queries about a subset of the 16384 hash slots.
+This means that the Redis Cluster is designed to stop accepting queries once even a subset of the hash slots are not available for some time.
 
-Slave election (not implemented)
----
+However there is a portion of time in which an hash slot can't be accessed correctly since the associated node is experiencing problems, but the node is still not marked as failing. In this range of time the cluster will only accept queries about a subset of the 16384 hash slots.
 
-Every master can have any number of slaves (including zero).
-Slaves are responsible of electing themselves to masters when a given
-master fails. For instance we may have node A1, A2, A3, where A1 is the
-master an A2 and A3 are two slaves.
+The FAIL state for the cluster happens in two cases.
 
-If A1 is failing in some way and no longer replies to pings, other nodes
-will end marking it as failing using the gossip protocol. When this happens
-its **first slave** will try to perform the election.
+* 1) If at least one hash slot is not served as the node serving it currently is in FAIL state.
+* 2) If we are not able to reach the majority of masters (that is, if the majority of masters are simply in PFAIL state, it is enough for the node to enter FAIL mode).
 
-The concept of first slave is very simple. Of all the slaves of a master
-the first slave is the one that has the smallest node ID, sorting node IDs
-lexicographically. If the first slave is also marked as failing, the next
-slave is in charge of performing the election and so forth.
+The second check is required because in order to mark a node from PFAIL to FAIL state, the majority of masters are required. However when we are not connected with the majority of masters it is impossible from our side of the net split to mark nodes as FAIL. However since we detect this condition we set the Cluster state in FAIL mode to stop serving queries.
 
-So after a configuration update every slave checks if it is the first slave
-of the failing master. 
In the case it is it changes its state to master -and broadcasts a message to all the other nodes to update the configuration. - -Protection mode (not implemented) +Slave election (not implemented) --- -After a net split resulting into a few isolated nodes, this nodes will -end thinking all the other nodes are failing. In the process they may try -to start a slave election or some other action to modify the cluster -configuration. In order to avoid this problem, nodes seeing a majority of -other nodes in PFAIL or FAIL state for a long enough time should enter -a protection mode that will prevent them from taking actions. - -The protection mode is cleared once the cluster state is OK again. - -Majority of masters rule (not implemented) ---- +The design of slave election is a work in progress right now. -As a result of a net split it is possible that two or more partitions are -independently able to serve all the hash slots. -Since Redis Cluster try to be consistent this is not what we want, and -a net split should always produce zero or one single partition able to -operate. +The idea is to use the concept of first slave, that is, out of all the +slaves for a given node, the first slave is the one with the lower +Node ID (comparing node IDs lexicographically). -In order to enforce this rule nodes into a partition should only try to -serve queries if they have the **majority of the original master nodes**. +However it is likely that the same system used for failure reports will be +used in order to require the majority of masters to authorize the slave +election. 
Publish/Subscribe (implemented, but to refine) === From 1e77210feb2153c58d716c736028c3ce317c9b7c Mon Sep 17 00:00:00 2001 From: Michael Jackson Date: Wed, 6 Mar 2013 11:25:39 -0800 Subject: [PATCH 0317/2880] Add then-redis to client list --- clients.json | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/clients.json b/clients.json index 6d7a7623de..10db12b8bb 100644 --- a/clients.json +++ b/clients.json @@ -34,7 +34,7 @@ "description": "Redis client build on top of lamina", "authors":["Zach Tellman"], "active": true - }, + }, { "name": "CL-Redis", "language": "Common Lisp", @@ -127,7 +127,7 @@ "authors": ["simonz05"], "active": true }, - + { "name": "gosexy/redis", "language": "Go", @@ -515,6 +515,15 @@ "active": true }, + { + "name": "then-redis", + "language": "Node.js", + "repository": "https://github.com/mjijackson/then-redis", + "description": "A small, promise-based Redis client for node", + "authors": ["mjackson"], + "active": true + }, + { "name": "redis-node-client", "language": "Node.js", From d208cf6bc29e8a1be2d44af040878d4f06ebaf41 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 8 Mar 2013 19:23:50 +0100 Subject: [PATCH 0318/2880] Updates in the cluster spec about node failure detection. --- topics/cluster-spec.md | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-) diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md index 6488e13f09..e9efe98668 100644 --- a/topics/cluster-spec.md +++ b/topics/cluster-spec.md @@ -360,16 +360,23 @@ Node failure detection Failure detection is implemented in the following way: -* A node marks another node setting the PFAIL flag (possible failure) if the node is not responding to our PING requests for a given time. -* Nodes broadcast information about other nodes (three random nodes per packet) when pinging other nodes. The gossip section contains information about other nodes flags. 
+* A node marks another node setting the PFAIL flag (possible failure) if the node is not responding to our PING requests for a given time. This time is called the node timeout, and is a node-wise setting.
+* Nodes broadcast information about other nodes (three random nodes per packet) when pinging other nodes. The gossip section contains information about other nodes' flags, including the PFAIL and FAIL flags.
 * Nodes remember if other nodes advertised some node as failing. This is called a failure report.
-* Once a node receives a new failure report, such as that the majority of master nodes agree about the failure of a given node, the node is marked as FAIL.
+* Once a node (already considering a given other node in PFAIL state) receives enough failure reports, so that the majority of master nodes agree about the failure of a given node, the node is marked as FAIL.
 * When a node is marked as FAIL, a message is broadcasted to the cluster in order to force all the reachable nodes to set the specified node as FAIL.
 
 So basically a node is not able to mark another node as failing without external acknowledge, and the majority of the master nodes are required to agree.
 
 Old failure reports are removed, so the majority of master nodes need to have a recent entry in the failure report table of a given node for it to mark another node as FAIL.
 
+The FAIL state is reversible in two cases:
+
+* If the FAIL state is set for a slave node, the FAIL state can be reversed if the slave is reachable again. There is no point in retaining the FAIL state for a slave node as it does not serve slots, and we want to make sure we have the chance to promote it to master if needed.
+* If the FAIL state is set for a master node, and after four times the node timeout, plus 10 seconds, the slots were still not failed over, and the node is reachable again, the FAIL state is reverted.
+
+The rationale for the second case is that if the failover did not work we want the cluster to continue to work if the master is back online, without any kind of user intervention.
+
 Cluster state detection (partially implemented)
 ---
 
From 5a6f0f49eb5ad20ac664355fc6c77820b3b81a83 Mon Sep 17 00:00:00 2001
From: 0x20h
Date: Fri, 15 Mar 2013 00:46:32 +0100
Subject: [PATCH 0319/2880] fixed typo

---
 commands/object.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/commands/object.md b/commands/object.md
index c0b9a1f709..87aee77de6 100644
--- a/commands/object.md
+++ b/commands/object.md
@@ -37,14 +37,14 @@ Objects can be encoded in different ways:
 sets of any size.
 
 All the specially encoded types are automatically converted to the general type
-once you perform an operation that makes it no possible for Redis to retain the
+once you perform an operation that makes it impossible for Redis to retain the
 space saving encoding.
 
 @return
 
 Different return values are used for different subcommands.
 
-* Subcommands `refcount` and `idletime` returns integers.
+* Subcommands `refcount` and `idletime` return integers.
 * Subcommand `encoding` returns a bulk reply.
 
 If the object you try to inspect is missing, a null bulk reply is returned.
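The failure-report bookkeeping described in the cluster spec changes above (masters flag a node PFAIL, a majority of recent reports promotes it to FAIL, and old reports expire) can be sketched in a few lines. This is an illustrative model only, not the actual Redis C implementation; the class name, the report-validity parameter and the in-memory layout are assumptions made for the sketch:

```python
import time


class FailureDetector:
    """Toy model of the PFAIL -> FAIL promotion described above."""

    def __init__(self, n_masters, report_validity):
        self.n_masters = n_masters              # total number of master nodes
        self.report_validity = report_validity  # seconds a failure report stays valid
        self.reports = {}                       # node_id -> {reporter_id: timestamp}

    def add_report(self, node_id, reporter_id, now=None):
        """Record that master `reporter_id` advertised `node_id` as failing (PFAIL)."""
        now = time.time() if now is None else now
        self.reports.setdefault(node_id, {})[reporter_id] = now

    def is_fail(self, node_id, now=None):
        """A node is FAIL while a majority of masters have a *recent* report for it."""
        now = time.time() if now is None else now
        fresh = [t for t in self.reports.get(node_id, {}).values()
                 if now - t <= self.report_validity]
        return len(fresh) > self.n_masters // 2
```

With five masters, a node is promoted to FAIL only while at least three masters hold a fresh report for it; once the reports age out, the majority condition no longer holds, which is exactly why a single isolated node cannot mark others as FAIL on its own.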
From 4c5f161a8c6fabdbc388b61c56ffd2e2cc1c3df3 Mon Sep 17 00:00:00 2001 From: ctnstone Date: Tue, 26 Mar 2013 08:25:16 -0400 Subject: [PATCH 0320/2880] Added csredis to C# clients --- clients.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/clients.json b/clients.json index a9f6d1819c..6d0946164f 100644 --- a/clients.json +++ b/clients.json @@ -644,5 +644,13 @@ "repository": "https://github.com/wg/lettuce", "description": "Thread-safe client supporting async usage and key/value codecs", "authors": ["ar3te"] + }, + + { + "name": "csredis", + "language": "C#", + "repository": "https://github.com/ctstone/csredis", + "description": "Async (and sync) client for Redis and Sentinel", + "authors": ["ctnstone"] } ] From 0a63d5e0e90f0b3cbe01a45b1802ac944c922de6 Mon Sep 17 00:00:00 2001 From: bradvoth Date: Wed, 27 Mar 2013 21:18:12 -0300 Subject: [PATCH 0321/2880] Update tools.json Added redis-tcl --- tools.json | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/tools.json b/tools.json index aa8f473b2c..f1cdee7057 100644 --- a/tools.json +++ b/tools.json @@ -281,5 +281,12 @@ "repository": "https://github.com/sasanrose/phpredmin", "description": "Yet another web interface for Redis", "authors": ["sasanrose"] + }, + { + "name": "redis-tcl", + "language": "Tcl", + "repository" : "http://github.com/bradvoth/redis-tcl", + "description" : "Tcl library largely copied from the redis test tree, modified for minor bug fixes and expanded pub/sub capabilities", + "authors" : ["bradvoth","antirez"] } -] \ No newline at end of file +] From bb319c667fae43b78f368ab3402df7a8bbe06bc3 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 29 Mar 2013 14:23:02 +0100 Subject: [PATCH 0322/2880] SET command definition updated. 
--- commands.json | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/commands.json b/commands.json
index 28e2de99f5..16ed1194c4 100644
--- a/commands.json
+++ b/commands.json
@@ -1410,6 +1410,24 @@
         {
             "name": "value",
             "type": "string"
+        },
+        {
+            "command": "EX",
+            "name": "seconds",
+            "type": "integer",
+            "optional": true
+        },
+        {
+            "command": "PX",
+            "name": "milliseconds",
+            "type": "integer",
+            "optional": true
+        },
+        {
+            "name": "condition",
+            "type": "enum",
+            "enum": ["NX", "XX"],
+            "optional": true
         }
     ],
     "since": "1.0.0",
From 58bbdafd78d058c593a850178a18edec0167464a Mon Sep 17 00:00:00 2001
From: antirez
Date: Fri, 29 Mar 2013 14:38:44 +0100
Subject: [PATCH 0323/2880] SET documenatation updated for SET options.

---
 commands/set.md | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/commands/set.md b/commands/set.md
index b93d618525..b89f1e602d 100644
--- a/commands/set.md
+++ b/commands/set.md
@@ -1,9 +1,23 @@
 Set `key` to hold the string `value`.
 If `key` already holds a value, it is overwritten, regardless of its type.
+Any previous time to live associated with the key is discarded on a successful `SET` operation.
+
+## Options
+
+Starting with Redis 2.6.12 `SET` supports a set of options that modify its
+behavior:
+
+* `EX` *seconds* -- Set the specified expire time, in seconds.
+* `PX` *milliseconds* -- Set the specified expire time, in milliseconds.
+* `NX` -- Only set the key if it does not already exist.
+* `XX` -- Only set the key if it already exists.
+
+Note: Since the `SET` command options can replace `SETNX`, `SETEX`, `PSETEX`, it is possible that in future versions of Redis these three commands will be deprecated and finally removed.
 
 @return
 
-@status-reply: always `OK` since `SET` can't fail.
+@status-reply: `OK` if `SET` was executed correctly.
+@nil-reply: a Null Bulk Reply is returned if the `SET` operation was not performed because the user specified the `NX` or `XX` option but the condition was not met.
 
 @examples
 
 ```cli
 SET mykey "Hello"
 GET mykey
 ```
+
+## Patterns
+
+The command `SET resource-name anystring NX EX max-lock-time` is a simple way to implement a locking system with Redis.
+
+A client can acquire the lock if the above command returns `OK` (or retry after some time if the command returns Nil), and remove the lock just using `DEL`.
+
+The lock will be auto-released after the expire time is reached.
+
+It is possible to make this system more robust modifying the unlock schema as follows:
+
+* Instead of setting a random string, set a non-guessable large random string.
+* Instead of releasing the lock with `DEL`, send a script that only removes the key if the value matches.
+
+This avoids that a client will try to release the lock after the expire time deleting the key created by another client that acquired the lock later.
From cc97848c97a682359598ae146bff1dc4fc60ad79 Mon Sep 17 00:00:00 2001
From: adilbaig
Date: Sat, 30 Mar 2013 17:46:19 +0530
Subject: [PATCH 0324/2880] Corrected the twitter account for Tiny-Redis. Added a small description too.

---
 clients.json | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/clients.json b/clients.json
index a9f6d1819c..0c19dc75f9 100644
--- a/clients.json
+++ b/clients.json
@@ -625,8 +625,8 @@
     "language": "D",
     "url": "http://adilbaig.github.com/Tiny-Redis/",
     "repository": "https://github.com/adilbaig/Tiny-Redis",
-    "description": "",
-    "authors": ["adilbaig"]
+    "description": "A Redis client for D2. 
Supports pipelining, transactions and Lua scripting", + "authors": ["aidezigns"] }, { From 214cf0208c2e93d5df4f4e9e5537113fa312cd80 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 3 Apr 2013 11:05:10 +0200 Subject: [PATCH 0325/2880] Provide an example script for unlocking in SET pattern. --- commands/set.md | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/commands/set.md b/commands/set.md index b89f1e602d..209080cdad 100644 --- a/commands/set.md +++ b/commands/set.md @@ -36,7 +36,18 @@ The lock will be auto-released after the expire time is reached. It is possible to make this system more robust modifying the unlock schema as follows: -* Instead of setting a random string, set a non-guessable large random string. +* Instead of setting a fixed string, set a non-guessable large random string, called token. * Instead of releasing the lock with `DEL`, send a script that only removes the key if the value matches. This avoids that a client will try to release the lock after the expire time deleting the key created by another client that acquired the lock later. + +An example of unlock script would be similar to the following: + + if redis.call("get",KEYS[1]) == ARGV[1] + then + return redis.call("del",KEYS[1]) + else + return 0 + end + +The script should be called with `EVAL ...script... 1 resource-name token-value` From 7249438696466646a0d47f0c9339a05d479c5a36 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 3 Apr 2013 11:08:38 +0200 Subject: [PATCH 0326/2880] Point to the SET based lock from the SETNX page. 
--- commands/setnx.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/commands/setnx.md b/commands/setnx.md index 72c33798d0..3c70cab673 100644 --- a/commands/setnx.md +++ b/commands/setnx.md @@ -20,6 +20,10 @@ GET mykey ## Design pattern: Locking with `!SETNX` +**NOTE:** Starting with Redis 2.6.12 it is possible to create a much simpler locking primitive using the `SET` command to acquire the lock, and a simple Lua script to release the lock. The pattern is documented in the `SET` command page. + +The old `SETNX` based pattern is documented below for historical reasons. + `SETNX` can be used as a locking primitive. For example, to acquire the lock of the key `foo`, the client could try the following: From b5f66ef273a595551f0568d28ee2d576578cd06b Mon Sep 17 00:00:00 2001 From: Thomas Tourlourat Date: Mon, 8 Apr 2013 13:52:53 +0200 Subject: [PATCH 0327/2880] fix word --- topics/clients.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/clients.md b/topics/clients.md index 1cb603ce3c..eae7c189ec 100644 --- a/topics/clients.md +++ b/topics/clients.md @@ -34,7 +34,7 @@ of the error. In what order clients are served --- -The order is determined by a combination of the client scoket file descriptor +The order is determined by a combination of the client socket file descriptor number and order in which the kernel reports events, so the order is to be considered as unspecified. From d2fa2beb9331ee75c0f5bd9266d8d7b86e6302c2 Mon Sep 17 00:00:00 2001 From: Frank Mueller Date: Mon, 8 Apr 2013 16:06:57 +0300 Subject: [PATCH 0328/2880] Update clients.json Added the second Tideland client, after Go now Erlang/OTP. 
--- clients.json | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/clients.json b/clients.json
index a9f6d1819c..b7f1c276f5 100644
--- a/clients.json
+++ b/clients.json
@@ -72,6 +72,15 @@
     "active": true
   },
 
+  {
+    "name": "Tideland Erlang/OTP Redis Client",
+    "language": "Erlang",
+    "repository": "git://git.tideland.biz/errc",
+    "description": "A comfortable Redis client for Erlang/OTP supporting pooling, pub/sub and transactions.",
+    "authors": ["themue"],
+    "active": true
+  },
+
   {
     "name": "redis.fy",
     "language": "Fancy",
From 4415628a596f1fd7ad69c701213db26df2600dc7 Mon Sep 17 00:00:00 2001
From: Martyn Loughran
Date: Tue, 9 Apr 2013 12:24:31 +0100
Subject: [PATCH 0329/2880] Add ruby em-hiredis client

---
 clients.json | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/clients.json b/clients.json
index a9f6d1819c..9deefafb6f 100644
--- a/clients.json
+++ b/clients.json
@@ -487,6 +487,14 @@
     "authors": []
   },
 
+  {
+    "name": "em-hiredis",
+    "language": "Ruby",
+    "repository": "https://github.com/mloughran/em-hiredis",
+    "description": "An EventMachine Redis client (uses hiredis).",
+    "authors": ["mloughran"]
+  },
+
   {
     "name": "em-redis",
     "language": "Ruby",
From 1acf5d336669907b63cfb445959c774f9c6d4d4a Mon Sep 17 00:00:00 2001
From: antirez
Date: Wed, 10 Apr 2013 23:02:50 +0200
Subject: [PATCH 0330/2880] The first two Redis Design Drafts.

---
 topics/rdd-1.md | 28 +++++++++++++++
 topics/rdd-2.md | 90 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 118 insertions(+)
 create mode 100644 topics/rdd-1.md
 create mode 100644 topics/rdd-2.md

diff --git a/topics/rdd-1.md b/topics/rdd-1.md
new file mode 100644
index 0000000000..00ca3d2e72
--- /dev/null
+++ b/topics/rdd-1.md
@@ -0,0 +1,28 @@
+# Redis Design Draft 1 -- Redis Design Drafts
+
+Author: Salvatore Sanfilippo `antirez@gmail.com`
+Github issue: none
+
+## History of revisions
+
+1.0, 10 April 2012 - Initial draft. 
+
+## Overview
+
+Redis Design Drafts are a way to make the community aware of designs planned
+in order to modify or evolve Redis. Every new Redis Design Draft is published
+in the Redis mailing list and announced on Twitter, in the hope of receiving
+feedback before implementing a given feature.
+
+The way the community can provide feedback about an RDD is simply writing
+a message to the Redis mailing list, or commenting in the associated
+Github issue if any.
+
+Drafts are published only for features already approved as potentially very
+interesting for the project by the current Redis project maintainer.
+
+The official Redis web site includes a list of published RDDs.
+
+## Format
+
+The format of RDDs should reflect the format of this RDD.
diff --git a/topics/rdd-2.md b/topics/rdd-2.md
new file mode 100644
index 0000000000..404615e25c
--- /dev/null
+++ b/topics/rdd-2.md
@@ -0,0 +1,90 @@
+# Redis Design Draft 2 -- RDB version 7 info fields
+
+Author: Salvatore Sanfilippo `antirez@gmail.com`
+Github issue: https://github.com/antirez/redis/issues/1048
+
+## History of revisions
+
+1.0, 10 April 2012 - Initial draft.
+
+## Overview
+
+The Redis RDB format lacks a simple way to add info fields to an RDB file
+without causing a backward compatibility issue even if the added meta data
+is not required in order to load data from the RDB file.
+
+For example thanks to the info fields specified in this document it will
+be possible to add to RDB information like file creation time, Redis version
+generating the file, and any other useful information, in a way that not
+every field is required for an RDB version 7 file to be correctly processed.
+
+Also with minimal changes it will be possible to add RDB version 7 support to
+Redis 2.6 without actually supporting the additional fields but just skipping
+them when loading an RDB file. 
+
+RDB info fields may have semantic meaning if needed, so that the presence
+of the field may add information about the data set specified in the RDB
+file format, however when an info field is required to be correctly decoded
+in order to understand and load the data set content of the RDB file, the
+RDB file format version must be increased so that previous versions of Redis will not
+attempt to load it.
+
+However currently the info fields are designed to only hold additional
+information that is not useful to load the dataset, but can better specify
+how the RDB file was created.
+
+## Info fields representation
+
+The RDB format 6 has the following layout:
+
+* A 9 bytes magic "REDIS0006"
+* key-value pairs
+* An EOF opcode
+* CRC64 checksum
+
+The proposal for RDB format 7 is to add the optional fields immediately
+after the first 9 bytes magic, so that the new format will be:
+
+* A 9 bytes magic "REDIS0007"
+* Info field 1
+* Info field 2
+* ...
+* Info field N
+* Info field end-of-fields
+* key-value pairs
+* An EOF opcode
+* CRC64 checksum
+
+Every single info field has the following structure:
+
+* A 16 bit identifier
+* A 64 bit data length
+* A data section of the exact length as specified
+
+Both the identifier and the data length are stored in little endian byte
+ordering.
+
+The special identifier 0 means that there are no other info fields, and that
+the remainder of the RDB file contains the key-value pairs.
+
+## Handling of info fields
+
+A program can simply skip every info field it does not understand, as long
+as the RDB version matches the one that it is capable of loading.
+
+## Specification of info field IDs and content
+
+### Info field 0 -- End of info fields
+
+This just means there are no longer info fields to process.
+
+### Info field 1 -- Creation date
+
+This field represents the unix time at which the RDB file was created.
+The format of the unix time is a 64 bit little endian integer representing
+seconds since 1st January 1970. 
+ +### Info field 2 -- Redis version + +This field represents a null-terminated string containing the Redis version +that generated the file, as displayed in the Redis version INFO field. From 7a51e2b78635a03780c66bbdc61720e2f296d3c8 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 10 Apr 2013 23:04:20 +0200 Subject: [PATCH 0331/2880] RDD markdown changes. --- topics/rdd-1.md | 4 ++-- topics/rdd-2.md | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/topics/rdd-1.md b/topics/rdd-1.md index 00ca3d2e72..c9f54d602c 100644 --- a/topics/rdd-1.md +++ b/topics/rdd-1.md @@ -1,7 +1,7 @@ # Redis Design Draft 1 -- Redis Design Drafts -Author: Salvatore Sanfilippo `antirez@gmail.com` -Github issue: none +* Author: Salvatore Sanfilippo `antirez@gmail.com` +* Github issue: none ## History of revisions diff --git a/topics/rdd-2.md b/topics/rdd-2.md index 404615e25c..e85dd158f5 100644 --- a/topics/rdd-2.md +++ b/topics/rdd-2.md @@ -1,7 +1,7 @@ # Redis Design Draft 2 -- RDB version 7 info fields -Author: Salvatore Sanfilippo `antirez@gmail.com` -Github issue: https://github.com/antirez/redis/issues/1048 +* Author: Salvatore Sanfilippo `antirez@gmail.com` +* Github issue [#1048](https://github.com/antirez/redis/issues/1048) ## History of revisions From 910b16983f0a070653c46f83ada52c8df6c49e62 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 10 Apr 2013 23:44:49 +0200 Subject: [PATCH 0332/2880] RDD time machine. --- topics/rdd-1.md | 2 +- topics/rdd-2.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/rdd-1.md b/topics/rdd-1.md index c9f54d602c..7daa5ec92b 100644 --- a/topics/rdd-1.md +++ b/topics/rdd-1.md @@ -5,7 +5,7 @@ ## History of revisions -1.0, 10 April 2012 - Initial draft. +1.0, 10 April 2013 - Initial draft. 
 ## Overview
diff --git a/topics/rdd-2.md b/topics/rdd-2.md
index e85dd158f5..15a464abf9 100644
--- a/topics/rdd-2.md
+++ b/topics/rdd-2.md
@@ -5,7 +5,7 @@
 ## History of revisions
 
-1.0, 10 April 2012 - Initial draft.
+1.0, 10 April 2013 - Initial draft.
From 81657250e8165eed1a62b4b82ed8a6f2b2beeceb Mon Sep 17 00:00:00 2001
From: antirez
Date: Thu, 11 Apr 2013 10:16:29 +0200
Subject: [PATCH 0333/2880] Redis Design Draft main page.

---
 topics/rdd.md | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
 create mode 100644 topics/rdd.md

diff --git a/topics/rdd.md b/topics/rdd.md
new file mode 100644
index 0000000000..15f1238bfd
--- /dev/null
+++ b/topics/rdd.md
@@ -0,0 +1,18 @@
+Redis Design Drafts
+===
+
+Redis Design Drafts are a way to make the community aware of the design of
+new features before the feature is actually implemented. This is done in the
+hope of getting good feedback from the user base, which may result in a change
+of the design if a flaw or possible improvement was discovered.
+
+The following is the list of published RDDs so far:
+
+* [RDD1 -- Redis Design Drafts](/topics/rdd-1)
+* [RDD2 -- RDB version 7 info fields](/topics/rdd-2)
+
+To get an RDD accepted for publication you need to talk about your idea in
+the [Redis Google Group](http://groups.google.com/group/redis-db). Once the
+general feature is accepted and/or considered for further exploration you
+can write an RDD or ask the current Redis maintainer to write one about the
+topic. 
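The info-field layout drafted in RDD 2 above (a 9-byte magic, then a sequence of fields each made of a 16-bit little-endian identifier, a 64-bit little-endian length and a payload of exactly that length, terminated by the special identifier 0) is concrete enough to sketch a reader for it. A minimal sketch only, assuming the end-of-fields marker carries a length field like any other field; the function name is illustrative:

```python
import struct


def parse_info_fields(data):
    """Parse the RDD-2 header: magic, then info fields until identifier 0.

    Returns (fields, offset) where `fields` maps field id -> payload bytes
    and `offset` is where the key-value pairs would start.
    """
    if data[:9] != b"REDIS0007":
        raise ValueError("not an RDB version 7 payload")
    pos = 9
    fields = {}
    while True:
        # 16 bit identifier + 64 bit data length, both little endian
        field_id, length = struct.unpack_from("<HQ", data, pos)
        pos += 10
        if field_id == 0:  # end-of-fields marker
            break
        fields[field_id] = data[pos:pos + length]
        pos += length
    return fields, pos
```

A reader that does not understand a given identifier can simply skip `length` bytes and keep going, which is exactly the forward-compatibility property the draft is after.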
From 98438f66a3b71bb208b5455fb2bb3105bd236695 Mon Sep 17 00:00:00 2001
From: Sandeep Shetty
Date: Tue, 16 Apr 2013 17:37:09 +0530
Subject: [PATCH 0334/2880] Added phpish/redis

---
 clients.json | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/clients.json b/clients.json
index 327a74c93c..33b7e4da66 100644
--- a/clients.json
+++ b/clients.json
@@ -362,7 +362,15 @@
     "description": "Lightweight, standalone, unit-tested fork of Redisent which wraps phpredis for best performance if available.",
     "authors": ["colinmollenhour"]
   },
-
+
+  {
+    "name": "phpish/redis",
+    "language": "PHP",
+    "repository": "https://github.com/phpish/redis",
+    "description": "Simple Redis client in PHP",
+    "authors": ["sandeepshetty"]
+  },
+
   {
     "name": "redis-py",
     "language": "Python",
From 10847affa498574ebb8ffaac672f1938f9c5ae43 Mon Sep 17 00:00:00 2001
From: antirez
Date: Tue, 23 Apr 2013 10:45:14 +0200
Subject: [PATCH 0335/2880] Sentinel doc updated with handling of resurrecting master.

---
 topics/sentinel.md | 27 ++++++++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)

diff --git a/topics/sentinel.md b/topics/sentinel.md
index d2bea6849e..48a318e84e 100644
--- a/topics/sentinel.md
+++ b/topics/sentinel.md
@@ -288,9 +288,9 @@ it the **Subjective Leader**, and is selected using the following rule:
 For a Sentinel to sense to be the **Objective Leader**, that is, the Sentinel that should start the failover process, the following conditions are needed.
 
 * It thinks it is the subjective leader itself.
-* It receives acknowledges from other Sentinels about the fact it is the leader: at least 50% plus one of all the Sentinels that were able to reply to the `SENTINEL is-master-down-by-addr` request shoudl agree it is the leader, and additionally we need a total level of agreement at least equal to the configured quorum of the master instance that we are going to failover.
+* It receives acknowledges from other Sentinels about the fact it is the leader: at least 50% plus one of all the Sentinels that were able to reply to the `SENTINEL is-master-down-by-addr` request should agree it is the leader, and additionally we need a total level of agreement at least equal to the configured quorum of the master instance that we are going to failover.
 
-Once a Sentinel thinks it is the Leader, the failover starts, but there is always a delay of five seconds plus an additional random delay. This is an additional layer of protection because if during this period we see another instance turning a slave into a master, we detect it as another instance starting the failover and turn ourselves into an observer instead.
+Once a Sentinel thinks it is the Leader, the failover starts, but there is always a delay of five seconds plus an additional random delay. This is an additional layer of protection because if during this period we see another instance turning a slave into a master, we detect it as another instance starting the failover and turn ourselves into an observer instead. This is just a redundancy layer and should in theory never happen.
 
 **Sentinel Rule #11**: A **Good Slave** is a slave with the following requirements:
 
 * It is not in SDOWN nor in ODOWN condition.
@@ -298,6 +298,7 @@ Once a Sentinel thinks it is the Leader, the failover starts, but there is alway
 * Latest PING reply we received from it is not older than five seconds.
 * Latest INFO reply we received from it is not older than five seconds.
 * The latest INFO reply reported that the link with the master is down for no more than the time elapsed since we saw the master entering SDOWN state, plus ten times the configured `down_after_milliseconds` parameter. 
So for instance if a Sentinel is configured to sense the SDOWN condition after 10 seconds, and the master is down since 50 seconds, we accept a slave as a Good Slave only if the replication link was disconnected less than `50+(10*10)` seconds (two minutes and half more or less). +* It is not flagged as DEMOTE (see the section about resurrecting masters). **Sentinel Rule #12**: A **Subjective Leader** from the point of view of a Sentinel, is the Sentinel (including itself) with the lower runid monitoring a given master, that also replied to PING less than 5 seconds ago, reported to be able to do the failover via Pub/Sub hello channel, and is not in DISCONNECTED state. @@ -400,6 +401,26 @@ the configuration back to the original master. * A failover is in progress and a slave to promote was already selected (or in the case of the observer was already detected as master). * The promoted slave is in **Extended SDOWN** condition (continually in SDOWN condition for at least ten times the configured `down-after-milliseconds`). +Resurrecting master +--- + +After the failover, at some point the old master may return back online. Starting with Redis 2.6.13 Sentinel is able to handle this condition by automatically reconfiguring the old master as a slave of the new master. + +This happens in the following way: + +* After the failover has started from the point of view of a Sentinel, either as a leader, or as an observer that detected the promotion of a slave, the old master is put in the list of slaves of the new master, but with a special `DEMOTE` flag (the flag can be seen in the `SENTINEL SLAVES` command output). +* Once the master is back online and it is possible to contact it again, if it still claims to be a master (from INFO output) Sentinels will send a `SLAVEOF` command trying to reconfigure it. Once the instance claims to be a slave, the `DEMOTE` flag is cleared. 
+ +There is no single Sentinel in charge of turning the old master into a slave, so the process is resistant to failing Sentinels. At the same time instances with the `DEMOTE` flag set are never selected as promotable slaves. + +In this specific case the `+slave` event is generated only when the old master actually reports to be a slave again in its `INFO` output. + +**Sentinel Rule #19**: Once the failover starts (either as observer or leader), the old master is added as a slave of the new master, flagged as `DEMOTE`. + +**Sentinel Rule #20**: A slave instance claiming to be a master, and flagged as `DEMOTE`, is reconfigured via `SLAVEOF` every time a Sentinel receives an `INFO` output where the wrong role is detected. + +**Sentinel Rule #21**: The `DEMOTE` flag is cleared as soon as an `INFO` output shows the instance to report itself as a slave. + Manual interactions --- @@ -506,7 +527,7 @@ Note: because currently slave priority is not implemented, the selection is performed only discarding unreachable slaves and picking the one with the lower Run ID. -**Sentinel Rule #19**: A Sentinel performing the failover as leader will select the slave to promote, among the existing **Good Slaves** (See rule #11), taking the one with the lower slave priority. When priority is the same the slave with lexicographically lower runid is preferred. +**Sentinel Rule #22**: A Sentinel performing the failover as leader will select the slave to promote, among the existing **Good Slaves** (See rule #11), taking the one with the lower slave priority. When priority is the same the slave with lexicographically lower runid is preferred. APPENDIX B - Get started with Sentinel in five minutes === From b18ff2956f299d42966b5f59d6b77f46ac145cce Mon Sep 17 00:00:00 2001 From: antirez Date: Sat, 4 May 2013 00:58:09 +0200 Subject: [PATCH 0336/2880] 'an user' -> 'a user' everywhere in the docs. 
--- commands/bitcount.md | 2 +- topics/sentinel-spec.md | 8 ++++---- topics/twitter-clone.md | 6 +++--- 3 files changed, 8 insertions(+), 8 deletions(-) diff --git a/commands/bitcount.md b/commands/bitcount.md index ad0ff50560..35f65b6bc1 100644 --- a/commands/bitcount.md +++ b/commands/bitcount.md @@ -38,7 +38,7 @@ with a small progressive integer. For instance day 0 is the first day the application was put online, day 1 the next day, and so forth. -Every time an user performs a page view, the application can register that in +Every time a user performs a page view, the application can register that in the current day the user visited the web site using the `SETBIT` command setting the bit corresponding to the current day. diff --git a/topics/sentinel-spec.md b/topics/sentinel-spec.md index 0b70e52303..a1d33efb7f 100644 --- a/topics/sentinel-spec.md +++ b/topics/sentinel-spec.md @@ -55,7 +55,7 @@ configured quorum, select the desired behavior among many possibilities. Redis Sentinel does not use any proxy: clients reconfiguration is performed running user-provided executables (for instance a shell script or a -Python program) in an user setup specific way. +Python program) in a user setup specific way. In what form it will be shipped === @@ -271,7 +271,7 @@ Guarantees of the Leader election process === As you can see for a Sentinel to become a leader the majority is not strictly -required. An user can force the majority to be needed just setting the master +required. A user can force the majority to be needed just setting the master quorum to, for instance, the value of 5 if there are a total of 9 sentinels. However it is also possible to set the quorum to the value of 2 with 9 @@ -350,7 +350,7 @@ The fail over process consists of the following steps: * 1) Turn the selected slave into a master using the SLAVEOF NO ONE command. * 2) Turn all the remaining slaves, if any, to slaves of the new master. 
This is done incrementally, one slave after the other, waiting for the previous slave to complete the synchronization process before starting with the next one. -* 3) Call an user script to inform the clients that the configuration changed. +* 3) Call a user script to inform the clients that the configuration changed. * 4) Completely remove the old failing master from the table, and add the new master with the same name. If Step "1" fails, the fail over is aborted. @@ -471,7 +471,7 @@ TODO === * More detailed specification of user script error handling, including what return codes may mean, like 0: try again. 1: fatal error. 2: try again, and so forth. -* More detailed specification of what happens when an user script does not return in a given amount of time. +* More detailed specification of what happens when a user script does not return in a given amount of time. * Add a "push" notification system for configuration changes. * Document that for every master monitored the configuration specifies a name for the master that is reported by all the SENTINEL commands. * Make clear that we handle a single Sentinel monitoring multiple masters. diff --git a/topics/twitter-clone.md b/topics/twitter-clone.md index 0453d23ce0..e21e189f11 100644 --- a/topics/twitter-clone.md +++ b/topics/twitter-clone.md @@ -117,14 +117,14 @@ Data layout Working with a relational database this is the stage where the database layout should be produced in form of tables, indexes, and so on. We don't have tables, so what should be designed? We need to identify what keys are needed to represent our objects and what kind of values these keys need to hold. -Let's start from Users. We need to represent this users of course, with the username, userid, password, followers and following users, and so on. The first question is, what should identify an user inside our system? The username can be a good idea since it is unique, but it is also too big, and we want to stay low on memory. 
So like if our DB was a relational one we can associate an unique ID to every user. Every other reference to this user will be done by id. That's very simple to do, because we have our atomic INCR operation! When we create a new user we can do something like this, assuming the user is called "antirez": +Let's start from Users. We need to represent these users of course, with the username, userid, password, followers and following users, and so on. The first question is, what should identify a user inside our system? The username can be a good idea since it is unique, but it is also too big, and we want to stay low on memory. So like if our DB was a relational one we can associate a unique ID to every user. Every other reference to this user will be done by id. That's very simple to do, because we have our atomic INCR operation! When we create a new user we can do something like this, assuming the user is called "antirez": INCR global:nextUserId => 1000 SET uid:1000:username antirez SET uid:1000:password p1pp0 We use the _global:nextUserId_ key in order to always get a unique ID for every new user. Then we use this unique ID to populate all the other keys holding our user data. *This is a Design Pattern* with key-value stores! Keep it in mind. -Besides the fields already defined, we need some more stuff in order to fully define an User. For example sometimes it can be useful to be able to get the user ID from the username, so we set this key too: +Besides the fields already defined, we need some more stuff in order to fully define a User. For example sometimes it can be useful to be able to get the user ID from the username, so we set this key too: 
We'll ha SET uid:1000:auth fea5e81ac8ca77622bed1c2132a021f9 SET auth:fea5e81ac8ca77622bed1c2132a021f9 1000 -In order to authenticate an user we'll do this simple work (`login.php`): +In order to authenticate a user we'll do this simple work (`login.php`): * Get the username and password via the login form * Check if the username:``:uid key actually exists * If it exists we have the user id, (i.e. 1000) From 9e8163f89085aadf6f6b9be1b351c8c4092e68b8 Mon Sep 17 00:00:00 2001 From: Victor Deryagin Date: Tue, 14 May 2013 11:23:46 +0300 Subject: [PATCH 0337/2880] Fixed typo in partitioning.md --- topics/partitioning.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/partitioning.md b/topics/partitioning.md index c21f342036..7d663407f0 100644 --- a/topics/partitioning.md +++ b/topics/partitioning.md @@ -50,7 +50,7 @@ Some features of Redis don't play very well with partitioning: Data store or cache? --- -Partitioning when using Redis ad a data store or cache is conceptually the same, however there is a huge difference. While when Redis is used as a data store you need to be sure that a given key always maps to the same instance, when Redis is used as a cache if a given node is unavailable it is not a big problem if we start using a different node, altering the key-instance map as we wish to improve the *availability* of the system (that is, the ability of the system to reply to our queries). +Partitioning when using Redis as a data store or cache is conceptually the same, however there is a huge difference. While when Redis is used as a data store you need to be sure that a given key always maps to the same instance, when Redis is used as a cache if a given node is unavailable it is not a big problem if we start using a different node, altering the key-instance map as we wish to improve the *availability* of the system (that is, the ability of the system to reply to our queries). 
Consistent hashing implementations are often able to switch to other nodes if the preferred node for a given key is not available. Similarly if you add a new node, part of the new keys will start to be stored on the new node. From 6c0d29710f94e32c2715618148f3e142a1f055ab Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 15 May 2013 12:54:09 +0200 Subject: [PATCH 0338/2880] Changes to the sponsors page. --- topics/sponsors.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/topics/sponsors.md b/topics/sponsors.md index 3fb8ae516f..a2e94d3aa1 100644 --- a/topics/sponsors.md +++ b/topics/sponsors.md @@ -1,7 +1,9 @@ Redis Sponsors === -All the work [Salvatore Sanfilippo](http://twitter.com/antirez) and [Pieter Noordhuis](http://twitter.com/pnoordhuis) are doing in order to develop Redis is sponsored by [VMware](http://vmware.com). The Redis project no longer accepts money donations. +Starting from May 2013, all the work [Salvatore Sanfilippo](http://twitter.com/antirez) is doing in order to develop Redis is sponsored by [Pivotal](http://gopivotal.com). The Redis project no longer accepts money donations. + +Before May 2013 the project was sponsored by VMware with the work of [Salvatore Sanfilippo](http://twitter.com/antirez) and [Pieter Noordhuis](http://twitter.com/pnoordhuis). In the past Redis accepted donations from the following companies: @@ -17,6 +19,6 @@ Also thanks to the following people or organizations that donated to the Project * [Brad Jasper](http://bradjasper.com/) * [Mrkris](http://www.mrkris.com/) -We are grateful to [VMware](http://vmware.com) and to the companies and people that donated to the Redis project. Thank you. +We are grateful to [Pivotal](http://gopivotal.com), [VMware](http://vmware.com) and to the other companies and people that donated to the Redis project. Thank you. The Redis.io domain is kindly donated to the project by [I Want My Name](http://iwantmyname.com). 
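The consistent-hashing behavior mentioned in the partitioning patch above — when a node disappears, only the keys that mapped to it move to another node — can be sketched with a toy hash ring. This is an illustrative Python model, not a real Redis client; `Ring` and `_h` are invented names:

```python
import bisect
import hashlib

# Toy consistent-hash ring: keys map to the first node whose hash follows
# the key's hash on the ring. Removing a node only remaps that node's keys.
# Illustrative only; real clients typically also use virtual nodes.

def _h(s):
    """Deterministic hash of a string onto the ring."""
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self.ring = sorted((_h(n), n) for n in nodes)

    def node_for(self, key):
        # First ring position at or after the key's hash, wrapping around.
        i = bisect.bisect(self.ring, (_h(key),)) % len(self.ring)
        return self.ring[i][1]

    def remove(self, node):
        self.ring = [entry for entry in self.ring if entry[1] != node]
```

With this structure, dropping one node leaves every other key→node assignment untouched, which is exactly why consistent hashing suits the cache use case described in the patch.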
From 14cc15a04a74c63864b1c96fc83c8936cb4dc04d Mon Sep 17 00:00:00 2001 From: BB Date: Sat, 18 May 2013 08:43:46 +0200 Subject: [PATCH 0339/2880] Added Redis client for Rebol. --- clients.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/clients.json b/clients.json index 327a74c93c..c0caf92da5 100644 --- a/clients.json +++ b/clients.json @@ -399,6 +399,14 @@ "active": true }, + { + "name": "prot-redis", + "language": "Rebol", + "repository": "https://github.com/rebolek/prot-redis", + "description": "Redis network scheme for Rebol 3", + "authors": ["rebolek"] + }, + { "name": "scala-redis", "language": "Scala", From 68f2caa343f6921e30d78844d2f8f4d4d04cf0a2 Mon Sep 17 00:00:00 2001 From: Matt MacAulay Date: Fri, 14 Jun 2013 15:00:51 -0400 Subject: [PATCH 0340/2880] Added Brando to the list of clients --- clients.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/clients.json b/clients.json index 327a74c93c..5bdbb735d6 100644 --- a/clients.json +++ b/clients.json @@ -669,5 +669,13 @@ "repository": "https://github.com/ctstone/csredis", "description": "Async (and sync) client for Redis and Sentinel", "authors": ["ctnstone"] + }, + + { + "name": "Brando", + "language": "Scala", + "repository": "https://github.com/chrisdinn/brando", + "description": "A Redis client written with the Akka IO package introduced in Akka 2.2.", + "authors": ["chrisdinn"] } ] From c49821ce776a8e1c4e9ba41e1d8898b8c1739cc9 Mon Sep 17 00:00:00 2001 From: Matt Perpick Date: Tue, 18 Jun 2013 12:16:30 -0300 Subject: [PATCH 0341/2880] Fixing latency typo s/log time/long time --- topics/latency.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/latency.md b/topics/latency.md index 261a5ea2dd..29bbaaff1a 100644 --- a/topics/latency.md +++ b/topics/latency.md @@ -436,7 +436,7 @@ The active expiring is designed to be adaptive. 
An expire cycle is started every + Sample `REDIS_EXPIRELOOKUPS_PER_CRON` keys, evicting all the keys already expired. + If more than 25% of the keys were found expired, repeat. -Given that `REDIS_EXPIRELOOKUPS_PER_CRON` is set to 10 by default, and the process is performed ten times per second, usually just 100 keys per second are actively expired. This is enough to clean the DB fast enough even when already expired keys are not accessed for a log time, so that the *lazy* algorithm does not help. At the same time expiring just 100 keys per second has no effects in the latency a Redis instance. +Given that `REDIS_EXPIRELOOKUPS_PER_CRON` is set to 10 by default, and the process is performed ten times per second, usually just 100 keys per second are actively expired. This is enough to clean the DB fast enough even when already expired keys are not accessed for a long time, so that the *lazy* algorithm does not help. At the same time expiring just 100 keys per second has no effect on the latency of a Redis instance. However the algorithm is adaptive and will loop if it finds more than 25% of keys already expired in the set of sampled keys. But given that we run the algorithm ten times per second, looping requires the unlucky event of more than 25% of the keys in our random sample expiring *in the same second*. From 9f61549ef4923c47708f82d3f380be73d0555940 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 26 Jun 2013 11:38:56 +0200 Subject: [PATCH 0342/2880] PUBSUB command documented. 
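The adaptive expiry cycle described in the latency patch above can be modeled as a small sketch. This is illustrative Python only; `expire_cycle` and `SAMPLE_SIZE` are made-up names standing in for the real C implementation:

```python
import random

# Toy model of the adaptive expire cycle: sample a few keys, evict the
# expired ones, and repeat while more than 25% of the sample was expired.
# Illustrative only; the real server bounds the loop by time as well.

SAMPLE_SIZE = 10  # stand-in for REDIS_EXPIRELOOKUPS_PER_CRON

def expire_cycle(expires, now, rng=random):
    """`expires` maps key -> expire time; evicts in place, returns count."""
    evicted = 0
    while expires:
        sample = rng.sample(list(expires), min(SAMPLE_SIZE, len(expires)))
        expired = [k for k in sample if expires[k] <= now]
        for k in expired:
            del expires[k]
        evicted += len(expired)
        if len(expired) <= len(sample) // 4:  # stop once <= 25% were expired
            break
    return evicted
```

The loop terminates quickly in the common case (few expired keys in the sample) but keeps going when a large fraction of the keyspace is expiring at once.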
--- commands.json | 18 ++++++++++++++++++ commands/pubsub.md | 42 ++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 60 insertions(+) create mode 100644 commands/pubsub.md diff --git a/commands.json b/commands.json index 16ed1194c4..2f2d70f05a 100644 --- a/commands.json +++ b/commands.json @@ -1121,6 +1121,24 @@ "since": "2.0.0", "group": "pubsub" }, + "PUBSUB": { + "summary": "Inspect the state of the Pub/Sub subsystem", + "complexity": "O(N) for the CHANNELS subcommand, where N is the number of active channels, and assuming constant time pattern matching (relatively short channels and patterns). O(N) for the NUMSUB subcommand, where N is the number of requested channels. O(1) for the NUMPAT subcommand.", + "arguments": [ + { + "name": "subcommand", + "type": "string" + }, + { + "name": "argument", + "type": "string", + "optional": true, + "multiple": true + } + ], + "since": "2.8.0", + "group": "pubsub" + }, "PTTL": { "summary": "Get the time to live for a key in milliseconds", "complexity": "O(1)", diff --git a/commands/pubsub.md b/commands/pubsub.md new file mode 100644 index 0000000000..319653f738 --- /dev/null +++ b/commands/pubsub.md @@ -0,0 +1,42 @@ +The PUBSUB command is an introspection command that allows inspecting the +state of the Pub/Sub subsystem. It is composed of subcommands that are +documented separately. The general form is: + + PUBSUB ... args ... + +# PUBSUB CHANNELS [pattern] + +Lists the currently *active channels*. An active channel is a Pub/Sub channel +with one or more subscribers (not including clients subscribed to patterns). + +If no `pattern` is specified, all the channels are listed, otherwise if pattern +is specified only channels matching the specified glob-style pattern are +listed. + +@return + +@multi-bulk-reply: a list of active channels, optionally matching the specified pattern. + +# PUBSUB NUMSUB [channel-1 ... 
channel-N] + +Returns the number of subscribers (not counting clients subscribed to patterns) +for the specified channels. + +@return + +@multi-bulk-reply: a list of channels and number of subscribers for every channel. The format is channel, count, channel, count, ..., so the list is flat. +The order in which the channels are listed is the same as the order of the +channels specified in the command call. + +Note that it is valid to call this command without channels. In this case it +will just return an empty list. + +# PUBSUB NUMPAT + +Returns the number of subscriptions to patterns (that are performed using the +`PSUBSCRIBE` command). Note that this is not just the count of clients subscribed +to patterns but the total number of patterns all the clients are subscribed to. + +@return + +@integer-reply: the number of patterns all the clients are subscribed to. From 9db568250965da21e47258e1292757414c7f556b Mon Sep 17 00:00:00 2001 From: Tianon Gravi Date: Thu, 4 Jul 2013 14:56:29 -0600 Subject: [PATCH 0343/2880] Swapped MojoX::Redis for Mojo::Redis MojoX::Redis is deprecated in favor of Mojo::Redis --- clients.json | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/clients.json b/clients.json index 327a74c93c..53b6642e31 100644 --- a/clients.json +++ b/clients.json @@ -291,12 +291,12 @@ }, { - "name": "MojoX::Redis", + "name": "Mojo::Redis", "language": "Perl", - "url": "http://search.cpan.org/dist/MojoX-Redis", - "repository": "https://github.com/und3f/mojox-redis", + "url": "http://search.cpan.org/dist/Mojo-Redis", + "repository": "https://github.com/marcusramberg/mojo-redis", "description": "asynchronous Redis client for Mojolicious", - "authors": ["und3f"], + "authors": ["und3f", "marcusramberg", "jhthorsen"], "active": true }, From 510229a5efe3e8035fdfd103a4365162785ad323 Mon Sep 17 00:00:00 2001 From: Jan-Erik Rediger Date: Thu, 18 Jul 2013 22:12:07 +0200 Subject: [PATCH 0344/2880] Document COPY and REPLACE 
option for migrate. --- commands.json | 12 ++++++++++++ commands/migrate.md | 5 +++++ 2 files changed, 17 insertions(+) diff --git a/commands.json b/commands.json index 2f2d70f05a..a50abdcf82 100644 --- a/commands.json +++ b/commands.json @@ -964,6 +964,18 @@ { "name": "timeout", "type": "integer" + }, + { + "name": "copy", + "type": "enum", + "enum": ["COPY"], + "optional": true + }, + { + "name": "replace", + "type": "enum", + "enum": ["REPLACE"], + "optional": true } ], "since": "2.6.0", diff --git a/commands/migrate.md b/commands/migrate.md index 69736e1f21..775d1ea6fc 100644 --- a/commands/migrate.md +++ b/commands/migrate.md @@ -37,6 +37,11 @@ same name was also _already_ present on the target instance). On success OK is returned. +## Options + +* `COPY` -- Do not remove the key from the local instance. +* `REPLACE` -- Replace existing key on the remote instance. + @return @status-reply: The command returns OK on success. From 6dbf47427835c6120b5f9f35a3c9c1689318c54b Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 19 Jul 2013 10:44:50 +0200 Subject: [PATCH 0345/2880] Replication page updated with Redis 2.8 features. --- topics/replication.md | 65 ++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 64 insertions(+), 1 deletion(-) diff --git a/topics/replication.md b/topics/replication.md index 43c5c27014..3191fc7c5e 100644 --- a/topics/replication.md +++ b/topics/replication.md @@ -6,6 +6,8 @@ replication that allows slave Redis servers to be exact copies of master servers. The following are some very important facts about Redis replication: +* Redis uses asynchronous replication. Starting with Redis 2.8 there is however a periodic (one time every second) acknowledge of the replication stream processed by slaves. + * A master can have multiple slaves. * Slaves are able to accept other slaves connections. Aside from @@ -56,7 +58,33 @@ slave link goes down for some reason. 
If the master receives multiple concurrent slave synchronization requests, it performs a single background save in order to serve all of them. -When a master and a slave reconnects after the link went down, a full resync is performed. +When a master and a slave reconnect after the link goes down, a full resync +is always performed. However starting with Redis 2.8, a partial resynchronization +is also possible. + +Partial resynchronization +--- + +Starting with Redis 2.8, master and slave are usually able to continue the +replication process without requiring a full resynchronization after the +replication link went down. + +This works using an in-memory backlog of the replication stream on the +master side. Also the master and all the slaves agree on a *replication +offset* and a *master run id*, so when the link goes down, the slave will +reconnect and ask the master to continue the replication, assuming the +master run id is still the same, and that the offset specified is available +in the replication backlog. + +If the conditions are met, the master just sends the part of the replication +stream the slave missed, and the replication continues. +Otherwise a full resynchronization is performed as in the past versions of +Redis. + +The new partial resynchronization feature uses the `PSYNC` command internally, +while the old implementation used the `SYNC` command, however a Redis 2.8 +slave is able to detect if the server it is talking with does not support +`PSYNC`, and will use `SYNC` instead. Configuration --- @@ -70,6 +98,10 @@ Of course you need to replace 192.168.1.1 6379 with your master IP address (or hostname) and port. Alternatively, you can call the `SLAVEOF` command and the master host will start a sync with the slave. +There are also a few parameters in order to tune the replication backlog kept +in memory by the master to perform the partial resynchronization. See the example +`redis.conf` shipped with the Redis distribution for more information. 
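The master-side choice between partial and full resynchronization described above can be modeled roughly like this. Illustrative Python only; `psync_decision` is an invented name and the real handshake is the `PSYNC` command, not this function:

```python
# Rough model of the decision described above: serve a partial resync
# when the slave knows the same master run id and the requested offset
# is still inside the in-memory backlog; otherwise fall back to full sync.
# Illustrative sketch only.

def psync_decision(master_runid, backlog_start, backlog_end,
                   slave_runid, slave_offset):
    if slave_runid != master_runid:
        return "FULLRESYNC"  # the slave followed a different master history
    if not (backlog_start <= slave_offset <= backlog_end):
        return "FULLRESYNC"  # the requested data is no longer buffered
    return "CONTINUE"        # send only the missing part of the stream
```

The backlog is a fixed-size window, which is why a slave that stays disconnected too long still ends up doing a full resync.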
+ Read only slave --- @@ -93,3 +125,34 @@ To do it on a running instance, use `redis-cli` and type: To set it permanently, add this to your config file: masterauth + +Allow writes only with N attached replicas +--- + +Starting with Redis 2.8 it is possible to configure a Redis master in order to +accept write queries only if at least N slaves are currently connected to the +master, in order to improve data safety. + +However because Redis uses asynchronous replication it is not possible to ensure +the slave actually received a given write, so there is always a window for data +loss. + +This is how the feature works: + +* Redis slaves ping the master every second, acknowledging the amount of replication stream processed. +* Redis masters remember the last time they received a ping from every slave. +* The user can configure a minimum number of slaves that have a lag not greater than a maximum number of seconds. + +If there are at least N slaves, with a lag less than M seconds, then the write will be accepted. + +You may think of it as a relaxed version of the "C" in the CAP theorem, where consistency is not ensured for a given write, but at least the time window for data loss is restricted to a given number of seconds. + +If the conditions are not met, the master will instead reply with an error and the write will not be accepted. + +There are two configuration parameters for this feature: + +* min-slaves-to-write `` +* min-slaves-max-lag `` + +For more information please check the example `redis.conf` file shipped with the +Redis source distribution. From f3323a2761b6244452c923504d1c036cacdfccec Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 19 Jul 2013 10:44:50 +0200 Subject: [PATCH 0346/2880] CONFIG REWRITE documented. 
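The min-slaves gating described in the replication patch above can be sketched as a simple check. Illustrative Python only; `write_allowed` is a made-up name for the test the master performs before accepting a write:

```python
import time

# Sketch of the min-slaves gating: a write is accepted only if at least
# `min_slaves` slaves acknowledged the replication stream no more than
# `max_lag` seconds ago. Illustrative model, not the real implementation.

def write_allowed(last_ack_times, min_slaves, max_lag, now=None):
    """`last_ack_times` holds the timestamp of each slave's last ack."""
    now = time.time() if now is None else now
    good = sum(1 for t in last_ack_times if now - t <= max_lag)
    return good >= min_slaves
```

Note that this only bounds the data-loss window; because acknowledgments are periodic and replication is asynchronous, it does not guarantee a given write reached any slave.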
--- commands.json | 5 +++++ commands/config rewrite.md | 20 ++++++++++++++++++++ 2 files changed, 25 insertions(+) create mode 100644 commands/config rewrite.md diff --git a/commands.json b/commands.json index 2f2d70f05a..d0d4b14a46 100644 --- a/commands.json +++ b/commands.json @@ -180,6 +180,11 @@ "since": "2.0.0", "group": "server" }, + "CONFIG REWRITE": { + "summary": "Rewrite the configuration file with the in memory configuration", + "since": "2.8.0", + "group": "server" + }, "CONFIG SET": { "summary": "Set a configuration parameter to the given value", "arguments": [ diff --git a/commands/config rewrite.md b/commands/config rewrite.md new file mode 100644 index 0000000000..dcf5a973ad --- /dev/null +++ b/commands/config rewrite.md @@ -0,0 +1,20 @@ +The `CONFIG REWRITE` command rewrites the `redis.conf` file the server was started with, applying the minimal changes needed to make it reflect the configuration currently used by the server, which may be different from the original one because of the use of the `CONFIG SET` command. + +The rewrite is performed in a very conservative way: + +* Comments and the overall structure of the original redis.conf are preserved as much as possible. +* If an option already exists in the old redis.conf file, it will be rewritten at the same position (line number). +* If an option was not already present, but it is set to its default value, it is not added by the rewrite process. +* If an option was not already present, but it is set to a non-default value, it is appended at the end of the file. +* Unused lines are blanked. For instance if you used to have multiple `save` directives, but the current configuration has fewer or none as you disabled RDB persistence, all the lines will be blanked. +CONFIG REWRITE is also able to rewrite the configuration file from scratch if the original one no longer exists for some reason. 
However if the server was started without a configuration file at all, CONFIG REWRITE will just return an error. + +## Atomic rewrite process + +In order to make sure the redis.conf file is always consistent, that is, on errors or crashes you always end up with either the old file or the new one, the rewrite is performed with a single `write(2)` call that has enough content to be at least as big as the old file. Sometimes additional padding in the form of comments is added in order to make sure the resulting file is big enough, and later the file gets truncated to remove the padding at the end. + +@return + +@status-reply: `OK` when the configuration was rewritten properly. +Otherwise an error is returned. From b478a67868f3636ac27df0464c8a1de97c7d4d95 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 19 Jul 2013 15:41:14 +0200 Subject: [PATCH 0347/2880] Benchmark page updated. --- topics/benchmarks.md | 152 ++++++++++++++++++++++++++++++++++++------- 1 file changed, 128 insertions(+), 24 deletions(-) diff --git a/topics/benchmarks.md b/topics/benchmarks.md index 1a25bc6c58..b645f4ea28 100644 --- a/topics/benchmarks.md +++ b/topics/benchmarks.md @@ -1,35 +1,35 @@ # How fast is Redis? -Redis includes the `redis-benchmark` utility that simulates SETs/GETs done by N -clients at the same time sending M total queries (it is similar to the Apache's -`ab` utility). Below you'll find the full output of a benchmark executed +Redis includes the `redis-benchmark` utility that simulates commands issued +by N clients at the same time sending M total queries (it is similar to +Apache's `ab` utility). Below you'll find the full output of a benchmark executed against a Linux box. 
The following options are supported: Usage: redis-benchmark [-h ] [-p ] [-c ] [-n [-k ] - -h Server hostname (default 127.0.0.1) - -p Server port (default 6379) - -s Server socket (overrides host and port) - -c Number of parallel connections (default 50) - -n Total number of requests (default 10000) - -d Data size of SET/GET value in bytes (default 2) - -k 1=keep alive 0=reconnect (default 1) - -r Use random keys for SET/GET/INCR, random values for SADD + -h Server hostname (default 127.0.0.1) + -p Server port (default 6379) + -s Server socket (overrides host and port) + -c Number of parallel connections (default 50) + -n Total number of requests (default 10000) + -d Data size of SET/GET value in bytes (default 2) + -k 1=keep alive 0=reconnect (default 1) + -r Use random keys for SET/GET/INCR, random values for SADD Using this option the benchmark will get/set keys in the form mykey_rand:000000012456 instead of constant keys, the argument determines the max number of values for the random number. For instance if set to 10 only rand:000000000000 - rand:000000000009 range will be allowed. - -P Pipeline requests. Default 1 (no pipeline). - -q Quiet. Just show query/sec values - --csv Output in CSV format - -l Loop. Run the tests forever - -t Only run the comma separated list of tests. The test + -P Pipeline requests. Default 1 (no pipeline). + -q Quiet. Just show query/sec values + --csv Output in CSV format + -l Loop. Run the tests forever + -t Only run the comma separated list of tests. The test names are the same as the ones produced as output. - -I Idle mode. Just open N idle connections and wait. + -I Idle mode. Just open N idle connections and wait. You need to have a running Redis instance before launching the benchmark. A typical example would be: @@ -39,6 +39,79 @@ A typical example would be: Using this tool is quite easy, and you can also write your own benchmark, but as with any benchmarking activity, there are some pitfalls to avoid. 
+Running only a subset of the tests +--- + +You don't need to run all the default tests every time you execute redis-benchmark. +The simplest way to select only a subset of tests is to use the `-t` option +like in the following example: + + $ redis-benchmark -t set,lpush -n 100000 -q + SET: 74239.05 requests per second + LPUSH: 79239.30 requests per second + +In the above example we asked to run just the SET and LPUSH commands, +in quiet mode (see the `-q` switch). + +It is also possible to specify the command to benchmark directly like in the +following example: + + $ redis-benchmark -n 100000 -q script load "redis.call('set','foo','bar')" + script load redis.call('set','foo','bar'): 69881.20 requests per second + +Selecting the size of the key space +--- + +By default the benchmark runs against a single key. In Redis the difference +between such a synthetic benchmark and a real one is not huge since it is an +in-memory system, however it is possible to stress cache misses and in general +to simulate a more real-world workload by using a large key space. + +This is obtained by using the `-r` switch. For instance if I want to run +one million SET operations, using a random key for every operation out of +100k possible keys, I'll use the following command line: + + $ redis-cli flushall + OK + + $ redis-benchmark -t set -r 100000 -n 1000000 + ====== SET ====== + 1000000 requests completed in 13.86 seconds + 50 parallel clients + 3 bytes payload + keep alive: 1 + + 99.76% `<=` 1 milliseconds + 99.98% `<=` 2 milliseconds + 100.00% `<=` 3 milliseconds + 100.00% `<=` 3 milliseconds + 72144.87 requests per second + + $ redis-cli dbsize + (integer) 99993 + +Using pipelining +--- + +By default every client (the benchmark simulates 50 clients if not otherwise +specified with `-c`) sends the next command only when the reply of the previous +command is received; this means that the server will likely need a read call +in order to read each command from every client. 
Also RTT is paid as well.
+
+Redis supports [pipelining](/topics/pipelining), so it is possible to send
+multiple commands at once, a feature often exploited by real world applications.
+Redis pipelining is able to dramatically improve the number of operations per
+second a server is able to deliver.
+
+This is an example of running the benchmark in a MacBook Air 11" using a
+pipeline of 16 commands:
+
+    $ redis-benchmark -n 1000000 -t set,get -P 16 -q
+    SET: 403063.28 requests per second
+    GET: 508388.41 requests per second
+
+Using pipelining resulted into a sensible amount of more commands processed.
+
 Pitfalls and misconceptions
 ---------------------------
 
@@ -239,16 +312,47 @@ the generated log file on a remote filesystem.
 instance using INFO at regular interval to gather statistics is probably
 fine, but MONITOR will impact the measured performance significantly.
 
-# Example of benchmark result
+# Benchmark results on different virtualized and bare metal servers.
+
+* The test was done with 50 simultaneous clients performing 2 million requests.
+* Redis 2.6.14 is used for all the tests.
+* Test executed using the loopback interface.
+* Test executed using a key space of 1 million keys.
+* Test executed with and without pipelining (16 commands pipeline).
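The round-trip cost discussed in the pipelining section above can be sketched with a toy throughput model. This is purely illustrative: the function, the 200-microsecond RTT, and the assumption that the benchmark is entirely network-bound are all hypothetical, not measurements from this document.

```python
# Toy model: in plain request/response mode each command costs one full
# network round trip, while with pipelining a whole batch of commands
# shares a single round trip. Server CPU cost is deliberately ignored,
# so this only describes the network-bound regime.

def ops_per_sec(rtt_us: int, pipeline_depth: int = 1) -> float:
    """Commands per second for one client that sends `pipeline_depth`
    commands per network round trip of `rtt_us` microseconds."""
    return pipeline_depth * 1_000_000 / rtt_us

rtt = 200  # assume a 200 microsecond loopback round trip

plain = ops_per_sec(rtt)       # one command per round trip
piped = ops_per_sec(rtt, 16)   # 16 commands per round trip, as with -P 16

print(int(plain), int(piped))  # 5000 80000
```

In this model the speedup equals the pipeline depth; the real measurements above show a smaller gain because at very high rates the server's CPU, not the network, becomes the limiting factor.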
+ +**Intel(R) Xeon(R) CPU E5520 @ 2.27GHz (with pipelining)** + + $ ./redis-benchmark -r 1000000 -n 2000000 -t get,set,lpush,lpop -P 16 -q + SET: 552028.75 requests per second + GET: 707463.75 requests per second + LPUSH: 767459.75 requests per second + LPOP: 770119.38 requests per second + +**Intel(R) Xeon(R) CPU E5520 @ 2.27GHz (without pipelining)** + + $ ./redis-benchmark -r 1000000 -n 2000000 -t get,set,lpush,lpop -q + SET: 122556.53 requests per second + GET: 123601.76 requests per second + LPUSH: 136752.14 requests per second + LPOP: 132424.03 requests per second + +**Linode 2048 instance (with pipelining)** + + $ ./redis-benchmark -r 1000000 -n 2000000 -t get,set,lpush,lpop -q -P 16 + SET: 195503.42 requests per second + GET: 250187.64 requests per second + LPUSH: 230547.55 requests per second + LPOP: 250815.16 requests per second -* The test was done with 50 simultaneous clients performing 100000 requests. -* The value SET and GET is a 256 bytes string. -* The Linux box is running *Linux 2.6*, it's *Xeon X3320 2.5 GHz*. -* Text executed using the loopback interface (127.0.0.1). +**Linode 2048 instance (without pipelining)** -Results: *about 110000 SETs per second, about 81000 GETs per second.* + $ ./redis-benchmark -r 1000000 -n 2000000 -t get,set,lpush,lpop -q + SET: 35001.75 requests per second + GET: 37481.26 requests per second + LPUSH: 36968.58 requests per second + LPOP: 35186.49 requests per second -## Latency percentiles +## More detailed tests without pipelining $ redis-benchmark -n 100000 From 7a87240ed0e105906d7005874df0e9142f2aafb2 Mon Sep 17 00:00:00 2001 From: Philipp Klose Date: Fri, 26 Jul 2013 02:16:15 +0200 Subject: [PATCH 0348/2880] Haxe was renamed Haxe was renamed. From "haXe" to "Haxe". 
--- clients.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 327a74c93c..1a7d9e6d33 100644 --- a/clients.json +++ b/clients.json @@ -489,7 +489,7 @@ { "name": "hxneko-redis", - "language": "haXe", + "language": "Haxe", "url": "http://code.google.com/p/hxneko-redis", "repository": "http://code.google.com/p/hxneko-redis/source/browse", "description": "", From ae80f66def21b68498f5c592d974cb3bdc1196fb Mon Sep 17 00:00:00 2001 From: sugelav Date: Sat, 27 Jul 2013 23:25:51 +0530 Subject: [PATCH 0349/2880] Added entry for aredis java client in clients.json. --- clients.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/clients.json b/clients.json index 327a74c93c..df24918f70 100644 --- a/clients.json +++ b/clients.json @@ -212,6 +212,14 @@ "active": true }, + { + "name": "aredis", + "language": "Java", + "repository": "http://aredis.sourceforge.net/", + "description": "Asynchronous, pipelined client based on Java 7 NIO Channel API", + "authors": ["msuresh"] + }, + { "name": "redis-lua", "language": "Lua", From 209a76a7270d85a84452b5cfd2580cdc9fe314b1 Mon Sep 17 00:00:00 2001 From: sugelav Date: Sun, 28 Jul 2013 12:57:02 +0530 Subject: [PATCH 0350/2880] Minor change to the description of aredis. 
--- clients.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clients.json b/clients.json index df24918f70..00fd367596 100644 --- a/clients.json +++ b/clients.json @@ -216,7 +216,7 @@ "name": "aredis", "language": "Java", "repository": "http://aredis.sourceforge.net/", - "description": "Asynchronous, pipelined client based on Java 7 NIO Channel API", + "description": "Asynchronous, pipelined client based on the Java 7 NIO Channel API", "authors": ["msuresh"] }, From 380858ed6c4ab72c86959626c94887ac462da40f Mon Sep 17 00:00:00 2001 From: sugelav Date: Mon, 29 Jul 2013 22:27:12 +0530 Subject: [PATCH 0351/2880] Blanked out author tag for aredis in clients.json since author msuresh does not have a twitter account. --- clients.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 00fd367596..63dc9d9ee5 100644 --- a/clients.json +++ b/clients.json @@ -217,7 +217,7 @@ "language": "Java", "repository": "http://aredis.sourceforge.net/", "description": "Asynchronous, pipelined client based on the Java 7 NIO Channel API", - "authors": ["msuresh"] + "authors": [] }, { From 8e6266bb33edba5e81afd765ccdf2ca6bf7cbf63 Mon Sep 17 00:00:00 2001 From: Shawn Milochik Date: Fri, 9 Aug 2013 17:04:35 -0400 Subject: [PATCH 0352/2880] typo & grammar fixes, other minor edits Typos: (quite vs quiet, text vs test) A couple of capitalization fixes. A few small English grammar improvements. --- topics/benchmarks.md | 58 ++++++++++++++++++++++---------------------- 1 file changed, 29 insertions(+), 29 deletions(-) diff --git a/topics/benchmarks.md b/topics/benchmarks.md index b645f4ea28..e95a3f35a2 100644 --- a/topics/benchmarks.md +++ b/topics/benchmarks.md @@ -27,7 +27,7 @@ The following options are supported: -q Quiet. Just show query/sec values --csv Output in CSV format -l Loop. Run the tests forever - -t Only run the comma separated list of tests. The test + -t Only run the comma-separated list of tests. 
The test names are the same as the ones produced as output. -I Idle mode. Just open N idle connections and wait. @@ -51,7 +51,7 @@ like in the following example: LPUSH: 79239.30 requests per second In the above example we asked to just run test the SET and LPUSH commands, -in quite mode (see the `-q` switch). +in quiet mode (see the `-q` switch). It is also possible to specify the command to benchmark directly like in the following example: @@ -64,11 +64,11 @@ Selecting the size of the key space By default the benchmark runs against a single key. In Redis the difference between such a synthetic benchmark and a real one is not huge since it is an -in memory system, however it is possible to stress cache misses and in general +in-memory system, however it is possible to stress cache misses and in general to simulate a more real-world work load by using a large key space. This is obtained by using the `-r` switch. For instance if I want to run -one million of SET operations, using a random key for every operation out of +one million SET operations, using a random key for every operation out of 100k possible keys, I'll use the following command line: $ redis-cli flushall @@ -110,7 +110,7 @@ pipeling of 16 commands: SET: 403063.28 requests per second GET: 508388.41 requests per second -Using pipelining resulted into a sensible amount of more commands processed. +Using pipelining results in a significant increase in performance. Pitfalls and misconceptions --------------------------- @@ -124,8 +124,8 @@ in account. + Redis is a server: all commands involve network or IPC roundtrips. It is meaningless to compare it to embedded data stores such as SQLite, Berkeley DB, -Tokyo/Kyoto Cabinet, etc ... because the cost of most operations is precisely -dominated by network/protocol management. +Tokyo/Kyoto Cabinet, etc ... because the cost of most operations is +primarily in network/protocol management. + Redis commands return an acknowledgment for all usual commands. 
Some other data stores do not (for instance MongoDB does not implicitly acknowledge write operations). Comparing Redis to stores involving one-way queries is only @@ -136,7 +136,7 @@ you need multiple connections (like redis-benchmark) and/or to use pipelining to aggregate several commands and/or multiple threads or processes. + Redis is an in-memory data store with some optional persistency options. If you plan to compare it to transactional servers (MySQL, PostgreSQL, etc ...), -then you should consider activating AOF and decide of a suitable fsync policy. +then you should consider activating AOF and decide on a suitable fsync policy. + Redis is a single-threaded server. It is not designed to benefit from multiple CPU cores. People are supposed to launch several Redis instances to scale out on several cores if needed. It is not really fair to compare one @@ -184,7 +184,7 @@ memcached (dormando) developers. You can see that in the end, the difference between the two solutions is not so staggering, once all technical aspects are considered. Please note both -Redis and memcached have been optimized further after these benchmarks ... +Redis and memcached have been optimized further after these benchmarks. Finally, when very efficient servers are benchmarked (and stores like Redis or memcached definitely fall in this category), it may be difficult to saturate @@ -198,7 +198,7 @@ Factors impacting Redis performance There are multiple factors having direct consequences on Redis performance. We mention them here, since they can alter the result of any benchmarks. Please note however, that a typical Redis instance running on a low end, -non tuned, box usually provides good enough performance for most applications. +untuned box usually provides good enough performance for most applications. + Network bandwidth and latency usually have a direct impact on the performance. 
It is a good practice to use the ping program to quickly check the latency @@ -207,7 +207,7 @@ Regarding the bandwidth, it is generally useful to estimate the throughput in Gbits/s and compare it to the theoretical bandwidth of the network. For instance a benchmark setting 4 KB strings in Redis at 100000 q/s, would actually consume 3.2 Gbits/s of bandwidth -and probably fit with a 10 GBits/s link, but not a 1 Gbits/s one. In many real +and probably fit within a 10 GBits/s link, but not a 1 Gbits/s one. In many real world scenarios, Redis throughput is limited by the network well before being limited by the CPU. To consolidate several high-throughput Redis instances on a single server, it worth considering putting a 10 Gbits/s NIC @@ -215,24 +215,24 @@ or multiple 1 Gbits/s NICs with TCP/IP bonding. + CPU is another very important factor. Being single-threaded, Redis favors fast CPUs with large caches and not many cores. At this game, Intel CPUs are currently the winners. It is not uncommon to get only half the performance on -an AMD Opteron CPU compared to similar Nehalem EP/Westmere EP/Sandy bridge +an AMD Opteron CPU compared to similar Nehalem EP/Westmere EP/Sandy Bridge Intel CPUs with Redis. When client and server run on the same box, the CPU is the limiting factor with redis-benchmark. + Speed of RAM and memory bandwidth seem less critical for global performance especially for small objects. For large objects (>10 KB), it may become -noticeable though. Usually, it is not really cost effective to buy expensive +noticeable though. Usually, it is not really cost-effective to buy expensive fast memory modules to optimize Redis. -+ Redis runs slower on a VM. Virtualization toll is quite high because ++ Redis runs slower on a VM. The virtualization toll is quite high because for many common operations, Redis does not add much overhead on top of the required system calls and network interruptions. 
Prefer to run Redis on a physical box, especially if you favor deterministic latencies. On a state-of-the-art hypervisor (VMWare), result of redis-benchmark on a VM -through the physical network is almost divided by 2 compared to the +through the physical network is almost cut in half compared to the physical machine, with some significant CPU time spent in system and interruptions. + When the server and client benchmark programs run on the same box, both -the TCP/IP loopback and unix domain sockets can be used. It depends on the -platform, but unix domain sockets can achieve around 50% more throughput than +the TCP/IP loopback and unix domain sockets can be used. Depending on the +platform, unix domain sockets can achieve around 50% more throughput than the TCP/IP loopback (on Linux for instance). The default behavior of redis-benchmark is to use the TCP/IP loopback. + The performance benefit of unix domain sockets compared to TCP/IP loopback @@ -247,7 +247,7 @@ See the graph below. + On multi CPU sockets servers, Redis performance becomes dependant on the NUMA configuration and process location. The most visible effect is that -redis-benchmark results seem non deterministic because client and server +redis-benchmark results seem non-deterministic because client and server processes are distributed randomly on the cores. To get deterministic results, it is required to use process placement tools (on Linux: taskset or numactl). The most efficient combination is always to put the client and server on two @@ -260,7 +260,7 @@ Please note this benchmark is not meant to compare CPU models between themselves ![NUMA chart](https://github.com/dspezia/redis-doc/raw/6374a07f93e867353e5e946c1e39a573dfc83f6c/topics/NUMA_chart.gif) + With high-end configurations, the number of client connections is also an -important factor. Being based on epoll/kqueue, Redis event loop is quite +important factor. 
Being based on epoll/kqueue, the Redis event loop is quite scalable. Redis has already been benchmarked at more than 60000 connections, and was still able to sustain 50000 q/s in these conditions. As a rule of thumb, an instance with 30000 connections can only process half the throughput @@ -278,7 +278,7 @@ Jumbo frames may also provide a performance boost when large objects are used. + Depending on the platform, Redis can be compiled against different memory allocators (libc malloc, jemalloc, tcmalloc), which may have different behaviors in term of raw speed, internal and external fragmentation. -If you did not compile Redis by yourself, you can use the INFO command to check +If you did not compile Redis yourself, you can use the INFO command to check the mem_allocator field. Please note most benchmarks do not run long enough to generate significant external fragmentation (contrary to production Redis instances). @@ -289,7 +289,7 @@ Other things to consider One important goal of any benchmark is to get reproducible results, so they can be compared to the results of other tests. -+ A good practice is to try to run tests on isolated hardware as far as possible. ++ A good practice is to try to run tests on isolated hardware as much as possible. If it is not possible, then the system must be monitored to check the benchmark is not impacted by some external activity. + Some configurations (desktops and laptops for sure, some servers as well) @@ -300,8 +300,8 @@ reproducible results, it is better to set the highest possible fixed frequency for all the CPU cores involved in the benchmark. + An important point is to size the system accordingly to the benchmark. The system must have enough RAM and must not swap. On Linux, do not forget -to set the overcommit_memory parameter correctly. Please note 32 and 64 bits -Redis instances have not the same memory footprint. +to set the overcommit_memory parameter correctly. 
Please note 32 and 64 bit +Redis instances do not have the same memory footprint. + If you plan to use RDB or AOF for your benchmark, please check there is no other I/O activity in the system. Avoid putting RDB or AOF files on NAS or NFS shares, or on any other devices impacting your network bandwidth and/or latency @@ -312,13 +312,13 @@ the generated log file on a remote filesystem. instance using INFO at regular interval to gather statistics is probably fine, but MONITOR will impact the measured performance significantly. -# Benchmark results on different virtualized and bare metal servers. +# Benchmark results on different virtualized and bare-metal servers. * The test was done with 50 simultaneous clients performing 2 million requests. * Redis 2.6.14 is used for all the tests. -* Test executed using the loopback interface. -* Test executed using a key space of 1 million keys. -* Test executed with and without pipelining (16 commands pipeline). +* Test was executed using the loopback interface. +* Test was executed using a key space of 1 million keys. +* Test was executed with and without pipelining (16 commands pipeline). **Intel(R) Xeon(R) CPU E5520 @ 2.27GHz (with pipelining)** @@ -447,7 +447,7 @@ will output the following: LPUSH: 34803.41 requests per second LPOP: 37367.20 requests per second -Another one using a 64 bit box, a Xeon L5420 clocked at 2.5 GHz: +Another one using a 64-bit box, a Xeon L5420 clocked at 2.5 GHz: $ ./redis-benchmark -q -n 100000 PING: 111731.84 requests per second @@ -463,7 +463,7 @@ Another one using a 64 bit box, a Xeon L5420 clocked at 2.5 GHz: * Redis version **2.4.2** * Default number of connections, payload size = 256 * The Linux box is running *SLES10 SP3 2.6.16.60-0.54.5-smp*, CPU is 2 x *Intel X5670 @ 2.93 GHz*. -* Text executed while running redis server and benchmark client on the same CPU, but different cores. +* Test executed while running Redis server and benchmark client on the same CPU, but different cores. 
Using a unix domain socket: From 464e917b5c104d62d30aee601e6f3f91adc10805 Mon Sep 17 00:00:00 2001 From: Amber Jain Date: Thu, 15 Aug 2013 19:16:13 +0530 Subject: [PATCH 0353/2880] fixed typos in http://redis.io/topics/quickstart --- topics/quickstart.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/topics/quickstart.md b/topics/quickstart.md index c0dc6f80b3..5de6b92cf5 100644 --- a/topics/quickstart.md +++ b/topics/quickstart.md @@ -28,19 +28,19 @@ In order to compile Redis follow this simple steps: cd redis-stable make -At this point you can try if your build works correctly typing **make test**, but this is an optional step. After the compilation the **src** directory inside the Redis distribution is populated with the different executables that are part of Redis: +At this point you can try if your build works correctly by typing **make test**, but this is an optional step. After the compilation the **src** directory inside the Redis distribution is populated with the different executables that are part of Redis: * **redis-server** is the Redis Server itself. * **redis-cli** is the command line interface utility to talk with Redis. * **redis-benchmark** is used to check Redis performances. * **redis-check-aof** and **redis-check-dump** are useful in the rare event of corrupted data files. -It is a good idea to copy both the Redis server than the command line interface in proper places using the following commands: +It is a good idea to copy both the Redis server and the command line interface in proper places using the following commands: * sudo cp redis-server /usr/local/bin/ * sudo cp redis-cli /usr/local/bin/ -In the following documentation I assume that /usr/local/bin is in your PATH environment variable so you can execute both the binaries without specifying the full path. 
+In the following documentation I assume that /usr/local/bin is in your PATH environment variable so that you can execute both the binaries without specifying the full path. Starting Redis === @@ -114,7 +114,7 @@ commands calling methods. A short interactive example using Ruby: Redis persistence ================= -You can learn [how Redis persisence works in this page](http://redis.io/topics/persistence), however what is important to understand for a quick start is that by default, if you start Redis with the default configuration, Redis will spontaneously save the dataset only from time to time (for instance after at least five minutes if you have at least 100 changes in your data), so if you want your database to persist and be reloaded after a restart make sure to call the **SAVE** command manually every time you want to force a data set snapshot. Otherwise make sure to shutdown the database using the **SHUTDOWN** command: +You can learn [how Redis persisence works on this page](http://redis.io/topics/persistence), however what is important to understand for a quick start is that by default, if you start Redis with the default configuration, Redis will spontaneously save the dataset only from time to time (for instance after at least five minutes if you have at least 100 changes in your data), so if you want your database to persist and be reloaded after a restart make sure to call the **SAVE** command manually every time you want to force a data set snapshot. Otherwise make sure to shutdown the database using the **SHUTDOWN** command: $ redis-cli shutdown @@ -182,5 +182,5 @@ Make sure that everything is working as expected: * Check that your Redis instance is correctly logging in the log file. * If it's a new machine where you can try it without problems make sure that after a reboot everything is still working. 
-Note: in the above instructions we skipped many Redis configurations parameters that you would like to change, for instance in order to use AOF persistence instead of RDB persistence, or to setup replication, and so forth. +Note: In the above instructions we skipped many Redis configuration parameters that you would like to change, for instance in order to use AOF persistence instead of RDB persistence, or to setup replication, and so forth. Make sure to read the redis.conf file (that is heavily commented) and the other documentation you can find in this web site for more information. From a33ab77627e1871fc8b01fa2061533127ac266aa Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 21 Aug 2013 16:48:29 +0200 Subject: [PATCH 0354/2880] Slave election documented in Redis Cluster spec. --- topics/cluster-spec.md | 38 ++++++++++++++++++++++++++++++-------- 1 file changed, 30 insertions(+), 8 deletions(-) diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md index e9efe98668..455c274d0a 100644 --- a/topics/cluster-spec.md +++ b/topics/cluster-spec.md @@ -400,18 +400,40 @@ The FAIL state for the cluster happens in two cases. The second check is required because in order to mark a node from PFAIL to FAIL state, the majority of masters are required. However when we are not connected with the majority of masters it is impossible from our side of the net split to mark nodes as FAIL. However since we detect this condition we set the Cluster state in FAIL mode to stop serving queries. -Slave election (not implemented) +Slave election --- -The design of slave election is a work in progress right now. +Once a master node is in FAIL state, if one or more slaves exist for this master one should be elected as a master and all the other slaves reconfigured to replicate with the new master. -The idea is to use the concept of first slave, that is, out of all the -slaves for a given node, the first slave is the one with the lower -Node ID (comparing node IDs lexicographically). 
+The election of a slave is a task that is handled directly by the slaves of the failing master. The trigger is the following set of conditions:
 
-However it is likely that the same system used for failure reports will be
-used in order to require the majority of masters to authorize the slave
-election.
+* A node is a slave of a master in FAIL state.
+* The master was serving a non-zero number of slots.
+* The slave's data is considered reliable, that is, from the point of view of the replication layer, the replication link has not been down for more than the configured node timeout multiplied by a given multiplication factor (see the `REDIS_CLUSTER_SLAVE_VALIDITY_MULT` define).
+
+If all the above conditions are true, the slave starts requesting
+authorization to be promoted to master from all the reachable masters.
+
+A master will reply with a positive message `FAILOVER_AUTH_GRANTED` if the sender of the message has the following properties:
+
+* It is a slave, and its master is indeed in FAIL state.
+* Ordering all the slaves for this master, it has the lowest Node ID.
+* It appears to be up and running (no FAIL or PFAIL state).
+
+Once the slave receives the authorization from the majority of the masters within a certain amount of time, it starts the failover process, performing the following tasks:
+
+* It starts advertising itself as a master (via PONG packets).
+* It also advertises it is a promoted slave (via PONG packets).
+* It also starts claiming all the slots that were served by the old master.
+* A PONG packet is broadcast to all the nodes to speed up the process.
+
+All the other nodes will update the configuration accordingly. Specifically:
+
+* All the slots claimed by the new master will be updated, since they are currently claimed by a master in FAIL state.
+* All the other slaves of the old master will detect the PROMOTED flag and will switch the replication to the new master.
+* If the old master returns, it will detect the PROMOTED flag and will configure itself as a slave of the new master.
+
+The PROMOTED flag will be lost by a node when it is turned into a slave again for some reason during the life of the cluster.
 
 Publish/Subscribe (implemented, but to refine)
 ===
From 5149995e8fc682be9e21afe7ee82e851193dff85 Mon Sep 17 00:00:00 2001
From: antirez
Date: Wed, 21 Aug 2013 16:49:22 +0200
Subject: [PATCH 0355/2880] typo in cluster spec. elected -> promoted.

---
 topics/cluster-spec.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md
index 455c274d0a..0bbab07625 100644
--- a/topics/cluster-spec.md
+++ b/topics/cluster-spec.md
@@ -403,7 +403,7 @@ The second check is required because in order to mark a node from PFAIL to FAIL
 Slave election
 ---
 
-Once a master node is in FAIL state, if one or more slaves exist for this master one should be elected as a master and all the other slaves reconfigured to replicate with the new master.
+Once a master node is in FAIL state, if one or more slaves exist for this master one should be promoted as a master and all the other slaves reconfigured to replicate with the new master.
 
 The election of a slave is a task that is handled directly by the slaves of the failing master. The trigger is the following set of conditions:
 
From 4424e5354cedd12057d33a809556396ac7bc643b Mon Sep 17 00:00:00 2001
From: Matteo Centenaro
Date: Fri, 23 Aug 2013 18:17:40 +0200
Subject: [PATCH 0356/2880] The redhatvm cited article have a known bug

The "Understanding Virtual Memory" article cited when motivating the
setting for overcommit_memory had the meaning of the values 1 and 2
reversed. I found it while reading this comment
http://superuser.com/a/200504.

With this commit, I'm trying to make this known to the Redis FAQ reader.
The proc(5) man page has it pretty clear: /proc/sys/vm/overcommit_memory This file contains the kernel virtual memory accounting mode. Values are: 0: heuristic overcommit (this is the default) 1: always overcommit, never check 2: always check, never overcommit In mode 0, calls of mmap(2) with MAP_NORESERVE are not checked, and the default check is very weak, leading to the risk of getting a process "OOM-killed". Under Linux 2.4 any nonzero value implies mode 1. In mode 2 (available since Linux 2.6), the total virtual address space on the system is limited to (SS + RAM*(r/100)), where SS is the size of the swap space, and RAM is the size of the physical memory, and r is the contents of the file /proc/sys/vm/overcommit_ratio. --- topics/faq.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/topics/faq.md b/topics/faq.md index dfb42a0665..154c1cbf16 100644 --- a/topics/faq.md +++ b/topics/faq.md @@ -252,8 +252,12 @@ more optimistic allocation fashion, and this is indeed what you want for Redis. A good source to understand how Linux Virtual Memory work and other alternatives for `overcommit_memory` and `overcommit_ratio` is this classic from Red Hat Magazine, ["Understanding Virtual Memory"][redhatvm]. +Beware, this article had 1 and 2 configurtation value for `overcommit_memory` +reversed: reffer to the ["proc(5)"][proc5] man page for the right meaning of the +available values. [redhatvm]: http://www.redhat.com/magazine/001nov04/features/vm/ +[proc5]: http://man7.org/linux/man-pages/man5/proc.5.html ## Are Redis on disk snapshots atomic? 
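The mode 2 formula quoted above from proc(5) — a commit limit of SS + RAM*(r/100) — can be checked with a quick sketch. The machine sizes below are hypothetical; 50 is the kernel's default value for `overcommit_ratio`.

```python
def commit_limit(swap_bytes: int, ram_bytes: int, overcommit_ratio: int) -> int:
    """Total virtual address space allowed under overcommit_memory=2,
    per proc(5): SS + RAM * (r / 100)."""
    return swap_bytes + ram_bytes * overcommit_ratio // 100

GiB = 1024 ** 3

# Hypothetical box: 2 GiB swap, 8 GiB RAM, default overcommit_ratio of 50
limit = commit_limit(2 * GiB, 8 * GiB, 50)
print(limit // GiB)  # 6
```

Under such a limit, the copy-on-write fork() that Redis performs for background saves can be refused even though most of the duplicated address space would never actually be touched — which is why the FAQ recommends the more optimistic `overcommit_memory = 1` setting for Redis.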
From a7a6c8751c6a798713a2d4e35c07fc1141c2c642 Mon Sep 17 00:00:00 2001 From: Matteo Centenaro Date: Fri, 23 Aug 2013 18:23:26 +0200 Subject: [PATCH 0357/2880] Remove " araund proc(5) --- topics/faq.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/faq.md b/topics/faq.md index 154c1cbf16..5ea626080a 100644 --- a/topics/faq.md +++ b/topics/faq.md @@ -253,7 +253,7 @@ A good source to understand how Linux Virtual Memory work and other alternatives for `overcommit_memory` and `overcommit_ratio` is this classic from Red Hat Magazine, ["Understanding Virtual Memory"][redhatvm]. Beware, this article had 1 and 2 configurtation value for `overcommit_memory` -reversed: reffer to the ["proc(5)"][proc5] man page for the right meaning of the +reversed: reffer to the [proc(5)][proc5] man page for the right meaning of the available values. [redhatvm]: http://www.redhat.com/magazine/001nov04/features/vm/ From 0c9c73adbf7d10d3d6768df122dfdb5fe3f86bb3 Mon Sep 17 00:00:00 2001 From: Matteo Centenaro Date: Fri, 23 Aug 2013 18:25:19 +0200 Subject: [PATCH 0358/2880] FIX: typos here and there - configurtation -> configuration - reffer -> refer --- topics/faq.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/faq.md b/topics/faq.md index 5ea626080a..c7294947ec 100644 --- a/topics/faq.md +++ b/topics/faq.md @@ -252,8 +252,8 @@ more optimistic allocation fashion, and this is indeed what you want for Redis. A good source to understand how Linux Virtual Memory work and other alternatives for `overcommit_memory` and `overcommit_ratio` is this classic from Red Hat Magazine, ["Understanding Virtual Memory"][redhatvm]. 
-Beware, this article had 1 and 2 configurtation value for `overcommit_memory` -reversed: reffer to the [proc(5)][proc5] man page for the right meaning of the +Beware, this article had 1 and 2 configuration value for `overcommit_memory` +reversed: refer to the [proc(5)][proc5] man page for the right meaning of the available values. [redhatvm]: http://www.redhat.com/magazine/001nov04/features/vm/ From 9ebac39d49f576a4e14ff9a4ffe642fb48aa68ba Mon Sep 17 00:00:00 2001 From: Matteo Centenaro Date: Fri, 23 Aug 2013 18:28:43 +0200 Subject: [PATCH 0359/2880] Format option values as code --- topics/faq.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/faq.md b/topics/faq.md index c7294947ec..9e53d4bde0 100644 --- a/topics/faq.md +++ b/topics/faq.md @@ -252,7 +252,7 @@ more optimistic allocation fashion, and this is indeed what you want for Redis. A good source to understand how Linux Virtual Memory work and other alternatives for `overcommit_memory` and `overcommit_ratio` is this classic from Red Hat Magazine, ["Understanding Virtual Memory"][redhatvm]. -Beware, this article had 1 and 2 configuration value for `overcommit_memory` +Beware, this article had `1` and `2` configuration values for `overcommit_memory` reversed: refer to the [proc(5)][proc5] man page for the right meaning of the available values. From f66cf7cffc7f1980cc9f50a1b1a8bfa9d5c0ebee Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 26 Aug 2013 09:48:13 +0200 Subject: [PATCH 0360/2880] Redis release cycle. --- topics/releases.md | 74 ++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 74 insertions(+) create mode 100644 topics/releases.md diff --git a/topics/releases.md b/topics/releases.md new file mode 100644 index 0000000000..bdaf616fe9 --- /dev/null +++ b/topics/releases.md @@ -0,0 +1,74 @@ +Redis release cycle +=== + +Redis is system software, and a type of system software that holds user +data, so it is among the most critical pieces of a software stack. 
+ +For this reason our release cycle tries hard to make sure that a stable +release is only released when it reaches a sufficiently high level of +stability, even at the cost of a slower release cycle. + +A given version of Redis can be at five different levels of stability: + +* unstable +* development +* frozen +* release candidate +* stable + +Unstable tree +=== + +The unstable version of Redis is always located in the `unstable` branch in +the [Redis Github Repository](http://github.com/antirez/redis). + +This is the source tree where most of the new features are developed and +is not considered to be production ready: it may contain critical bugs, +not entirely ready features, and may be unstable. + +However we try hard to make sure that even the unstable branch of most of the +times usable in a development environment without major issues. + +Forked, Frozen, Release candidate tree +=== + +When a new version of Redis starts to be planned, the unstable branch +(or sometimes the currently stable branch) is forked into a new +branch that has the name of the target release. + +For instance when Redis 2.6 was released a stable, the `unstable` branch +was forked into a `2.8` branch. + +This new branch can be at three different levels of stability: development, frozen, release canddiate. + +* Development: new features and bug fixes are commited into the branch, but not everything goign into `unstable` is merged here. Only the features that can become stable in a reasonable timeframe are merged. +* Frozen: no new feature is added, unless it is almost guaranteed to have zero stability impacts on the source code, and at the same time for some reason it is a very important feature that must be shipped ASAP. Big code changes are only allowed when they are needed in order to fix bugs. +* Release Candidate: only fixes are committed against this release.
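The branch lifecycle described by the bullets above (development, then frozen, then release candidate, then stable) can be sketched as a simple ordered progression. This is a hypothetical illustration, not part of Redis; the `next_level` helper is an invented name.

```python
# Stability levels a forked release branch moves through, in order,
# as described in the release-cycle text above (illustrative only).
LEVELS = ["development", "frozen", "release candidate", "stable"]

def next_level(level: str) -> str:
    """Return the next stability level; 'stable' is terminal."""
    i = LEVELS.index(level)
    return LEVELS[i + 1] if i + 1 < len(LEVELS) else level

assert next_level("frozen") == "release candidate"
assert next_level("stable") == "stable"  # no further transition
```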
+ +Stable tree +=== + +At some point, when a given Redis release is in the Release Candidate state +for enough time, we observe that the frequency at which critical bugs are +signaled start to decrease, at the point that for a few weeks we don't have +any report of serious bugs. + +When this happens the release is marked as stable. + +Version numbers +--- + +Stable releases follow the usual `major.minor.patch` versioning schema, with the following special rules: + +* The minor is even in stable versions of Redis. +* The minor is odd in unstable, development, frozen, release candidates. For instance the unstable version of 2.8.x will have a version number in the form 2.7.x. In general the unstable version of x.y.z will have a version x.(y-1).z. +* As an unstable version of Redis progresses, the patchlevel is incremented from time to time, so at a given time you may have 2.7.2, and later 2.7.3 and so forth. However when the release candidate state is reached, the patchlevel starts tfrom 101. So for instance 2.7.101 is the first release candidate for 2.8, 2.7.105 is Release Candidate 5, and so forth. + +Support +--- + +Old versions are not supported as we try hard to take the Redis mostly API compatible with the past, so upgrading to newer versions is usually trivial. + +So for instance if currently stable release is 2.6.x we accept bug reports and provide support for the previous stable release (2.4.x), but not for older releases such as 2.2.x. + +When 2.8 will be released as a stable release 2.6.x will be the oldest supported release, and so forth. From 4e1f3f4700417ad6ffe1bae8f25abf5737156ecc Mon Sep 17 00:00:00 2001 From: bmatte Date: Mon, 26 Aug 2013 10:53:37 +0200 Subject: [PATCH 0361/2880] Typos. 
--- topics/releases.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/releases.md b/topics/releases.md index bdaf616fe9..33ddcbf2b8 100644 --- a/topics/releases.md +++ b/topics/releases.md @@ -62,7 +62,7 @@ Stable releases follow the usual `major.minor.patch` versioning schema, with the * The minor is even in stable versions of Redis. * The minor is odd in unstable, development, frozen, release candidates. For instance the unstable version of 2.8.x will have a version number in the form 2.7.x. In general the unstable version of x.y.z will have a version x.(y-1).z. -* As an unstable version of Redis progresses, the patchlevel is incremented from time to time, so at a given time you may have 2.7.2, and later 2.7.3 and so forth. However when the release candidate state is reached, the patchlevel starts tfrom 101. So for instance 2.7.101 is the first release candidate for 2.8, 2.7.105 is Release Candidate 5, and so forth. +* As an unstable version of Redis progresses, the patchlevel is incremented from time to time, so at a given time you may have 2.7.2, and later 2.7.3 and so forth. However when the release candidate state is reached, the patchlevel starts from 101. So for instance 2.7.101 is the first release candidate for 2.8, 2.7.105 is Release Candidate 5, and so forth. Support --- From 53a0b25a35a7628bbd6866050a5101cb440522a5 Mon Sep 17 00:00:00 2001 From: Michel Martens Date: Thu, 29 Aug 2013 09:37:49 +0200 Subject: [PATCH 0362/2880] Minor editing for releases. --- topics/releases.md | 31 +++++++++++++++++++------------ 1 file changed, 19 insertions(+), 12 deletions(-) diff --git a/topics/releases.md b/topics/releases.md index bdaf616fe9..25c1d9535b 100644 --- a/topics/releases.md +++ b/topics/releases.md @@ -26,8 +26,9 @@ This is the source tree where most of the new features are developed and is not considered to be production ready: it may contain critical bugs, not entirely ready features, and may be unstable. 
-However we try hard to make sure that even the unstable branch of most of the -times usable in a development environment without major issues. +However, we try hard to make sure that even the unstable branch is +usable most of the time in a development environment without major +issues. Forked, Frozen, Release candidate tree === @@ -36,12 +37,13 @@ When a new version of Redis starts to be planned, the unstable branch (or sometimes the currently stable branch) is forked into a new branch that has the name of the target release. -For instance when Redis 2.6 was released a stable, the `unstable` branch -was forked into a `2.8` branch. +For instance, when Redis 2.6 was released as stable, the `unstable` branch +was forked into the `2.8` branch. -This new branch can be at three different levels of stability: development, frozen, release canddiate. +This new branch can be at three different levels of stability: +development, frozen, and release candidate. -* Development: new features and bug fixes are commited into the branch, but not everything goign into `unstable` is merged here. Only the features that can become stable in a reasonable timeframe are merged. +* Development: new features and bug fixes are commited into the branch, but not everything going into `unstable` is merged here. Only the features that can become stable in a reasonable timeframe are merged. * Frozen: no new feature is added, unless it is almost guaranteed to have zero stability impacts on the source code, and at the same time for some reason it is a very important feature that must be shipped ASAP. Big code changes are only allowed when they are needed in order to fix bugs. * Release Candidate: only fixes are committed against this release. 
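The version-numbering rules stated earlier in releases.md (even minor = stable; odd minor = unstable/development/frozen/RC; release candidates start at patchlevel 101, so 2.7.101 is the first RC for 2.8) can be sketched as a small classifier. The `classify` function is a hypothetical helper for illustration, not anything shipped with Redis.

```python
def classify(version: str) -> str:
    """Classify a Redis version string per the releases.md convention:
    even minor -> stable; odd minor with patch >= 101 -> release candidate;
    odd minor otherwise -> unstable/development/frozen."""
    major, minor, patch = (int(x) for x in version.split("."))
    if minor % 2 == 0:
        return "stable"
    return "release candidate" if patch >= 101 else "pre-release"

assert classify("2.6.14") == "stable"
assert classify("2.7.2") == "pre-release"        # an unstable 2.8 precursor
assert classify("2.7.101") == "release candidate" # first RC of 2.8
```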
@@ -50,10 +52,10 @@ Stable tree At some point, when a given Redis release is in the Release Candidate state for enough time, we observe that the frequency at which critical bugs are -signaled start to decrease, at the point that for a few weeks we don't have -any report of serious bugs. +signaled starts to decrease, to the point that for a few weeks we don't have +any serious bugs reported. -When this happens the release is marked as stable. +When this happens, the release is marked as stable. Version numbers --- @@ -67,8 +69,13 @@ Stable releases follow the usual `major.minor.patch` versioning schema, with the Support --- -Old versions are not supported as we try hard to take the Redis mostly API compatible with the past, so upgrading to newer versions is usually trivial. +Older versions are not supported as we try very hard to make the +Redis API mostly backward compatible. Upgrading to newer versions +is usually trivial. -So for instance if currently stable release is 2.6.x we accept bug reports and provide support for the previous stable release (2.4.x), but not for older releases such as 2.2.x. +For example, if the current stable release is 2.6.x, we accept bug +reports and provide support for the previous stable release +(2.4.x), but not for older ones such as 2.2.x. -When 2.8 will be released as a stable release 2.6.x will be the oldest supported release, and so forth. +When 2.8 becomes the current stable release, the 2.6.x will be the +oldest supported release. From e5d1a63f926d8cfcb7bf04200defd1e7cfb8d5f8 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 3 Sep 2013 14:42:06 +0200 Subject: [PATCH 0363/2880] TTL command doc improved. --- commands/ttl.md | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/commands/ttl.md b/commands/ttl.md index 8be2d28b05..17055f4884 100644 --- a/commands/ttl.md +++ b/commands/ttl.md @@ -2,10 +2,18 @@ Returns the remaining time to live of a key that has a timeout. 
This introspection capability allows a Redis client to check how many seconds a given key will continue to be part of the dataset. +In Redis 2.6 or older the command returns `-1` if the key does not exist or if the key exists but has no associated expire. + +Starting with Redis 2.8 the return value in case of error changed: + +* The command returns `-2` if the key does not exist. +* The command returns `-1` if the key exists but has no associated expire. + +See also the `PTTL` command that returns the same information with milliseconds resolution (only available in Redis 2.8 or greater). + @return -@integer-reply: TTL in seconds, `-2` when `key` does not exist or `-1` when `key` does not -have a timeout. +@integer-reply: TTL in seconds, or a negative value in order to signal an error (see the description above). + @examples From fe073ea10296cb3f091dee2e8e6c90840ac901c6 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 18 Sep 2013 18:47:41 +0200 Subject: [PATCH 0364/2880] Fixed typo and grammar in cluster spec. --- topics/cluster-spec.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md index 0bbab07625..a218fbfa6d 100644 --- a/topics/cluster-spec.md +++ b/topics/cluster-spec.md @@ -422,10 +422,10 @@ A master will reply with a positive message `FAILOVER_AUTH_GRANTED` if the sende Once the slave receives the authorization from the majority of the masters within a certain amount of time, it starts the failover process performing the following tasks: -* It starts advertising itself as a master (via PONG packets). -* It also advertises it is a promoted slave (via PONG packets). -* It also starts claiming all the nodes that were served by the old master. -* A PONG packet is broadcasted to all the nodes to speedup the proccess. +* Starts advertising itself as a master (via PONG packets). +* Starts advertising it is a promoted slave (via PONG packets).
+* Starts claiming all the slots that were served by the old master. +* A PONG packet is broadcasted to all the nodes to speed up the process, without waiting for the usual PING/PONG period. All the other nodes will update the configuration accordingly. Specifically: From f39169afff494f6e87850e84e8f592a403ef3e56 Mon Sep 17 00:00:00 2001 From: reterVision Date: Wed, 25 Sep 2013 15:23:55 +0800 Subject: [PATCH 0365/2880] Fix typos in BLPOP doc. --- commands/blpop.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/blpop.md b/commands/blpop.md index 6213702bc2..7e9cbdb5a8 100644 --- a/commands/blpop.md +++ b/commands/blpop.md @@ -107,7 +107,7 @@ redis> BLPOP list1 list2 0 When `BLPOP` returns an element to the client, it also removes the element from the list. This means that the element only exists in the context of the client: if the client crashes while processing the returned element, it is lost forever. -This can be a problem with some application where we want a more reliable messaging system. When this is the case, please check the `BRPOPLPUSH` command, that is a variant of `BLPOP` that adds the returned element to a traget list before returing it to the client. +This can be a problem with some applications where we want a more reliable messaging system. When this is the case, please check the `BRPOPLPUSH` command, which is a variant of `BLPOP` that adds the returned element to a target list before returning it to the client.
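The reliable-queue idea behind `BRPOPLPUSH` mentioned above — the popped element is atomically pushed onto a second list before the client processes it, so a crash cannot lose it — can be sketched with a pure-Python simulation. No Redis server is involved; `brpoplpush_sim` is a hypothetical stand-in for the real command (against a real server you would call `BRPOPLPUSH`, or `LMOVE` in newer Redis).

```python
from collections import deque

def brpoplpush_sim(source: deque, destination: deque):
    """Simulate BRPOPLPUSH on in-memory lists: RPOP from `source`
    and LPUSH onto `destination` as one step, then return the element."""
    if not source:
        return None
    element = source.pop()           # RPOP from the tail of the source list
    destination.appendleft(element)  # LPUSH onto the processing/backup list
    return element

queue, processing = deque(["job1"]), deque()
job = brpoplpush_sim(queue, processing)
# Even if the worker crashes at this point, the job survives in `processing`
# and can be picked up again later.
assert job == "job1" and list(processing) == ["job1"] and not queue
```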
## Pattern: Event notification From 7b327da4837ffb5d913c20d90375e1a53425bd5e Mon Sep 17 00:00:00 2001 From: Lucas Chi Date: Tue, 1 Oct 2013 22:51:01 -0400 Subject: [PATCH 0366/2880] grammar fixes and rewording in persistence docs --- topics/persistence.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/persistence.md b/topics/persistence.md index 731236e61d..d9710e11fc 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -40,7 +40,7 @@ AOF disadvantages * AOF files are usually bigger than the equivalent RDB files for the same dataset. * AOF can be slower then RDB depending on the exact fsync policy. In general with fsync set to *every second* performances are still very high, and with fsync disabled it should be exactly as fast as RDB even under high load. Still RDB is able to provide more guarantees about the maximum latency even in the case of an huge write load. -* In the past we experienced rare bugs in specific commands (for instance there was one involving blocking commands like BRPOPLPUSH) causing the AOF produced to don't reproduce exactly the same dataset on reloading. This bugs are rare and we have tests in the test suite creating random complex datasets automatically and reloading them to check everything is ok, but this kind of bugs are almost impossible with RDB persistence. To make this point more clear: the Redis AOF works incrementally updating an existing state, like MySQL or MongoDB does, while the RDB snapshotting creates everything from scratch again and again, that is conceptually more robust. However 1) It should be noted that every time the AOF is rewritten by Redis it is recreated from scratch starting from the actual data contained in the data set, making resistance to bugs stronger compared to an always appending AOF file (or one rewritten reading the old AOF instead of reading the data in memory). 2) We never had a single report from users about an AOF corruption that was detected in the real world. 
+* In the past we've experienced rare bugs in specific commands (for instance there was one involving blocking commands like BRPOPLPUSH) causing the AOF to inaccurately reproduce the dataset on recovery. These bugs are rare and are almost impossible with RDB persistence. To make this point more clear: the Redis AOF works by incrementally updating an existing state, like MySQL or MongoDB, while RDB snapshotting is conceptually more robust because it recreates the snapshot from scratch each time. It should be noted that every time the AOF is rewritten it is recreated from scratch using the actual data contained in the dataset and therefore is more robust when compared to a perpetually appending AOF file (or one that is rewritten by reading the old AOF instead of reading the data in memory). To date, there has never been a single report from users about an AOF corruption that was detected in the real world. Ok, so what should I use? --- From bb259e1b7469722ed19741bf11bc59c33b0e801e Mon Sep 17 00:00:00 2001 From: Lucas Chi Date: Tue, 1 Oct 2013 23:05:13 -0400 Subject: [PATCH 0367/2880] update readme with parse task dependencies --- README.md | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/README.md b/README.md index 2ee41c8696..d51fb29f99 100644 --- a/README.md +++ b/README.md @@ -111,6 +111,15 @@ You can do this by running Rake inside your working directory. $ rake parse ``` +The parse task has the following dependencies: + +* batch +* rdiscount + +``` +gem install batch rdiscount +``` + Additionally, if you have [Aspell][han] installed, you can spell check the documentation: From 27e81e8c851f65fd11b7d6ba8bed082b14056801 Mon Sep 17 00:00:00 2001 From: "Seth W. 
Klein" Date: Wed, 2 Oct 2013 18:23:51 -0400 Subject: [PATCH 0368/2880] hoisie/redis.go moved to hoisie/redis --- clients.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/clients.json b/clients.json index 327a74c93c..7496c0aa4b 100644 --- a/clients.json +++ b/clients.json @@ -569,7 +569,7 @@ { "name": "redis.go", "language": "Go", - "repository": "https://github.com/hoisie/redis.go", + "repository": "https://github.com/hoisie/redis", "description": "", "authors": ["hoisie"] }, From 9d68bfcaaee3da9711a67a52d633168e1650b52d Mon Sep 17 00:00:00 2001 From: Damian Janowski Date: Fri, 4 Oct 2013 17:42:15 -0300 Subject: [PATCH 0369/2880] Typo. --- topics/config.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/topics/config.md b/topics/config.md index a8b2558b6d..cf18a9190e 100644 --- a/topics/config.md +++ b/topics/config.md @@ -2,7 +2,7 @@ Redis configuration === Redis is able to start without a configuration file using a built-in default -configuration, however this setup is only recommanded for testing and +configuration, however this setup is only recommended for testing and development purposes. The proper way to configure Redis is by providing a Redis configuration file, @@ -29,7 +29,7 @@ Redis distribution. * The self documented [redis.conf for Redis 2.6](https://raw.github.com/antirez/redis/2.6/redis.conf). * The self documented [redis.conf for Redis 2.4](https://raw.github.com/antirez/redis/2.4/redis.conf). -Passing arguments via command line +Passing arguments via the command line --- Since Redis 2.6 it is possible to also pass Redis configuration parameters From bfd275d38a1a2cb24755c378a5ded76ce5e7bd37 Mon Sep 17 00:00:00 2001 From: antirez Date: Tue, 8 Oct 2013 17:40:13 +0200 Subject: [PATCH 0370/2880] Cluster design document updated. 
--- topics/cluster-spec.md | 399 +++++++++++++++++++++++++++++------------ 1 file changed, 288 insertions(+), 111 deletions(-) diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md index a218fbfa6d..d3389a5c4e 100644 --- a/topics/cluster-spec.md +++ b/topics/cluster-spec.md @@ -1,41 +1,17 @@ Redis cluster Specification (work in progress) === -Introduction +Redis Cluster goals --- -This document is a work in progress specification of Redis cluster. -The document is split into two parts. The first part documents what is already -implemented in the unstable branch of the Redis code base, the second part -documents what is still to be implemented. +Redis Cluster is a distributed implementation of Redis with the following goals, in order of importance in the design: -All the parts of this document may be modified in the future as result of a -design change in Redis cluster, but the part not yet implemented is more likely -to change than the part of the specification that is already implemented. +* High performance and linear scalability up to 1000 nodes. +* No merge operations in order to play well with the large values typical of the Redis data model. +* Write safety: the system tries to retain all the writes originating from clients connected with the majority of the nodes. However there are small windows where acknowledged writes can be lost. +* Availability: Redis Cluster is able to survive partitions where the majority of the master nodes are reachable and there is at least a reachable slave for every master node that is no longer reachable. -The specification includes everything needed to write a client library, -however client libraries authors should be aware that it is possible for the -specification to change in the future in some detail. - -What is Redis cluster ---- - -Redis cluster is a distributed and fault tolerant implementation of a -subset of the features available in the Redis stand alone server.
- -In Redis cluster there are no central or proxy nodes, and one of the -major design goals is linear scalability. - -Redis cluster sacrifices fault tolerance for consistency, so the system -tries to be as consistent as possible while guaranteeing limited resistance -to net splits and node failures (we consider node failures as -special cases of net splits). - -Fault tolerance is achieved using two different roles for nodes, that -can be either masters or slaves. Even if nodes are functionally the same -and run the same server implementation, slave nodes are not used if -not to replace lost master nodes. It is actually possible to use slave nodes -for read-only queries when read-after-write consistency is not required. +What is described in this document is implemented in the `unstable` branch of the Github Redis repository. Implemented subset --- @@ -43,17 +19,17 @@ Implemented subset Redis Cluster implements all the single keys commands available in the non distributed version of Redis. Commands performing complex multi key operations like Set type unions or intersections are not implemented, and in -general all the operations where in theory keys are not available in the -same node are not implemented. +general all the operations where keys are not available in the node processing +the command are not implemented. -In the future it is possible that using the MIGRATE COPY command users will +In the future it is possible that using the `MIGRATE COPY` command users will be able to use *Computation Nodes* to perform multi-key read only operations in the cluster, but it is not likely that the Redis Cluster itself will be able to perform complex multi key operations implementing some kind of transparent way to move keys around. Redis Cluster does not support multiple databases like the stand alone version -of Redis, there is just database 0, and SELECT is not allowed. +of Redis, there is just database 0, and `SELECT` is not allowed. 
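Because Redis Cluster only implements operations whose keys live on the node processing the command, key placement matters: each key deterministically maps to a hash slot owned by one node. As an assumption not spelled out in this excerpt, the sketch below uses the CRC16-modulo-16384 mapping Redis Cluster defines elsewhere in the spec (CRC16-CCITT/XModem variant).

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XModem variant): polynomial 0x1021, init 0, no reflection."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: bytes) -> int:
    # Assumption: keys map onto one of 16384 hash slots via CRC16 mod 16384.
    return crc16_xmodem(key) % 16384

assert crc16_xmodem(b"123456789") == 0x31C3  # standard XModem check value
assert 0 <= hash_slot(b"foo") < 16384
assert hash_slot(b"foo") == hash_slot(b"foo")  # deterministic placement
```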
Clients and Servers roles in the Redis cluster protocol --- @@ -79,6 +55,56 @@ getting redirected if needed, so the client is not required to take the state of the cluster. However clients that are able to cache the map between keys and nodes can improve the performance in a sensible way. +Write safety +--- + +Redis Cluster uses asynchronous replication between nodes, so there are always windows when it is possible to lose writes during partitions. However these windows are very different in the case of a client that is connected to the majority of masters, and a client that is connected to the minority of masters. + +Redis Cluster tries hard to retain all the writes that are performed by clients connected to the majority of masters, with two exceptions: + +1) A write may reach a master, but while the master may be able to reply to the client, the write may not be propagated to slaves via the asynchronous replication used between master and slave nodes. If the master dies without the write reaching the slaves, the write is lost forever in case the master is unreachable for a long enough period that one of its slaves is promoted. + +2) Another theoretically possible failure mode where writes are lost is the following: + +* A master is unreachable because of a partition. +* It gets failed over by one of its slaves. +* After some time it may be reachable again. +* A client with an outdated routing table may write to it before the master is converted to a slave (of the new master) by the cluster. + +Practically this is very unlikely to happen because nodes that are not able to reach the majority of other masters for long enough to be failed over no longer accept writes, and when the partition is fixed writes are still refused for a small amount of time to allow other nodes to inform about configuration changes.
All the nodes in general try to reach a node that joins the cluster again as fast as possible, using a non-blocking connection attempt and sending a ping packet (that is enough to upgrade the node configuration) as soon as there is a new link with the node. This makes it unlikely that a node is not informed about configuration changes before it returns writable again. + +Redis Cluster loses a non-trivial amount of writes on partitions where there is a minority of masters and at least one or more clients, since all the writes sent to the masters may potentially get lost if the masters are failed over in the majority side. + +For a master to be failed over, it must not be reachable by the majority of masters for at least `NODE_TIMEOUT`, so if the partition is fixed before that time, no write is lost. When the partition lasts for more than `NODE_TIMEOUT`, the minority side of the cluster will start refusing writes as soon as `NODE_TIMEOUT` time has elapsed, so there is a maximum window after which the minority side no longer accepts writes, hence no write can be accepted and then lost after that time. + +Availability +--- + +Redis Cluster is not available in the minority side of the partition. In the majority side of the partition, assuming that there are at least the majority of masters and a slave for every unreachable master, the cluster becomes available again after `NODE_TIMEOUT` plus a few more seconds required for a slave to get elected and fail over its master. + +This means that Redis Cluster is designed to survive failures of a few nodes in the cluster, but is not a suitable solution for applications that require availability in the event of large net splits.
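The availability condition just described — a partition side can eventually serve the whole key space only if it holds a majority of masters and a replica for every master stranded on the other side — can be modeled with a small predicate. `majority_side_available` is a hypothetical helper for illustration only.

```python
def majority_side_available(total_masters: int,
                            reachable_masters: int,
                            slaves_cover_lost_masters: bool) -> bool:
    """True if this side of a partition can become available again after
    NODE_TIMEOUT plus a failover election (illustrative model of the text)."""
    has_majority = reachable_masters > total_masters // 2
    return has_majority and slaves_cover_lost_masters

assert majority_side_available(5, 3, True)        # majority side with cover
assert not majority_side_available(5, 2, True)    # minority side: unavailable
assert not majority_side_available(5, 3, False)   # a lost master has no slave here
```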
+ +In the example of a cluster composed of N master nodes where every node has a single slave, the majority side of the cluster will remain available as soon as a single node is partitioned away, and will remain available with a probability of `1-(1/(N*2-1))` when two nodes are partitioned away (after the first node fails we are left with `N*2-1` nodes in total, and the probability of the only master without a replica failing is `1/(N*2-1)`). + +For example in a cluster with 5 nodes and a single slave per node, there is a `1/(5*2-1) = 0.1111` probability that after two nodes are partitioned away from the majority, the cluster will no longer be available, that is, about an 11% probability. + +Performance +--- + +In Redis Cluster nodes don't proxy commands to the right node in charge for a given key, but instead they redirect clients to the right nodes serving a given range of the key space. +Eventually clients obtain an up to date representation of the cluster and which node serves which subset of keys, so during normal operations clients directly contact the right nodes in order to send a given command. + +Because of the use of asynchronous replication, nodes do not wait for other nodes' acknowledgement of writes (optional synchronous replication is a work in progress and will likely be added in future releases). + +Also, because of the restriction to the subset of commands that don't perform operations on multiple keys, data is never moved between nodes except during resharding. + +So normal operations are handled exactly as in the case of a single Redis instance. This means that in a Redis Cluster with N master nodes you can expect the same performance as a single Redis instance multiplied by N as the design allows to scale linearly. At the same time the query is usually performed in a single round trip, since clients usually retain persistent connections with the nodes, so latency figures are also the same as the single stand alone Redis node case.
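The probability figure above follows directly from the formula in the text: after the first of `2N` nodes fails, `2N-1` nodes remain and exactly one master is left without a replica, so the chance that a second random failure takes the cluster down is `1/(2N-1)`. A quick check of the worked example:

```python
def p_unavailable_after_second_failure(n_masters: int) -> float:
    """Probability that a second random node loss makes the cluster
    unavailable, in a cluster of n_masters masters with one slave each,
    per the 1/(N*2-1) formula in the text."""
    return 1 / (n_masters * 2 - 1)

p = p_unavailable_after_second_failure(5)
assert abs(p - 1 / 9) < 1e-12     # 1/(5*2-1)
assert round(p, 4) == 0.1111      # about 11%, matching the example
```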
+ +Why merge operations are avoided +--- + +Redis Cluster design avoids conflicting versions of the same key-value pair in multiple nodes since in the case of the Redis data model this is not always desirable: values in Redis are often very large, it is common to see lists or sorted sets with millions of elements. Also data types are semantically complex. Transferring and merging these kind of values can be a major bottleneck. + Keys distribution model --- @@ -135,16 +161,16 @@ know: * The IP address and TCP port where the node is located. * A set of flags. * A set of hash slots served by the node. -* Last time we sent a PING packet using the cluster bus. -* Last time we received a PONG packet in reply. +* Last time we sent a ping packet using the cluster bus. +* Last time we received a pong packet in reply. * The time at which we flagged the node as failing. * The number of slaves of this node. * The master node ID, if this node is a slave (or 0000000... if it is a master). -Soem of this information is available using the `CLUSTER NODES` command that +Some of this information is available using the `CLUSTER NODES` command that can be sent to all the nodes in the cluster, both master and slave nodes. -The following is an example of output of CLUSTER NODES sent to a master +The following is an example of output of `CLUSTER NODES` sent to a master node in a small cluster of three nodes. $ redis-cli cluster nodes @@ -154,7 +180,16 @@ node in a small cluster of three nodes. In the above listing the different fields are in order: node id, address:port, flags, last ping sent, last pong received, link state, slots. -Nodes handshake (implemented) +Cluster topology +--- + +Redis cluster is a full mesh where every node is connected with every other node using a TCP connection. + +In a cluster of N nodes, every node has N-1 outgoing TCP connections, and N-1 incoming connections. + +These TCP connections are kept alive all the time and are not created on demand. 
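The full-mesh topology above implies a quadratic number of links: each of the N nodes keeps N-1 outgoing connections, and every one of those is some other node's incoming connection. A small sketch of the arithmetic (illustrative only):

```python
def mesh_tcp_connections(n: int) -> int:
    """Total TCP connections in a full mesh of n nodes: each node has
    n-1 outgoing links, and each link is another node's incoming link."""
    return n * (n - 1)

assert mesh_tcp_connections(3) == 6       # tiny three-node cluster
assert mesh_tcp_connections(100) == 9900  # grows quadratically with n
```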
+ +Nodes handshake --- Nodes always accept connection in the cluster bus port, and even reply to @@ -164,15 +199,12 @@ is not considered part of the cluster. A node will accept another node as part of the cluster only in two ways: -* If a node will present itself with a MEET message. A meet message is exactly -like a PING message, but forces the receiver to accept the node as part of -the cluster. Nodes will send MEET messages to other nodes ONLY IF the system -administrator requests this via the following command: - +* If a node will present itself with a `MEET` message. A meet message is exactly +like a `PING` message, but forces the receiver to accept the node as part of +the cluster. Nodes will send `MEET` messages to other nodes **only if** the system administrator requests this via the following command: CLUSTER MEET ip port - * A node will also register another node as part of the cluster if a node that is already trusted will gossip about this other node. So if A knows B, and B knows C, eventually B will send gossip messages to A about C. When this happens, A will register C as part of the network, and will try to connect with C. This means that as long as we join nodes in any connected graph, they'll eventually form a fully connected graph automatically. This means that basically the cluster is able to auto-discover other nodes, but only if there is a trusted relationship that was forced by the system administrator. @@ -248,25 +280,25 @@ The following subcommands are available: * CLUSTER SETSLOT slot MIGRATING node * CLUSTER SETSLOT slot IMPORTING node -The first two commands, ADDSLOTS and DELSLOTS, are simply used to assign +The first two commands, `ADDSLOTS` and `DELSLOTS`, are simply used to assign (or remove) slots to a Redis node. After the hash slots are assigned they will propagate across all the cluster using the gossip protocol. 
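The `ADDSLOTS` / `DELSLOTS` subcommands above simply edit a node's view of which node owns each hash slot, which then spreads via gossip. A minimal sketch of that slot table; `Cluster` and its method names are hypothetical, chosen only to mirror the subcommands.

```python
class Cluster:
    """Toy model of the slot-ownership table ADDSLOTS/DELSLOTS manipulate."""

    def __init__(self):
        self.slot_owner = {}  # hash slot number -> owning node id

    def addslots(self, node_id: str, *slots: int) -> None:
        for s in slots:
            self.slot_owner[s] = node_id  # assignment later spreads via gossip

    def delslots(self, *slots: int) -> None:
        for s in slots:
            self.slot_owner.pop(s, None)  # slot becomes unassigned

c = Cluster()
c.addslots("nodeA", 0, 1, 2)
c.delslots(2)
assert c.slot_owner == {0: "nodeA", 1: "nodeA"}
```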
-The ADDSLOTS command is usually used when a new cluster is configured +The `ADDSLOTS` command is usually used when a new cluster is configured from scratch to assign slots to all the nodes in a fast way. -The SETSLOT subcommand is used to assign a slot to a specific node ID if -the NODE form is used. Otherwise the slot can be set in the two special -states MIGRATING and IMPORTING: +The `SETSLOT` subcommand is used to assign a slot to a specific node ID if +the `NODE` form is used. Otherwise the slot can be set in the two special +states `MIGRATING` and `IMPORTING`: * When a slot is set as MIGRATING, the node will accept all the requests for queries that are about this hash slot, but only if the key in question -exists, otherwise the query is forwarded using a -ASK redirection to the +exists, otherwise the query is forwarded using a `-ASK` redirection to the node that is target of the migration. * When a slot is set as IMPORTING, the node will accept all the requests for queries that are about this hash slot, but only if the request is preceded by an ASKING command. Otherwise if not ASKING command was given by the client, the query is redirected to the real hash slot owner via -a -MOVED redirection error. +a `-MOVED` redirection error. At first this may appear strange, but now we'll make it more clear. Assume that we have two Redis nodes, called A and B. @@ -293,19 +325,19 @@ The above command will return `count` keys in the specified hash slot. For every key returned, redis-trib sends node A a `MIGRATE` command, that will migrate the specified key from A to B in an atomic way (both instances are locked for the time needed to migrate a key so there are no race -conditions). This is how MIGRATE works: +conditions). 
This is how `MIGRATE` works:

    MIGRATE target_host target_port key target_database id timeout

-MIGRATE will connect to the target instance, send a serialized version of
+`MIGRATE` will connect to the target instance, send a serialized version of
the key, and once an OK code is received will delete the old key from its
own dataset. So from the point of view of an external client a key either
exists in A or B in a given time.

In Redis cluster there is no need to specify a database other than 0, but
-MIGRATE can be used for other tasks as well not involving Redis cluster so
+`MIGRATE` can be used for other tasks as well not involving Redis cluster so
it is a general enough command.

-MIGRATE is optimized to be as fast as possible even when moving complex
+`MIGRATE` is optimized to be as fast as possible even when moving complex
keys such as long lists, but of course in Redis cluster reconfiguring the
cluster where big keys are present is not considered a wise procedure if
there are latency constraints in the application using the database.
@@ -345,97 +377,242 @@
Note that however if a buggy client will perform the map earlier this is
not a problem since it will not send the ASKING command before the query
and B will redirect the client to A using a MOVED redirection error.

-Clients implementations hints
+Fault Tolerance
+===
+
+Nodes heartbeat and gossip messages
---

-* TODO Pipelining: use MULTI/EXEC for pipelining.
-* TODO Persistent connections to nodes.
-* TODO hash slot guessing algorithm.
+Nodes in the cluster exchange ping / pong packets.

-Fault Tolerance
-===
+
+Usually a node will ping a few random nodes every second so that the total number of ping packets sent (and pong packets received) is a constant amount regardless of the number of nodes in the cluster.
+
+However every node makes sure to ping every other node to which it hasn't sent a ping, or from which it hasn't received a pong, for longer than half the `NODE_TIMEOUT` time.
Before `NODE_TIMEOUT` has elapsed, nodes also try to reconnect the TCP link with another node to make sure nodes are not believed to be unreachable only because there is a problem in the current TCP connection.
+
+The number of messages exchanged can be bigger than O(N) if `NODE_TIMEOUT` is set to a small figure and the number of nodes (N) is very large, since every node will try to ping every other node for which it doesn't have fresh information for half the `NODE_TIMEOUT` time.
+
+For example in a 100 nodes cluster with a node timeout set to 60 seconds, every node will try to send 99 pings every 30 seconds, that is 3.3 pings per second, which multiplied by 100 nodes is 330 pings per second in the whole cluster.
+
+There are ways to use the gossip information already exchanged by Redis Cluster to reduce the number of messages exchanged in a significant way. For example we may ping within half `NODE_TIMEOUT` only nodes that are already reported to be in "possible failure" state (see later) by other nodes, and ping the other nodes that are reported as working only in a best-effort way within the limit of a few packets per second. However in real-world tests large clusters with very small `NODE_TIMEOUT` settings used to work reliably, so this change will be considered in the future as actual deployments of large clusters are tested.
+
+Ping and Pong packets content
+---
+
+Ping and Pong packets contain a header that is common to all kinds of packets (for instance packets to request a vote), and a special Gossip Section that is specific to Ping and Pong packets.
+
+The common header has the following information:
+
+* Node ID, that is a 160 bit pseudorandom string that is assigned the first time a node is created and remains the same for the entire life of a Redis Cluster node.
+* The `currentEpoch` and `configEpoch` fields, which are used to implement the distributed algorithms used by Redis Cluster (this is explained in detail in the next sections). If the node is a slave the `configEpoch` is the last known `configEpoch` of the master.
+* The node flags, indicating if the node is a slave, a master, and other single-bit node information.
+* A bitmap of the hash slots served by a given node, or if the node is a slave, a bitmap of the slots served by its master.
+* Port: the sender TCP base port (that is, the port used by Redis to accept client commands, add 10000 to this to obtain the cluster port).
+* State: the state of the cluster from the point of view of the sender (down or ok).
+* The master node ID, if this is a slave.
+
+Ping and pong packets contain a gossip section. This section offers to the receiver a view of what the sender node thinks about other nodes in the cluster. The gossip section only contains information about a few random nodes among the known nodes set of the sender.
+
+For every node added in the gossip section the following fields are reported:
+
+* Node ID.
+* IP and port of the node.
+* Node flags.
+
+Gossip sections allow receiving nodes to get information about the state of other nodes from the point of view of the sender. This is useful both for failure detection and to discover other nodes in the cluster.

-Node failure detection
+Failure detection
---

-Failure detection is implemented in the following way:
+Redis Cluster failure detection is used to recognize when a master or slave node is no longer reachable by the majority of nodes and, as a result of this event, either promote a slave to the role of master or, when this is not possible, put the cluster in an error state to stop receiving queries from clients.

-* A node marks another node setting the PFAIL flag (possible failure) if the node is not responding to our PING requests for a given time.
This time is called the node timeout, and is a node-wise setting. -* Nodes broadcast information about other nodes (three random nodes per packet) when pinging other nodes. The gossip section contains information about other nodes flags, including the PFAIL and FAIL flags. -* Nodes remember if other nodes advertised some node as failing. This is called a failure report. -* Once a node (already considering a given other node in PFAIL state) receives enough failure reports, so that the majority of master nodes agree about the failure of a given node, the node is marked as FAIL. -* When a node is marked as FAIL, a message is broadcasted to the cluster in order to force all the reachable nodes to set the specified node as FAIL. +Every node takes a list of flags associated with other known nodes. There are two flags that are used for failure detection that are called `PFAIL` and `FAIL`. `PFAIL` means _Possible failure_, and is a non acknowledged failure type. `FAIL` means that a node is failing and that this condition was confirmed by a majority of masters in a fixed amount of time. -So basically a node is not able to mark another node as failing without external acknowledge, and the majority of the master nodes are required to agree. +**PFAIL flag:** -Old failure reports are removed, so the majority of master nodes need to have a recent entry in the failure report table of a given node for it to mark another node as FAIL. +A node flags another node with the `PFAIL` flag when the node is not reachable for more than `NODE_TIMEOUT` time. Both master and slave nodes can flag another node as `PFAIL`, regardless of its type. -The FAIL state is reversible in two cases: +The concept of non reachability for a Redis Cluster node is that we have an **active ping** (a ping that we sent for which we still have to get a reply) pending for more than `NODE_TIMEOUT`, so for this mechanism to work the `NODE_TIMEOUT` must be large compared to the network round trip time. 
In order to add reliability during normal operations, nodes will try to reconnect with other nodes in the cluster as soon as half of the `NODE_TIMEOUT` has elapsed without a reply to a ping. This mechanism ensures that connections are kept alive, so broken connections should usually not result in false failure reports between nodes.

-* If the FAIL state is set for a slave node, the FAIL state can be reversed if the slave is already reachable. There is no point in retaning the FAIL state for a slave node as it does not serve slots, and we want to make sure we have the chance to promote it to master if needed.
-* If the FAIL state is set for a master node, and after four times the node timeout, plus 10 seconds, the slots are were still not failed over, and the node is reachable again, the FAIL state is reverted.
+**FAIL flag:**

-The rationale for the second case is that if the failover did not worked we want the cluster to continue to work if the master is back online, without any kind of user intervetion.
+The `PFAIL` flag alone is just some local information every node has about other nodes, but it is not used in order to act and is not sufficient to trigger a slave promotion. For a node to be really considered down the `PFAIL` condition needs to be promoted to a `FAIL` condition.

-Cluster state detection (partilly implemented)
+As outlined in the node heartbeats section of this document, every node sends gossip messages to every other node including the state of a few random known nodes. So every node eventually receives the set of node flags for every other node. This way every node has a mechanism to signal other nodes about failure conditions they detected.
+
+This mechanism is used in order to escalate a `PFAIL` condition to a `FAIL` condition, when the following set of conditions is met:
+
+* Some node, that we'll call A, has another node B flagged as `PFAIL`.
+* Node A collected, via gossip sections, information about the state of B from the point of view of the majority of masters in the cluster.
+* The majority of masters signaled the `PFAIL` or `FAIL` condition within `NODE_TIMEOUT * FAIL_REPORT_VALIDITY_MULT` time.
+
+If all the above conditions are true, Node A will:
+
+* Mark the node as `FAIL`.
+* Send a `FAIL` message to all the reachable nodes.
+
+The `FAIL` message will force every receiving node to mark the node in `FAIL` state.
+
+Note that *the FAIL flag is mostly one way*, that is, a node can go from `PFAIL` to `FAIL`, but for the `FAIL` flag to be cleared there are only two possibilities:
+
+* The node is already reachable, and it is a slave. In this case the `FAIL` flag can be cleared as slaves are not failed over.
+* The node is already reachable, is a master, but a long time (N times the `NODE_TIMEOUT`) has elapsed without any detectable slave promotion.
+
+**While the `PFAIL` -> `FAIL` transition uses a form of agreement, the agreement used is weak:**
+
+1) Nodes collect views of other nodes during some time, so even if the majority of master nodes need to "agree", actually this is just state that we collected from different nodes at different times and we are not sure this state is stable.
+
+2) While every node detecting the `FAIL` condition will force that condition on other nodes in the cluster using the `FAIL` message, there is no way to ensure the message will reach all the nodes. For instance a node may detect the `FAIL` condition and because of a partition will not be able to reach any other node.
+
+However the Redis Cluster failure detection has a requirement: eventually all the nodes should agree about the state of a given node even in case of partitions. There are two cases that can originate from split brain conditions: either some minority of nodes believe the node is in `FAIL` state, or a minority of nodes believe the node is not in `FAIL` state.
In both cases, eventually the cluster will reach a single view of the state of a given node:
+
+**Case 1**: If an actual majority of masters flagged a node as `FAIL`, because of the chain effect every other node will eventually flag the master as `FAIL`.
+
+**Case 2**: When only a minority of masters flagged a node as `FAIL`, the slave promotion will not happen (as it uses a more formal algorithm that makes sure everybody will know about the promotion eventually) and every node will clear the `FAIL` state per the `FAIL` state clearing rules above (no promotion after a period longer than N times the `NODE_TIMEOUT`).
+
+**Basically the `FAIL` flag is only used as a trigger to run the safe part of the algorithm** for the slave promotion. In theory a slave may act independently and start a slave promotion when its master is not reachable, and wait for the masters to refuse to provide the acknowledgement if the master is actually reachable by the majority. However the added complexity of the `PFAIL -> FAIL` state, the weak agreement, and the `FAIL` message to force the propagation of the state in the shortest amount of time in the reachable part of the cluster, have practical advantages. Because of these mechanisms, usually all the nodes will stop accepting writes at about the same time if the cluster is in an error condition, which is a desirable feature from the point of view of applications using Redis Cluster. Also, unnecessary elections, initiated by slaves that can't reach a master that is otherwise reachable by the majority of the other master nodes, are avoided.
+
+Cluster epoch
---

-Every cluster node scan the list of nodes every time a configuration change
-happens in the cluster (this can be an update to an hash slot, or simply
-a node that is now in a failure state).
+Redis Cluster uses a concept similar to the Raft algorithm "term".
In Redis Cluster the term is called epoch instead, and it is used in order to give an incremental version to events, so that when multiple nodes provide conflicting information, it is possible for another node to understand which state is the most up to date.
+
+The `currentEpoch` is a 64 bit unsigned number.
+
+At node creation every Redis Cluster node, both slaves and master nodes, sets the `currentEpoch` to 0.
+
+Every time a ping or pong is received from another node, if the epoch of the sender (part of the cluster bus messages header) is greater than the local node epoch, then `currentEpoch` is updated to the sender epoch.
+
+Because of these semantics eventually all the nodes will agree on the greatest epoch in the cluster.
+
+This information is used when the state changes and a node seeks agreement in order to perform some action.
+
+Currently this happens only during slave promotion, as described in the next section. Basically the epoch is a logical clock for the cluster and dictates that a given piece of information wins over one with a smaller epoch.

-Once the configuration is processed the node enters one of the following states:
+Config epoch
+---
+
+Every master always advertises its `configEpoch` in ping and pong packets along with a bitmap advertising the set of slots it serves.

-* FAIL: the cluster can't work. When the node is in this state it will not serve queries at all and will return an error for every query.
-* OK: the cluster can work as all the 16384 slots are served by nodes that are not flagged as FAIL.

The `configEpoch` is set to zero in masters when a new node is created.

-This means that the Redis Cluster is designed to stop accepting queries once even a subset of the hash slots are not available for some time.
+Slaves that are promoted to master because of a failover event instead have a `configEpoch` that is set to the value of the `currentEpoch` at the time the slave won the election in order to replace its failing master.
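The `currentEpoch` convergence rule described above can be sketched with a tiny model (a simplification for illustration, not the actual C implementation):

```python
class ClusterNode:
    """Minimal model of how currentEpoch converges across the cluster."""

    def __init__(self):
        self.current_epoch = 0  # every new node starts at epoch 0

    def on_packet(self, sender_epoch):
        # Adopt the sender's epoch when it is greater, so that all nodes
        # eventually agree on the greatest epoch seen in the cluster.
        if sender_epoch > self.current_epoch:
            self.current_epoch = sender_epoch
```

In the real implementation the epoch travels in the cluster bus header of every ping and pong packet.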
-However there is a portion of time in which an hash slot can't be accessed correctly since the associated node is experiencing problems, but the node is still not marked as failing. In this range of time the cluster will only accept queries about a subset of the 16384 hash slots. +As explained in the next sections the `configEpoch` helps to resolve conflicts due to different nodes claiming diverging configurations (a condition that may happen after partitions). -The FAIL state for the cluster happens in two cases. +Slave nodes also advertise the `configEpoch` field in ping and pong packets, but in case of slaves the field represents the `configEpoch` of its master the last time they exchanged packets. This allows other instances to detect when a slave has an old configuration that needs to be updated (Master nodes will not grant votes to slaves with an old configuration). -* 1) If at least one hash slot is not served as the node serving it currently is in FAIL state. -* 2) If we are not able to reach the majority of masters (that is, if the majorify of masters are simply in PFAIL state, it is enough for the node to enter FAIL mode). +Every time the `configEpoch` changes for some known node, it is permanently stored in the nodes.conf file. -The second check is required because in order to mark a node from PFAIL to FAIL state, the majority of masters are required. However when we are not connected with the majority of masters it is impossible from our side of the net split to mark nodes as FAIL. However since we detect this condition we set the Cluster state in FAIL mode to stop serving queries. +When a node is restarted its `currentEpoch` is set to the greatest `configEpoch` of the known nodes. -Slave election +Slave election and promotion --- -Once a master node is in FAIL state, if one or more slaves exist for this master one should be promoted as a master and all the other slaves reconfigured to replicate with the new master. 
+
+Slave election and promotion is handled by slave nodes, with the help of master nodes that vote for the slave to promote.
+A slave election happens when a master is in `FAIL` state from the point of view of at least one of its slaves that has the prerequisites in order to become a master.

-The election of a slave is a task that is handled directly by the slaves of the failing master. The trigger is the following set of conditions:
+In order for a slave to promote itself to master, it needs to start an election and win it. All the slaves for a given master can start an election if the master is in `FAIL` state, however only one slave will win the election and promote itself to master.

-* A node is a slave of a master in FAIL state.
+A slave starts an election when the following conditions are met:
+
+* The slave's master is in `FAIL` state.
* The master was serving a non-zero number of slots.
-* The slave's data is considered reliable, that is, from the point of view of the replication layer, the replication link has not been down for more than the configured node timeout multiplied for a given multiplication factor (see the `REDIS_CLUSTER_SLAVE_VALIDITY_MULT` define).
+* The slave replication link was disconnected from the master for no longer than a given amount of time, in order to ensure that the promoted slave's data is reasonably fresh.
+
+In order to be elected the first step for a slave is to increment its `currentEpoch` counter, and request votes from master instances.
+
+Votes are requested by the slave by broadcasting a `FAILOVER_AUTH_REQUEST` packet to every master node of the cluster.
+Then it waits for replies to arrive for a maximum time of `NODE_TIMEOUT`.
+Once a master has voted for a given slave, replying positively with a `FAILOVER_AUTH_ACK`, it can no longer vote for another slave of the same master for a period of `NODE_TIMEOUT * 2`. In this period it will not be able to reply to other authorization requests at all.
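The per-master voting cooldown just described can be modeled with a short sketch; the function shape, the timestamp bookkeeping, and the `NODE_TIMEOUT` value are illustrative assumptions, not the actual implementation:

```python
NODE_TIMEOUT = 60.0  # seconds; illustrative value

def can_grant_vote(master_in_fail, last_vote_time, now):
    """A master replies with FAILOVER_AUTH_ACK only if the slave's master
    is flagged FAIL and no slave of that same master was voted for within
    the last NODE_TIMEOUT * 2 seconds."""
    if not master_in_fail:
        return False
    return now - last_vote_time >= 2 * NODE_TIMEOUT
```

The cooldown is what prevents two slaves of the same failing master from both collecting a majority in quick succession.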
+
+The slave will discard all the ACKs that are received with an epoch that is less than the `currentEpoch`, in order to never count votes that belong to a previous election as valid.
+
+Once the slave receives ACKs from the majority of masters, it wins the election.
+Otherwise if the majority is not reached within the period of the `NODE_TIMEOUT`, the election is aborted and a new one will be tried again after `NODE_TIMEOUT * 4`.
+
+A slave does not try to get elected as soon as the master is in `FAIL` state, but there is a small delay, computed as:
+
+    DELAY = fixed_delay + (data_age - NODE_TIMEOUT) / 10 + random delay between 0 and 2000 milliseconds.
+
+The fixed delay ensures that we wait for the `FAIL` state to propagate across the cluster, otherwise the slave may try to get elected when the masters are not yet aware of the `FAIL` state, refusing to grant their vote.
+
+The `data_age / 10` figure is used in order to give an advantage to slaves with fresher data (disconnected from the master for a smaller period of time).
+The random delay is used in order to add some non-determinism that makes it less likely that multiple slaves start the election at the same time, a situation that may result in no slave winning the election, requiring another election that makes the cluster unavailable in the meantime.
+
+Once a slave wins the election, it starts advertising itself as master in ping and pong packets, providing the set of served slots with a `configEpoch` set to the `currentEpoch` at which the election was started.
+
+In order to speed up the reconfiguration of other nodes, a pong packet is broadcasted to all the nodes of the cluster (however nodes not currently reachable will eventually receive a ping or pong packet and will be reconfigured).
+
+The other nodes will detect that there is a new master serving the same slots served by the old master but with a greater `configEpoch`, and will upgrade the configuration.
Slaves of the old master, or the failed over master that rejoins the cluster, will not just upgrade the configuration but will also reconfigure themselves to replicate from the new master.
+
+Masters reply to slave vote request
+---
+
+The previous section discussed how slaves try to get elected; this section explains what happens from the point of view of a master that is requested to vote for a given slave.
+
+Masters receive requests for votes in the form of `FAILOVER_AUTH_REQUEST` requests from slaves.
+
+For a vote to be granted the following conditions need to be met:
+
+* 1) A master only votes a single time for a given epoch, and refuses to vote for older epochs: every master has a lastVoteEpoch field and will refuse to vote again as long as the `currentEpoch` in the auth request packet is not greater than the lastVoteEpoch. When a master replies positively to a vote request, the lastVoteEpoch is updated accordingly.
+* 2) A master votes for a slave only if the slave's master is flagged as `FAIL`.
+* 3) Auth requests with a `currentEpoch` that is less than the master `currentEpoch` are ignored. Because of this the master reply will always have the same `currentEpoch` as the auth request. If the same slave asks to be voted for again, incrementing the `currentEpoch`, it is guaranteed that an old delayed reply from the master can not be accepted for the new vote.
+
+Example of the issue caused by not using this rule:
+
+Master `currentEpoch` is 5, lastVoteEpoch is 1 (this may happen after a few failed elections)
+
+* Slave `currentEpoch` is 3
+* Slave tries to be elected with epoch 4 (3+1), master replies with an ok with `currentEpoch` 5, however the reply is delayed.
+* Slave tries to be elected again, with epoch 5 (4+1), the delayed reply reaches the slave with `currentEpoch` 5, and is accepted as valid.
+
+* 4) Masters don't vote for a slave of the same master before `NODE_TIMEOUT * 2` has elapsed since a slave of that master was already voted for.
This is not strictly required, as it is not possible for two slaves to win the election in the same epoch, but in practical terms it ensures that normally when a slave is elected it has plenty of time to inform the other slaves, preventing another slave from trying a new election.
+* 5) Masters don't try to select the best slave in any way: if the slave's master is in `FAIL` state and the master did not vote in the current term, a positive vote is granted.
+* 6) When a master refuses to vote for a given slave there is no negative response, the request is simply ignored.
+* 7) Masters don't grant the vote to slaves sending a `configEpoch` that is less than any `configEpoch` in the master table for the slots claimed by the slave. Remember that the slave sends the `configEpoch` of its master, and the bitmap of the slots served by its master. What this means is basically that the slave requesting the vote must have a configuration, for the slots it wants to fail over, that is newer than or equal to the one of the master granting the vote.
+
+Race conditions during slaves election
+---
+
+This section illustrates how the concept of epoch is used to make the slave promotion process more resistant to partitions.
+
+* A master is no longer reachable indefinitely. The master has three slaves A, B, C.
+* Slave A wins the election and is promoted as master.
+* A partition makes A not available for the majority of the cluster.
+* Slave B wins the election and is promoted as master.
+* A partition makes B not available for the majority of the cluster.
+* The previous partition is fixed, and A is available again.
+
+At this point B is down, and A is available again and will compete with C, which will try to get elected in order to fail over B.
+
+Both will eventually claim to be promoted slaves for the same set of hash slots, however the `configEpoch` they publish will be different, and the C epoch will be greater, so all the other nodes will upgrade their configuration to C.
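The conflict resolution at the end of this example boils down to a single comparison, sketched below (the names are illustrative, not from the implementation):

```python
def winning_claim(claims):
    """Given competing (node_id, config_epoch) claims for the same hash
    slots, every node settles on the claim with the greatest configEpoch."""
    return max(claims, key=lambda claim: claim[1])
```

For instance, with A advertising epoch 3 and C advertising epoch 5 for the same slots, every node converges on C.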
+
+A itself will detect pings from C serving the same slots with a greater epoch and will reconfigure as a slave of C.
+
+Rules for server slots information propagation
+---
+
+An important part of Redis Cluster is the mechanism used to propagate the information about which cluster node is serving a given set of hash slots. This is vital to both the startup of a fresh cluster and the ability to upgrade the configuration after a slave was promoted to serve the slots of its failing master.
+
+Ping and Pong packets that instances continuously exchange contain a header that is used by the sender in order to advertise the hash slots it claims to be responsible for. This is the main mechanism used in order to propagate changes, with the exception of a manual reconfiguration operated by the cluster administrator (for example a manual resharding via redis-trib in order to move hash slots among masters).

-If all the above conditions are true, the slave starts requesting the
-authorization to be promoted to master to all the reachable masters.
+When a new Redis Cluster node is created, its local slot table, that maps a given hash slot with a given node ID, is initialized so that every hash slot is assigned to nil, that is, the hash slot is unassigned.

-A master will reply with a positive message `FAILOVER_AUTH_GRANTED` if the sender of the message has the following properties:
+The first rule followed by a node in order to update its hash slot table is the following:

-* Is a slave, and the master is indeed in FAIL state.
-* Ordering all the slaves for this master, it has the lowest Node ID.
-* It appears to be up and running (no FAIL or PFAIL state).
+
+**Rule 1: If an hash slot is unassigned, and a known node claims it, I'll modify my hash slot table to associate the hash slot to this node.**

-Once the slave receives the authorization from the majority of the masters within a certain amount of time, it starts the failover process performing the following tasks:
+Because of this rule, when a new cluster is created, it is only needed to manually assign (using the `CLUSTER` command, usually via the redis-trib command line tool) the slots served by each master node to the node itself, and the information will rapidly propagate across the cluster.

-* Starts advertising itself as a master (via PONG packets).
-* Starts advertising it is a promoted slave (via PONG packets).
-* Starts claiming all the slots that were served by the old master.
-* A PONG packet is broadcasted to all the nodes to speedup the proccess, without waiting for the usual PING/PONG period.
+However this rule is not enough when a configuration update happens because a slave gets promoted to master after a master failure. The new master instance will advertise the slots previously served by the failed master, but those slots are not unassigned from the point of view of the other nodes, which will not upgrade the configuration if they just follow the first rule.

-All the other nodes will update the configuration accordingly. Specifically:
+For this reason there is a second rule that is used in order to rebind an hash slot already assigned to a previous node to a new node claiming it. The rule is the following:

-* All the slots claimed by the new master will be updated, since they are currently claimed by a master in FAIL state.
-* All the other slaves of the old master will detect the PROMOTED flag and will switch the replication to the new master.
-* If the old master will return back again, will detect the PROMOTED flag and will configure itself as a slave of the new master.
+**Rule 2: If an hash slot is already assigned, and a known node is advertising it using a `configEpoch` that is greater than the `configEpoch` advertised by the current owner of the slot, I'll rebind the hash slot to the new node.** -The PROMOTED flag will be lost by a node when it is turned again into a slave for some reason during the life of the cluster. +Because of the second rule eventually all the nodes in the cluster will agree that the owner of a slot is the one with the greatest `configEpoch` among the nodes advertising it. -Publish/Subscribe (implemented, but to refine) +Publish/Subscribe === In a Redis Cluster clients can subscribe to every node, and can also From f10af03633762961af88cecafd8e99b77e502270 Mon Sep 17 00:00:00 2001 From: Alexandre Curreli Date: Thu, 10 Oct 2013 11:41:30 -0400 Subject: [PATCH 0371/2880] Added scredis to Scala clients --- clients.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/clients.json b/clients.json index 327a74c93c..1c6ae47be7 100644 --- a/clients.json +++ b/clients.json @@ -669,5 +669,13 @@ "repository": "https://github.com/ctstone/csredis", "description": "Async (and sync) client for Redis and Sentinel", "authors": ["ctnstone"] + }, + + { + "name": "scredis", + "language": "Scala", + "repository": "https://github.com/Livestream/scredis", + "description": "Advanced async (and sync) client entirely written in Scala. 
Extensively used in production at http://www.livestream.com", + "authors": ["Livestream"] } ] From 97b55db50b340b7f3c975cd9e59f615aae795961 Mon Sep 17 00:00:00 2001 From: Igor Malinovskiy Date: Tue, 22 Oct 2013 19:52:35 +0300 Subject: [PATCH 0372/2880] Added tool - Redis Desktop Manager --- tools.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/tools.json b/tools.json index f1cdee7057..c59676a4cf 100644 --- a/tools.json +++ b/tools.json @@ -288,5 +288,13 @@ "repository" : "http://github.com/bradvoth/redis-tcl", "description" : "Tcl library largely copied from the redis test tree, modified for minor bug fixes and expanded pub/sub capabilities", "authors" : ["bradvoth","antirez"] + }, + { + "name": "Redis Desktop Manager", + "language": "C++", + "url": "http://www.springsource.org/spring-data/redis", + "repository": "https://github.com/uglide/RedisDesktopManager", + "description": "Cross-platform desktop GUI management tool for Redis", + "authors": ["u_glide"] } ] From bc4cf8dd95fc32119d4f4cfb0c72e4301a3a8f3f Mon Sep 17 00:00:00 2001 From: Hugo Lopes Tavares Date: Wed, 23 Oct 2013 11:51:29 -0400 Subject: [PATCH 0373/2880] Add note to restart redis after `vm.overcommit_memory` changes Redis needs to be restarted after a sysctl `vm.overcommit_memory` change to take effect. --- topics/admin.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/admin.md b/topics/admin.md index 1baa94adf9..98a2da2fe0 100644 --- a/topics/admin.md +++ b/topics/admin.md @@ -8,7 +8,7 @@ Redis setup hints ----------------- + We suggest deploying Redis using the **Linux operating system**. Redis is also tested heavily on osx, and tested from time to time on FreeBSD and OpenBSD systems. However Linux is where we do all the major stress testing, and where most production deployments are working. -+ Make sure to set the Linux kernel **overcommit memory setting to 1**. 
Add `vm.overcommit_memory = 1` to `/etc/sysctl.conf` and then reboot or run the command `sysctl vm.overcommit_memory=1` for this to take effect immediately. ++ Make sure to set the Linux kernel **overcommit memory setting to 1**. Add `vm.overcommit_memory = 1` to `/etc/sysctl.conf` and then reboot the machine, or run the command `sysctl vm.overcommit_memory=1` and restart Redis for this to take effect immediately. + Make sure to **setup some swap** in your system (we suggest as much swap as memory). If Linux does not have swap and your Redis instance accidentally consumes too much memory, either Redis will crash for out of memory or the Linux kernel OOM killer will kill the Redis process. + If you are using Redis in a very write-heavy application, while saving an RDB file on disk or rewriting the AOF log **Redis may use up to 2 times the memory normally used**. The additional memory used is proportional to the number of memory pages modified by writes during the saving process, so it is often proportional to the number of keys (or aggregate types items) touched during this time. Make sure to size your memory accordingly. + Even if you have persistence disabled, Redis will need to perform RDB saves if you use replication.
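The overcommit advice above boils down to a one-line kernel setting. As a config-fragment sketch (assuming the stock `/etc/sysctl.conf` location; some distributions prefer a drop-in file under `/etc/sysctl.d/` instead):

```
# /etc/sysctl.conf (fragment)
# Always allow overcommit, so the fork()ed child that writes the RDB/AOF
# can duplicate the parent's address space without the kernel refusing it.
vm.overcommit_memory = 1
```

Apply it immediately with `sysctl -p` (or `sysctl vm.overcommit_memory=1`), then restart Redis so the running process picks it up.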
From 3ad190078e02a13ea6f7fd2a9b8cdd735be19496 Mon Sep 17 00:00:00 2001 From: Igor Malinovskiy Date: Fri, 25 Oct 2013 16:52:23 +0300 Subject: [PATCH 0374/2880] Fixed url of Redis Desktop Manager --- tools.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools.json b/tools.json index c59676a4cf..038ef20f32 100644 --- a/tools.json +++ b/tools.json @@ -292,7 +292,7 @@ { "name": "Redis Desktop Manager", "language": "C++", - "url": "http://www.springsource.org/spring-data/redis", + "url": "http://redisdesktop.com", "repository": "https://github.com/uglide/RedisDesktopManager", "description": "Cross-platform desktop GUI management tool for Redis", "authors": ["u_glide"] From 67041891c4081292b3da2ff6287c8b7f687d2453 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 31 Oct 2013 12:24:27 +0100 Subject: [PATCH 0375/2880] SCAN, SSCAN, HSCAN, ZSCAN added to commands.json. --- commands.json | 108 ++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 108 insertions(+) diff --git a/commands.json b/commands.json index 32aecc31a6..0c78e79e9d 100644 --- a/commands.json +++ b/commands.json @@ -2249,5 +2249,113 @@ ], "since": "2.0.0", "group": "sorted_set" + }, + "SCAN": { + "summary": "Incrementally iterate the key space", + "complexity": "O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection.", + "arguments": [ + { + "name": "cursor", + "type": "integer" + }, + { + "command": "MATCH", + "name": "pattern", + "type": "pattern", + "optional": true + }, + { + "command": "COUNT", + "name": "count", + "type": "integer", + "optional": true + } + ], + "since": "2.8.0", + "group": "generic" + }, + "SSCAN": { + "summary": "Incrementally iterate Set elements", + "complexity": "O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0.
N is the number of elements inside the collection.", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "name": "cursor", + "type": "integer" + }, + { + "command": "MATCH", + "name": "pattern", + "type": "pattern", + "optional": true + }, + { + "command": "COUNT", + "name": "count", + "type": "integer", + "optional": true + } + ], + "since": "2.8.0", + "group": "set" + }, + "HSCAN": { + "summary": "Incrementally iterate hash fields and associated values", + "complexity": "O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection.", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "name": "cursor", + "type": "integer" + }, + { + "command": "MATCH", + "name": "pattern", + "type": "pattern", + "optional": true + }, + { + "command": "COUNT", + "name": "count", + "type": "integer", + "optional": true + } + ], + "since": "2.8.0", + "group": "hash" + }, + "ZSCAN": { + "summary": "Incrementally iterate sorted set elements and associated scores", + "complexity": "O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection.", + "arguments": [ + { + "name": "key", + "type": "key" + }, + { + "name": "cursor", + "type": "integer" + }, + { + "command": "MATCH", + "name": "pattern", + "type": "pattern", + "optional": true + }, + { + "command": "COUNT", + "name": "count", + "type": "integer", + "optional": true + } + ], + "since": "2.8.0", + "group": "sorted_set" } } From 9bd2ef8c24e693c2b1d36726e380be66432e2851 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 31 Oct 2013 17:08:28 +0100 Subject: [PATCH 0376/2880] SCAN documentation.
--- commands/scan.md | 186 +++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 186 insertions(+) create mode 100644 commands/scan.md diff --git a/commands/scan.md b/commands/scan.md new file mode 100644 index 0000000000..e23ca1fe63 --- /dev/null +++ b/commands/scan.md @@ -0,0 +1,186 @@ +The [SCAN] command and the closely related commands [SSCAN], [HSCAN] and [ZSCAN] are used in order to incrementally iterate over a collection of elements. + +* [SCAN] iterates the set of keys in the currently selected Redis database. +* [SSCAN] iterates elements of Sets types. +* [HSCAN] iterates fields of Hash types and their associated values. +* [ZSCAN] iterates elements of Sorted Set types and their associated scores. + +Since these commands allow for incremental iteration, that means that only a small number of elements are returned at every call, they can be used in production and are very fast commands, without the downside of commands like [KEYS] or [SMEMBERS] that may block the server for a long time (even several seconds) when called against big collections of keys or elements. + +However while blocking commands like [SMEMBERS] are able to provide all the elements that are part of a Set in a given moment, The SCAN family of commands only offer limited guarantees about the returned elements since the collection that we incrementally iterate can change during the iteration process. + +Note that [SCAN], [SSCAN], [HSCAN] and [ZSCAN] all work very similarly, so this documentation covers all the four commands. However an obvious difference is that in the case of [SSCAN], [HSCAN] and [ZSCAN] the first argument is the name of the key holding the Set, Hash or Sorted Set value. The [SCAN] command does not need any key name argument as it iterates keys in the current database, so the iterated object is the database itself. + +## SCAN basic usage + +SCAN is a cursor based iteration. 
This means that at every call of the command, the server returns an updated cursor that the user needs to use as the cursor argument in the next call. + +An iteration starts when the cursor is set to 0, and terminates when the cursor returned by the server is 0. The following is an example of SCAN iteration: + +``` +redis 127.0.0.1:6379> scan 0 +1) "17" +2) 1) "key:12" + 2) "key:8" + 3) "key:4" + 4) "key:14" + 5) "key:16" + 6) "key:17" + 7) "key:15" + 8) "key:10" + 9) "key:3" + 10) "key:7" + 11) "key:1" +redis 127.0.0.1:6379> scan 17 +1) "0" +2) 1) "key:5" + 2) "key:18" + 3) "key:0" + 4) "key:2" + 5) "key:19" + 6) "key:13" + 7) "key:6" + 8) "key:9" + 9) "key:11" +``` + +In the example above, the first call uses zero as a cursor, to start the iteration. The second call uses the cursor returned by the previous call as the first element of the reply, that is, 17. + +As you can see the **SCAN return value** is an array of two values: the first value is the new cursor to use in the next call, the second value is an array of elements. + +Since in the second call the returned cursor is 0, the server signaled to the caller that the iteration finished, and the collection was completely explored. Starting an iteration with a cursor value of 0, and calling [SCAN] until the returned cursor is 0 again is called a **full iteration**. + +## Scan guarantees + +The [SCAN] command, and the other commands in the [SCAN] family, are able to provide to the user a set of guarantees associated to full iterations. + +* A full iteration always retrieves all the elements that were present in the collection from the start to the end of a full iteration. This means that if a given element is inside the collection when an iteration is started, and is still there when an iteration terminates, then at some point [SCAN] returned it to the user. +* A full iteration never returns any element that was NOT present in the collection from the start to the end of a full iteration. 
So if an element was removed before the start of an iteration, and is never added back to the collection for all the time an iteration lasts, [SCAN] ensures that this element will never be returned. + +However because [SCAN] has very little state associated (just the cursor) it has the following drawbacks: + +* A given element may be returned multiple times. It is up to the application to handle the case of duplicated elements, for example only using the returned elements in order to perform operations that are safe when re-applied multiple times. +* Elements that were not constantly present in the collection during a full iteration, may be returned or not: it is undefined. + +## Number of elements returned at every SCAN call + +[SCAN] family functions do not guarantee that the number of elements returned per call are in a given range. The commands are also allowed to return zero elements, and the client should not consider the iteration complete as long as the returned cursor is not zero. + +However the number of returned elements is reasonable, that is, in practical terms SCAN may return a maximum number of elements in the order of a few tens of elements when iterating a large collection, or may return all the elements of the collection in a single call when the iterated collection is small enough to be internally represented as an encoded data structure (this happens for small sets, hashes and sorted sets). + +However there is a way for the user to tune the order of magnitude of the number of returned elements per call using the **COUNT** option. + +## The COUNT option + +While [SCAN] does not provide guarantees about the number of elements returned at every iteration, it is possible to empirically adjust the behavior of [SCAN] using the **COUNT** option. Basically with COUNT the user specified the *amount of work that should be done at every call in order to retrieve elements from the collection*. 
This is **just an hint** for the implementation, however generally speaking this is what you could expect most of the times from the implementation. + +* The default COUNT value is 10. +* When iterating the key space, or a Set, Hash or Sorted Set that is big enough to be represented by an hash table, assuming no **MATCH** option is used, the server will usually return *count* or a bit more than *count* elements per call. +* When iterating Sets encoded as intsets (small sets composed of just integers), or Hashes and Sorted Sets encoded as ziplists (small hashes and sets composed of small individual values), usually all the elements are returned in the first [SCAN] call regardless of the COUNT value. + +Important: **there is no need to use the same COUNT value** for every iteration. The caller is free to change the count from one iteration to the other as required, as long as the cursor passed in the next call is the one obtained in the previous call to the command. + +## The MATCH option + +It is possible to only iterate elements matching a given glob-style pattern, similarly to the behavior of the [KEYS] command that takes a pattern as only argument. + +To do so, just append the `MATCH ` arguments at the end of the [SCAN] command (it works with all the SCAN family commands). + +This is an example of iteration using **MATCH**: + +``` +redis 127.0.0.1:6379> sadd myset 1 2 3 foo foobar feelsgood +(integer) 6 +redis 127.0.0.1:6379> sscan myset 0 match f* +1) "0" +2) 1) "foo" + 2) "feelsgood" + 3) "foobar" +redis 127.0.0.1:6379> +``` + +It is important to note that the **MATCH** filter is applied after elements are retrieved from the collection, just before returning data to the client. This means that if the pattern matches very little elements inside the collection, [SCAN] will likely return no elements in most iterations. 
An example is shown below: + +``` +redis 127.0.0.1:6379> scan 0 MATCH *11* +1) "288" +2) 1) "key:911" +redis 127.0.0.1:6379> scan 288 MATCH *11* +1) "224" +2) (empty list or set) +redis 127.0.0.1:6379> scan 224 MATCH *11* +1) "80" +2) (empty list or set) +redis 127.0.0.1:6379> scan 80 MATCH *11* +1) "176" +2) (empty list or set) +redis 127.0.0.1:6379> scan 176 MATCH *11* COUNT 1000 +1) "0" +2) 1) "key:611" + 2) "key:711" + 3) "key:118" + 4) "key:117" + 5) "key:311" + 6) "key:112" + 7) "key:111" + 8) "key:110" + 9) "key:113" + 10) "key:211" + 11) "key:411" + 12) "key:115" + 13) "key:116" + 14) "key:114" + 15) "key:119" + 16) "key:811" + 17) "key:511" + 18) "key:11" +redis 127.0.0.1:6379> +``` + +As you can see most of the calls returned zero elements, but the last call where a COUNT of 1000 was used in order to force the command to do more scanning for that iteration. + +## Multiple parallel iterations + +It is possible to an infinite number of clients to iterate the same collection at the same time, as the full state of the iterator is in the cursor, that is obtained and returned to the client at every call. So server side no state is taken. + +## Terminating iterations in the middle + +Since there is no state server side, but the full state is captured by the cursor, the caller is free to terminate an iteration half-way without signaling this to the server in any way. An infinite number of iterations can be started and never terminated without any issue. + +## Calling SCAN with a corrupted cursor + +Calling [SCAN] with a broken, negative, out of range, or otherwise invalid cursor, will result into undefined behavior but never into a crash. What will be undefined is that the guarantees about the returned elements can no longer be ensured by the [SCAN] implementation. + +The only valid cursors to use are: +* The cursor value of 0 when starting an iteration. +* The cursor returned by the previous call to SCAN in order to continue the iteration. 
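The cursor discipline described above — start at 0, feed back exactly the cursor the server returned, and stop only when 0 comes back — can be sketched against a stand-in collection. This is an illustration only: `make_scannable` is an invented stub, not a Redis client API, and its offset-style cursor is a simplification (real SCAN cursors are opaque values the client must not interpret):

```python
# Hypothetical stand-in for a SCAN-style server: scan(cursor) returns
# (next_cursor, batch), and a returned cursor of 0 ends the full iteration.
def make_scannable(items, batch_size=10):
    def scan(cursor=0, count=None):
        count = count or batch_size
        batch = items[cursor:cursor + count]          # a batch may even be empty
        next_cursor = cursor + count
        if next_cursor >= len(items):
            next_cursor = 0                           # 0 signals completion
        return next_cursor, batch
    return scan

# Canonical client-side loop: the only valid cursors are 0 (to start)
# and whatever the previous call returned.
def full_iteration(scan):
    cursor, seen = 0, []
    while True:
        cursor, batch = scan(cursor)
        seen.extend(batch)
        if cursor == 0:                               # terminal condition
            return seen

keys = [f"key:{i}" for i in range(25)]
assert full_iteration(make_scannable(keys)) == keys
```

Note that the loop tests the cursor, not the batch size: an empty batch with a nonzero cursor just means "keep going", matching the rule stated earlier that zero elements may be returned on any call.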
+ +## Guarantee of termination + +The [SCAN] algorithm is guaranteed to terminate only if the size of the iterated collection remains bounded to a given maximum size, otherwise iterating a collection that always grows may result into [SCAN] to never terminate a full iteration. + +This is easy to see intuitively: if the collection grows there is more and more work to do in order to visit all the possible elements, and the ability to terminate the iteration depends on the number of calls to [SCAN] and its COUNT option value compared with the rate at which the collection grows. + +## Return value + +[SCAN], [SSCAN], [HSCAN] and [ZSCAN] return a two elements multi-bulk reply, where the first element is a string representing an unsigned 64 bit number (the cursor), and the second element is a multi-bulk with an array of elements. + +* [SCAN] array of elements is a list of keys. +* [SSCAN] array of elements is a list of Set members. +* [HSCAN] array of elements contain two elements, a field and a value, for every returned element of the Hash. +* [ZSCAN] array of elements contain two elements, a member and its associated score, for every returned element of the sorted set. + +## Additional examples + +Iteration of an Hash value. + +``` +redis 127.0.0.1:6379> hmset hash name Jack age 33 +OK +redis 127.0.0.1:6379> hscan hash 0 +1) "0" +2) 1) "name" + 2) "Jack" + 3) "age" + 4) "33" +``` From 8b1068b5e972756b9120519cff24225ba9b73167 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 31 Oct 2013 17:12:34 +0100 Subject: [PATCH 0377/2880] SCAN doc markup fix. --- commands/scan.md | 48 ++++++++++++++++++++++++------------------------ 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/commands/scan.md b/commands/scan.md index e23ca1fe63..ef7ab60c91 100644 --- a/commands/scan.md +++ b/commands/scan.md @@ -1,15 +1,15 @@ -The [SCAN] command and the closely related commands [SSCAN], [HSCAN] and [ZSCAN] are used in order to incrementally iterate over a collection of elements. 
+The `SCAN` command and the closely related commands `SSCAN`, `HSCAN` and `ZSCAN` are used in order to incrementally iterate over a collection of elements. -* [SCAN] iterates the set of keys in the currently selected Redis database. -* [SSCAN] iterates elements of Sets types. -* [HSCAN] iterates fields of Hash types and their associated values. -* [ZSCAN] iterates elements of Sorted Set types and their associated scores. +* `SCAN` iterates the set of keys in the currently selected Redis database. +* `SSCAN` iterates elements of Sets types. +* `HSCAN` iterates fields of Hash types and their associated values. +* `ZSCAN` iterates elements of Sorted Set types and their associated scores. Since these commands allow for incremental iteration, that means that only a small number of elements are returned at every call, they can be used in production and are very fast commands, without the downside of commands like [KEYS] or [SMEMBERS] that may block the server for a long time (even several seconds) when called against big collections of keys or elements. However while blocking commands like [SMEMBERS] are able to provide all the elements that are part of a Set in a given moment, The SCAN family of commands only offer limited guarantees about the returned elements since the collection that we incrementally iterate can change during the iteration process. -Note that [SCAN], [SSCAN], [HSCAN] and [ZSCAN] all work very similarly, so this documentation covers all the four commands. However an obvious difference is that in the case of [SSCAN], [HSCAN] and [ZSCAN] the first argument is the name of the key holding the Set, Hash or Sorted Set value. The [SCAN] command does not need any key name argument as it iterates keys in the current database, so the iterated object is the database itself. +Note that `SCAN`, `SSCAN`, `HSCAN` and `ZSCAN` all work very similarly, so this documentation covers all the four commands. 
However an obvious difference is that in the case of `SSCAN`, `HSCAN` and `ZSCAN` the first argument is the name of the key holding the Set, Hash or Sorted Set value. The `SCAN` command does not need any key name argument as it iterates keys in the current database, so the iterated object is the database itself. ## SCAN basic usage @@ -48,23 +48,23 @@ In the example above, the first call uses zero as a cursor, to start the iterati As you can see the **SCAN return value** is an array of two values: the first value is the new cursor to use in the next call, the second value is an array of elements. -Since in the second call the returned cursor is 0, the server signaled to the caller that the iteration finished, and the collection was completely explored. Starting an iteration with a cursor value of 0, and calling [SCAN] until the returned cursor is 0 again is called a **full iteration**. +Since in the second call the returned cursor is 0, the server signaled to the caller that the iteration finished, and the collection was completely explored. Starting an iteration with a cursor value of 0, and calling `SCAN` until the returned cursor is 0 again is called a **full iteration**. ## Scan guarantees -The [SCAN] command, and the other commands in the [SCAN] family, are able to provide to the user a set of guarantees associated to full iterations. +The `SCAN` command, and the other commands in the `SCAN` family, are able to provide to the user a set of guarantees associated to full iterations. -* A full iteration always retrieves all the elements that were present in the collection from the start to the end of a full iteration. This means that if a given element is inside the collection when an iteration is started, and is still there when an iteration terminates, then at some point [SCAN] returned it to the user. -* A full iteration never returns any element that was NOT present in the collection from the start to the end of a full iteration. 
So if an element was removed before the start of an iteration, and is never added back to the collection for all the time an iteration lasts, [SCAN] ensures that this element will never be returned. +* A full iteration always retrieves all the elements that were present in the collection from the start to the end of a full iteration. This means that if a given element is inside the collection when an iteration is started, and is still there when an iteration terminates, then at some point `SCAN` returned it to the user. +* A full iteration never returns any element that was NOT present in the collection from the start to the end of a full iteration. So if an element was removed before the start of an iteration, and is never added back to the collection for all the time an iteration lasts, `SCAN` ensures that this element will never be returned. -However because [SCAN] has very little state associated (just the cursor) it has the following drawbacks: +However because `SCAN` has very little state associated (just the cursor) it has the following drawbacks: * A given element may be returned multiple times. It is up to the application to handle the case of duplicated elements, for example only using the returned elements in order to perform operations that are safe when re-applied multiple times. * Elements that were not constantly present in the collection during a full iteration, may be returned or not: it is undefined. ## Number of elements returned at every SCAN call -[SCAN] family functions do not guarantee that the number of elements returned per call are in a given range. The commands are also allowed to return zero elements, and the client should not consider the iteration complete as long as the returned cursor is not zero. +`SCAN` family functions do not guarantee that the number of elements returned per call are in a given range. 
The commands are also allowed to return zero elements, and the client should not consider the iteration complete as long as the returned cursor is not zero. However the number of returned elements is reasonable, that is, in practical terms SCAN may return a maximum number of elements in the order of a few tens of elements when iterating a large collection, or may return all the elements of the collection in a single call when the iterated collection is small enough to be internally represented as an encoded data structure (this happens for small sets, hashes and sorted sets). @@ -72,11 +72,11 @@ However there is a way for the user to tune the order of magnitude of the number ## The COUNT option -While [SCAN] does not provide guarantees about the number of elements returned at every iteration, it is possible to empirically adjust the behavior of [SCAN] using the **COUNT** option. Basically with COUNT the user specified the *amount of work that should be done at every call in order to retrieve elements from the collection*. This is **just an hint** for the implementation, however generally speaking this is what you could expect most of the times from the implementation. +While `SCAN` does not provide guarantees about the number of elements returned at every iteration, it is possible to empirically adjust the behavior of `SCAN` using the **COUNT** option. Basically with COUNT the user specified the *amount of work that should be done at every call in order to retrieve elements from the collection*. This is **just an hint** for the implementation, however generally speaking this is what you could expect most of the times from the implementation. * The default COUNT value is 10. * When iterating the key space, or a Set, Hash or Sorted Set that is big enough to be represented by an hash table, assuming no **MATCH** option is used, the server will usually return *count* or a bit more than *count* elements per call. 
-* When iterating Sets encoded as intsets (small sets composed of just integers), or Hashes and Sorted Sets encoded as ziplists (small hashes and sets composed of small individual values), usually all the elements are returned in the first [SCAN] call regardless of the COUNT value. +* When iterating Sets encoded as intsets (small sets composed of just integers), or Hashes and Sorted Sets encoded as ziplists (small hashes and sets composed of small individual values), usually all the elements are returned in the first `SCAN` call regardless of the COUNT value. Important: **there is no need to use the same COUNT value** for every iteration. The caller is free to change the count from one iteration to the other as required, as long as the cursor passed in the next call is the one obtained in the previous call to the command. @@ -84,7 +84,7 @@ Important: **there is no need to use the same COUNT value** for every iteration. It is possible to only iterate elements matching a given glob-style pattern, similarly to the behavior of the [KEYS] command that takes a pattern as only argument. -To do so, just append the `MATCH ` arguments at the end of the [SCAN] command (it works with all the SCAN family commands). +To do so, just append the `MATCH ` arguments at the end of the `SCAN` command (it works with all the SCAN family commands). This is an example of iteration using **MATCH**: @@ -99,7 +99,7 @@ redis 127.0.0.1:6379> sscan myset 0 match f* redis 127.0.0.1:6379> ``` -It is important to note that the **MATCH** filter is applied after elements are retrieved from the collection, just before returning data to the client. This means that if the pattern matches very little elements inside the collection, [SCAN] will likely return no elements in most iterations. An example is shown below: +It is important to note that the **MATCH** filter is applied after elements are retrieved from the collection, just before returning data to the client. 
This means that if the pattern matches very little elements inside the collection, `SCAN` will likely return no elements in most iterations. An example is shown below: ``` redis 127.0.0.1:6379> scan 0 MATCH *11* @@ -149,7 +149,7 @@ Since there is no state server side, but the full state is captured by the curso ## Calling SCAN with a corrupted cursor -Calling [SCAN] with a broken, negative, out of range, or otherwise invalid cursor, will result into undefined behavior but never into a crash. What will be undefined is that the guarantees about the returned elements can no longer be ensured by the [SCAN] implementation. +Calling `SCAN` with a broken, negative, out of range, or otherwise invalid cursor, will result into undefined behavior but never into a crash. What will be undefined is that the guarantees about the returned elements can no longer be ensured by the `SCAN` implementation. The only valid cursors to use are: * The cursor value of 0 when starting an iteration. @@ -157,18 +157,18 @@ The only valid cursors to use are: ## Guarantee of termination -The [SCAN] algorithm is guaranteed to terminate only if the size of the iterated collection remains bounded to a given maximum size, otherwise iterating a collection that always grows may result into [SCAN] to never terminate a full iteration. +The `SCAN` algorithm is guaranteed to terminate only if the size of the iterated collection remains bounded to a given maximum size, otherwise iterating a collection that always grows may result into `SCAN` to never terminate a full iteration. -This is easy to see intuitively: if the collection grows there is more and more work to do in order to visit all the possible elements, and the ability to terminate the iteration depends on the number of calls to [SCAN] and its COUNT option value compared with the rate at which the collection grows. 
+This is easy to see intuitively: if the collection grows there is more and more work to do in order to visit all the possible elements, and the ability to terminate the iteration depends on the number of calls to `SCAN` and its COUNT option value compared with the rate at which the collection grows. ## Return value -[SCAN], [SSCAN], [HSCAN] and [ZSCAN] return a two elements multi-bulk reply, where the first element is a string representing an unsigned 64 bit number (the cursor), and the second element is a multi-bulk with an array of elements. +`SCAN`, `SSCAN`, `HSCAN` and `ZSCAN` return a two elements multi-bulk reply, where the first element is a string representing an unsigned 64 bit number (the cursor), and the second element is a multi-bulk with an array of elements. -* [SCAN] array of elements is a list of keys. -* [SSCAN] array of elements is a list of Set members. -* [HSCAN] array of elements contain two elements, a field and a value, for every returned element of the Hash. -* [ZSCAN] array of elements contain two elements, a member and its associated score, for every returned element of the sorted set. +* `SCAN` array of elements is a list of keys. +* `SSCAN` array of elements is a list of Set members. +* `HSCAN` array of elements contain two elements, a field and a value, for every returned element of the Hash. +* `ZSCAN` array of elements contain two elements, a member and its associated score, for every returned element of the sorted set. ## Additional examples From fd28c07cca927f4faccd0a7b6cb82839322deb53 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 31 Oct 2013 17:15:44 +0100 Subject: [PATCH 0378/2880] SCAN doc, more fixes. 
--- commands/scan.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/scan.md b/commands/scan.md index ef7ab60c91..1ebfea18e7 100644 --- a/commands/scan.md +++ b/commands/scan.md @@ -5,7 +5,7 @@ The `SCAN` command and the closely related commands `SSCAN`, `HSCAN` and `ZSCAN` * `HSCAN` iterates fields of Hash types and their associated values. * `ZSCAN` iterates elements of Sorted Set types and their associated scores. -Since these commands allow for incremental iteration, that means that only a small number of elements are returned at every call, they can be used in production and are very fast commands, without the downside of commands like [KEYS] or [SMEMBERS] that may block the server for a long time (even several seconds) when called against big collections of keys or elements. +Since these commands allow for incremental iteration, returning only a small number of elements t every call, they can be used in production without the downside of commands like `KEYS` or `SMEMBERS` that may block the server for a long time (even several seconds) when called against big collections of keys or elements. However while blocking commands like [SMEMBERS] are able to provide all the elements that are part of a Set in a given moment, The SCAN family of commands only offer limited guarantees about the returned elements since the collection that we incrementally iterate can change during the iteration process. From 11e667102d9f58b521bb55087cc3312e61632089 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 31 Oct 2013 17:16:21 +0100 Subject: [PATCH 0379/2880] More markup fixes for SCAN. 
--- commands/scan.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/commands/scan.md b/commands/scan.md index 1ebfea18e7..d6c331b634 100644 --- a/commands/scan.md +++ b/commands/scan.md @@ -7,7 +7,7 @@ The `SCAN` command and the closely related commands `SSCAN`, `HSCAN` and `ZSCAN` Since these commands allow for incremental iteration, returning only a small number of elements t every call, they can be used in production without the downside of commands like `KEYS` or `SMEMBERS` that may block the server for a long time (even several seconds) when called against big collections of keys or elements. -However while blocking commands like [SMEMBERS] are able to provide all the elements that are part of a Set in a given moment, The SCAN family of commands only offer limited guarantees about the returned elements since the collection that we incrementally iterate can change during the iteration process. +However while blocking commands like `SMEMBERS` are able to provide all the elements that are part of a Set in a given moment, The SCAN family of commands only offer limited guarantees about the returned elements since the collection that we incrementally iterate can change during the iteration process. Note that `SCAN`, `SSCAN`, `HSCAN` and `ZSCAN` all work very similarly, so this documentation covers all the four commands. However an obvious difference is that in the case of `SSCAN`, `HSCAN` and `ZSCAN` the first argument is the name of the key holding the Set, Hash or Sorted Set value. The `SCAN` command does not need any key name argument as it iterates keys in the current database, so the iterated object is the database itself. @@ -82,7 +82,7 @@ Important: **there is no need to use the same COUNT value** for every iteration. ## The MATCH option -It is possible to only iterate elements matching a given glob-style pattern, similarly to the behavior of the [KEYS] command that takes a pattern as only argument. 
+It is possible to only iterate elements matching a given glob-style pattern, similarly to the behavior of the `KEYS` command that takes a pattern as only argument. To do so, just append the `MATCH ` arguments at the end of the `SCAN` command (it works with all the SCAN family commands). From 75e96c6beae1630cea5638978cca752bc115d116 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 31 Oct 2013 17:22:42 +0100 Subject: [PATCH 0380/2880] Fixed typo. --- commands/scan.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/scan.md b/commands/scan.md index d6c331b634..9eef2c926e 100644 --- a/commands/scan.md +++ b/commands/scan.md @@ -5,7 +5,7 @@ The `SCAN` command and the closely related commands `SSCAN`, `HSCAN` and `ZSCAN` * `HSCAN` iterates fields of Hash types and their associated values. * `ZSCAN` iterates elements of Sorted Set types and their associated scores. -Since these commands allow for incremental iteration, returning only a small number of elements t every call, they can be used in production without the downside of commands like `KEYS` or `SMEMBERS` that may block the server for a long time (even several seconds) when called against big collections of keys or elements. +Since these commands allow for incremental iteration, returning only a small number of elements per call, they can be used in production without the downside of commands like `KEYS` or `SMEMBERS` that may block the server for a long time (even several seconds) when called against big collections of keys or elements. However while blocking commands like `SMEMBERS` are able to provide all the elements that are part of a Set in a given moment, The SCAN family of commands only offer limited guarantees about the returned elements since the collection that we incrementally iterate can change during the iteration process. 
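The cursor contract these SCAN patches describe — start the iteration at cursor 0, feed the returned cursor back into the next call, and stop when the server returns 0 — can be sketched without a live server. The snippet below is an illustrative stand-in, not Redis's actual reverse-binary cursor implementation: `toy_scan` plays the server's role with a plain index cursor, and `scan_all` is the client-side loop a real application would run against `SCAN`.

```python
def toy_scan(keys, cursor, count=2):
    """Illustrative SCAN stand-in: return (next_cursor, page).

    Real Redis walks hash-table buckets with a reverse-binary cursor;
    here the cursor is simply an index into a list of keys.
    """
    page = keys[cursor:cursor + count]
    next_cursor = cursor + count
    if next_cursor >= len(keys):
        next_cursor = 0  # cursor 0 signals the end of the iteration
    return next_cursor, page


def scan_all(keys):
    """Client-side loop: start at 0, stop when the server returns 0."""
    cursor, out = 0, []
    while True:
        cursor, page = toy_scan(keys, cursor)
        out.extend(page)
        if cursor == 0:
            return out


keys = ["key:%d" % i for i in range(7)]
print(scan_all(keys))
```

Note that, as the documentation stresses, each call may return zero, few, or many elements; the only termination condition the client should rely on is the returned cursor being 0.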
From 08bf3f4a4b59cae1c60f51402d797d33ed09a2b4 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 31 Oct 2013 17:24:34 +0100 Subject: [PATCH 0381/2880] SCAN typo fixed. --- commands/scan.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/scan.md b/commands/scan.md index 9eef2c926e..167ae82565 100644 --- a/commands/scan.md +++ b/commands/scan.md @@ -13,7 +13,7 @@ Note that `SCAN`, `SSCAN`, `HSCAN` and `ZSCAN` all work very similarly, so this ## SCAN basic usage -SCAN is a cursor based iteration. This means that at every call of the command, the server returns an updated cursor that the user needs to use as the cursor argument in the next call. +SCAN is a cursor based iterator. This means that at every call of the command, the server returns an updated cursor that the user needs to use as the cursor argument in the next call. An iteration starts when the cursor is set to 0, and terminates when the cursor returned by the server is 0. The following is an example of SCAN iteration: From 8d010460d28979529b03232312fce1f36d3eb799 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 31 Oct 2013 17:26:28 +0100 Subject: [PATCH 0382/2880] SCAN: more typos fixed. --- commands/scan.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/scan.md b/commands/scan.md index 167ae82565..3e0c0ffdc2 100644 --- a/commands/scan.md +++ b/commands/scan.md @@ -141,7 +141,7 @@ As you can see most of the calls returned zero elements, but the last call where ## Multiple parallel iterations -It is possible to an infinite number of clients to iterate the same collection at the same time, as the full state of the iterator is in the cursor, that is obtained and returned to the client at every call. So server side no state is taken. +It is possible for an infinite number of clients to iterate the same collection at the same time, as the full state of the iterator is in the cursor, that is obtained and returned to the client at every call. 
Server side no state is taken at all. ## Terminating iterations in the middle From 85e8c6f25193ff11a63e2f9fc49b026485ea1903 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 31 Oct 2013 17:27:14 +0100 Subject: [PATCH 0383/2880] SCAN: markdown fix for list. --- commands/scan.md | 1 + 1 file changed, 1 insertion(+) diff --git a/commands/scan.md b/commands/scan.md index 3e0c0ffdc2..ccbd0944d5 100644 --- a/commands/scan.md +++ b/commands/scan.md @@ -152,6 +152,7 @@ Since there is no state server side, but the full state is captured by the curso Calling `SCAN` with a broken, negative, out of range, or otherwise invalid cursor, will result into undefined behavior but never into a crash. What will be undefined is that the guarantees about the returned elements can no longer be ensured by the `SCAN` implementation. The only valid cursors to use are: + * The cursor value of 0 when starting an iteration. * The cursor returned by the previous call to SCAN in order to continue the iteration. From 04a7a2aad4010eea1c33319e5e5959de80c32973 Mon Sep 17 00:00:00 2001 From: Xavier Shay Date: Sun, 3 Nov 2013 06:51:39 -0800 Subject: [PATCH 0384/2880] Minor typo fix in protocol documentation. --- topics/protocol.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/protocol.md b/topics/protocol.md index 032e3b921b..44cc02f75f 100644 --- a/topics/protocol.md +++ b/topics/protocol.md @@ -70,7 +70,7 @@ possible to detect the kind of reply from the first byte sent by the server: * In an Error Reply the first byte of the reply is "-" * In an Integer Reply the first byte of the reply is ":" * In a Bulk Reply the first byte of the reply is "$" -* In a Multi Bulk Reply the first byte of the reply s "`*`" +* In a Multi Bulk Reply the first byte of the reply is "`*`" From 6eab5ab73396b89726a8d85582e9e0601c155f8d Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 7 Nov 2013 15:31:29 +0100 Subject: [PATCH 0385/2880] GPG key added in the security page. 
--- topics/security.md | 69 ++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 69 insertions(+) diff --git a/topics/security.md b/topics/security.md index b44493dc12..d0a81cb433 100644 --- a/topics/security.md +++ b/topics/security.md @@ -6,6 +6,10 @@ view of Redis: the access control provided by Redis, code security concerns, attacks that can be triggered from the outside by selecting malicious inputs and other similar topics are covered. +For security-related contacts please open an issue on Github, or when you feel it +is really important to preserve the security of the communication, use the +GPG key at the end of this document. + Redis general security model ---- @@ -162,3 +166,68 @@ The Redis authors are currently investigating the possibility of adding a new configuration parameter to prevent **CONFIG SET/GET dir** and other similar run-time configuration directives. This would prevent clients from forcing the server to write Redis dump files at arbitrary locations.
+ +GPG key +--- + +``` +-----BEGIN PGP PUBLIC KEY BLOCK----- +Version: GnuPG v1.4.13 (Darwin) + +mQINBFJ7ouABEAC5HwiDmE+tRCsWyTaPLBFEGDHcWOLWzph5HdrRtB//UUlSVt9P +tTWZpDvZQvq/ujnS2i2c54V+9NcgVqsCEpA0uJ/U1sUZ3RVBGfGO/l+BIMBnM+B+ +TzK825TxER57ILeT/2ZNSebZ+xHJf2Bgbun45pq3KaXUrRnuS8HWSysC+XyMoXET +nksApwMmFWEPZy62gbeayf1U/4yxP/YbHfwSaldpEILOKmsZaGp8PAtVYMVYHsie +gOUdS/jO0P3silagq39cPQLiTMSsyYouxaagbmtdbwINUX0cjtoeKddd4AK7PIww +7su/lhqHZ58ZJdlApCORhXPaDCVrXp/uxAQfT2HhEGCJDTpctGyKMFXQbLUhSuzf +IilRKJ4jqjcwy+h5lCfDJUvCNYfwyYApsMCs6OWGmHRd7QSFNSs335wAEbVPpO1n +oBJHtOLywZFPF+qAm3LPV4a0OeLyA260c05QZYO59itakjDCBdHwrwv3EU8Z8hPd +6pMNLZ/H1MNK/wWDVeSL8ZzVJabSPTfADXpc1NSwPPWSETS7JYWssdoK+lXMw5vK +q2mSxabL/y91sQ5uscEDzDyJxEPlToApyc5qOUiqQj/thlA6FYBlo1uuuKrpKU1I +e6AA3Gt3fJHXH9TlIcO6DoHvd5fS/o7/RxyFVxqbRqjUoSKQeBzXos3u+QARAQAB +tChTYWx2YXRvcmUgU2FuZmlsaXBwbyA8YW50aXJlekBnbWFpbC5jb20+iQI+BBMB +AgAoBQJSe6LgAhsDBQld/A8ABgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAKCRAx +gTcoDlyI1riPD/oDDvyIVHtgHvdHqB8/GnF2EsaZgbNuwbiNZ+ilmqnjXzZpu5Su +kGPXAAo+v+rJVLSU2rjCUoL5PaoSlhznw5PL1xpBosN9QzfynWLvJE42T4i0uNU/ +a7a1PQCluShnBchm4Xnb3ohNVthFF2MGFRT4OZ5VvK7UcRLYTZoGRlKRGKi9HWea +2xFvyUd9jSuGZG/MMuoslgEPxei09rhDrKxnDNQzQZQpamm/42MITh/1dzEC5ZRx +8hgh1J70/c+zEU7s6kVSGvmYtqbV49/YkqAbhENIeZQ+bCxcTpojEhfk6HoQkXoJ +oK5m21BkMlUEvf1oTX22c0tuOrAX8k0y1M5oismT2e3bqs2OfezNsSfK2gKbeASk +CyYivnbTjmOSPbkvtb27nDqXjb051q6m2A5d59KHfey8BZVuV9j35Ettx4nrS1Ni +S7QrHWRvqceRrIrqXJKopyetzJ6kYDlbP+EVN9NJ2kz/WG6ermltMJQoC0oMhwAG +dfrttG+QJ8PCOlaYiZLD2bjzkDfdfanE74EKYWt+cseenZUf0tsncltRbNdeGTQb +1/GHfwJ+nbA1uKhcHCQ2WrEeGiYpvwKv2/nxBWZ3gwaiAwsz/kI6DQlPZqJoMea9 +8gDK2rQigMgbE88vIli4sNhc0yAtm3AbNgAO28NUhzIitB+av/xYxN/W/LkCDQRS +e6LgARAAtdfwe05ZQ0TZYAoeAQXxx2mil4XLzj6ycNjj2JCnFgpYxA8m6nf1gudr +C5V7HDlctp0i9i0wXbf07ubt4Szq4v3ihQCnPQKrZZWfRXxqg0/TOXFfkOdeIoXl +Fl+yC5lUaSTJSg21nxIr8pEq/oPbwpdnWdEGSL9wFanfDUNJExJdzxgyPzD6xubc +OIn2KviV9gbFzQfOIkgkl75V7gn/OA5g2SOLOIPzETLCvQYAGY9ppZrkUz+ji+aT 
+Tg7HBL6zySt1sCCjyBjFFgNF1RZY4ErtFj5bdBGKCuglyZou4o2ETfA8A5NNpu7x +zkls45UmqRTbmsTD2FU8Id77EaXxDz8nrmjz8f646J0rqn9pGnIg6Lc2PV8j7ACm +/xaTH03taIloOBkTs/Cl01XYeloM0KQwrML43TIm3xSE/AyGF9IGTQo3zmv8SnMO +F+Rv7+55QGlSkfIkXUNCUSm1+dJSBnUhVj/RAjxkekG2di+Jh/y8pkSUxPMDrYEa +OtDoiq2G/roXjVQcbOyOrWA2oB58IVuXO6RzMYi6k6BMpcbmQm0y+TcJqo64tREV +tjogZeIeYDu31eylwijwP67dtbWgiorrFLm2F7+povfXjsDBCQTYhjH4mZgV94ri +hYjP7X2YfLV3tvGyjsMhw3/qLlEyx/f/97gdAaosbpGlVjnhqicAEQEAAYkCJQQY +AQIADwUCUnui4AIbDAUJXfwPAAAKCRAxgTcoDlyI1kAND/sGnXTbMvfHd9AOzv7i +hDX15SSeMDBMWC+8jH/XZASQF/zuHk0jZNTJ01VAdpIxHIVb9dxRrZ3bl56BByyI +8m5DKJiIQWVai+pfjKj6C7p44My3KLodjEeR1oOODXXripGzqJTJNqpW5eCrCxTM +yz1rzO1H1wziJrRNc+ACjVBE3eqcxsZkDZhWN1m8StlX40YgmQmID1CC+kRlV+hg +LUlZLWQIFCGo2UJYoIL/xvUT3Sx4uKD4lpOjyApWzU40mGDaM5+SOsYYrT8rdwvk +nd/efspff64meT9PddX1hi7Cdqbq9woQRu6YhGoCtrHyi/kklGF3EZiw0zWehGAR +2pUeCTD28vsMfJ3ZL1mUGiwlFREUZAcjIlwWDG1RjZDJeZ0NV07KH1N1U8L8aFcu ++CObnlwiavZxOR2yKvwkqmu9c7iXi/R7SVcGQlNao5CWINdzCLHj6/6drPQfGoBS +K/w4JPe7fqmIonMR6O1Gmgkq3Bwl3rz6MWIBN6z+LuUF/b3ODY9rODsJGp21dl2q +xCedf//PAyFnxBNf5NSjyEoPQajKfplfVS3mG8USkS2pafyq6RK9M5wpBR9I1Smm +gon60uMJRIZbxUjQMPLOViGNXbPIilny3FdqbUgMieTBDxrJkE7mtkHfuYw8bERy +vI1sAEeV6ZM/uc4CDI3E2TxEbQ== +``` + +**Key fingerprint** + +``` +pub 4096R/0E5C88D6 2013-11-07 [expires: 2063-10-26] + Key fingerprint = E5F3 DA80 35F0 2EC1 47F9 020F 3181 3728 0E5C 88D6 + uid Salvatore Sanfilippo + sub 4096R/3B34D15F 2013-11-07 [expires: 2063-10-26] +``` From 8d45b0d13bc1ed18ef704224ada3c7a2d0a4156e Mon Sep 17 00:00:00 2001 From: Jan-Erik Rediger Date: Sat, 9 Nov 2013 13:54:55 +0100 Subject: [PATCH 0386/2880] Documented included libraries and added examples. 
--- commands/eval.md | 84 ++++++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 82 insertions(+), 2 deletions(-) diff --git a/commands/eval.md b/commands/eval.md index 606ea4a9c6..d00771c127 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -481,14 +481,94 @@ The Redis Lua interpreter loads the following Lua libraries: * string lib. * math lib. * debug lib. +* struct lib. * cjson lib. * cmsgpack lib. +* redis.sha1hex function. Every Redis instance is _guaranteed_ to have all the above libraries so you can be sure that the environment for your Redis scripts is always the same. -The CJSON library provides extremely fast JSON maniplation within Lua. -All the other libraries are standard Lua libraries. +struct, CJSON and cmsgpack are external libraries; all the other libraries are standard +Lua libraries. + +### struct + +struct is a library for packing/unpacking structures within Lua. + +``` +Valid formats: +> - big endian +< - little endian +![num] - alignment +x - padding +b/B - signed/unsigned byte +h/H - signed/unsigned short +l/L - signed/unsigned long +T - size_t +i/In - signed/unsigned integer with size `n' (default is size of int) +cn - sequence of `n' chars (from/to a string); when packing, n==0 means + the whole string; when unpacking, n==0 means use the previous + read number as the string length +s - zero-terminated string +f - float +d - double +' ' - ignored +``` + + +Example: + +``` +127.0.0.1:6379> eval 'return struct.pack("HH", 1, 2)' 0 +"\x01\x00\x02\x00" +127.0.0.1:6379> eval 'return {struct.unpack("HH", ARGV[1])}' 0 "\x01\x00\x02\x00" +1) (integer) 1 +2) (integer) 2 +3) (integer) 5 +127.0.0.1:6379> eval 'return struct.size("HH")' 0 +(integer) 4 +``` + +### CJSON + +The CJSON library provides extremely fast JSON manipulation within Lua.
+ + Example: + + ``` +redis 127.0.0.1:6379> eval 'return cjson.encode({["foo"]= "bar"})' 0 +"{\"foo\":\"bar\"}" +redis 127.0.0.1:6379> eval 'return cjson.decode(ARGV[1])["foo"]' 0 "{\"foo\":\"bar\"}" +"bar" +``` + +### cmsgpack + +The cmsgpack library provides simple and fast MessagePack manipulation within Lua. + +Example: + +``` +127.0.0.1:6379> eval 'return cmsgpack.pack({"foo", "bar", "baz"})' 0 +"\x93\xa3foo\xa3bar\xa3baz" +127.0.0.1:6379> eval 'return cmsgpack.unpack(ARGV[1])' 0 "\x93\xa3foo\xa3bar\xa3baz" +1) "foo" +2) "bar" +3) "baz" +``` + +### redis.sha1hex + +Performs the SHA1 hex digest of the input string. + +Example: + +``` +127.0.0.1:6379> eval 'return redis.sha1hex(ARGV[1])' 0 "foo" +"0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33" +``` ## Emitting Redis logs from scripts From 33a0b714a196aa18b193e37aa184083c4d1ba235 Mon Sep 17 00:00:00 2001 From: Frank Mueller Date: Tue, 12 Nov 2013 10:56:56 +0100 Subject: [PATCH 0387/2880] Changed location of the Tideland Go Redis Client Repository moved from Google into own Git.
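The `struct` and `redis.sha1hex` examples in the eval.md patch above can be cross-checked outside of Lua. The Python sketch below reproduces the same bytes and digests with the standard `struct` and `hashlib` modules; the explicit little-endian prefix `"<"` is an assumption that matches the `"\x01\x00\x02\x00"` output shown (Lua's struct library defaults to native endianness).

```python
import hashlib
import struct

# Same packing as the Lua example: two unsigned shorts.
packed = struct.pack("<HH", 1, 2)
print(packed)                     # b'\x01\x00\x02\x00'

# Round-trip, mirroring struct.unpack("HH", ...) in the Lua example.
values = struct.unpack("<HH", packed)
print(values)                     # (1, 2)

# Lua's struct.size("HH") corresponds to calcsize here.
print(struct.calcsize("<HH"))     # 4

# redis.sha1hex(ARGV[1]) with "foo" matches hashlib's hex digest.
print(hashlib.sha1(b"foo").hexdigest())
# 0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33
```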
--- clients.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/clients.json b/clients.json index 327a74c93c..1a052e18ea 100644 --- a/clients.json +++ b/clients.json @@ -120,9 +120,9 @@ }, { - "name": "Tideland CGL Redis", + "name": "Tideland Go Redis Client", "language": "Go", - "repository": "http://code.google.com/p/tcgl/", + "repository": "http://git.tideland.biz/godm/redis", "description": "A flexible Go Redis client able to handle all commands", "authors": ["themue"], "active": true From a838e74d857ca49a9549cb171c47d11f54fad517 Mon Sep 17 00:00:00 2001 From: Jan-Erik Rediger Date: Tue, 12 Nov 2013 16:56:24 +0100 Subject: [PATCH 0388/2880] Documented that migrate options are only available in 2.8 Fixes antirez/redis#506 --- commands/migrate.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/commands/migrate.md b/commands/migrate.md index 775d1ea6fc..cee852393b 100644 --- a/commands/migrate.md +++ b/commands/migrate.md @@ -42,6 +42,8 @@ On success OK is returned. * `COPY` -- Do not remove the key from the local instance. * `REPLACE` -- Replace existing key on the remote instance. +`COPY` and `REPLACE` were added in Redis 2.8 and are not available in 2.6. + @return @status-reply: The command returns OK on success. From 25cdf50fbceea0904b278a67d76d586fc65d8065 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 13 Nov 2013 13:19:55 +0100 Subject: [PATCH 0389/2880] RENAME doc updated. --- commands/rename.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/rename.md b/commands/rename.md index 6317706540..b8c524a4f2 100644 --- a/commands/rename.md +++ b/commands/rename.md @@ -1,7 +1,7 @@ Renames `key` to `newkey`. It returns an error when the source and destination names are the same, or when `key` does not exist. -If `newkey` already exists it is overwritten
+If `newkey` already exists it is overwritten. When this happens, `RENAME` executes an implicit `DEL` operation, so if the deleted key contains a very big value it may cause high latency even if `RENAME` itself is usually a constant-time operation. @return From 4e99f5dbbccf65a277bf543e323bb2498bccdf32 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 18 Nov 2013 09:57:35 +0100 Subject: [PATCH 0390/2880] Added SSCAN, ZSCAN, HSCAN pages redirecting to SCAN. --- commands/hscan.md | 1 + commands/sscan.md | 1 + commands/zscan.md | 1 + 3 files changed, 3 insertions(+) create mode 100644 commands/hscan.md create mode 100644 commands/sscan.md create mode 100644 commands/zscan.md diff --git a/commands/hscan.md b/commands/hscan.md new file mode 100644 index 0000000000..9ab261616a --- /dev/null +++ b/commands/hscan.md @@ -0,0 +1 @@ +See `SCAN` for `HSCAN` documentation. diff --git a/commands/sscan.md b/commands/sscan.md new file mode 100644 index 0000000000..c19f3b1bf3 --- /dev/null +++ b/commands/sscan.md @@ -0,0 +1 @@ +See `SCAN` for `SSCAN` documentation. diff --git a/commands/zscan.md b/commands/zscan.md new file mode 100644 index 0000000000..3926307fbe --- /dev/null +++ b/commands/zscan.md @@ -0,0 +1 @@ +See `SCAN` for `ZSCAN` documentation. From b3d53fd3e7014a79895874cf93c19c2f24db1335 Mon Sep 17 00:00:00 2001 From: Fayiz Musthafa Date: Wed, 20 Nov 2013 20:22:36 +0530 Subject: [PATCH 0391/2880] Added scredis a scala client --- clients.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/clients.json b/clients.json index 327a74c93c..af53096001 100644 --- a/clients.json +++ b/clients.json @@ -441,6 +441,14 @@ "active": true }, + { + "name": "scredis", + "language": "Scala", + "repository": "https://github.com/Livestream/scredis", + "description": "Scredis is an advanced Redis client entirely written in Scala.
Used in production at http://Livestream.com.", "active": true }, + { "name": "Tcl Client", "language": "Tcl", From 47115ec5f7cdc2fff7b7a9f278db8fe316b25de3 Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 21 Nov 2013 18:01:24 +0100 Subject: [PATCH 0392/2880] New Sentinel doc. --- topics/{sentinel.md => sentinel-old.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename topics/{sentinel.md => sentinel-old.md} (100%) diff --git a/topics/sentinel.md b/topics/sentinel-old.md similarity index 100% rename from topics/sentinel.md rename to topics/sentinel-old.md From 173155f6d3b185348fb62c8dceb5616a33d51ffa Mon Sep 17 00:00:00 2001 From: antirez Date: Thu, 21 Nov 2013 18:02:15 +0100 Subject: [PATCH 0393/2880] New doc actually added... --- topics/sentinel.md | 359 +++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 359 insertions(+) create mode 100644 topics/sentinel.md diff --git a/topics/sentinel.md b/topics/sentinel.md new file mode 100644 index 0000000000..5a600697c3 --- /dev/null +++ b/topics/sentinel.md @@ -0,0 +1,359 @@ +Redis Sentinel Documentation +=== + +**Note:** this page documents the *new* Sentinel implementation that entered the Github repository on the 21st of November. The old Sentinel implementation is [documented here](http://redis.io/topics/sentinel-old), however using the old implementation is discouraged. + +Redis Sentinel is a system designed to help manage Redis instances. +It performs the following three tasks: + +* **Monitoring**. Sentinel constantly checks if your master and slave instances are working as expected. +* **Notification**. Sentinel can notify the system administrator, or another computer program, via an API, that something is wrong with one of the monitored Redis instances. +* **Automatic failover**.
If a master is not working as expected, Sentinel can start a failover process where a slave is promoted to master, the other additional slaves are reconfigured to use the new master, and the applications using the Redis server are informed about the new address to use when connecting. + +Redis Sentinel is a distributed system: this means that usually you want to run +multiple Sentinel processes across your infrastructure, and these processes +will use gossip protocols in order to understand if a master is down and +agreement protocols in order to perform the failover and assign a version to +the new configuration. + +Redis Sentinel is shipped as a stand-alone executable called `redis-sentinel` +but actually it is a special execution mode of the Redis server itself, and +can be also invoked using the `--sentinel` option of the normal `redis-server` +executable. + +**WARNING:** Redis Sentinel is currently a work in progress. This document +describes how to use what we have already implemented, and may change as the +Sentinel implementation evolves. + +Redis Sentinel is compatible with Redis 2.4.16 or greater, and Redis 2.6.0 or greater, however it works better if used against Redis instances version 2.8.0 or greater. + +Obtaining Sentinel +--- + +Currently Sentinel is part of the Redis *unstable* branch at github. +To compile it you need to clone the *unstable* branch and compile Redis. +You'll see a `redis-sentinel` executable in your `src` directory. + +Alternatively you can use directly the `redis-server` executable itself, +starting it in Sentinel mode as specified in the next paragraph. + +An updated version of Sentinel is also available as part of the Redis 2.8.0 release.
+ +Running Sentinel +--- + +If you are using the `redis-sentinel` executable (or if you have a symbolic +link with that name to the `redis-server` executable) you can run Sentinel +with the following command line: + + redis-sentinel /path/to/sentinel.conf + +Otherwise you can use directly the `redis-server` executable starting it in +Sentinel mode: + + redis-server /path/to/sentinel.conf --sentinel + +Both ways work the same. + +However **it is mandatory** to use a configuration file when running Sentinel, as this file will be used by the system in order to save the current state that will be reloaded in case of restarts. Sentinel will simply refuse to start if no configuration file is given or if the configuration file path is not writable. + +Configuring Sentinel +--- + +The Redis source distribution contains a file called `sentinel.conf` +that is a self-documented example configuration file you can use to +configure Sentinel, however a typical minimal configuration file looks like the +following: + + sentinel monitor mymaster 127.0.0.1 6379 2 + sentinel down-after-milliseconds mymaster 60000 + sentinel failover-timeout mymaster 180000 + sentinel parallel-syncs mymaster 1 + + sentinel monitor resque 192.168.1.3 6380 4 + sentinel down-after-milliseconds resque 10000 + sentinel failover-timeout resque 180000 + sentinel parallel-syncs resque 5 + +The first line is used to tell Redis to monitor a master called *mymaster*, +that is at address 127.0.0.1 and port 6379, with a quorum of 2 Sentinels needed +to agree that this master is failing (if the agreement is not reached +the automatic failover does not start). + +However note that, whatever agreement you specify to detect an instance as not working, a Sentinel requires **the vote from the majority** of the known Sentinels in the system in order to start a failover and reserve a given *configuration Epoch* (that is a version to attach to a new master configuration).
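The paragraph above draws a distinction worth making concrete: the per-master quorum only governs failure *detection* (reaching ODOWN), while actually *starting* a failover requires a strict majority of all known Sentinels. The following helper is an illustrative model of the two thresholds, not Sentinel's actual code:

```python
def can_start_failover(sdown_votes, quorum, leader_votes, total_sentinels):
    """Illustrative model of Sentinel's two thresholds.

    sdown_votes:     Sentinels currently reporting the master as down
    quorum:          per-master agreement needed to reach ODOWN
    leader_votes:    votes received by the Sentinel trying to lead
    total_sentinels: all known Sentinels monitoring this master
    """
    odown = sdown_votes >= quorum                  # failure detected
    elected = leader_votes > total_sentinels // 2  # strict majority
    return odown and elected


# 5 Sentinels with quorum 2: two down-reports are enough to reach ODOWN,
# but the failover still needs 3 of 5 leader votes to actually start.
print(can_start_failover(2, 2, 2, 5))  # False: ODOWN reached, no majority
print(can_start_failover(2, 2, 3, 5))  # True
```

This is why a quorum lower than the majority only makes failure detection more sensitive; it can never let a minority partition of Sentinels perform a failover on its own.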
+ +In other words **Sentinel is not able to perform the failover if only a minority of the Sentinel processes are working**. + +The other options are almost always in the form: + + sentinel + +And are used for the following purposes: + +* `down-after-milliseconds` is the time in milliseconds an instance should not +be reachable (either does not reply to our PINGs or it is replying with an +error) for a Sentinel to start thinking it is down. After this time has elapsed +the Sentinel will mark an instance as **subjectively down** (also known as +`SDOWN`), which is not enough to start the automatic failover. +However if enough Sentinels think that there is a subjectively down +condition, then the instance is marked as **objectively down**. The number of +sentinels that need to agree depends on the configured agreement for this +master. +* `parallel-syncs` sets the number of slaves that can be reconfigured to use +the new master after a failover at the same time. The lower the number, the +more time it will take for the failover process to complete, however if the +slaves are configured to serve old data, you may not want all the slaves to +resync at the same time with the new master, as while the replication process +is mostly non blocking for a slave, there is a moment when it stops in order to load +the bulk data from the master during a resync. You may make sure only one +slave at a time is not reachable by setting this option to the value of 1. + +The other options are described in the rest of this document and +documented in the example sentinel.conf file shipped with the Redis +distribution. + +SDOWN and ODOWN +--- + +As already briefly mentioned in this document Redis Sentinel has two different +concepts of *being down*. One is called a *Subjectively Down* condition +(SDOWN) and is a down condition that is local to a given Sentinel instance.
+Another is called *Objectively Down* condition (ODOWN) and is reached when +enough Sentinels (at least the number configured as the `quorum` parameter +of the monitored master) have an SDOWN condition, and get feedback from +other Sentinels using the `SENTINEL is-master-down-by-addr` command. + +From the point of view of a Sentinel an SDOWN condition is reached if we +don't receive a valid reply to PING requests for the number of seconds +specified in the configuration as `down-after-milliseconds` +parameter. + +An acceptable reply to PING is one of the following: + +* PING replied with +PONG. +* PING replied with -LOADING error. +* PING replied with -MASTERDOWN error. + +Any other reply (or no reply) is considered not valid. + +Note that SDOWN requires that no acceptable reply is received for the whole +interval configured, so for instance if the interval is 30000 milliseconds +(30 seconds) and we receive an acceptable ping reply every 29 seconds, the +instance is considered to be working. + +To switch from SDOWN to ODOWN no strong quorum algorithm is used, but +just a form of gossip: if a given Sentinel gets acknowledgment from enough +Sentinels, in a given time range, that the master is not working, the SDOWN is +promoted to ODOWN. If this acknowledgment is later missing, the flag is cleared. + +The ODOWN condition **only applies to masters**. For other kinds of instances +Sentinel doesn't require any agreement, so the ODOWN state is never reached +for slaves and other sentinels. + +However once a Sentinel sees a master in ODOWN state, it can try to be +elected by the other Sentinels to perform the failover. + +Tasks every Sentinel accomplishes periodically +--- + +* Every Sentinel sends a **PING** request to every known master, slave, and sentinel instance, every second. +* An instance is Subjectively Down (**SDOWN**) if the latest valid reply to **PING** was received more than `down-after-milliseconds` milliseconds ago.
Acceptable PING replies are: +PONG, -LOADING, -MASTERDOWN. +* If a master is in **SDOWN** condition, every other Sentinel also monitoring this master is queried for confirmation of this state, every second. +* If a master is in **SDOWN** condition, and enough other Sentinels (to reach the configured quorum) agree about the condition in a given time range, the master is marked as Objectively Down (**ODOWN**). +* Every Sentinel sends an **INFO** request to every known master and slave instance, once every 10 seconds. If a master is in **ODOWN** condition, its slaves are asked for **INFO** every second instead of being asked every 10 seconds. +* The **ODOWN** condition is cleared if there is no longer acknowledgment from enough other Sentinels that the master is unreachable. The **SDOWN** condition is cleared as soon as the master starts to reply again to pings. + +Sentinels and Slaves auto discovery +--- + +While Sentinels stay connected with other Sentinels in order to reciprocally +check the availability of each other, and to exchange messages, you don't +need to configure the other Sentinel addresses in every Sentinel instance you +run, as Sentinel uses the Redis master Pub/Sub capabilities in order to +discover the other Sentinels that are monitoring the same master. + +This is obtained by sending *Hello Messages* into the channel named +`__sentinel__:hello`. + +Similarly you don't need to configure the list of slaves attached +to a master, as Sentinel will auto discover this list by querying Redis. + +* Every Sentinel publishes a message to every monitored master and slave Pub/Sub channel `__sentinel__:hello`, every two seconds, announcing its presence with ip, port, runid. +* Every Sentinel is subscribed to the Pub/Sub channel `__sentinel__:hello` of every master and slave, looking for unknown sentinels. When new sentinels are detected, they are added as sentinels of this master.
+* Hello messages also include the full current configuration of the master. If another Sentinel has a configuration for a given master that is older than the one received, it updates to the new configuration immediately. +* Before adding a new sentinel to a master a Sentinel always checks if there is already a sentinel with the same runid or the same address (ip and port pair). In that case all the matching sentinels are removed, and the new one added. + +Sentinel API +=== + +By default Sentinel runs using TCP port 26379 (note that 6379 is the normal +Redis port). Sentinels accept commands using the Redis protocol, so you can +use `redis-cli` or any other unmodified Redis client in order to talk with +Sentinel. + +There are two ways to talk with Sentinel: it is possible to directly query +it to check the state of the monitored Redis instances from its point +of view, to see what other Sentinels it knows, and so forth. + +An alternative is to use Pub/Sub to receive *push style* notifications from +Sentinels, every time some event happens, like a failover, or an instance +entering an error condition, and so forth. + +Sentinel commands +--- + +The following is a list of accepted commands: + +* **PING** this command simply returns PONG. +* **SENTINEL masters** show a list of monitored masters and their state. +* **SENTINEL slaves ``** show a list of slaves for this master, and their state. +* **SENTINEL get-master-addr-by-name ``** return the ip and port number of the master with that name. If a failover is in progress or terminated successfully for this master it returns the address and port of the promoted slave. +* **SENTINEL reset ``** this command will reset all the masters with matching name. The pattern argument is a glob-style pattern. The reset process clears any previous state in a master (including a failover in progress), and removes every slave and sentinel already discovered and associated with the master.
+* **SENTINEL failover ``** force a failover as if the master was not reachable, and without asking for agreement from other Sentinels (however a new version of the configuration will be published so that the other Sentinels will update their configurations). + +Pub/Sub Messages +--- + +A client can use a Sentinel as if it was a Redis-compatible Pub/Sub server +(but you can't use `PUBLISH`) in order to `SUBSCRIBE` or `PSUBSCRIBE` to +channels and get notified about specific events. + +The channel name is the same as the name of the event. For instance the +channel named `+sdown` will receive all the notifications related to instances +entering an `SDOWN` condition. + +To get all the messages simply subscribe using `PSUBSCRIBE *`. + +The following is a list of channels and message formats you can receive using +this API. The first word is the channel / event name, the rest is the format of the data. + +Note: where *instance details* is specified it means that the following arguments are provided to identify the target instance: + + @ + +The part identifying the master (from the @ argument to the end) is optional +and is only specified if the instance is not a master itself. + +* **+reset-master** `` -- The master was reset. +* **+slave** `` -- A new slave was detected and attached. +* **+failover-state-reconf-slaves** `` -- Failover state changed to `reconf-slaves` state. +* **+failover-detected** `` -- A failover started by another Sentinel or any other external entity was detected (An attached slave turned into a master). +* **+slave-reconf-sent** `` -- The leader sentinel sent the `SLAVEOF` command to this instance in order to reconfigure it for the new master. +* **+slave-reconf-inprog** `` -- The slave being reconfigured reports to be a slave of the new master ip:port pair, but the synchronization process is not yet complete. +* **+slave-reconf-done** `` -- The slave is now synchronized with the new master.
+* **-dup-sentinel** `<instance details>` -- One or more sentinels for the specified master were removed as duplicated (this happens for instance when a Sentinel instance is restarted).
+* **+sentinel** `<instance details>` -- A new sentinel for this master was detected and attached.
+* **+sdown** `<instance details>` -- The specified instance is now in Subjectively Down state.
+* **-sdown** `<instance details>` -- The specified instance is no longer in Subjectively Down state.
+* **+odown** `<instance details>` -- The specified instance is now in Objectively Down state.
+* **-odown** `<instance details>` -- The specified instance is no longer in Objectively Down state.
+* **+new-epoch** `<instance details>` -- The current epoch was updated.
+* **+try-failover** `<instance details>` -- New failover in progress, waiting to be elected by the majority.
+* **+elected-leader** `<instance details>` -- Won the election for the specified epoch, can do the failover.
+* **+failover-state-select-slave** `<instance details>` -- New failover state is `select-slave`: we are trying to find a suitable slave for promotion.
+* **no-good-slave** `<instance details>` -- There is no good slave to promote. Currently we'll try after some time, but probably this will change and the state machine will abort the failover entirely in this case.
+* **selected-slave** `<instance details>` -- We found the specified good slave to promote.
+* **failover-state-send-slaveof-noone** `<instance details>` -- We are trying to reconfigure the promoted slave as master, waiting for it to switch.
+* **failover-end-for-timeout** `<instance details>` -- The failover terminated for timeout, slaves will eventually be configured to replicate with the new master anyway.
+* **failover-end** `<instance details>` -- The failover terminated with success. All the slaves appear to be reconfigured to replicate with the new master.
+* **switch-master** `<master name> <oldip> <oldport> <newip> <newport>` -- The new IP and address of the master is the specified one, after a configuration change. This is **the message most external users are interested in**.
+* **+tilt** -- Tilt mode entered.
+* **-tilt** -- Tilt mode exited.
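The *instance details* format above can be parsed mechanically. The following is an illustrative Python sketch (not part of Sentinel or of any client library; the function name and the sample payloads are assumptions) of how a subscriber might split an event payload into its fields:

```python
def parse_instance_details(payload: str) -> dict:
    # "<instance-type> <name> <ip> <port> [@ <master-name> <master-ip> <master-port>]"
    parts = payload.split(" @ ")
    itype, name, ip, port = parts[0].split()
    details = {"type": itype, "name": name, "ip": ip, "port": int(port)}
    if len(parts) > 1:
        # The master part is present only when the instance is not a master itself.
        mname, mip, mport = parts[1].split()
        details["master"] = {"name": mname, "ip": mip, "port": int(mport)}
    return details
```

For example, a `+sdown` message for a slave would carry both halves, while one for a master would carry only the first four fields.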
+
+Sentinel failover
+===
+
+The failover process consists of the following steps:
+
+* Recognize that the master is in ODOWN state.
+* Increment our current epoch (see Raft leader election), and try to be elected for the current epoch.
+* If the election failed, retry after two times the configured failover timeout, and stop for now. Otherwise continue with the following steps.
+* Select a slave to promote as master.
+* The promoted slave is turned into a master with the command **SLAVEOF NO ONE**.
+* The Hello messages broadcast via Pub/Sub contain the updated configuration from now on, so that all the other Sentinels will update their config.
+* All the other slaves attached to the original master are configured with the **SLAVEOF** command in order to start the replication process with the new master.
+* The leader terminates the failover process when all the slaves are reconfigured.
+
+The slave to promote as master is chosen in the following way:
+
+* We remove all the slaves in SDOWN, disconnected, or with the last ping reply received older than 5 seconds (PING is sent every second).
+* We remove all the slaves disconnected from the master for more than 10 times the configured `down-after` time.
+* Of all the remaining instances, we get the one with the greatest replication offset if available, or the one with the lowest `runid`, lexicographically, if the replication offset is not available or the same.
+
+Consistency qualities of Sentinel failover
+---
+
+The Sentinel failover uses the leader election from the Raft algorithm in order
+to guarantee that only a given leader is elected in a given epoch.
+
+This means that no two Sentinels will try to perform the failover in the
+same epoch. Also Sentinels will never vote for more than one leader in a
+given epoch.
+
+Higher configuration epochs always win over older epochs, so every Sentinel will
+actively replace its configuration with a new one.
+
+Basically it is possible to think of Sentinel configurations as a state with an associated version. The state is **eventually propagated** to all the other Sentinels in a last-write-wins fashion (that is, the last configuration wins).
+
+For example during network partitions, a given Sentinel can claim an older configuration, which will be updated as soon as the Sentinel is again able to receive updates.
+
+In environments where consistency is important during network partitions, it is suggested to use the Redis option that stops accepting writes if not connected to at least a given number of slave instances, and at the same time to run a Redis Sentinel process in every physical or virtual machine where a Redis master or slave is running.
+
+Sentinel persistent state
+---
+
+Sentinel state is persisted in the sentinel configuration file. For example
+every time a new configuration for a master is received, or created (by leader
+Sentinels), the configuration is persisted on disk together with the
+configuration epoch. This means that it is safe to stop and restart Sentinel
+processes.
+
+Sentinel reconfiguration of instances outside the failover procedure
+---
+
+Even when no failover is in progress, Sentinels will always try to set the
+current configuration on monitored instances. Specifically:
+
+* Slaves (according to the current configuration) that claim to be masters will be configured as slaves to replicate with the current master.
+* Slaves connected to a wrong master will be reconfigured to replicate with the right master.
+
+However when these conditions are encountered, Sentinel waits long enough to be sure to catch a configuration update via Pub/Sub Hello messages before reconfiguring the instances, in order to avoid that Sentinels with a stale configuration try to change the slaves configuration without a good reason.
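The "state with an associated version" idea above can be sketched in a few lines of Python. This is a hedged illustration only, not Sentinel's actual code; the type and field names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class MasterConfig:
    master_ip: str
    master_port: int
    epoch: int          # configuration epoch attached to this state

def merge_config(current: MasterConfig, received: MasterConfig) -> MasterConfig:
    # Last-write-wins: a configuration carrying a strictly higher epoch
    # replaces the current one; anything older or equal is ignored.
    return received if received.epoch > current.epoch else current
```

A Sentinel that was partitioned away keeps claiming its old configuration, but the first Hello message carrying a higher epoch replaces it.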
+
+TILT mode
+---
+
+Redis Sentinel is heavily dependent on the computer time: for instance in
+order to understand if an instance is available it remembers the time of the
+latest successful reply to the PING command, and compares it with the current
+time to understand how old it is.
+
+However if the computer time changes in an unexpected way, or if the computer
+is very busy, or the process is blocked for some reason, Sentinel may start to
+behave in an unexpected way.
+
+The TILT mode is a special "protection" mode that a Sentinel can enter when
+something odd is detected that can lower the reliability of the system.
+The Sentinel timer interrupt is normally called 10 times per second, so we
+expect that more or less 100 milliseconds will elapse between two calls
+to the timer interrupt.
+
+What a Sentinel does is to register the previous time the timer interrupt
+was called, and compare it with the current call: if the time difference
+is negative or unexpectedly big (2 seconds or more) the TILT mode is entered
+(or, if it was already entered, the exit from TILT mode is postponed).
+
+When in TILT mode the Sentinel will continue to monitor everything, but:
+
+* It stops acting entirely.
+* It starts to reply negatively to `SENTINEL is-master-down-by-addr` requests, as the ability to detect a failure is no longer trusted.
+
+If everything appears to be normal for 30 seconds, the TILT mode is exited.
+
+Handling of -BUSY state
+---
+
+(Warning: not yet implemented)
+
+The -BUSY error is returned when a script is running for more time than the
+configured script time limit. When this happens, before triggering a failover
+Redis Sentinel will try to send a "SCRIPT KILL" command, that will only
+succeed if the script was read-only.

From 99ebb3ec136f446cc7a318f272d7007c42757bf3 Mon Sep 17 00:00:00 2001
From: antirez
Date: Fri, 22 Nov 2013 10:20:18 +0100
Subject: [PATCH 0394/2880] Sentinel doc improved a bit.
--- topics/sentinel.md | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/topics/sentinel.md b/topics/sentinel.md index 5a600697c3..0f0a3f6845 100644 --- a/topics/sentinel.md +++ b/topics/sentinel.md @@ -274,6 +274,8 @@ The failover process consists on the following steps: * All the other slaves attached to the original master are configured with the **SLAVEOF** command in order to start the replication process with the new master. * The leader terminates the failover process when all the slaves are reconfigured. +**Note:** every time a Redis instance is reconfigured, either by turning it into a master, a slave, or reconfiguring it as a slave of a different instance, the `CONFIG REWRITE` command is sent to the instance in order to make sure the configuration is persisted on disk. + The Sentinel to elect as master is chosen in the following way: * We remove all the slaves in SDOWN, disconnected, or with the last ping reply received older than 5 seconds (PING is sent every second). @@ -357,3 +359,8 @@ The -BUSY error is returned when a script is running for more time than the configured script time limit. When this happens before triggering a fail over Redis Sentinel will try to send a "SCRIPT KILL" command, that will only succeed if the script was read-only. + +Sentinel clients implementation +--- + +Sentinel requires explicit client support, unless the system is configured to execute a script that performs a transparent redirection of all the requests to the new master instance (virtual IP or other similar systems). The topic of client libraries implementation is covered in the document [Sentinel clients guidelines](/topics/sentinel-clients). 
From 62a3bffe65aecce387067d84a704a7a76d21b81e Mon Sep 17 00:00:00 2001 From: Sean Charles Date: Sun, 24 Nov 2013 19:26:47 +0000 Subject: [PATCH 0395/2880] Added GNU Prolog client information --- clients.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/clients.json b/clients.json index 327a74c93c..6bdef60f2f 100644 --- a/clients.json +++ b/clients.json @@ -669,5 +669,13 @@ "repository": "https://github.com/ctstone/csredis", "description": "Async (and sync) client for Redis and Sentinel", "authors": ["ctnstone"] + }, + + { + "name": "gnuprolog-redisclient", + "language": "GNU Prolog", + "repository": "https://github.com/emacstheviking/gnuprolog-redisclient", + "description": "Simple Redis client for GNU Prolog in native Prolog, no FFI, libraries etc.", + "authors": ["seancharles"] } ] From e0342264cd24c49bddfe3a1f5dbab7e34856a406 Mon Sep 17 00:00:00 2001 From: Curtis Maloney Date: Tue, 26 Nov 2013 10:20:25 +1100 Subject: [PATCH 0396/2880] Minor grammar corrections --- topics/persistence.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/topics/persistence.md b/topics/persistence.md index 731236e61d..42cd83ee55 100644 --- a/topics/persistence.md +++ b/topics/persistence.md @@ -39,8 +39,10 @@ AOF disadvantages --- * AOF files are usually bigger than the equivalent RDB files for the same dataset. -* AOF can be slower then RDB depending on the exact fsync policy. In general with fsync set to *every second* performances are still very high, and with fsync disabled it should be exactly as fast as RDB even under high load. Still RDB is able to provide more guarantees about the maximum latency even in the case of an huge write load. -* In the past we experienced rare bugs in specific commands (for instance there was one involving blocking commands like BRPOPLPUSH) causing the AOF produced to don't reproduce exactly the same dataset on reloading. 
This bugs are rare and we have tests in the test suite creating random complex datasets automatically and reloading them to check everything is ok, but this kind of bugs are almost impossible with RDB persistence. To make this point more clear: the Redis AOF works incrementally updating an existing state, like MySQL or MongoDB does, while the RDB snapshotting creates everything from scratch again and again, that is conceptually more robust. However 1) It should be noted that every time the AOF is rewritten by Redis it is recreated from scratch starting from the actual data contained in the data set, making resistance to bugs stronger compared to an always appending AOF file (or one rewritten reading the old AOF instead of reading the data in memory). 2) We never had a single report from users about an AOF corruption that was detected in the real world.
+* AOF can be slower than RDB depending on the exact fsync policy. In general with fsync set to *every second* performances are still very high, and with fsync disabled it should be exactly as fast as RDB even under high load. Still RDB is able to provide more guarantees about the maximum latency even in the case of a huge write load.
+* In the past we experienced rare bugs in specific commands (for instance there was one involving blocking commands like BRPOPLPUSH) causing the AOF produced to not reproduce exactly the same dataset on reloading. These bugs are rare and we have tests in the test suite creating random complex datasets automatically and reloading them to check everything is ok, but this kind of bug is almost impossible with RDB persistence. To make this point more clear: the Redis AOF works incrementally updating an existing state, like MySQL or MongoDB does, while the RDB snapshotting creates everything from scratch again and again, which is conceptually more robust.
However -
+
+ 1) It should be noted that every time the AOF is rewritten by Redis it is recreated from scratch starting from the actual data contained in the data set, making resistance to bugs stronger compared to an always appending AOF file (or one rewritten reading the old AOF instead of reading the data in memory).
+ 2) We never had a single report from users about an AOF corruption that was detected in the real world.

Ok, so what should I use?
---

From cd1cadaf8429f856e0b7cf6a06a084fe3476a531 Mon Sep 17 00:00:00 2001
From: antirez
Date: Wed, 27 Nov 2013 10:16:37 +0100
Subject: [PATCH 0397/2880] Cluster tutorial.

---
 topics/cluster-tutorial.md | 707 +++++++++++++++++++++++++++++++++++++
 1 file changed, 707 insertions(+)
 create mode 100644 topics/cluster-tutorial.md

diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md
new file mode 100644
index 0000000000..b6b7da958f
--- /dev/null
+++ b/topics/cluster-tutorial.md
@@ -0,0 +1,707 @@
+Redis cluster tutorial
+===
+
+This document is a gentle introduction to Redis Cluster that does not
+require understanding complex distributed systems concepts. It provides
+instructions about how to set up a cluster, test it, and operate it, without
+going into the details that are covered in
+the [Redis Cluster specification](/topics/cluster-spec) but just describing
+how the system behaves from the point of view of the cluster.
+
+Note that if you plan to run a serious Redis Cluster deployment, the
+more formal specification is a highly suggested read.
+
+**Redis cluster is currently alpha quality code**, please get in touch via the
+Redis mailing list or open an issue in the Redis Github repository if you
+find any issue.
+
+Redis Cluster 101
+---
+
+Redis Cluster provides a way to run a Redis installation where data is
+**automatically sharded across multiple Redis nodes**.
+
+Commands dealing with multiple keys are not supported by the cluster, because
+this would require moving data between Redis nodes, making Redis Cluster
+unable to provide Redis-like performance and predictable behavior
+under load.
+
+Redis Cluster also provides **some degree of availability during partitions**,
+that is, in practical terms, the ability to continue operations when
+some nodes fail or are not able to communicate.
+
+So in practical terms, what do you get with Redis Cluster?
+
+* The ability to automatically split your dataset among multiple nodes.
+* The ability to continue operations when a subset of the nodes are experiencing failures or are unable to communicate with the rest of the cluster.
+
+Redis Cluster data sharding
+---
+
+Redis Cluster does not use consistent hashing, but a different form of sharding
+where every key is conceptually part of what we call a **hash slot**.
+
+There are 16384 hash slots in Redis Cluster, and to compute the hash
+slot of a given key, we simply take the CRC16 of the key modulo
+16384.
+
+Every node in a Redis Cluster is responsible for a subset of the hash slots,
+so for example you may have a cluster with 3 nodes, where:
+
+* Node A contains hash slots from 0 to 5500.
+* Node B contains hash slots from 5501 to 11000.
+* Node C contains hash slots from 11001 to 16383.
+
+This allows adding and removing nodes in the cluster easily. For example if
+I want to add a new node D, I need to move some hash slots from nodes A, B, C
+to D. Similarly if I want to remove node A from the cluster I can just
+move the hash slots served by A to B and C. When node A is empty
+I can remove it from the cluster completely.
+
+Because moving hash slots from one node to another does not require stopping
+operations, adding and removing nodes, or changing the percentage of hash
+slots held by nodes, does not require any downtime.
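The key-to-slot mapping described above can be reproduced with a short Python sketch. Redis uses the CRC16-CCITT (XMODEM) variant; this bit-by-bit implementation is illustrative only, and real clients normally use an optimized table-driven version:

```python
def crc16(data: bytes) -> int:
    # Bit-by-bit CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    # A key maps to one of the 16384 hash slots.
    return crc16(key) % 16384
```

For example, `key_slot(b"foo")` should match the slot a cluster node reports when redirecting a client for the key `foo`.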
+
+Redis Cluster master-slave model
+---
+
+In order to remain available when a subset of nodes are failing or are not able
+to communicate with the majority of nodes, Redis Cluster uses a master-slave
+model where every node has from 1 (the master itself) to N replicas (N-1
+additional slaves).
+
+In our example cluster with nodes A, B, C, if node B fails the cluster is not
+able to continue, since we no longer have a way to serve hash slots in the
+range 5501-11000.
+
+However if when the cluster is created (or at a later time) we add a slave
+node to every master, so that the final cluster is composed of A, B, C
+that are masters, and A1, B1, C1 that are slaves, the system is able to
+continue if node B fails.
+
+Node B1 replicates B, so the cluster will elect node B1 as the new master
+and will continue to operate correctly.
+
+However note that if nodes B and B1 fail at the same time Redis Cluster is not
+able to continue to operate.
+
+Redis Cluster consistency guarantees
+---
+
+Redis Cluster is not able to guarantee **strong consistency**. In practical
+terms this means that under certain conditions it is possible that Redis
+Cluster will forget a write that was acknowledged by the system.
+
+The first reason why Redis Cluster can lose writes is because it uses
+asynchronous replication. This means that during writes the following
+happens:
+
+* Your client writes to the master B.
+* The master B replies OK to your client.
+* The master B propagates the write to its slaves B1, B2 and B3.
+
+As you can see B does not wait for an acknowledgement from B1, B2, B3 before
+replying to the client, since this would be a prohibitive latency penalty
+for Redis, so if your client writes something, B acknowledges the write,
+but crashes before being able to send the write to its slaves, one of the
+slaves can be promoted to master, losing the write forever.
+
+This is **very similar to what happens** with most databases that are
+configured to flush data to disk every second, so it is a scenario you
+are already able to reason about because of past experiences with traditional
+database systems not involving distributed systems. Similarly you can
+improve consistency by forcing the database to flush data on disk before
+replying to the client, but this usually results in prohibitively low
+performance.
+
+Basically there is a trade-off between performance and consistency.
+
+Note: Redis Cluster in the future will allow users to perform synchronous
+writes when absolutely needed.
+
+There is another scenario where Redis Cluster will lose writes, which happens
+during a network partition where a client is isolated with a minority of
+instances including at least a master.
+
+Take as an example our 6 nodes cluster composed of A, B, C, A1, B1, C1,
+with 3 masters and 3 slaves. There is also a client, that we will call Z1.
+
+After a partition occurs, it is possible that in one side of the
+partition we have A, C, A1, B1, C1, and in the other side we have B and Z1.
+
+Z1 is still able to write to B, that will accept its writes. If the
+partition heals in a very short time, the cluster will continue normally.
+However if the partition lasts enough time for B1 to be promoted to master
+in the majority side of the partition, the writes that Z1 is sending to B
+will be lost.
+
+Note that there is a maximum window for the amount of writes Z1 will be able
+to send to B: if enough time has elapsed for the majority side of the
+partition to elect a slave as master, every master node in the minority
+side stops accepting writes.
+
+This amount of time is a very important configuration directive of Redis
+Cluster, and is called the **node timeout**.
+
+After node timeout has elapsed, a master node is considered to be failing,
+and can be replaced by one of its replicas.
+Similarly, if after node timeout has elapsed a master node is not able
+to sense the majority of the other master nodes, it enters an error state
+and stops accepting writes.
+
+Creating and using a Redis Cluster
+===
+
+To create a cluster, the first thing we need is to have a few empty
+Redis instances running in **cluster mode**. This basically means that
+clusters are not created using normal Redis instances, but a special mode
+needs to be configured so that the Redis instance will enable the Cluster
+specific features and commands.
+
+The following is a minimal Redis cluster configuration file:
+
+```
+port 7000
+cluster-enabled yes
+cluster-config-file nodes.conf
+cluster-node-timeout 5000
+appendonly yes
+```
+
+As you can see what enables the cluster mode is simply the `cluster-enabled`
+directive. Every instance also contains the path of a file where the
+configuration for this node is stored, which by default is `nodes.conf`.
+This file is never touched by humans, it is simply generated at startup
+by the Redis Cluster instances, and updated every time it is needed.
+
+Note that the **minimal cluster** that works as expected requires
+at least three master nodes. For your first tests it is strongly suggested
+to start a six nodes cluster with three masters and three slaves.
+
+To do so, enter a new directory, and create the following directories,
+named after the port number of the instance we'll run inside each of them.
+
+Something like:
+
+```
+mkdir cluster-test
+cd cluster-test
+mkdir 7000 7001 7002 7003 7004 7005
+```
+
+Create a `redis.conf` file inside each of the directories, from 7000 to 7005.
+As a template for your configuration file just use the small example above,
+but make sure to replace the port number `7000` with the right port number
+according to the directory name.
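The directory and file layout just described can also be scripted. The following is a hedged Python helper (not part of Redis or redis-trib; the function name is illustrative, and the paths and port range are the ones used in this tutorial) that writes the six per-instance `redis.conf` files:

```python
import os

TEMPLATE = """port {port}
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
"""

def make_cluster_dirs(base="cluster-test", ports=range(7000, 7006)):
    # One directory per instance, named after its port,
    # each containing its own redis.conf built from the template above.
    for port in ports:
        path = os.path.join(base, str(port))
        os.makedirs(path, exist_ok=True)
        with open(os.path.join(path, "redis.conf"), "w") as f:
            f.write(TEMPLATE.format(port=port))
```

After calling `make_cluster_dirs()` you can start each instance from its own directory, exactly as shown below.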
+
+Now copy your redis-server executable, **compiled from the latest sources in the unstable branch at Github**, into the `cluster-test` directory, and finally open 6 terminal tabs in your favorite terminal application.
+
+Start every instance like this, one per tab:
+
+```
+cd 7000
+../redis-server ./redis.conf
+```
+
+As you can see from the logs of every instance, since no `nodes.conf` file
+existed, every node assigns itself a new ID.
+
+    [82462] 26 Nov 11:56:55.329 * No cluster configuration found, I'm 97a3a64667477371c4479320d683e4c8db5858b1
+
+This ID will be used forever by this specific instance in order for the instance
+to have a unique name in the context of the cluster. All the other nodes
+remember this node by this specific ID, and not by IP or port: these can
+change, but the unique node identifier will never change for the whole life
+of the node. We call this identifier simply **Node ID**.
+
+Creating the cluster
+---
+
+Now that we have a number of instances running, we need to create our
+cluster by writing some meaningful configuration to the nodes.
+
+This is very easy to accomplish as we are helped by the Redis Cluster
+command line interface utility called `redis-trib`, a Ruby program
+that calls CLUSTER commands on the instances in order to create new clusters,
+or to check and reshard an existing cluster.
+
+The `redis-trib` utility is in the `src` directory of the Redis source code
+distribution. To create your cluster simply type:
+
+    ./redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
+
+The command used here is **create**, since we want to create a new cluster.
+The option `--replicas 1` means that we want a slave for every master created.
+The other arguments are the list of addresses of the instances I want to use
+to create the new cluster.
+
+Given our requirements, the only possible setup is a cluster with
+3 masters and 3 slaves.
+
+Redis-trib will propose a configuration; accept it by typing **yes**.
+The cluster will be configured and *joined*, that is, instances will be
+bootstrapped into talking with each other. Finally, if everything went well
+you'll see a message like this:
+
+    [OK] All 16384 slots covered
+
+This means that there is at least a master instance serving each of the
+16384 slots available.
+
+Playing with the cluster
+---
+
+At this stage one of the problems with Redis Cluster is the lack of
+client library implementations.
+
+I'm aware of the following implementations:
+
+* [redis-rb-cluster](http://github.com/antirez/redis-rb-cluster) is a Ruby implementation written by me (@antirez) as a reference for other languages. It is a simple wrapper around the original redis-rb, implementing the minimal semantics to talk with the cluster efficiently.
+* [redis-py-cluster](https://github.com/Grokzen/redis-py-cluster) appears to be a port of redis-rb-cluster to Python. Not recently updated (last commit 6 months ago), however it may be a starting point.
+* The popular [Predis](https://github.com/nrk/predis) used to have some Redis Cluster support at the very early stages of Redis Cluster, however I'm currently not sure about the completeness of the support, nor whether the support is designed to work with recent versions of Redis Cluster (at some point we changed the number of hash slots from 4k to 16k).
+* The `redis-cli` utility in the unstable branch of the Redis repository at Github implements a very basic cluster support when started with the `-c` switch.
+
+Long story short, an easy way to test Redis Cluster is either to try the
+[redis-rb-cluster](http://github.com/antirez/redis-rb-cluster) Ruby client or
+simply the `redis-cli` command line utility.
The following is an example
+of interaction using the latter:
+
+```
+$ redis-cli -c -p 7000
+redis 127.0.0.1:7000> set foo bar
+-> Redirected to slot [12182] located at 127.0.0.1:7002
+OK
+redis 127.0.0.1:7002> set hello world
+-> Redirected to slot [866] located at 127.0.0.1:7000
+OK
+redis 127.0.0.1:7000> get foo
+-> Redirected to slot [12182] located at 127.0.0.1:7002
+"bar"
+redis 127.0.0.1:7000> get hello
+-> Redirected to slot [866] located at 127.0.0.1:7000
+"world"
+```
+
+The redis-cli cluster support is very basic, so it always uses the fact that
+Redis Cluster nodes are able to redirect a client to the right node.
+A serious client is able to do better than that, and cache the map between
+hash slots and node addresses, to directly use the right connection to the
+right node. The map is refreshed only when something changed in the cluster
+configuration, for example after a failover or after the system administrator
+changed the cluster layout by adding or removing nodes.
+
+Writing an example app with redis-rb-cluster
+---
+
+Before going forward and showing how to operate the Redis Cluster, doing things
+like a failover or a resharding, we need to create some example application,
+or at least to be able to understand the semantics of a simple Redis Cluster
+client interaction.
+
+In this way we can run an example and at the same time try to make nodes
+fail, or start a resharding, to see how Redis Cluster behaves under real
+world conditions. It is not very helpful to see what happens while nobody
+is writing to the cluster.
+
+This section explains some basic usage of redis-rb-cluster showing two
+examples.
The first is the following, and is the `example.rb` file inside
+the redis-rb-cluster distribution:
+
+```
+   1  require './cluster'
+   2
+   3  startup_nodes = [
+   4      {:host => "127.0.0.1", :port => 7000},
+   5      {:host => "127.0.0.1", :port => 7001}
+   6  ]
+   7  rc = RedisCluster.new(startup_nodes,32,:timeout => 0.1)
+   8
+   9  last = false
+  10
+  11  while not last
+  12      begin
+  13          last = rc.get("__last__")
+  14          last = 0 if !last
+  15      rescue => e
+  16          puts "error #{e.to_s}"
+  17          sleep 1
+  18      end
+  19  end
+  20
+  21  ((last.to_i+1)..1000000000).each{|x|
+  22      begin
+  23          rc.set("foo#{x}",x)
+  24          puts rc.get("foo#{x}")
+  25          rc.set("__last__",x)
+  26      rescue => e
+  27          puts "error #{e.to_s}"
+  28      end
+  29      sleep 0.1
+  30  }
+```
+
+The application does a very simple thing: it sets keys in the form `foo<number>` to `number`, one after the other. So if you run the program the result is the
+following stream of commands:
+
+* SET foo0 0
+* SET foo1 1
+* SET foo2 2
+* And so forth...
+
+The program looks more complex than it should, as it is designed to
+show errors on the screen instead of exiting with an exception, so every
+operation performed with the cluster is wrapped by `begin` `rescue` blocks.
+
+**Line 7** is the first interesting line in the program. It creates the
+Redis Cluster object, using as arguments a list of *startup nodes*, the maximum
+number of connections this object is allowed to take against different nodes,
+and finally the timeout after which a given operation is considered to have failed.
+
+The startup nodes don't need to be all the nodes of the cluster. The important
+thing is that at least one node is reachable. Also note that redis-rb-cluster
+updates this list of startup nodes as soon as it is able to connect with the
+first node. You should expect such a behavior with any other serious client.
+
+Now that we have the Redis Cluster object instance stored in the **rc** variable
+we are ready to use the object as if it were a normal Redis object instance.
+
+This is exactly what happens in **lines 11 to 19**: when we restart the example
+we don't want to start again with `foo0`, so we store the counter inside
+Redis itself. The code above is designed to read this counter, or if the
+counter does not exist, to assign it the value of zero.
+
+However note how it is a while loop, as we want to try again and again even
+if the cluster is down and is returning errors. Normal applications don't need
+to be so careful.
+
+**Lines 21 to 30** start the main loop where the keys are set or
+an error is displayed.
+
+Note the `sleep` call at the end of the loop. In your tests you can remove
+the sleep if you want to write to the cluster as fast as possible (relative
+to the fact that this is a busy loop without real parallelism of course, so
+you'll get the usual 10k ops/second in the best of conditions).
+
+Normally writes are slowed down in order for the example application to be
+easier to follow by humans.
+
+Starting the application produces the following output:
+
+```
+ruby ./example.rb
+1
+2
+3
+4
+5
+6
+7
+8
+9
+^C (I stopped the program here)
+```
+
+This is not a very interesting program and we'll use a better one in a moment,
+but we can already see what happens during a resharding while the program
+is running.
+
+Resharding the cluster
+---
+
+Now we are ready to try a cluster resharding. To do this, please
+keep the example.rb program running, so that you can see if there is some
+impact on the program running. Also you may want to comment out the `sleep`
+call in order to have some more serious write load during resharding.
+
+Resharding basically means to move hash slots from a set of nodes to another
+set of nodes, and like cluster creation it is accomplished using the
+redis-trib utility.
+
+To start a resharding just type:
+
+    ./redis-trib.rb reshard 127.0.0.1:7000
+
+You only need to specify a single node; redis-trib will find the other nodes
+automatically.
For now redis-trib is only able to reshard with the administrator support,
you can't just say move 5% of slots from this node to the other one (but
this is pretty trivial to implement). So it starts with questions. The first
is how much a big resharding do you want to do:

    How many slots do you want to move (from 1 to 16384)?

We can try to reshard 1000 hash slots, that should already contain a non
trivial amount of keys if the example is still running without the sleep
call.

Then redis-trib needs to know what is the target of the resharding.
I'll use the first master node, that is, 127.0.0.1:7000, but I need
to specify the Node ID of the instance. This was already printed in a
list by redis-trib, but I can always find the ID of a node with the following
command if I need to:

```
$ redis-cli -p 7000 cluster nodes | grep myself
97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5460
```

Ok, so my target node is 97a3a64667477371c4479320d683e4c8db5858b1.

Now you'll be asked from which nodes you want to take those keys.
I'll just type `all` in order to take a few hash slots from all the
other master nodes.

After the final confirmation you'll see a message for every slot that
redis-trib is going to move from one node to another, and a dot will be
printed for every actual key moved from one side to the other.

While the resharding is in progress you should be able to see your
example program running unaffected. You can stop and restart it multiple times
during the resharding if you want.

At the end of the resharding, you can test the health of the cluster with
the following command:

    ./redis-trib.rb check 127.0.0.1:7000

All the slots will be covered as usual, but this time the master at
127.0.0.1:7000 will have more hash slots, something around 6461.

A more interesting example application
---

So far so good, but the example application we used is not very good.
It writes to the cluster uncritically, without ever checking whether what was
written is actually there.

From our point of view the cluster receiving the writes could just always
write the key `foo` to `42` on every operation, and we would not notice it at
all.

So in the redis-rb-cluster repository there is a more interesting application
called `consistency-test.rb`. It is a much more interesting application
as it uses a set of counters, by default 1000, and sends `INCR` commands
in order to increment the counters.

However, instead of just writing, the application does two additional things:

* When a counter is updated using `INCR`, the application remembers the write.
* It also reads a random counter before every write, and checks if the value is what it expects it to be, comparing it with the value it has in memory.

What this means is that this application is a simple **consistency checker**,
and is able to tell you if the cluster lost some writes, or if it accepted
a write that we did not receive an acknowledgement for. In the first case we'll
see a counter having a value that is smaller than the one we remember, while
in the second case the value will be greater.

Running the consistency-test application produces a line of output every
second:

```
$ ruby consistency-test.rb
925 R (0 err) | 925 W (0 err) |
5030 R (0 err) | 5030 W (0 err) |
9261 R (0 err) | 9261 W (0 err) |
13517 R (0 err) | 13517 W (0 err) |
17780 R (0 err) | 17780 W (0 err) |
22025 R (0 err) | 22025 W (0 err) |
25818 R (0 err) | 25818 W (0 err) |
```

The line shows the number of **R**eads and **W**rites performed, and the
number of errors (queries not accepted because of errors, since the system was
not available).

If some inconsistency is found, new lines are added to the output.
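The essence of such a checker can be captured in a few lines: keep a client-side map of the value each counter *should* have, and compare it against the store on every read. The sketch below is a simplified in-memory stand-in (the class name and a plain `Hash` standing in for the cluster are mine, not the real consistency-test.rb, which also tracks unacknowledged writes across many counters):

```ruby
class ConsistencyChecker
  attr_reader :lost_writes

  def initialize(store)
    @store = store           # the cluster; here just a plain Hash
    @expected = Hash.new(0)  # what each counter should be, per our own writes
    @lost_writes = 0
  end

  # INCR a counter and remember the write on the client side.
  def incr(key)
    @store[key] = @store.fetch(key, 0) + 1
    @expected[key] += 1
  end

  # Read a counter back and compare it with what we remember.
  def check(key)
    actual = @store.fetch(key, 0)
    @lost_writes += @expected[key] - actual if actual < @expected[key]
    @expected[key] = actual  # resync so the same loss is not counted twice
  end
end

store = {}
checker = ConsistencyChecker.new(store)
5.times { checker.incr("key_217") }

store["key_217"] = 0       # simulate an external "SET key_217 0"
checker.check("key_217")
puts checker.lost_writes   # => 5
```

A counter found *greater* than expected would indicate the opposite anomaly: a write the cluster applied without our client ever receiving an acknowledgement.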
This is what happens, for example, if I reset a counter manually while
the program is running:

```
$ redis 127.0.0.1:7000> set key_217 0
OK

(in the other tab I see...)

94774 R (0 err) | 94774 W (0 err) |
98821 R (0 err) | 98821 W (0 err) |
102886 R (0 err) | 102886 W (0 err) | 114 lost |
107046 R (0 err) | 107046 W (0 err) | 114 lost |
```

When I set the counter to 0, its real value was 114, so the program reports
114 lost writes (`INCR` commands that the cluster does not remember).

This program is much more interesting as a test case, so we'll use it
to test the Redis Cluster failover.

Testing the failover
---

Note: during this test, you should keep a tab open with the consistency test
application running.

In order to trigger the failover, the simplest thing we can do (that is also
the semantically simplest failure that can occur in a distributed system)
is to crash a single process, in our case a single master.

We can identify a master and crash it with the following command:

```
$ redis-cli -p 7000 cluster nodes | grep master
3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385482984082 0 connected 5960-10921
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 master - 0 1385482983582 0 connected 11423-16383
97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5959 10922-11422
```

Ok, so 7000, 7001, and 7002 are masters. Let's crash node 7002 with the
**DEBUG SEGFAULT** command:

```
$ redis-cli -p 7002 debug segfault
Error: Server closed the connection
```

Now we can look at the output of the consistency test to see what it reported.

```
18849 R (0 err) | 18849 W (0 err) |
23151 R (0 err) | 23151 W (0 err) |
27302 R (0 err) | 27302 W (0 err) |

... many error warnings here ...
+ +29659 R (578 err) | 29660 W (577 err) | +33749 R (578 err) | 33750 W (577 err) | +37918 R (578 err) | 37919 W (577 err) | +42077 R (578 err) | 42078 W (577 err) | +``` + +As you can see during the failover the system was not able to accept 578 reads and 577 writes, however no inconsistency was created in the database. This may +sound unexpected as in the first part of this tutorial we stated that Redis +Cluster can lost writes during the failover because it uses synchronous +replication. What we did not said is that this is not very likely to happen +because Redis sends the reply to the client, and the commands to replicate +to the slaves, about at the same time, so there is a very small window to +lose data. However the fact that it is hard to trigger does not mean that it +is impossible, so this does not change the consistency guarantees provided +by Redis cluster. + +We can now check what is the cluster setup after the failover (note that +in the meantime I restarted the crashed instance so that it rejoins the +cluster as a slave): + +``` +$ redis-cli -p 7000 cluster nodes +3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385503418521 0 connected +a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4479320d683e4c8db5858b1 0 1385503419023 0 connected +97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5959 10922-11422 +3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master - 0 1385503419023 3 connected 11423-16383 +3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385503417005 0 connected 5960-10921 +2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385503418016 3 connected +``` + +Now the masters are running on ports 7000, 7001 and 7005. What was previously +a master, that is the Redis instance running on port 7002, is now a slave of +7005. 
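Since `CLUSTER NODES` output is just line-oriented text, inspecting the post-failover state programmatically is straightforward. For example, to extract the masters from the output shown above (the sample is copied from this tutorial; a real script would read the output of `redis-cli -p 7000 cluster nodes` instead of a literal string):

```ruby
# Sample CLUSTER NODES output, copied from the tutorial above.
output = <<~NODES
  3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385503418521 0 connected
  a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4479320d683e4c8db5858b1 0 1385503419023 0 connected
  97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5959 10922-11422
  3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master - 0 1385503419023 3 connected 11423-16383
  3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385503417005 0 connected 5960-10921
  2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385503418016 3 connected
NODES

# The third field holds the comma-separated flags, e.g. "myself,master".
masters = output.lines.select do |line|
  flags = line.split[2]
  flags.split(",").include?("master")
end

# Addresses of the masters (":0" is the "myself" node, which does not yet
# know its own address in this Redis version).
puts masters.map { |l| l.split[1] }
```

Running this against the sample confirms the text: exactly three masters remain, and the old 7002 master now appears as a slave.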
The output of the `CLUSTER NODES` command may look intimidating, but it is actually pretty simple, and is composed of the following tokens:

* Node ID
* ip:port
* flags: master, slave, myself, fail, ...
* If it is a slave, the Node ID of its master.
* Time of the last pending PING still waiting for a reply.
* Time of the last PONG received.
* Configuration epoch for this node (see the Cluster specification).
* Status of the link to this node.
* Slots served...

Adding a new node
---

Adding a new node is basically the process of adding an empty node and then
moving some data into it, in case it is a new master, or telling it to
set up as a replica of a known node, in case it is a slave.

We'll show both, starting with the addition of a new master instance.

In both cases the first step to perform is **adding an empty node**.

This is as simple as starting a new node on port 7006 (we already used
ports 7000 to 7005 for our existing 6 nodes) with the same configuration
used for the other nodes, except for the port number. To conform with the
setup we used for the previous nodes:

* Create a new tab in your terminal application.
* Enter the `cluster-test` directory.
* Create a directory named `7006`.
* Create a redis.conf file inside, similar to the one used for the other nodes but using 7006 as the port number.
* Finally start the server with `../redis-server ./redis.conf`.

At this point the server should be running.

Now we can use **redis-trib** as usual in order to add the node to
the existing cluster.

    ./redis-trib.rb addnode 127.0.0.1:7006 127.0.0.1:7000

As you can see, I used the **addnode** command, specifying the address of the
new node as the first argument, and the address of a random existing node in the
cluster as the second argument.
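Coming back to the token list above: the tokens map directly onto the whitespace-separated columns of each `CLUSTER NODES` line, so a line can be parsed with a few lines of Ruby. This is a sketch, with field names of my own choosing matched to the list:

```ruby
def parse_cluster_node(line)
  fields = line.split
  {
    node_id:       fields[0],
    address:       fields[1],
    flags:         fields[2].split(","),               # e.g. ["myself", "master"]
    master_id:     fields[3] == "-" ? nil : fields[3], # "-" when not a slave
    ping_sent:     fields[4].to_i,
    pong_received: fields[5].to_i,
    config_epoch:  fields[6].to_i,
    link_state:    fields[7],
    slots:         fields[8..-1]                       # zero or more slot ranges
  }
end

line = "3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master " \
       "- 0 1385503419023 3 connected 11423-16383"
node = parse_cluster_node(line)
puts node[:address]   # 127.0.0.1:7005
puts node[:slots]     # 11423-16383
```

A slave line simply has no trailing slot ranges and carries its master's Node ID in the fourth field.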
In practical terms redis-trib here did very little to help us: it just
sent a `CLUSTER MEET` message to the node, something that is also possible
to accomplish manually. However redis-trib also checks the state of the
cluster before operating, which is an advantage, and it will be improved
over time in order to be able to roll back changes when needed, or to help
the user fix a messed-up cluster when there are issues.

Now we can connect to the new node to see if it really joined the cluster:

```
redis 127.0.0.1:7006> cluster nodes
3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385543178575 0 connected 5960-10921
3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385543179583 0 connected
f093c80dde814da99c5cf72a7dd01590792b783b :0 myself,master - 0 0 0 connected
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543178072 3 connected
a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4479320d683e4c8db5858b1 0 1385543178575 0 connected
97a3a64667477371c4479320d683e4c8db5858b1 127.0.0.1:7000 master - 0 1385543179080 0 connected 0-5959 10922-11422
3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master - 0 1385543177568 3 connected 11423-16383
```

Note that since this node is already connected to the cluster, it is already
able to redirect client queries correctly and is, generally speaking, part of
the cluster. However it has two peculiarities compared to the other masters:

* It holds no data, as it has no assigned hash slots.
* Because it is a master without assigned slots, it does not participate in the election process when a slave wants to become a master.

Now it is possible to assign hash slots to this node using the resharding
feature of `redis-trib`.
It is basically useless to show this as we already +did in a previous section, so instead I want to cover the case where we want +to turn this instance into a replica (a slave) of some other master. + +For example in order to add a replica for the node 127.0.0.1:7005 that is +currently serving hash slots in the range 11423-16383, that has a Node ID +3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e, all I need to do is to connect +with the new node that we just added and send a simple command: + + redis 127.0.0.1:7006> cluster replicate 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e + +That's it. Now we have a new replica for this set of hash slots, and all +the other nodes in the cluster already know (after a few seconds needed to +update their config). We can verify with the following command: + +``` +$ redis-cli -p 7000 cluster nodes | grep slave | grep 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e +f093c80dde814da99c5cf72a7dd01590792b783b 127.0.0.1:7006 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543617702 3 connected +2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543617198 3 connected +``` + +The node 3c3a0c... now has two slaves, running on ports 7002 (the existing one) and 7006 (the new one). + +Removing a node +--- + +Work in progress. From 4103f8cae754f085201208fd3862b696db023ab8 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 27 Nov 2013 10:18:44 +0100 Subject: [PATCH 0398/2880] Long line split in cluster tutorial. --- topics/cluster-tutorial.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index b6b7da958f..68aea3cb75 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -228,7 +228,8 @@ check or reshard an existing cluster. The `redis-trib` utility is in the `src` directory of the Redis source code distribution. 
To create your cluster simply type: - ./redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 + ./redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 \ + 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 The command used here is **create**, since we want to create a new cluster. The option `--replicas 1` means that we want a slave for every master created. From 2fc3e9bed576ffe34108ece85c9bcf9b5b69c045 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 27 Nov 2013 10:27:33 +0100 Subject: [PATCH 0399/2880] typo in cluster-tutorial. --- topics/cluster-tutorial.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index 68aea3cb75..428d96de37 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -6,7 +6,7 @@ complex to understand distributed systems concepts. It provides instructions about how to setup a cluster, test, and operate it, without going into the details that are covered in the [Redis Cluster specification](/topics/cluster-spec) but just describing -how the system behaves from the point of view of the cluster. +how the system behaves from the point of view of the user. Note that if you plan to run a serious Redis Cluster deployment, the more formal specification is an highly suggested reading. From 09d7756a443b51145a3de8f54b822353824b4ae3 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 27 Nov 2013 10:29:15 +0100 Subject: [PATCH 0400/2880] typo #2 in cluster-tutorial. 
--- topics/cluster-tutorial.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index 428d96de37..bfc73b438c 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -42,7 +42,7 @@ Redis Cluster does not use consistency hashing, but a different form of sharding where every key is conceptually part of what we call an **hash slot**. There are 16384 hash slots in Redis Cluster, and to compute what is the hash -slot of a given key, we simply take the CRC16 of the hash slot modulo +slot of a given key, we simply take the CRC16 of the key modulo 16384. Every node in a Redis Cluster is responsible of a subset of the hash slots, From 5d3346cf09fd0bcceb7c81bf138661c065ba399c Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 27 Nov 2013 10:31:52 +0100 Subject: [PATCH 0401/2880] typo #3 in cluster-tutorial. --- topics/cluster-tutorial.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index bfc73b438c..92a9c52a5e 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -100,7 +100,7 @@ happens: * The master B replies OK to your client. * The master B propagates the write to its slaves B1, B2 and B3. -As you can see B does not write for an acknowledge from B1, B2, B3 before +As you can see B does not wait for an acknowledge from B1, B2, B3 before replying to the client, since this would be a prohibitive latency penalty for Redis, so if your client writes something, B acknowledges the write, but crashes before being able to send the write to its slaves, one of the From f0d2619a289bc76c4142361777ba41e7fc88291a Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 27 Nov 2013 10:41:40 +0100 Subject: [PATCH 0402/2880] A few sentences improved in cluster-tutorial. 
--- topics/cluster-tutorial.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index 92a9c52a5e..679f176da0 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -203,16 +203,16 @@ cd 7000 ../redis-server ./redis.conf ``` -As you can see from the logs of every instance, since no `nodes.conf` file was -existing, every node assigns itself a new ID. +As you can see from the logs of every instance, since no `nodes.conf` file +existed, every node assigns itself a new ID. [82462] 26 Nov 11:56:55.329 * No cluster configuration found, I'm 97a3a64667477371c4479320d683e4c8db5858b1 This ID will be used forever by this specific instance in order for the instance -to have an unique name in the context of the cluster. All the other nodes -remember the other nodes by this specific ID, and not by IP or port, these can -change, but the unique node identifier will never change for all the life -of the node. We call this identifier simply **Node ID**. +to have an unique name in the context of the cluster. Every node +remembers every other node using this IDs, and not by IP or port. +IP addresses and ports may change, but the unique node identifier will never +change for all the life of the node. We call this identifier simply **Node ID**. Creating the cluster --- @@ -221,9 +221,9 @@ Now that we have a number of instances running, we need to create our cluster writing some meaningful configuration to the nodes. This is very easy to accomplish as we are helped by the Redis Cluster -command line interface utility called `redis-trib`, that is a Ruby program -calling CLUSTER commands in the instances in order to create new clusters, -check or reshard an existing cluster. +command line utility called `redis-trib`, that is a Ruby program +executing special commands in the instances in order to create new clusters, +check or reshard an existing cluster, and so forth. 
The `redis-trib` utility is in the `src` directory of the Redis source code distribution. To create your cluster simply type: From f9a124e70fad9a4571592f24f86659ec8af5b1bd Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 27 Nov 2013 10:51:35 +0100 Subject: [PATCH 0403/2880] Grammar fix in cluster-tutorial. --- topics/cluster-tutorial.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index 679f176da0..63c3fefdc7 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -425,7 +425,7 @@ To start a resharding just type: You only need to specify a single node, redis-trib will find the other nodes automatically. -For now redis-trib is only able to reshard with the administrator support, +Currently redis-trib is only able to reshard with the administrator support, you can't just say move 5% of slots from this node to the other one (but this is pretty trivial to implement). So it starts with questions. The first is how much a big resharding do you want to do: From 9f9920e40f7243def0ea39ad2dd4436b293f5175 Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 27 Nov 2013 10:55:40 +0100 Subject: [PATCH 0404/2880] Sentence more clear in cluster-tutorial. --- topics/cluster-tutorial.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index 63c3fefdc7..3151195b75 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -436,7 +436,8 @@ We can try to reshard 1000 hash slots, that should already contain a non trivial amount of keys if the example is still running without the sleep call. -Then redis-trib needs to know what is the target of the resharding. +Then redis-trib needs to know what is the target of the resharding, that is, +the node that will receive the hash slots. I'll use the first master node, that is, 127.0.0.1:7000, but I need to specify the Node ID of the instance. 
This was already printed in a list by redis-trib, but I can always find the ID of a node with the following From 0b52b85b004a6240582e31ffd640fd5a4097412a Mon Sep 17 00:00:00 2001 From: antirez Date: Wed, 27 Nov 2013 11:03:37 +0100 Subject: [PATCH 0405/2880] syncrhonous -> asynchronous. --- topics/cluster-tutorial.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index 3151195b75..19043e83a5 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -581,7 +581,7 @@ Now we can look at the output of the consistency test to see what it reported. As you can see during the failover the system was not able to accept 578 reads and 577 writes, however no inconsistency was created in the database. This may sound unexpected as in the first part of this tutorial we stated that Redis -Cluster can lost writes during the failover because it uses synchronous +Cluster can lost writes during the failover because it uses asynchronous replication. What we did not said is that this is not very likely to happen because Redis sends the reply to the client, and the commands to replicate to the slaves, about at the same time, so there is a very small window to From 6f0a9b62b0536304a198b29e07d7275103b999ae Mon Sep 17 00:00:00 2001 From: Jan-Erik Rediger Date: Sat, 30 Nov 2013 14:50:14 +0100 Subject: [PATCH 0406/2880] Update notice about keyspace notifications --- topics/notifications.md | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/topics/notifications.md b/topics/notifications.md index a10932fcbd..7cee600f62 100644 --- a/topics/notifications.md +++ b/topics/notifications.md @@ -1,9 +1,7 @@ Redis Keyspace Notifications === -**IMPORTANT** Keyspace notifications is a feature only available in development -versions of Redis. This documentation and the implementation of the feature are -likely to change in the next weeks. 
+**IMPORTANT** Keyspace notifications is a feature available since 2.8.0 Feature overview --- From e9bca400841626f9a8ff0620425e37ac5da7c775 Mon Sep 17 00:00:00 2001 From: Jonathan Lassoff Date: Sat, 30 Nov 2013 10:23:35 -0800 Subject: [PATCH 0407/2880] Fix small mis-spelling. --- topics/cluster-tutorial.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/topics/cluster-tutorial.md b/topics/cluster-tutorial.md index 19043e83a5..cd3f62cab7 100644 --- a/topics/cluster-tutorial.md +++ b/topics/cluster-tutorial.md @@ -46,7 +46,7 @@ slot of a given key, we simply take the CRC16 of the key modulo 16384. Every node in a Redis Cluster is responsible of a subset of the hash slots, -so for example you may have a cluster wit 3 nodes, where: +so for example you may have a cluster with 3 nodes, where: * Node A contains hash slots from 0 to 5500. * Node B contains hash slots from 5501 to 11000. From 15ba3deb536a4482a828e64aabe54fe7d84c3b52 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 2 Dec 2013 16:16:46 +0100 Subject: [PATCH 0408/2880] Added authors field for Scredis in clients.json. --- clients.json | 1 + 1 file changed, 1 insertion(+) diff --git a/clients.json b/clients.json index 71e5dad987..a37f58a377 100644 --- a/clients.json +++ b/clients.json @@ -446,6 +446,7 @@ "language": "Scala", "repository": "https://github.com/Livestream/scredis", "description": "Scredis is an advanced Redis client entirely written in Scala. Used in production at http://Livestream.com.", + "authors": ["livestream"], "active": true }, From 719fc979c102ca5f7930e8839d1a1aea1686dcda Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 6 Dec 2013 23:43:10 +0100 Subject: [PATCH 0409/2880] Nydus added to tools section. 
--- tools.json | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/tools.json b/tools.json index 038ef20f32..d09920c570 100644 --- a/tools.json +++ b/tools.json @@ -296,5 +296,13 @@ "repository": "https://github.com/uglide/RedisDesktopManager", "description": "Cross-platform desktop GUI management tool for Redis", "authors": ["u_glide"] + }, + { + "name": Nydus", + "language": "Python", + "url": "https://pypi.python.org/pypi/nydus", + "repository": "https://pypi.python.org/pypi/nydus", + "description": "Connection clustering and routing for Redis and Python.", + "authors": ["@zeeg"] } ] From a4d5623d3e60643340fe8c6d1d258a87413d3f11 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 6 Dec 2013 23:44:22 +0100 Subject: [PATCH 0410/2880] json fix --- tools.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools.json b/tools.json index d09920c570..4518441b57 100644 --- a/tools.json +++ b/tools.json @@ -298,7 +298,7 @@ "authors": ["u_glide"] }, { - "name": Nydus", + "name": "Nydus", "language": "Python", "url": "https://pypi.python.org/pypi/nydus", "repository": "https://pypi.python.org/pypi/nydus", From 465aa903da13869ea52323c0f7d5a6e6077bcfd6 Mon Sep 17 00:00:00 2001 From: antirez Date: Fri, 6 Dec 2013 23:45:44 +0100 Subject: [PATCH 0411/2880] Twitter ref fixed in Nydus tool. --- tools.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools.json b/tools.json index 4518441b57..c85a0e6d4e 100644 --- a/tools.json +++ b/tools.json @@ -303,6 +303,6 @@ "url": "https://pypi.python.org/pypi/nydus", "repository": "https://pypi.python.org/pypi/nydus", "description": "Connection clustering and routing for Redis and Python.", - "authors": ["@zeeg"] + "authors": ["zeeg"] } ] From 83fb3acba896c25987a3561ee5cdaaaaf287ba61 Mon Sep 17 00:00:00 2001 From: antirez Date: Mon, 9 Dec 2013 11:30:42 +0100 Subject: [PATCH 0412/2880] UPDATE messages added to the Cluster spec. 
This is already part of the implementatio but was not covered in the spec. --- topics/cluster-spec.md | 25 +++++++++++++++++++++++++ 1 file changed, 25 insertions(+) diff --git a/topics/cluster-spec.md b/topics/cluster-spec.md index d3389a5c4e..c8e4cc2628 100644 --- a/topics/cluster-spec.md +++ b/topics/cluster-spec.md @@ -612,6 +612,31 @@ For this reason there is a second rule that is used in order to rebind an hash s Because of the second rule eventually all the nodes in the cluster will agree that the owner of a slot is the one with the greatest `configEpoch` among the nodes advertising it. +UPDATE messages +=== + +The described system for the propagation of hash slots configurations +only uses the normal ping and pong messages exchanged by nodes. + +It also requires that there is a node that is either a slave or a master +for a given hash slot and has the updated configuration, because nodes +send their own configuration in pong and pong packets headers. + +However sometimes a node may recover after a partition in a setup where +it is the only node serving a given hash slot, but with an old configuration. + +Example: a given hash slot is served by node A and B. A is the master, and at some point fails, so B is promoted as master. Later B fails as well, and the cluster has no way to recover since there are no more replicas for this hash slot. + +However A may recover some time later, and rejoin the cluster with an old configuration in which it was writable as a master. There is no replica that can update its configuration. This is the goal of UPDATE messages: when a node detects that another node is advertising its hash slots with an old configuration, it sends the node an UPDATE message with the ID of the new node serving the slots and the set of hash slots (send as a bitmap) that it is serving. 
+ +NOTE: while currently configuration updates via ping / pong and UPDATE share the +same code path, there is a functional overlap between the two in the way they +update a configuration of a node with stale informations. However the two +mechanisms are both useful because ping / pong messages after some time are +able to populate the hash slots routing table of a new node, while UPDATE +messages are only sent when an old configuration is detected, and only +cover the information needed to fix the wrong configuration. + Publish/Subscribe === From 50a9f98a8822b4267786d147ff5adadb225179d0 Mon Sep 17 00:00:00 2001 From: Lennie Date: Sun, 15 Dec 2013 10:06:53 +0100 Subject: [PATCH 0413/2880] Update link to example 2.8 redis.conf in config set command documentation Maybe it is time to update the documentation to point to the example redis.conf of the stable (2.8) version. --- commands/config set.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/commands/config set.md b/commands/config set.md index fb75449e82..f556b9a3d2 100644 --- a/commands/config set.md +++ b/commands/config set.md @@ -14,7 +14,7 @@ All the supported parameters have the same meaning of the equivalent configuration parameter used in the [redis.conf][hgcarr22rc] file, with the following important differences: -[hgcarr22rc]: http://github.com/antirez/redis/raw/2.2/redis.conf +[hgcarr22rc]: http://github.com/antirez/redis/raw/2.8/redis.conf * Where bytes or other quantities are specified, it is not possible to use the `redis.conf` abbreviated form (10k 2gb ... 
and so forth), everything From 4a39bc329a6a518a1c9351a55dde112b2b41d037 Mon Sep 17 00:00:00 2001 From: Sasan Rose Date: Mon, 23 Dec 2013 23:09:11 +0330 Subject: [PATCH 0414/2880] Multi-server functionality added to PHPRedMin --- tools.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools.json b/tools.json index c85a0e6d4e..f8b5e88d00 100644 --- a/tools.json +++ b/tools.json @@ -279,7 +279,7 @@ "name": "PHPRedMin", "language": "PHP", "repository": "https://github.com/sasanrose/phpredmin", - "description": "Yet another web interface for Redis", + "description": "Yet another web interface for Redis with multi-server support", "authors": ["sasanrose"] }, { From 3e2df64b262e75c8d29cd4bd8df3f26dadf33e9a Mon Sep 17 00:00:00 2001 From: Michael Neumann Date: Sat, 4 Jan 2014 11:39:54 +0100 Subject: [PATCH 0415/2880] Add rust-redis client for Rust language --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index e18661b340..2c6e44cca2 100644 --- a/clients.json +++ b/clients.json @@ -686,5 +686,14 @@ "repository": "https://github.com/chrisdinn/brando", "description": "A Redis client written with the Akka IO package introduced in Akka 2.2.", "authors": ["chrisdinn"] + }, + + { + "name": "rust-redis", + "language": "Rust", + "repository": "https://github.com/mneumann/rust-redis", + "description": "A Rust client library for Redis.", + "authors": ["mneumann"], + "active": true } ] From 4953dcd08081e320eeaf9f8b71402c2b9f108c64 Mon Sep 17 00:00:00 2001 From: Carlos Nieto Date: Sun, 5 Jan 2014 13:02:19 -0600 Subject: [PATCH 0416/2880] Updating description and url. 
--- clients.json | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/clients.json b/clients.json index e18661b340..f2f9f29ad1 100644 --- a/clients.json +++ b/clients.json @@ -141,7 +141,8 @@ "name": "gosexy/redis", "language": "Go", "repository": "https://github.com/gosexy/redis", - "description": "Go bindings for the official C redis client (hiredis), supports the whole command set of redis 2.6.10 and subscriptions with go channels.", + "url": "https://menteslibres.net/gosexy/redis", + "description": "A Go client for redis built on top of the hiredis C client. Supports non-blocking connections and channel-based subscriptions.", "authors": ["xiam"], "active": true }, @@ -671,7 +672,7 @@ "description": "Thread-safe client supporting async usage and key/value codecs", "authors": ["ar3te"] }, - + { "name": "csredis", "language": "C#", From 54997964be3219b396b787ad11b07ecee00bb28d Mon Sep 17 00:00:00 2001 From: Austin McKinley Date: Tue, 7 Jan 2014 12:26:31 -0800 Subject: [PATCH 0417/2880] fixing doc typos --- topics/replication.md | 82 +++++++++++++++++++++---------------------- 1 file changed, 40 insertions(+), 42 deletions(-) diff --git a/topics/replication.md b/topics/replication.md index 3191fc7c5e..5b70a2e1d8 100644 --- a/topics/replication.md +++ b/topics/replication.md @@ -6,46 +6,47 @@ replication that allows slave Redis servers to be exact copies of master servers. The following are some very important facts about Redis replication: -* Redis uses asynchronous replication. Starting with Redis 2.8 there is however a periodic (one time every second) acknowledge of the replication stream processed by slaves. +* Redis uses asynchronous replication. Starting with Redis 2.8, however, slaves +will periodically acknowledge the replication stream. * A master can have multiple slaves. -* Slaves are able to accept other slaves connections. Aside from +* Slaves are able to accept connections from other slaves. 
Aside from connecting a number of slaves to the same master, slaves can also be connected to other slaves in a graph-like structure. -* Redis replication is non-blocking on the master side, this means that -the master will continue to serve queries when one or more slaves perform -the first synchronization. +* Redis replication is non-blocking on the master side. This means that +the master will continue to handle queries when one or more slaves perform +the initial synchronization. -* Replication is non blocking on the slave side: while the slave is performing -the first synchronization it can reply to queries using the old version of -the data set, assuming you configured Redis to do so in redis.conf. -Otherwise you can configure Redis slaves to send clients an error if the -link with the master is down. However there is a moment where the old dataset must be deleted and the new one must be loaded by the slave where it will block incoming connections. +* Replication is also non-blocking on the slave side. While the slave is performing +the initial synchronization, it can handle queries using the old version of +the dataset, assuming you configured Redis to do so in redis.conf. +Otherwise, you can configure Redis slaves to return an error to clients if the +replication stream is down. However, after the initial sync, the old dataset +must be deleted and the new one must be loaded. The slave will block incoming +connections during this brief window. -* Replications can be used both for scalability, in order to have +* Replication can be used both for scalability, in order to have multiple slaves for read-only queries (for example, heavy `SORT` -operations can be offloaded to slaves, or simply for data redundancy. +operations can be offloaded to slaves), or simply for data redundancy. 
-* It is possible to use replication to avoid the saving process on the
-master side: just configure your master redis.conf to avoid saving
-(just comment all the "save" directives), then connect a slave
+* It is possible to use replication to avoid the cost of having the master
+write the full dataset to disk: just configure your master redis.conf to avoid
+saving (just comment all the "save" directives), then connect a slave
 configured to save from time to time.
 
 How Redis replication works
 ---
 
-If you set up a slave, upon connection it sends a SYNC command. And
-it doesn't matter if it's the first time it has connected or if it's
-a reconnection.
+If you set up a slave, upon connection it sends a SYNC command. It doesn't
+matter if it's the first time it has connected or if it's a reconnection.
 
-The master then starts background saving, and collects all new
+The master then starts background saving, and starts to buffer all new
 commands received that will modify the dataset. When the background saving is
 complete, the master transfers the database file to the slave, which saves it
 on disk, and then loads it into memory. The master will
-then send to the slave all accumulated commands, and all new commands
-received from clients that will modify the dataset. This is done as a
+then send to the slave all buffered commands. This is done as a
 stream of commands and is in the same format of the Redis protocol itself.
 
 You can try it yourself via telnet. Connect to the Redis port while the
@@ -59,7 +60,7 @@ concurrent slave synchronization requests, it performs a single
 background save in order to serve all of them.
 
 When a master and a slave reconnects after the link went down, a full resync
-is always performed. However starting with Redis 2.8, a partial resynchronization
+is always performed. However, starting with Redis 2.8, a partial resynchronization
 is also possible. 
Partial resynchronization @@ -69,20 +70,17 @@ Starting with Redis 2.8, master and slave are usually able to continue the replication process without requiring a full resynchronization after the replication link went down. -This works using an in-memory backlog of the replication stream in the -master side. Also the master and all the slaves agree on a *replication +This works by creating an in-memory backlog of the replication stream on the +master side. The master and all the slaves agree on a *replication offset* and a *master run id*, so when the link goes down, the slave will -reconnect and ask the master to continue the replication, assuming the +reconnect and ask the master to continue the replication. Assuming the master run id is still the same, and that the offset specified is available -in the replication backlog. - -If the conditions are met, the master just sends the part of the replication -stream the master missed, and the replication continues. -Otherwise a full resynchronization is performed as in the past versions of -Redis. +in the replication backlog, replication will resume from the point where it left off. +If either of these conditions are unmet, a full resynchronization is performed +(which is the normal pre-2.8 behavior). The new partial resynchronization feature uses the `PSYNC` command internally, -while the old implementation used the `SYNC` command, however a Redis 2.8 +while the old implementation uses the `SYNC` command. Note that a Redis 2.8 slave is able to detect if the server it is talking with does not support `PSYNC`, and will use `SYNC` instead. @@ -98,19 +96,19 @@ Of course you need to replace 192.168.1.1 6379 with your master IP address (or hostname) and port. Alternatively, you can call the `SLAVEOF` command and the master host will start a sync with the slave. 
-There are also a few parameters in order to tune the replication backlog taken +There are also a few parameters for tuning the replication backlog taken in memory by the master to perform the partial resynchronization. See the example `redis.conf` shipped with the Redis distribution for more information. -Read only slave +Read-only slave --- -Since Redis 2.6 slaves support a read-only mode that is enabled by default. +Since Redis 2.6, slaves support a read-only mode that is enabled by default. This behavior is controlled by the `slave-read-only` option in the redis.conf file, and can be enabled and disabled at runtime using `CONFIG SET`. -Read only slaves will reject all the write commands, so that it is not possible to write to a slave because of a mistake. This does not mean that the feature is conceived to expose a slave instance to the internet or more generally to a network where untrusted clients exist, because administrative commands like `DEBUG` or `CONFIG` are still enabled. However security of read-only instances can be improved disabling commands in redis.conf using the `rename-command` directive. +Read-only slaves will reject all write commands, so that it is not possible to write to a slave because of a mistake. This does not mean that the feature is intended to expose a slave instance to the internet or more generally to a network where untrusted clients exist, because administrative commands like `DEBUG` or `CONFIG` are still enabled. However, security of read-only instances can be improved by disabling commands in redis.conf using the `rename-command` directive. -You may wonder why it is possible to revert the default and have slave instances that can be target of write operations. The reason is that while this writes will be discarded if the slave and the master will resynchronize, or if the slave is restarted, often there is ephemeral data that is unimportant that can be stored into slaves. 
For instance clients may take information about reachability of master in the slave instance to coordinate a fail over strategy.
+You may wonder why it is possible to revert the read-only setting and have slave instances that can be the target of write operations. The reason is that these writes will be discarded if the slave and the master resynchronize, or if the slave is restarted. Often there is ephemeral data that is unimportant that can be stored on read-only slaves. For instance, clients may take information about master reachability to coordinate a failover strategy.
 
 Setting a slave to authenticate to a master
 ---
@@ -129,12 +127,12 @@ To set it permanently, add this to your config file:
 
 Allow writes only with N attached replicas
 ---
 
-Starting with Redis 2.8 it is possible to configure a Redis master in order to
+Starting with Redis 2.8, it is possible to configure a Redis master to
 accept write queries only if at least N slaves are currently connected to the
-master, in order to improve data safety.
+master.
 
-However because Redis uses asynchronous replication it is not possible to ensure
-the write actually received a given write, so there is always a window for data
+However, because Redis uses asynchronous replication it is not possible to ensure
+the slave actually received a given write, so there is always a window for data
 loss.
 
 This is how the feature works:
@@ -154,5 +152,5 @@ There are two configuration parameters for this feature:
 * min-slaves-to-write ``
 * min-slaves-max-lag ``
 
-For more information please check the example `redis.conf` file shipped with the
+For more information, please check the example `redis.conf` file shipped with the
 Redis source distribution. 
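The min-slaves gating described in the replication patch above can be modeled in a few lines. This is only an illustrative sketch, not Redis internals — the function name and the list-of-ACK-ages representation are invented here. The rule it models: a write is accepted only if at least `min-slaves-to-write` slaves have acknowledged the replication stream within the last `min-slaves-max-lag` seconds.

```python
def accept_writes(slave_ack_ages, min_slaves_to_write, min_slaves_max_lag):
    """Illustrative model of the min-slaves-to-write check (not Redis code).

    slave_ack_ages: seconds elapsed since each connected slave's last
    replication ACK. A slave counts as "good" only if its ACK is fresh.
    """
    good_slaves = sum(1 for age in slave_ack_ages if age <= min_slaves_max_lag)
    return good_slaves >= min_slaves_to_write


# Two slaves connected, one lagging badly: with min-slaves-to-write = 2
# and min-slaves-max-lag = 10, the lagging slave doesn't count, so the
# write is refused.
print(accept_writes([1, 45], 2, 10))  # -> False
print(accept_writes([1, 4], 2, 10))   # -> True
```

Note that, as the patch stresses, this only narrows the window for data loss: replication stays asynchronous, so an acknowledged-enough master can still lose a write that no slave had received yet.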
From 17924196017b266d98332d6477b0e3b70c521a4c Mon Sep 17 00:00:00 2001 From: xuyu Date: Wed, 8 Jan 2014 16:22:34 +0800 Subject: [PATCH 0418/2880] Update clients.json add a new redis client for golang --- clients.json | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/clients.json b/clients.json index e18661b340..22bc868b68 100644 --- a/clients.json +++ b/clients.json @@ -145,6 +145,15 @@ "authors": ["xiam"], "active": true }, + + { + "name": "goredis", + "language": "Go", + "repository": "https://github.com/xuyu/goredis", + "description": "A redis client for golang with full features", + "authors": ["xuyu"], + "active": true + }, { "name": "hedis", From c4b896b57a7feb4ecd7c387bf99f7650f741ccb3 Mon Sep 17 00:00:00 2001 From: Jon Forrest Date: Thu, 9 Jan 2014 16:55:50 -0800 Subject: [PATCH 0419/2880] Initial changes. Nothing major. --- topics/twitter-clone.md | 78 ++++++++++++++++++++--------------------- 1 file changed, 39 insertions(+), 39 deletions(-) diff --git a/topics/twitter-clone.md b/topics/twitter-clone.md index e21e189f11..194b1084f0 100644 --- a/topics/twitter-clone.md +++ b/topics/twitter-clone.md @@ -1,36 +1,36 @@ A case study: Design and implementation of a simple Twitter clone using only the Redis key-value store as database and PHP === -In this article I'll explain the design and the implementation of a [simple clone of Twitter](http://retwis.antirez.com) written using PHP and Redis as only database. The programming community uses to look at key-value stores like special databases that can't be used as drop in replacement for a relational database for the development of web applications. This article will try to prove the contrary. +In this article I'll describe the design and the implementation of a [simple clone of Twitter](http://retwis.antirez.com) written using PHP with Redis as the only database. 
The programming community traditionally considered key-value stores as special databases that couldn't be used as drop in replacements for a relational database for the development of web applications. This article will try to correct this impression. -Our Twitter clone, [called Retwis](http://retwis.antirez.com), is structurally simple, has very good performance, and can be distributed among N web servers and M Redis servers with very little effort. You can find the source code [here](http://code.google.com/p/redis/downloads/list). +Our Twitter clone, called [Retwis](http://retwis.antirez.com), is structurally simple, has very good performance, and can be distributed among any number of web and Redis servers with very little effort. You can find the source code [here](http://code.google.com/p/redis/downloads/list). -We use PHP for the example since it can be read by everybody. The same (or... much better) results can be obtained using Ruby, Python, Erlang, and so on. +I use PHP for the example since it can be read by everybody. The same (or... much better) results can be obtained using Ruby, Python, Erlang, and so on. **Note:** [Retwis-RB](http://retwisrb.danlucraft.com/) is a port of Retwis to Ruby and Sinatra written by Daniel Lucraft! With full source code included of -course, the Git repository is linked in the footer of the web page. The rest -of this article targets PHP, but Ruby programmers can also check the other -source code, it conceptually very similar. +course, a link to its Git repository appears in the footer of this article. The rest +of this article targets PHP, but Ruby programmers can also check the Retwis-RB +source code since it's conceptually very similar. **Note:** [Retwis-J](http://retwisj.cloudfoundry.com/) is a port of Retwis to -Java, using the Spring Data Framework, written by [Costin Leau](http://twitter.com/costinl). The source code +Java, using the Spring Data Framework, written by [Costin Leau](http://twitter.com/costinl). 
Its source code can be found on -[GitHub](https://github.com/SpringSource/spring-data-keyvalue-examples) and +[GitHub](https://github.com/SpringSource/spring-data-keyvalue-examples), and there is comprehensive documentation available at [springsource.org](http://j.mp/eo6z6I). -Key-value stores basics +Key-value store basics --- -The essence of a key-value store is the ability to store some data, called _value_, inside a key. This data can later be retrieved only if we know the exact key used to store it. There is no way to search something by value. In a sense, it is like a very large hash/dictionary, but it is persistent, i.e. when your application ends, the data doesn't go away. So for example I can use the command SET to store the value *bar* at key *foo*: +The essence of a key-value store is the ability to store some data, called a _value_, inside a key. The value can be retrieved later only if we know the exact key it was stored in. There is no way to search for something by value. In a sense, it is like a very large hash/dictionary, but it is persistent, i.e. when your application ends, the data doesn't go away. 
So, for example, I can use the command SET to store the value *bar* in the key *foo*: SET foo bar -Redis will store our data permanently, so we can later ask for "_What is the value stored at key foo?_" and Redis will reply with *bar*: +Redis stores data permanently, so if I later ask "_What is the value stored in key foo?_" Redis will reply with *bar*: GET foo => bar -Other common operations provided by key-value stores are DEL used to delete a given key, and the associated value, SET-if-not-exists (called SETNX on Redis) that sets a key only if it does not already exist, and INCR that is able to atomically increment a number stored at a given key: +Other common operations provided by key-value stores are DEL, to delete a given key and its associated value, SET-if-not-exists (called SETNX on Redis), to assign a value to a key only if the key does not already exist, and INCR, to atomically increment a number stored in a given key: SET foo 10 INCR foo => 11 @@ -40,13 +40,13 @@ Other common operations provided by key-value stores are DEL used to delete a gi Atomic operations --- -So far it should be pretty simple, but there is something special about INCR. Think about this, why to provide such an operation if we can do it ourselves with a bit of code? After all it is as simple as: +There is something special about INCR. Think about why Redis provides such an operation if we can do it ourselves with a bit of code? After all, it is as simple as: x = GET foo x = x + 1 SET foo x -The problem is that doing the increment this way will work as long as there is only a client working with the value _x_ at a time. See what happens if two computers are accessing this data at the same time: +The problem is that incrementing this way will work as long as there is only one client working with the key _foo_ at one time. 
See what happens if two clients are accessing this key at the same time: x = GET foo (yields 10) y = GET foo (yields 10) @@ -55,34 +55,34 @@ The problem is that doing the increment this way will work as long as there is o SET foo x (foo is now 11) SET foo y (foo is now 11) -Something is wrong with that! We incremented the value two times, but instead to go from 10 to 12 our key holds 11. This is because the INCR operation done with `GET / increment / SET` *is not an atomic operation*. Instead the INCR provided by Redis, Memcached, ..., are atomic implementations, the server will take care to protect the get-increment-set for all the time needed to complete in order to prevent simultaneous accesses. +Something is wrong! We incremented the value two times, but instead of going from 10 to 12, our key holds 11. This is because the increment done with `GET / increment / SET` *is not an atomic operation*. Instead the INCR provided by Redis, Memcached, ..., are atomic implementations, and the server will take care of protecting the key for all the time needed to complete the increment in order to prevent simultaneous accesses. -What makes Redis different from other key-value stores is that it provides more operations similar to INCR that can be used together to model complex problems. This is why you can use Redis to write whole web applications without using an SQL database and without going crazy. +What makes Redis different from other key-value stores is that it provides other operations similar to INCR that can be used to model complex problems. This is why you can use Redis to write whole web applications without using an SQL database and without going crazy. Beyond key-value stores --- -In this section we will see what Redis features we need to build our Twitter clone. The first thing to know is that Redis values can be more than strings. 
Redis supports Lists and Sets as values, and there are atomic operations to operate against this more advanced values so we are safe even with multiple accesses against the same key. Let's start from Lists: +In this section we will see which Redis features we need to build our Twitter clone. The first thing to know is that Redis values can be more than strings. Redis supports Lists and Sets as values, and there are atomic operations to operate on them so we are safe even with multiple accesses of the same key. Let's start with Lists: LPUSH mylist a (now mylist holds one element list 'a') LPUSH mylist b (now mylist holds 'b,a') LPUSH mylist c (now mylist holds 'c,b,a') -LPUSH means _Left Push_, that is, add an element to the left (or to the head) of the list stored at _mylist_. If the key _mylist_ does not exist it is automatically created by Redis as an empty list before the PUSH operation. As you can imagine, there is also the RPUSH operation that adds the element on the right of the list (on the tail). +LPUSH means _Left Push_, that is, add an element to the left (or to the head) of the list stored in _mylist_. If the key _mylist_ does not exist it is automatically created by Redis as an empty list before the PUSH operation. As you can imagine, there is also an RPUSH operation that adds the element to the right of the list (on the tail). -This is very useful for our Twitter clone. Updates of users can be stored into a list stored at `username:updates` for instance. There are operations to get data or information from Lists of course. For instance LRANGE returns a range of the list, or the whole list. +This is very useful for our Twitter clone. User updates can be added to a list stored in `username:updates`, for instance. There are operations to get data from Lists, of course. For instance, LRANGE returns a range of the list, or the whole list. LRANGE mylist 0 1 => c,b -LRANGE uses zero-based indexes, that is the first element is 0, the second 1, and so on. 
The command arguments are `LRANGE key first-index last-index`. The _last index_ argument can be negative, with a special meaning: -1 is the last element of the list, -2 the penultimate, and so on. So in order to get the whole list we can use: +LRANGE uses zero-based indexes, that is the first element is 0, the second 1, and so on. The command arguments are `LRANGE key first-index last-index`. The _last-index_ argument can be negative, with a special meaning: -1 is the last element of the list, -2 the penultimate, and so on. So in order to get the whole list we can use: LRANGE mylist 0 -1 => c,b,a -Other important operations are LLEN that returns the length of the list, and LTRIM that is like LRANGE but instead of returning the specified range *trims* the list, so it is like _Get range from mylist, Set this range as new value_ but atomic. We will use only this List operations, but make sure to check the [Redis documentation](http://code.google.com/p/redis/wiki/README) to discover all the List operations supported by Redis. +Other important operations are LLEN that returns the length of the list, and LTRIM that is like LRANGE but instead of returning the specified range *trims* the list, so it is like _Get range from mylist, Set this range as new value_ but atomically. We will use only these List operations, but make sure to check the [Redis documentation](http://code.google.com/p/redis/wiki/README) to discover all the List operations supported by Redis. The set data type --- -There is more than Lists, Redis also supports Sets, that are unsorted collection of elements. It is possible to add, remove, and test for existence of members, and perform intersection between different Sets. Of course it is possible to ask for the list or the number of elements of a Set. Some example will make it more clear. 
Keep in mind that SADD is the _add to set_ operation, SREM is the _remove from set_ operation, _sismember_ is the _test if it is a member_ operation, and SINTER is _perform intersection_ operation. Other operations are SCARD that is used to get the cardinality (the number of elements) of a Set, and SMEMBERS that will return all the members of a Set. +There is more than Lists. Redis also supports Sets, which are unsorted collection of elements. It is possible to add, remove, and test for existence of members, and perform intersection between different Sets. Of course it is possible to ask for the list or the number of elements of a Set. Some example will make it more clear. Keep in mind that SADD is the _add to set_ operation, SREM is the _remove from set_ operation, _sismember_ is the _test if it is a member_ operation, and SINTER is _perform intersection_ operation. Other operations are SCARD that is used to get the cardinality (the number of elements) of a Set, and SMEMBERS that will return all the members of a Set. SADD myset a SADD myset b @@ -103,21 +103,21 @@ SINTER can return the intersection between Sets but it is not limited to two set SISMEMBER myset foo => 1 SISMEMBER myset notamember => 0 -Okay, I think we are ready to start coding! +Okay, we are ready to start coding! Prerequisites --- -If you didn't download it already please grab the [source code of Retwis](http://code.google.com/p/redis/downloads/list). It's a simple tar.gz file with a few of PHP files inside. The implementation is very simple. You will find the PHP library client inside (redis.php) that is used to talk with the Redis server from PHP. This library was written by [Ludovico Magnocavallo](http://qix.it) and you are free to reuse this in your own projects, but for updated version of the library please download the Redis distribution. (Note: there are now better PHP libraries available, check our [clients page](/clients). 
+If you haven't downloaded the [Retwis source code](http://code.google.com/p/redis/downloads/list) already please grab it now. It's a simple tar.gz file containing a few PHP files. The implementation is very simple. You will find the PHP library client inside (redis.php) that is used to talk with the Redis server from PHP. This library was written by [Ludovico Magnocavallo](http://qix.it) and you are free to reuse this in your own projects, but for an updated version of the library please download the Redis distribution. (Note: there are now better PHP libraries available, check our [clients page](/clients). -Another thing you probably want is a working Redis server. Just get the source, compile with make, and run with ./redis-server and you are done. No configuration is required at all in order to play with it or to run Retwis in your computer. +Another thing you probably want is a working Redis server. Just get the source, build with make, run with ./redis-server and you're done. No configuration is required at all in order to play with or run Retwis in your computer. Data layout --- -Working with a relational database this is the stage were the database layout should be produced in form of tables, indexes, and so on. We don't have tables, so what should be designed? We need to identify what keys are needed to represent our objects and what kind of values this keys need to hold. +When working with a relational database, this is when the database schema should be designed so that we'd know the tables, indexes, and so on that the database will contain. We don't have tables, so what should be designed? We need to identify what keys are needed to represent our objects and what kind of values this keys need to hold. -Let's start from Users. We need to represent this users of course, with the username, userid, password, followers and following users, and so on. The first question is, what should identify a user inside our system? 
The username can be a good idea since it is unique, but it is also too big, and we want to stay low on memory. So like if our DB was a relational one we can associate an unique ID to every user. Every other reference to this user will be done by id. That's very simple to do, because we have our atomic INCR operation! When we create a new user we can do something like this, assuming the user is called "antirez": +Let's start with Users. We need to represent the users, of course, with their username, userid, password, followers, following users, and so on. The first question is, how should we identify a user? The username can be a good idea since it is unique, but it is also too big, and we want to stay low on memory. So like if our DB was a relational one we can associate an unique ID to every user. Every other reference to this user will be done by id. That's very simple to do, because we have our atomic INCR operation! When we create a new user we can do something like this, assuming the user is called "antirez": INCR global:nextUserId => 1000 SET uid:1000:username antirez @@ -130,10 +130,10 @@ Besides the fields already defined, we need some more stuff in order to fully de This may appear strange at first, but remember that we are only able to access data by key! It's not possible to tell Redis to return the key that holds a specific value. This is also *our strength*, this new paradigm is forcing us to organize the data so that everything is accessible by _primary key_, speaking with relational DBs language. -Following, followers and updates +Following, followers, and updates --- -There is another central need in our system. Every user has followers users and following users. We have a perfect data structure for this work! That is... Sets. So let's add this two new fields to our schema: +There is another central need in our system. Every user has users that they follow and users who follow them. We have a perfect data structure for this! That is... Sets. 
So let's add these two new fields to our schema: uid:1000:followers => Set of uids of all the followers users uid:1000:following => Set of uids of all the following users @@ -215,9 +215,9 @@ The code is simpler than the description, possibly: return true; } -`loadUserInfo` as separated function is an overkill for our application, but it's a good template for a complex application. The only thing it's missing from all the authentication is the logout. What we do on logout? That's simple, we'll just change the random string in uid:1000:auth, remove the old auth:`` and add a new auth:``. +`loadUserInfo` as a separate function is overkill for our application, but it's a good approach in a complex application. The only thing that's missing from all the authentication is the logout. What do we do on logout? That's simple, we'll just change the random string in uid:1000:auth, remove the old auth:`` and add a new auth:``. -*Important:* the logout procedure explains why we don't just authenticate the user after the lookup of auth:``, but double check it against uid:1000:auth. The true authentication string is the latter, the auth:`` is just an authentication key that may even be volatile, or if there are bugs in the program or a script gets interrupted we may even end with multiple auth:`` keys pointing to the same user id. The logout code is the following (logout.php): +*Important:* the logout procedure explains why we don't just authenticate the user after looking up auth:``, but double check it against uid:1000:auth. The true authentication string is the latter, the auth:`` is just an authentication key that may even be volatile, or if there are bugs in the program or a script gets interrupted we may even end with multiple auth:`` keys pointing to the same user id. The logout code is the following (logout.php): include("retwis.php"); @@ -242,12 +242,12 @@ That is just what we described and should be simple to understand. 
Updates --- -Updates, also known as posts, are even simpler. In order to create a new post on the database we do something like this: +Updates, also known as posts, are even simpler. In order to create a new post in the database we do something like this: INCR global:nextPostId => 10343 SET post:10343 "$owner_id|$time|I'm having fun with Retwis" -As you can see the user id and time of the post are stored directly inside the string, we don't need to lookup by time or user id in the example application so it is better to compact everything inside the post string. +As you can see, the user id and time of the post are stored directly inside the string, so we don't need to lookup by time or user id in the example application so it is better to compact everything inside the post string. After we create a post we obtain the post id. We need to LPUSH this post id in every user that's following the author of the post, and of course in the list of posts of the author. This is the file update.php that shows how this is performed: @@ -277,14 +277,14 @@ After we create a post we obtain the post id. We need to LPUSH this post id in e header("Location: index.php"); -The core of the function is the `foreach`. We get using SMEMBERS all the followers of the current user, then the loop will LPUSH the post against the uid:``:posts of every follower. +The core of the function is the `foreach` loop. We get using SMEMBERS all the followers of the current user, then the loop will LPUSH the post against the uid:``:posts of every follower. -Note that we also maintain a timeline with all the posts. In order to do so what is needed is just to LPUSH the post against global:timeline. Let's face it, do you start thinking it was a bit strange to have to sort things added in chronological order using ORDER BY with SQL? I think so indeed. +Note that we also maintain a timeline for all the posts. This requires just LPUSHing the post against global:timeline. 
Let's face it, do you start thinking it was a bit strange to have to sort things added in chronological order using ORDER BY with SQL? I think so indeed. Paginating updates --- -Now it should be pretty clear how we can user LRANGE in order to get ranges of posts, and render this posts on the screen. The code is simple: +Now it should be pretty clear how we can use LRANGE in order to get ranges of posts, and render these posts on the screen. The code is simple: function showPost($id) { $r = redisLink(); @@ -333,7 +333,7 @@ You can find the code that sets or removes a following/follower relation at foll Making it horizontally scalable --- -Gentle reader, if you reached this point you are already an hero, thank you. Before to talk about scaling horizontally it is worth to check the performances on a single server. Retwis is *amazingly fast*, without any kind of cache. On a very slow and loaded server, apache benchmark with 100 parallel clients issuing 100000 requests measured the average pageview to take 5 milliseconds. This means you can serve millions of users every day with just a single Linux box, and this one was monkey asses slow! Go figure with more recent hardware. +Gentle reader, if you reached this point you are already a hero. Thank you. Before talking about scaling horizontally it is worth checking the performances on a single server. Retwis is *amazingly fast*, without any kind of cache. On a very slow and loaded server, an apache benchmark with 100 parallel clients issuing 100000 requests measured the average pageview to take 5 milliseconds. This means you can serve millions of users every day with just a single Linux box, and this one was monkey ass slow! Go figure with more recent hardware. So, first of all, probably you will not need more than one server for a lot of applications, even when you have a lot of users. But let's assume we *are* Twitter and need to handle a huge amount of traffic. What to do? 
@@ -344,7 +344,7 @@ The first thing to do is to hash the key and issue the request on different serv

    server_id = crc32(key) % number_of_servers

-This has a lot of problems since if you add one server you need to move too much keys and so on, but this is the general idea even if you use a better hashing scheme like consistent hashing.
+This has a lot of problems since if you add one server you need to move too many keys and so on, but this is the general idea even if you use a better hashing scheme like consistent hashing.

Ok, are key accesses distributed among the key space? Well, all the user data will be partitioned among different servers. There are no inter-key operations used (like SINTER, otherwise you need to make sure that things you want to intersect will end up on the same server. *This is why Redis, unlike memcached, does not force a specific hashing scheme: it's application specific*). Btw there are keys that are accessed more frequently.

@@ -353,6 +353,6 @@ Special keys

For example every time we post a new message, we *need* to increment the `global:nextPostId` key. How to fix this problem? A single server will get a lot of increments. The simplest way to handle this is to have a dedicated server just for increments. This is probably overkill btw unless you have really a lot of traffic. There is another trick. The ID does not really need to be an incremental number, it just *needs to be unique*. So you can get a random string long enough to be unlikely (almost impossible, if it's md5-size) to collide, and you are done. We successfully eliminated our main problem to make it really horizontally scalable!

-There is another one: global:timeline. There is no fix for this, if you need to take something in order you can split among different servers and *then merge* when you need to get the data back, or take it ordered and use a single key. Again if you really have so much posts per second, you can use a single server just for this.
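The two tricks discussed here, hash-based key sharding and random unique post IDs, can be sketched in a few lines of Python. The helper names `shard_for_key` and `make_post_id` are hypothetical, and `zlib.crc32` stands in for the crc32 in the `server_id = crc32(key) % number_of_servers` formula.

```python
import uuid
import zlib

def shard_for_key(key, number_of_servers):
    # server_id = crc32(key) % number_of_servers: the same key always
    # maps to the same server, spreading keys across the pool.
    return zlib.crc32(key.encode("utf-8")) % number_of_servers

def make_post_id():
    # A random 128-bit hex string: no central INCR counter is needed,
    # and collisions are practically impossible at this size.
    return uuid.uuid4().hex

server = shard_for_key("post:10343", 4)   # deterministic for a given key
a, b = make_post_id(), make_post_id()     # two distinct 32-char ids
```

With random IDs the hot `global:nextPostId` key disappears entirely, which is exactly the point made in the paragraph above.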
Remember that with commodity hardware Redis is able to handle 100000 writes for second, that's enough even for Twitter, I guess.
+There is another one: global:timeline. There is no fix for this: if you need to take something in order you can split it among different servers and *then merge* when you need to get the data back, or take it ordered and use a single key. Again, if you really have so many posts per second, you can use a single server just for this. Remember that with commodity hardware Redis is able to handle 100000 writes per second. That's enough even for Twitter, I guess.

Please feel free to use the comments below for questions and feedback.

From 335c90deb8bdf2eeccfcbd24c044d437c4510cb4 Mon Sep 17 00:00:00 2001
From: Nikita Koksharov
Date: Sat, 11 Jan 2014 07:15:23 -0800
Subject: [PATCH 0420/2880] Redisson entry added

---
 clients.json | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/clients.json b/clients.json
index e18661b340..108bb3cdd5 100644
--- a/clients.json
+++ b/clients.json
@@ -176,6 +176,16 @@
     "active": true
   },

+  {
+    "name": "Redisson",
+    "language": "Java",
+    "repository": "https://github.com/mrniko/redisson",
+    "description": "distributed and scalable Java data structures on top of Redis server",
+    "authors": ["mrniko"],
+    "recommended": true,
+    "active": true
+  },
+
   {
     "name": "JRedis",
     "language": "Java",

From 6c192be1997962d27189fe5db3e71239f89fd565 Mon Sep 17 00:00:00 2001
From: "Stuart P. Bentley"
Date: Sat, 11 Jan 2014 17:02:05 -0800
Subject: [PATCH 0421/2880] Fix PTTL first version in ttl.md

PTTL is first available in 2.6, not 2.8.

---
 commands/ttl.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/commands/ttl.md b/commands/ttl.md
index 17055f4884..15821e1140 100644
--- a/commands/ttl.md
+++ b/commands/ttl.md
@@ -9,7 +9,7 @@ Starting with Redis 2.8 the return value in case of error changed:

 * The command returns `-2` if the key does not exist.
* The command returns `-1` if the key exists but has no associated expire.

-See also the `PTTL` command that returns the same information with milliseconds resolution (Only available in Redis 2.8 or greater).
+See also the `PTTL` command that returns the same information with milliseconds resolution (only available in Redis 2.6 or greater).

@return

From cf613d1359f87e450c66f8c72a1a00a2ff0a2f99 Mon Sep 17 00:00:00 2001
From: "Stuart P. Bentley"
Date: Sat, 11 Jan 2014 17:03:42 -0800
Subject: [PATCH 0422/2880] Document 2.8+ negative value behavior in pttl.md

I just got bitten by this behavior in my own code.

---
 commands/pttl.md | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/commands/pttl.md b/commands/pttl.md
index a3d66431a1..4e0807971b 100644
--- a/commands/pttl.md
+++ b/commands/pttl.md
@@ -2,10 +2,16 @@ Like `TTL` this command returns the remaining time to live of a key that has an
 expire set, with the sole difference that `TTL` returns the amount of remaining
 time in seconds while `PTTL` returns it in milliseconds.

+In Redis 2.6 or older the command returns `-1` if the key does not exist or if the key exists but has no associated expire.
+
+Starting with Redis 2.8 the return value in case of error changed:
+
+* The command returns `-2` if the key does not exist.
+* The command returns `-1` if the key exists but has no associated expire.
+
 @return

-@integer-reply: Time to live in milliseconds or `-1` when `key` does not exist
-or does not have a timeout.
+@integer-reply: TTL in milliseconds, or a negative value in order to signal an error (see the description above).

 @examples

From 584dd95882738ee2762040bb6dc2bd3bab20ad88 Mon Sep 17 00:00:00 2001
From: antirez
Date: Mon, 13 Jan 2014 16:35:55 +0100
Subject: [PATCH 0423/2880] SENTINEL runtime config API documented.
---
 topics/sentinel.md | 47 +++++++++++++++++++++++++++++-----------------
 1 file changed, 30 insertions(+), 17 deletions(-)

diff --git a/topics/sentinel.md b/topics/sentinel.md
index 0f0a3f6845..21fb99b2c4 100644
--- a/topics/sentinel.md
+++ b/topics/sentinel.md
@@ -1,8 +1,6 @@
 Redis Sentinel Documentation
 ===

-**Note:** this page documents the *new* Sentinel implementation that entered the Github repository 21th of November. The old Sentinel implementation is [documented here](http://redis.io/topics/sentinel-old), however using the old implementation is discouraged.
-
 Redis Sentinel is a system designed to help manage Redis instances. It performs the following three tasks:

@@ -25,19 +23,14 @@ executable.

 describes how to use what is already implemented, and may change as the
 Sentinel implementation evolves.

-Redis Sentinel is compatible with Redis 2.4.16 or greater, and Redis 2.6.0 or greater, however it works better if used against Redis instances version 2.8.0 or greater.
+Redis Sentinel is compatible with Redis 2.4.16 or greater, and Redis 2.6.0 or greater, however it works better if used with Redis instances version 2.8.0 or greater.

 Obtaining Sentinel
 ---

-Currently Sentinel is part of the Redis *unstable* branch at github.
-To compile it you need to clone the *unstable* branch and compile Redis.
-You'll see a `redis-sentinel` executable in your `src` directory.
-
-Alternatively you can use directly the `redis-server` executable itself,
-starting it in Sentinel mode as specified in the next paragraph.
+Sentinel is currently developed in the *unstable* branch of the Redis source code at Github. However an updated copy of Sentinel is provided with every patch release of Redis 2.8.

-An updated version of Sentinel is also available as part of the Redis 2.8.0 release.
+The simplest way to use Sentinel is to download the latest version of Redis 2.8 or to compile the latest Redis commit in the *unstable* branch at Github.
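As a concrete reference for the configuration discussed below, a minimal `sentinel.conf` for one monitored master at 127.0.0.1:6379 with a quorum of 2 might look like this. The master name and the timeout values are illustrative defaults, not prescribed by this document:

```
port 26379
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 30000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1
```

Each `sentinel` directive is scoped to a master name, so one Sentinel can monitor several masters from a single configuration file.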
Running Sentinel
---

@@ -80,7 +73,7 @@ that is at address 127.0.0.1 and port 6379, with a level of agreement needed

 to detect this master as failing of 2 sentinels (if the agreement is not reached
 the automatic failover does not start).

-However note that whatever the agreement you specify to detect an instance as not working, a Sentinel requires **the vote from the majority** of the known Sentinels in the system in order to start a failover and reserve a given *configuration Epoch* (that is a version to attach to a new master configuration).
+However note that whatever the agreement you specify to detect an instance as not working, a Sentinel requires **the vote from the majority** of the known Sentinels in the system in order to start a failover and obtain a new *configuration Epoch* to assign to the new configuration after the failover.

 In other words **Sentinel is not able to perform the failover if only a minority of the Sentinel processes are working**.

@@ -112,6 +105,8 @@ The other options are described in the rest of this document and
 documented in the example sentinel.conf file shipped with the Redis
 distribution.

+All the configuration parameters can be modified at runtime using the `SENTINEL` command. See the **Reconfiguring Sentinel at runtime** section for more information.
+
 SDOWN and ODOWN
 ---

@@ -204,12 +199,30 @@ Sentinel commands

 The following is a list of accepted commands:

-* **PING** this command simply returns PONG.
-* **SENTINEL masters** show a list of monitored masters and their state.
-* **SENTINEL slaves ``** show a list of slaves for this master, and their state.
-* **SENTINEL get-master-addr-by-name ``** return the ip and port number of the master with that name. If a failover is in progress or terminated successfully for this master it returns the address and port of the promoted slave.
-* **SENTINEL reset ``** this command will reset all the masters with matching name. The pattern argument is a glob-style pattern.
The reset process clears any previous state in a master (including a failover in progress), and removes every slave and sentinel already discovered and associated with the master.
-* **SENTINEL failover ``** force a failover as if the master was not reachable, and without asking for agreement to other Sentinels (however a new version of the configuration will be published so that the other Sentinels will update their configurations).
+* **PING** This command simply returns PONG.
+* **SENTINEL masters** Show a list of monitored masters and their state.
+* **SENTINEL master ``** Show the state and info of the specified master.
+* **SENTINEL slaves ``** Show a list of slaves for this master, and their state.
+* **SENTINEL get-master-addr-by-name ``** Return the ip and port number of the master with that name. If a failover is in progress or terminated successfully for this master it returns the address and port of the promoted slave.
+* **SENTINEL reset ``** This command will reset all the masters with matching name. The pattern argument is a glob-style pattern. The reset process clears any previous state in a master (including a failover in progress), and removes every slave and sentinel already discovered and associated with the master.
+* **SENTINEL failover ``** Force a failover as if the master was not reachable, and without asking for agreement to other Sentinels (however a new version of the configuration will be published so that the other Sentinels will update their configurations).
+
+Reconfiguring Sentinel at Runtime
+---
+
+Starting with Redis version 2.8.4, Sentinel provides an API in order to add, remove, or change the configuration of a given master. Note that if you have multiple Sentinels you should apply the changes to all of your instances for Redis Sentinel to work properly. This means that changing the configuration of a single Sentinel does not automatically propagate the changes to the other Sentinels in the network.
+
+The following is a list of `SENTINEL` sub commands used in order to update the configuration of a Sentinel instance.
+
+* **SENTINEL MONITOR `` `` `` ``** This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. It is identical to the `sentinel monitor` configuration directive in the `sentinel.conf` configuration file, with the difference that you can't use a hostname as `ip`: you need to provide an IPv4 or IPv6 address.
+* **SENTINEL REMOVE ``** is used in order to remove the specified master: the master will no longer be monitored, and will be totally removed from the internal state of the Sentinel, so it will no longer be listed by `SENTINEL masters` and so forth.
+* **SENTINEL SET `` `