diff --git a/.github/workflows/spellcheck.yml b/.github/workflows/spellcheck.yml new file mode 100644 index 0000000000..3322a944d3 --- /dev/null +++ b/.github/workflows/spellcheck.yml @@ -0,0 +1,18 @@ +name: Spellcheck +on: + push: + branches: [master] + pull_request: + branches: [master] +jobs: + spellcheck: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v2 + - name: Spellcheck + uses: redis-stack/github-actions/spellcheck@main + env: + DICTIONARY: wordlist + DOCS_DIRECTORY: . + CONFIGURATION_FILE: .spellcheck.yml + COMMANDS_FILES: commands.json diff --git a/.github/workflows/trigger-build.yml b/.github/workflows/trigger-build.yml new file mode 100644 index 0000000000..74bdd7586b --- /dev/null +++ b/.github/workflows/trigger-build.yml @@ -0,0 +1,18 @@ +name: Trigger master website deploy +on: + push: + branches: + - master + +jobs: + trigger: + runs-on: ubuntu-latest + steps: + - run: | + echo "'$DATA'" | xargs \ + curl \ + -X POST https://api.netlify.com/build_hooks/${NETLIFY_BUILD_HOOK_ID} \ + -d + env: + NETLIFY_BUILD_HOOK_ID: ${{ secrets.NETLIFY_BUILD_HOOK_ID }} + DATA: '{"type": "core", "id": "redis_docs", "repository":"${{ github.repository }}", "sha":"${{ github.sha }}", "ref":"${{ github.ref }}"}' diff --git a/.gitignore b/.gitignore index a9a5aecf42..4610ac14e8 100644 --- a/.gitignore +++ b/.gitignore @@ -1 +1,3 @@ +.idea tmp +.DS_Store diff --git a/.spellcheck.yml b/.spellcheck.yml new file mode 100644 index 0000000000..b63649b810 --- /dev/null +++ b/.spellcheck.yml @@ -0,0 +1,11 @@ +files: + - '**/*.md' + - '!resources/clients/index.md' + - '!resources/libraries/index.md' + - '!resources/modules/index.md' + - '!resources/tools/index.md' + - '!docs/reference/modules/modules-api-ref.md' +dictionaries: + - wordlist +no-suggestions: true +quiet: true diff --git a/COPYRIGHT b/COPYRIGHT new file mode 100644 index 0000000000..4716e0520c --- /dev/null +++ b/COPYRIGHT @@ -0,0 +1,22 @@ +This documentation is Copyright (C) 2009-2014 Salvatore Sanfilippo and +is released under the following license: + +Creative Commons Attribution-ShareAlike 4.0 International + +You can find the full text of the license on the Creative Commons website +at the following URL: + +http://creativecommons.org/licenses/by-sa/4.0/ + +The following is a human-readable summary of (and not a substitute for) +the license: + +You are free to: + +* Share — copy and redistribute the material in any medium or format +* Adapt — remix, transform, and build upon the material + +for any purpose, even commercially. + +The licensor cannot revoke these freedoms as long as you follow the license +terms. diff --git a/LICENSE b/LICENSE new file mode 100644 index 0000000000..7e3883593b --- /dev/null +++ b/LICENSE @@ -0,0 +1,349 @@ +Creative Commons Attribution-ShareAlike 4.0 International Public +License + +By exercising the Licensed Rights (defined below), You accept and agree +to be bound by the terms and conditions of this Creative Commons +Attribution-ShareAlike 4.0 International Public License ("Public +License"). To the extent this Public License may be interpreted as a +contract, You are granted the Licensed Rights in consideration of Your +acceptance of these terms and conditions, and the Licensor grants You +such rights in consideration of benefits the Licensor receives from +making the Licensed Material available under these terms and +conditions. + + +Section 1 -- Definitions. + + a. 
Adapted Material means material subject to Copyright and Similar + Rights that is derived from or based upon the Licensed Material + and in which the Licensed Material is translated, altered, + arranged, transformed, or otherwise modified in a manner requiring + permission under the Copyright and Similar Rights held by the + Licensor. For purposes of this Public License, where the Licensed + Material is a musical work, performance, or sound recording, + Adapted Material is always produced where the Licensed Material is + synched in timed relation with a moving image. + + b. Adapter's License means the license You apply to Your Copyright + and Similar Rights in Your contributions to Adapted Material in + accordance with the terms and conditions of this Public License. + + c. BY-SA Compatible License means a license listed at + creativecommons.org/compatiblelicenses, approved by Creative + Commons as essentially the equivalent of this Public License. + + d. Copyright and Similar Rights means copyright and/or similar rights + closely related to copyright including, without limitation, + performance, broadcast, sound recording, and Sui Generis Database + Rights, without regard to how the rights are labeled or + categorized. For purposes of this Public License, the rights + specified in Section 2(b)(1)-(2) are not Copyright and Similar + Rights. + + e. Effective Technological Measures means those measures that, in the + absence of proper authority, may not be circumvented under laws + fulfilling obligations under Article 11 of the WIPO Copyright + Treaty adopted on December 20, 1996, and/or similar international + agreements. + + f. Exceptions and Limitations means fair use, fair dealing, and/or + any other exception or limitation to Copyright and Similar Rights + that applies to Your use of the Licensed Material. + + g. License Elements means the license attributes listed in the name + of a Creative Commons Public License. The License Elements of this + Public License are Attribution and ShareAlike. + + h. Licensed Material means the artistic or literary work, database, + or other material to which the Licensor applied this Public + License. + + i. Licensed Rights means the rights granted to You subject to the + terms and conditions of this Public License, which are limited to + all Copyright and Similar Rights that apply to Your use of the + Licensed Material and that the Licensor has authority to license. + + j. Licensor means the individual(s) or entity(ies) granting rights + under this Public License. + + k. Share means to provide material to the public by any means or + process that requires permission under the Licensed Rights, such + as reproduction, public display, public performance, distribution, + dissemination, communication, or importation, and to make material + available to the public including in ways that members of the + public may access the material from a place and at a time + individually chosen by them. + + l. Sui Generis Database Rights means rights other than copyright + resulting from Directive 96/9/EC of the European Parliament and of + the Council of 11 March 1996 on the legal protection of databases, + as amended and/or succeeded, as well as other essentially + equivalent rights anywhere in the world. + + m. You means the individual or entity exercising the Licensed Rights + under this Public License. Your has a corresponding meaning. + + +Section 2 -- Scope. + + a. License grant. + + 1. 
Subject to the terms and conditions of this Public License, + the Licensor hereby grants You a worldwide, royalty-free, + non-sublicensable, non-exclusive, irrevocable license to + exercise the Licensed Rights in the Licensed Material to: + + a. reproduce and Share the Licensed Material, in whole or + in part; and + + b. produce, reproduce, and Share Adapted Material. + + 2. Exceptions and Limitations. For the avoidance of doubt, where + Exceptions and Limitations apply to Your use, this Public + License does not apply, and You do not need to comply with + its terms and conditions. + + 3. Term. The term of this Public License is specified in Section + 6(a). + + 4. Media and formats; technical modifications allowed. The + Licensor authorizes You to exercise the Licensed Rights in + all media and formats whether now known or hereafter created, + and to make technical modifications necessary to do so. The + Licensor waives and/or agrees not to assert any right or + authority to forbid You from making technical modifications + necessary to exercise the Licensed Rights, including + technical modifications necessary to circumvent Effective + Technological Measures. For purposes of this Public License, + simply making modifications authorized by this Section 2(a) + (4) never produces Adapted Material. + + 5. Downstream recipients. + + a. Offer from the Licensor -- Licensed Material. Every + recipient of the Licensed Material automatically + receives an offer from the Licensor to exercise the + Licensed Rights under the terms and conditions of this + Public License. + + b. Additional offer from the Licensor -- Adapted Material. + Every recipient of Adapted Material from You + automatically receives an offer from the Licensor to + exercise the Licensed Rights in the Adapted Material + under the conditions of the Adapter's License You apply. + + c. No downstream restrictions. You may not offer or impose + any additional or different terms or conditions on, or + apply any Effective Technological Measures to, the + Licensed Material if doing so restricts exercise of the + Licensed Rights by any recipient of the Licensed + Material. + + 6. No endorsement. Nothing in this Public License constitutes or + may be construed as permission to assert or imply that You + are, or that Your use of the Licensed Material is, connected + with, or sponsored, endorsed, or granted official status by, + the Licensor or others designated to receive attribution as + provided in Section 3(a)(1)(A)(i). + + b. Other rights. + + 1. Moral rights, such as the right of integrity, are not + licensed under this Public License, nor are publicity, + privacy, and/or other similar personality rights; however, to + the extent possible, the Licensor waives and/or agrees not to + assert any such rights held by the Licensor to the limited + extent necessary to allow You to exercise the Licensed + Rights, but not otherwise. + + 2. Patent and trademark rights are not licensed under this + Public License. + + 3. To the extent possible, the Licensor waives any right to + collect royalties from You for the exercise of the Licensed + Rights, whether directly or through a collecting society + under any voluntary or waivable statutory or compulsory + licensing scheme. In all other cases the Licensor expressly + reserves any right to collect such royalties. + + +Section 3 -- License Conditions. + +Your exercise of the Licensed Rights is expressly made subject to the +following conditions. + + a. Attribution. + + 1. 
If You Share the Licensed Material (including in modified + form), You must: + + a. retain the following if it is supplied by the Licensor + with the Licensed Material: + + i. identification of the creator(s) of the Licensed + Material and any others designated to receive + attribution, in any reasonable manner requested by + the Licensor (including by pseudonym if + designated); + + ii. a copyright notice; + + iii. a notice that refers to this Public License; + + iv. a notice that refers to the disclaimer of + warranties; + + v. a URI or hyperlink to the Licensed Material to the + extent reasonably practicable; + + b. indicate if You modified the Licensed Material and + retain an indication of any previous modifications; and + + c. indicate the Licensed Material is licensed under this + Public License, and include the text of, or the URI or + hyperlink to, this Public License. + + 2. You may satisfy the conditions in Section 3(a)(1) in any + reasonable manner based on the medium, means, and context in + which You Share the Licensed Material. For example, it may be + reasonable to satisfy the conditions by providing a URI or + hyperlink to a resource that includes the required + information. + + 3. If requested by the Licensor, You must remove any of the + information required by Section 3(a)(1)(A) to the extent + reasonably practicable. + + b. ShareAlike. + + In addition to the conditions in Section 3(a), if You Share + Adapted Material You produce, the following conditions also apply. + + 1. The Adapter's License You apply must be a Creative Commons + license with the same License Elements, this version or + later, or a BY-SA Compatible License. + + 2. You must include the text of, or the URI or hyperlink to, the + Adapter's License You apply. You may satisfy this condition + in any reasonable manner based on the medium, means, and + context in which You Share Adapted Material. + + 3. You may not offer or impose any additional or different terms + or conditions on, or apply any Effective Technological + Measures to, Adapted Material that restrict exercise of the + rights granted under the Adapter's License You apply. + + +Section 4 -- Sui Generis Database Rights. + +Where the Licensed Rights include Sui Generis Database Rights that +apply to Your use of the Licensed Material: + + a. for the avoidance of doubt, Section 2(a)(1) grants You the right + to extract, reuse, reproduce, and Share all or a substantial + portion of the contents of the database; + + b. if You include all or a substantial portion of the database + contents in a database in which You have Sui Generis Database + Rights, then the database in which You have Sui Generis Database + Rights (but not its individual contents) is Adapted Material, + + including for purposes of Section 3(b); and + c. You must comply with the conditions in Section 3(a) if You Share + all or a substantial portion of the contents of the database. + +For the avoidance of doubt, this Section 4 supplements and does not +replace Your obligations under this Public License where the Licensed +Rights include other Copyright and Similar Rights. + + +Section 5 -- Disclaimer of Warranties and Limitation of Liability. + + a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE + EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS + AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF + ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, + IMPLIED, STATUTORY, OR OTHER. 
THIS INCLUDES, WITHOUT LIMITATION, + WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR + PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, + ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT + KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT + ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU. + + b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE + TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, + NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, + INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, + COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR + USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN + ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR + DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR + IN PART, THIS LIMITATION MAY NOT APPLY TO YOU. + + c. The disclaimer of warranties and limitation of liability provided + above shall be interpreted in a manner that, to the extent + possible, most closely approximates an absolute disclaimer and + waiver of all liability. + + +Section 6 -- Term and Termination. + + a. This Public License applies for the term of the Copyright and + Similar Rights licensed here. However, if You fail to comply with + this Public License, then Your rights under this Public License + terminate automatically. + + b. Where Your right to use the Licensed Material has terminated under + Section 6(a), it reinstates: + + 1. automatically as of the date the violation is cured, provided + it is cured within 30 days of Your discovery of the + violation; or + + 2. upon express reinstatement by the Licensor. + + For the avoidance of doubt, this Section 6(b) does not affect any + right the Licensor may have to seek remedies for Your violations + of this Public License. + + c. For the avoidance of doubt, the Licensor may also offer the + Licensed Material under separate terms or conditions or stop + distributing the Licensed Material at any time; however, doing so + will not terminate this Public License. + + d. Sections 1, 5, 6, 7, and 8 survive termination of this Public + License. + + +Section 7 -- Other Terms and Conditions. + + a. The Licensor shall not be bound by any additional or different + terms or conditions communicated by You unless expressly agreed. + + b. Any arrangements, understandings, or agreements regarding the + Licensed Material not stated herein are separate from and + independent of the terms and conditions of this Public License. + + +Section 8 -- Interpretation. + + a. For the avoidance of doubt, this Public License does not, and + shall not be interpreted to, reduce, limit, restrict, or impose + conditions on any use of the Licensed Material that could lawfully + be made without permission under this Public License. + + b. To the extent possible, if any provision of this Public License is + deemed unenforceable, it shall be automatically reformed to the + minimum extent necessary to make it enforceable. If the provision + cannot be reformed, it shall be severed from this Public License + without affecting the enforceability of the remaining terms and + conditions. + + c. No term or condition of this Public License will be waived and no + failure to comply consented to unless expressly agreed to by the + Licensor. + + d. 
Nothing in this Public License constitutes or may be interpreted + as a limitation upon, or waiver of, any privileges and immunities + that apply to the Licensor or You, including from the legal + processes of any jurisdiction or authority. diff --git a/README.md b/README.md index 7843b35a56..1a93bea2d1 100644 --- a/README.md +++ b/README.md @@ -1,92 +1,120 @@ -Redis documentation -=== +# Redis documentation +> **Important**: This repository has been replaced by the new [Redis docs](https://github.com/redis/docs) repository and will be archived soon. -Clients ---- -All clients are listed in the `clients.json` file. Each key in the JSON -object represents a single client library. For example: +## License vs Trademarks - "Rediska": { +OPEN SOURCE LICENSE VS. TRADEMARKS. The three-clause BSD license gives you the right to redistribute and use the software in source and binary forms, with or without modification, under certain conditions. However, open source licenses like the three-clause BSD license do not address trademarks. For further details, please read the [Redis Trademark Policy](https://www.redis.com/legal/trademark-policy). - # A programming language should be specified. - "language": "PHP", +## Clients - # If the project has a website of its own, put it here. - # Otherwise, lose the "url" key. - "url": "http://rediska.geometria-lab.net", +All clients are listed under language-specific sub-folders of [clients](./clients). - # A URL pointing to the repository where users can - # find the code. - "repository": "http://github.com/Shumkov/Rediska", +The path follows the pattern: ``clients/{language}/github.com/{owner}/{repository}.json``. +The ``{language}`` component of the path is the path-safe representation +of the full language name, which is mapped in [languages.json](./languages.json). - # A short, free-text description of the client. - # Should be objective. The goal is to help users - # choose the correct client they need. - "description": "A PHP client", +Each client's JSON object represents the details displayed on the [clients documentation page](https://redis.io/docs/clients) (a layout-validation sketch follows this README diff). - # An array of Twitter usernames for the authors - # and maintainers of the library. - "authors": ["shumkov"] +For example [clients/python/github.com/redis](./clients/python/github.com/redis/redis-py.json): - } +``` +{ + "name": "redis-py", + "description": "Mature and supported. Currently the way to go for Python.", + "recommended": true +} +``` +## Commands -Commands ---- - -Redis commands are described in the `commands.json` file. +Redis commands are described in the `commands.json` file that is auto-generated +from the Redis repo based on the JSON files in the commands folder. +See: https://github.com/redis/redis/tree/unstable/src/commands +See: https://github.com/redis/redis/tree/unstable/utils/generate-commands-json.py For each command there's a Markdown file with a complete, human-readable -description. We process this Markdown to provide a better experience, so -some things to take into account: - -* Inside text, all commands should be written in all caps, in between -backticks. For example: `INCR`. - -* You can use some magic keywords to name common elements in Redis. For -example: `@multi-bulk-reply`. These keywords will get expanded and -auto-linked to relevant parts of the documentation. - -There should be at least three predefined sections: time complexity, -description and return value. 
These sections are marked using magic -keywords, too: - - @complexity - - O(n), where N is the number of keys in the database. - - - @description - - Returns all keys matching the given pattern. - - - @return - - @multi-bulk-reply: all the keys that matched the pattern. - - -Styling guidelines ---- - -Please wrap your text to 80 characters. You can easily accomplish this -using a CLI tool called `par`. - - -Checking your work ---- - -Once you're done, the very least you should do is make sure that all -files compile properly. You can do this by running Rake inside your -working directory. - - $ rake - -Additionally, if you have [Aspell](http://aspell.net/) installed, you -can spell check the documentation: - - $ rake spellcheck - -Exceptions can be added to `./wordlist`. +description. +We process this Markdown to provide a better experience, so some things to take +into account: + +* Inside text, all commands should be written in all caps, in between + backticks. + For example: `INCR`. + +* You can use some magic keywords to name common elements in Redis. + For example: `@multi-bulk-reply`. + These keywords will get expanded and auto-linked to relevant parts of the + documentation. + +Each command will have a description and both RESP2 and RESP3 return values. +The return values are contained in the following files: + +* `resp2_replies.json` +* `resp3_replies.json` + +Each file is a dictionary, and both files contain the same set of keys. Each key maps to an array of strings that, +when processed, produces Markdown content. Here's an example: + +``` +{ + ... + "ACL CAT": [ + "One of the following:", + "* [Array reply](/docs/reference/protocol-spec#arrays): an array of [Bulk string reply](/docs/reference/protocol-spec#bulk-strings) elements representing ACL categories or commands in a given category.", + "* [Simple error reply](/docs/reference/protocol-spec#simple-errors): the command returns an error if an invalid category name is given." + ], + ... +} +``` + +**Important**: when adding or editing return values, be sure to edit both files (a key-consistency sketch follows this README diff). Use the following +links for the reply type. Note: do not use `@reply-type` specifiers; use only the Markdown link. 
+ +```md +@simple-string-reply: [Simple string reply](https://redis.io/docs/reference/protocol-spec#simple-strings) +@simple-error-reply: [Simple error reply](https://redis.io/docs/reference/protocol-spec#simple-errors) +@integer-reply: [Integer reply](https://redis.io/docs/reference/protocol-spec#integers) +@bulk-string-reply: [Bulk string reply](https://redis.io/docs/reference/protocol-spec#bulk-strings) +@array-reply: [Array reply](https://redis.io/docs/reference/protocol-spec#arrays) +@nil-reply: [Nil reply](https://redis.io/docs/reference/protocol-spec#bulk-strings) +@null-reply: [Null reply](https://redis.io/docs/reference/protocol-spec#nulls) +@boolean-reply: [Boolean reply](https://redis.io/docs/reference/protocol-spec#booleans) +@double-reply: [Double reply](https://redis.io/docs/reference/protocol-spec#doubles) +@big-number-reply: [Big number reply](https://redis.io/docs/reference/protocol-spec#big-numbers) +@bulk-error-reply: [Bulk error reply](https://redis.io/docs/reference/protocol-spec#bulk-errors) +@verbatim-string-reply: [Verbatim string reply](https://redis.io/docs/reference/protocol-spec#verbatim-strings) +@map-reply: [Map reply](https://redis.io/docs/reference/protocol-spec#maps) +@set-reply: [Set reply](https://redis.io/docs/reference/protocol-spec#sets) +@push-reply: [Push reply](https://redis.io/docs/reference/protocol-spec#pushes) +``` + +**Note:** RESP3 return schemas are not currently included in the `resp2/resp3_replies.json` files for Redis Stack modules. + +## Styling guidelines + +Please use the following formatting rules (aiming for smaller diffs that are easier to review): + +* There is no need to wrap lines manually at any specific length; + doing so usually means that adding a word creates a cascade effect and changes other lines. +* Please avoid writing lines that are too long; + this makes the diff harder to review when only one word is changed. +* Start every sentence on a new line. + + +## Checking your work + +After making changes to the documentation, you can use the [spellchecker-cli package](https://www.npmjs.com/package/spellchecker-cli) to validate your spelling and catch some minor grammatical errors. You can install the spellchecker locally by running: + +```bash +npm install --global spellchecker-cli +``` + +You can then validate your spelling by running the following: + +```bash +spellchecker --no-suggestions -f '**/*.md' -l en-US -q -d wordlist +``` + +Any exceptions you need for spelling can be added to the `wordlist` file. 
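The Clients layout described above is machine-checkable. Below is the layout-validation sketch referenced in that section: an illustrative script, not part of the repository's tooling, that assumes it is run from the repository root and only requires each file under `clients/` to parse as JSON and to carry a `name` field (roughly what the deleted Rakefile's `parse` task verified for the old `clients.json`). The script name and the choice of required field are illustrative assumptions.

```python
# check_clients.py -- illustrative sketch, not part of the repository's tooling.
# Walks clients/{language}/{host}/{owner}/{repository}.json from the repo root.
import json
import sys
from pathlib import Path

def check_clients(root: str = "clients") -> list:
    problems = []
    for path in sorted(Path(root).rglob("*.json")):
        try:
            client = json.loads(path.read_text(encoding="utf-8"))
        except json.JSONDecodeError as exc:
            problems.append(f"{path}: invalid JSON ({exc})")
            continue
        # Every client file in this diff carries a "name"; treat it as required.
        if not isinstance(client, dict) or "name" not in client:
            problems.append(f"{path}: missing required 'name' field")
    return problems

if __name__ == "__main__":
    problems = check_clients()
    for problem in problems:
        print(problem)
    sys.exit(1 if problems else 0)
```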
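Likewise, here is the key-consistency sketch referenced in the Commands section: since `resp2_replies.json` and `resp3_replies.json` must be edited together, comparing their key sets flags any command documented in one file but not the other. Again illustrative only, assuming both files sit in the repository root as the README states.

```python
# check_replies.py -- illustrative sketch, not part of the repository's tooling.
import json

def load_keys(path):
    # Each replies file is a dictionary mapping a command name to an array of
    # Markdown strings; only the keys matter for this consistency check.
    with open(path, encoding="utf-8") as f:
        return set(json.load(f))

resp2 = load_keys("resp2_replies.json")
resp3 = load_keys("resp3_replies.json")

for label, missing in (("resp3_replies.json", resp2 - resp3),
                       ("resp2_replies.json", resp3 - resp2)):
    for command in sorted(missing):
        print(f"{command}: missing from {label}")
```

If the script prints nothing, the two files have matching key sets.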
diff --git a/Rakefile b/Rakefile deleted file mode 100644 index ffd9609ffb..0000000000 --- a/Rakefile +++ /dev/null @@ -1,41 +0,0 @@ -task :default => [:parse, :spellcheck] - -task :parse do - require "json" - require "batch" - require "rdiscount" - - Batch.each(Dir["**/*.json"] + Dir["**/*.md"]) do |file| - if File.extname(file) == ".md" - RDiscount.new(File.read(file)).to_html - else - JSON.parse(File.read(file)) - end - end -end - -task :spellcheck do - require "json" - - `mkdir -p tmp` - - IO.popen("aspell --lang=en create master ./tmp/dict", "w") do |io| - io.puts(JSON.parse(File.read("commands.json")).keys.map(&:split).flatten.join("\n")) - io.puts(File.read("wordlist")) - end - - Dir["**/*.md"].each do |file| - command = %q{ - ruby -pe 'gsub /^ .*$/, ""' | - ruby -pe 'gsub /`[^`]+`/, ""' | - ruby -e 'puts $stdin.read.gsub /\[([^\]]+)\]\(([^\)]+)\)/m, "\\1"' | - aspell -H -a --extra-dicts=./tmp/dict 2>/dev/null - } - - words = `cat '#{file}' | #{command}`.lines.map do |line| - line[/^& ([^ ]+)/, 1] - end.compact - - puts "#{file}: #{words.uniq.sort.join(" ")}" if words.any? - end -end diff --git a/_index.md b/_index.md new file mode 100644 index 0000000000..12d73aefa9 --- /dev/null +++ b/_index.md @@ -0,0 +1,6 @@ +--- +title: Redis +linkTitle: Redis +--- + +Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes, and streams. Redis has built-in replication, Lua scripting, LRU eviction, transactions, and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster. [Learn more →](/topics/introduction) diff --git a/clients.json b/clients.json deleted file mode 100644 index 029c91e320..0000000000 --- a/clients.json +++ /dev/null @@ -1,454 +0,0 @@ -[ - { - "name": "redis-rb", - "language": "Ruby", - "url": "http://redis-rb.keyvalue.org", - "repository": "https://github.com/ezmobius/redis-rb", - "description": "Very stable and mature client. 
Install and require the hiredis gem before redis-rb for maximum performances.", - "authors": ["ezmobius", "soveran", "djanowski", "pnoordhuis"], - "recommended": true - }, - - { - "name": "as3redis", - "language": "ActionScript", - "repository": "https://github.com/claus/as3redis", - "description": "", - "authors": ["cwahlers"] - }, - - { - "name": "redis-clojure", - "language": "Clojure", - "repository": "https://github.com/tavisrudd/redis-clojure", - "description": "", - "authors": ["tavisrudd"] - }, - - { - "name": "CL-Redis", - "language": "Common Lisp", - "url": "http://www.cliki.net/cl-redis", - "repository": "https://github.com/vseloved/cl-redis", - "description": "", - "authors": ["BigThingist"] - }, - - { - "name": "Erldis", - "language": "Erlang", - "repository": "https://github.com/japerk/erldis", - "description": "", - "authors": ["dialtone_","japerk"] - }, - - { - "name": "Eredis", - "language": "Erlang", - "repository": "https://github.com/wooga/eredis", - "description": "Redis client with a focus on performance", - "authors": ["wooga"] - }, - - { - "name": "redis.fy", - "language": "Fancy", - "repository": "https://github.com/bakkdoor/redis.fy", - "description": "A Fancy Redis client library", - "authors": ["bakkdoor"] - }, - - { - "name": "Go-Redis", - "language": "Go", - "repository": "https://github.com/alphazero/Go-Redis", - "description": "", - "authors": ["SunOf27"] - }, - - { - "name": "Tideland RDC", - "language": "Go", - "repository": "http://code.google.com/p/tideland-rdc/", - "description": "", - "authors": ["themue"] - }, - - { - "name": "godis", - "language": "Go", - "repository": "https://github.com/simonz05/godis", - "description": "", - "authors": ["simonz05"] - }, - - { - "name": "redis", - "language": "Haskell", - "url": "http://hackage.haskell.org/package/redis", - "description": "", - "authors": [] - }, - - { - "name": "haskell-redis", - "language": "Haskell", - "url": "http://bitbucket.org/videlalvaro/redis-haskell/wiki/Home", - "repository": "http://bitbucket.org/videlalvaro/redis-haskell/src", - "description": "Not actively maintained, supports Redis <= 2.0.", - "authors": ["old_sound"] - }, - - { - "name": "Jedis", - "language": "Java", - "repository": "https://github.com/xetorthio/jedis", - "description": "", - "authors": ["xetorthio"], - "recommended": true - }, - - { - "name": "JRedis", - "language": "Java", - "url": "http://code.google.com/p/jredis", - "repository": "https://github.com/alphazero/jredis", - "description": "", - "authors": ["SunOf27"] - }, - - { - "name": "JDBC-Redis", - "language": "Java", - "url": "http://code.google.com/p/jdbc-redis", - "repository": "http://code.google.com/p/jdbc-redis/source/browse", - "description": "", - "authors": ["mavcunha"] - }, - - { - "name": "RJC", - "language": "Java", - "repository": "https://github.com/e-mzungu/rjc", - "description": "", - "authors": ["e_mzungu"] - }, - - { - "name": "redis-lua", - "language": "Lua", - "repository": "https://github.com/nrk/redis-lua", - "description": "", - "authors": ["JoL1hAHN"], - "recommended": true - }, - - { - "name": "lua-hiredis", - "language": "Lua", - "repository": "https://github.com/agladysh/lua-hiredis", - "description": "Lua bindings for the hiredis library", - "authors": ["agladysh"] - }, - - { - "name": "Redis", - "language": "Perl", - "url": 
"http://search.cpan.org/dist/Redis", - "repository": "https://github.com/melo/perl-redis", - "description": "Perl binding for Redis database", - "authors": ["pedromelo"], - "recommended": true - }, - - { - "name": "Redis::hiredis", - "language": "Perl", - "url": "http://search.cpan.org/dist/Redis-hiredis/", - "description": "Perl binding for the hiredis C client", - "authors": ["neophenix"] - }, - - { - "name": "AnyEvent::Redis", - "language": "Perl", - "url": "http://search.cpan.org/dist/AnyEvent-Redis", - "repository": "https://github.com/miyagawa/AnyEvent-Redis", - "description": "Non-blocking Redis client", - "authors": ["miyagawa"] - }, - - { - "name": "MojoX::Redis", - "language": "Perl", - "url": "http://search.cpan.org/dist/MojoX-Redis", - "repository": "https://github.com/und3f/mojox-redis", - "description": "asynchronous Redis client for Mojolicious", - "authors": ["und3f"] - }, - - { - "name": "Danga::Socket::Redis", - "language": "Perl", - "url": "http://search.cpan.org/dist/Danga-Socket-Redis", - "description": "An asynchronous redis client using the Danga::Socket async library", - "authors": ["martinredmond"] - }, - - { - "name": "Predis", - "language": "PHP", - "repository": "https://github.com/nrk/predis", - "description": "Mature and supported", - "authors": ["JoL1hAHN"], - "recommended": true - }, - - { - "name": "phpredis", - "language": "PHP", - "repository": "https://github.com/nicolasff/phpredis", - "description": "This is a client written in C as a PHP module.", - "authors": ["yowgi"], - "recommended": true - }, - - { - "name": "Rediska", - "language": "PHP", - "url": "http://rediska.geometria-lab.net", - "repository": "https://github.com/Shumkov/Rediska", - "description": "", - "authors": ["shumkov"] - }, - - { - "name": "RedisServer", - "language": "PHP", - "repository": "https://github.com/jamm/Memory/blob/master/RedisServer.php", - "description": "Standalone and full-featured class for Redis in PHP", - "authors": ["OZ"] - }, - - { - "name": "Redisent", - "language": "PHP", - "repository": "https://github.com/jdp/redisent", - "description": "", - "authors": ["justinpoliey"] - }, - - { - "name": "redis-py", - "language": "Python", - "repository": "https://github.com/andymccurdy/redis-py", - "description": "Mature and supported. 
Currently the way to go for Python.", - "authors": ["andymccurdy"], - "recommended": true - }, - - { - "name": "txredis", - "language": "Python", - "url": "http://pypi.python.org/pypi/txredis/0.1.1", - "description": "", - "authors": ["dio_rian"] - }, - - { - "name": "desir", - "language": "Python", - "repository": "https://github.com/aallamaa/desir", - "description": "", - "authors": ["aallamaa"] - }, - - { - "name": "scala-redis", - "language": "Scala", - "repository": "https://github.com/acrosa/scala-redis", - "description": "", - "authors": ["alejandrocrosa"] - }, - - { - "name": "scala-redis", - "language": "Scala", - "repository": "https://github.com/debasishg/scala-redis", - "description": "Apparently a fork of the original client from @alejandrocrosa", - "authors": ["debasishg"], - "recommended": true - }, - - { - "name": "redis-client-scala-netty", - "language": "Scala", - "repository": "https://github.com/andreyk0/redis-client-scala-netty", - "description": "", - "authors": [""] - }, - { - "name": "sedis", - "language": "Scala", - "repository": "https://github.com/pk11/sedis", - "description": "a thin scala wrapper for the popular Redis Java client, Jedis", - "authors": ["pk11"] - }, - - { - "name": "Tcl Client", - "language": "Tcl", - "repository": "https://github.com/antirez/redis/blob/master/tests/support/redis.tcl", - "description": "The client used in the Redis test suite.", - "authors": ["antirez"] - }, - - { - "name": "ServiceStack.Redis", - "language": "C#", - "url": "https://github.com/ServiceStack/ServiceStack.Redis", - "description": "This is a fork and improvement of the original C# client written by Miguel De Icaza.", - "authors": ["demisbellot"], - "recommended": true - }, - - { - "name": "Booksleeve", - "language": "C#", - "url": "http://code.google.com/p/booksleeve/", - "description": "This client was developed by Stack Exchange for very high performance needs.", - "authors": ["marcgravell"], - "recommended": true - }, - - { - "name": "Sider", - "language": "C#", - "url": "http://nuget.org/List/Packages/Sider", - "description": "Minimalistic client for C#/.NET 4.0", - "authors": ["chakrit"] - }, - - { - "name": "hxneko-redis", - "language": "haXe", - "url": "http://code.google.com/p/hxneko-redis", - "repository": "http://code.google.com/p/hxneko-redis/source/browse", - "description": "", - "authors": [] - }, - - { - "name": "em-redis", - "language": "Ruby", - "repository": "https://github.com/madsimian/em-redis", - "description": "", - "authors": ["madsimian"] - }, - - { - "name": "hiredis", - "language": "C", - "repository": "https://github.com/antirez/hiredis", - "description": "This is the official C client. 
Support for the whole command set, pipelining, event driven programming.", - "authors": ["antirez","pnoordhuis"], - "recommended": true - }, - - { - "name": "credis", - "language": "C", - "repository": "http://code.google.com/p/credis/source/browse", - "description": "", - "authors": [""] - }, - - { - "name": "node_redis", - "language": "Node.js", - "repository": "https://github.com/mranney/node_redis", - "description": "Recommended client for node.", - "authors": ["mranney"], - "recommended": true - }, - - { - "name": "redis-node-client", - "language": "Node.js", - "repository": "https://github.com/fictorial/redis-node-client", - "description": "No longer maintained, does not work with node 0.3.", - "authors": ["fictorial"] - }, - - { - "name": "iodis", - "language": "Io", - "repository": "https://github.com/vangberg/iodis", - "description": "", - "authors": ["ichverstehe"] - }, - - { - "name": "redis.go", - "language": "Go", - "repository": "https://github.com/hoisie/redis.go", - "description": "", - "authors": ["hoisie"] - }, - - { - "name": "Smalltalk Redis Client", - "language": "Smalltalk", - "repository": "http://www.squeaksource.com/Redis.html", - "description": "", - "authors": [] - }, - - { - "name": "TeamDev Redis Client", - "language": "C#", - "repository": "http://redis.codeplex.com/", - "description": "Redis Client is based on redis-sharp for the basic communication functions, but it offers some differences.", - "authors": ["TeamDevPerugia"] - }, - - { - "name": "redis-sharp", - "language": "C#", - "repository": "https://github.com/migueldeicaza/redis-sharp", - "description": "", - "authors": ["migueldeicaza"] - }, - - { - "name": "ObjCHiredis", - "language": "Objective-C", - "repository": "https://github.com/lp/ObjCHiredis", - "description": "Static Library for iOS4 device and Simulator, plus Objective-C Framework for MacOS 10.5 and higher", - "authors": ["loopole"] - }, - - { - "name": "Puredis", - "language": "Pure Data", - "repository": "https://github.com/lp/puredis", - "description": "Pure Data Redis sync, async and subscriber client", - "authors": ["loopole"] - }, - - { - "name": "C++ Client", - "language": "C++", - "repository": "https://github.com/mrpi/redis-cplusplus-client", - "authors": [] - }, - - { - "name": "libredis", - "language": "C", - "repository": "https://github.com/toymachine/libredis", - "description": "Support for executing commands on multiple servers in parallel via poll(2), ketama hashing. Includes PHP bindings.", - "authors": [] - } - -] diff --git a/clients/actionscript/github.com/mikeheier/Redis-AS3.json b/clients/actionscript/github.com/mikeheier/Redis-AS3.json new file mode 100644 index 0000000000..b5d7b29384 --- /dev/null +++ b/clients/actionscript/github.com/mikeheier/Redis-AS3.json @@ -0,0 +1,4 @@ +{ + "name": "Redis-AS3", + "description": "An as3 client library for redis." 
+} \ No newline at end of file diff --git a/clients/activex-com/gitlab.com/erik4/redis-com-client.json b/clients/activex-com/gitlab.com/erik4/redis-com-client.json new file mode 100644 index 0000000000..4ae1edd2a2 --- /dev/null +++ b/clients/activex-com/gitlab.com/erik4/redis-com-client.json @@ -0,0 +1,6 @@ +{ + "name": "Redis COM client", + "description": "A COM wrapper for StackExchange.Redis that allows using Redis from a COM environment like Classic ASP (ASP 3.0) using VBScript, JScript or any other COM-capable language.", + "recommended": true, + "homepage": "https://gitlab.com/erik4/redis-com-client" +} \ No newline at end of file diff --git a/clients/ballerina/github.com/ballerina-platform/module-ballerinax-redis.json b/clients/ballerina/github.com/ballerina-platform/module-ballerinax-redis.json new file mode 100644 index 0000000000..49340f841d --- /dev/null +++ b/clients/ballerina/github.com/ballerina-platform/module-ballerinax-redis.json @@ -0,0 +1,7 @@ +{ + "name": "Ballerina Redis Client", + "description": "Official Redis client for the Ballerina language, with support for Redis clusters, connection pooling and secure connections.", + "homepage": "https://central.ballerina.io/ballerinax/redis/latest", + "repository": "https://github.com/ballerina-platform/module-ballerinax-redis", + "recommended": true +} diff --git a/clients/bash/github.com/SomajitDey/redis-client.json b/clients/bash/github.com/SomajitDey/redis-client.json new file mode 100644 index 0000000000..377998d85e --- /dev/null +++ b/clients/bash/github.com/SomajitDey/redis-client.json @@ -0,0 +1,4 @@ +{ + "name": "redis-client", + "description": "extensible client library for Bash scripting or command-line + connection pooling + redis-cli" +} \ No newline at end of file diff --git a/clients/bash/github.com/caquino/redis-bash.json b/clients/bash/github.com/caquino/redis-bash.json new file mode 100644 index 0000000000..319f6fed0c --- /dev/null +++ b/clients/bash/github.com/caquino/redis-bash.json @@ -0,0 +1,7 @@ +{ + "name": "redis-bash", + "description": "Bash library and example client to access Redis databases", + "twitter": [ + "syshero" + ] +} diff --git a/clients/bash/github.com/crypt1d/redi.sh.json b/clients/bash/github.com/crypt1d/redi.sh.json new file mode 100644 index 0000000000..3e6606b634 --- /dev/null +++ b/clients/bash/github.com/crypt1d/redi.sh.json @@ -0,0 +1,7 @@ +{ + "name": "Redi.sh", + "description": "Simple Bash-based Redis client to store your script's variables", + "twitter": [ + "nkrzalic" + ] +} \ No newline at end of file diff --git a/clients/boomi/github.com/zachary-samsel/boomi-redis-connector.json b/clients/boomi/github.com/zachary-samsel/boomi-redis-connector.json new file mode 100644 index 0000000000..27ae739d29 --- /dev/null +++ b/clients/boomi/github.com/zachary-samsel/boomi-redis-connector.json @@ -0,0 +1,4 @@ +{ + "name": "Redis Connector for Dell Boomi", + "description": "A custom connector for Dell Boomi that utilizes the lettuce.io Java client to add Redis client support to the Dell Boomi iPaaS." 
+} \ No newline at end of file diff --git a/clients/c/code.google.com/p/credis/source/browse.json b/clients/c/code.google.com/p/credis/source/browse.json new file mode 100644 index 0000000000..3899f98515 --- /dev/null +++ b/clients/c/code.google.com/p/credis/source/browse.json @@ -0,0 +1,4 @@ +{ + "name": "credis", + "description": "A Redis client." +} diff --git a/clients/c/github.com/EulerianTechnologies/eredis.json b/clients/c/github.com/EulerianTechnologies/eredis.json new file mode 100644 index 0000000000..19e65b3524 --- /dev/null +++ b/clients/c/github.com/EulerianTechnologies/eredis.json @@ -0,0 +1,7 @@ +{ + "name": "eredis", + "description": "Fast and light Redis C client library extending Hiredis: thread-safe, write replication, auto-reconnect, sync pool, async libev", + "twitter": [ + "EulerianTech" + ] +} \ No newline at end of file diff --git a/clients/c/github.com/Nordix/hiredis-cluster.json b/clients/c/github.com/Nordix/hiredis-cluster.json new file mode 100644 index 0000000000..3865284663 --- /dev/null +++ b/clients/c/github.com/Nordix/hiredis-cluster.json @@ -0,0 +1,5 @@ +{ + "name": "hiredis-cluster", + "description": "This is an updated fork of hiredis-cluster, the C client for Redis Cluster, with added TLS and AUTH support, decoupling hiredis as an external dependency, leak corrections and improved testing.", + "recommended": true +} \ No newline at end of file diff --git a/clients/c/github.com/aclisp/hiredispool.json b/clients/c/github.com/aclisp/hiredispool.json new file mode 100644 index 0000000000..4fca97757e --- /dev/null +++ b/clients/c/github.com/aclisp/hiredispool.json @@ -0,0 +1,4 @@ +{ + "name": "hiredispool", + "description": "Provides connection pooling and auto-reconnect for hiredis. It is also minimalistic and easy to customize." +} \ No newline at end of file diff --git a/clients/c/github.com/redis/hiredis.json b/clients/c/github.com/redis/hiredis.json new file mode 100644 index 0000000000..334dd3d958 --- /dev/null +++ b/clients/c/github.com/redis/hiredis.json @@ -0,0 +1,10 @@ +{ + "name": "hiredis", + "description": "This is the official C client. Support for the whole command set, pipelining, event-driven programming.", + "recommended": true, + "twitter": [ + "antirez", + "pnoordhuis", + "badboy_" + ] +} \ No newline at end of file diff --git a/clients/c/github.com/toymachine/libredis.json b/clients/c/github.com/toymachine/libredis.json new file mode 100644 index 0000000000..a17161e4c8 --- /dev/null +++ b/clients/c/github.com/toymachine/libredis.json @@ -0,0 +1,4 @@ +{ + "name": "libredis", + "description": "Support for executing commands on multiple servers in parallel via poll(2), ketama hashing. Includes PHP bindings." +} \ No newline at end of file diff --git a/clients/c/github.com/vipshop/hiredis-vip.json b/clients/c/github.com/vipshop/hiredis-vip.json new file mode 100644 index 0000000000..27ffcfc83b --- /dev/null +++ b/clients/c/github.com/vipshop/hiredis-vip.json @@ -0,0 +1,7 @@ +{ + "name": "hiredis-vip", + "description": "This was the original C client for Redis Cluster. Support for synchronous and asynchronous APIs, MSET/MGET/DEL, pipelining. 
Built around an outdated version of hiredis.", + "twitter": [ + "diguo58" + ] +} \ No newline at end of file diff --git a/clients/clojure/github.com/ptaoussanis/carmine.json b/clients/clojure/github.com/ptaoussanis/carmine.json new file mode 100644 index 0000000000..557d77c4e0 --- /dev/null +++ b/clients/clojure/github.com/ptaoussanis/carmine.json @@ -0,0 +1,8 @@ +{ + "name": "carmine", + "description": "Simple, high-performance Redis (2.0+) client for Clojure.", + "recommended": true, + "twitter": [ + "ptaoussanis" + ] +} \ No newline at end of file diff --git a/clients/common-lisp/github.com/vseloved/cl-redis.json b/clients/common-lisp/github.com/vseloved/cl-redis.json new file mode 100644 index 0000000000..7e8e51ccf6 --- /dev/null +++ b/clients/common-lisp/github.com/vseloved/cl-redis.json @@ -0,0 +1,8 @@ +{ + "name": "CL-Redis", + "description": "A Redis client.", + "homepage": "http://www.cliki.net/cl-redis", + "twitter": [ + "BigThingist" + ] +} diff --git a/clients/cpp/github.com/0xsky/xredis.json b/clients/cpp/github.com/0xsky/xredis.json new file mode 100644 index 0000000000..1589652db3 --- /dev/null +++ b/clients/cpp/github.com/0xsky/xredis.json @@ -0,0 +1,8 @@ +{ + "name": "xredis", + "description": "Redis C++ client with data slice storage, Redis cluster, connection pool, master replica connection, read/write separation; requires hiredis only", + "homepage": "http://xredis.0xsky.com/", + "twitter": [ + "0xsky" + ] +} \ No newline at end of file diff --git a/clients/cpp/github.com/Levhav/SimpleRedisClient.json b/clients/cpp/github.com/Levhav/SimpleRedisClient.json new file mode 100644 index 0000000000..d2b49c40d9 --- /dev/null +++ b/clients/cpp/github.com/Levhav/SimpleRedisClient.json @@ -0,0 +1,7 @@ +{ + "name": "SimpleRedisClient", + "description": "Simple Redis client for C++", + "twitter": [ + "Levhav" + ] +} \ No newline at end of file diff --git a/clients/cpp/github.com/acl-dev/acl/tree/master/lib_acl_cpp/include/acl_cpp/redis.json b/clients/cpp/github.com/acl-dev/acl/tree/master/lib_acl_cpp/include/acl_cpp/redis.json new file mode 100644 index 0000000000..80e2a04c7a --- /dev/null +++ b/clients/cpp/github.com/acl-dev/acl/tree/master/lib_acl_cpp/include/acl_cpp/redis.json @@ -0,0 +1,8 @@ +{ + "name": "acl-redis", + "description": "Standard C++ Redis Client with high performance and stl-like interface, supporting Redis Cluster, thread safety", + "homepage": "https://github.com/acl-dev/acl/tree/master/lib_acl_cpp/samples/redis", + "twitter": [ + "zhengshuxin" + ] +} \ No newline at end of file diff --git a/clients/cpp/github.com/basiliscos/cpp-bredis.json b/clients/cpp/github.com/basiliscos/cpp-bredis.json new file mode 100644 index 0000000000..48293f5bf1 --- /dev/null +++ b/clients/cpp/github.com/basiliscos/cpp-bredis.json @@ -0,0 +1,4 @@ +{ + "name": "bredis", + "description": "Boost::ASIO low-level redis client" +} \ No newline at end of file diff --git a/clients/cpp/github.com/cpp-redis/cpp_redis.json b/clients/cpp/github.com/cpp-redis/cpp_redis.json new file mode 100644 index 0000000000..b43ad994be --- /dev/null +++ b/clients/cpp/github.com/cpp-redis/cpp_redis.json @@ -0,0 +1,7 @@ +{ + "name": "cpp_redis", + "description": "C++11 Lightweight Redis client: async, 
thread-safe, no dependency, pipelining, multi-platform.", + "twitter": [ + "simon_ninon" + ] +} \ No newline at end of file diff --git a/clients/cpp/github.com/eyjian/r3c.json b/clients/cpp/github.com/eyjian/r3c.json new file mode 100644 index 0000000000..c947df7d43 --- /dev/null +++ b/clients/cpp/github.com/eyjian/r3c.json @@ -0,0 +1,7 @@ +{ + "name": "r3c", + "description": "Redis Cluster C++ client based on hiredis; supports password authentication and standalone mode, is easy to build and use, and does not depend on C++11 or later.", + "twitter": [ + "eyjian" + ] +} \ No newline at end of file diff --git a/clients/cpp/github.com/hamidr/async-redis.json b/clients/cpp/github.com/hamidr/async-redis.json new file mode 100644 index 0000000000..2cec30640b --- /dev/null +++ b/clients/cpp/github.com/hamidr/async-redis.json @@ -0,0 +1,8 @@ +{ + "name": "async-redis", + "description": "An async redis library for C++ based on libevpp/boost-asio", + "homepage": "https://github.com/hamidr/async-redis", + "twitter": [ + "hamidr_" + ] +} \ No newline at end of file diff --git a/clients/cpp/github.com/hmartiro/redox.json b/clients/cpp/github.com/hmartiro/redox.json new file mode 100644 index 0000000000..12a5055ada --- /dev/null +++ b/clients/cpp/github.com/hmartiro/redox.json @@ -0,0 +1,7 @@ +{ + "name": "redox", + "description": "Modern, asynchronous, and fast C++11 client for Redis", + "twitter": [ + "hmartiros" + ] +} \ No newline at end of file diff --git a/clients/cpp/github.com/icerlion/FlyRedis.json b/clients/cpp/github.com/icerlion/FlyRedis.json new file mode 100644 index 0000000000..d709458e28 --- /dev/null +++ b/clients/cpp/github.com/icerlion/FlyRedis.json @@ -0,0 +1,4 @@ +{ + "name": "FlyRedis", + "description": "C++ Redis client, based on Boost.asio, easy to use" +} \ No newline at end of file diff --git a/clients/cpp/github.com/luca3m/redis3m.json b/clients/cpp/github.com/luca3m/redis3m.json new file mode 100644 index 0000000000..9f1c3fb4f8 --- /dev/null +++ b/clients/cpp/github.com/luca3m/redis3m.json @@ -0,0 +1,7 @@ +{ + "name": "redis3m", + "description": "A C++ wrapper of hiredis, with connection pooling, high availability and ready-to-use patterns", + "twitter": [ + "luca3m" + ] +} \ No newline at end of file diff --git a/clients/cpp/github.com/mrpi/redis-cplusplus-client.json b/clients/cpp/github.com/mrpi/redis-cplusplus-client.json new file mode 100644 index 0000000000..456709c127 --- /dev/null +++ b/clients/cpp/github.com/mrpi/redis-cplusplus-client.json @@ -0,0 +1,3 @@ +{ + "name": "C++ Client" +} \ No newline at end of file diff --git a/clients/cpp/github.com/mzimbres/aedis.json b/clients/cpp/github.com/mzimbres/aedis.json new file mode 100644 index 0000000000..c9054dd8b4 --- /dev/null +++ b/clients/cpp/github.com/mzimbres/aedis.json @@ -0,0 +1,7 @@ +{ + "name": "aedis", + "description": "An async redis client designed for simplicity and reliability.", + "twitter": [ + "mzimbres" + ] +} \ No newline at end of file diff --git a/clients/cpp/github.com/nekipelov/redisclient.json b/clients/cpp/github.com/nekipelov/redisclient.json new file mode 100644 index 0000000000..aba5de6088 --- /dev/null +++ b/clients/cpp/github.com/nekipelov/redisclient.json @@ -0,0 +1,7 
@@ +{ + "name": "redisclient", + "description": "A C++ asynchronous client based on boost::asio", + "twitter": [ + "nekipelov" + ] +} \ No newline at end of file diff --git a/clients/cpp/github.com/nokia/wiredis.json b/clients/cpp/github.com/nokia/wiredis.json new file mode 100644 index 0000000000..0cb2abce9d --- /dev/null +++ b/clients/cpp/github.com/nokia/wiredis.json @@ -0,0 +1,5 @@ +{ + "name": "wiredis", + "description": "Standalone, asynchronous Redis client library based on ::boost::asio and the C++11 standard", + "homepage": "https://github.com/nokia/wiredis" +} \ No newline at end of file diff --git a/clients/cpp/github.com/sewenew/redis-plus-plus.json b/clients/cpp/github.com/sewenew/redis-plus-plus.json new file mode 100644 index 0000000000..27e8863b4d --- /dev/null +++ b/clients/cpp/github.com/sewenew/redis-plus-plus.json @@ -0,0 +1,8 @@ +{ + "name": "redis-plus-plus", + "description": "This is a Redis client, based on hiredis and written in C++11. It supports scripting, pub/sub, pipeline, transaction, Redis Cluster, Redis Sentinel, connection pool, ACL, SSL and thread safety.", + "recommended": true, + "twitter": [ + "sewenew" + ] +} \ No newline at end of file diff --git a/clients/cpp/github.com/shawn246/redis_client.json b/clients/cpp/github.com/shawn246/redis_client.json new file mode 100644 index 0000000000..ade108716f --- /dev/null +++ b/clients/cpp/github.com/shawn246/redis_client.json @@ -0,0 +1,4 @@ +{ + "name": "c+redis+client", + "description": "A redis client based on hiredis that supports cluster/pipeline, is thread-safe, and includes only two files. Transactions are on the way :)" +} \ No newline at end of file diff --git a/clients/cpp/github.com/tdv/redis-cpp.json b/clients/cpp/github.com/tdv/redis-cpp.json new file mode 100644 index 0000000000..f1aaa96f08 --- /dev/null +++ b/clients/cpp/github.com/tdv/redis-cpp.json @@ -0,0 +1,4 @@ +{ + "name": "redis-cpp", + "description": "redis-cpp is a C++17 library for executing Redis commands with support for pipelines and the publish/subscribe pattern" +} \ No newline at end of file diff --git a/clients/cpp/github.com/uglide/qredisclient.json b/clients/cpp/github.com/uglide/qredisclient.json new file mode 100644 index 0000000000..651e674bbb --- /dev/null +++ b/clients/cpp/github.com/uglide/qredisclient.json @@ -0,0 +1,7 @@ +{ + "name": "qredisclient", + "description": "Asynchronous Qt-based Redis client with SSL and SSH tunnelling support.", + "twitter": [ + "u_glide" + ] +} \ No newline at end of file diff --git a/clients/cpp/github.com/wusongwei/soce.json b/clients/cpp/github.com/wusongwei/soce.json new file mode 100644 index 0000000000..26f6972f1f --- /dev/null +++ b/clients/cpp/github.com/wusongwei/soce.json @@ -0,0 +1,4 @@ +{ + "name": "soce-redis", + "description": "Based on hiredis, accesses the server (single, sentinel, cluster) with the same interface, supports pipeline and async (by coroutine)" +} \ No newline at end of file diff --git a/clients/crystal/github.com/stefanwille/crystal-redis.json b/clients/crystal/github.com/stefanwille/crystal-redis.json new file mode 100644 index 0000000000..4728809f43 --- /dev/null +++ b/clients/crystal/github.com/stefanwille/crystal-redis.json @@ -0,0 +1,9 @@ +{ + "name": 
"crystal-redis", + "description": "Full featured, high performance Redis client for Crystal", + "recommended": true, + "homepage": "http://www.stefanwille.com/projects/crystal-redis/", + "twitter": [ + "stefanwille" + ] +} \ No newline at end of file diff --git a/clients/csharp/github.com/2881099/FreeRedis.json b/clients/csharp/github.com/2881099/FreeRedis.json new file mode 100644 index 0000000000..23f0e7205b --- /dev/null +++ b/clients/csharp/github.com/2881099/FreeRedis.json @@ -0,0 +1,4 @@ +{ + "name": "FreeRedis", + "description": "This .NET client supports redis6.0+, cluster, sentinel, pipeline, And simple api." +} \ No newline at end of file diff --git a/clients/csharp/github.com/IKende/BeetleX.Redis.json b/clients/csharp/github.com/IKende/BeetleX.Redis.json new file mode 100644 index 0000000000..9dc6ae71f2 --- /dev/null +++ b/clients/csharp/github.com/IKende/BeetleX.Redis.json @@ -0,0 +1,5 @@ +{ + "name": "BeetleX.Redis", + "description": "A high-performance async/non-blocking redis client components for dotnet core, default support json and protobuf data format", + "homepage": "https://github.com/IKende/BeetleX.Redis" +} \ No newline at end of file diff --git a/clients/csharp/github.com/NewLifeX/NewLife.Redis.json b/clients/csharp/github.com/NewLifeX/NewLife.Redis.json new file mode 100644 index 0000000000..9bc4e73dea --- /dev/null +++ b/clients/csharp/github.com/NewLifeX/NewLife.Redis.json @@ -0,0 +1,5 @@ +{ + "name": "NewLife.Redis", + "description": "The high-performance redis client supports .NETCORE/.NET4.0/.NET4.5, which is specially optimized for big data and message queuing. The average daily call volume of single online application is 10 billion", + "homepage": "https://github.com/NewLifeX/NewLife.Redis" +} \ No newline at end of file diff --git a/clients/csharp/github.com/ServiceStack/ServiceStack.Redis.json b/clients/csharp/github.com/ServiceStack/ServiceStack.Redis.json new file mode 100644 index 0000000000..02b28343ca --- /dev/null +++ b/clients/csharp/github.com/ServiceStack/ServiceStack.Redis.json @@ -0,0 +1,8 @@ +{ + "name": "ServiceStack.Redis", + "description": "This is a fork and improvement of the original C# client written by Miguel De Icaza.", + "recommended": true, + "twitter": [ + "demisbellot" + ] +} \ No newline at end of file diff --git a/clients/csharp/github.com/StackExchange/StackExchange.Redis.json b/clients/csharp/github.com/StackExchange/StackExchange.Redis.json new file mode 100644 index 0000000000..ae316b2b4d --- /dev/null +++ b/clients/csharp/github.com/StackExchange/StackExchange.Redis.json @@ -0,0 +1,8 @@ +{ + "name": "StackExchange.Redis", + "description": "This .NET client was developed by Stack Exchange for very high performance needs (replacement to the earlier BookSleeve).", + "recommended": true, + "twitter": [ + "marcgravell" + ] +} \ No newline at end of file diff --git a/clients/csharp/github.com/andrew-bn/RedisBoost.json b/clients/csharp/github.com/andrew-bn/RedisBoost.json new file mode 100644 index 0000000000..83155eb919 --- /dev/null +++ b/clients/csharp/github.com/andrew-bn/RedisBoost.json @@ -0,0 +1,5 @@ +{ + "name": "redisboost", + "description": "Thread-safe async Redis client. 
Offers high performance and a simple API", + "homepage": "http://andrew-bn.github.io/RedisBoost/" +} \ No newline at end of file diff --git a/clients/csharp/github.com/ctstone/csredis.json b/clients/csharp/github.com/ctstone/csredis.json new file mode 100644 index 0000000000..5bfef14794 --- /dev/null +++ b/clients/csharp/github.com/ctstone/csredis.json @@ -0,0 +1,7 @@ +{ + "name": "csredis", + "description": "Async (and sync) client for Redis and Sentinel", + "twitter": [ + "ctnstone" + ] +} \ No newline at end of file diff --git a/clients/csharp/github.com/mhowlett/Nhiredis.json b/clients/csharp/github.com/mhowlett/Nhiredis.json new file mode 100644 index 0000000000..f775d71e33 --- /dev/null +++ b/clients/csharp/github.com/mhowlett/Nhiredis.json @@ -0,0 +1,7 @@ +{ + "name": "Nhiredis", + "description": "A lightweight wrapper around the C client hiredis.", + "twitter": [ + "matt_howlett" + ] +} \ No newline at end of file diff --git a/clients/csharp/github.com/migueldeicaza/redis-sharp.json b/clients/csharp/github.com/migueldeicaza/redis-sharp.json new file mode 100644 index 0000000000..c501699016 --- /dev/null +++ b/clients/csharp/github.com/migueldeicaza/redis-sharp.json @@ -0,0 +1,7 @@ +{ + "name": "redis-sharp", + "description": "A Redis client.", + "twitter": [ + "migueldeicaza" + ] +} diff --git a/clients/csharp/github.com/pepelev/Rediska.json b/clients/csharp/github.com/pepelev/Rediska.json new file mode 100644 index 0000000000..5eb11874dc --- /dev/null +++ b/clients/csharp/github.com/pepelev/Rediska.json @@ -0,0 +1,4 @@ +{ + "name": "Rediska", + "description": "Rediska is a Redis client for .NET with a focus on flexibility and extensibility." +} \ No newline at end of file diff --git a/clients/csharp/github.com/redis/NRedisStack.json b/clients/csharp/github.com/redis/NRedisStack.json new file mode 100644 index 0000000000..b02b4895c6 --- /dev/null +++ b/clients/csharp/github.com/redis/NRedisStack.json @@ -0,0 +1,5 @@ +{ + "name": "NRedisStack", + "description": "This client is developed by Redis to bring Redis Stack support to C#.", + "official": true +} diff --git a/clients/csharp/github.com/zhuovi/XiaoFeng.Redis.json b/clients/csharp/github.com/zhuovi/XiaoFeng.Redis.json new file mode 100644 index 0000000000..8f73792fef --- /dev/null +++ b/clients/csharp/github.com/zhuovi/XiaoFeng.Redis.json @@ -0,0 +1,4 @@ +{ + "name": "XiaoFeng.Redis", + "description": "A useful Redis client that supports .NET Framework, .NET Core, and .NET Standard, and is quite convenient to operate." 
+} \ No newline at end of file diff --git a/clients/csharp/www.nuget.org/packages/Sider.json b/clients/csharp/www.nuget.org/packages/Sider.json new file mode 100644 index 0000000000..4fd1047125 --- /dev/null +++ b/clients/csharp/www.nuget.org/packages/Sider.json @@ -0,0 +1,7 @@ +{ + "name": "Sider", + "description": "Minimalistic client for C#/.NET 4.0", + "twitter": [ + "chakrit" + ] +} \ No newline at end of file diff --git a/clients/d/github.com/adilbaig/Tiny-Redis.json b/clients/d/github.com/adilbaig/Tiny-Redis.json new file mode 100644 index 0000000000..b41235488e --- /dev/null +++ b/clients/d/github.com/adilbaig/Tiny-Redis.json @@ -0,0 +1,8 @@ +{ + "name": "Tiny Redis", + "description": "A Redis client for D2. Supports pipelining, transactions and Lua scripting", + "homepage": "http://adilbaig.github.io/Tiny-Redis/", + "twitter": [ + "aidezigns" + ] +} \ No newline at end of file diff --git a/clients/dart/github.com/SiLeader/dedis.json b/clients/dart/github.com/SiLeader/dedis.json new file mode 100644 index 0000000000..6d3a6747ad --- /dev/null +++ b/clients/dart/github.com/SiLeader/dedis.json @@ -0,0 +1,8 @@ +{ + "name": "dedis", + "description": "Simple Redis Client for Dart", + "homepage": "https://pub.dev/packages/dedis", + "twitter": [ + "cerussite127" + ] +} \ No newline at end of file diff --git a/clients/dart/github.com/dartist/redis_client.json b/clients/dart/github.com/dartist/redis_client.json new file mode 100644 index 0000000000..f4c75df886 --- /dev/null +++ b/clients/dart/github.com/dartist/redis_client.json @@ -0,0 +1,8 @@ +{ + "name": "DartRedisClient", + "description": "A high-performance async/non-blocking Redis client for Dart", + "recommended": true, + "twitter": [ + "demisbellot" + ] +} \ No newline at end of file diff --git a/clients/dart/github.com/himulawang/i_redis.json b/clients/dart/github.com/himulawang/i_redis.json new file mode 100644 index 0000000000..0f23ce7e94 --- /dev/null +++ b/clients/dart/github.com/himulawang/i_redis.json @@ -0,0 +1,7 @@ +{ + "name": "IRedis", + "description": "A redis client for Dart", + "twitter": [ + "ila" + ] +} \ No newline at end of file diff --git a/clients/dart/github.com/jcmellado/dartis.json b/clients/dart/github.com/jcmellado/dartis.json new file mode 100644 index 0000000000..dac77c422e --- /dev/null +++ b/clients/dart/github.com/jcmellado/dartis.json @@ -0,0 +1,4 @@ +{ + "name": "dartis", + "description": "A Redis client for Dart 2" +} \ No newline at end of file diff --git a/clients/dart/github.com/ra1u/redis-dart.json b/clients/dart/github.com/ra1u/redis-dart.json new file mode 100644 index 0000000000..63152b3a8a --- /dev/null +++ b/clients/dart/github.com/ra1u/redis-dart.json @@ -0,0 +1,4 @@ +{ + "name": "redis", + "description": "Simple and fast client" +} \ No newline at end of file diff --git a/clients/delphi/github.com/danieleteti/delphiredisclient.json b/clients/delphi/github.com/danieleteti/delphiredisclient.json new file mode 100644 index 0000000000..23719261bf --- /dev/null +++ b/clients/delphi/github.com/danieleteti/delphiredisclient.json @@ -0,0 +1,7 @@ +{ + "name": "delphiredisclient", + "description": "A Delphi Redis Client", + "twitter": [ + "danieleteti" + ] +} \ No newline at end of file diff --git 
a/clients/deno/github.com/denodrivers/redis.json b/clients/deno/github.com/denodrivers/redis.json new file mode 100644 index 0000000000..36ee3007f2 --- /dev/null +++ b/clients/deno/github.com/denodrivers/redis.json @@ -0,0 +1,4 @@ +{ + "name": "redis", + "description": "🦕 Redis client for Deno 🍕" +} diff --git a/clients/deno/github.com/iuioiua/r2d2.json b/clients/deno/github.com/iuioiua/r2d2.json new file mode 100644 index 0000000000..636cd2f05e --- /dev/null +++ b/clients/deno/github.com/iuioiua/r2d2.json @@ -0,0 +1,4 @@ +{ + "name": "r2d2", + "description": "Fast, lightweight Redis client library for Deno." +} \ No newline at end of file diff --git a/clients/elixir/github.com/artemeff/exredis.json b/clients/elixir/github.com/artemeff/exredis.json new file mode 100644 index 0000000000..4c23422aa7 --- /dev/null +++ b/clients/elixir/github.com/artemeff/exredis.json @@ -0,0 +1,7 @@ +{ + "name": "exredis", + "description": "Redis client for Elixir.", + "twitter": [ + "artemeff" + ] +} \ No newline at end of file diff --git a/clients/elixir/github.com/whatyouhide/redix.json b/clients/elixir/github.com/whatyouhide/redix.json new file mode 100644 index 0000000000..384a167141 --- /dev/null +++ b/clients/elixir/github.com/whatyouhide/redix.json @@ -0,0 +1,7 @@ +{ + "name": "redix", + "description": "Superfast, pipelined, resilient Redis client written in pure Elixir.", + "twitter": [ + "whatyouhide" + ] +} \ No newline at end of file diff --git a/clients/emacs-lisp/code.google.com/p/eredis.json b/clients/emacs-lisp/code.google.com/p/eredis.json new file mode 100644 index 0000000000..35384367b6 --- /dev/null +++ b/clients/emacs-lisp/code.google.com/p/eredis.json @@ -0,0 +1,7 @@ +{ + "name": "eredis", + "description": "Full Redis API plus ways to pull Redis data into an org-mode table and push it back when edited", + "twitter": [ + "justinhj" + ] +} \ No newline at end of file diff --git a/clients/erlang/github.com/HalloAppInc/ecredis.json b/clients/erlang/github.com/HalloAppInc/ecredis.json new file mode 100644 index 0000000000..a4ac45da8e --- /dev/null +++ b/clients/erlang/github.com/HalloAppInc/ecredis.json @@ -0,0 +1,7 @@ +{ + "name": "ecredis", + "description": "Redis Cluster client that allows for connections to multiple clusters. 
Queries are sent directly to eredis clients, allowing for large throughput.", + "twitter": [ + "HalloAppInc" + ] +} \ No newline at end of file diff --git a/clients/erlang/github.com/Nordix/eredis.json b/clients/erlang/github.com/Nordix/eredis.json new file mode 100644 index 0000000000..ffd41e3302 --- /dev/null +++ b/clients/erlang/github.com/Nordix/eredis.json @@ -0,0 +1,5 @@ +{ + "name": "Eredis (Nordix fork)", + "description": "An updated fork of eredis, adding TLS and various corrections and testing", + "recommended": true +} \ No newline at end of file diff --git a/clients/erlang/github.com/Nordix/eredis_cluster.json b/clients/erlang/github.com/Nordix/eredis_cluster.json new file mode 100644 index 0000000000..ac5d5b3a5c --- /dev/null +++ b/clients/erlang/github.com/Nordix/eredis_cluster.json @@ -0,0 +1,5 @@ +{ + "name": "eredis_cluster (Nordix fork)", + "description": "An updated fork of eredis_cluster (providing cluster support and connection pooling), with added TLS support, ASK redirects, various corrections and testing", + "recommended": true +} \ No newline at end of file diff --git a/clients/erlang/github.com/adrienmo/eredis_cluster.json b/clients/erlang/github.com/adrienmo/eredis_cluster.json new file mode 100644 index 0000000000..af3a990098 --- /dev/null +++ b/clients/erlang/github.com/adrienmo/eredis_cluster.json @@ -0,0 +1,7 @@ +{ + "name": "eredis_cluster", + "description": "Eredis wrapper providing cluster support and connection pooling", + "twitter": [ + "adrienmo" + ] +} \ No newline at end of file diff --git a/clients/erlang/github.com/wooga/eredis.json b/clients/erlang/github.com/wooga/eredis.json new file mode 100644 index 0000000000..38bac56606 --- /dev/null +++ b/clients/erlang/github.com/wooga/eredis.json @@ -0,0 +1,8 @@ +{ + "name": "Eredis", + "description": "Redis client with a focus on performance", + "recommended": true, + "twitter": [ + "wooga" + ] +} \ No newline at end of file diff --git a/clients/gawk/sourceforge.net/projects/gawkextlib.json b/clients/gawk/sourceforge.net/projects/gawkextlib.json new file mode 100644 index 0000000000..d6db48df98 --- /dev/null +++ b/clients/gawk/sourceforge.net/projects/gawkextlib.json @@ -0,0 +1,7 @@ +{ + "name": "gawk-redis", + "description": "Gawk extension, using the hiredis C library. 
Supports pipelining and pub/sub", + "twitter": [ + "paulinohuerta" + ] +} \ No newline at end of file diff --git a/clients/gleam/github.com/massivefermion/radish.json b/clients/gleam/github.com/massivefermion/radish.json new file mode 100644 index 0000000000..519ed4bf82 --- /dev/null +++ b/clients/gleam/github.com/massivefermion/radish.json @@ -0,0 +1,8 @@ +{ + "name": "Radish", + "description": "Simple and fast Redis client written in and for Gleam", + "homepage": "https://hexdocs.pm/radish", + "twitter": [ + "massivefermion" + ] +} \ No newline at end of file diff --git a/clients/gnu-prolog/github.com/emacstheviking/gnuprolog-redisclient.json b/clients/gnu-prolog/github.com/emacstheviking/gnuprolog-redisclient.json new file mode 100644 index 0000000000..efad0bc5a5 --- /dev/null +++ b/clients/gnu-prolog/github.com/emacstheviking/gnuprolog-redisclient.json @@ -0,0 +1,7 @@ +{ + "name": "gnuprolog-redisclient", + "description": "Simple Redis client for GNU Prolog in native Prolog; no FFI, external libraries, etc.", + "twitter": [ + "seancharles" + ] +} \ No newline at end of file diff --git a/clients/go/github.com/alphazero/Go-Redis.json b/clients/go/github.com/alphazero/Go-Redis.json new file mode 100644 index 0000000000..20a656b10a --- /dev/null +++ b/clients/go/github.com/alphazero/Go-Redis.json @@ -0,0 +1,7 @@ +{ + "name": "Go-Redis", + "description": "Google Go Client and Connectors for Redis.", + "twitter": [ + "SunOf27" + ] +} \ No newline at end of file diff --git a/clients/go/github.com/gistao/RedisGo-Async.json b/clients/go/github.com/gistao/RedisGo-Async.json new file mode 100644 index 0000000000..673b310dc4 --- /dev/null +++ b/clients/go/github.com/gistao/RedisGo-Async.json @@ -0,0 +1,7 @@ +{ + "name": "RedisGo-Async", + "description": "RedisGo-Async is a Go client for Redis; both asynchronous and synchronous modes are supported, and its API is fully compatible with redigo.", + "twitter": [ + "gistao" + ] +} \ No newline at end of file diff --git a/clients/go/github.com/gomodule/redigo.json b/clients/go/github.com/gomodule/redigo.json new file mode 100644 index 0000000000..a2b058e8c7 --- /dev/null +++ b/clients/go/github.com/gomodule/redigo.json @@ -0,0 +1,8 @@ +{ + "name": "Redigo", + "description": "Redigo is a Go client for the Redis database with support for a print-like API, pipelining (including transactions), pub/sub, connection pooling, and scripting.", + "recommended": true, + "twitter": [ + "gburd" + ] +} \ No newline at end of file diff --git a/clients/go/github.com/gosexy/redis.json b/clients/go/github.com/gosexy/redis.json new file mode 100644 index 0000000000..dfa2c3d837 --- /dev/null +++ b/clients/go/github.com/gosexy/redis.json @@ -0,0 +1,8 @@ +{ + "name": "gosexy/redis", + "description": "Redis client library for Go that maps the full Redis command list into equivalent Go functions.", + "homepage": "https://menteslibres.net/gosexy/redis", + "twitter": [ + "xiam" + ] +} \ No newline at end of file diff --git a/clients/go/github.com/hoisie/redis.json b/clients/go/github.com/hoisie/redis.json new file mode 100644 index 0000000000..3ab0f69f0d --- /dev/null +++ b/clients/go/github.com/hoisie/redis.json @@ -0,0 +1,7 @@ +{ + "name": "redis.go", + "description": "A client for the Redis key-value 
store.", + "twitter": [ + "hoisie" + ] +} diff --git a/clients/go/github.com/joomcode/redispipe.json b/clients/go/github.com/joomcode/redispipe.json new file mode 100644 index 0000000000..83f3eb3179 --- /dev/null +++ b/clients/go/github.com/joomcode/redispipe.json @@ -0,0 +1,7 @@ +{ + "name": "RedisPipe", + "description": "RedisPipe is the high-throughput Go client with implicit pipelining and robust Cluster support.", + "twitter": [ + "funny_falcon" + ] +} \ No newline at end of file diff --git a/clients/go/github.com/keimoon/gore.json b/clients/go/github.com/keimoon/gore.json new file mode 100644 index 0000000000..f88bb2a58d --- /dev/null +++ b/clients/go/github.com/keimoon/gore.json @@ -0,0 +1,7 @@ +{ + "name": "gore", + "description": "A full feature redis Client for Go. Supports Pipeline, Transaction, LUA scripting, Pubsub, Connection Pool, Sentinel and client sharding", + "twitter": [ + "keimoon" + ] +} \ No newline at end of file diff --git a/clients/go/github.com/mediocregopher/radix.json b/clients/go/github.com/mediocregopher/radix.json new file mode 100644 index 0000000000..386b834cf9 --- /dev/null +++ b/clients/go/github.com/mediocregopher/radix.json @@ -0,0 +1,9 @@ +{ + "name": "Radix", + "description": "MIT licensed Redis client which supports pipelining, pooling, redis cluster, scripting, pub/sub, scanning, and more.", + "recommended": true, + "twitter": [ + "fzzbt", + "mediocre_gopher" + ] +} \ No newline at end of file diff --git a/clients/go/github.com/pascaldekloe/redis.json b/clients/go/github.com/pascaldekloe/redis.json new file mode 100644 index 0000000000..59a35b193b --- /dev/null +++ b/clients/go/github.com/pascaldekloe/redis.json @@ -0,0 +1,4 @@ +{ + "name": "Redis", + "description": "clean, fully asynchronous, high-performance, low-memory" +} \ No newline at end of file diff --git a/clients/go/github.com/piaohao/godis.json b/clients/go/github.com/piaohao/godis.json new file mode 100644 index 0000000000..ee7e12b3fe --- /dev/null +++ b/clients/go/github.com/piaohao/godis.json @@ -0,0 +1,4 @@ +{ + "name": "godis", + "description": "redis client implement by golang, inspired by jedis." +} \ No newline at end of file diff --git a/clients/go/github.com/redis/go-redis.json b/clients/go/github.com/redis/go-redis.json new file mode 100644 index 0000000000..06de74d8c1 --- /dev/null +++ b/clients/go/github.com/redis/go-redis.json @@ -0,0 +1,5 @@ +{ + "name": "go-redis", + "description": "Redis client for Golang supporting Redis Sentinel and Redis Cluster out of the box.", + "official": true +} diff --git a/clients/go/github.com/rueian/rueidis.json b/clients/go/github.com/rueian/rueidis.json new file mode 100644 index 0000000000..68a602b149 --- /dev/null +++ b/clients/go/github.com/rueian/rueidis.json @@ -0,0 +1,4 @@ +{ + "name": "rueidis", + "description": "A Fast Golang Redis RESP3 client that does auto pipelining and supports client side caching." 
+} \ No newline at end of file diff --git a/clients/go/github.com/shipwire/redis.json b/clients/go/github.com/shipwire/redis.json new file mode 100644 index 0000000000..baf81f270a --- /dev/null +++ b/clients/go/github.com/shipwire/redis.json @@ -0,0 +1,7 @@ +{ + "name": "shipwire/redis", + "description": "A Redis client focused on streaming, with support for a print-like API, pipelining, Pub/Sub, and connection pooling.", + "twitter": [ + "stephensearles" + ] +} \ No newline at end of file diff --git a/clients/go/github.com/simonz05/godis.json b/clients/go/github.com/simonz05/godis.json new file mode 100644 index 0000000000..d12c799f2e --- /dev/null +++ b/clients/go/github.com/simonz05/godis.json @@ -0,0 +1,4 @@ +{ + "name": "godis", + "description": "A Redis client for Go." +} \ No newline at end of file diff --git a/clients/go/github.com/stfnmllr/go-resp3.json b/clients/go/github.com/stfnmllr/go-resp3.json new file mode 100644 index 0000000000..f7379714f3 --- /dev/null +++ b/clients/go/github.com/stfnmllr/go-resp3.json @@ -0,0 +1,4 @@ +{ + "name": "go-resp3", + "description": "A Redis Go client implementation based on the Redis RESP3 protocol." +} \ No newline at end of file diff --git a/clients/go/github.com/tideland/golib.json b/clients/go/github.com/tideland/golib.json new file mode 100644 index 0000000000..79303f5e38 --- /dev/null +++ b/clients/go/github.com/tideland/golib.json @@ -0,0 +1,8 @@ +{ + "name": "Tideland Go Redis Client", + "description": "A flexible Go Redis client able to handle all commands", + "homepage": "https://github.com/tideland/golib/tree/master/redis", + "twitter": [ + "themue" + ] +} \ No newline at end of file diff --git a/clients/go/github.com/xuyu/goredis.json b/clients/go/github.com/xuyu/goredis.json new file mode 100644 index 0000000000..960da01092 --- /dev/null +++ b/clients/go/github.com/xuyu/goredis.json @@ -0,0 +1,7 @@ +{ + "name": "goredis", + "description": "A redis client for golang with full features", + "twitter": [ + "xuyu" + ] +} \ No newline at end of file diff --git a/clients/haskell/github.com/informatikr/hedis.json b/clients/haskell/github.com/informatikr/hedis.json new file mode 100644 index 0000000000..40387e9e30 --- /dev/null +++ b/clients/haskell/github.com/informatikr/hedis.json @@ -0,0 +1,5 @@ +{ + "name": "hedis", + "description": "Supports the complete command set and cluster. 
Commands are automatically pipelined for high performance.", + "homepage": "http://hackage.haskell.org/package/hedis" +} \ No newline at end of file diff --git a/clients/io/github.com/vangberg/iodis.json b/clients/io/github.com/vangberg/iodis.json new file mode 100644 index 0000000000..8cb8684254 --- /dev/null +++ b/clients/io/github.com/vangberg/iodis.json @@ -0,0 +1,7 @@ +{ + "name": "iodis", + "description": "A redis client library for io.", + "twitter": [ + "ichverstehe" + ] +} diff --git a/clients/java/code.google.com/p/jdbc-redis/source/browse.json b/clients/java/code.google.com/p/jdbc-redis/source/browse.json new file mode 100644 index 0000000000..ed34c6cf83 --- /dev/null +++ b/clients/java/code.google.com/p/jdbc-redis/source/browse.json @@ -0,0 +1,5 @@ +{ + "name": "JDBC-Redis", + "description": "A JDBC client for Redis.", + "homepage": "https://code.google.com/p/jdbc-redis/" +} diff --git a/clients/java/github.com/alphazero/jredis.json b/clients/java/github.com/alphazero/jredis.json new file mode 100644 index 0000000000..f4fb3cd1a9 --- /dev/null +++ b/clients/java/github.com/alphazero/jredis.json @@ -0,0 +1,8 @@ +{ + "name": "JRedis", + "description": "A Redis client.", + "homepage": "https://code.google.com/p/jredis/", + "twitter": [ + "SunOf27" + ] +} diff --git a/clients/java/github.com/drm/java-redis-client.json b/clients/java/github.com/drm/java-redis-client.json new file mode 100644 index 0000000000..a875af9940 --- /dev/null +++ b/clients/java/github.com/drm/java-redis-client.json @@ -0,0 +1,4 @@ +{ + "name": "java-redis-client", + "description": "A very simple yet very complete java client in less than 200 lines with 0 dependencies." +} \ No newline at end of file diff --git a/clients/java/github.com/e-mzungu/rjc.json b/clients/java/github.com/e-mzungu/rjc.json new file mode 100644 index 0000000000..bb26e313c4 --- /dev/null +++ b/clients/java/github.com/e-mzungu/rjc.json @@ -0,0 +1,7 @@ +{ + "name": "RJC", + "description": "A Java Client that provides connection pooling in Apache DBCP style, sharding, pipelines, transactions and messages.", + "twitter": [ + "e_mzungu" + ] +} diff --git a/clients/java/github.com/lettuce-io/lettuce-core.json b/clients/java/github.com/lettuce-io/lettuce-core.json new file mode 100644 index 0000000000..60f14870af --- /dev/null +++ b/clients/java/github.com/lettuce-io/lettuce-core.json @@ -0,0 +1,10 @@ +{ + "name": "lettuce", + "description": "Advanced Redis client for thread-safe sync, async, and reactive usage. 
Supports Cluster, Sentinel, Pipelining, and codecs.", + "recommended": true, + "homepage": "https://lettuce.io/", + "twitter": [ + "ar3te", + "mp911de" + ] +} \ No newline at end of file diff --git a/clients/java/github.com/mrniko/redisson.json b/clients/java/github.com/mrniko/redisson.json new file mode 100644 index 0000000000..861873ffa2 --- /dev/null +++ b/clients/java/github.com/mrniko/redisson.json @@ -0,0 +1,8 @@ +{ + "name": "Redisson", + "description": "distributed and scalable Java data structures on top of Redis server", + "recommended": true, + "twitter": [ + "mrniko" + ] +} diff --git a/clients/java/github.com/redis/jedis.json b/clients/java/github.com/redis/jedis.json new file mode 100644 index 0000000000..f7324174e9 --- /dev/null +++ b/clients/java/github.com/redis/jedis.json @@ -0,0 +1,9 @@ +{ + "name": "Jedis", + "description": "A blazingly small and sane Redis Java client", + "official": true, + "twitter": [ + "xetorthio", + "g_korland" + ] +} diff --git a/clients/java/github.com/spullara/redis-protocol.json b/clients/java/github.com/spullara/redis-protocol.json new file mode 100644 index 0000000000..9848e82141 --- /dev/null +++ b/clients/java/github.com/spullara/redis-protocol.json @@ -0,0 +1,7 @@ +{ + "name": "redis-protocol", + "description": "Up to 2.6 compatible high-performance Java, Java w/Netty & Scala (finagle) client", + "twitter": [ + "spullara" + ] +} \ No newline at end of file diff --git a/clients/java/github.com/vert-x3/vertx-redis-client.json b/clients/java/github.com/vert-x3/vertx-redis-client.json new file mode 100644 index 0000000000..b3c84ca273 --- /dev/null +++ b/clients/java/github.com/vert-x3/vertx-redis-client.json @@ -0,0 +1,7 @@ +{ + "name": "vertx-redis-client", + "description": "The Vert.x Redis client provides an asynchronous API to interact with a Redis data-structure server.", + "twitter": [ + "pmlopes" + ] +} \ No newline at end of file diff --git a/clients/java/github.com/virendradhankar/viredis.json b/clients/java/github.com/virendradhankar/viredis.json new file mode 100644 index 0000000000..694c2163e8 --- /dev/null +++ b/clients/java/github.com/virendradhankar/viredis.json @@ -0,0 +1,4 @@ +{ + "name": "viredis", + "description": "A simple and small redis client for java." 
+} \ No newline at end of file diff --git a/clients/java/sourceforge.net/projects/aredis.json b/clients/java/sourceforge.net/projects/aredis.json new file mode 100644 index 0000000000..d59cb9f73d --- /dev/null +++ b/clients/java/sourceforge.net/projects/aredis.json @@ -0,0 +1,4 @@ +{ + "name": "aredis", + "description": "Asynchronous, pipelined client based on the Java 7 NIO Channel API" +} \ No newline at end of file diff --git a/clients/julia/github.com/captchanjack/Jedis.jl.json b/clients/julia/github.com/captchanjack/Jedis.jl.json new file mode 100644 index 0000000000..3d2bb8feec --- /dev/null +++ b/clients/julia/github.com/captchanjack/Jedis.jl.json @@ -0,0 +1,7 @@ +{ + "name": "Jedis.jl", + "description": "A lightweight Redis client, implemented in Julia.", + "twitter": [ + "captchanjack" + ] +} \ No newline at end of file diff --git a/clients/julia/github.com/jkaye2012/redis.jl.json b/clients/julia/github.com/jkaye2012/redis.jl.json new file mode 100644 index 0000000000..bf8b26f340 --- /dev/null +++ b/clients/julia/github.com/jkaye2012/redis.jl.json @@ -0,0 +1,7 @@ +{ + "name": "Redis.jl", + "description": "A fully-featured Redis client for the Julia programming language", + "twitter": [ + "jkaye2012" + ] +} \ No newline at end of file diff --git a/clients/kotlin/github.com/crackthecodeabhi/kreds.json b/clients/kotlin/github.com/crackthecodeabhi/kreds.json new file mode 100644 index 0000000000..fc5781d427 --- /dev/null +++ b/clients/kotlin/github.com/crackthecodeabhi/kreds.json @@ -0,0 +1,7 @@ +{ + "name": "Kreds", + "description": "A thread-safe, non-blocking, coroutine-based Redis client for Kotlin/JVM", + "twitter": [ + "abhi_19t" + ] +} \ No newline at end of file diff --git a/clients/kotlin/github.com/domgew/kedis.json b/clients/kotlin/github.com/domgew/kedis.json new file mode 100644 index 0000000000..a620f65a5c --- /dev/null +++ b/clients/kotlin/github.com/domgew/kedis.json @@ -0,0 +1,4 @@ +{ + "name": "Kedis", + "description": "Redis client library for Kotlin Multiplatform (JVM + Native)" +} diff --git a/clients/lasso/github.com/Zeroloop/lasso-redis.json b/clients/lasso/github.com/Zeroloop/lasso-redis.json new file mode 100644 index 0000000000..c9238480b6 --- /dev/null +++ b/clients/lasso/github.com/Zeroloop/lasso-redis.json @@ -0,0 +1,4 @@ +{ + "name": "lasso-redis", + "description": "High performance Redis client for Lasso, supports pub/sub and piping." 
+} \ No newline at end of file diff --git a/clients/lua/github.com/agladysh/lua-hiredis.json b/clients/lua/github.com/agladysh/lua-hiredis.json new file mode 100644 index 0000000000..210b455068 --- /dev/null +++ b/clients/lua/github.com/agladysh/lua-hiredis.json @@ -0,0 +1,7 @@ +{ + "name": "lua-hiredis", + "description": "Lua bindings for the hiredis library", + "twitter": [ + "agladysh" + ] +} \ No newline at end of file diff --git a/clients/lua/github.com/daurnimator/lredis.json b/clients/lua/github.com/daurnimator/lredis.json new file mode 100644 index 0000000000..cb0515b4d5 --- /dev/null +++ b/clients/lua/github.com/daurnimator/lredis.json @@ -0,0 +1,7 @@ +{ + "name": "lredis", + "description": "Redis library for Lua", + "twitter": [ + "daurnimator" + ] +} \ No newline at end of file diff --git a/clients/lua/github.com/nrk/redis-lua.json b/clients/lua/github.com/nrk/redis-lua.json new file mode 100644 index 0000000000..e185df3b35 --- /dev/null +++ b/clients/lua/github.com/nrk/redis-lua.json @@ -0,0 +1,8 @@ +{ + "name": "redis-lua", + "description": "A Redis client.", + "recommended": true, + "twitter": [ + "JoL1hAHN" + ] +} diff --git a/clients/matlab/github.com/GummyJum/MatlabRedis.json b/clients/matlab/github.com/GummyJum/MatlabRedis.json new file mode 100644 index 0000000000..c302d9615e --- /dev/null +++ b/clients/matlab/github.com/GummyJum/MatlabRedis.json @@ -0,0 +1,4 @@ +{ + "name": "MatlabRedis", + "description": "Pure Matlab Redis interface for Matlab>=2014B" +} \ No newline at end of file diff --git a/clients/matlab/github.com/markuman/go-redis.json b/clients/matlab/github.com/markuman/go-redis.json new file mode 100644 index 0000000000..71489271c8 --- /dev/null +++ b/clients/matlab/github.com/markuman/go-redis.json @@ -0,0 +1,7 @@ +{ + "name": "redis-octave", + "description": "A Redis client in pure Octave", + "twitter": [ + "markuman" + ] +} \ No newline at end of file diff --git a/clients/mruby/github.com/Asmod4n/mruby-hiredis.json b/clients/mruby/github.com/Asmod4n/mruby-hiredis.json new file mode 100644 index 0000000000..4eb26a6af7 --- /dev/null +++ b/clients/mruby/github.com/Asmod4n/mruby-hiredis.json @@ -0,0 +1,7 @@ +{ + "name": "mruby-hiredis", + "description": "Redis Client for mruby with Async support, pipelines and transactions", + "twitter": [ + "Asmod4n" + ] +} \ No newline at end of file diff --git a/clients/mruby/github.com/matsumoto-r/mruby-redis.json b/clients/mruby/github.com/matsumoto-r/mruby-redis.json new file mode 100644 index 0000000000..3b096a29de --- /dev/null +++ b/clients/mruby/github.com/matsumoto-r/mruby-redis.json @@ -0,0 +1,7 @@ +{ + "name": "mruby-redis", + "description": "Redis class for mruby based on Hiredis", + "twitter": [ + "matsumotory" + ] +} \ No newline at end of file diff --git a/clients/nim/github.com/nim-lang/redis.json b/clients/nim/github.com/nim-lang/redis.json new file mode 100644 index 0000000000..a2c5be6a35 --- /dev/null +++ b/clients/nim/github.com/nim-lang/redis.json @@ -0,0 +1,4 @@ +{ + "name": "redis", + "description": "Redis client for Nim" +} \ No newline at end of file diff --git a/clients/nim/github.com/xmonader/nim-redisclient.json 
b/clients/nim/github.com/xmonader/nim-redisclient.json new file mode 100644 index 0000000000..fddbba05b0 --- /dev/null +++ b/clients/nim/github.com/xmonader/nim-redisclient.json @@ -0,0 +1,7 @@ +{ + "name": "redisclient", + "description": "Redis client for Nim", + "twitter": [ + "xmonader" + ] +} \ No newline at end of file diff --git a/clients/nodejs/github.com/AWS/GLIDE-for-Redis.json b/clients/nodejs/github.com/AWS/GLIDE-for-Redis.json new file mode 100644 index 0000000000..cfb35fce02 --- /dev/null +++ b/clients/nodejs/github.com/AWS/GLIDE-for-Redis.json @@ -0,0 +1,4 @@ +{ + "name": "GLIDE for Redis", + "description": "General Language Independent Driver for the Enterprise (GLIDE) for Redis is an advanced multi-language Redis client that is feature rich, highly performant, and built for reliability and operational stability. GLIDE for Redis is supported by AWS." +} diff --git a/clients/nodejs/github.com/CapacitorSet/rebridge.json b/clients/nodejs/github.com/CapacitorSet/rebridge.json new file mode 100644 index 0000000000..c36dec01af --- /dev/null +++ b/clients/nodejs/github.com/CapacitorSet/rebridge.json @@ -0,0 +1,4 @@ +{ + "name": "rebridge", + "description": "Rebridge is a transparent JavaScript-Redis bridge. It creates JavaScript objects that are automatically synchronized to a Redis database. (Requires Node 6)" +} \ No newline at end of file diff --git a/clients/nodejs/github.com/anchovycation/metronom.json b/clients/nodejs/github.com/anchovycation/metronom.json new file mode 100644 index 0000000000..af473fdda0 --- /dev/null +++ b/clients/nodejs/github.com/anchovycation/metronom.json @@ -0,0 +1,8 @@ +{ + "name": "metronom", + "description": "User-friendly Redis ORM for Node.js with asynchronous and TypeScript support.", + "homepage": "https://anchovycation.github.io/metronom/", + "twitter": [ + "saracaIihan" + ] +} diff --git a/clients/nodejs/github.com/camarojs/redis.json b/clients/nodejs/github.com/camarojs/redis.json new file mode 100644 index 0000000000..7bc8b0903d --- /dev/null +++ b/clients/nodejs/github.com/camarojs/redis.json @@ -0,0 +1,4 @@ +{ + "name": "Camaro Redis", + "description": "Redis client for Node; supports RESP2/RESP3 and Redis 6." +} \ No newline at end of file diff --git a/clients/nodejs/github.com/djanowski/yoredis.json b/clients/nodejs/github.com/djanowski/yoredis.json new file mode 100644 index 0000000000..3dd34e84ca --- /dev/null +++ b/clients/nodejs/github.com/djanowski/yoredis.json @@ -0,0 +1,7 @@ +{ + "name": "yoredis", + "description": "A minimalistic Redis client using modern Node.js.", + "twitter": [ + "djanowski" + ] +} \ No newline at end of file diff --git a/clients/nodejs/github.com/fictorial/redis-node-client.json b/clients/nodejs/github.com/fictorial/redis-node-client.json new file mode 100644 index 0000000000..aaaa2c8f68 --- /dev/null +++ b/clients/nodejs/github.com/fictorial/redis-node-client.json @@ -0,0 +1,4 @@ +{ + "name": "redis-node-client", + "description": "No longer maintained, does not work with node 0.3." 
+} \ No newline at end of file diff --git a/clients/nodejs/github.com/h0x91b/fast-redis-cluster.json b/clients/nodejs/github.com/h0x91b/fast-redis-cluster.json new file mode 100644 index 0000000000..886327d3d8 --- /dev/null +++ b/clients/nodejs/github.com/h0x91b/fast-redis-cluster.json @@ -0,0 +1,7 @@ +{ + "name": "fast-redis-cluster", + "description": "Simple and fast cluster driver with error handling; uses redis-fast-driver as the main adapter and node_redis as a backup on Windows", + "twitter": [ + "h0x91b" + ] +} \ No newline at end of file diff --git a/clients/nodejs/github.com/h0x91b/redis-fast-driver.json b/clients/nodejs/github.com/h0x91b/redis-fast-driver.json new file mode 100644 index 0000000000..a5fb30af79 --- /dev/null +++ b/clients/nodejs/github.com/h0x91b/redis-fast-driver.json @@ -0,0 +1,7 @@ +{ + "name": "redis-fast-driver", + "description": "Driver based on the hiredis async library; can do PUBSUB and MONITOR, is simple and really fast, and is written with NaN so it works fine with Node >= 0.8", + "twitter": [ + "h0x91b" + ] +} \ No newline at end of file diff --git a/clients/nodejs/github.com/luin/ioredis.json b/clients/nodejs/github.com/luin/ioredis.json new file mode 100644 index 0000000000..e22256c4be --- /dev/null +++ b/clients/nodejs/github.com/luin/ioredis.json @@ -0,0 +1,8 @@ +{ + "name": "ioredis", + "description": "A delightful, performance-focused and full-featured Redis client. Supports Cluster, Sentinel, Pipelining and Lua Scripting", + "recommended": true, + "twitter": [ + "luinlee" + ] +} \ No newline at end of file diff --git a/clients/nodejs/github.com/mjackson/then-redis.json b/clients/nodejs/github.com/mjackson/then-redis.json new file mode 100644 index 0000000000..09474bb3b3 --- /dev/null +++ b/clients/nodejs/github.com/mjackson/then-redis.json @@ -0,0 +1,7 @@ +{ + "name": "then-redis", + "description": "A small, promise-based Redis client for node", + "twitter": [ + "mjackson" + ] +} \ No newline at end of file diff --git a/clients/nodejs/github.com/mmkal/handy-redis.json b/clients/nodejs/github.com/mmkal/handy-redis.json new file mode 100644 index 0000000000..6779c002a4 --- /dev/null +++ b/clients/nodejs/github.com/mmkal/handy-redis.json @@ -0,0 +1,4 @@ +{ + "name": "handy-redis", + "description": "A wrapper around node_redis with Promise and TypeScript support." 
+} \ No newline at end of file diff --git a/clients/nodejs/github.com/razaellahi/xredis.json b/clients/nodejs/github.com/razaellahi/xredis.json new file mode 100644 index 0000000000..d82102a448 --- /dev/null +++ b/clients/nodejs/github.com/razaellahi/xredis.json @@ -0,0 +1,7 @@ +{ + "name": "xredis", + "description": "Redis client with redis ACL features", + "twitter": [ + "razaellahi531" + ] +} \ No newline at end of file diff --git a/clients/nodejs/github.com/redis/node-redis.json b/clients/nodejs/github.com/redis/node-redis.json new file mode 100644 index 0000000000..7caa592234 --- /dev/null +++ b/clients/nodejs/github.com/redis/node-redis.json @@ -0,0 +1,5 @@ +{ + "name": "node-redis", + "description": "Recommended client for node.", + "official": true +} diff --git a/clients/nodejs/github.com/rootslab/spade.json b/clients/nodejs/github.com/rootslab/spade.json new file mode 100644 index 0000000000..07ea553997 --- /dev/null +++ b/clients/nodejs/github.com/rootslab/spade.json @@ -0,0 +1,4 @@ +{ + "name": "spade", + "description": "\u2660 Spade, a full-featured modular client for node." +} \ No newline at end of file diff --git a/clients/nodejs/github.com/silkjs/tedis.json b/clients/nodejs/github.com/silkjs/tedis.json new file mode 100644 index 0000000000..0ee47b9422 --- /dev/null +++ b/clients/nodejs/github.com/silkjs/tedis.json @@ -0,0 +1,9 @@ +{ + "name": "tedis", + "description": "Tedis is a Redis client developed for Node.js. Its name was inspired by Jedis and TypeScript.", + "recommended": true, + "homepage": "https://tedis.silkjs.org", + "twitter": [ + "dasoncheng" + ] +} \ No newline at end of file diff --git a/clients/nodejs/github.com/thunks/thunk-redis.json b/clients/nodejs/github.com/thunks/thunk-redis.json new file mode 100644 index 0000000000..051fc4d495 --- /dev/null +++ b/clients/nodejs/github.com/thunks/thunk-redis.json @@ -0,0 +1,7 @@ +{ + "name": "thunk-redis", + "description": "A thunk/promise-based redis client with pipelining and cluster.", + "twitter": [ + "izensh" + ] +} \ No newline at end of file diff --git a/clients/nodejs/github.com/wallneradam/noderis.json b/clients/nodejs/github.com/wallneradam/noderis.json new file mode 100644 index 0000000000..90041b47ed --- /dev/null +++ b/clients/nodejs/github.com/wallneradam/noderis.json @@ -0,0 +1,4 @@ +{ + "name": "Noderis", + "description": "A fast, standalone Redis client without external dependencies. It can be used with callbacks, Promises and async/await, even at the same time. Clean, well-designed and documented source code, which enables code completion (WebStorm/PHPStorm)." 
+} \ No newline at end of file diff --git a/clients/objective-c/github.com/dizzus/RedisKit.json b/clients/objective-c/github.com/dizzus/RedisKit.json new file mode 100644 index 0000000000..ca5a1655cf --- /dev/null +++ b/clients/objective-c/github.com/dizzus/RedisKit.json @@ -0,0 +1,7 @@ +{ + "name": "RedisKit", + "description": "RedisKit is an asynchronous client framework for the Redis server, written in Objective-C", + "twitter": [ + "dizzus" + ] +} \ No newline at end of file diff --git a/clients/objective-c/github.com/lp/ObjCHiredis.json b/clients/objective-c/github.com/lp/ObjCHiredis.json new file mode 100644 index 0000000000..010685abac --- /dev/null +++ b/clients/objective-c/github.com/lp/ObjCHiredis.json @@ -0,0 +1,7 @@ +{ + "name": "ObjCHiredis", + "description": "Static library for iOS4 devices and the Simulator, plus an Objective-C framework for MacOS 10.5 and higher", + "twitter": [ + "loopole" + ] +} \ No newline at end of file diff --git a/clients/ocaml/github.com/0xffea/ocaml-redis.json b/clients/ocaml/github.com/0xffea/ocaml-redis.json new file mode 100644 index 0000000000..3009ae9fac --- /dev/null +++ b/clients/ocaml/github.com/0xffea/ocaml-redis.json @@ -0,0 +1,4 @@ +{ + "name": "ocaml-redis", + "description": "Synchronous and asynchronous (via Lwt) Redis client library in OCaml. Provides implementation of cache and mutex helpers." +} \ No newline at end of file diff --git a/clients/ocaml/github.com/janestreet/redis-async.json b/clients/ocaml/github.com/janestreet/redis-async.json new file mode 100644 index 0000000000..6f23691093 --- /dev/null +++ b/clients/ocaml/github.com/janestreet/redis-async.json @@ -0,0 +1,9 @@ +{ + "name": "redis-async", + "description": "A Redis client for OCaml Async applications with a strongly-typed API and client tracking support.", + "homepage": "https://github.com/janestreet/redis-async", + "twitter": [ + "janestreet", + "lukepalmer" + ] +} \ No newline at end of file diff --git a/clients/pascal/bitbucket.org/Gloegg/delphi-redis.git.json b/clients/pascal/bitbucket.org/Gloegg/delphi-redis.git.json new file mode 100644 index 0000000000..db738d6fb5 --- /dev/null +++ b/clients/pascal/bitbucket.org/Gloegg/delphi-redis.git.json @@ -0,0 +1,8 @@ +{ + "name": "delphi-redis", + "description": "A lightweight Redis client written in Delphi", + "homepage": "https://bitbucket.org/Gloegg/delphi-redis", + "twitter": [ + "Gloegg" + ] +} \ No newline at end of file diff --git a/clients/pascal/github.com/danieleteti/delphiredisclient.json b/clients/pascal/github.com/danieleteti/delphiredisclient.json new file mode 100644 index 0000000000..1d35fddce8 --- /dev/null +++ b/clients/pascal/github.com/danieleteti/delphiredisclient.json @@ -0,0 +1,7 @@ +{ + "name": "delphiredisclient", + "description": "Redis client for Delphi", + "twitter": [ + "danieleteti" + ] +} \ No newline at end of file diff --git a/clients/pascal/github.com/ik5/redis_client.fpc.json b/clients/pascal/github.com/ik5/redis_client.fpc.json new file mode 100644 index 0000000000..89ef6fbbf2 --- /dev/null +++ b/clients/pascal/github.com/ik5/redis_client.fpc.json @@ -0,0 +1,7 @@ +{ + "name": "redis_client.fpc", + "description": "Object Pascal client implementation for the redis protocol and commands", + "twitter": [ + "ik5" + ] +} \ No newline at end 
of file diff --git a/clients/pascal/github.com/isyscore/fpredis.json b/clients/pascal/github.com/isyscore/fpredis.json new file mode 100644 index 0000000000..7070231f1e --- /dev/null +++ b/clients/pascal/github.com/isyscore/fpredis.json @@ -0,0 +1,5 @@ +{ + "name": "fpredis", + "description": "FPREDIS is an FPC client library for the Redis database.", + "homepage": "https://github.com/isyscore/fpredis" +} \ No newline at end of file diff --git a/clients/perl/github.com/PerlRedis/perl-redis.json b/clients/perl/github.com/PerlRedis/perl-redis.json new file mode 100644 index 0000000000..85a93e3c9a --- /dev/null +++ b/clients/perl/github.com/PerlRedis/perl-redis.json @@ -0,0 +1,9 @@ +{ + "name": "Redis", + "description": "Perl binding for the Redis database", + "recommended": true, + "homepage": "http://search.cpan.org/dist/Redis/", + "twitter": [ + "damsieboy" + ] +} \ No newline at end of file diff --git a/clients/perl/github.com/iph0/AnyEvent-RipeRedis-Cluster.json b/clients/perl/github.com/iph0/AnyEvent-RipeRedis-Cluster.json new file mode 100644 index 0000000000..34b30a224d --- /dev/null +++ b/clients/perl/github.com/iph0/AnyEvent-RipeRedis-Cluster.json @@ -0,0 +1,8 @@ +{ + "name": "AnyEvent::RipeRedis::Cluster", + "description": "Non-blocking Redis Cluster client", + "homepage": "http://search.cpan.org/dist/AnyEvent-RipeRedis-Cluster/", + "twitter": [ + "iph0" + ] +} \ No newline at end of file diff --git a/clients/perl/github.com/iph0/AnyEvent-RipeRedis.json b/clients/perl/github.com/iph0/AnyEvent-RipeRedis.json new file mode 100644 index 0000000000..727fc1d92f --- /dev/null +++ b/clients/perl/github.com/iph0/AnyEvent-RipeRedis.json @@ -0,0 +1,8 @@ +{ + "name": "AnyEvent::RipeRedis", + "description": "Flexible non-blocking Redis client", + "homepage": "http://search.cpan.org/dist/AnyEvent-RipeRedis/", + "twitter": [ + "iph0" + ] +} \ No newline at end of file diff --git a/clients/perl/github.com/iph0/Redis-ClusterRider.json b/clients/perl/github.com/iph0/Redis-ClusterRider.json new file mode 100644 index 0000000000..1eb9f274f7 --- /dev/null +++ b/clients/perl/github.com/iph0/Redis-ClusterRider.json @@ -0,0 +1,8 @@ +{ + "name": "Redis::ClusterRider", + "description": "Daring Redis Cluster client", + "homepage": "http://search.cpan.org/dist/Redis-ClusterRider/", + "twitter": [ + "iph0" + ] +} \ No newline at end of file diff --git a/clients/perl/github.com/marcusramberg/mojo-redis.json b/clients/perl/github.com/marcusramberg/mojo-redis.json new file mode 100644 index 0000000000..8fcf27d0f3 --- /dev/null +++ b/clients/perl/github.com/marcusramberg/mojo-redis.json @@ -0,0 +1,10 @@ +{ + "name": "Mojo::Redis", + "description": "Asynchronous Redis client for Mojolicious", + "homepage": "http://search.cpan.org/dist/Mojo-Redis/", + "twitter": [ + "und3f", + "marcusramberg", + "jhthorsen" + ] +} \ No newline at end of file diff --git a/clients/perl/github.com/miyagawa/AnyEvent-Redis.json b/clients/perl/github.com/miyagawa/AnyEvent-Redis.json new file mode 100644 index 0000000000..24bdad3355 --- /dev/null +++ b/clients/perl/github.com/miyagawa/AnyEvent-Redis.json @@ -0,0 +1,8 @@ +{ + "name": "AnyEvent::Redis", + "description": "Non-blocking Redis client", + "homepage": 
"http://search.cpan.org/dist/AnyEvent-Redis/", + "twitter": [ + "miyagawa" + ] +} \ No newline at end of file diff --git a/clients/perl/github.com/plainbanana/Redis-Cluster-Fast.json b/clients/perl/github.com/plainbanana/Redis-Cluster-Fast.json new file mode 100644 index 0000000000..c23ba9a1a0 --- /dev/null +++ b/clients/perl/github.com/plainbanana/Redis-Cluster-Fast.json @@ -0,0 +1,8 @@ +{ + "name": "Redis::Cluster::Fast", + "description": "A fast Perl binding for Redis Cluster", + "homepage": "http://search.cpan.org/dist/Redis-Cluster-Fast/", + "twitter": [ + "plainbanana" + ] +} \ No newline at end of file diff --git a/clients/perl/github.com/shogo82148/Redis-Fast.json b/clients/perl/github.com/shogo82148/Redis-Fast.json new file mode 100644 index 0000000000..a442d7f02d --- /dev/null +++ b/clients/perl/github.com/shogo82148/Redis-Fast.json @@ -0,0 +1,8 @@ +{ + "name": "Redis::Fast", + "description": "Perl binding for Redis database", + "homepage": "https://metacpan.org/pod/Redis::Fast", + "twitter": [ + "shogo82148" + ] +} \ No newline at end of file diff --git a/clients/perl/github.com/smsonline/redis-cluster-perl.json b/clients/perl/github.com/smsonline/redis-cluster-perl.json new file mode 100644 index 0000000000..b43c6f22a7 --- /dev/null +++ b/clients/perl/github.com/smsonline/redis-cluster-perl.json @@ -0,0 +1,8 @@ +{ + "name": "Redis::Cluster", + "description": "Redis Cluster client for Perl", + "homepage": "http://search.cpan.org/dist/Redis-Cluster/", + "twitter": [ + "smsonline" + ] +} \ No newline at end of file diff --git a/clients/perl/github.com/trinitum/RedisDB.json b/clients/perl/github.com/trinitum/RedisDB.json new file mode 100644 index 0000000000..f495a2b662 --- /dev/null +++ b/clients/perl/github.com/trinitum/RedisDB.json @@ -0,0 +1,8 @@ +{ + "name": "RedisDB", + "description": "Perl binding for Redis database with fast XS-based protocolparser", + "homepage": "http://search.cpan.org/dist/RedisDB/", + "twitter": [ + "trinitum" + ] +} \ No newline at end of file diff --git a/clients/perl/github.com/wjackson/AnyEvent-Hiredis.json b/clients/perl/github.com/wjackson/AnyEvent-Hiredis.json new file mode 100644 index 0000000000..2dbb31a483 --- /dev/null +++ b/clients/perl/github.com/wjackson/AnyEvent-Hiredis.json @@ -0,0 +1,5 @@ +{ + "name": "AnyEvent::Hiredis", + "description": "Non-blocking client using the hiredis C library", + "homepage": "http://search.cpan.org/dist/AnyEvent-Hiredis/" +} \ No newline at end of file diff --git a/clients/perl/search.cpan.org/dist/Danga-Socket-Redis.json b/clients/perl/search.cpan.org/dist/Danga-Socket-Redis.json new file mode 100644 index 0000000000..00096d4dcc --- /dev/null +++ b/clients/perl/search.cpan.org/dist/Danga-Socket-Redis.json @@ -0,0 +1,7 @@ +{ + "name": "Danga::Socket::Redis", + "description": "An asynchronous redis client using the Danga::Socket async library", + "twitter": [ + "martinredmond" + ] +} \ No newline at end of file diff --git a/clients/perl/search.cpan.org/dist/Redis-hiredis.json b/clients/perl/search.cpan.org/dist/Redis-hiredis.json new file mode 100644 index 0000000000..c49dcfb0c4 --- /dev/null +++ b/clients/perl/search.cpan.org/dist/Redis-hiredis.json @@ -0,0 +1,7 @@ +{ + "name": "Redis::hiredis", + "description": "Perl binding for the hiredis C client", + "twitter": [ + "neophenix" + ] +} \ No newline at 
end of file diff --git a/clients/php/github.com/amphp/redis.json b/clients/php/github.com/amphp/redis.json new file mode 100644 index 0000000000..2ec88eef05 --- /dev/null +++ b/clients/php/github.com/amphp/redis.json @@ -0,0 +1,7 @@ +{ + "name": "amphp/redis", + "description": "An async redis client built on the amp concurrency framework.", + "twitter": [ + "kelunik" + ] +} \ No newline at end of file diff --git a/clients/php/github.com/cheprasov/php-redis-client.json b/clients/php/github.com/cheprasov/php-redis-client.json new file mode 100644 index 0000000000..2118aa2169 --- /dev/null +++ b/clients/php/github.com/cheprasov/php-redis-client.json @@ -0,0 +1,7 @@ +{ + "name": "cheprasov/php-redis-client", + "description": "Supported PHP client for Redis. PHP ver 5.5 - 7.4 / REDIS ver 2.6 - 6.0", + "twitter": [ + "cheprasov84" + ] +} \ No newline at end of file diff --git a/clients/php/github.com/colinmollenhour/credis.json b/clients/php/github.com/colinmollenhour/credis.json new file mode 100644 index 0000000000..15ef4896ab --- /dev/null +++ b/clients/php/github.com/colinmollenhour/credis.json @@ -0,0 +1,7 @@ +{ + "name": "Credis", + "description": "Lightweight, standalone, unit-tested fork of Redisent which wraps phpredis for best performance if available.", + "twitter": [ + "colinmollenhour" + ] +} \ No newline at end of file diff --git a/clients/php/github.com/jamescauwelier/PSRedis.json b/clients/php/github.com/jamescauwelier/PSRedis.json new file mode 100644 index 0000000000..7da9c02a94 --- /dev/null +++ b/clients/php/github.com/jamescauwelier/PSRedis.json @@ -0,0 +1,7 @@ +{ + "name": "PHP Sentinel Client", + "description": "A PHP sentinel client acting as an extension to your regular redis client", + "twitter": [ + "jamescauwelier" + ] +} \ No newline at end of file diff --git a/clients/php/github.com/nrk/predis.json b/clients/php/github.com/nrk/predis.json new file mode 100644 index 0000000000..991cf5c42c --- /dev/null +++ b/clients/php/github.com/nrk/predis.json @@ -0,0 +1,8 @@ +{ + "name": "Predis", + "description": "Mature and supported", + "recommended": true, + "twitter": [ + "JoL1hAHN" + ] +} \ No newline at end of file diff --git a/clients/php/github.com/phpredis/phpredis.json b/clients/php/github.com/phpredis/phpredis.json new file mode 100644 index 0000000000..89d7fdfe23 --- /dev/null +++ b/clients/php/github.com/phpredis/phpredis.json @@ -0,0 +1,10 @@ +{ + "name": "phpredis", + "description": "This is a client written in C as a PHP module.", + "recommended": true, + "twitter": [ + "grumi78", + "yowgi", + "yatsukhnenko" + ] +} \ No newline at end of file diff --git a/clients/php/github.com/yampee/Redis.json b/clients/php/github.com/yampee/Redis.json new file mode 100644 index 0000000000..265e555fb7 --- /dev/null +++ b/clients/php/github.com/yampee/Redis.json @@ -0,0 +1,7 @@ +{ + "name": "Yampee Redis", + "description": "A full-featured Redis client for PHP 5.2. 
Easy to use and extend.", + "twitter": [ + "tgalopin" + ] +} \ No newline at end of file diff --git a/clients/php/github.com/ziogas/PHP-Redis-implementation.json b/clients/php/github.com/ziogas/PHP-Redis-implementation.json new file mode 100644 index 0000000000..f4dd0ed7b8 --- /dev/null +++ b/clients/php/github.com/ziogas/PHP-Redis-implementation.json @@ -0,0 +1,8 @@ +{ + "name": "PHP Redis implementation / wrapper", + "description": "Simple and lightweight Redis implementation; basically a wrapper for raw Redis commands.", + "homepage": "https://github.com/ziogas/PHP-Redis-implementation", + "twitter": [ + "arminas" + ] +} \ No newline at end of file diff --git a/clients/plsql/github.com/SeYoungLee/oredis.json b/clients/plsql/github.com/SeYoungLee/oredis.json new file mode 100644 index 0000000000..63296b1b3f --- /dev/null +++ b/clients/plsql/github.com/SeYoungLee/oredis.json @@ -0,0 +1,7 @@ +{ + "name": "oredis", + "description": "Redis client library for Oracle PL/SQL. It supports Redis Cluster and asynchronous execution", + "twitter": [ + "SeyoungLee" + ] +} \ No newline at end of file diff --git a/clients/prolog/github.com/SWI-Prolog/packages-redis.json b/clients/prolog/github.com/SWI-Prolog/packages-redis.json new file mode 100644 index 0000000000..c61da6b314 --- /dev/null +++ b/clients/prolog/github.com/SWI-Prolog/packages-redis.json @@ -0,0 +1,4 @@ +{ + "name": "Redis library for SWI-Prolog", + "description": "Prolog redis client that exploits SWI-Prolog's extensions such as strings for compact replies and threads to deal with publish/subscribe." +} \ No newline at end of file diff --git a/clients/pure-data/github.com/lp/puredis.json b/clients/pure-data/github.com/lp/puredis.json new file mode 100644 index 0000000000..05dda65549 --- /dev/null +++ b/clients/pure-data/github.com/lp/puredis.json @@ -0,0 +1,7 @@ +{ + "name": "Puredis", + "description": "Pure Data Redis sync, async and subscriber client", + "twitter": [ + "loopole" + ] +} \ No newline at end of file diff --git a/clients/python/github.com/AWS/GLIDE-for-Redis.json b/clients/python/github.com/AWS/GLIDE-for-Redis.json new file mode 100644 index 0000000000..cfb35fce02 --- /dev/null +++ b/clients/python/github.com/AWS/GLIDE-for-Redis.json @@ -0,0 +1,4 @@ +{ + "name": "GLIDE for Redis", + "description": "General Language Independent Driver for the Enterprise (GLIDE) for Redis is an advanced multi-language Redis client that is feature rich, highly performant, and built for reliability and operational stability. GLIDE for Redis is supported by AWS." 
+} diff --git a/clients/python/github.com/DriverX/aioredis-cluster.json b/clients/python/github.com/DriverX/aioredis-cluster.json new file mode 100644 index 0000000000..c23469ec2d --- /dev/null +++ b/clients/python/github.com/DriverX/aioredis-cluster.json @@ -0,0 +1,4 @@ +{ + "name": "aioredis-cluster", + "description": "Redis Cluster client implementation based on aioredis v1.x.x" +} diff --git a/clients/python/github.com/Grokzen/redis-py-cluster.json b/clients/python/github.com/Grokzen/redis-py-cluster.json new file mode 100644 index 0000000000..c9d0af1f6b --- /dev/null +++ b/clients/python/github.com/Grokzen/redis-py-cluster.json @@ -0,0 +1,7 @@ +{ + "name": "redis-py-cluster", + "description": "Adds cluster support to redis-py < 4.1.0. Obsolete for 4.1.0 and above.", + "twitter": [ + "grokzen" + ] +} \ No newline at end of file diff --git a/clients/python/github.com/KissPeter/redis-streams.json b/clients/python/github.com/KissPeter/redis-streams.json new file mode 100644 index 0000000000..f56a3be15f --- /dev/null +++ b/clients/python/github.com/KissPeter/redis-streams.json @@ -0,0 +1,4 @@ +{ + "name": "redis-streams", + "description": "The Redis-Streams Python library provides an easy-to-use interface for batch collection and processing. It simplifies consumer group and consumer management. Designed for highly available, scalable, and distributed environments, it also offers monitoring and scaling capabilities in addition to its core functionality." +} \ No newline at end of file diff --git a/clients/python/github.com/aallamaa/desir.json b/clients/python/github.com/aallamaa/desir.json new file mode 100644 index 0000000000..24ea621f39 --- /dev/null +++ b/clients/python/github.com/aallamaa/desir.json @@ -0,0 +1,7 @@ +{ + "name": "desir", + "description": "Attempt to make a minimalist redis python client.", + "twitter": [ + "aallamaa" + ] +} diff --git a/clients/python/github.com/alisaifee/coredis.json b/clients/python/github.com/alisaifee/coredis.json new file mode 100644 index 0000000000..21e340837c --- /dev/null +++ b/clients/python/github.com/alisaifee/coredis.json @@ -0,0 +1,4 @@ +{ + "name": "coredis", + "description": "Async redis client with support for redis server, cluster & sentinel" +} diff --git a/clients/python/github.com/brainix/pottery.json b/clients/python/github.com/brainix/pottery.json new file mode 100644 index 0000000000..10cf7a5ab1 --- /dev/null +++ b/clients/python/github.com/brainix/pottery.json @@ -0,0 +1,7 @@ +{ + "name": "Pottery", + "description": "High level Pythonic dict, set, and list like containers around Redis data types (Python 3 only)", + "twitter": [ + "brainix" + ] +} \ No newline at end of file diff --git a/clients/python/github.com/cf020031308/redisio.json b/clients/python/github.com/cf020031308/redisio.json new file mode 100644 index 0000000000..da400308ec --- /dev/null +++ b/clients/python/github.com/cf020031308/redisio.json @@ -0,0 +1,4 @@ +{ + "name": "redisio", + "description": "A tiny and fast redis client for script boys."
+} \ No newline at end of file diff --git a/clients/python/github.com/coleifer/walrus.json b/clients/python/github.com/coleifer/walrus.json new file mode 100644 index 0000000000..00d4817866 --- /dev/null +++ b/clients/python/github.com/coleifer/walrus.json @@ -0,0 +1,5 @@ +{ + "name": "walrus", + "description": "Lightweight Python utilities for working with Redis.", + "recommended": true +} \ No newline at end of file diff --git a/clients/python/github.com/evilkost/brukva.json b/clients/python/github.com/evilkost/brukva.json new file mode 100644 index 0000000000..f8ef1ea17f --- /dev/null +++ b/clients/python/github.com/evilkost/brukva.json @@ -0,0 +1,4 @@ +{ + "name": "brukva", + "description": "Asynchronous Redis client that works within Tornado IO loop" +} \ No newline at end of file diff --git a/clients/python/github.com/fiorix/txredisapi.json b/clients/python/github.com/fiorix/txredisapi.json new file mode 100644 index 0000000000..f05e6a1f5a --- /dev/null +++ b/clients/python/github.com/fiorix/txredisapi.json @@ -0,0 +1,7 @@ +{ + "name": "txredisapi", + "description": "Full featured, non-blocking client for Twisted.", + "twitter": [ + "fiorix" + ] +} \ No newline at end of file diff --git a/clients/python/github.com/gh0st-work/python_redis_orm.json b/clients/python/github.com/gh0st-work/python_redis_orm.json new file mode 100644 index 0000000000..5d46a9c02f --- /dev/null +++ b/clients/python/github.com/gh0st-work/python_redis_orm.json @@ -0,0 +1,4 @@ +{ + "name": "python-redis-orm", + "description": "Python Redis ORM library that gives Redis easy-to-use objects with fields and speeds up development, inspired by Django ORM" +} \ No newline at end of file diff --git a/clients/python/github.com/groove-x/gxredis.json b/clients/python/github.com/groove-x/gxredis.json new file mode 100644 index 0000000000..14c4d449e6 --- /dev/null +++ b/clients/python/github.com/groove-x/gxredis.json @@ -0,0 +1,7 @@ +{ + "name": "gxredis", + "description": "Simple redis-py wrapper library", + "twitter": [ + "loose_agilist" + ] +} \ No newline at end of file diff --git a/clients/python/github.com/jonathanslenders/asyncio-redis.json b/clients/python/github.com/jonathanslenders/asyncio-redis.json new file mode 100644 index 0000000000..c5bfef6dc5 --- /dev/null +++ b/clients/python/github.com/jonathanslenders/asyncio-redis.json @@ -0,0 +1,8 @@ +{ + "name": "asyncio_redis", + "description": "Asynchronous Redis client that works with the asyncio event loop", + "homepage": "http://asyncio-redis.readthedocs.org/", + "twitter": [ + "jonathan_s" + ] +} \ No newline at end of file diff --git a/clients/python/github.com/khamin/redisca2.json b/clients/python/github.com/khamin/redisca2.json new file mode 100644 index 0000000000..2ee26a629b --- /dev/null +++ b/clients/python/github.com/khamin/redisca2.json @@ -0,0 +1,7 @@ +{ + "name": "redisca2", + "description": "Lightweight ORM for Redis", + "twitter": [ + "vitaliykhamin" + ] +} \ No newline at end of file diff --git a/clients/python/github.com/pepijndevos/pypredis.json b/clients/python/github.com/pepijndevos/pypredis.json new file mode 100644 index 0000000000..9324b309ca --- /dev/null +++ b/clients/python/github.com/pepijndevos/pypredis.json
@@ -0,0 +1,7 @@ +{ + "name": "Pypredis", + "description": "A client focused on arbitrary sharding and parallel pipelining.", + "twitter": [ + "pepijndevos" + ] +} \ No newline at end of file diff --git a/clients/python/github.com/redis/redis-py.json b/clients/python/github.com/redis/redis-py.json new file mode 100644 index 0000000000..68dd999ec1 --- /dev/null +++ b/clients/python/github.com/redis/redis-py.json @@ -0,0 +1,5 @@ +{ + "name": "redis-py", + "description": "Mature and supported. The way to go for Python.", + "official": true +} diff --git a/clients/python/github.com/schlitzered/pyredis.json b/clients/python/github.com/schlitzered/pyredis.json new file mode 100644 index 0000000000..ac16c72529 --- /dev/null +++ b/clients/python/github.com/schlitzered/pyredis.json @@ -0,0 +1,4 @@ +{ + "name": "pyredis", + "description": "Python Client with support for Redis Cluster. Currently only Python 3 is supported." +} \ No newline at end of file diff --git a/clients/python/github.com/thefab/tornadis.json b/clients/python/github.com/thefab/tornadis.json new file mode 100644 index 0000000000..4a5e92f46e --- /dev/null +++ b/clients/python/github.com/thefab/tornadis.json @@ -0,0 +1,5 @@ +{ + "name": "tornadis", + "description": "Async minimal Redis client for the Tornado ioloop, designed for performance (uses the C hiredis parser)", + "homepage": "http://tornadis.readthedocs.org" +} \ No newline at end of file diff --git a/clients/python/pypi.python.org/pypi/txredis.json b/clients/python/pypi.python.org/pypi/txredis.json new file mode 100644 index 0000000000..8aa6432c65 --- /dev/null +++ b/clients/python/pypi.python.org/pypi/txredis.json @@ -0,0 +1,7 @@ +{ + "name": "txredis", + "description": "Python/Twisted client for Redis key-value store", + "twitter": [ + "dio_rian" + ] +} diff --git a/clients/r/bitbucket.org/cmbce/r-package-rediscli.json b/clients/r/bitbucket.org/cmbce/r-package-rediscli.json new file mode 100644 index 0000000000..2ada10f30f --- /dev/null +++ b/clients/r/bitbucket.org/cmbce/r-package-rediscli.json @@ -0,0 +1,7 @@ +{ + "name": "RedisCli", + "description": "Basic client passing a (batch of) command(s) to redis-cli, getting back a (list of) character vector(s).", + "twitter": [ + "CorentinBarbu" + ] +} \ No newline at end of file diff --git a/clients/r/github.com/bwlewis/rredis.json b/clients/r/github.com/bwlewis/rredis.json new file mode 100644 index 0000000000..e8f429a45a --- /dev/null +++ b/clients/r/github.com/bwlewis/rredis.json @@ -0,0 +1,8 @@ +{ + "name": "rredis", + "description": "Redis client for R", + "homepage": "https://cran.r-project.org/web/packages/rredis/index.html", + "twitter": [ + "bwlewis" + ] +} \ No newline at end of file diff --git a/clients/r/github.com/eddelbuettel/rcppredis.json b/clients/r/github.com/eddelbuettel/rcppredis.json new file mode 100644 index 0000000000..f7b3ebf0f0 --- /dev/null +++ b/clients/r/github.com/eddelbuettel/rcppredis.json @@ -0,0 +1,8 @@ +{ + "name": "RcppRedis", + "description": "R interface to Redis using the hiredis library.", + "homepage": "https://cran.rstudio.com/web/packages/RcppRedis/index.html", + "twitter": [ + "eddelbuettel" + ] +} \ No newline at end of file diff --git a/clients/r/github.com/richfitz/redux.json b/clients/r/github.com/richfitz/redux.json new file mode 100644 index
0000000000..284896ab50 --- /dev/null +++ b/clients/r/github.com/richfitz/redux.json @@ -0,0 +1,8 @@ +{ + "name": "Redux", + "description": "Provides a low-level interface to Redis, allowing execution of arbitrary Redis commands with almost no interface.", + "homepage": "http://richfitz.github.io/redux/", + "twitter": [ + "rgfitzjohn" + ] +} \ No newline at end of file diff --git a/clients/racket/github.com/eu90h/rackdis.json b/clients/racket/github.com/eu90h/rackdis.json new file mode 100644 index 0000000000..e95bd9a708 --- /dev/null +++ b/clients/racket/github.com/eu90h/rackdis.json @@ -0,0 +1,7 @@ +{ + "name": "Rackdis", + "description": "A Redis client for Racket", + "twitter": [ + "eu90h" + ] +} \ No newline at end of file diff --git a/clients/racket/github.com/stchang/redis.json b/clients/racket/github.com/stchang/redis.json new file mode 100644 index 0000000000..bb58c479f8 --- /dev/null +++ b/clients/racket/github.com/stchang/redis.json @@ -0,0 +1,7 @@ +{ + "name": "redis-racket", + "description": "A Redis client for Racket.", + "twitter": [ + "s_chng" + ] +} \ No newline at end of file diff --git a/clients/rebol/github.com/rebolek/prot-redis.json b/clients/rebol/github.com/rebolek/prot-redis.json new file mode 100644 index 0000000000..ddddb16df0 --- /dev/null +++ b/clients/rebol/github.com/rebolek/prot-redis.json @@ -0,0 +1,7 @@ +{ + "name": "prot-redis", + "description": "Redis network scheme for Rebol 3", + "twitter": [ + "rebolek" + ] +} \ No newline at end of file diff --git a/clients/ruby/github.com/amakawa/redic.json b/clients/ruby/github.com/amakawa/redic.json new file mode 100644 index 0000000000..fc88ee3167 --- /dev/null +++ b/clients/ruby/github.com/amakawa/redic.json @@ -0,0 +1,8 @@ +{ + "name": "redic", + "description": "Lightweight Redis Client", + "twitter": [ + "soveran", + "cyx" + ] +} \ No newline at end of file diff --git a/clients/ruby/github.com/bukalapak/redis-cluster.json b/clients/ruby/github.com/bukalapak/redis-cluster.json new file mode 100644 index 0000000000..64a7cf707e --- /dev/null +++ b/clients/ruby/github.com/bukalapak/redis-cluster.json @@ -0,0 +1,7 @@ +{ + "name": "redis-cluster", + "description": "Redis cluster client on top of redis-rb. Supports pipelining.", + "twitter": [ + "bukalapak" + ] +} \ No newline at end of file diff --git a/clients/ruby/github.com/madsimian/em-redis.json b/clients/ruby/github.com/madsimian/em-redis.json new file mode 100644 index 0000000000..8ba88ce114 --- /dev/null +++ b/clients/ruby/github.com/madsimian/em-redis.json @@ -0,0 +1,7 @@ +{ + "name": "em-redis", + "description": "An EventMachine-based implementation of the Redis protocol.
No longer actively maintained.", + "twitter": [ + "madsimian" + ] +} \ No newline at end of file diff --git a/clients/ruby/github.com/mloughran/em-hiredis.json b/clients/ruby/github.com/mloughran/em-hiredis.json new file mode 100644 index 0000000000..d82193e83c --- /dev/null +++ b/clients/ruby/github.com/mloughran/em-hiredis.json @@ -0,0 +1,7 @@ +{ + "name": "em-hiredis", + "description": "An EventMachine Redis client (uses hiredis).", + "twitter": [ + "mloughran" + ] +} \ No newline at end of file diff --git a/clients/ruby/github.com/redis-rb/redis-client.json b/clients/ruby/github.com/redis-rb/redis-client.json new file mode 100644 index 0000000000..06eb81aca8 --- /dev/null +++ b/clients/ruby/github.com/redis-rb/redis-client.json @@ -0,0 +1,4 @@ +{ + "name": "redis-client", + "description": "Simple low level client for Redis 6+" +} diff --git a/clients/ruby/github.com/redis-rb/redis-cluster-client.json b/clients/ruby/github.com/redis-rb/redis-cluster-client.json new file mode 100644 index 0000000000..bed2139126 --- /dev/null +++ b/clients/ruby/github.com/redis-rb/redis-cluster-client.json @@ -0,0 +1,4 @@ +{ + "name": "redis-cluster-client", + "description": "A simple client for Redis 6+ cluster" +} diff --git a/clients/ruby/github.com/redis/redis-rb.json b/clients/ruby/github.com/redis/redis-rb.json new file mode 100644 index 0000000000..35e112d5d2 --- /dev/null +++ b/clients/ruby/github.com/redis/redis-rb.json @@ -0,0 +1,11 @@ +{ + "name": "redis-rb", + "description": "Very stable and mature client. Install and require the hiredis gem before redis-rb for maximum performance.", + "recommended": true, + "twitter": [ + "ezmobius", + "soveran", + "djanowski", + "pnoordhuis" + ] +} \ No newline at end of file diff --git a/clients/rust/github.com/AsoSunag/redis-client.json b/clients/rust/github.com/AsoSunag/redis-client.json new file mode 100644 index 0000000000..b203afea3e --- /dev/null +++ b/clients/rust/github.com/AsoSunag/redis-client.json @@ -0,0 +1,4 @@ +{ + "name": "redis-client", + "description": "A Redis client library for Rust." +} \ No newline at end of file diff --git a/clients/rust/github.com/dahomey-technologies/rustis.json b/clients/rust/github.com/dahomey-technologies/rustis.json new file mode 100644 index 0000000000..df88e4ecf2 --- /dev/null +++ b/clients/rust/github.com/dahomey-technologies/rustis.json @@ -0,0 +1,4 @@ +{ + "name": "rustis", + "description": "An asynchronous Redis client for Rust." 
+} diff --git a/clients/rust/github.com/ltoddy/redis-rs.json b/clients/rust/github.com/ltoddy/redis-rs.json new file mode 100644 index 0000000000..98915e8dfa --- /dev/null +++ b/clients/rust/github.com/ltoddy/redis-rs.json @@ -0,0 +1,7 @@ +{ + "name": "redisclient", + "description": "Redis client for Rust.", + "twitter": [ + "ltoddygen" + ] +} \ No newline at end of file diff --git a/clients/rust/github.com/mitsuhiko/redis-rs.json b/clients/rust/github.com/mitsuhiko/redis-rs.json new file mode 100644 index 0000000000..14d2e7f10c --- /dev/null +++ b/clients/rust/github.com/mitsuhiko/redis-rs.json @@ -0,0 +1,8 @@ +{ + "name": "redis-rs", + "description": "A high- and low-level client library for Redis tracking Rust nightly.", + "recommended": true, + "twitter": [ + "mitsuhiko" + ] +} \ No newline at end of file diff --git a/clients/rust/github.com/mneumann/rust-redis.json b/clients/rust/github.com/mneumann/rust-redis.json new file mode 100644 index 0000000000..5c2374af66 --- /dev/null +++ b/clients/rust/github.com/mneumann/rust-redis.json @@ -0,0 +1,7 @@ +{ + "name": "rust-redis", + "description": "A Rust client library for Redis.", + "twitter": [ + "mneumann" + ] +} \ No newline at end of file diff --git a/clients/scala/github.com/acrosa/scala-redis.json b/clients/scala/github.com/acrosa/scala-redis.json new file mode 100644 index 0000000000..612343f5bf --- /dev/null +++ b/clients/scala/github.com/acrosa/scala-redis.json @@ -0,0 +1,8 @@ +{ + "name": "scala-redis", + "description": "A Redis client.", + "recommended": true, + "twitter": [ + "alejandrocrosa" + ] +} diff --git a/clients/scala/github.com/andreyk0/redis-client-scala-netty.json b/clients/scala/github.com/andreyk0/redis-client-scala-netty.json new file mode 100644 index 0000000000..2bfa68de34 --- /dev/null +++ b/clients/scala/github.com/andreyk0/redis-client-scala-netty.json @@ -0,0 +1,4 @@ +{ + "name": "redis-client-scala-netty", + "description": "A Redis client." +} diff --git a/clients/scala/github.com/chiradip/RedisClient.json b/clients/scala/github.com/chiradip/RedisClient.json new file mode 100644 index 0000000000..6b20670337 --- /dev/null +++ b/clients/scala/github.com/chiradip/RedisClient.json @@ -0,0 +1,7 @@ +{ + "name": "RedisClient", + "description": "A no-nonsense Redis client written in pure Scala.
It preserves the elegant Redis style without the need to learn a special API", + "twitter": [ + "chiradip" + ] +} \ No newline at end of file diff --git a/clients/scala/github.com/chrisdinn/brando.json b/clients/scala/github.com/chrisdinn/brando.json new file mode 100644 index 0000000000..5871678cdb --- /dev/null +++ b/clients/scala/github.com/chrisdinn/brando.json @@ -0,0 +1,7 @@ +{ + "name": "Brando", + "description": "A Redis client written with the Akka IO package introduced in Akka 2.2.", + "twitter": [ + "chrisdinn" + ] +} \ No newline at end of file diff --git a/clients/scala/github.com/debasishg/scala-redis.json b/clients/scala/github.com/debasishg/scala-redis.json new file mode 100644 index 0000000000..c3423e2732 --- /dev/null +++ b/clients/scala/github.com/debasishg/scala-redis.json @@ -0,0 +1,7 @@ +{ + "name": "scala-redis", + "description": "Apparently a fork of the original client from @alejandrocrosa", + "twitter": [ + "debasishg" + ] +} \ No newline at end of file diff --git a/clients/scala/github.com/etaty/rediscala.json b/clients/scala/github.com/etaty/rediscala.json new file mode 100644 index 0000000000..78e84d6a7d --- /dev/null +++ b/clients/scala/github.com/etaty/rediscala.json @@ -0,0 +1,7 @@ +{ + "name": "rediscala", + "description": "A Redis client for Scala (2.10+) and Akka (2.2+) with non-blocking and asynchronous I/O operations.", + "twitter": [ + "etaty" + ] +} \ No newline at end of file diff --git a/clients/scala/github.com/jodersky/redicl.json b/clients/scala/github.com/jodersky/redicl.json new file mode 100644 index 0000000000..ffd4d45e88 --- /dev/null +++ b/clients/scala/github.com/jodersky/redicl.json @@ -0,0 +1,5 @@ +{ + "name": "redicl", + "description": "A lean and mean redis client implementation that uses only the Scala standard library.
Available for the JVM and native.", + "homepage": "https://github.com/jodersky/redicl" +} diff --git a/clients/scala/github.com/laserdisc-io/laserdisc.json b/clients/scala/github.com/laserdisc-io/laserdisc.json new file mode 100644 index 0000000000..1fd86de0e7 --- /dev/null +++ b/clients/scala/github.com/laserdisc-io/laserdisc.json @@ -0,0 +1,8 @@ +{ + "name": "laserdisc", + "description": "Future free Fs2 native pure FP Redis client http://laserdisc.io", + "twitter": [ + "JSirocchi", + "barambani" + ] +} \ No newline at end of file diff --git a/clients/scala/github.com/monix/monix-connect.json b/clients/scala/github.com/monix/monix-connect.json new file mode 100644 index 0000000000..e76790c23d --- /dev/null +++ b/clients/scala/github.com/monix/monix-connect.json @@ -0,0 +1,8 @@ +{ + "name": "monix-connect", + "description": "Monix integration with Redis", + "homepage": "https://monix.github.io/monix-connect/docs/redis", + "twitter": [ + "paualarco" + ] +} \ No newline at end of file diff --git a/clients/scala/github.com/naoh87/lettucef.json b/clients/scala/github.com/naoh87/lettucef.json new file mode 100644 index 0000000000..50478f5d71 --- /dev/null +++ b/clients/scala/github.com/naoh87/lettucef.json @@ -0,0 +1,7 @@ +{ + "name": "LettuceF", + "description": "Scala FP wrapper for Lettuce with Cats Effect", + "twitter": [ + "naoh87" + ] +} \ No newline at end of file diff --git a/clients/scala/github.com/pk11/sedis.json b/clients/scala/github.com/pk11/sedis.json new file mode 100644 index 0000000000..3f32325958 --- /dev/null +++ b/clients/scala/github.com/pk11/sedis.json @@ -0,0 +1,4 @@ +{ + "name": "sedis", + "description": "a thin scala wrapper for the popular Redis Java client, Jedis" +} \ No newline at end of file diff --git a/clients/scala/github.com/profunktor/redis4cats.json b/clients/scala/github.com/profunktor/redis4cats.json new file mode 100644 index 0000000000..2441c8298f --- /dev/null +++ b/clients/scala/github.com/profunktor/redis4cats.json @@ -0,0 +1,8 @@ +{ + "name": "Redis4Cats", + "description": "Purely functional Redis client for Cats Effect & Fs2", + "homepage": "https://redis4cats.profunktor.dev/", + "twitter": [ + "volpegabriel87" + ] +} \ No newline at end of file diff --git a/clients/scala/github.com/redislabs/spark-redis.json b/clients/scala/github.com/redislabs/spark-redis.json new file mode 100644 index 0000000000..9d56304927 --- /dev/null +++ b/clients/scala/github.com/redislabs/spark-redis.json @@ -0,0 +1,9 @@ +{ + "name": "spark-redis", + "description": "A connector between Apache Spark and Redis.", + "twitter": [ + "redislabs", + "sunheehnus", + "dvirsky" + ] +} \ No newline at end of file diff --git a/clients/scala/github.com/scredis/scredis.json b/clients/scala/github.com/scredis/scredis.json new file mode 100644 index 0000000000..bc7b9ccd12 --- /dev/null +++ b/clients/scala/github.com/scredis/scredis.json @@ -0,0 +1,7 @@ +{ + "name": "scredis", + "description": "Non-blocking, ultra-fast Scala Redis client built on top of Akka IO, used in production at Livestream", + "twitter": [ + "livestream" + ] +} \ No newline at end of file diff --git a/clients/scala/github.com/twitter/finagle.json 
b/clients/scala/github.com/twitter/finagle.json new file mode 100644 index 0000000000..d85604ce4c --- /dev/null +++ b/clients/scala/github.com/twitter/finagle.json @@ -0,0 +1,4 @@ +{ + "name": "finagle", + "description": "Redis client based on Finagle" +} \ No newline at end of file diff --git a/clients/scala/github.com/yarosman/redis-client-scala-netty.json b/clients/scala/github.com/yarosman/redis-client-scala-netty.json new file mode 100644 index 0000000000..3619300d4f --- /dev/null +++ b/clients/scala/github.com/yarosman/redis-client-scala-netty.json @@ -0,0 +1,4 @@ +{ + "name": "scala-redis", + "description": "Non-blocking, netty 4.1.x based Scala Redis client" +} \ No newline at end of file diff --git a/clients/scheme/github.com/aconchillo/guile-redis.json b/clients/scheme/github.com/aconchillo/guile-redis.json new file mode 100644 index 0000000000..598cfb4d74 --- /dev/null +++ b/clients/scheme/github.com/aconchillo/guile-redis.json @@ -0,0 +1,7 @@ +{ + "name": "guile-redis", + "description": "A Redis client for Guile", + "twitter": [ + "aconchillo" + ] +} \ No newline at end of file diff --git a/clients/scheme/github.com/carld/redis-client.egg.json b/clients/scheme/github.com/carld/redis-client.egg.json new file mode 100644 index 0000000000..4c7a9a4216 --- /dev/null +++ b/clients/scheme/github.com/carld/redis-client.egg.json @@ -0,0 +1,8 @@ +{ + "name": "redis-client", + "description": "A Redis client for Chicken Scheme 4.7", + "homepage": "http://wiki.call-cc.org/eggref/4/redis-client", + "twitter": [ + "carld" + ] +} \ No newline at end of file diff --git a/clients/smalltalk/github.com/mumez/RediStick.json b/clients/smalltalk/github.com/mumez/RediStick.json new file mode 100644 index 0000000000..4f1a9d53ea --- /dev/null +++ b/clients/smalltalk/github.com/mumez/RediStick.json @@ -0,0 +1,9 @@ +{ + "name": "RediStick", + "language": "Smalltalk", + "repository": "https://github.com/mumez/RediStick", + "description": "A Redis client for Pharo using Stick auto-reconnection layer.", + "authors": [ + "umejava" + ] +} \ No newline at end of file diff --git a/clients/smalltalk/github.com/svenvc/SimpleRedisClient.json b/clients/smalltalk/github.com/svenvc/SimpleRedisClient.json new file mode 100644 index 0000000000..0ef762e6f5 --- /dev/null +++ b/clients/smalltalk/github.com/svenvc/SimpleRedisClient.json @@ -0,0 +1,10 @@ +{ + "name": "SimpleRedisClient", + "language": "Smalltalk", + "repository": "https://github.com/svenvc/SimpleRedisClient", + "description": "A minimal Redis client for Pharo.", + "homepage": "https://medium.com/concerning-pharo/quick-write-me-a-redis-client-5fbe4ddfb13d", + "authors": [ + "SvenVC" + ] +} \ No newline at end of file diff --git a/clients/smalltalk/github.com/tblanchard/Pharo-Redis.json b/clients/smalltalk/github.com/tblanchard/Pharo-Redis.json new file mode 100644 index 0000000000..cbd1afb89a --- /dev/null +++ b/clients/smalltalk/github.com/tblanchard/Pharo-Redis.json @@ -0,0 +1,8 @@ +{ + "name": "Pharo-Redis", + "language": "Smalltalk", + "description": "A full featured Redis client for Pharo. 
It was forked from svenvc/SimpleRedisClient, and that simple client is still at its core.", + "authors": [ + "ToddBlanchard10" + ] +} \ No newline at end of file diff --git a/clients/swift/github.com/Farhaddc/Swidis.json b/clients/swift/github.com/Farhaddc/Swidis.json new file mode 100644 index 0000000000..a618284af8 --- /dev/null +++ b/clients/swift/github.com/Farhaddc/Swidis.json @@ -0,0 +1,7 @@ +{ + "name": "Swidis", + "description": "An iOS framework allowing you to connect to a Redis server with the Swift programming language.", + "twitter": [ + "Farhaddc" + ] +} \ No newline at end of file diff --git a/clients/swift/github.com/Mordil/RediStack.json b/clients/swift/github.com/Mordil/RediStack.json new file mode 100644 index 0000000000..e6afd082ce --- /dev/null +++ b/clients/swift/github.com/Mordil/RediStack.json @@ -0,0 +1,9 @@ +{ + "name": "RediStack", + "description": "Non-blocking, event-driven Swift client for Redis built with SwiftNIO for all official Swift deployment environments.", + "recommended": true, + "homepage": "https://docs.redistack.info", + "twitter": [ + "mordil" + ] +} \ No newline at end of file diff --git a/clients/swift/github.com/Zewo/Redis.json b/clients/swift/github.com/Zewo/Redis.json new file mode 100644 index 0000000000..4e34f9c6f3 --- /dev/null +++ b/clients/swift/github.com/Zewo/Redis.json @@ -0,0 +1,7 @@ +{ + "name": "Redis", + "description": "Redis client for Swift. OpenSwift C7 Compliant, OS X and Linux compatible.", + "twitter": [ + "rabc" + ] +} \ No newline at end of file diff --git a/clients/swift/github.com/czechboy0/Redbird.json b/clients/swift/github.com/czechboy0/Redbird.json new file mode 100644 index 0000000000..4584fd510f --- /dev/null +++ b/clients/swift/github.com/czechboy0/Redbird.json @@ -0,0 +1,7 @@ +{ + "name": "Redbird", + "description": "Pure-Swift implementation of a Redis client from the original protocol spec (OS X + Linux compatible)", + "twitter": [ + "czechboy0" + ] +} \ No newline at end of file diff --git a/clients/swift/github.com/michaelvanstraten/Swifty-Redis.json b/clients/swift/github.com/michaelvanstraten/Swifty-Redis.json new file mode 100644 index 0000000000..ddf86c957f --- /dev/null +++ b/clients/swift/github.com/michaelvanstraten/Swifty-Redis.json @@ -0,0 +1,5 @@ +{ + "name": "SwiftyRedis", + "description": "SwiftyRedis is a high-level async Redis library for Swift.
", + "homepage": "https://michaelvanstraten.github.io/swifty-redis/documentation/swiftyredis/" +} diff --git a/clients/swift/github.com/perrystreetsoftware/PSSRedisClient.json b/clients/swift/github.com/perrystreetsoftware/PSSRedisClient.json new file mode 100644 index 0000000000..4553eeaef9 --- /dev/null +++ b/clients/swift/github.com/perrystreetsoftware/PSSRedisClient.json @@ -0,0 +1,7 @@ +{ + "name": "PSSRedisClient", + "description": "Swift redis client using the CocoaAsyncSocket library, installable via Cocoapods", + "twitter": [ + "esilverberg" + ] +} \ No newline at end of file diff --git a/clients/swift/github.com/ronp001/SwiftRedis.json b/clients/swift/github.com/ronp001/SwiftRedis.json new file mode 100644 index 0000000000..181376acff --- /dev/null +++ b/clients/swift/github.com/ronp001/SwiftRedis.json @@ -0,0 +1,7 @@ +{ + "name": "SwiftRedis", + "description": "Basic async client for Redis in Swift (iOS)", + "twitter": [ + "ronp001" + ] +} \ No newline at end of file diff --git a/clients/swift/github.com/seznam/swift-uniredis.json b/clients/swift/github.com/seznam/swift-uniredis.json new file mode 100644 index 0000000000..00e4542408 --- /dev/null +++ b/clients/swift/github.com/seznam/swift-uniredis.json @@ -0,0 +1,4 @@ +{ + "name": "UniRedis", + "description": "Redis client for Swift on macOS and Linux, capable of pipelining and transactions, with transparent support for authentication and sentinel." +} \ No newline at end of file diff --git a/clients/tcl/github.com/gahr/retcl.json b/clients/tcl/github.com/gahr/retcl.json new file mode 100644 index 0000000000..5208ce5456 --- /dev/null +++ b/clients/tcl/github.com/gahr/retcl.json @@ -0,0 +1,7 @@ +{ + "name": "Retcl", + "description": "Retcl is an asynchronous, event-driven Redis client library implemented as a single-file Tcl module.", + "twitter": [ + "gahrgahr" + ] +} \ No newline at end of file diff --git a/clients/tcl/github.com/redis/redis.json b/clients/tcl/github.com/redis/redis.json new file mode 100644 index 0000000000..703f450e7f --- /dev/null +++ b/clients/tcl/github.com/redis/redis.json @@ -0,0 +1,7 @@ +{ + "name": "Tcl Client", + "description": "The client used in the Redis test suite. 
Not really full featured nor designed to be used in the real world.", + "twitter": [ + "antirez" + ] +} \ No newline at end of file diff --git a/clients/vb/github.com/hishamco/vRedis.json b/clients/vb/github.com/hishamco/vRedis.json new file mode 100644 index 0000000000..37f50c6df8 --- /dev/null +++ b/clients/vb/github.com/hishamco/vRedis.json @@ -0,0 +1,7 @@ +{ + "name": "vRedis", + "description": "Redis client using VB.NET.", + "twitter": [ + "hishambinateya" + ] +} \ No newline at end of file diff --git a/clients/vcl/github.com/carlosabalde/libvmod-redis.json b/clients/vcl/github.com/carlosabalde/libvmod-redis.json new file mode 100644 index 0000000000..85f11e99ba --- /dev/null +++ b/clients/vcl/github.com/carlosabalde/libvmod-redis.json @@ -0,0 +1,7 @@ +{ + "name": "libvmod-redis", + "description": "Varnish Cache module using the synchronous hiredis library API to access Redis servers from VCL.", + "twitter": [ + "carlosabalde" + ] +} \ No newline at end of file diff --git a/clients/xojo/github.com/ktekinay/XOJO-Redis.json b/clients/xojo/github.com/ktekinay/XOJO-Redis.json new file mode 100644 index 0000000000..8144b11f67 --- /dev/null +++ b/clients/xojo/github.com/ktekinay/XOJO-Redis.json @@ -0,0 +1,7 @@ +{ + "name": "Redis_MTC", + "description": "A Xojo library to connect to a Redis server.", + "twitter": [ + "kemtekinay" + ] +} \ No newline at end of file diff --git a/clients/zig/github.com/kristoff-it/zig-okredis.json b/clients/zig/github.com/kristoff-it/zig-okredis.json new file mode 100644 index 0000000000..e7c387b2ca --- /dev/null +++ b/clients/zig/github.com/kristoff-it/zig-okredis.json @@ -0,0 +1,9 @@ +{ + "name": "OkRedis", + "description": "OkRedis is a zero-allocation client for Redis 6+ ", + "recommended": true, + "homepage": "https://github.com/kristoff-it/zig-okredis", + "twitter": [ + "croloris" + ] +} \ No newline at end of file diff --git a/commands.json b/commands.json index e8bc6f758e..63637edabd 100644 --- a/commands.json +++ b/commands.json @@ -1,1779 +1,18333 @@ { - "APPEND": { - "summary": "Append a value to a key", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "value", - "type": "string" - } - ], - "since": "1.3.3", - "group": "string" - }, - "AUTH": { - "summary": "Authenticate to the server", - "arguments": [ - { - "name": "password", - "type": "string" - } - ], - "since": "0.08", - "group": "connection" - }, - "BGREWRITEAOF": { - "summary": "Asynchronously rewrite the append-only file", - "since": "1.07", - "group": "server" - }, - "BGSAVE": { - "summary": "Asynchronously save the dataset to disk", - "since": "0.07", - "group": "server" - }, - "BLPOP": { - "summary": "Remove and get the first element in a list, or block until one is available", - "arguments": [ - { - "name": "key", - "type": "key", - "multiple": true - }, - { - "name": "timeout", - "type": "integer" - } - ], - "since": "1.3.1", - "group": "list" - }, - "BRPOP": { - "summary": "Remove and get the last element in a list, or block until one is available", - "arguments": [ - { - "name": "key", - "type": "key", - "multiple": true - }, - { - "name": "timeout", - "type": "integer" - } - ], - "since": "1.3.1", - "group": "list" - }, - "BRPOPLPUSH": { - "summary": "Pop a value from a list, push it to another list and return it; or block until one is available", - "arguments": [ - { - "name": 
"source", - "type": "key" - }, - { - "name": "destination", - "type": "key" - }, - { - "name": "timeout", - "type": "integer" - } - ], - "since": "2.1.7", - "group": "list" - }, - "CONFIG GET": { - "summary": "Get the value of a configuration parameter", - "arguments": [ - { - "name": "parameter", - "type": "string" - } - ], - "since": "2.0", - "group": "server" - }, - "CONFIG SET": { - "summary": "Set a configuration parameter to the given value", - "arguments": [ - { - "name": "parameter", - "type": "string" - }, - { - "name": "value", - "type": "string" - } - ], - "since": "2.0", - "group": "server" - }, - "CONFIG RESETSTAT": { - "summary": "Reset the stats returned by INFO", - "since": "2.0", - "group": "server" - }, - "DBSIZE": { - "summary": "Return the number of keys in the selected database", - "since": "0.07", - "group": "server" - }, - "DEBUG OBJECT": { - "summary": "Get debugging information about a key", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "0.101", - "group": "server" - }, - "DEBUG SEGFAULT": { - "summary": "Make the server crash", - "since": "0.101", - "group": "server" - }, - "DECR": { - "summary": "Decrement the integer value of a key by one", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "0.07", - "group": "string" - }, - "DECRBY": { - "summary": "Decrement the integer value of a key by the given number", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "decrement", - "type": "integer" - } - ], - "since": "0.07", - "group": "string" - }, - "DEL": { - "summary": "Delete a key", - "arguments": [ - { - "name": "key", - "type": "key", - "multiple": true - } - ], - "since": "0.07", - "group": "generic" - }, - "DISCARD": { - "summary": "Discard all commands issued after MULTI", - "since": "1.3.3", - "group": "transactions" - }, - "ECHO": { - "summary": "Echo the given string", - "arguments": [ - { - "name": "message", - "type": "string" - } - ], - "since": "0.07", - "group": "connection" - }, - "EXEC": { - "summary": "Execute all commands issued after MULTI", - "since": "1.1.95", - "group": "transactions" - }, - "EXISTS": { - "summary": "Determine if a key exists", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "0.07", - "group": "generic" - }, - "EXPIRE": { - "summary": "Set a key's time to live in seconds", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "seconds", - "type": "integer" - } - ], - "since": "0.09", - "group": "generic" - }, - "EXPIREAT": { - "summary": "Set the expiration for a key as a UNIX timestamp", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "timestamp", - "type": "posix time" - } - ], - "since": "1.1", - "group": "generic" - }, - "FLUSHALL": { - "summary": "Remove all keys from all databases", - "since": "0.07", - "group": "server" - }, - "FLUSHDB": { - "summary": "Remove all keys from the current database", - "since": "0.07", - "group": "server" - }, - "GET": { - "summary": "Get the value of a key", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "0.07", - "group": "string" - }, - "GETBIT": { - "summary": "Returns the bit value at offset in the string value stored at key", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "offset", - "type": "integer" - } - ], - "since": "2.1.8", - "group": "string" - }, - "GETRANGE": { - "summary": "Get a substring of the string stored at a key", - "arguments": [ - { - "name": "key", - "type": "key" - }, - 
{ - "name": "start", - "type": "integer" - }, - { - "name": "end", - "type": "integer" - } - ], - "since": "1.3.4", - "group": "string" - }, - "GETSET": { - "summary": "Set the string value of a key and return its old value", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "value", - "type": "string" - } - ], - "since": "0.091", - "group": "string" - }, - "HDEL": { - "summary": "Delete one or more hash fields", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "field", - "type": "string", - "multiple": true - } - ], - "since": "1.3.10", - "group": "hash" - }, - "HEXISTS": { - "summary": "Determine if a hash field exists", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "field", - "type": "string" - } - ], - "since": "1.3.10", - "group": "hash" - }, - "HGET": { - "summary": "Get the value of a hash field", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "field", - "type": "string" - } - ], - "since": "1.3.10", - "group": "hash" - }, - "HGETALL": { - "summary": "Get all the fields and values in a hash", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "1.3.10", - "group": "hash" - }, - "HINCRBY": { - "summary": "Increment the integer value of a hash field by the given number", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "field", - "type": "string" - }, - { - "name": "increment", - "type": "integer" - } - ], - "since": "1.3.10", - "group": "hash" - }, - "HKEYS": { - "summary": "Get all the fields in a hash", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "1.3.10", - "group": "hash" - }, - "HLEN": { - "summary": "Get the number of fields in a hash", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "1.3.10", - "group": "hash" - }, - "HMGET": { - "summary": "Get the values of all the given hash fields", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "field", - "type": "string", - "multiple": true - } - ], - "since": "1.3.10", - "group": "hash" - }, - "HMSET": { - "summary": "Set multiple hash fields to multiple values", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": ["field", "value"], - "type": ["string", "string"], - "multiple": true - } - ], - "since": "1.3.8", - "group": "hash" - }, - "HSET": { - "summary": "Set the string value of a hash field", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "field", - "type": "string" - }, - { - "name": "value", - "type": "string" - } - ], - "since": "1.3.10", - "group": "hash" - }, - "HSETNX": { - "summary": "Set the value of a hash field, only if the field does not exist", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "field", - "type": "string" - }, - { - "name": "value", - "type": "string" - } - ], - "since": "1.3.8", - "group": "hash" - }, - "HVALS": { - "summary": "Get all the values in a hash", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "1.3.10", - "group": "hash" - }, - "INCR": { - "summary": "Increment the integer value of a key by one", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "0.07", - "group": "string" - }, - "INCRBY": { - "summary": "Increment the integer value of a key by the given number", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "increment", - "type": "integer" - } - ], - "since": "0.07", - "group": "string" - }, - "INFO": { - "summary": "Get 
information and statistics about the server", - "since": "0.07", - "group": "server" - }, - "KEYS": { - "summary": "Find all keys matching the given pattern", - "arguments": [ - { - "name": "pattern", - "type": "pattern" - } - ], - "since": "0.07", - "group": "generic" - }, - "LASTSAVE": { - "summary": "Get the UNIX time stamp of the last successful save to disk", - "since": "0.07", - "group": "server" - }, - "LINDEX": { - "summary": "Get an element from a list by its index", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "index", - "type": "integer" - } - ], - "since": "0.07", - "group": "list" - }, - "LINSERT": { - "summary": "Insert an element before or after another element in a list", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "where", - "type": "enum", - "enum": ["BEFORE", "AFTER"] - }, - { - "name": "pivot", - "type": "string" - }, - { - "name": "value", - "type": "string" - } - ], - "since": "2.1.1", - "group": "list" - }, - "LLEN": { - "summary": "Get the length of a list", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "0.07", - "group": "list" - }, - "LPOP": { - "summary": "Remove and get the first element in a list", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "0.07", - "group": "list" - }, - "LPUSH": { - "summary": "Prepend one or multiple values to a list", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "value", - "type": "string", - "multiple": true - } - ], - "since": "0.07", - "group": "list" - }, - "LPUSHX": { - "summary": "Prepend a value to a list, only if the list exists", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "value", - "type": "string" - } - ], - "since": "2.1.1", - "group": "list" - }, - "LRANGE": { - "summary": "Get a range of elements from a list", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "start", - "type": "integer" - }, - { - "name": "stop", - "type": "integer" - } - ], - "since": "0.07", - "group": "list" - }, - "LREM": { - "summary": "Remove elements from a list", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "count", - "type": "integer" - }, - { - "name": "value", - "type": "string" - } - ], - "since": "0.07", - "group": "list" - }, - "LSET": { - "summary": "Set the value of an element in a list by its index", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "index", - "type": "integer" - }, - { - "name": "value", - "type": "string" - } - ], - "since": "0.07", - "group": "list" - }, - "LTRIM": { - "summary": "Trim a list to the specified range", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "start", - "type": "integer" - }, - { - "name": "stop", - "type": "integer" - } - ], - "since": "0.07", - "group": "list" - }, - "MGET": { - "summary": "Get the values of all the given keys", - "arguments": [ - { - "name": "key", - "type": "key", - "multiple": true - } - ], - "since": "0.07", - "group": "string" - }, - "MONITOR": { - "summary": "Listen for all requests received by the server in real time", - "since": "0.07", - "group": "server" - }, - "MOVE": { - "summary": "Move a key to another database", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "db", - "type": "integer" - } - ], - "since": "0.07", - "group": "generic" - }, - "MSET": { - "summary": "Set multiple keys to multiple values", - "arguments": [ - { - "name": ["key", "value"], - "type": ["key", 
"string"], - "multiple": true - } - ], - "since": "1.001", - "group": "string" - }, - "MSETNX": { - "summary": "Set multiple keys to multiple values, only if none of the keys exist", - "arguments": [ - { - "name": ["key", "value"], - "type": ["key", "string"], - "multiple": true - } - ], - "since": "1.001", - "group": "string" - }, - "MULTI": { - "summary": "Mark the start of a transaction block", - "since": "1.1.95", - "group": "transactions" - }, - "OBJECT": { - "summary": "Inspect the internals of Redis objects", - "since": "2.2.3", - "group": "generic", - "arguments": [ - { - "name": "subcommand", - "type": "string" - }, - { - "name": "arguments", - "type": "string", - "optional": true, - "multiple": true - } - ] - }, - "PERSIST": { - "summary": "Remove the expiration from a key", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "2.1.2", - "group": "generic" - }, - "PING": { - "summary": "Ping the server", - "since": "0.07", - "group": "connection" - }, - "PSUBSCRIBE": { - "summary": "Listen for messages published to channels matching the given patterns", - "arguments": [ - { - "name": ["pattern"], - "type": ["pattern"], - "multiple": true - } - ], - "since": "1.3.8", - "group": "pubsub" - }, - "PUBLISH": { - "summary": "Post a message to a channel", - "arguments": [ - { - "name": "channel", - "type": "string" - }, - { - "name": "message", - "type": "string" - } - ], - "since": "1.3.8", - "group": "pubsub" - }, - "PUNSUBSCRIBE": { - "summary": "Stop listening for messages posted to channels matching the given patterns", - "arguments": [ - { - "name": "pattern", - "type": "pattern", - "optional": true, - "multiple": true - } - ], - "since": "1.3.8", - "group": "pubsub" - }, - "QUIT": { - "summary": "Close the connection", - "since": "0.07", - "group": "connection" - }, - "RANDOMKEY": { - "summary": "Return a random key from the keyspace", - "since": "0.07", - "group": "generic" - }, - "RENAME": { - "summary": "Rename a key", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "newkey", - "type": "key" - } - ], - "since": "0.07", - "group": "generic" - }, - "RENAMENX": { - "summary": "Rename a key, only if the new key does not exist", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "newkey", - "type": "key" - } - ], - "since": "0.07", - "group": "generic" - }, - "RPOP": { - "summary": "Remove and get the last element in a list", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "0.07", - "group": "list" - }, - "RPOPLPUSH": { - "summary": "Remove the last element in a list, append it to another list and return it", - "arguments": [ - { - "name": "source", - "type": "key" - }, - { - "name": "destination", - "type": "key" - } - ], - "since": "1.1", - "group": "list" - }, - "RPUSH": { - "summary": "Append one or multiple values to a list", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "value", - "type": "string", - "multiple": true - } - ], - "since": "0.07", - "group": "list" - }, - "RPUSHX": { - "summary": "Append a value to a list, only if the list exists", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "value", - "type": "string" - } - ], - "since": "2.1.1", - "group": "list" - }, - "SADD": { - "summary": "Add one or more members to a set", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "member", - "type": "string", - "multiple": true - } - ], - "since": "0.07", - "group": "set" - }, - "SAVE": { - "summary": 
"Synchronously save the dataset to disk", - "since": "0.07", - "group": "server" - }, - "SCARD": { - "summary": "Get the number of members in a set", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "0.07", - "group": "set" - }, - "SDIFF": { - "summary": "Subtract multiple sets", - "arguments": [ - { - "name": "key", - "type": "key", - "multiple": true - } - ], - "since": "0.100", - "group": "set" - }, - "SDIFFSTORE": { - "summary": "Subtract multiple sets and store the resulting set in a key", - "arguments": [ - { - "name": "destination", - "type": "key" - }, - { - "name": "key", - "type": "key", - "multiple": true - } - ], - "since": "0.100", - "group": "set" - }, - "SELECT": { - "summary": "Change the selected database for the current connection", - "arguments": [ - { - "name": "index", - "type": "integer" - } - ], - "since": "0.07", - "group": "connection" - }, - "SET": { - "summary": "Set the string value of a key", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "value", - "type": "string" - } - ], - "since": "0.07", - "group": "string" - }, - "SETBIT": { - "summary": "Sets or clears the bit at offset in the string value stored at key", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "offset", - "type": "integer" - }, - { - "name": "value", - "type": "string" - } - ], - "since": "2.1.8", - "group": "string" - }, - "SETEX": { - "summary": "Set the value and expiration of a key", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "seconds", - "type": "integer" - }, - { - "name": "value", - "type": "string" - } - ], - "since": "1.3.10", - "group": "string" - }, - "SETNX": { - "summary": "Set the value of a key, only if the key does not exist", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "value", - "type": "string" - } - ], - "since": "0.07", - "group": "string" - }, - "SETRANGE": { - "summary": "Overwrite part of a string at key starting at the specified offset", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "offset", - "type": "integer" - }, - { - "name": "value", - "type": "string" - } - ], - "since": "2.1.8", - "group": "string" - }, - "SHUTDOWN": { - "summary": "Synchronously save the dataset to disk and then shut down the server", - "since": "0.07", - "group": "server" - }, - "SINTER": { - "summary": "Intersect multiple sets", - "arguments": [ - { - "name": "key", - "type": "key", - "multiple": true - } - ], - "since": "0.07", - "group": "set" - }, - "SINTERSTORE": { - "summary": "Intersect multiple sets and store the resulting set in a key", - "arguments": [ - { - "name": "destination", - "type": "key" - }, - { - "name": "key", - "type": "key", - "multiple": true - } - ], - "since": "0.07", - "group": "set" - }, - "SISMEMBER": { - "summary": "Determine if a given value is a member of a set", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "member", - "type": "string" - } - ], - "since": "0.07", - "group": "set" - }, - "SLAVEOF": { - "summary": "Make the server a slave of another instance, or promote it as master", - "arguments": [ - { - "name": "host", - "type": "string" - }, - { - "name": "port", - "type": "string" - } - ], - "since": "0.100", - "group": "server" - }, - "SLOWLOG": { - "summary": "Manages the Redis slow queries log", - "arguments": [ - { - "name": "subcommand", - "type": "string" - }, - { - "name": "argument", - "type": "string", - "optional": true - } - ], - "since": "2.2.12", - "group": 
"server" - }, - "SMEMBERS": { - "summary": "Get all the members in a set", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "0.07", - "group": "set" - }, - "SMOVE": { - "summary": "Move a member from one set to another", - "arguments": [ - { - "name": "source", - "type": "key" - }, - { - "name": "destination", - "type": "key" - }, - { - "name": "member", - "type": "string" - } - ], - "since": "0.091", - "group": "set" - }, - "SORT": { - "summary": "Sort the elements in a list, set or sorted set", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "command": "BY", - "name": "pattern", - "type": "pattern", - "optional": true - }, - { - "command": "LIMIT", - "name": ["offset", "count"], - "type": ["integer", "integer"], - "optional": true - }, - { - "command": "GET", - "name": "pattern", - "type": "string", - "optional": true, - "multiple": true - }, - { - "name": "order", - "type": "enum", - "enum": ["ASC", "DESC"], - "optional": true - }, - { - "name": "sorting", - "type": "enum", - "enum": ["ALPHA"], - "optional": true - }, - { - "command": "STORE", - "name": "destination", - "type": "key", - "optional": true - } - ], - "since": "0.07", - "group": "generic" - }, - "SPOP": { - "summary": "Remove and return a random member from a set", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "0.101", - "group": "set" - }, - "SRANDMEMBER": { - "summary": "Get a random member from a set", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "1.001", - "group": "set" - }, - "SREM": { - "summary": "Remove one or more members from a set", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "member", - "type": "string", - "multiple": true - } - ], - "since": "0.07", - "group": "set" - }, - "STRLEN": { - "summary": "Get the length of the value stored in a key", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "2.1.2", - "group": "string" - }, - "SUBSCRIBE": { - "summary": "Listen for messages published to the given channels", - "arguments": [ - { - "name": ["channel"], - "type": ["string"], - "multiple": true - } - ], - "since": "1.3.8", - "group": "pubsub" - }, - "SUNION": { - "summary": "Add multiple sets", - "arguments": [ - { - "name": "key", - "type": "key", - "multiple": true - } - ], - "since": "0.091", - "group": "set" - }, - "SUNIONSTORE": { - "summary": "Add multiple sets and store the resulting set in a key", - "arguments": [ - { - "name": "destination", - "type": "key" - }, - { - "name": "key", - "type": "key", - "multiple": true - } - ], - "since": "0.091", - "group": "set" - }, - "SYNC": { - "summary": "Internal command used for replication", - "since": "0.07", - "group": "server" - }, - "TTL": { - "summary": "Get the time to live for a key", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "0.100", - "group": "generic" - }, - "TYPE": { - "summary": "Determine the type stored at key", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "0.07", - "group": "generic" - }, - "UNSUBSCRIBE": { - "summary": "Stop listening for messages posted to the given channels", - "arguments": [ - { - "name": "channel", - "type": "string", - "optional": true, - "multiple": true - } - ], - "since": "1.3.8", - "group": "pubsub" - }, - "UNWATCH": { - "summary": "Forget about all watched keys", - "since": "2.1.0", - "group": "transactions" - }, - "WATCH": { - "summary": "Watch the given keys to determine execution of the MULTI/EXEC block", - 
"arguments": [ - { - "name": "key", - "type": "key", - "multiple": true - } - ], - "since": "2.1.0", - "group": "transactions" - }, - "ZADD": { - "summary": "Add one or more members to a sorted set, or update its score if it already exists", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "score", - "type": "double" - }, - { - "name": "member", - "type": "string" - }, - { - "name": "score", - "type": "double", - "optional": true - }, - { - "name": "member", - "type": "string", - "optional": true - } - ], - "since": "1.1", - "group": "sorted_set" - }, - "ZCARD": { - "summary": "Get the number of members in a sorted set", - "arguments": [ - { - "name": "key", - "type": "key" - } - ], - "since": "1.1", - "group": "sorted_set" - }, - "ZCOUNT": { - "summary": "Count the members in a sorted set with scores within the given values", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "min", - "type": "double" - }, - { - "name": "max", - "type": "double" - } - ], - "since": "1.3.3", - "group": "sorted_set" - }, - "ZINCRBY": { - "summary": "Increment the score of a member in a sorted set", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "increment", - "type": "integer" - }, - { - "name": "member", - "type": "string" - } - ], - "since": "1.1", - "group": "sorted_set" - }, - "ZINTERSTORE": { - "summary": "Intersect multiple sorted sets and store the resulting sorted set in a new key", - "arguments": [ - { - "name": "destination", - "type": "key" - }, - { - "name": "numkeys", - "type": "integer" - }, - { - "name": "key", - "type": "key", - "multiple": true - }, - { - "command": "WEIGHTS", - "name": "weight", - "type": "integer", - "variadic": true, - "optional": true - }, - { - "command": "AGGREGATE", - "name": "aggregate", - "type": "enum", - "enum": ["SUM", "MIN", "MAX"], - "optional": true - } - ], - "since": "1.3.10", - "group": "sorted_set" - }, - "ZRANGE": { - "summary": "Return a range of members in a sorted set, by index", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "start", - "type": "integer" - }, - { - "name": "stop", - "type": "integer" - }, - { - "name": "withscores", - "type": "enum", - "enum": ["WITHSCORES"], - "optional": true - } - ], - "since": "1.1", - "group": "sorted_set" - }, - "ZRANGEBYSCORE": { - "summary": "Return a range of members in a sorted set, by score", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "min", - "type": "double" - }, - { - "name": "max", - "type": "double" - }, - { - "name": "withscores", - "type": "enum", - "enum": ["WITHSCORES"], - "optional": true - }, - { - "command": "LIMIT", - "name": ["offset", "count"], - "type": ["integer", "integer"], - "optional": true - } - ], - "since": "1.050", - "group": "sorted_set" - }, - "ZRANK": { - "summary": "Determine the index of a member in a sorted set", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "member", - "type": "string" - } - ], - "since": "1.3.4", - "group": "sorted_set" - }, - "ZREM": { - "summary": "Remove one or more members from a sorted set", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "member", - "type": "string", - "multiple": true - } - ], - "since": "1.1", - "group": "sorted_set" - }, - "ZREMRANGEBYRANK": { - "summary": "Remove all members in a sorted set within the given indexes", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "start", - "type": "integer" - }, - { - "name": "stop", - "type": 
"integer" - } - ], - "since": "1.3.4", - "group": "sorted_set" - }, - "ZREMRANGEBYSCORE": { - "summary": "Remove all members in a sorted set within the given scores", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "min", - "type": "double" - }, - { - "name": "max", - "type": "double" - } - ], - "since": "1.1", - "group": "sorted_set" - }, - "ZREVRANGE": { - "summary": "Return a range of members in a sorted set, by index, with scores ordered from high to low", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "start", - "type": "integer" - }, - { - "name": "stop", - "type": "integer" - }, - { - "name": "withscores", - "type": "enum", - "enum": ["WITHSCORES"], - "optional": true - } - ], - "since": "1.1", - "group": "sorted_set" - }, - "ZREVRANGEBYSCORE": { - "summary": "Return a range of members in a sorted set, by score, with scores ordered from high to low", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "max", - "type": "double" - }, - { - "name": "min", - "type": "double" - }, - { - "name": "withscores", - "type": "enum", - "enum": ["WITHSCORES"], - "optional": true - }, - { - "command": "LIMIT", - "name": ["offset", "count"], - "type": ["integer", "integer"], - "optional": true - } - ], - "since": "2.1.6", - "group": "sorted_set" - }, - "ZREVRANK": { - "summary": "Determine the index of a member in a sorted set, with scores ordered from high to low", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "member", - "type": "string" - } - ], - "since": "1.3.4", - "group": "sorted_set" - }, - "ZSCORE": { - "summary": "Get the score associated with the given member in a sorted set", - "arguments": [ - { - "name": "key", - "type": "key" - }, - { - "name": "member", - "type": "string" - } - ], - "since": "1.1", - "group": "sorted_set" - }, - "ZUNIONSTORE": { - "summary": "Add multiple sorted sets and store the resulting sorted set in a new key", - "arguments": [ - { - "name": "destination", - "type": "key" - }, - { - "name": "numkeys", - "type": "integer" - }, - { - "name": "key", - "type": "key", - "multiple": true - }, - { - "command": "WEIGHTS", - "name": "weight", - "type": "integer", - "variadic": true, - "optional": true - }, - { - "command": "AGGREGATE", - "name": "aggregate", - "type": "enum", - "enum": ["SUM", "MIN", "MAX"], - "optional": true - } - ], - "since": "1.3.10", - "group": "sorted_set" - }, - "EVAL": { - "summary": "Execute a Lua script server side", - "arguments": [ - { - "name": "script", - "type": "string" - }, - { - "name": "numkeys", - "type": "integer" - }, - { - "name": "key", - "type": "key", - "multiple": true - }, - { - "name": "arg", - "type": "string", - "multiple": true - } - ], - "since": "2.6.0", - "group": "generic" - } + "ACL": { + "summary": "A container for Access List Control commands.", + "since": "6.0.0", + "group": "server", + "complexity": "Depends on subcommand.", + "acl_categories": [ + "@slow" + ], + "arity": -2 + }, + "ACL CAT": { + "summary": "Lists the ACL categories, or the commands inside a category.", + "since": "6.0.0", + "group": "server", + "complexity": "O(1) since the categories and commands are a fixed set.", + "acl_categories": [ + "@slow" + ], + "arity": -2, + "arguments": [ + { + "name": "category", + "type": "string", + "display_text": "category", + "optional": true + } + ], + "command_flags": [ + "noscript", + "loading", + "stale" + ] + }, + "ACL DELUSER": { + "summary": "Deletes ACL users, and terminates their connections.", + 
"since": "6.0.0", + "group": "server", + "complexity": "O(1) amortized time considering the typical user.", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -3, + "arguments": [ + { + "name": "username", + "type": "string", + "display_text": "username", + "multiple": true + } + ], + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ], + "hints": [ + "request_policy:all_nodes", + "response_policy:all_succeeded" + ] + }, + "ACL DRYRUN": { + "summary": "Simulates the execution of a command by a user, without executing the command.", + "since": "7.0.0", + "group": "server", + "complexity": "O(1).", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -4, + "arguments": [ + { + "name": "username", + "type": "string", + "display_text": "username" + }, + { + "name": "command", + "type": "string", + "display_text": "command" + }, + { + "name": "arg", + "type": "string", + "display_text": "arg", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ] + }, + "ACL GENPASS": { + "summary": "Generates a pseudorandom, secure password that can be used to identify ACL users.", + "since": "6.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@slow" + ], + "arity": -2, + "arguments": [ + { + "name": "bits", + "type": "integer", + "display_text": "bits", + "optional": true + } + ], + "command_flags": [ + "noscript", + "loading", + "stale" + ] + }, + "ACL GETUSER": { + "summary": "Lists the ACL rules of a user.", + "since": "6.0.0", + "group": "server", + "complexity": "O(N). Where N is the number of password, command and pattern rules that the user has.", + "history": [ + [ + "6.2.0", + "Added Pub/Sub channel patterns." + ], + [ + "7.0.0", + "Added selectors and changed the format of key and channel patterns from a list to their rule representation." + ] + ], + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 3, + "arguments": [ + { + "name": "username", + "type": "string", + "display_text": "username" + } + ], + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ] + }, + "ACL HELP": { + "summary": "Returns helpful text about the different subcommands.", + "since": "6.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "command_flags": [ + "loading", + "stale" + ] + }, + "ACL LIST": { + "summary": "Dumps the effective rules in ACL file format.", + "since": "6.0.0", + "group": "server", + "complexity": "O(N). Where N is the number of configured users.", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 2, + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ] + }, + "ACL LOAD": { + "summary": "Reloads the rules from the configured ACL file.", + "since": "6.0.0", + "group": "server", + "complexity": "O(N). Where N is the number of configured users.", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 2, + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ] + }, + "ACL LOG": { + "summary": "Lists recent security events generated due to ACL rules.", + "since": "6.0.0", + "group": "server", + "complexity": "O(N) with N being the number of entries shown.", + "history": [ + [ + "7.2.0", + "Added entry ID, timestamp created, and timestamp last updated." 
+ ] + ], + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -2, + "arguments": [ + { + "name": "operation", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "count", + "type": "integer", + "display_text": "count" + }, + { + "name": "reset", + "type": "pure-token", + "display_text": "reset", + "token": "RESET" + } + ] + } + ], + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ] + }, + "ACL SAVE": { + "summary": "Saves the effective ACL rules in the configured ACL file.", + "since": "6.0.0", + "group": "server", + "complexity": "O(N). Where N is the number of configured users.", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 2, + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ], + "hints": [ + "request_policy:all_nodes", + "response_policy:all_succeeded" + ] + }, + "ACL SETUSER": { + "summary": "Creates and modifies an ACL user and its rules.", + "since": "6.0.0", + "group": "server", + "complexity": "O(N). Where N is the number of rules provided.", + "history": [ + [ + "6.2.0", + "Added Pub/Sub channel patterns." + ], + [ + "7.0.0", + "Added selectors and key based permissions." + ] + ], + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -3, + "arguments": [ + { + "name": "username", + "type": "string", + "display_text": "username" + }, + { + "name": "rule", + "type": "string", + "display_text": "rule", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ], + "hints": [ + "request_policy:all_nodes", + "response_policy:all_succeeded" + ] + }, + "ACL USERS": { + "summary": "Lists all ACL users.", + "since": "6.0.0", + "group": "server", + "complexity": "O(N). Where N is the number of configured users.", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 2, + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ] + }, + "ACL WHOAMI": { + "summary": "Returns the authenticated username of the current connection.", + "since": "6.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "command_flags": [ + "noscript", + "loading", + "stale" + ] + }, + "APPEND": { + "summary": "Appends a string to the value of a key. Creates the key if it doesn't exist.", + "since": "2.0.0", + "group": "string", + "complexity": "O(1). 
The amortized time complexity is O(1) assuming the appended value is small and the already present value is of any size, since the dynamic string library used by Redis will double the free space available on every reallocation.", + "acl_categories": [ + "@write", + "@string", + "@fast" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "value", + "type": "string", + "display_text": "value" + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "ASKING": { + "summary": "Signals that a cluster client is following an -ASK redirect.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(1)", + "acl_categories": [ + "@fast", + "@connection" + ], + "arity": 1, + "command_flags": [ + "fast" + ] + }, + "AUTH": { + "summary": "Authenticates the connection.", + "since": "1.0.0", + "group": "connection", + "complexity": "O(N) where N is the number of passwords defined for the user", + "history": [ + [ + "6.0.0", + "Added ACL style (username and password)." + ] + ], + "acl_categories": [ + "@fast", + "@connection" + ], + "arity": -2, + "arguments": [ + { + "name": "username", + "type": "string", + "display_text": "username", + "since": "6.0.0", + "optional": true + }, + { + "name": "password", + "type": "string", + "display_text": "password" + } + ], + "command_flags": [ + "noscript", + "loading", + "stale", + "fast", + "no_auth", + "allow_busy" + ] + }, + "BGREWRITEAOF": { + "summary": "Asynchronously rewrites the append-only file to disk.", + "since": "1.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 1, + "command_flags": [ + "admin", + "noscript", + "no_async_loading" + ] + }, + "BGSAVE": { + "summary": "Asynchronously saves the database(s) to disk.", + "since": "1.0.0", + "group": "server", + "complexity": "O(1)", + "history": [ + [ + "3.2.2", + "Added the `SCHEDULE` option." + ] + ], + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -1, + "arguments": [ + { + "name": "schedule", + "type": "pure-token", + "display_text": "schedule", + "token": "SCHEDULE", + "since": "3.2.2", + "optional": true + } + ], + "command_flags": [ + "admin", + "noscript", + "no_async_loading" + ] + }, + "BITCOUNT": { + "summary": "Counts the number of set bits (population counting) in a string.", + "since": "2.6.0", + "group": "bitmap", + "complexity": "O(N)", + "history": [ + [ + "7.0.0", + "Added the `BYTE|BIT` option." 
+ ] + ], + "acl_categories": [ + "@read", + "@bitmap", + "@slow" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "range", + "type": "block", + "optional": true, + "arguments": [ + { + "name": "start", + "type": "integer", + "display_text": "start" + }, + { + "name": "end", + "type": "integer", + "display_text": "end" + }, + { + "name": "unit", + "type": "oneof", + "since": "7.0.0", + "optional": true, + "arguments": [ + { + "name": "byte", + "type": "pure-token", + "display_text": "byte", + "token": "BYTE" + }, + { + "name": "bit", + "type": "pure-token", + "display_text": "bit", + "token": "BIT" + } + ] + } + ] + } + ], + "command_flags": [ + "readonly" + ] + }, + "BITFIELD": { + "summary": "Performs arbitrary bitfield integer operations on strings.", + "since": "3.2.0", + "group": "bitmap", + "complexity": "O(1) for each subcommand specified", + "acl_categories": [ + "@write", + "@bitmap", + "@slow" + ], + "arity": -2, + "key_specs": [ + { + "notes": "This command allows both access and modification of the key", + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "update": true, + "variable_flags": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "operation", + "type": "oneof", + "optional": true, + "multiple": true, + "arguments": [ + { + "name": "get-block", + "type": "block", + "token": "GET", + "arguments": [ + { + "name": "encoding", + "type": "string", + "display_text": "encoding" + }, + { + "name": "offset", + "type": "integer", + "display_text": "offset" + } + ] + }, + { + "name": "write", + "type": "block", + "arguments": [ + { + "name": "overflow-block", + "type": "oneof", + "token": "OVERFLOW", + "optional": true, + "arguments": [ + { + "name": "wrap", + "type": "pure-token", + "display_text": "wrap", + "token": "WRAP" + }, + { + "name": "sat", + "type": "pure-token", + "display_text": "sat", + "token": "SAT" + }, + { + "name": "fail", + "type": "pure-token", + "display_text": "fail", + "token": "FAIL" + } + ] + }, + { + "name": "write-operation", + "type": "oneof", + "arguments": [ + { + "name": "set-block", + "type": "block", + "token": "SET", + "arguments": [ + { + "name": "encoding", + "type": "string", + "display_text": "encoding" + }, + { + "name": "offset", + "type": "integer", + "display_text": "offset" + }, + { + "name": "value", + "type": "integer", + "display_text": "value" + } + ] + }, + { + "name": "incrby-block", + "type": "block", + "token": "INCRBY", + "arguments": [ + { + "name": "encoding", + "type": "string", + "display_text": "encoding" + }, + { + "name": "offset", + "type": "integer", + "display_text": "offset" + }, + { + "name": "increment", + "type": "integer", + "display_text": "increment" + } + ] + } + ] + } + ] + } + ] + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "BITFIELD_RO": { + "summary": "Performs arbitrary read-only bitfield integer operations on strings.", + "since": "6.0.0", + "group": "bitmap", + "complexity": "O(1) for each subcommand specified", + 
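BITFIELD, defined just above, strings together GET/SET/INCRBY sub-operations over one key treated as an array of arbitrary-width integers, with OVERFLOW controlling WRAP/SAT/FAIL semantics for the writes that follow it. A small illustrative sketch (the key name and offsets are made up), again via redis-py's generic call path:

    import redis

    r = redis.Redis()

    # Two unsigned 8-bit counters packed into one string at bit offsets 0 and 8.
    # OVERFLOW SAT clamps at 255 instead of wrapping around.
    replies = r.execute_command(
        "BITFIELD", "counters",
        "OVERFLOW", "SAT",
        "INCRBY", "u8", "0", "200",
        "INCRBY", "u8", "0", "100",   # 200 + 100 saturates to 255
        "GET", "u8", "8",
    )
    print(replies)  # one integer reply per sub-operation, e.g. [200, 255, 0]

Each sub-operation is O(1), which is why the complexity above is stated per subcommand rather than per call.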
"acl_categories": [ + "@read", + "@bitmap", + "@fast" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "get-block", + "type": "block", + "token": "GET", + "optional": true, + "multiple": true, + "multiple_token": true, + "arguments": [ + { + "name": "encoding", + "type": "string", + "display_text": "encoding" + }, + { + "name": "offset", + "type": "integer", + "display_text": "offset" + } + ] + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "BITOP": { + "summary": "Performs bitwise operations on multiple strings, and stores the result.", + "since": "2.6.0", + "group": "bitmap", + "complexity": "O(N)", + "acl_categories": [ + "@write", + "@bitmap", + "@slow" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 3 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "operation", + "type": "oneof", + "arguments": [ + { + "name": "and", + "type": "pure-token", + "display_text": "and", + "token": "AND" + }, + { + "name": "or", + "type": "pure-token", + "display_text": "or", + "token": "OR" + }, + { + "name": "xor", + "type": "pure-token", + "display_text": "xor", + "token": "XOR" + }, + { + "name": "not", + "type": "pure-token", + "display_text": "not", + "token": "NOT" + } + ] + }, + { + "name": "destkey", + "type": "key", + "display_text": "destkey", + "key_spec_index": 0 + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 1, + "multiple": true + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "BITPOS": { + "summary": "Finds the first set (1) or clear (0) bit in a string.", + "since": "2.8.7", + "group": "bitmap", + "complexity": "O(N)", + "history": [ + [ + "7.0.0", + "Added the `BYTE|BIT` option." 
+ ] + ], + "acl_categories": [ + "@read", + "@bitmap", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "bit", + "type": "integer", + "display_text": "bit" + }, + { + "name": "range", + "type": "block", + "optional": true, + "arguments": [ + { + "name": "start", + "type": "integer", + "display_text": "start" + }, + { + "name": "end-unit-block", + "type": "block", + "optional": true, + "arguments": [ + { + "name": "end", + "type": "integer", + "display_text": "end" + }, + { + "name": "unit", + "type": "oneof", + "since": "7.0.0", + "optional": true, + "arguments": [ + { + "name": "byte", + "type": "pure-token", + "display_text": "byte", + "token": "BYTE" + }, + { + "name": "bit", + "type": "pure-token", + "display_text": "bit", + "token": "BIT" + } + ] + } + ] + } + ] + } + ], + "command_flags": [ + "readonly" + ] + }, + "BLMOVE": { + "summary": "Pops an element from a list, pushes it to another list and returns it. Blocks until an element is available otherwise. Deletes the list if the last element was moved.", + "since": "6.2.0", + "group": "list", + "complexity": "O(1)", + "acl_categories": [ + "@write", + "@list", + "@slow", + "@blocking" + ], + "arity": 6, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "source", + "type": "key", + "display_text": "source", + "key_spec_index": 0 + }, + { + "name": "destination", + "type": "key", + "display_text": "destination", + "key_spec_index": 1 + }, + { + "name": "wherefrom", + "type": "oneof", + "arguments": [ + { + "name": "left", + "type": "pure-token", + "display_text": "left", + "token": "LEFT" + }, + { + "name": "right", + "type": "pure-token", + "display_text": "right", + "token": "RIGHT" + } + ] + }, + { + "name": "whereto", + "type": "oneof", + "arguments": [ + { + "name": "left", + "type": "pure-token", + "display_text": "left", + "token": "LEFT" + }, + { + "name": "right", + "type": "pure-token", + "display_text": "right", + "token": "RIGHT" + } + ] + }, + { + "name": "timeout", + "type": "double", + "display_text": "timeout" + } + ], + "command_flags": [ + "write", + "denyoom", + "blocking" + ] + }, + "BLMPOP": { + "summary": "Pops the first element from one of multiple lists. Blocks until an element is available otherwise. 
Deletes the list if the last element was popped.", + "since": "7.0.0", + "group": "list", + "complexity": "O(N+M) where N is the number of provided keys and M is the number of elements returned.", + "acl_categories": [ + "@write", + "@list", + "@slow", + "@blocking" + ], + "arity": -5, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "keynum", + "spec": { + "keynumidx": 0, + "firstkey": 1, + "keystep": 1 + } + }, + "RW": true, + "access": true, + "delete": true + } + ], + "arguments": [ + { + "name": "timeout", + "type": "double", + "display_text": "timeout" + }, + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + }, + { + "name": "where", + "type": "oneof", + "arguments": [ + { + "name": "left", + "type": "pure-token", + "display_text": "left", + "token": "LEFT" + }, + { + "name": "right", + "type": "pure-token", + "display_text": "right", + "token": "RIGHT" + } + ] + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT", + "optional": true + } + ], + "command_flags": [ + "write", + "blocking", + "movablekeys" + ] + }, + "BLPOP": { + "summary": "Removes and returns the first element in a list. Blocks until an element is available otherwise. Deletes the list if the last element was popped.", + "since": "2.0.0", + "group": "list", + "complexity": "O(N) where N is the number of provided keys.", + "history": [ + [ + "6.0.0", + "`timeout` is interpreted as a double instead of an integer." + ] + ], + "acl_categories": [ + "@write", + "@list", + "@slow", + "@blocking" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -2, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + }, + { + "name": "timeout", + "type": "double", + "display_text": "timeout" + } + ], + "command_flags": [ + "write", + "blocking" + ] + }, + "BRPOP": { + "summary": "Removes and returns the last element in a list. Blocks until an element is available otherwise. Deletes the list if the last element was popped.", + "since": "2.0.0", + "group": "list", + "complexity": "O(N) where N is the number of provided keys.", + "history": [ + [ + "6.0.0", + "`timeout` is interpreted as a double instead of an integer." + ] + ], + "acl_categories": [ + "@write", + "@list", + "@slow", + "@blocking" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -2, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + }, + { + "name": "timeout", + "type": "double", + "display_text": "timeout" + } + ], + "command_flags": [ + "write", + "blocking" + ] + }, + "BRPOPLPUSH": { + "summary": "Pops an element from a list, pushes it to another list and returns it. Blocks until an element is available otherwise. 
Deletes the list if the last element was popped.", + "since": "2.2.0", + "group": "list", + "complexity": "O(1)", + "deprecated_since": "6.2.0", + "replaced_by": "`BLMOVE` with the `RIGHT` and `LEFT` arguments", + "history": [ + [ + "6.0.0", + "`timeout` is interpreted as a double instead of an integer." + ] + ], + "acl_categories": [ + "@write", + "@list", + "@slow", + "@blocking" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "source", + "type": "key", + "display_text": "source", + "key_spec_index": 0 + }, + { + "name": "destination", + "type": "key", + "display_text": "destination", + "key_spec_index": 1 + }, + { + "name": "timeout", + "type": "double", + "display_text": "timeout" + } + ], + "command_flags": [ + "write", + "denyoom", + "blocking" + ], + "doc_flags": [ + "deprecated" + ] + }, + "BZMPOP": { + "summary": "Removes and returns a member by score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped.", + "since": "7.0.0", + "group": "sorted-set", + "complexity": "O(K) + O(M*log(N)) where K is the number of provided keys, N being the number of elements in the sorted set, and M being the number of elements popped.", + "acl_categories": [ + "@write", + "@sortedset", + "@slow", + "@blocking" + ], + "arity": -5, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "keynum", + "spec": { + "keynumidx": 0, + "firstkey": 1, + "keystep": 1 + } + }, + "RW": true, + "access": true, + "delete": true + } + ], + "arguments": [ + { + "name": "timeout", + "type": "double", + "display_text": "timeout" + }, + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + }, + { + "name": "where", + "type": "oneof", + "arguments": [ + { + "name": "min", + "type": "pure-token", + "display_text": "min", + "token": "MIN" + }, + { + "name": "max", + "type": "pure-token", + "display_text": "max", + "token": "MAX" + } + ] + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT", + "optional": true + } + ], + "command_flags": [ + "write", + "blocking", + "movablekeys" + ] + }, + "BZPOPMAX": { + "summary": "Removes and returns the member with the highest score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped.", + "since": "5.0.0", + "group": "sorted-set", + "complexity": "O(log(N)) with N being the number of elements in the sorted set.", + "history": [ + [ + "6.0.0", + "`timeout` is interpreted as a double instead of an integer." 
+ ] + ], + "acl_categories": [ + "@write", + "@sortedset", + "@fast", + "@blocking" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -2, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + }, + { + "name": "timeout", + "type": "double", + "display_text": "timeout" + } + ], + "command_flags": [ + "write", + "blocking", + "fast" + ] + }, + "BZPOPMIN": { + "summary": "Removes and returns the member with the lowest score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped.", + "since": "5.0.0", + "group": "sorted-set", + "complexity": "O(log(N)) with N being the number of elements in the sorted set.", + "history": [ + [ + "6.0.0", + "`timeout` is interpreted as a double instead of an integer." + ] + ], + "acl_categories": [ + "@write", + "@sortedset", + "@fast", + "@blocking" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -2, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + }, + { + "name": "timeout", + "type": "double", + "display_text": "timeout" + } + ], + "command_flags": [ + "write", + "blocking", + "fast" + ] + }, + "CLIENT": { + "summary": "A container for client connection commands.", + "since": "2.4.0", + "group": "connection", + "complexity": "Depends on subcommand.", + "acl_categories": [ + "@slow" + ], + "arity": -2 + }, + "CLIENT CACHING": { + "summary": "Instructs the server whether to track the keys in the next request.", + "since": "6.0.0", + "group": "connection", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": 3, + "arguments": [ + { + "name": "mode", + "type": "oneof", + "arguments": [ + { + "name": "yes", + "type": "pure-token", + "display_text": "yes", + "token": "YES" + }, + { + "name": "no", + "type": "pure-token", + "display_text": "no", + "token": "NO" + } + ] + } + ], + "command_flags": [ + "noscript", + "loading", + "stale" + ] + }, + "CLIENT GETNAME": { + "summary": "Returns the name of the connection.", + "since": "2.6.9", + "group": "connection", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": 2, + "command_flags": [ + "noscript", + "loading", + "stale" + ] + }, + "CLIENT GETREDIR": { + "summary": "Returns the client ID to which the connection's tracking notifications are redirected.", + "since": "6.0.0", + "group": "connection", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": 2, + "command_flags": [ + "noscript", + "loading", + "stale" + ] + }, + "CLIENT HELP": { + "summary": "Returns helpful text about the different subcommands.", + "since": "5.0.0", + "group": "connection", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": 2, + "command_flags": [ + "loading", + "stale" + ] + }, + "CLIENT ID": { + "summary": "Returns the unique client ID of the connection.", + "since": "5.0.0", + "group": "connection", + "complexity": "O(1)", + "acl_categories": 
[ + "@slow", + "@connection" + ], + "arity": 2, + "command_flags": [ + "noscript", + "loading", + "stale" + ] + }, + "CLIENT INFO": { + "summary": "Returns information about the connection.", + "since": "6.2.0", + "group": "connection", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": 2, + "command_flags": [ + "noscript", + "loading", + "stale" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "CLIENT KILL": { + "summary": "Terminates open connections.", + "since": "2.4.0", + "group": "connection", + "complexity": "O(N) where N is the number of client connections", + "history": [ + [ + "2.8.12", + "Added new filter format." + ], + [ + "2.8.12", + "`ID` option." + ], + [ + "3.2.0", + "Added `master` type in for `TYPE` option." + ], + [ + "5.0.0", + "Replaced `slave` `TYPE` with `replica`. `slave` still supported for backward compatibility." + ], + [ + "6.2.0", + "`LADDR` option." + ] + ], + "acl_categories": [ + "@admin", + "@slow", + "@dangerous", + "@connection" + ], + "arity": -3, + "arguments": [ + { + "name": "filter", + "type": "oneof", + "arguments": [ + { + "name": "old-format", + "type": "string", + "display_text": "ip:port", + "deprecated_since": "2.8.12" + }, + { + "name": "new-format", + "type": "oneof", + "multiple": true, + "arguments": [ + { + "name": "client-id", + "type": "integer", + "display_text": "client-id", + "token": "ID", + "since": "2.8.12", + "optional": true + }, + { + "name": "client-type", + "type": "oneof", + "token": "TYPE", + "since": "2.8.12", + "optional": true, + "arguments": [ + { + "name": "normal", + "type": "pure-token", + "display_text": "normal", + "token": "NORMAL" + }, + { + "name": "master", + "type": "pure-token", + "display_text": "master", + "token": "MASTER", + "since": "3.2.0" + }, + { + "name": "slave", + "type": "pure-token", + "display_text": "slave", + "token": "SLAVE" + }, + { + "name": "replica", + "type": "pure-token", + "display_text": "replica", + "token": "REPLICA", + "since": "5.0.0" + }, + { + "name": "pubsub", + "type": "pure-token", + "display_text": "pubsub", + "token": "PUBSUB" + } + ] + }, + { + "name": "username", + "type": "string", + "display_text": "username", + "token": "USER", + "optional": true + }, + { + "name": "addr", + "type": "string", + "display_text": "ip:port", + "token": "ADDR", + "optional": true + }, + { + "name": "laddr", + "type": "string", + "display_text": "ip:port", + "token": "LADDR", + "since": "6.2.0", + "optional": true + }, + { + "name": "skipme", + "type": "oneof", + "token": "SKIPME", + "optional": true, + "arguments": [ + { + "name": "yes", + "type": "pure-token", + "display_text": "yes", + "token": "YES" + }, + { + "name": "no", + "type": "pure-token", + "display_text": "no", + "token": "NO" + } + ] + } + ] + } + ] + } + ], + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ] + }, + "CLIENT LIST": { + "summary": "Lists open connections.", + "since": "2.4.0", + "group": "connection", + "complexity": "O(N) where N is the number of client connections", + "history": [ + [ + "2.8.12", + "Added unique client `id` field." + ], + [ + "5.0.0", + "Added optional `TYPE` filter." + ], + [ + "6.0.0", + "Added `user` field." + ], + [ + "6.2.0", + "Added `argv-mem`, `tot-mem`, `laddr` and `redir` fields and the optional `ID` filter." + ], + [ + "7.0.0", + "Added `resp`, `multi-mem`, `rbs` and `rbp` fields." + ], + [ + "7.0.3", + "Added `ssub` field." 
+ ] + ], + "acl_categories": [ + "@admin", + "@slow", + "@dangerous", + "@connection" + ], + "arity": -2, + "arguments": [ + { + "name": "client-type", + "type": "oneof", + "token": "TYPE", + "since": "5.0.0", + "optional": true, + "arguments": [ + { + "name": "normal", + "type": "pure-token", + "display_text": "normal", + "token": "NORMAL" + }, + { + "name": "master", + "type": "pure-token", + "display_text": "master", + "token": "MASTER" + }, + { + "name": "replica", + "type": "pure-token", + "display_text": "replica", + "token": "REPLICA" + }, + { + "name": "pubsub", + "type": "pure-token", + "display_text": "pubsub", + "token": "PUBSUB" + } + ] + }, + { + "name": "client-id", + "type": "integer", + "display_text": "client-id", + "token": "ID", + "since": "6.2.0", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "CLIENT NO-EVICT": { + "summary": "Sets the client eviction mode of the connection.", + "since": "7.0.0", + "group": "connection", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous", + "@connection" + ], + "arity": 3, + "arguments": [ + { + "name": "enabled", + "type": "oneof", + "arguments": [ + { + "name": "on", + "type": "pure-token", + "display_text": "on", + "token": "ON" + }, + { + "name": "off", + "type": "pure-token", + "display_text": "off", + "token": "OFF" + } + ] + } + ], + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ] + }, + "CLIENT NO-TOUCH": { + "summary": "Controls whether commands sent by the client affect the LRU/LFU of accessed keys.", + "since": "7.2.0", + "group": "connection", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": 3, + "arguments": [ + { + "name": "enabled", + "type": "oneof", + "arguments": [ + { + "name": "on", + "type": "pure-token", + "display_text": "on", + "token": "ON" + }, + { + "name": "off", + "type": "pure-token", + "display_text": "off", + "token": "OFF" + } + ] + } + ], + "command_flags": [ + "noscript", + "loading", + "stale" + ] + }, + "CLIENT PAUSE": { + "summary": "Suspends commands processing.", + "since": "3.0.0", + "group": "connection", + "complexity": "O(1)", + "history": [ + [ + "6.2.0", + "`CLIENT PAUSE WRITE` mode added along with the `mode` option." 
+ ] + ], + "acl_categories": [ + "@admin", + "@slow", + "@dangerous", + "@connection" + ], + "arity": -3, + "arguments": [ + { + "name": "timeout", + "type": "integer", + "display_text": "timeout" + }, + { + "name": "mode", + "type": "oneof", + "since": "6.2.0", + "optional": true, + "arguments": [ + { + "name": "write", + "type": "pure-token", + "display_text": "write", + "token": "WRITE" + }, + { + "name": "all", + "type": "pure-token", + "display_text": "all", + "token": "ALL" + } + ] + } + ], + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ] + }, + "CLIENT REPLY": { + "summary": "Instructs the server whether to reply to commands.", + "since": "3.2.0", + "group": "connection", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": 3, + "arguments": [ + { + "name": "action", + "type": "oneof", + "arguments": [ + { + "name": "on", + "type": "pure-token", + "display_text": "on", + "token": "ON" + }, + { + "name": "off", + "type": "pure-token", + "display_text": "off", + "token": "OFF" + }, + { + "name": "skip", + "type": "pure-token", + "display_text": "skip", + "token": "SKIP" + } + ] + } + ], + "command_flags": [ + "noscript", + "loading", + "stale" + ] + }, + "CLIENT SETINFO": { + "summary": "Sets information specific to the client or connection.", + "since": "7.2.0", + "group": "connection", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": 4, + "arguments": [ + { + "name": "attr", + "type": "oneof", + "arguments": [ + { + "name": "libname", + "type": "string", + "display_text": "libname", + "token": "LIB-NAME" + }, + { + "name": "libver", + "type": "string", + "display_text": "libver", + "token": "LIB-VER" + } + ] + } + ], + "command_flags": [ + "noscript", + "loading", + "stale" + ], + "hints": [ + "request_policy:all_nodes", + "response_policy:all_succeeded" + ] + }, + "CLIENT SETNAME": { + "summary": "Sets the connection name.", + "since": "2.6.9", + "group": "connection", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": 3, + "arguments": [ + { + "name": "connection-name", + "type": "string", + "display_text": "connection-name" + } + ], + "command_flags": [ + "noscript", + "loading", + "stale" + ], + "hints": [ + "request_policy:all_nodes", + "response_policy:all_succeeded" + ] + }, + "CLIENT TRACKING": { + "summary": "Controls server-assisted client-side caching for the connection.", + "since": "6.0.0", + "group": "connection", + "complexity": "O(1). 
Some options may introduce additional complexity.", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": -3, + "arguments": [ + { + "name": "status", + "type": "oneof", + "arguments": [ + { + "name": "on", + "type": "pure-token", + "display_text": "on", + "token": "ON" + }, + { + "name": "off", + "type": "pure-token", + "display_text": "off", + "token": "OFF" + } + ] + }, + { + "name": "client-id", + "type": "integer", + "display_text": "client-id", + "token": "REDIRECT", + "optional": true + }, + { + "name": "prefix", + "type": "string", + "display_text": "prefix", + "token": "PREFIX", + "optional": true, + "multiple": true, + "multiple_token": true + }, + { + "name": "bcast", + "type": "pure-token", + "display_text": "bcast", + "token": "BCAST", + "optional": true + }, + { + "name": "optin", + "type": "pure-token", + "display_text": "optin", + "token": "OPTIN", + "optional": true + }, + { + "name": "optout", + "type": "pure-token", + "display_text": "optout", + "token": "OPTOUT", + "optional": true + }, + { + "name": "noloop", + "type": "pure-token", + "display_text": "noloop", + "token": "NOLOOP", + "optional": true + } + ], + "command_flags": [ + "noscript", + "loading", + "stale" + ] + }, + "CLIENT TRACKINGINFO": { + "summary": "Returns information about server-assisted client-side caching for the connection.", + "since": "6.2.0", + "group": "connection", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": 2, + "command_flags": [ + "noscript", + "loading", + "stale" + ] + }, + "CLIENT UNBLOCK": { + "summary": "Unblocks a client blocked by a blocking command from a different connection.", + "since": "5.0.0", + "group": "connection", + "complexity": "O(log N) where N is the number of client connections", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous", + "@connection" + ], + "arity": -3, + "arguments": [ + { + "name": "client-id", + "type": "integer", + "display_text": "client-id" + }, + { + "name": "unblock-type", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "timeout", + "type": "pure-token", + "display_text": "timeout", + "token": "TIMEOUT" + }, + { + "name": "error", + "type": "pure-token", + "display_text": "error", + "token": "ERROR" + } + ] + } + ], + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ] + }, + "CLIENT UNPAUSE": { + "summary": "Resumes processing commands from paused clients.", + "since": "6.2.0", + "group": "connection", + "complexity": "O(N) where N is the number of paused clients", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous", + "@connection" + ], + "arity": 2, + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ] + }, + "CLUSTER": { + "summary": "A container for Redis Cluster commands.", + "since": "3.0.0", + "group": "cluster", + "complexity": "Depends on subcommand.", + "acl_categories": [ + "@slow" + ], + "arity": -2 + }, + "CLUSTER ADDSLOTS": { + "summary": "Assigns new hash slots to a node.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(N) where N is the total number of hash slot arguments", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -3, + "arguments": [ + { + "name": "slot", + "type": "integer", + "display_text": "slot", + "multiple": true + } + ], + "command_flags": [ + "admin", + "stale", + "no_async_loading" + ] + }, + "CLUSTER ADDSLOTSRANGE": { + "summary": "Assigns new hash slot ranges to a node.", + "since": "7.0.0", + "group": "cluster", + "complexity": "O(N) where 
N is the total number of the slots between the start slot and end slot arguments.", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -4, + "arguments": [ + { + "name": "range", + "type": "block", + "multiple": true, + "arguments": [ + { + "name": "start-slot", + "type": "integer", + "display_text": "start-slot" + }, + { + "name": "end-slot", + "type": "integer", + "display_text": "end-slot" + } + ] + } + ], + "command_flags": [ + "admin", + "stale", + "no_async_loading" + ] + }, + "CLUSTER BUMPEPOCH": { + "summary": "Advances the cluster config epoch.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 2, + "command_flags": [ + "admin", + "stale", + "no_async_loading" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "CLUSTER COUNT-FAILURE-REPORTS": { + "summary": "Returns the number of active failure reports for a node.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(N) where N is the number of failure reports", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 3, + "arguments": [ + { + "name": "node-id", + "type": "string", + "display_text": "node-id" + } + ], + "command_flags": [ + "admin", + "stale" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "CLUSTER COUNTKEYSINSLOT": { + "summary": "Returns the number of keys in a hash slot.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(1)", + "acl_categories": [ + "@slow" + ], + "arity": 3, + "arguments": [ + { + "name": "slot", + "type": "integer", + "display_text": "slot" + } + ], + "command_flags": [ + "stale" + ] + }, + "CLUSTER DELSLOTS": { + "summary": "Sets hash slots as unbound for a node.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(N) where N is the total number of hash slot arguments", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -3, + "arguments": [ + { + "name": "slot", + "type": "integer", + "display_text": "slot", + "multiple": true + } + ], + "command_flags": [ + "admin", + "stale", + "no_async_loading" + ] + }, + "CLUSTER DELSLOTSRANGE": { + "summary": "Sets hash slot ranges as unbound for a node.", + "since": "7.0.0", + "group": "cluster", + "complexity": "O(N) where N is the total number of the slots between the start slot and end slot arguments.", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -4, + "arguments": [ + { + "name": "range", + "type": "block", + "multiple": true, + "arguments": [ + { + "name": "start-slot", + "type": "integer", + "display_text": "start-slot" + }, + { + "name": "end-slot", + "type": "integer", + "display_text": "end-slot" + } + ] + } + ], + "command_flags": [ + "admin", + "stale", + "no_async_loading" + ] + }, + "CLUSTER FAILOVER": { + "summary": "Forces a replica to perform a manual failover of its master.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -2, + "arguments": [ + { + "name": "options", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "force", + "type": "pure-token", + "display_text": "force", + "token": "FORCE" + }, + { + "name": "takeover", + "type": "pure-token", + "display_text": "takeover", + "token": "TAKEOVER" + } + ] + } + ], + "command_flags": [ + "admin", + "stale", + "no_async_loading" + ] + }, + "CLUSTER FLUSHSLOTS": { + "summary": "Deletes all slot information from a node.", + 
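The slot bookkeeping commands in this stretch (ADDSLOTS/DELSLOTS and their RANGE variants) all operate on the 16384 hash slots that Redis Cluster shards keys across; CLUSTER COUNTKEYSINSLOT above and CLUSTER KEYSLOT just below are the easiest way to inspect that mapping from a client. A brief sketch against a hypothetical cluster-enabled node on port 7000:

    import redis

    r = redis.Redis(port=7000)  # assumed cluster-mode node

    slot = r.execute_command("CLUSTER", "KEYSLOT", "user:1000")
    print(f"user:1000 hashes to slot {slot}")
    # Number of keys this node currently holds in that slot:
    print(r.execute_command("CLUSTER", "COUNTKEYSINSLOT", slot))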
"since": "3.0.0", + "group": "cluster", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 2, + "command_flags": [ + "admin", + "stale", + "no_async_loading" + ] + }, + "CLUSTER FORGET": { + "summary": "Removes a node from the nodes table.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 3, + "arguments": [ + { + "name": "node-id", + "type": "string", + "display_text": "node-id" + } + ], + "command_flags": [ + "admin", + "stale", + "no_async_loading" + ] + }, + "CLUSTER GETKEYSINSLOT": { + "summary": "Returns the key names in a hash slot.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(N) where N is the number of requested keys", + "acl_categories": [ + "@slow" + ], + "arity": 4, + "arguments": [ + { + "name": "slot", + "type": "integer", + "display_text": "slot" + }, + { + "name": "count", + "type": "integer", + "display_text": "count" + } + ], + "command_flags": [ + "stale" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "CLUSTER HELP": { + "summary": "Returns helpful text about the different subcommands.", + "since": "5.0.0", + "group": "cluster", + "complexity": "O(1)", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "command_flags": [ + "loading", + "stale" + ] + }, + "CLUSTER INFO": { + "summary": "Returns information about the state of a node.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(1)", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "command_flags": [ + "stale" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "CLUSTER KEYSLOT": { + "summary": "Returns the hash slot for a key.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(N) where N is the number of bytes in the key", + "acl_categories": [ + "@slow" + ], + "arity": 3, + "arguments": [ + { + "name": "key", + "type": "string", + "display_text": "key" + } + ], + "command_flags": [ + "stale" + ] + }, + "CLUSTER LINKS": { + "summary": "Returns a list of all TCP links to and from peer nodes.", + "since": "7.0.0", + "group": "cluster", + "complexity": "O(N) where N is the total number of Cluster nodes", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "command_flags": [ + "stale" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "CLUSTER MEET": { + "summary": "Forces a node to handshake with another node.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(1)", + "history": [ + [ + "4.0.0", + "Added the optional `cluster_bus_port` argument." 
+ ] + ], + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -4, + "arguments": [ + { + "name": "ip", + "type": "string", + "display_text": "ip" + }, + { + "name": "port", + "type": "integer", + "display_text": "port" + }, + { + "name": "cluster-bus-port", + "type": "integer", + "display_text": "cluster-bus-port", + "since": "4.0.0", + "optional": true + } + ], + "command_flags": [ + "admin", + "stale", + "no_async_loading" + ] + }, + "CLUSTER MYID": { + "summary": "Returns the ID of a node.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(1)", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "command_flags": [ + "stale" + ] + }, + "CLUSTER MYSHARDID": { + "summary": "Returns the shard ID of a node.", + "since": "7.2.0", + "group": "cluster", + "complexity": "O(1)", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "command_flags": [ + "stale" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "CLUSTER NODES": { + "summary": "Returns the cluster configuration for a node.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(N) where N is the total number of Cluster nodes", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "command_flags": [ + "stale" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "CLUSTER REPLICAS": { + "summary": "Lists the replica nodes of a master node.", + "since": "5.0.0", + "group": "cluster", + "complexity": "O(N) where N is the number of replicas.", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 3, + "arguments": [ + { + "name": "node-id", + "type": "string", + "display_text": "node-id" + } + ], + "command_flags": [ + "admin", + "stale" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "CLUSTER REPLICATE": { + "summary": "Configures a node as a replica of a master node.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 3, + "arguments": [ + { + "name": "node-id", + "type": "string", + "display_text": "node-id" + } + ], + "command_flags": [ + "admin", + "stale", + "no_async_loading" + ] + }, + "CLUSTER RESET": { + "summary": "Resets a node.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(N) where N is the number of known nodes. 
The command may execute a FLUSHALL as a side effect.", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -2, + "arguments": [ + { + "name": "reset-type", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "hard", + "type": "pure-token", + "display_text": "hard", + "token": "HARD" + }, + { + "name": "soft", + "type": "pure-token", + "display_text": "soft", + "token": "SOFT" + } + ] + } + ], + "command_flags": [ + "admin", + "noscript", + "stale" + ] + }, + "CLUSTER SAVECONFIG": { + "summary": "Forces a node to save the cluster configuration to disk.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 2, + "command_flags": [ + "admin", + "stale", + "no_async_loading" + ] + }, + "CLUSTER SET-CONFIG-EPOCH": { + "summary": "Sets the configuration epoch for a new node.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 3, + "arguments": [ + { + "name": "config-epoch", + "type": "integer", + "display_text": "config-epoch" + } + ], + "command_flags": [ + "admin", + "stale", + "no_async_loading" + ] + }, + "CLUSTER SETSLOT": { + "summary": "Binds a hash slot to a node.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -4, + "arguments": [ + { + "name": "slot", + "type": "integer", + "display_text": "slot" + }, + { + "name": "subcommand", + "type": "oneof", + "arguments": [ + { + "name": "importing", + "type": "string", + "display_text": "node-id", + "token": "IMPORTING" + }, + { + "name": "migrating", + "type": "string", + "display_text": "node-id", + "token": "MIGRATING" + }, + { + "name": "node", + "type": "string", + "display_text": "node-id", + "token": "NODE" + }, + { + "name": "stable", + "type": "pure-token", + "display_text": "stable", + "token": "STABLE" + } + ] + } + ], + "command_flags": [ + "admin", + "stale", + "no_async_loading" + ] + }, + "CLUSTER SHARDS": { + "summary": "Returns the mapping of cluster slots to shards.", + "since": "7.0.0", + "group": "cluster", + "complexity": "O(N) where N is the total number of cluster nodes", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "command_flags": [ + "loading", + "stale" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "CLUSTER SLAVES": { + "summary": "Lists the replica nodes of a master node.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(N) where N is the number of replicas.", + "deprecated_since": "5.0.0", + "replaced_by": "`CLUSTER REPLICAS`", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 3, + "arguments": [ + { + "name": "node-id", + "type": "string", + "display_text": "node-id" + } + ], + "command_flags": [ + "admin", + "stale" + ], + "doc_flags": [ + "deprecated" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "CLUSTER SLOTS": { + "summary": "Returns the mapping of cluster slots to nodes.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(N) where N is the total number of Cluster nodes", + "deprecated_since": "7.0.0", + "replaced_by": "`CLUSTER SHARDS`", + "history": [ + [ + "4.0.0", + "Added node IDs." + ], + [ + "7.0.0", + "Added additional networking metadata field." 
+ ] + ], + "acl_categories": [ + "@slow" + ], + "arity": 2, + "command_flags": [ + "loading", + "stale" + ], + "doc_flags": [ + "deprecated" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "COMMAND": { + "summary": "Returns detailed information about all commands.", + "since": "2.8.13", + "group": "server", + "complexity": "O(N) where N is the total number of Redis commands", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": -1, + "command_flags": [ + "loading", + "stale" + ], + "hints": [ + "nondeterministic_output_order" + ] + }, + "COMMAND COUNT": { + "summary": "Returns a count of commands.", + "since": "2.8.13", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": 2, + "command_flags": [ + "loading", + "stale" + ] + }, + "COMMAND DOCS": { + "summary": "Returns documentary information about one, multiple or all commands.", + "since": "7.0.0", + "group": "server", + "complexity": "O(N) where N is the number of commands to look up", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": -2, + "arguments": [ + { + "name": "command-name", + "type": "string", + "display_text": "command-name", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "loading", + "stale" + ], + "hints": [ + "nondeterministic_output_order" + ] + }, + "COMMAND GETKEYS": { + "summary": "Extracts the key names from an arbitrary command.", + "since": "2.8.13", + "group": "server", + "complexity": "O(N) where N is the number of arguments to the command", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": -3, + "arguments": [ + { + "name": "command", + "type": "string", + "display_text": "command" + }, + { + "name": "arg", + "type": "string", + "display_text": "arg", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "loading", + "stale" + ] + }, + "COMMAND GETKEYSANDFLAGS": { + "summary": "Extracts the key names and access flags for an arbitrary command.", + "since": "7.0.0", + "group": "server", + "complexity": "O(N) where N is the number of arguments to the command", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": -3, + "arguments": [ + { + "name": "command", + "type": "string", + "display_text": "command" + }, + { + "name": "arg", + "type": "string", + "display_text": "arg", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "loading", + "stale" + ] + }, + "COMMAND HELP": { + "summary": "Returns helpful text about the different subcommands.", + "since": "5.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": 2, + "command_flags": [ + "loading", + "stale" + ] + }, + "COMMAND INFO": { + "summary": "Returns information about one, multiple or all commands.", + "since": "2.8.13", + "group": "server", + "complexity": "O(N) where N is the number of commands to look up", + "history": [ + [ + "7.0.0", + "Allowed to be called with no argument to get info on all commands." 
+ ] + ], + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": -2, + "arguments": [ + { + "name": "command-name", + "type": "string", + "display_text": "command-name", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "loading", + "stale" + ], + "hints": [ + "nondeterministic_output_order" + ] + }, + "COMMAND LIST": { + "summary": "Returns a list of command names.", + "since": "7.0.0", + "group": "server", + "complexity": "O(N) where N is the total number of Redis commands", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": -2, + "arguments": [ + { + "name": "filterby", + "type": "oneof", + "token": "FILTERBY", + "optional": true, + "arguments": [ + { + "name": "module-name", + "type": "string", + "display_text": "module-name", + "token": "MODULE" + }, + { + "name": "category", + "type": "string", + "display_text": "category", + "token": "ACLCAT" + }, + { + "name": "pattern", + "type": "pattern", + "display_text": "pattern", + "token": "PATTERN" + } + ] + } + ], + "command_flags": [ + "loading", + "stale" + ], + "hints": [ + "nondeterministic_output_order" + ] + }, + "CONFIG": { + "summary": "A container for server configuration commands.", + "since": "2.0.0", + "group": "server", + "complexity": "Depends on subcommand.", + "acl_categories": [ + "@slow" + ], + "arity": -2 + }, + "CONFIG GET": { + "summary": "Returns the effective values of configuration parameters.", + "since": "2.0.0", + "group": "server", + "complexity": "O(N) when N is the number of configuration parameters provided", + "history": [ + [ + "7.0.0", + "Added the ability to pass multiple pattern parameters in one call" + ] + ], + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -3, + "arguments": [ + { + "name": "parameter", + "type": "string", + "display_text": "parameter", + "multiple": true + } + ], + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ] + }, + "CONFIG HELP": { + "summary": "Returns helpful text about the different subcommands.", + "since": "5.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "command_flags": [ + "loading", + "stale" + ] + }, + "CONFIG RESETSTAT": { + "summary": "Resets the server's statistics.", + "since": "2.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 2, + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ], + "hints": [ + "request_policy:all_nodes", + "response_policy:all_succeeded" + ] + }, + "CONFIG REWRITE": { + "summary": "Persists the effective configuration to file.", + "since": "2.8.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 2, + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ], + "hints": [ + "request_policy:all_nodes", + "response_policy:all_succeeded" + ] + }, + "CONFIG SET": { + "summary": "Sets configuration parameters in-flight.", + "since": "2.0.0", + "group": "server", + "complexity": "O(N) when N is the number of configuration parameters provided", + "history": [ + [ + "7.0.0", + "Added the ability to set multiple parameters in one call." 
+ ] + ], + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -4, + "arguments": [ + { + "name": "data", + "type": "block", + "multiple": true, + "arguments": [ + { + "name": "parameter", + "type": "string", + "display_text": "parameter" + }, + { + "name": "value", + "type": "string", + "display_text": "value" + } + ] + } + ], + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ], + "hints": [ + "request_policy:all_nodes", + "response_policy:all_succeeded" + ] + }, + "COPY": { + "summary": "Copies the value of a key to a new key.", + "since": "6.2.0", + "group": "generic", + "complexity": "O(N) worst case for collections, where N is the number of nested items. O(1) for string values.", + "acl_categories": [ + "@keyspace", + "@write", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + } + ], + "arguments": [ + { + "name": "source", + "type": "key", + "display_text": "source", + "key_spec_index": 0 + }, + { + "name": "destination", + "type": "key", + "display_text": "destination", + "key_spec_index": 1 + }, + { + "name": "destination-db", + "type": "integer", + "display_text": "destination-db", + "token": "DB", + "optional": true + }, + { + "name": "replace", + "type": "pure-token", + "display_text": "replace", + "token": "REPLACE", + "optional": true + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "DBSIZE": { + "summary": "Returns the number of keys in the database.", + "since": "1.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@keyspace", + "@read", + "@fast" + ], + "arity": 1, + "command_flags": [ + "readonly", + "fast" + ], + "hints": [ + "request_policy:all_shards", + "response_policy:agg_sum" + ] + }, + "DEBUG": { + "summary": "A container for debugging commands.", + "since": "1.0.0", + "group": "server", + "complexity": "Depends on subcommand.", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -2, + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ], + "doc_flags": [ + "syscmd" + ] + }, + "DECR": { + "summary": "Decrements the integer value of a key by one. Uses 0 as initial value if the key doesn't exist.", + "since": "1.0.0", + "group": "string", + "complexity": "O(1)", + "acl_categories": [ + "@write", + "@string", + "@fast" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "DECRBY": { + "summary": "Decrements a number from the integer value of a key. 
Uses 0 as initial value if the key doesn't exist.", + "since": "1.0.0", + "group": "string", + "complexity": "O(1)", + "acl_categories": [ + "@write", + "@string", + "@fast" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "decrement", + "type": "integer", + "display_text": "decrement" + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "DEL": { + "summary": "Deletes one or more keys.", + "since": "1.0.0", + "group": "generic", + "complexity": "O(N) where N is the number of keys that will be removed. When a key to remove holds a value other than a string, the individual complexity for this key is O(M) where M is the number of elements in the list, set, sorted set or hash. Removing a single key that holds a string value is O(1).", + "acl_categories": [ + "@keyspace", + "@write", + "@slow" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "RM": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + } + ], + "command_flags": [ + "write" + ], + "hints": [ + "request_policy:multi_shard", + "response_policy:agg_sum" + ] + }, + "DISCARD": { + "summary": "Discards a transaction.", + "since": "2.0.0", + "group": "transactions", + "complexity": "O(N), when N is the number of queued commands", + "acl_categories": [ + "@fast", + "@transaction" + ], + "arity": 1, + "command_flags": [ + "noscript", + "loading", + "stale", + "fast", + "allow_busy" + ] + }, + "DUMP": { + "summary": "Returns a serialized representation of the value stored at a key.", + "since": "2.6.0", + "group": "generic", + "complexity": "O(1) to access the key and additional O(N*M) to serialize it, where N is the number of Redis objects composing the value and M their average size. 
For small string values the time complexity is thus O(1)+O(1*M) where M is small, so simply O(1).", + "acl_categories": [ + "@keyspace", + "@read", + "@slow" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "ECHO": { + "summary": "Returns the given string.", + "since": "1.0.0", + "group": "connection", + "complexity": "O(1)", + "acl_categories": [ + "@fast", + "@connection" + ], + "arity": 2, + "arguments": [ + { + "name": "message", + "type": "string", + "display_text": "message" + } + ], + "command_flags": [ + "loading", + "stale", + "fast" + ] + }, + "EVAL": { + "summary": "Executes a server-side Lua script.", + "since": "2.6.0", + "group": "scripting", + "complexity": "Depends on the script that is executed.", + "acl_categories": [ + "@slow", + "@scripting" + ], + "arity": -3, + "key_specs": [ + { + "notes": "We cannot tell how the keys will be used so we assume the worst, RW and UPDATE", + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "keynum", + "spec": { + "keynumidx": 0, + "firstkey": 1, + "keystep": 1 + } + }, + "RW": true, + "access": true, + "update": true + } + ], + "arguments": [ + { + "name": "script", + "type": "string", + "display_text": "script" + }, + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "optional": true, + "multiple": true + }, + { + "name": "arg", + "type": "string", + "display_text": "arg", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "noscript", + "stale", + "skip_monitor", + "no_mandatory_keys", + "movablekeys" + ] + }, + "EVALSHA": { + "summary": "Executes a server-side Lua script by SHA1 digest.", + "since": "2.6.0", + "group": "scripting", + "complexity": "Depends on the script that is executed.", + "acl_categories": [ + "@slow", + "@scripting" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "keynum", + "spec": { + "keynumidx": 0, + "firstkey": 1, + "keystep": 1 + } + }, + "RW": true, + "access": true, + "update": true + } + ], + "arguments": [ + { + "name": "sha1", + "type": "string", + "display_text": "sha1" + }, + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "optional": true, + "multiple": true + }, + { + "name": "arg", + "type": "string", + "display_text": "arg", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "noscript", + "stale", + "skip_monitor", + "no_mandatory_keys", + "movablekeys" + ] + }, + "EVALSHA_RO": { + "summary": "Executes a read-only server-side Lua script by SHA1 digest.", + "since": "7.0.0", + "group": "scripting", + "complexity": "Depends on the script that is executed.", + "acl_categories": [ + "@slow", + "@scripting" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "keynum", + "spec": { + "keynumidx": 0, + 
"firstkey": 1, + "keystep": 1 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "sha1", + "type": "string", + "display_text": "sha1" + }, + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "optional": true, + "multiple": true + }, + { + "name": "arg", + "type": "string", + "display_text": "arg", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "readonly", + "noscript", + "stale", + "skip_monitor", + "no_mandatory_keys", + "movablekeys" + ] + }, + "EVAL_RO": { + "summary": "Executes a read-only server-side Lua script.", + "since": "7.0.0", + "group": "scripting", + "complexity": "Depends on the script that is executed.", + "acl_categories": [ + "@slow", + "@scripting" + ], + "arity": -3, + "key_specs": [ + { + "notes": "We cannot tell how the keys will be used so we assume the worst, RO and ACCESS", + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "keynum", + "spec": { + "keynumidx": 0, + "firstkey": 1, + "keystep": 1 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "script", + "type": "string", + "display_text": "script" + }, + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "optional": true, + "multiple": true + }, + { + "name": "arg", + "type": "string", + "display_text": "arg", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "readonly", + "noscript", + "stale", + "skip_monitor", + "no_mandatory_keys", + "movablekeys" + ] + }, + "EXEC": { + "summary": "Executes all commands in a transaction.", + "since": "1.2.0", + "group": "transactions", + "complexity": "Depends on commands in the transaction", + "acl_categories": [ + "@slow", + "@transaction" + ], + "arity": 1, + "command_flags": [ + "noscript", + "loading", + "stale", + "skip_slowlog" + ] + }, + "EXISTS": { + "summary": "Determines whether one or more keys exist.", + "since": "1.0.0", + "group": "generic", + "complexity": "O(N) where N is the number of keys to check.", + "history": [ + [ + "3.0.3", + "Accepts multiple `key` arguments." + ] + ], + "acl_categories": [ + "@keyspace", + "@read", + "@fast" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + } + ], + "command_flags": [ + "readonly", + "fast" + ], + "hints": [ + "request_policy:multi_shard", + "response_policy:agg_sum" + ] + }, + "EXPIRE": { + "summary": "Sets the expiration time of a key in seconds.", + "since": "1.0.0", + "group": "generic", + "complexity": "O(1)", + "history": [ + [ + "7.0.0", + "Added options: `NX`, `XX`, `GT` and `LT`." 
+ ] + ], + "acl_categories": [ + "@keyspace", + "@write", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "seconds", + "type": "integer", + "display_text": "seconds" + }, + { + "name": "condition", + "type": "oneof", + "since": "7.0.0", + "optional": true, + "arguments": [ + { + "name": "nx", + "type": "pure-token", + "display_text": "nx", + "token": "NX" + }, + { + "name": "xx", + "type": "pure-token", + "display_text": "xx", + "token": "XX" + }, + { + "name": "gt", + "type": "pure-token", + "display_text": "gt", + "token": "GT" + }, + { + "name": "lt", + "type": "pure-token", + "display_text": "lt", + "token": "LT" + } + ] + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "EXPIREAT": { + "summary": "Sets the expiration time of a key to a Unix timestamp.", + "since": "1.2.0", + "group": "generic", + "complexity": "O(1)", + "history": [ + [ + "7.0.0", + "Added options: `NX`, `XX`, `GT` and `LT`." + ] + ], + "acl_categories": [ + "@keyspace", + "@write", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "unix-time-seconds", + "type": "unix-time", + "display_text": "unix-time-seconds" + }, + { + "name": "condition", + "type": "oneof", + "since": "7.0.0", + "optional": true, + "arguments": [ + { + "name": "nx", + "type": "pure-token", + "display_text": "nx", + "token": "NX" + }, + { + "name": "xx", + "type": "pure-token", + "display_text": "xx", + "token": "XX" + }, + { + "name": "gt", + "type": "pure-token", + "display_text": "gt", + "token": "GT" + }, + { + "name": "lt", + "type": "pure-token", + "display_text": "lt", + "token": "LT" + } + ] + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "EXPIRETIME": { + "summary": "Returns the expiration time of a key as a Unix timestamp.", + "since": "7.0.0", + "group": "generic", + "complexity": "O(1)", + "acl_categories": [ + "@keyspace", + "@read", + "@fast" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "FAILOVER": { + "summary": "Starts a coordinated failover from a server to one of its replicas.", + "since": "6.2.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -1, + "arguments": [ + { + "name": "target", + "type": "block", + "token": "TO", + "optional": true, + "arguments": [ + { + "name": "host", + "type": "string", + "display_text": "host" + }, + { + "name": "port", + "type": "integer", + "display_text": "port" + }, + { + "name": "force", + "type": "pure-token", + "display_text": "force", + "token": "FORCE", + 
"optional": true + } + ] + }, + { + "name": "abort", + "type": "pure-token", + "display_text": "abort", + "token": "ABORT", + "optional": true + }, + { + "name": "milliseconds", + "type": "integer", + "display_text": "milliseconds", + "token": "TIMEOUT", + "optional": true + } + ], + "command_flags": [ + "admin", + "noscript", + "stale" + ] + }, + "FCALL": { + "summary": "Invokes a function.", + "since": "7.0.0", + "group": "scripting", + "complexity": "Depends on the function that is executed.", + "acl_categories": [ + "@slow", + "@scripting" + ], + "arity": -3, + "key_specs": [ + { + "notes": "We cannot tell how the keys will be used so we assume the worst, RW and UPDATE", + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "keynum", + "spec": { + "keynumidx": 0, + "firstkey": 1, + "keystep": 1 + } + }, + "RW": true, + "access": true, + "update": true + } + ], + "arguments": [ + { + "name": "function", + "type": "string", + "display_text": "function" + }, + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "optional": true, + "multiple": true + }, + { + "name": "arg", + "type": "string", + "display_text": "arg", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "noscript", + "stale", + "skip_monitor", + "no_mandatory_keys", + "movablekeys" + ] + }, + "FCALL_RO": { + "summary": "Invokes a read-only function.", + "since": "7.0.0", + "group": "scripting", + "complexity": "Depends on the function that is executed.", + "acl_categories": [ + "@slow", + "@scripting" + ], + "arity": -3, + "key_specs": [ + { + "notes": "We cannot tell how the keys will be used so we assume the worst, RO and ACCESS", + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "keynum", + "spec": { + "keynumidx": 0, + "firstkey": 1, + "keystep": 1 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "function", + "type": "string", + "display_text": "function" + }, + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "optional": true, + "multiple": true + }, + { + "name": "arg", + "type": "string", + "display_text": "arg", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "readonly", + "noscript", + "stale", + "skip_monitor", + "no_mandatory_keys", + "movablekeys" + ] + }, + "FLUSHALL": { + "summary": "Removes all keys from all databases.", + "since": "1.0.0", + "group": "server", + "complexity": "O(N) where N is the total number of keys in all databases", + "history": [ + [ + "4.0.0", + "Added the `ASYNC` flushing mode modifier." + ], + [ + "6.2.0", + "Added the `SYNC` flushing mode modifier." 
+ ] + ], + "acl_categories": [ + "@keyspace", + "@write", + "@slow", + "@dangerous" + ], + "arity": -1, + "arguments": [ + { + "name": "flush-type", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "async", + "type": "pure-token", + "display_text": "async", + "token": "ASYNC", + "since": "4.0.0" + }, + { + "name": "sync", + "type": "pure-token", + "display_text": "sync", + "token": "SYNC", + "since": "6.2.0" + } + ] + } + ], + "command_flags": [ + "write" + ], + "hints": [ + "request_policy:all_shards", + "response_policy:all_succeeded" + ] + }, + "FLUSHDB": { + "summary": "Remove all keys from the current database.", + "since": "1.0.0", + "group": "server", + "complexity": "O(N) where N is the number of keys in the selected database", + "history": [ + [ + "4.0.0", + "Added the `ASYNC` flushing mode modifier." + ], + [ + "6.2.0", + "Added the `SYNC` flushing mode modifier." + ] + ], + "acl_categories": [ + "@keyspace", + "@write", + "@slow", + "@dangerous" + ], + "arity": -1, + "arguments": [ + { + "name": "flush-type", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "async", + "type": "pure-token", + "display_text": "async", + "token": "ASYNC", + "since": "4.0.0" + }, + { + "name": "sync", + "type": "pure-token", + "display_text": "sync", + "token": "SYNC", + "since": "6.2.0" + } + ] + } + ], + "command_flags": [ + "write" + ], + "hints": [ + "request_policy:all_shards", + "response_policy:all_succeeded" + ] + }, + "FUNCTION": { + "summary": "A container for function commands.", + "since": "7.0.0", + "group": "scripting", + "complexity": "Depends on subcommand.", + "acl_categories": [ + "@slow" + ], + "arity": -2 + }, + "FUNCTION DELETE": { + "summary": "Deletes a library and its functions.", + "since": "7.0.0", + "group": "scripting", + "complexity": "O(1)", + "acl_categories": [ + "@write", + "@slow", + "@scripting" + ], + "arity": 3, + "arguments": [ + { + "name": "library-name", + "type": "string", + "display_text": "library-name" + } + ], + "command_flags": [ + "write", + "noscript" + ], + "hints": [ + "request_policy:all_shards", + "response_policy:all_succeeded" + ] + }, + "FUNCTION DUMP": { + "summary": "Dumps all libraries into a serialized binary payload.", + "since": "7.0.0", + "group": "scripting", + "complexity": "O(N) where N is the number of functions", + "acl_categories": [ + "@slow", + "@scripting" + ], + "arity": 2, + "command_flags": [ + "noscript" + ] + }, + "FUNCTION FLUSH": { + "summary": "Deletes all libraries and functions.", + "since": "7.0.0", + "group": "scripting", + "complexity": "O(N) where N is the number of functions deleted", + "acl_categories": [ + "@write", + "@slow", + "@scripting" + ], + "arity": -2, + "arguments": [ + { + "name": "flush-type", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "async", + "type": "pure-token", + "display_text": "async", + "token": "ASYNC" + }, + { + "name": "sync", + "type": "pure-token", + "display_text": "sync", + "token": "SYNC" + } + ] + } + ], + "command_flags": [ + "write", + "noscript" + ], + "hints": [ + "request_policy:all_shards", + "response_policy:all_succeeded" + ] + }, + "FUNCTION HELP": { + "summary": "Returns helpful text about the different subcommands.", + "since": "7.0.0", + "group": "scripting", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@scripting" + ], + "arity": 2, + "command_flags": [ + "loading", + "stale" + ] + }, + "FUNCTION KILL": { + "summary": "Terminates a function during execution.", + "since": "7.0.0", + 
"group": "scripting", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@scripting" + ], + "arity": 2, + "command_flags": [ + "noscript", + "allow_busy" + ], + "hints": [ + "request_policy:all_shards", + "response_policy:one_succeeded" + ] + }, + "FUNCTION LIST": { + "summary": "Returns information about all libraries.", + "since": "7.0.0", + "group": "scripting", + "complexity": "O(N) where N is the number of functions", + "acl_categories": [ + "@slow", + "@scripting" + ], + "arity": -2, + "arguments": [ + { + "name": "library-name-pattern", + "type": "string", + "display_text": "library-name-pattern", + "token": "LIBRARYNAME", + "optional": true + }, + { + "name": "withcode", + "type": "pure-token", + "display_text": "withcode", + "token": "WITHCODE", + "optional": true + } + ], + "command_flags": [ + "noscript" + ], + "hints": [ + "nondeterministic_output_order" + ] + }, + "FUNCTION LOAD": { + "summary": "Creates a library.", + "since": "7.0.0", + "group": "scripting", + "complexity": "O(1) (considering compilation time is redundant)", + "acl_categories": [ + "@write", + "@slow", + "@scripting" + ], + "arity": -3, + "arguments": [ + { + "name": "replace", + "type": "pure-token", + "display_text": "replace", + "token": "REPLACE", + "optional": true + }, + { + "name": "function-code", + "type": "string", + "display_text": "function-code" + } + ], + "command_flags": [ + "write", + "denyoom", + "noscript" + ], + "hints": [ + "request_policy:all_shards", + "response_policy:all_succeeded" + ] + }, + "FUNCTION RESTORE": { + "summary": "Restores all libraries from a payload.", + "since": "7.0.0", + "group": "scripting", + "complexity": "O(N) where N is the number of functions on the payload", + "acl_categories": [ + "@write", + "@slow", + "@scripting" + ], + "arity": -3, + "arguments": [ + { + "name": "serialized-value", + "type": "string", + "display_text": "serialized-value" + }, + { + "name": "policy", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "flush", + "type": "pure-token", + "display_text": "flush", + "token": "FLUSH" + }, + { + "name": "append", + "type": "pure-token", + "display_text": "append", + "token": "APPEND" + }, + { + "name": "replace", + "type": "pure-token", + "display_text": "replace", + "token": "REPLACE" + } + ] + } + ], + "command_flags": [ + "write", + "denyoom", + "noscript" + ], + "hints": [ + "request_policy:all_shards", + "response_policy:all_succeeded" + ] + }, + "FUNCTION STATS": { + "summary": "Returns information about a function during execution.", + "since": "7.0.0", + "group": "scripting", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@scripting" + ], + "arity": 2, + "command_flags": [ + "noscript", + "allow_busy" + ], + "hints": [ + "nondeterministic_output", + "request_policy:all_shards", + "response_policy:special" + ] + }, + "GEOADD": { + "summary": "Adds one or more members to a geospatial index. The key is created if it doesn't exist.", + "since": "3.2.0", + "group": "geo", + "complexity": "O(log(N)) for each item added, where N is the number of elements in the sorted set.", + "history": [ + [ + "6.2.0", + "Added the `CH`, `NX` and `XX` options." 
+ ] + ], + "acl_categories": [ + "@write", + "@geo", + "@slow" + ], + "arity": -5, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "condition", + "type": "oneof", + "since": "6.2.0", + "optional": true, + "arguments": [ + { + "name": "nx", + "type": "pure-token", + "display_text": "nx", + "token": "NX" + }, + { + "name": "xx", + "type": "pure-token", + "display_text": "xx", + "token": "XX" + } + ] + }, + { + "name": "change", + "type": "pure-token", + "display_text": "change", + "token": "CH", + "since": "6.2.0", + "optional": true + }, + { + "name": "data", + "type": "block", + "multiple": true, + "arguments": [ + { + "name": "longitude", + "type": "double", + "display_text": "longitude" + }, + { + "name": "latitude", + "type": "double", + "display_text": "latitude" + }, + { + "name": "member", + "type": "string", + "display_text": "member" + } + ] + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "GEODIST": { + "summary": "Returns the distance between two members of a geospatial index.", + "since": "3.2.0", + "group": "geo", + "complexity": "O(1)", + "acl_categories": [ + "@read", + "@geo", + "@slow" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "member1", + "type": "string", + "display_text": "member1" + }, + { + "name": "member2", + "type": "string", + "display_text": "member2" + }, + { + "name": "unit", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "m", + "type": "pure-token", + "display_text": "m", + "token": "M" + }, + { + "name": "km", + "type": "pure-token", + "display_text": "km", + "token": "KM" + }, + { + "name": "ft", + "type": "pure-token", + "display_text": "ft", + "token": "FT" + }, + { + "name": "mi", + "type": "pure-token", + "display_text": "mi", + "token": "MI" + } + ] + } + ], + "command_flags": [ + "readonly" + ] + }, + "GEOHASH": { + "summary": "Returns members from a geospatial index as geohash strings.", + "since": "3.2.0", + "group": "geo", + "complexity": "O(1) for each member requested.", + "acl_categories": [ + "@read", + "@geo", + "@slow" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "member", + "type": "string", + "display_text": "member", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "readonly" + ] + }, + "GEOPOS": { + "summary": "Returns the longitude and latitude of members from a geospatial index.", + "since": "3.2.0", + "group": "geo", + "complexity": "O(1) for each member requested.", + "acl_categories": [ + "@read", + "@geo", + "@slow" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 
+ } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "member", + "type": "string", + "display_text": "member", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "readonly" + ] + }, + "GEORADIUS": { + "summary": "Queries a geospatial index for members within a distance from a coordinate, optionally stores the result.", + "since": "3.2.0", + "group": "geo", + "complexity": "O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index.", + "deprecated_since": "6.2.0", + "replaced_by": "`GEOSEARCH` and `GEOSEARCHSTORE` with the `BYRADIUS` argument", + "history": [ + [ + "6.2.0", + "Added the `ANY` option for `COUNT`." + ], + [ + "7.0.0", + "Added support for uppercase unit names." + ] + ], + "acl_categories": [ + "@write", + "@geo", + "@slow" + ], + "arity": -6, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + }, + { + "begin_search": { + "type": "keyword", + "spec": { + "keyword": "STORE", + "startfrom": 6 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + }, + { + "begin_search": { + "type": "keyword", + "spec": { + "keyword": "STOREDIST", + "startfrom": 6 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "longitude", + "type": "double", + "display_text": "longitude" + }, + { + "name": "latitude", + "type": "double", + "display_text": "latitude" + }, + { + "name": "radius", + "type": "double", + "display_text": "radius" + }, + { + "name": "unit", + "type": "oneof", + "arguments": [ + { + "name": "m", + "type": "pure-token", + "display_text": "m", + "token": "M" + }, + { + "name": "km", + "type": "pure-token", + "display_text": "km", + "token": "KM" + }, + { + "name": "ft", + "type": "pure-token", + "display_text": "ft", + "token": "FT" + }, + { + "name": "mi", + "type": "pure-token", + "display_text": "mi", + "token": "MI" + } + ] + }, + { + "name": "withcoord", + "type": "pure-token", + "display_text": "withcoord", + "token": "WITHCOORD", + "optional": true + }, + { + "name": "withdist", + "type": "pure-token", + "display_text": "withdist", + "token": "WITHDIST", + "optional": true + }, + { + "name": "withhash", + "type": "pure-token", + "display_text": "withhash", + "token": "WITHHASH", + "optional": true + }, + { + "name": "count-block", + "type": "block", + "optional": true, + "arguments": [ + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT" + }, + { + "name": "any", + "type": "pure-token", + "display_text": "any", + "token": "ANY", + "since": "6.2.0", + "optional": true + } + ] + }, + { + "name": "order", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "asc", + "type": "pure-token", + "display_text": "asc", + "token": "ASC" + }, + { + "name": "desc", + "type": "pure-token", + "display_text": "desc", + 
"token": "DESC" + } + ] + }, + { + "name": "store", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "storekey", + "type": "key", + "display_text": "key", + "key_spec_index": 1, + "token": "STORE" + }, + { + "name": "storedistkey", + "type": "key", + "display_text": "key", + "key_spec_index": 2, + "token": "STOREDIST" + } + ] + } + ], + "command_flags": [ + "write", + "denyoom", + "movablekeys" + ], + "doc_flags": [ + "deprecated" + ] + }, + "GEORADIUSBYMEMBER": { + "summary": "Queries a geospatial index for members within a distance from a member, optionally stores the result.", + "since": "3.2.0", + "group": "geo", + "complexity": "O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index.", + "deprecated_since": "6.2.0", + "replaced_by": "`GEOSEARCH` and `GEOSEARCHSTORE` with the `BYRADIUS` and `FROMMEMBER` arguments", + "history": [ + [ + "7.0.0", + "Added support for uppercase unit names." + ] + ], + "acl_categories": [ + "@write", + "@geo", + "@slow" + ], + "arity": -5, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + }, + { + "begin_search": { + "type": "keyword", + "spec": { + "keyword": "STORE", + "startfrom": 5 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + }, + { + "begin_search": { + "type": "keyword", + "spec": { + "keyword": "STOREDIST", + "startfrom": 5 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "member", + "type": "string", + "display_text": "member" + }, + { + "name": "radius", + "type": "double", + "display_text": "radius" + }, + { + "name": "unit", + "type": "oneof", + "arguments": [ + { + "name": "m", + "type": "pure-token", + "display_text": "m", + "token": "M" + }, + { + "name": "km", + "type": "pure-token", + "display_text": "km", + "token": "KM" + }, + { + "name": "ft", + "type": "pure-token", + "display_text": "ft", + "token": "FT" + }, + { + "name": "mi", + "type": "pure-token", + "display_text": "mi", + "token": "MI" + } + ] + }, + { + "name": "withcoord", + "type": "pure-token", + "display_text": "withcoord", + "token": "WITHCOORD", + "optional": true + }, + { + "name": "withdist", + "type": "pure-token", + "display_text": "withdist", + "token": "WITHDIST", + "optional": true + }, + { + "name": "withhash", + "type": "pure-token", + "display_text": "withhash", + "token": "WITHHASH", + "optional": true + }, + { + "name": "count-block", + "type": "block", + "optional": true, + "arguments": [ + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT" + }, + { + "name": "any", + "type": "pure-token", + "display_text": "any", + "token": "ANY", + "optional": true + } + ] + }, + { + "name": "order", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "asc", + "type": "pure-token", + "display_text": "asc", + "token": "ASC" + }, + { + "name": "desc", + "type": "pure-token", + "display_text": "desc", + "token": "DESC" + } + ] + }, + { + "name": "store", + "type": "oneof", + "optional": true, + 
"arguments": [ + { + "name": "storekey", + "type": "key", + "display_text": "key", + "key_spec_index": 1, + "token": "STORE" + }, + { + "name": "storedistkey", + "type": "key", + "display_text": "key", + "key_spec_index": 2, + "token": "STOREDIST" + } + ] + } + ], + "command_flags": [ + "write", + "denyoom", + "movablekeys" + ], + "doc_flags": [ + "deprecated" + ] + }, + "GEORADIUSBYMEMBER_RO": { + "summary": "Returns members from a geospatial index that are within a distance from a member.", + "since": "3.2.10", + "group": "geo", + "complexity": "O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index.", + "deprecated_since": "6.2.0", + "replaced_by": "`GEOSEARCH` with the `BYRADIUS` and `FROMMEMBER` arguments", + "acl_categories": [ + "@read", + "@geo", + "@slow" + ], + "arity": -5, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "member", + "type": "string", + "display_text": "member" + }, + { + "name": "radius", + "type": "double", + "display_text": "radius" + }, + { + "name": "unit", + "type": "oneof", + "arguments": [ + { + "name": "m", + "type": "pure-token", + "display_text": "m", + "token": "M" + }, + { + "name": "km", + "type": "pure-token", + "display_text": "km", + "token": "KM" + }, + { + "name": "ft", + "type": "pure-token", + "display_text": "ft", + "token": "FT" + }, + { + "name": "mi", + "type": "pure-token", + "display_text": "mi", + "token": "MI" + } + ] + }, + { + "name": "withcoord", + "type": "pure-token", + "display_text": "withcoord", + "token": "WITHCOORD", + "optional": true + }, + { + "name": "withdist", + "type": "pure-token", + "display_text": "withdist", + "token": "WITHDIST", + "optional": true + }, + { + "name": "withhash", + "type": "pure-token", + "display_text": "withhash", + "token": "WITHHASH", + "optional": true + }, + { + "name": "count-block", + "type": "block", + "optional": true, + "arguments": [ + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT" + }, + { + "name": "any", + "type": "pure-token", + "display_text": "any", + "token": "ANY", + "optional": true + } + ] + }, + { + "name": "order", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "asc", + "type": "pure-token", + "display_text": "asc", + "token": "ASC" + }, + { + "name": "desc", + "type": "pure-token", + "display_text": "desc", + "token": "DESC" + } + ] + } + ], + "command_flags": [ + "readonly" + ], + "doc_flags": [ + "deprecated" + ] + }, + "GEORADIUS_RO": { + "summary": "Returns members from a geospatial index that are within a distance from a coordinate.", + "since": "3.2.10", + "group": "geo", + "complexity": "O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index.", + "deprecated_since": "6.2.0", + "replaced_by": "`GEOSEARCH` with the `BYRADIUS` argument", + "history": [ + [ + "6.2.0", + "Added the `ANY` option for `COUNT`." 
+ ] + ], + "acl_categories": [ + "@read", + "@geo", + "@slow" + ], + "arity": -6, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "longitude", + "type": "double", + "display_text": "longitude" + }, + { + "name": "latitude", + "type": "double", + "display_text": "latitude" + }, + { + "name": "radius", + "type": "double", + "display_text": "radius" + }, + { + "name": "unit", + "type": "oneof", + "arguments": [ + { + "name": "m", + "type": "pure-token", + "display_text": "m", + "token": "M" + }, + { + "name": "km", + "type": "pure-token", + "display_text": "km", + "token": "KM" + }, + { + "name": "ft", + "type": "pure-token", + "display_text": "ft", + "token": "FT" + }, + { + "name": "mi", + "type": "pure-token", + "display_text": "mi", + "token": "MI" + } + ] + }, + { + "name": "withcoord", + "type": "pure-token", + "display_text": "withcoord", + "token": "WITHCOORD", + "optional": true + }, + { + "name": "withdist", + "type": "pure-token", + "display_text": "withdist", + "token": "WITHDIST", + "optional": true + }, + { + "name": "withhash", + "type": "pure-token", + "display_text": "withhash", + "token": "WITHHASH", + "optional": true + }, + { + "name": "count-block", + "type": "block", + "optional": true, + "arguments": [ + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT" + }, + { + "name": "any", + "type": "pure-token", + "display_text": "any", + "token": "ANY", + "since": "6.2.0", + "optional": true + } + ] + }, + { + "name": "order", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "asc", + "type": "pure-token", + "display_text": "asc", + "token": "ASC" + }, + { + "name": "desc", + "type": "pure-token", + "display_text": "desc", + "token": "DESC" + } + ] + } + ], + "command_flags": [ + "readonly" + ], + "doc_flags": [ + "deprecated" + ] + }, + "GEOSEARCH": { + "summary": "Queries a geospatial index for members inside an area of a box or a circle.", + "since": "6.2.0", + "group": "geo", + "complexity": "O(N+log(M)) where N is the number of elements in the grid-aligned bounding box area around the shape provided as the filter and M is the number of items inside the shape", + "history": [ + [ + "7.0.0", + "Added support for uppercase unit names." 
+ ] + ], + "acl_categories": [ + "@read", + "@geo", + "@slow" + ], + "arity": -7, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "from", + "type": "oneof", + "arguments": [ + { + "name": "member", + "type": "string", + "display_text": "member", + "token": "FROMMEMBER" + }, + { + "name": "fromlonlat", + "type": "block", + "token": "FROMLONLAT", + "arguments": [ + { + "name": "longitude", + "type": "double", + "display_text": "longitude" + }, + { + "name": "latitude", + "type": "double", + "display_text": "latitude" + } + ] + } + ] + }, + { + "name": "by", + "type": "oneof", + "arguments": [ + { + "name": "circle", + "type": "block", + "arguments": [ + { + "name": "radius", + "type": "double", + "display_text": "radius", + "token": "BYRADIUS" + }, + { + "name": "unit", + "type": "oneof", + "arguments": [ + { + "name": "m", + "type": "pure-token", + "display_text": "m", + "token": "M" + }, + { + "name": "km", + "type": "pure-token", + "display_text": "km", + "token": "KM" + }, + { + "name": "ft", + "type": "pure-token", + "display_text": "ft", + "token": "FT" + }, + { + "name": "mi", + "type": "pure-token", + "display_text": "mi", + "token": "MI" + } + ] + } + ] + }, + { + "name": "box", + "type": "block", + "arguments": [ + { + "name": "width", + "type": "double", + "display_text": "width", + "token": "BYBOX" + }, + { + "name": "height", + "type": "double", + "display_text": "height" + }, + { + "name": "unit", + "type": "oneof", + "arguments": [ + { + "name": "m", + "type": "pure-token", + "display_text": "m", + "token": "M" + }, + { + "name": "km", + "type": "pure-token", + "display_text": "km", + "token": "KM" + }, + { + "name": "ft", + "type": "pure-token", + "display_text": "ft", + "token": "FT" + }, + { + "name": "mi", + "type": "pure-token", + "display_text": "mi", + "token": "MI" + } + ] + } + ] + } + ] + }, + { + "name": "order", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "asc", + "type": "pure-token", + "display_text": "asc", + "token": "ASC" + }, + { + "name": "desc", + "type": "pure-token", + "display_text": "desc", + "token": "DESC" + } + ] + }, + { + "name": "count-block", + "type": "block", + "optional": true, + "arguments": [ + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT" + }, + { + "name": "any", + "type": "pure-token", + "display_text": "any", + "token": "ANY", + "optional": true + } + ] + }, + { + "name": "withcoord", + "type": "pure-token", + "display_text": "withcoord", + "token": "WITHCOORD", + "optional": true + }, + { + "name": "withdist", + "type": "pure-token", + "display_text": "withdist", + "token": "WITHDIST", + "optional": true + }, + { + "name": "withhash", + "type": "pure-token", + "display_text": "withhash", + "token": "WITHHASH", + "optional": true + } + ], + "command_flags": [ + "readonly" + ] + }, + "GEOSEARCHSTORE": { + "summary": "Queries a geospatial index for members inside an area of a box or a circle, optionally stores the result.", + "since": "6.2.0", + "group": "geo", + "complexity": "O(N+log(M)) where N is the number of elements in the grid-aligned bounding box area around the shape provided as the filter and M is the number of items inside the shape", + "history": [ + [ + 
"7.0.0", + "Added support for uppercase unit names." + ] + ], + "acl_categories": [ + "@write", + "@geo", + "@slow" + ], + "arity": -8, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "destination", + "type": "key", + "display_text": "destination", + "key_spec_index": 0 + }, + { + "name": "source", + "type": "key", + "display_text": "source", + "key_spec_index": 1 + }, + { + "name": "from", + "type": "oneof", + "arguments": [ + { + "name": "member", + "type": "string", + "display_text": "member", + "token": "FROMMEMBER" + }, + { + "name": "fromlonlat", + "type": "block", + "token": "FROMLONLAT", + "arguments": [ + { + "name": "longitude", + "type": "double", + "display_text": "longitude" + }, + { + "name": "latitude", + "type": "double", + "display_text": "latitude" + } + ] + } + ] + }, + { + "name": "by", + "type": "oneof", + "arguments": [ + { + "name": "circle", + "type": "block", + "arguments": [ + { + "name": "radius", + "type": "double", + "display_text": "radius", + "token": "BYRADIUS" + }, + { + "name": "unit", + "type": "oneof", + "arguments": [ + { + "name": "m", + "type": "pure-token", + "display_text": "m", + "token": "M" + }, + { + "name": "km", + "type": "pure-token", + "display_text": "km", + "token": "KM" + }, + { + "name": "ft", + "type": "pure-token", + "display_text": "ft", + "token": "FT" + }, + { + "name": "mi", + "type": "pure-token", + "display_text": "mi", + "token": "MI" + } + ] + } + ] + }, + { + "name": "box", + "type": "block", + "arguments": [ + { + "name": "width", + "type": "double", + "display_text": "width", + "token": "BYBOX" + }, + { + "name": "height", + "type": "double", + "display_text": "height" + }, + { + "name": "unit", + "type": "oneof", + "arguments": [ + { + "name": "m", + "type": "pure-token", + "display_text": "m", + "token": "M" + }, + { + "name": "km", + "type": "pure-token", + "display_text": "km", + "token": "KM" + }, + { + "name": "ft", + "type": "pure-token", + "display_text": "ft", + "token": "FT" + }, + { + "name": "mi", + "type": "pure-token", + "display_text": "mi", + "token": "MI" + } + ] + } + ] + } + ] + }, + { + "name": "order", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "asc", + "type": "pure-token", + "display_text": "asc", + "token": "ASC" + }, + { + "name": "desc", + "type": "pure-token", + "display_text": "desc", + "token": "DESC" + } + ] + }, + { + "name": "count-block", + "type": "block", + "optional": true, + "arguments": [ + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT" + }, + { + "name": "any", + "type": "pure-token", + "display_text": "any", + "token": "ANY", + "optional": true + } + ] + }, + { + "name": "storedist", + "type": "pure-token", + "display_text": "storedist", + "token": "STOREDIST", + "optional": true + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "GET": { + "summary": "Returns the string value of a key.", + "since": "1.0.0", + "group": "string", + "complexity": "O(1)", + "acl_categories": [ + "@read", + "@string", + "@fast" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": 
"index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "GETBIT": { + "summary": "Returns a bit value by offset.", + "since": "2.2.0", + "group": "bitmap", + "complexity": "O(1)", + "acl_categories": [ + "@read", + "@bitmap", + "@fast" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "offset", + "type": "integer", + "display_text": "offset" + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "GETDEL": { + "summary": "Returns the string value of a key after deleting the key.", + "since": "6.2.0", + "group": "string", + "complexity": "O(1)", + "acl_categories": [ + "@write", + "@string", + "@fast" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "GETEX": { + "summary": "Returns the string value of a key after setting its expiration time.", + "since": "6.2.0", + "group": "string", + "complexity": "O(1)", + "acl_categories": [ + "@write", + "@string", + "@fast" + ], + "arity": -2, + "key_specs": [ + { + "notes": "RW and UPDATE because it changes the TTL", + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "expiration", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "seconds", + "type": "integer", + "display_text": "seconds", + "token": "EX" + }, + { + "name": "milliseconds", + "type": "integer", + "display_text": "milliseconds", + "token": "PX" + }, + { + "name": "unix-time-seconds", + "type": "unix-time", + "display_text": "unix-time-seconds", + "token": "EXAT" + }, + { + "name": "unix-time-milliseconds", + "type": "unix-time", + "display_text": "unix-time-milliseconds", + "token": "PXAT" + }, + { + "name": "persist", + "type": "pure-token", + "display_text": "persist", + "token": "PERSIST" + } + ] + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "GETRANGE": { + "summary": "Returns a substring of the string stored at a key.", + "since": "2.4.0", + "group": "string", + "complexity": "O(N) where N is the length of the returned string. 
The complexity is ultimately determined by the returned length, but because creating a substring from an existing string is very cheap, it can be considered O(1) for small strings.", + "acl_categories": [ + "@read", + "@string", + "@slow" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "start", + "type": "integer", + "display_text": "start" + }, + { + "name": "end", + "type": "integer", + "display_text": "end" + } + ], + "command_flags": [ + "readonly" + ] + }, + "GETSET": { + "summary": "Returns the previous string value of a key after setting it to a new value.", + "since": "1.0.0", + "group": "string", + "complexity": "O(1)", + "deprecated_since": "6.2.0", + "replaced_by": "`SET` with the `!GET` argument", + "acl_categories": [ + "@write", + "@string", + "@fast" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "value", + "type": "string", + "display_text": "value" + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ], + "doc_flags": [ + "deprecated" + ] + }, + "HDEL": { + "summary": "Deletes one or more fields and their values from a hash. Deletes the hash if no fields remain.", + "since": "2.0.0", + "group": "hash", + "complexity": "O(N) where N is the number of fields to be removed.", + "history": [ + [ + "2.4.0", + "Accepts multiple `field` arguments." + ] + ], + "acl_categories": [ + "@write", + "@hash", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "field", + "type": "string", + "display_text": "field", + "multiple": true + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "HELLO": { + "summary": "Handshakes with the Redis server.", + "since": "6.0.0", + "group": "connection", + "complexity": "O(1)", + "history": [ + [ + "6.2.0", + "`protover` made optional; when called without arguments the command reports the current connection's context." 
+ ] + ], + "acl_categories": [ + "@fast", + "@connection" + ], + "arity": -1, + "arguments": [ + { + "name": "arguments", + "type": "block", + "optional": true, + "arguments": [ + { + "name": "protover", + "type": "integer", + "display_text": "protover" + }, + { + "name": "auth", + "type": "block", + "token": "AUTH", + "optional": true, + "arguments": [ + { + "name": "username", + "type": "string", + "display_text": "username" + }, + { + "name": "password", + "type": "string", + "display_text": "password" + } + ] + }, + { + "name": "clientname", + "type": "string", + "display_text": "clientname", + "token": "SETNAME", + "optional": true + } + ] + } + ], + "command_flags": [ + "noscript", + "loading", + "stale", + "fast", + "no_auth", + "allow_busy" + ] + }, + "HEXISTS": { + "summary": "Determines whether a field exists in a hash.", + "since": "2.0.0", + "group": "hash", + "complexity": "O(1)", + "acl_categories": [ + "@read", + "@hash", + "@fast" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "field", + "type": "string", + "display_text": "field" + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "HGET": { + "summary": "Returns the value of a field in a hash.", + "since": "2.0.0", + "group": "hash", + "complexity": "O(1)", + "acl_categories": [ + "@read", + "@hash", + "@fast" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "field", + "type": "string", + "display_text": "field" + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "HGETALL": { + "summary": "Returns all fields and values in a hash.", + "since": "2.0.0", + "group": "hash", + "complexity": "O(N) where N is the size of the hash.", + "acl_categories": [ + "@read", + "@hash", + "@slow" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output_order" + ] + }, + "HINCRBY": { + "summary": "Increments the integer value of a field in a hash by a number. 
Uses 0 as initial value if the field doesn't exist.", + "since": "2.0.0", + "group": "hash", + "complexity": "O(1)", + "acl_categories": [ + "@write", + "@hash", + "@fast" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "field", + "type": "string", + "display_text": "field" + }, + { + "name": "increment", + "type": "integer", + "display_text": "increment" + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "HINCRBYFLOAT": { + "summary": "Increments the floating point value of a field by a number. Uses 0 as initial value if the field doesn't exist.", + "since": "2.6.0", + "group": "hash", + "complexity": "O(1)", + "acl_categories": [ + "@write", + "@hash", + "@fast" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "field", + "type": "string", + "display_text": "field" + }, + { + "name": "increment", + "type": "double", + "display_text": "increment" + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "HKEYS": { + "summary": "Returns all fields in a hash.", + "since": "2.0.0", + "group": "hash", + "complexity": "O(N) where N is the size of the hash.", + "acl_categories": [ + "@read", + "@hash", + "@slow" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output_order" + ] + }, + "HLEN": { + "summary": "Returns the number of fields in a hash.", + "since": "2.0.0", + "group": "hash", + "complexity": "O(1)", + "acl_categories": [ + "@read", + "@hash", + "@fast" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "HMGET": { + "summary": "Returns the values of all fields in a hash.", + "since": "2.0.0", + "group": "hash", + "complexity": "O(N) where N is the number of fields being requested.", + "acl_categories": [ + "@read", + "@hash", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "field", + "type": "string", + 
"display_text": "field", + "multiple": true + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "HMSET": { + "summary": "Sets the values of multiple fields.", + "since": "2.0.0", + "group": "hash", + "complexity": "O(N) where N is the number of fields being set.", + "deprecated_since": "4.0.0", + "replaced_by": "`HSET` with multiple field-value pairs", + "acl_categories": [ + "@write", + "@hash", + "@fast" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "data", + "type": "block", + "multiple": true, + "arguments": [ + { + "name": "field", + "type": "string", + "display_text": "field" + }, + { + "name": "value", + "type": "string", + "display_text": "value" + } + ] + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ], + "doc_flags": [ + "deprecated" + ] + }, + "HRANDFIELD": { + "summary": "Returns one or more random fields from a hash.", + "since": "6.2.0", + "group": "hash", + "complexity": "O(N) where N is the number of fields returned", + "acl_categories": [ + "@read", + "@hash", + "@slow" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "options", + "type": "block", + "optional": true, + "arguments": [ + { + "name": "count", + "type": "integer", + "display_text": "count" + }, + { + "name": "withvalues", + "type": "pure-token", + "display_text": "withvalues", + "token": "WITHVALUES", + "optional": true + } + ] + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "HSCAN": { + "summary": "Iterates over fields and values of a hash.", + "since": "2.8.0", + "group": "hash", + "complexity": "O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. 
N is the number of elements inside the collection.", + "acl_categories": [ + "@read", + "@hash", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "cursor", + "type": "integer", + "display_text": "cursor" + }, + { + "name": "pattern", + "type": "pattern", + "display_text": "pattern", + "token": "MATCH", + "optional": true + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT", + "optional": true + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "HSET": { + "summary": "Creates or modifies the value of a field in a hash.", + "since": "2.0.0", + "group": "hash", + "complexity": "O(1) for each field/value pair added, so O(N) to add N field/value pairs when the command is called with multiple field/value pairs.", + "history": [ + [ + "4.0.0", + "Accepts multiple `field` and `value` arguments." + ] + ], + "acl_categories": [ + "@write", + "@hash", + "@fast" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "data", + "type": "block", + "multiple": true, + "arguments": [ + { + "name": "field", + "type": "string", + "display_text": "field" + }, + { + "name": "value", + "type": "string", + "display_text": "value" + } + ] + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "HSETNX": { + "summary": "Sets the value of a field in a hash only when the field doesn't exist.", + "since": "2.0.0", + "group": "hash", + "complexity": "O(1)", + "acl_categories": [ + "@write", + "@hash", + "@fast" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "field", + "type": "string", + "display_text": "field" + }, + { + "name": "value", + "type": "string", + "display_text": "value" + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "HSTRLEN": { + "summary": "Returns the length of the value of a field.", + "since": "3.2.0", + "group": "hash", + "complexity": "O(1)", + "acl_categories": [ + "@read", + "@hash", + "@fast" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "field", + "type": "string", + "display_text": "field" + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "HVALS": { + "summary": "Returns all values in a hash.", + "since": "2.0.0", + "group": "hash", + "complexity": "O(N) where N is the 
size of the hash.", + "acl_categories": [ + "@read", + "@hash", + "@slow" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output_order" + ] + }, + "INCR": { + "summary": "Increments the integer value of a key by one. Uses 0 as initial value if the key doesn't exist.", + "since": "1.0.0", + "group": "string", + "complexity": "O(1)", + "acl_categories": [ + "@write", + "@string", + "@fast" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "INCRBY": { + "summary": "Increments the integer value of a key by a number. Uses 0 as initial value if the key doesn't exist.", + "since": "1.0.0", + "group": "string", + "complexity": "O(1)", + "acl_categories": [ + "@write", + "@string", + "@fast" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "increment", + "type": "integer", + "display_text": "increment" + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "INCRBYFLOAT": { + "summary": "Increments the floating point value of a key by a number. Uses 0 as initial value if the key doesn't exist.", + "since": "2.6.0", + "group": "string", + "complexity": "O(1)", + "acl_categories": [ + "@write", + "@string", + "@fast" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "increment", + "type": "double", + "display_text": "increment" + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "INFO": { + "summary": "Returns information and statistics about the server.", + "since": "1.0.0", + "group": "server", + "complexity": "O(1)", + "history": [ + [ + "7.0.0", + "Added support for taking multiple section arguments." 
+ ] + ], + "acl_categories": [ + "@slow", + "@dangerous" + ], + "arity": -1, + "arguments": [ + { + "name": "section", + "type": "string", + "display_text": "section", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "loading", + "stale" + ], + "hints": [ + "nondeterministic_output", + "request_policy:all_shards", + "response_policy:special" + ] + }, + "KEYS": { + "summary": "Returns all key names that match a pattern.", + "since": "1.0.0", + "group": "generic", + "complexity": "O(N) with N being the number of keys in the database, under the assumption that the key names in the database and the given pattern have limited length.", + "acl_categories": [ + "@keyspace", + "@read", + "@slow", + "@dangerous" + ], + "arity": 2, + "arguments": [ + { + "name": "pattern", + "type": "pattern", + "display_text": "pattern" + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "request_policy:all_shards", + "nondeterministic_output_order" + ] + }, + "LASTSAVE": { + "summary": "Returns the Unix timestamp of the last successful save to disk.", + "since": "1.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@fast", + "@dangerous" + ], + "arity": 1, + "command_flags": [ + "loading", + "stale", + "fast" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "LATENCY": { + "summary": "A container for latency diagnostics commands.", + "since": "2.8.13", + "group": "server", + "complexity": "Depends on subcommand.", + "acl_categories": [ + "@slow" + ], + "arity": -2 + }, + "LATENCY DOCTOR": { + "summary": "Returns a human-readable latency analysis report.", + "since": "2.8.13", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 2, + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ], + "hints": [ + "nondeterministic_output", + "request_policy:all_nodes", + "response_policy:special" + ] + }, + "LATENCY GRAPH": { + "summary": "Returns a latency graph for an event.", + "since": "2.8.13", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 3, + "arguments": [ + { + "name": "event", + "type": "string", + "display_text": "event" + } + ], + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ], + "hints": [ + "nondeterministic_output", + "request_policy:all_nodes", + "response_policy:special" + ] + }, + "LATENCY HELP": { + "summary": "Returns helpful text about the different subcommands.", + "since": "2.8.13", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "command_flags": [ + "loading", + "stale" + ] + }, + "LATENCY HISTOGRAM": { + "summary": "Returns the cumulative distribution of latencies of a subset or all commands.", + "since": "7.0.0", + "group": "server", + "complexity": "O(N) where N is the number of commands with latency information being retrieved.", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -2, + "arguments": [ + { + "name": "command", + "type": "string", + "display_text": "command", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ], + "hints": [ + "nondeterministic_output", + "request_policy:all_nodes", + "response_policy:special" + ] + }, + "LATENCY HISTORY": { + "summary": "Returns timestamp-latency samples for an event.", + "since": "2.8.13", + "group": "server", + "complexity": "O(1)", + "acl_categories": 
[ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 3, + "arguments": [ + { + "name": "event", + "type": "string", + "display_text": "event" + } + ], + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ], + "hints": [ + "nondeterministic_output", + "request_policy:all_nodes", + "response_policy:special" + ] + }, + "LATENCY LATEST": { + "summary": "Returns the latest latency samples for all events.", + "since": "2.8.13", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 2, + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ], + "hints": [ + "nondeterministic_output", + "request_policy:all_nodes", + "response_policy:special" + ] + }, + "LATENCY RESET": { + "summary": "Resets the latency data for one or more events.", + "since": "2.8.13", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -2, + "arguments": [ + { + "name": "event", + "type": "string", + "display_text": "event", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ], + "hints": [ + "request_policy:all_nodes", + "response_policy:agg_sum" + ] + }, + "LCS": { + "summary": "Finds the longest common substring.", + "since": "7.0.0", + "group": "string", + "complexity": "O(N*M) where N and M are the lengths of s1 and s2, respectively", + "acl_categories": [ + "@read", + "@string", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 1, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key1", + "type": "key", + "display_text": "key1", + "key_spec_index": 0 + }, + { + "name": "key2", + "type": "key", + "display_text": "key2", + "key_spec_index": 0 + }, + { + "name": "len", + "type": "pure-token", + "display_text": "len", + "token": "LEN", + "optional": true + }, + { + "name": "idx", + "type": "pure-token", + "display_text": "idx", + "token": "IDX", + "optional": true + }, + { + "name": "min-match-len", + "type": "integer", + "display_text": "min-match-len", + "token": "MINMATCHLEN", + "optional": true + }, + { + "name": "withmatchlen", + "type": "pure-token", + "display_text": "withmatchlen", + "token": "WITHMATCHLEN", + "optional": true + } + ], + "command_flags": [ + "readonly" + ] + }, + "LINDEX": { + "summary": "Returns an element from a list by its index.", + "since": "1.0.0", + "group": "list", + "complexity": "O(N) where N is the number of elements to traverse to get to the element at index. 
This makes asking for the first or the last element of the list O(1).", + "acl_categories": [ + "@read", + "@list", + "@slow" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "index", + "type": "integer", + "display_text": "index" + } + ], + "command_flags": [ + "readonly" + ] + }, + "LINSERT": { + "summary": "Inserts an element before or after another element in a list.", + "since": "2.2.0", + "group": "list", + "complexity": "O(N) where N is the number of elements to traverse before seeing the value pivot. This means that inserting somewhere on the left end of the list (head) can be considered O(1) and inserting somewhere on the right end (tail) is O(N).", + "acl_categories": [ + "@write", + "@list", + "@slow" + ], + "arity": 5, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "where", + "type": "oneof", + "arguments": [ + { + "name": "before", + "type": "pure-token", + "display_text": "before", + "token": "BEFORE" + }, + { + "name": "after", + "type": "pure-token", + "display_text": "after", + "token": "AFTER" + } + ] + }, + { + "name": "pivot", + "type": "string", + "display_text": "pivot" + }, + { + "name": "element", + "type": "string", + "display_text": "element" + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "LLEN": { + "summary": "Returns the length of a list.", + "since": "1.0.0", + "group": "list", + "complexity": "O(1)", + "acl_categories": [ + "@read", + "@list", + "@fast" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "LMOVE": { + "summary": "Returns an element after popping it from one list and pushing it to another. 
Deletes the list if the last element was moved.", + "since": "6.2.0", + "group": "list", + "complexity": "O(1)", + "acl_categories": [ + "@write", + "@list", + "@slow" + ], + "arity": 5, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "source", + "type": "key", + "display_text": "source", + "key_spec_index": 0 + }, + { + "name": "destination", + "type": "key", + "display_text": "destination", + "key_spec_index": 1 + }, + { + "name": "wherefrom", + "type": "oneof", + "arguments": [ + { + "name": "left", + "type": "pure-token", + "display_text": "left", + "token": "LEFT" + }, + { + "name": "right", + "type": "pure-token", + "display_text": "right", + "token": "RIGHT" + } + ] + }, + { + "name": "whereto", + "type": "oneof", + "arguments": [ + { + "name": "left", + "type": "pure-token", + "display_text": "left", + "token": "LEFT" + }, + { + "name": "right", + "type": "pure-token", + "display_text": "right", + "token": "RIGHT" + } + ] + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "LMPOP": { + "summary": "Returns multiple elements from a list after removing them. Deletes the list if the last element was popped.", + "since": "7.0.0", + "group": "list", + "complexity": "O(N+M) where N is the number of provided keys and M is the number of elements returned.", + "acl_categories": [ + "@write", + "@list", + "@slow" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "keynum", + "spec": { + "keynumidx": 0, + "firstkey": 1, + "keystep": 1 + } + }, + "RW": true, + "access": true, + "delete": true + } + ], + "arguments": [ + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + }, + { + "name": "where", + "type": "oneof", + "arguments": [ + { + "name": "left", + "type": "pure-token", + "display_text": "left", + "token": "LEFT" + }, + { + "name": "right", + "type": "pure-token", + "display_text": "right", + "token": "RIGHT" + } + ] + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT", + "optional": true + } + ], + "command_flags": [ + "write", + "movablekeys" + ] + }, + "LOLWUT": { + "summary": "Displays computer art and the Redis version.", + "since": "5.0.0", + "group": "server", + "acl_categories": [ + "@read", + "@fast" + ], + "arity": -1, + "arguments": [ + { + "name": "version", + "type": "integer", + "display_text": "version", + "token": "VERSION", + "optional": true + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "LPOP": { + "summary": "Returns the first elements in a list after removing them. Deletes the list if the last element was popped.", + "since": "1.0.0", + "group": "list", + "complexity": "O(N) where N is the number of elements returned", + "history": [ + [ + "6.2.0", + "Added the `count` argument." 
+ ] + ], + "acl_categories": [ + "@write", + "@list", + "@fast" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "since": "6.2.0", + "optional": true + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "LPOS": { + "summary": "Returns the index of matching elements in a list.", + "since": "6.0.6", + "group": "list", + "complexity": "O(N) where N is the number of elements in the list, for the average case. When searching for elements near the head or the tail of the list, or when the MAXLEN option is provided, the command may run in constant time.", + "acl_categories": [ + "@read", + "@list", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "element", + "type": "string", + "display_text": "element" + }, + { + "name": "rank", + "type": "integer", + "display_text": "rank", + "token": "RANK", + "optional": true + }, + { + "name": "num-matches", + "type": "integer", + "display_text": "num-matches", + "token": "COUNT", + "optional": true + }, + { + "name": "len", + "type": "integer", + "display_text": "len", + "token": "MAXLEN", + "optional": true + } + ], + "command_flags": [ + "readonly" + ] + }, + "LPUSH": { + "summary": "Prepends one or more elements to a list. Creates the key if it doesn't exist.", + "since": "1.0.0", + "group": "list", + "complexity": "O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.", + "history": [ + [ + "2.4.0", + "Accepts multiple `element` arguments." + ] + ], + "acl_categories": [ + "@write", + "@list", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "element", + "type": "string", + "display_text": "element", + "multiple": true + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "LPUSHX": { + "summary": "Prepends one or more elements to a list only when the list exists.", + "since": "2.2.0", + "group": "list", + "complexity": "O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.", + "history": [ + [ + "4.0.0", + "Accepts multiple `element` arguments." 
+ ] + ], + "acl_categories": [ + "@write", + "@list", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "element", + "type": "string", + "display_text": "element", + "multiple": true + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "LRANGE": { + "summary": "Returns a range of elements from a list.", + "since": "1.0.0", + "group": "list", + "complexity": "O(S+N) where S is the distance of start offset from HEAD for small lists, from nearest end (HEAD or TAIL) for large lists; and N is the number of elements in the specified range.", + "acl_categories": [ + "@read", + "@list", + "@slow" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "start", + "type": "integer", + "display_text": "start" + }, + { + "name": "stop", + "type": "integer", + "display_text": "stop" + } + ], + "command_flags": [ + "readonly" + ] + }, + "LREM": { + "summary": "Removes elements from a list. Deletes the list if the last element was removed.", + "since": "1.0.0", + "group": "list", + "complexity": "O(N+M) where N is the length of the list and M is the number of elements removed.", + "acl_categories": [ + "@write", + "@list", + "@slow" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "count", + "type": "integer", + "display_text": "count" + }, + { + "name": "element", + "type": "string", + "display_text": "element" + } + ], + "command_flags": [ + "write" + ] + }, + "LSET": { + "summary": "Sets the value of an element in a list by its index.", + "since": "1.0.0", + "group": "list", + "complexity": "O(N) where N is the length of the list. Setting either the first or the last element of the list is O(1).", + "acl_categories": [ + "@write", + "@list", + "@slow" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "index", + "type": "integer", + "display_text": "index" + }, + { + "name": "element", + "type": "string", + "display_text": "element" + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "LTRIM": { + "summary": "Removes elements from both ends of a list. 
Deletes the list if all elements were trimmed.", + "since": "1.0.0", + "group": "list", + "complexity": "O(N) where N is the number of elements to be removed by the operation.", + "acl_categories": [ + "@write", + "@list", + "@slow" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "start", + "type": "integer", + "display_text": "start" + }, + { + "name": "stop", + "type": "integer", + "display_text": "stop" + } + ], + "command_flags": [ + "write" + ] + }, + "MEMORY": { + "summary": "A container for memory diagnostics commands.", + "since": "4.0.0", + "group": "server", + "complexity": "Depends on subcommand.", + "acl_categories": [ + "@slow" + ], + "arity": -2 + }, + "MEMORY DOCTOR": { + "summary": "Outputs a memory problems report.", + "since": "4.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "hints": [ + "nondeterministic_output", + "request_policy:all_shards", + "response_policy:special" + ] + }, + "MEMORY HELP": { + "summary": "Returns helpful text about the different subcommands.", + "since": "4.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "command_flags": [ + "loading", + "stale" + ] + }, + "MEMORY MALLOC-STATS": { + "summary": "Returns the allocator statistics.", + "since": "4.0.0", + "group": "server", + "complexity": "Depends on how much memory is allocated, could be slow", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "hints": [ + "nondeterministic_output", + "request_policy:all_shards", + "response_policy:special" + ] + }, + "MEMORY PURGE": { + "summary": "Asks the allocator to release memory.", + "since": "4.0.0", + "group": "server", + "complexity": "Depends on how much memory is allocated, could be slow", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "hints": [ + "request_policy:all_shards", + "response_policy:all_succeeded" + ] + }, + "MEMORY STATS": { + "summary": "Returns details about memory usage.", + "since": "4.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "hints": [ + "nondeterministic_output", + "request_policy:all_shards", + "response_policy:special" + ] + }, + "MEMORY USAGE": { + "summary": "Estimates the memory usage of a key.", + "since": "4.0.0", + "group": "server", + "complexity": "O(N) where N is the number of samples.", + "acl_categories": [ + "@read", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "SAMPLES", + "optional": true + } + ], + "command_flags": [ + "readonly" + ] + }, + "MGET": { + "summary": "Atomically returns the string values of one or more keys.", + "since": "1.0.0", + "group": "string", + "complexity": "O(N) where N is the number of keys to retrieve.", + "acl_categories": [ + "@read", + "@string", + "@fast" + ], + "arity": -2, + "key_specs": [ + { + 
"begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + } + ], + "command_flags": [ + "readonly", + "fast" + ], + "hints": [ + "request_policy:multi_shard" + ] + }, + "MIGRATE": { + "summary": "Atomically transfers a key from one Redis instance to another.", + "since": "2.6.0", + "group": "generic", + "complexity": "This command actually executes a DUMP+DEL in the source instance, and a RESTORE in the target instance. See the pages of these commands for time complexity. Also an O(N) data transfer between the two instances is performed.", + "history": [ + [ + "3.0.0", + "Added the `COPY` and `REPLACE` options." + ], + [ + "3.0.6", + "Added the `KEYS` option." + ], + [ + "4.0.7", + "Added the `AUTH` option." + ], + [ + "6.0.0", + "Added the `AUTH2` option." + ] + ], + "acl_categories": [ + "@keyspace", + "@write", + "@slow", + "@dangerous" + ], + "arity": -6, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 3 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + }, + { + "begin_search": { + "type": "keyword", + "spec": { + "keyword": "KEYS", + "startfrom": -2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true, + "incomplete": true + } + ], + "arguments": [ + { + "name": "host", + "type": "string", + "display_text": "host" + }, + { + "name": "port", + "type": "integer", + "display_text": "port" + }, + { + "name": "key-selector", + "type": "oneof", + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "empty-string", + "type": "pure-token", + "display_text": "empty-string", + "token": "" + } + ] + }, + { + "name": "destination-db", + "type": "integer", + "display_text": "destination-db" + }, + { + "name": "timeout", + "type": "integer", + "display_text": "timeout" + }, + { + "name": "copy", + "type": "pure-token", + "display_text": "copy", + "token": "COPY", + "since": "3.0.0", + "optional": true + }, + { + "name": "replace", + "type": "pure-token", + "display_text": "replace", + "token": "REPLACE", + "since": "3.0.0", + "optional": true + }, + { + "name": "authentication", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "auth", + "type": "string", + "display_text": "password", + "token": "AUTH", + "since": "4.0.7" + }, + { + "name": "auth2", + "type": "block", + "token": "AUTH2", + "since": "6.0.0", + "arguments": [ + { + "name": "username", + "type": "string", + "display_text": "username" + }, + { + "name": "password", + "type": "string", + "display_text": "password" + } + ] + } + ] + }, + { + "name": "keys", + "type": "key", + "display_text": "key", + "key_spec_index": 1, + "token": "KEYS", + "since": "3.0.6", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "write", + "movablekeys" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "MODULE": { + "summary": "A container for module commands.", + "since": "4.0.0", + "group": "server", + "complexity": "Depends on subcommand.", + "acl_categories": [ + "@slow" + ], + "arity": -2 + }, + "MODULE HELP": { + 
"summary": "Returns helpful text about the different subcommands.", + "since": "5.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "command_flags": [ + "loading", + "stale" + ] + }, + "MODULE LIST": { + "summary": "Returns all loaded modules.", + "since": "4.0.0", + "group": "server", + "complexity": "O(N) where N is the number of loaded modules.", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 2, + "command_flags": [ + "admin", + "noscript" + ], + "hints": [ + "nondeterministic_output_order" + ] + }, + "MODULE LOAD": { + "summary": "Loads a module.", + "since": "4.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -3, + "arguments": [ + { + "name": "path", + "type": "string", + "display_text": "path" + }, + { + "name": "arg", + "type": "string", + "display_text": "arg", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "admin", + "noscript", + "no_async_loading" + ] + }, + "MODULE LOADEX": { + "summary": "Loads a module using extended parameters.", + "since": "7.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -3, + "arguments": [ + { + "name": "path", + "type": "string", + "display_text": "path" + }, + { + "name": "configs", + "type": "block", + "token": "CONFIG", + "optional": true, + "multiple": true, + "multiple_token": true, + "arguments": [ + { + "name": "name", + "type": "string", + "display_text": "name" + }, + { + "name": "value", + "type": "string", + "display_text": "value" + } + ] + }, + { + "name": "args", + "type": "string", + "display_text": "args", + "token": "ARGS", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "admin", + "noscript", + "no_async_loading" + ] + }, + "MODULE UNLOAD": { + "summary": "Unloads a module.", + "since": "4.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 3, + "arguments": [ + { + "name": "name", + "type": "string", + "display_text": "name" + } + ], + "command_flags": [ + "admin", + "noscript", + "no_async_loading" + ] + }, + "MONITOR": { + "summary": "Listens for all requests received by the server in real-time.", + "since": "1.0.0", + "group": "server", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 1, + "command_flags": [ + "admin", + "noscript", + "loading", + "stale" + ] + }, + "MOVE": { + "summary": "Moves a key to another database.", + "since": "1.0.0", + "group": "generic", + "complexity": "O(1)", + "acl_categories": [ + "@keyspace", + "@write", + "@fast" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "db", + "type": "integer", + "display_text": "db" + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "MSET": { + "summary": "Atomically creates or modifies the string values of one or more keys.", + "since": "1.0.1", + "group": "string", + "complexity": "O(N) where N is the number of keys to set.", + "acl_categories": [ + "@write", + "@string", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + 
"begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 2, + "limit": 0 + } + }, + "OW": true, + "update": true + } + ], + "arguments": [ + { + "name": "data", + "type": "block", + "multiple": true, + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "value", + "type": "string", + "display_text": "value" + } + ] + } + ], + "command_flags": [ + "write", + "denyoom" + ], + "hints": [ + "request_policy:multi_shard", + "response_policy:all_succeeded" + ] + }, + "MSETNX": { + "summary": "Atomically modifies the string values of one or more keys only when all keys don't exist.", + "since": "1.0.1", + "group": "string", + "complexity": "O(N) where N is the number of keys to set.", + "acl_categories": [ + "@write", + "@string", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 2, + "limit": 0 + } + }, + "OW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "data", + "type": "block", + "multiple": true, + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "value", + "type": "string", + "display_text": "value" + } + ] + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "MULTI": { + "summary": "Starts a transaction.", + "since": "1.2.0", + "group": "transactions", + "complexity": "O(1)", + "acl_categories": [ + "@fast", + "@transaction" + ], + "arity": 1, + "command_flags": [ + "noscript", + "loading", + "stale", + "fast", + "allow_busy" + ] + }, + "OBJECT": { + "summary": "A container for object introspection commands.", + "since": "2.2.3", + "group": "generic", + "complexity": "Depends on subcommand.", + "acl_categories": [ + "@slow" + ], + "arity": -2 + }, + "OBJECT ENCODING": { + "summary": "Returns the internal encoding of a Redis object.", + "since": "2.2.3", + "group": "generic", + "complexity": "O(1)", + "acl_categories": [ + "@keyspace", + "@read", + "@slow" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "OBJECT FREQ": { + "summary": "Returns the logarithmic access frequency counter of a Redis object.", + "since": "4.0.0", + "group": "generic", + "complexity": "O(1)", + "acl_categories": [ + "@keyspace", + "@read", + "@slow" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "OBJECT HELP": { + "summary": "Returns helpful text about the different subcommands.", + "since": "6.2.0", + "group": "generic", + "complexity": "O(1)", + "acl_categories": [ + "@keyspace", + "@slow" + ], + "arity": 2, + "command_flags": [ + "loading", + 
"stale" + ] + }, + "OBJECT IDLETIME": { + "summary": "Returns the time since the last access to a Redis object.", + "since": "2.2.3", + "group": "generic", + "complexity": "O(1)", + "acl_categories": [ + "@keyspace", + "@read", + "@slow" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "OBJECT REFCOUNT": { + "summary": "Returns the reference count of a value of a key.", + "since": "2.2.3", + "group": "generic", + "complexity": "O(1)", + "acl_categories": [ + "@keyspace", + "@read", + "@slow" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "PERSIST": { + "summary": "Removes the expiration time of a key.", + "since": "2.2.0", + "group": "generic", + "complexity": "O(1)", + "acl_categories": [ + "@keyspace", + "@write", + "@fast" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "PEXPIRE": { + "summary": "Sets the expiration time of a key in milliseconds.", + "since": "2.6.0", + "group": "generic", + "complexity": "O(1)", + "history": [ + [ + "7.0.0", + "Added options: `NX`, `XX`, `GT` and `LT`." + ] + ], + "acl_categories": [ + "@keyspace", + "@write", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "milliseconds", + "type": "integer", + "display_text": "milliseconds" + }, + { + "name": "condition", + "type": "oneof", + "since": "7.0.0", + "optional": true, + "arguments": [ + { + "name": "nx", + "type": "pure-token", + "display_text": "nx", + "token": "NX" + }, + { + "name": "xx", + "type": "pure-token", + "display_text": "xx", + "token": "XX" + }, + { + "name": "gt", + "type": "pure-token", + "display_text": "gt", + "token": "GT" + }, + { + "name": "lt", + "type": "pure-token", + "display_text": "lt", + "token": "LT" + } + ] + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "PEXPIREAT": { + "summary": "Sets the expiration time of a key to a Unix milliseconds timestamp.", + "since": "2.6.0", + "group": "generic", + "complexity": "O(1)", + "history": [ + [ + "7.0.0", + "Added options: `NX`, `XX`, `GT` and `LT`." 
+ ] + ], + "acl_categories": [ + "@keyspace", + "@write", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "unix-time-milliseconds", + "type": "unix-time", + "display_text": "unix-time-milliseconds" + }, + { + "name": "condition", + "type": "oneof", + "since": "7.0.0", + "optional": true, + "arguments": [ + { + "name": "nx", + "type": "pure-token", + "display_text": "nx", + "token": "NX" + }, + { + "name": "xx", + "type": "pure-token", + "display_text": "xx", + "token": "XX" + }, + { + "name": "gt", + "type": "pure-token", + "display_text": "gt", + "token": "GT" + }, + { + "name": "lt", + "type": "pure-token", + "display_text": "lt", + "token": "LT" + } + ] + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "PEXPIRETIME": { + "summary": "Returns the expiration time of a key as a Unix milliseconds timestamp.", + "since": "7.0.0", + "group": "generic", + "complexity": "O(1)", + "acl_categories": [ + "@keyspace", + "@read", + "@fast" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "PFADD": { + "summary": "Adds elements to a HyperLogLog key. Creates the key if it doesn't exist.", + "since": "2.8.9", + "group": "hyperloglog", + "complexity": "O(1) to add every element.", + "acl_categories": [ + "@write", + "@hyperloglog", + "@fast" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "element", + "type": "string", + "display_text": "element", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "PFCOUNT": { + "summary": "Returns the approximated cardinality of the set(s) observed by the HyperLogLog key(s).", + "since": "2.8.9", + "group": "hyperloglog", + "complexity": "O(1) with a very small average constant time when called with a single key. 
O(N) with N being the number of keys, and much bigger constant times, when called with multiple keys.", + "acl_categories": [ + "@read", + "@hyperloglog", + "@slow" + ], + "arity": -2, + "key_specs": [ + { + "notes": "RW because it may change the internal representation of the key, and propagate to replicas", + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + } + ], + "command_flags": [ + "readonly" + ] + }, + "PFDEBUG": { + "summary": "Internal commands for debugging HyperLogLog values.", + "since": "2.8.9", + "group": "hyperloglog", + "complexity": "N/A", + "acl_categories": [ + "@write", + "@hyperloglog", + "@admin", + "@slow", + "@dangerous" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true + } + ], + "arguments": [ + { + "name": "subcommand", + "type": "string", + "display_text": "subcommand" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "write", + "denyoom", + "admin" + ], + "doc_flags": [ + "syscmd" + ] + }, + "PFMERGE": { + "summary": "Merges one or more HyperLogLog values into a single key.", + "since": "2.8.9", + "group": "hyperloglog", + "complexity": "O(N) to merge N HyperLogLogs, but with high constant times.", + "acl_categories": [ + "@write", + "@hyperloglog", + "@slow" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "insert": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "destkey", + "type": "key", + "display_text": "destkey", + "key_spec_index": 0 + }, + { + "name": "sourcekey", + "type": "key", + "display_text": "sourcekey", + "key_spec_index": 1, + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "PFSELFTEST": { + "summary": "An internal command for testing HyperLogLog values.", + "since": "2.8.9", + "group": "hyperloglog", + "complexity": "N/A", + "acl_categories": [ + "@hyperloglog", + "@admin", + "@slow", + "@dangerous" + ], + "arity": 1, + "command_flags": [ + "admin" + ], + "doc_flags": [ + "syscmd" + ] + }, + "PING": { + "summary": "Returns the server's liveliness response.", + "since": "1.0.0", + "group": "connection", + "complexity": "O(1)", + "acl_categories": [ + "@fast", + "@connection" + ], + "arity": -1, + "arguments": [ + { + "name": "message", + "type": "string", + "display_text": "message", + "optional": true + } + ], + "command_flags": [ + "fast" + ], + "hints": [ + "request_policy:all_shards", + "response_policy:all_succeeded" + ] + }, + "PSETEX": { + "summary": "Sets both string value and expiration time in milliseconds of a key. 
The key is created if it doesn't exist.", + "since": "2.6.0", + "group": "string", + "complexity": "O(1)", + "deprecated_since": "2.6.12", + "replaced_by": "`SET` with the `PX` argument", + "acl_categories": [ + "@write", + "@string", + "@slow" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "milliseconds", + "type": "integer", + "display_text": "milliseconds" + }, + { + "name": "value", + "type": "string", + "display_text": "value" + } + ], + "command_flags": [ + "write", + "denyoom" + ], + "doc_flags": [ + "deprecated" + ] + }, + "PSUBSCRIBE": { + "summary": "Listens for messages published to channels that match one or more patterns.", + "since": "2.0.0", + "group": "pubsub", + "complexity": "O(N) where N is the number of patterns to subscribe to.", + "acl_categories": [ + "@pubsub", + "@slow" + ], + "arity": -2, + "arguments": [ + { + "name": "pattern", + "type": "pattern", + "display_text": "pattern", + "multiple": true + } + ], + "command_flags": [ + "pubsub", + "noscript", + "loading", + "stale" + ] + }, + "PSYNC": { + "summary": "An internal command used in replication.", + "since": "2.8.0", + "group": "server", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -3, + "arguments": [ + { + "name": "replicationid", + "type": "string", + "display_text": "replicationid" + }, + { + "name": "offset", + "type": "integer", + "display_text": "offset" + } + ], + "command_flags": [ + "admin", + "noscript", + "no_async_loading", + "no_multi" + ] + }, + "PTTL": { + "summary": "Returns the expiration time in milliseconds of a key.", + "since": "2.6.0", + "group": "generic", + "complexity": "O(1)", + "history": [ + [ + "2.8.0", + "Added the -2 reply." 
+ ] + ], + "acl_categories": [ + "@keyspace", + "@read", + "@fast" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly", + "fast" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "PUBLISH": { + "summary": "Posts a message to a channel.", + "since": "2.0.0", + "group": "pubsub", + "complexity": "O(N+M) where N is the number of clients subscribed to the receiving channel and M is the total number of subscribed patterns (by any client).", + "acl_categories": [ + "@pubsub", + "@fast" + ], + "arity": 3, + "arguments": [ + { + "name": "channel", + "type": "string", + "display_text": "channel" + }, + { + "name": "message", + "type": "string", + "display_text": "message" + } + ], + "command_flags": [ + "pubsub", + "loading", + "stale", + "fast" + ] + }, + "PUBSUB": { + "summary": "A container for Pub/Sub commands.", + "since": "2.8.0", + "group": "pubsub", + "complexity": "Depends on subcommand.", + "acl_categories": [ + "@slow" + ], + "arity": -2 + }, + "PUBSUB CHANNELS": { + "summary": "Returns the active channels.", + "since": "2.8.0", + "group": "pubsub", + "complexity": "O(N) where N is the number of active channels, and assuming constant time pattern matching (relatively short channels and patterns)", + "acl_categories": [ + "@pubsub", + "@slow" + ], + "arity": -2, + "arguments": [ + { + "name": "pattern", + "type": "pattern", + "display_text": "pattern", + "optional": true + } + ], + "command_flags": [ + "pubsub", + "loading", + "stale" + ] + }, + "PUBSUB HELP": { + "summary": "Returns helpful text about the different subcommands.", + "since": "6.2.0", + "group": "pubsub", + "complexity": "O(1)", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "command_flags": [ + "loading", + "stale" + ] + }, + "PUBSUB NUMPAT": { + "summary": "Returns a count of unique pattern subscriptions.", + "since": "2.8.0", + "group": "pubsub", + "complexity": "O(1)", + "acl_categories": [ + "@pubsub", + "@slow" + ], + "arity": 2, + "command_flags": [ + "pubsub", + "loading", + "stale" + ] + }, + "PUBSUB NUMSUB": { + "summary": "Returns a count of subscribers to channels.", + "since": "2.8.0", + "group": "pubsub", + "complexity": "O(N) for the NUMSUB subcommand, where N is the number of requested channels", + "acl_categories": [ + "@pubsub", + "@slow" + ], + "arity": -2, + "arguments": [ + { + "name": "channel", + "type": "string", + "display_text": "channel", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "pubsub", + "loading", + "stale" + ] + }, + "PUBSUB SHARDCHANNELS": { + "summary": "Returns the active shard channels.", + "since": "7.0.0", + "group": "pubsub", + "complexity": "O(N) where N is the number of active shard channels, and assuming constant time pattern matching (relatively short shard channels).", + "acl_categories": [ + "@pubsub", + "@slow" + ], + "arity": -2, + "arguments": [ + { + "name": "pattern", + "type": "pattern", + "display_text": "pattern", + "optional": true + } + ], + "command_flags": [ + "pubsub", + "loading", + "stale" + ] + }, + "PUBSUB SHARDNUMSUB": { + "summary": "Returns the count of subscribers of shard channels.", + "since": "7.0.0", + "group": "pubsub", + "complexity": "O(N) for the SHARDNUMSUB 
subcommand, where N is the number of requested shard channels", + "acl_categories": [ + "@pubsub", + "@slow" + ], + "arity": -2, + "arguments": [ + { + "name": "shardchannel", + "type": "string", + "display_text": "shardchannel", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "pubsub", + "loading", + "stale" + ] + }, + "PUNSUBSCRIBE": { + "summary": "Stops listening to messages published to channels that match one or more patterns.", + "since": "2.0.0", + "group": "pubsub", + "complexity": "O(N) where N is the number of patterns to unsubscribe.", + "acl_categories": [ + "@pubsub", + "@slow" + ], + "arity": -1, + "arguments": [ + { + "name": "pattern", + "type": "pattern", + "display_text": "pattern", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "pubsub", + "noscript", + "loading", + "stale" + ] + }, + "QUIT": { + "summary": "Closes the connection.", + "since": "1.0.0", + "group": "connection", + "complexity": "O(1)", + "deprecated_since": "7.2.0", + "replaced_by": "just closing the connection", + "acl_categories": [ + "@fast", + "@connection" + ], + "arity": -1, + "command_flags": [ + "noscript", + "loading", + "stale", + "fast", + "no_auth", + "allow_busy" + ], + "doc_flags": [ + "deprecated" + ] + }, + "RANDOMKEY": { + "summary": "Returns a random key name from the database.", + "since": "1.0.0", + "group": "generic", + "complexity": "O(1)", + "acl_categories": [ + "@keyspace", + "@read", + "@slow" + ], + "arity": 1, + "command_flags": [ + "readonly" + ], + "hints": [ + "request_policy:all_shards", + "response_policy:special", + "nondeterministic_output" + ] + }, + "READONLY": { + "summary": "Enables read-only queries for a connection to a Redis Cluster replica node.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(1)", + "acl_categories": [ + "@fast", + "@connection" + ], + "arity": 1, + "command_flags": [ + "loading", + "stale", + "fast" + ] + }, + "READWRITE": { + "summary": "Enables read-write queries for a connection to a Redis Cluster replica node.", + "since": "3.0.0", + "group": "cluster", + "complexity": "O(1)", + "acl_categories": [ + "@fast", + "@connection" + ], + "arity": 1, + "command_flags": [ + "loading", + "stale", + "fast" + ] + }, + "RENAME": { + "summary": "Renames a key and overwrites the destination.", + "since": "1.0.0", + "group": "generic", + "complexity": "O(1)", + "acl_categories": [ + "@keyspace", + "@write", + "@slow" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "newkey", + "type": "key", + "display_text": "newkey", + "key_spec_index": 1 + } + ], + "command_flags": [ + "write" + ] + }, + "RENAMENX": { + "summary": "Renames a key only when the target key name doesn't exist.", + "since": "1.0.0", + "group": "generic", + "complexity": "O(1)", + "history": [ + [ + "3.2.0", + "The command no longer returns an error when source and destination names are the same."
+ ] + ], + "acl_categories": [ + "@keyspace", + "@write", + "@fast" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "newkey", + "type": "key", + "display_text": "newkey", + "key_spec_index": 1 + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "REPLCONF": { + "summary": "An internal command for configuring the replication stream.", + "since": "3.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -1, + "command_flags": [ + "admin", + "noscript", + "loading", + "stale", + "allow_busy" + ], + "doc_flags": [ + "syscmd" + ] + }, + "REPLICAOF": { + "summary": "Configures a server as replica of another, or promotes it to a master.", + "since": "5.0.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 3, + "arguments": [ + { + "name": "args", + "type": "oneof", + "arguments": [ + { + "name": "host-port", + "type": "block", + "arguments": [ + { + "name": "host", + "type": "string", + "display_text": "host" + }, + { + "name": "port", + "type": "integer", + "display_text": "port" + } + ] + }, + { + "name": "no-one", + "type": "block", + "arguments": [ + { + "name": "no", + "type": "pure-token", + "display_text": "no", + "token": "NO" + }, + { + "name": "one", + "type": "pure-token", + "display_text": "one", + "token": "ONE" + } + ] + } + ] + } + ], + "command_flags": [ + "admin", + "noscript", + "stale", + "no_async_loading" + ] + }, + "RESET": { + "summary": "Resets the connection.", + "since": "6.2.0", + "group": "connection", + "complexity": "O(1)", + "acl_categories": [ + "@fast", + "@connection" + ], + "arity": 1, + "command_flags": [ + "noscript", + "loading", + "stale", + "fast", + "no_auth", + "allow_busy" + ] + }, + "RESTORE": { + "summary": "Creates a key from the serialized representation of a value.", + "since": "2.6.0", + "group": "generic", + "complexity": "O(1) to create the new key and additional O(N*M) to reconstruct the serialized value, where N is the number of Redis objects composing the value and M their average size. For small string values the time complexity is thus O(1)+O(1*M) where M is small, so simply O(1). However for sorted set values the complexity is O(N*M*log(N)) because inserting values into sorted sets is O(log(N)).", + "history": [ + [ + "3.0.0", + "Added the `REPLACE` modifier." + ], + [ + "5.0.0", + "Added the `ABSTTL` modifier." + ], + [ + "5.0.0", + "Added the `IDLETIME` and `FREQ` options." 
+ ] + ], + "acl_categories": [ + "@keyspace", + "@write", + "@slow", + "@dangerous" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "ttl", + "type": "integer", + "display_text": "ttl" + }, + { + "name": "serialized-value", + "type": "string", + "display_text": "serialized-value" + }, + { + "name": "replace", + "type": "pure-token", + "display_text": "replace", + "token": "REPLACE", + "since": "3.0.0", + "optional": true + }, + { + "name": "absttl", + "type": "pure-token", + "display_text": "absttl", + "token": "ABSTTL", + "since": "5.0.0", + "optional": true + }, + { + "name": "seconds", + "type": "integer", + "display_text": "seconds", + "token": "IDLETIME", + "since": "5.0.0", + "optional": true + }, + { + "name": "frequency", + "type": "integer", + "display_text": "frequency", + "token": "FREQ", + "since": "5.0.0", + "optional": true + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "RESTORE-ASKING": { + "summary": "An internal command for migrating keys in a cluster.", + "since": "3.0.0", + "group": "server", + "complexity": "O(1) to create the new key and additional O(N*M) to reconstruct the serialized value, where N is the number of Redis objects composing the value and M their average size. For small string values the time complexity is thus O(1)+O(1*M) where M is small, so simply O(1). However for sorted set values the complexity is O(N*M*log(N)) because inserting values into sorted sets is O(log(N)).", + "history": [ + [ + "3.0.0", + "Added the `REPLACE` modifier." + ], + [ + "5.0.0", + "Added the `ABSTTL` modifier." + ], + [ + "5.0.0", + "Added the `IDLETIME` and `FREQ` options." 
+ ] + ], + "acl_categories": [ + "@keyspace", + "@write", + "@slow", + "@dangerous" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "ttl", + "type": "integer", + "display_text": "ttl" + }, + { + "name": "serialized-value", + "type": "string", + "display_text": "serialized-value" + }, + { + "name": "replace", + "type": "pure-token", + "display_text": "replace", + "token": "REPLACE", + "since": "3.0.0", + "optional": true + }, + { + "name": "absttl", + "type": "pure-token", + "display_text": "absttl", + "token": "ABSTTL", + "since": "5.0.0", + "optional": true + }, + { + "name": "seconds", + "type": "integer", + "display_text": "seconds", + "token": "IDLETIME", + "since": "5.0.0", + "optional": true + }, + { + "name": "frequency", + "type": "integer", + "display_text": "frequency", + "token": "FREQ", + "since": "5.0.0", + "optional": true + } + ], + "command_flags": [ + "write", + "denyoom", + "asking" + ], + "doc_flags": [ + "syscmd" + ] + }, + "ROLE": { + "summary": "Returns the replication role.", + "since": "2.8.12", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@fast", + "@dangerous" + ], + "arity": 1, + "command_flags": [ + "noscript", + "loading", + "stale", + "fast" + ] + }, + "RPOP": { + "summary": "Returns and removes the last elements of a list. Deletes the list if the last element was popped.", + "since": "1.0.0", + "group": "list", + "complexity": "O(N) where N is the number of elements returned", + "history": [ + [ + "6.2.0", + "Added the `count` argument." + ] + ], + "acl_categories": [ + "@write", + "@list", + "@fast" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "since": "6.2.0", + "optional": true + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "RPOPLPUSH": { + "summary": "Returns the last element of a list after removing and pushing it to another list. 
Deletes the list if the last element was popped.", + "since": "1.2.0", + "group": "list", + "complexity": "O(1)", + "deprecated_since": "6.2.0", + "replaced_by": "`LMOVE` with the `RIGHT` and `LEFT` arguments", + "acl_categories": [ + "@write", + "@list", + "@slow" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "source", + "type": "key", + "display_text": "source", + "key_spec_index": 0 + }, + { + "name": "destination", + "type": "key", + "display_text": "destination", + "key_spec_index": 1 + } + ], + "command_flags": [ + "write", + "denyoom" + ], + "doc_flags": [ + "deprecated" + ] + }, + "RPUSH": { + "summary": "Appends one or more elements to a list. Creates the key if it doesn't exist.", + "since": "1.0.0", + "group": "list", + "complexity": "O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.", + "history": [ + [ + "2.4.0", + "Accepts multiple `element` arguments." + ] + ], + "acl_categories": [ + "@write", + "@list", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "element", + "type": "string", + "display_text": "element", + "multiple": true + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "RPUSHX": { + "summary": "Appends an element to a list only when the list exists.", + "since": "2.2.0", + "group": "list", + "complexity": "O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.", + "history": [ + [ + "4.0.0", + "Accepts multiple `element` arguments." + ] + ], + "acl_categories": [ + "@write", + "@list", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "element", + "type": "string", + "display_text": "element", + "multiple": true + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "SADD": { + "summary": "Adds one or more members to a set. Creates the key if it doesn't exist.", + "since": "1.0.0", + "group": "set", + "complexity": "O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.", + "history": [ + [ + "2.4.0", + "Accepts multiple `member` arguments." 
+ ] + ], + "acl_categories": [ + "@write", + "@set", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "member", + "type": "string", + "display_text": "member", + "multiple": true + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "SAVE": { + "summary": "Synchronously saves the database(s) to disk.", + "since": "1.0.0", + "group": "server", + "complexity": "O(N) where N is the total number of keys in all databases", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 1, + "command_flags": [ + "admin", + "noscript", + "no_async_loading", + "no_multi" + ] + }, + "SCAN": { + "summary": "Iterates over the key names in the database.", + "since": "2.8.0", + "group": "generic", + "complexity": "O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection.", + "history": [ + [ + "6.0.0", + "Added the `TYPE` subcommand." + ] + ], + "acl_categories": [ + "@keyspace", + "@read", + "@slow" + ], + "arity": -2, + "arguments": [ + { + "name": "cursor", + "type": "integer", + "display_text": "cursor" + }, + { + "name": "pattern", + "type": "pattern", + "display_text": "pattern", + "token": "MATCH", + "optional": true + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT", + "optional": true + }, + { + "name": "type", + "type": "string", + "display_text": "type", + "token": "TYPE", + "since": "6.0.0", + "optional": true + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output", + "request_policy:special", + "response_policy:special" + ] + }, + "SCARD": { + "summary": "Returns the number of members in a set.", + "since": "1.0.0", + "group": "set", + "complexity": "O(1)", + "acl_categories": [ + "@read", + "@set", + "@fast" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "SCRIPT": { + "summary": "A container for Lua scripts management commands.", + "since": "2.6.0", + "group": "scripting", + "complexity": "Depends on subcommand.", + "acl_categories": [ + "@slow" + ], + "arity": -2 + }, + "SCRIPT DEBUG": { + "summary": "Sets the debug mode of server-side Lua scripts.", + "since": "3.2.0", + "group": "scripting", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@scripting" + ], + "arity": 3, + "arguments": [ + { + "name": "mode", + "type": "oneof", + "arguments": [ + { + "name": "yes", + "type": "pure-token", + "display_text": "yes", + "token": "YES" + }, + { + "name": "sync", + "type": "pure-token", + "display_text": "sync", + "token": "SYNC" + }, + { + "name": "no", + "type": "pure-token", + "display_text": "no", + "token": "NO" + } + ] + } + ], + "command_flags": [ + "noscript" + ] + }, + "SCRIPT EXISTS": { + "summary": "Determines whether server-side Lua scripts exist in the script cache.", + 
"since": "2.6.0", + "group": "scripting", + "complexity": "O(N) with N being the number of scripts to check (so checking a single script is an O(1) operation).", + "acl_categories": [ + "@slow", + "@scripting" + ], + "arity": -3, + "arguments": [ + { + "name": "sha1", + "type": "string", + "display_text": "sha1", + "multiple": true + } + ], + "command_flags": [ + "noscript" + ], + "hints": [ + "request_policy:all_shards", + "response_policy:agg_logical_and" + ] + }, + "SCRIPT FLUSH": { + "summary": "Removes all server-side Lua scripts from the script cache.", + "since": "2.6.0", + "group": "scripting", + "complexity": "O(N) with N being the number of scripts in cache", + "history": [ + [ + "6.2.0", + "Added the `ASYNC` and `SYNC` flushing mode modifiers." + ] + ], + "acl_categories": [ + "@slow", + "@scripting" + ], + "arity": -2, + "arguments": [ + { + "name": "flush-type", + "type": "oneof", + "since": "6.2.0", + "optional": true, + "arguments": [ + { + "name": "async", + "type": "pure-token", + "display_text": "async", + "token": "ASYNC" + }, + { + "name": "sync", + "type": "pure-token", + "display_text": "sync", + "token": "SYNC" + } + ] + } + ], + "command_flags": [ + "noscript" + ], + "hints": [ + "request_policy:all_nodes", + "response_policy:all_succeeded" + ] + }, + "SCRIPT HELP": { + "summary": "Returns helpful text about the different subcommands.", + "since": "5.0.0", + "group": "scripting", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@scripting" + ], + "arity": 2, + "command_flags": [ + "loading", + "stale" + ] + }, + "SCRIPT KILL": { + "summary": "Terminates a server-side Lua script during execution.", + "since": "2.6.0", + "group": "scripting", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@scripting" + ], + "arity": 2, + "command_flags": [ + "noscript", + "allow_busy" + ], + "hints": [ + "request_policy:all_shards", + "response_policy:one_succeeded" + ] + }, + "SCRIPT LOAD": { + "summary": "Loads a server-side Lua script to the script cache.", + "since": "2.6.0", + "group": "scripting", + "complexity": "O(N) with N being the length in bytes of the script body.", + "acl_categories": [ + "@slow", + "@scripting" + ], + "arity": 3, + "arguments": [ + { + "name": "script", + "type": "string", + "display_text": "script" + } + ], + "command_flags": [ + "noscript", + "stale" + ], + "hints": [ + "request_policy:all_nodes", + "response_policy:all_succeeded" + ] + }, + "SDIFF": { + "summary": "Returns the difference of multiple sets.", + "since": "1.0.0", + "group": "set", + "complexity": "O(N) where N is the total number of elements in all given sets.", + "acl_categories": [ + "@read", + "@set", + "@slow" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output_order" + ] + }, + "SDIFFSTORE": { + "summary": "Stores the difference of multiple sets in a key.", + "since": "1.0.0", + "group": "set", + "complexity": "O(N) where N is the total number of elements in all given sets.", + "acl_categories": [ + "@write", + "@set", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + 
"find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "destination", + "type": "key", + "display_text": "destination", + "key_spec_index": 0 + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 1, + "multiple": true + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "SELECT": { + "summary": "Changes the selected database.", + "since": "1.0.0", + "group": "connection", + "complexity": "O(1)", + "acl_categories": [ + "@fast", + "@connection" + ], + "arity": 2, + "arguments": [ + { + "name": "index", + "type": "integer", + "display_text": "index" + } + ], + "command_flags": [ + "loading", + "stale", + "fast" + ] + }, + "SET": { + "summary": "Sets the string value of a key, ignoring its type. The key is created if it doesn't exist.", + "since": "1.0.0", + "group": "string", + "complexity": "O(1)", + "history": [ + [ + "2.6.12", + "Added the `EX`, `PX`, `NX` and `XX` options." + ], + [ + "6.0.0", + "Added the `KEEPTTL` option." + ], + [ + "6.2.0", + "Added the `GET`, `EXAT` and `PXAT` option." + ], + [ + "7.0.0", + "Allowed the `NX` and `GET` options to be used together." + ] + ], + "acl_categories": [ + "@write", + "@string", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "notes": "RW and ACCESS due to the optional `GET` argument", + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "update": true, + "variable_flags": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "value", + "type": "string", + "display_text": "value" + }, + { + "name": "condition", + "type": "oneof", + "since": "2.6.12", + "optional": true, + "arguments": [ + { + "name": "nx", + "type": "pure-token", + "display_text": "nx", + "token": "NX" + }, + { + "name": "xx", + "type": "pure-token", + "display_text": "xx", + "token": "XX" + } + ] + }, + { + "name": "get", + "type": "pure-token", + "display_text": "get", + "token": "GET", + "since": "6.2.0", + "optional": true + }, + { + "name": "expiration", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "seconds", + "type": "integer", + "display_text": "seconds", + "token": "EX", + "since": "2.6.12" + }, + { + "name": "milliseconds", + "type": "integer", + "display_text": "milliseconds", + "token": "PX", + "since": "2.6.12" + }, + { + "name": "unix-time-seconds", + "type": "unix-time", + "display_text": "unix-time-seconds", + "token": "EXAT", + "since": "6.2.0" + }, + { + "name": "unix-time-milliseconds", + "type": "unix-time", + "display_text": "unix-time-milliseconds", + "token": "PXAT", + "since": "6.2.0" + }, + { + "name": "keepttl", + "type": "pure-token", + "display_text": "keepttl", + "token": "KEEPTTL", + "since": "6.0.0" + } + ] + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "SETBIT": { + "summary": "Sets or clears the bit at offset of the string value. 
Creates the key if it doesn't exist.", + "since": "2.2.0", + "group": "bitmap", + "complexity": "O(1)", + "acl_categories": [ + "@write", + "@bitmap", + "@slow" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "offset", + "type": "integer", + "display_text": "offset" + }, + { + "name": "value", + "type": "integer", + "display_text": "value" + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "SETEX": { + "summary": "Sets the string value and expiration time of a key. Creates the key if it doesn't exist.", + "since": "2.0.0", + "group": "string", + "complexity": "O(1)", + "deprecated_since": "2.6.12", + "replaced_by": "`SET` with the `EX` argument", + "acl_categories": [ + "@write", + "@string", + "@slow" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "seconds", + "type": "integer", + "display_text": "seconds" + }, + { + "name": "value", + "type": "string", + "display_text": "value" + } + ], + "command_flags": [ + "write", + "denyoom" + ], + "doc_flags": [ + "deprecated" + ] + }, + "SETNX": { + "summary": "Set the string value of a key only when the key doesn't exist.", + "since": "1.0.0", + "group": "string", + "complexity": "O(1)", + "deprecated_since": "2.6.12", + "replaced_by": "`SET` with the `NX` argument", + "acl_categories": [ + "@write", + "@string", + "@fast" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "value", + "type": "string", + "display_text": "value" + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ], + "doc_flags": [ + "deprecated" + ] + }, + "SETRANGE": { + "summary": "Overwrites a part of a string value with another by an offset. Creates the key if it doesn't exist.", + "since": "2.2.0", + "group": "string", + "complexity": "O(1), not counting the time taken to copy the new string in place. Usually, this string is very small so the amortized complexity is O(1). 
Otherwise, complexity is O(M) with M being the length of the value argument.", + "acl_categories": [ + "@write", + "@string", + "@slow" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "offset", + "type": "integer", + "display_text": "offset" + }, + { + "name": "value", + "type": "string", + "display_text": "value" + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "SHUTDOWN": { + "summary": "Synchronously saves the database(s) to disk and shuts down the Redis server.", + "since": "1.0.0", + "group": "server", + "complexity": "O(N) when saving, where N is the total number of keys in all databases when saving data, otherwise O(1)", + "history": [ + [ + "7.0.0", + "Added the `NOW`, `FORCE` and `ABORT` modifiers." + ] + ], + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -1, + "arguments": [ + { + "name": "save-selector", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "nosave", + "type": "pure-token", + "display_text": "nosave", + "token": "NOSAVE" + }, + { + "name": "save", + "type": "pure-token", + "display_text": "save", + "token": "SAVE" + } + ] + }, + { + "name": "now", + "type": "pure-token", + "display_text": "now", + "token": "NOW", + "since": "7.0.0", + "optional": true + }, + { + "name": "force", + "type": "pure-token", + "display_text": "force", + "token": "FORCE", + "since": "7.0.0", + "optional": true + }, + { + "name": "abort", + "type": "pure-token", + "display_text": "abort", + "token": "ABORT", + "since": "7.0.0", + "optional": true + } + ], + "command_flags": [ + "admin", + "noscript", + "loading", + "stale", + "no_multi", + "allow_busy" + ] + }, + "SINTER": { + "summary": "Returns the intersect of multiple sets.", + "since": "1.0.0", + "group": "set", + "complexity": "O(N*M) worst case where N is the cardinality of the smallest set and M is the number of sets.", + "acl_categories": [ + "@read", + "@set", + "@slow" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output_order" + ] + }, + "SINTERCARD": { + "summary": "Returns the number of members of the intersect of multiple sets.", + "since": "7.0.0", + "group": "set", + "complexity": "O(N*M) worst case where N is the cardinality of the smallest set and M is the number of sets.", + "acl_categories": [ + "@read", + "@set", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "keynum", + "spec": { + "keynumidx": 0, + "firstkey": 1, + "keystep": 1 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + }, + { + "name": "limit", + "type": "integer", + 
"display_text": "limit", + "token": "LIMIT", + "optional": true + } + ], + "command_flags": [ + "readonly", + "movablekeys" + ] + }, + "SINTERSTORE": { + "summary": "Stores the intersect of multiple sets in a key.", + "since": "1.0.0", + "group": "set", + "complexity": "O(N*M) worst case where N is the cardinality of the smallest set and M is the number of sets.", + "acl_categories": [ + "@write", + "@set", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "update": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "destination", + "type": "key", + "display_text": "destination", + "key_spec_index": 0 + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 1, + "multiple": true + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "SISMEMBER": { + "summary": "Determines whether a member belongs to a set.", + "since": "1.0.0", + "group": "set", + "complexity": "O(1)", + "acl_categories": [ + "@read", + "@set", + "@fast" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "member", + "type": "string", + "display_text": "member" + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "SLAVEOF": { + "summary": "Sets a Redis server as a replica of another, or promotes it to being a master.", + "since": "1.0.0", + "group": "server", + "complexity": "O(1)", + "deprecated_since": "5.0.0", + "replaced_by": "`REPLICAOF`", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 3, + "arguments": [ + { + "name": "args", + "type": "oneof", + "arguments": [ + { + "name": "host-port", + "type": "block", + "arguments": [ + { + "name": "host", + "type": "string", + "display_text": "host" + }, + { + "name": "port", + "type": "integer", + "display_text": "port" + } + ] + }, + { + "name": "no-one", + "type": "block", + "arguments": [ + { + "name": "no", + "type": "pure-token", + "display_text": "no", + "token": "NO" + }, + { + "name": "one", + "type": "pure-token", + "display_text": "one", + "token": "ONE" + } + ] + } + ] + } + ], + "command_flags": [ + "admin", + "noscript", + "stale", + "no_async_loading" + ], + "doc_flags": [ + "deprecated" + ] + }, + "SLOWLOG": { + "summary": "A container for slow log commands.", + "since": "2.2.12", + "group": "server", + "complexity": "Depends on subcommand.", + "acl_categories": [ + "@slow" + ], + "arity": -2 + }, + "SLOWLOG GET": { + "summary": "Returns the slow log's entries.", + "since": "2.2.12", + "group": "server", + "complexity": "O(N) where N is the number of entries returned", + "history": [ + [ + "4.0.0", + "Added client IP address, port and name to the reply." 
+ ] + ], + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": -2, + "arguments": [ + { + "name": "count", + "type": "integer", + "display_text": "count", + "optional": true + } + ], + "command_flags": [ + "admin", + "loading", + "stale" + ], + "hints": [ + "request_policy:all_nodes", + "nondeterministic_output" + ] + }, + "SLOWLOG HELP": { + "summary": "Returns helpful text about the different subcommands.", + "since": "6.2.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@slow" + ], + "arity": 2, + "command_flags": [ + "loading", + "stale" + ] + }, + "SLOWLOG LEN": { + "summary": "Returns the number of entries in the slow log.", + "since": "2.2.12", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 2, + "command_flags": [ + "admin", + "loading", + "stale" + ], + "hints": [ + "request_policy:all_nodes", + "response_policy:agg_sum", + "nondeterministic_output" + ] + }, + "SLOWLOG RESET": { + "summary": "Clears all entries from the slow log.", + "since": "2.2.12", + "group": "server", + "complexity": "O(N) where N is the number of entries in the slowlog", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 2, + "command_flags": [ + "admin", + "loading", + "stale" + ], + "hints": [ + "request_policy:all_nodes", + "response_policy:all_succeeded" + ] + }, + "SMEMBERS": { + "summary": "Returns all members of a set.", + "since": "1.0.0", + "group": "set", + "complexity": "O(N) where N is the set cardinality.", + "acl_categories": [ + "@read", + "@set", + "@slow" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output_order" + ] + }, + "SMISMEMBER": { + "summary": "Determines whether multiple members belong to a set.", + "since": "6.2.0", + "group": "set", + "complexity": "O(N) where N is the number of elements being checked for membership", + "acl_categories": [ + "@read", + "@set", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "member", + "type": "string", + "display_text": "member", + "multiple": true + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "SMOVE": { + "summary": "Moves a member from one set to another.", + "since": "1.0.0", + "group": "set", + "complexity": "O(1)", + "acl_categories": [ + "@write", + "@set", + "@fast" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "insert": true + } + ], + "arguments": [ + {
+ "name": "source", + "type": "key", + "display_text": "source", + "key_spec_index": 0 + }, + { + "name": "destination", + "type": "key", + "display_text": "destination", + "key_spec_index": 1 + }, + { + "name": "member", + "type": "string", + "display_text": "member" + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "SORT": { + "summary": "Sorts the elements in a list, a set, or a sorted set, optionally storing the result.", + "since": "1.0.0", + "group": "generic", + "complexity": "O(N+M*log(M)) where N is the number of elements in the list or set to sort, and M the number of returned elements. When the elements are not sorted, complexity is O(N).", + "acl_categories": [ + "@write", + "@set", + "@sortedset", + "@list", + "@slow", + "@dangerous" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + }, + { + "notes": "For the optional BY/GET keyword. It is marked 'unknown' because the key names derive from the content of the key we sort", + "begin_search": { + "type": "unknown", + "spec": {} + }, + "find_keys": { + "type": "unknown", + "spec": {} + }, + "RO": true, + "access": true + }, + { + "notes": "For the optional STORE keyword. It is marked 'unknown' because the keyword can appear anywhere in the argument array", + "begin_search": { + "type": "unknown", + "spec": {} + }, + "find_keys": { + "type": "unknown", + "spec": {} + }, + "OW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "by-pattern", + "type": "pattern", + "display_text": "pattern", + "key_spec_index": 1, + "token": "BY", + "optional": true + }, + { + "name": "limit", + "type": "block", + "token": "LIMIT", + "optional": true, + "arguments": [ + { + "name": "offset", + "type": "integer", + "display_text": "offset" + }, + { + "name": "count", + "type": "integer", + "display_text": "count" + } + ] + }, + { + "name": "get-pattern", + "type": "pattern", + "display_text": "pattern", + "key_spec_index": 1, + "token": "GET", + "optional": true, + "multiple": true, + "multiple_token": true + }, + { + "name": "order", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "asc", + "type": "pure-token", + "display_text": "asc", + "token": "ASC" + }, + { + "name": "desc", + "type": "pure-token", + "display_text": "desc", + "token": "DESC" + } + ] + }, + { + "name": "sorting", + "type": "pure-token", + "display_text": "sorting", + "token": "ALPHA", + "optional": true + }, + { + "name": "destination", + "type": "key", + "display_text": "destination", + "key_spec_index": 2, + "token": "STORE", + "optional": true + } + ], + "command_flags": [ + "write", + "denyoom", + "movablekeys" + ] + }, + "SORT_RO": { + "summary": "Returns the sorted elements of a list, a set, or a sorted set.", + "since": "7.0.0", + "group": "generic", + "complexity": "O(N+M*log(M)) where N is the number of elements in the list or set to sort, and M the number of returned elements. 
When the elements are not sorted, complexity is O(N).", + "acl_categories": [ + "@read", + "@set", + "@sortedset", + "@list", + "@slow", + "@dangerous" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + }, + { + "notes": "For the optional BY/GET keyword. It is marked 'unknown' because the key names derive from the content of the key we sort", + "begin_search": { + "type": "unknown", + "spec": {} + }, + "find_keys": { + "type": "unknown", + "spec": {} + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "by-pattern", + "type": "pattern", + "display_text": "pattern", + "key_spec_index": 1, + "token": "BY", + "optional": true + }, + { + "name": "limit", + "type": "block", + "token": "LIMIT", + "optional": true, + "arguments": [ + { + "name": "offset", + "type": "integer", + "display_text": "offset" + }, + { + "name": "count", + "type": "integer", + "display_text": "count" + } + ] + }, + { + "name": "get-pattern", + "type": "pattern", + "display_text": "pattern", + "key_spec_index": 1, + "token": "GET", + "optional": true, + "multiple": true, + "multiple_token": true + }, + { + "name": "order", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "asc", + "type": "pure-token", + "display_text": "asc", + "token": "ASC" + }, + { + "name": "desc", + "type": "pure-token", + "display_text": "desc", + "token": "DESC" + } + ] + }, + { + "name": "sorting", + "type": "pure-token", + "display_text": "sorting", + "token": "ALPHA", + "optional": true + } + ], + "command_flags": [ + "readonly", + "movablekeys" + ] + }, + "SPOP": { + "summary": "Returns one or more random members from a set after removing them. Deletes the set if the last member was popped.", + "since": "1.0.0", + "group": "set", + "complexity": "Without the count argument O(1), otherwise O(N) where N is the value of the passed count.", + "history": [ + [ + "3.2.0", + "Added the `count` argument." 
+ ] + ], + "acl_categories": [ + "@write", + "@set", + "@fast" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "since": "3.2.0", + "optional": true + } + ], + "command_flags": [ + "write", + "fast" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "SPUBLISH": { + "summary": "Posts a message to a shard channel.", + "since": "7.0.0", + "group": "pubsub", + "complexity": "O(N) where N is the number of clients subscribed to the receiving shard channel.", + "acl_categories": [ + "@pubsub", + "@fast" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "not_key": true + } + ], + "arguments": [ + { + "name": "shardchannel", + "type": "string", + "display_text": "shardchannel" + }, + { + "name": "message", + "type": "string", + "display_text": "message" + } + ], + "command_flags": [ + "pubsub", + "loading", + "stale", + "fast" + ] + }, + "SRANDMEMBER": { + "summary": "Returns one or more random members from a set.", + "since": "1.0.0", + "group": "set", + "complexity": "Without the count argument O(1), otherwise O(N) where N is the absolute value of the passed count.", + "history": [ + [ + "2.6.0", + "Added the optional `count` argument." + ] + ], + "acl_categories": [ + "@read", + "@set", + "@slow" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "since": "2.6.0", + "optional": true + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "SREM": { + "summary": "Removes one or more members from a set. Deletes the set if the last member was removed.", + "since": "1.0.0", + "group": "set", + "complexity": "O(N) where N is the number of members to be removed.", + "history": [ + [ + "2.4.0", + "Accepts multiple `member` arguments." + ] + ], + "acl_categories": [ + "@write", + "@set", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "member", + "type": "string", + "display_text": "member", + "multiple": true + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "SSCAN": { + "summary": "Iterates over members of a set.", + "since": "2.8.0", + "group": "set", + "complexity": "O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0.
N is the number of elements inside the collection.", + "acl_categories": [ + "@read", + "@set", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "cursor", + "type": "integer", + "display_text": "cursor" + }, + { + "name": "pattern", + "type": "pattern", + "display_text": "pattern", + "token": "MATCH", + "optional": true + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT", + "optional": true + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "SSUBSCRIBE": { + "summary": "Listens for messages published to shard channels.", + "since": "7.0.0", + "group": "pubsub", + "complexity": "O(N) where N is the number of shard channels to subscribe to.", + "acl_categories": [ + "@pubsub", + "@slow" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "not_key": true + } + ], + "arguments": [ + { + "name": "shardchannel", + "type": "string", + "display_text": "shardchannel", + "multiple": true + } + ], + "command_flags": [ + "pubsub", + "noscript", + "loading", + "stale" + ] + }, + "STRLEN": { + "summary": "Returns the length of a string value.", + "since": "2.2.0", + "group": "string", + "complexity": "O(1)", + "acl_categories": [ + "@read", + "@string", + "@fast" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "SUBSCRIBE": { + "summary": "Listens for messages published to channels.", + "since": "2.0.0", + "group": "pubsub", + "complexity": "O(N) where N is the number of channels to subscribe to.", + "acl_categories": [ + "@pubsub", + "@slow" + ], + "arity": -2, + "arguments": [ + { + "name": "channel", + "type": "string", + "display_text": "channel", + "multiple": true + } + ], + "command_flags": [ + "pubsub", + "noscript", + "loading", + "stale" + ] + }, + "SUBSTR": { + "summary": "Returns a substring from a string value.", + "since": "1.0.0", + "group": "string", + "complexity": "O(N) where N is the length of the returned string. 
The complexity is ultimately determined by the returned length, but because creating a substring from an existing string is very cheap, it can be considered O(1) for small strings.", + "deprecated_since": "2.0.0", + "replaced_by": "`GETRANGE`", + "acl_categories": [ + "@read", + "@string", + "@slow" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "start", + "type": "integer", + "display_text": "start" + }, + { + "name": "end", + "type": "integer", + "display_text": "end" + } + ], + "command_flags": [ + "readonly" + ], + "doc_flags": [ + "deprecated" + ] + }, + "SUNION": { + "summary": "Returns the union of multiple sets.", + "since": "1.0.0", + "group": "set", + "complexity": "O(N) where N is the total number of elements in all given sets.", + "acl_categories": [ + "@read", + "@set", + "@slow" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output_order" + ] + }, + "SUNIONSTORE": { + "summary": "Stores the union of multiple sets in a key.", + "since": "1.0.0", + "group": "set", + "complexity": "O(N) where N is the total number of elements in all given sets.", + "acl_categories": [ + "@write", + "@set", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "destination", + "type": "key", + "display_text": "destination", + "key_spec_index": 0 + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 1, + "multiple": true + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "SUNSUBSCRIBE": { + "summary": "Stops listening to messages posted to shard channels.", + "since": "7.0.0", + "group": "pubsub", + "complexity": "O(N) where N is the number of shard channels to unsubscribe.", + "acl_categories": [ + "@pubsub", + "@slow" + ], + "arity": -1, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "not_key": true + } + ], + "arguments": [ + { + "name": "shardchannel", + "type": "string", + "display_text": "shardchannel", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "pubsub", + "noscript", + "loading", + "stale" + ] + }, + "SWAPDB": { + "summary": "Swaps two Redis databases.", + "since": "4.0.0", + "group": "server", + "complexity": "O(N) where N is the count of clients watching or blocking on keys from both 
databases.", + "acl_categories": [ + "@keyspace", + "@write", + "@fast", + "@dangerous" + ], + "arity": 3, + "arguments": [ + { + "name": "index1", + "type": "integer", + "display_text": "index1" + }, + { + "name": "index2", + "type": "integer", + "display_text": "index2" + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "SYNC": { + "summary": "An internal command used in replication.", + "since": "1.0.0", + "group": "server", + "acl_categories": [ + "@admin", + "@slow", + "@dangerous" + ], + "arity": 1, + "command_flags": [ + "admin", + "noscript", + "no_async_loading", + "no_multi" + ] + }, + "TIME": { + "summary": "Returns the server time.", + "since": "2.6.0", + "group": "server", + "complexity": "O(1)", + "acl_categories": [ + "@fast" + ], + "arity": 1, + "command_flags": [ + "loading", + "stale", + "fast" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "TOUCH": { + "summary": "Returns the number of existing keys out of those specified after updating the time they were last accessed.", + "since": "3.2.1", + "group": "generic", + "complexity": "O(N) where N is the number of keys that will be touched.", + "acl_categories": [ + "@keyspace", + "@read", + "@fast" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + } + ], + "command_flags": [ + "readonly", + "fast" + ], + "hints": [ + "request_policy:multi_shard", + "response_policy:agg_sum" + ] + }, + "TTL": { + "summary": "Returns the expiration time in seconds of a key.", + "since": "1.0.0", + "group": "generic", + "complexity": "O(1)", + "history": [ + [ + "2.8.0", + "Added the -2 reply." + ] + ], + "acl_categories": [ + "@keyspace", + "@read", + "@fast" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly", + "fast" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "TYPE": { + "summary": "Determines the type of value stored at a key.", + "since": "1.0.0", + "group": "generic", + "complexity": "O(1)", + "acl_categories": [ + "@keyspace", + "@read", + "@fast" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "UNLINK": { + "summary": "Asynchronously deletes one or more keys.", + "since": "4.0.0", + "group": "generic", + "complexity": "O(1) for each key removed regardless of its size. 
Then the command does O(N) work in a different thread in order to reclaim memory, where N is the number of allocations the deleted objects were composed of.", + "acl_categories": [ + "@keyspace", + "@write", + "@fast" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "RM": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + } + ], + "command_flags": [ + "write", + "fast" + ], + "hints": [ + "request_policy:multi_shard", + "response_policy:agg_sum" + ] + }, + "UNSUBSCRIBE": { + "summary": "Stops listening to messages posted to channels.", + "since": "2.0.0", + "group": "pubsub", + "complexity": "O(N) where N is the number of channels to unsubscribe.", + "acl_categories": [ + "@pubsub", + "@slow" + ], + "arity": -1, + "arguments": [ + { + "name": "channel", + "type": "string", + "display_text": "channel", + "optional": true, + "multiple": true + } + ], + "command_flags": [ + "pubsub", + "noscript", + "loading", + "stale" + ] + }, + "UNWATCH": { + "summary": "Forgets about watched keys of a transaction.", + "since": "2.2.0", + "group": "transactions", + "complexity": "O(1)", + "acl_categories": [ + "@fast", + "@transaction" + ], + "arity": 1, + "command_flags": [ + "noscript", + "loading", + "stale", + "fast", + "allow_busy" + ] + }, + "WAIT": { + "summary": "Blocks until the asynchronous replication of all preceding write commands sent by the connection is completed.", + "since": "3.0.0", + "group": "generic", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": 3, + "arguments": [ + { + "name": "numreplicas", + "type": "integer", + "display_text": "numreplicas" + }, + { + "name": "timeout", + "type": "integer", + "display_text": "timeout" + } + ], + "hints": [ + "request_policy:all_shards", + "response_policy:agg_min" + ] + }, + "WAITAOF": { + "summary": "Blocks until all of the preceding write commands sent by the connection are written to the append-only file of the master and/or replicas.", + "since": "7.2.0", + "group": "generic", + "complexity": "O(1)", + "acl_categories": [ + "@slow", + "@connection" + ], + "arity": 4, + "arguments": [ + { + "name": "numlocal", + "type": "integer", + "display_text": "numlocal" + }, + { + "name": "numreplicas", + "type": "integer", + "display_text": "numreplicas" + }, + { + "name": "timeout", + "type": "integer", + "display_text": "timeout" + } + ], + "command_flags": [ + "noscript" + ], + "hints": [ + "request_policy:all_shards", + "response_policy:agg_min" + ] + }, + "WATCH": { + "summary": "Monitors changes to keys to determine the execution of a transaction.", + "since": "2.2.0", + "group": "transactions", + "complexity": "O(1) for every key.", + "acl_categories": [ + "@fast", + "@transaction" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + } + ], + "command_flags": [ + "noscript", + "loading", + "stale", + "fast", + "allow_busy" + ] + }, + "XACK": { + "summary": "Returns the number of messages that were successfully acknowledged by
the consumer group member of a stream.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(1) for each message ID processed.", + "acl_categories": [ + "@write", + "@stream", + "@fast" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "group", + "type": "string", + "display_text": "group" + }, + { + "name": "id", + "type": "string", + "display_text": "id", + "multiple": true + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "XADD": { + "summary": "Appends a new message to a stream. Creates the key if it doesn't exist.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(1) when adding a new entry, O(N) when trimming, where N is the number of entries evicted.", + "history": [ + [ + "6.2.0", + "Added the `NOMKSTREAM` option, `MINID` trimming strategy and the `LIMIT` option." + ], + [ + "7.0.0", + "Added support for the `-*` explicit ID form." + ] + ], + "acl_categories": [ + "@write", + "@stream", + "@fast" + ], + "arity": -5, + "key_specs": [ + { + "notes": "UPDATE instead of INSERT because of the optional trimming feature", + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "nomkstream", + "type": "pure-token", + "display_text": "nomkstream", + "token": "NOMKSTREAM", + "since": "6.2.0", + "optional": true + }, + { + "name": "trim", + "type": "block", + "optional": true, + "arguments": [ + { + "name": "strategy", + "type": "oneof", + "arguments": [ + { + "name": "maxlen", + "type": "pure-token", + "display_text": "maxlen", + "token": "MAXLEN" + }, + { + "name": "minid", + "type": "pure-token", + "display_text": "minid", + "token": "MINID", + "since": "6.2.0" + } + ] + }, + { + "name": "operator", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "equal", + "type": "pure-token", + "display_text": "equal", + "token": "=" + }, + { + "name": "approximately", + "type": "pure-token", + "display_text": "approximately", + "token": "~" + } + ] + }, + { + "name": "threshold", + "type": "string", + "display_text": "threshold" + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "LIMIT", + "since": "6.2.0", + "optional": true + } + ] + }, + { + "name": "id-selector", + "type": "oneof", + "arguments": [ + { + "name": "auto-id", + "type": "pure-token", + "display_text": "auto-id", + "token": "*" + }, + { + "name": "id", + "type": "string", + "display_text": "id" + } + ] + }, + { + "name": "data", + "type": "block", + "multiple": true, + "arguments": [ + { + "name": "field", + "type": "string", + "display_text": "field" + }, + { + "name": "value", + "type": "string", + "display_text": "value" + } + ] + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "XAUTOCLAIM": { + "summary": "Changes, or acquires, ownership of messages in a consumer group, as if the messages were delivered to a consumer group member.", + "since": "6.2.0", + "group": "stream", +
"complexity": "O(1) if COUNT is small.", + "history": [ + [ + "7.0.0", + "Added an element to the reply array, containing deleted entries the command cleared from the PEL" + ] + ], + "acl_categories": [ + "@write", + "@stream", + "@fast" + ], + "arity": -6, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "group", + "type": "string", + "display_text": "group" + }, + { + "name": "consumer", + "type": "string", + "display_text": "consumer" + }, + { + "name": "min-idle-time", + "type": "string", + "display_text": "min-idle-time" + }, + { + "name": "start", + "type": "string", + "display_text": "start" + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT", + "optional": true + }, + { + "name": "justid", + "type": "pure-token", + "display_text": "justid", + "token": "JUSTID", + "optional": true + } + ], + "command_flags": [ + "write", + "fast" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "XCLAIM": { + "summary": "Changes, or acquires, ownership of a message in a consumer group, as if the message was delivered a consumer group member.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(log N) with N being the number of messages in the PEL of the consumer group.", + "acl_categories": [ + "@write", + "@stream", + "@fast" + ], + "arity": -6, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "group", + "type": "string", + "display_text": "group" + }, + { + "name": "consumer", + "type": "string", + "display_text": "consumer" + }, + { + "name": "min-idle-time", + "type": "string", + "display_text": "min-idle-time" + }, + { + "name": "id", + "type": "string", + "display_text": "id", + "multiple": true + }, + { + "name": "ms", + "type": "integer", + "display_text": "ms", + "token": "IDLE", + "optional": true + }, + { + "name": "unix-time-milliseconds", + "type": "unix-time", + "display_text": "unix-time-milliseconds", + "token": "TIME", + "optional": true + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "RETRYCOUNT", + "optional": true + }, + { + "name": "force", + "type": "pure-token", + "display_text": "force", + "token": "FORCE", + "optional": true + }, + { + "name": "justid", + "type": "pure-token", + "display_text": "justid", + "token": "JUSTID", + "optional": true + }, + { + "name": "lastid", + "type": "string", + "display_text": "lastid", + "token": "LASTID", + "optional": true + } + ], + "command_flags": [ + "write", + "fast" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "XDEL": { + "summary": "Returns the number of messages after removing them from a stream.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(1) for each single item to delete in the stream, regardless of the stream size.", + "acl_categories": [ + "@write", + "@stream", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": 
{ + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "id", + "type": "string", + "display_text": "id", + "multiple": true + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "XGROUP": { + "summary": "A container for consumer groups commands.", + "since": "5.0.0", + "group": "stream", + "complexity": "Depends on subcommand.", + "acl_categories": [ + "@slow" + ], + "arity": -2 + }, + "XGROUP CREATE": { + "summary": "Creates a consumer group.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(1)", + "history": [ + [ + "7.0.0", + "Added the `entries_read` named argument." + ] + ], + "acl_categories": [ + "@write", + "@stream", + "@slow" + ], + "arity": -5, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "group", + "type": "string", + "display_text": "group" + }, + { + "name": "id-selector", + "type": "oneof", + "arguments": [ + { + "name": "id", + "type": "string", + "display_text": "id" + }, + { + "name": "new-id", + "type": "pure-token", + "display_text": "new-id", + "token": "$" + } + ] + }, + { + "name": "mkstream", + "type": "pure-token", + "display_text": "mkstream", + "token": "MKSTREAM", + "optional": true + }, + { + "name": "entries-read", + "type": "integer", + "display_text": "entries-read", + "token": "ENTRIESREAD", + "optional": true + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "XGROUP CREATECONSUMER": { + "summary": "Creates a consumer in a consumer group.", + "since": "6.2.0", + "group": "stream", + "complexity": "O(1)", + "acl_categories": [ + "@write", + "@stream", + "@slow" + ], + "arity": 5, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "insert": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "group", + "type": "string", + "display_text": "group" + }, + { + "name": "consumer", + "type": "string", + "display_text": "consumer" + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "XGROUP DELCONSUMER": { + "summary": "Deletes a consumer from a consumer group.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(1)", + "acl_categories": [ + "@write", + "@stream", + "@slow" + ], + "arity": 5, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "group", + "type": "string", + "display_text": "group" + }, + { + "name": "consumer", + "type": "string", + "display_text": "consumer" + } + ], + "command_flags": [ + "write" + ] + }, + "XGROUP DESTROY": { + "summary": "Destroys a consumer group.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(N) where N is the number of 
entries in the group's pending entries list (PEL).", + "acl_categories": [ + "@write", + "@stream", + "@slow" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "group", + "type": "string", + "display_text": "group" + } + ], + "command_flags": [ + "write" + ] + }, + "XGROUP HELP": { + "summary": "Returns helpful text about the different subcommands.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(1)", + "acl_categories": [ + "@stream", + "@slow" + ], + "arity": 2, + "command_flags": [ + "loading", + "stale" + ] + }, + "XGROUP SETID": { + "summary": "Sets the last-delivered ID of a consumer group.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(1)", + "history": [ + [ + "7.0.0", + "Added the optional `entries_read` argument." + ] + ], + "acl_categories": [ + "@write", + "@stream", + "@slow" + ], + "arity": -5, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "group", + "type": "string", + "display_text": "group" + }, + { + "name": "id-selector", + "type": "oneof", + "arguments": [ + { + "name": "id", + "type": "string", + "display_text": "id" + }, + { + "name": "new-id", + "type": "pure-token", + "display_text": "new-id", + "token": "$" + } + ] + }, + { + "name": "entriesread", + "type": "integer", + "display_text": "entries-read", + "token": "ENTRIESREAD", + "optional": true + } + ], + "command_flags": [ + "write" + ] + }, + "XINFO": { + "summary": "A container for stream introspection commands.", + "since": "5.0.0", + "group": "stream", + "complexity": "Depends on subcommand.", + "acl_categories": [ + "@slow" + ], + "arity": -2 + }, + "XINFO CONSUMERS": { + "summary": "Returns a list of the consumers in a consumer group.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(1)", + "history": [ + [ + "7.2.0", + "Added the `inactive` field." 
+ ] + ], + "acl_categories": [ + "@read", + "@stream", + "@slow" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "group", + "type": "string", + "display_text": "group" + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "XINFO GROUPS": { + "summary": "Returns a list of the consumer groups of a stream.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(1)", + "history": [ + [ + "7.0.0", + "Added the `entries-read` and `lag` fields" + ] + ], + "acl_categories": [ + "@read", + "@stream", + "@slow" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly" + ] + }, + "XINFO HELP": { + "summary": "Returns helpful text about the different subcommands.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(1)", + "acl_categories": [ + "@stream", + "@slow" + ], + "arity": 2, + "command_flags": [ + "loading", + "stale" + ] + }, + "XINFO STREAM": { + "summary": "Returns information about a stream.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(1)", + "history": [ + [ + "6.0.0", + "Added the `FULL` modifier." + ], + [ + "7.0.0", + "Added the `max-deleted-entry-id`, `entries-added`, `recorded-first-entry-id`, `entries-read` and `lag` fields" + ], + [ + "7.2.0", + "Added the `active-time` field, and changed the meaning of `seen-time`." 
+ ] + ], + "acl_categories": [ + "@read", + "@stream", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "full-block", + "type": "block", + "optional": true, + "arguments": [ + { + "name": "full", + "type": "pure-token", + "display_text": "full", + "token": "FULL" + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT", + "optional": true + } + ] + } + ], + "command_flags": [ + "readonly" + ] + }, + "XLEN": { + "summary": "Return the number of messages in a stream.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(1)", + "acl_categories": [ + "@read", + "@stream", + "@fast" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "XPENDING": { + "summary": "Returns the information and entries from a stream consumer group's pending entries list.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(N) with N being the number of elements returned, so asking for a small fixed number of entries per call is O(1). O(M), where M is the total number of entries scanned when used with the IDLE filter. When the command returns just the summary and the list of consumers is small, it runs in O(1) time; otherwise, an additional O(N) time for iterating every consumer.", + "history": [ + [ + "6.2.0", + "Added the `IDLE` option and exclusive range intervals." + ] + ], + "acl_categories": [ + "@read", + "@stream", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "group", + "type": "string", + "display_text": "group" + }, + { + "name": "filters", + "type": "block", + "optional": true, + "arguments": [ + { + "name": "min-idle-time", + "type": "integer", + "display_text": "min-idle-time", + "token": "IDLE", + "since": "6.2.0", + "optional": true + }, + { + "name": "start", + "type": "string", + "display_text": "start" + }, + { + "name": "end", + "type": "string", + "display_text": "end" + }, + { + "name": "count", + "type": "integer", + "display_text": "count" + }, + { + "name": "consumer", + "type": "string", + "display_text": "consumer", + "optional": true + } + ] + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "XRANGE": { + "summary": "Returns the messages from a stream within a range of IDs.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(N) with N being the number of elements being returned. If N is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(1).", + "history": [ + [ + "6.2.0", + "Added exclusive ranges." 
+ ] + ], + "acl_categories": [ + "@read", + "@stream", + "@slow" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "start", + "type": "string", + "display_text": "start" + }, + { + "name": "end", + "type": "string", + "display_text": "end" + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT", + "optional": true + } + ], + "command_flags": [ + "readonly" + ] + }, + "XREAD": { + "summary": "Returns messages from multiple streams with IDs greater than the ones requested. Blocks until a message is available otherwise.", + "since": "5.0.0", + "group": "stream", + "acl_categories": [ + "@read", + "@stream", + "@slow", + "@blocking" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "keyword", + "spec": { + "keyword": "STREAMS", + "startfrom": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 2 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT", + "optional": true + }, + { + "name": "milliseconds", + "type": "integer", + "display_text": "milliseconds", + "token": "BLOCK", + "optional": true + }, + { + "name": "streams", + "type": "block", + "token": "STREAMS", + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + }, + { + "name": "id", + "type": "string", + "display_text": "id", + "multiple": true + } + ] + } + ], + "command_flags": [ + "readonly", + "blocking", + "movablekeys" + ] + }, + "XREADGROUP": { + "summary": "Returns new or historical messages from a stream for a consumer in a group. Blocks until a message is available otherwise.", + "since": "5.0.0", + "group": "stream", + "complexity": "For each stream mentioned: O(M) with M being the number of elements returned. If M is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(1). 
On the other side when XREADGROUP blocks, XADD will pay the O(N) time in order to serve the N clients blocked on the stream getting new data.", + "acl_categories": [ + "@write", + "@stream", + "@slow", + "@blocking" + ], + "arity": -7, + "key_specs": [ + { + "begin_search": { + "type": "keyword", + "spec": { + "keyword": "STREAMS", + "startfrom": 4 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": -1, + "keystep": 1, + "limit": 2 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "group-block", + "type": "block", + "token": "GROUP", + "arguments": [ + { + "name": "group", + "type": "string", + "display_text": "group" + }, + { + "name": "consumer", + "type": "string", + "display_text": "consumer" + } + ] + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT", + "optional": true + }, + { + "name": "milliseconds", + "type": "integer", + "display_text": "milliseconds", + "token": "BLOCK", + "optional": true + }, + { + "name": "noack", + "type": "pure-token", + "display_text": "noack", + "token": "NOACK", + "optional": true + }, + { + "name": "streams", + "type": "block", + "token": "STREAMS", + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + }, + { + "name": "id", + "type": "string", + "display_text": "id", + "multiple": true + } + ] + } + ], + "command_flags": [ + "write", + "blocking", + "movablekeys" + ] + }, + "XREVRANGE": { + "summary": "Returns the messages from a stream within a range of IDs in reverse order.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(N) with N being the number of elements returned. If N is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(1).", + "history": [ + [ + "6.2.0", + "Added exclusive ranges." + ] + ], + "acl_categories": [ + "@read", + "@stream", + "@slow" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "end", + "type": "string", + "display_text": "end" + }, + { + "name": "start", + "type": "string", + "display_text": "start" + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT", + "optional": true + } + ], + "command_flags": [ + "readonly" + ] + }, + "XSETID": { + "summary": "An internal command for replicating stream values.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(1)", + "history": [ + [ + "7.0.0", + "Added the `entries_added` and `max_deleted_entry_id` arguments." 
+ ] + ], + "acl_categories": [ + "@write", + "@stream", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "last-id", + "type": "string", + "display_text": "last-id" + }, + { + "name": "entries-added", + "type": "integer", + "display_text": "entries-added", + "token": "ENTRIESADDED", + "since": "7.0.0", + "optional": true + }, + { + "name": "max-deleted-id", + "type": "string", + "display_text": "max-deleted-id", + "token": "MAXDELETEDID", + "since": "7.0.0", + "optional": true + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "XTRIM": { + "summary": "Deletes messages from the beginning of a stream.", + "since": "5.0.0", + "group": "stream", + "complexity": "O(N), with N being the number of evicted entries. Constant times are very small however, since entries are organized in macro nodes containing multiple entries that can be released with a single deallocation.", + "history": [ + [ + "6.2.0", + "Added the `MINID` trimming strategy and the `LIMIT` option." + ] + ], + "acl_categories": [ + "@write", + "@stream", + "@slow" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "trim", + "type": "block", + "arguments": [ + { + "name": "strategy", + "type": "oneof", + "arguments": [ + { + "name": "maxlen", + "type": "pure-token", + "display_text": "maxlen", + "token": "MAXLEN" + }, + { + "name": "minid", + "type": "pure-token", + "display_text": "minid", + "token": "MINID", + "since": "6.2.0" + } + ] + }, + { + "name": "operator", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "equal", + "type": "pure-token", + "display_text": "equal", + "token": "=" + }, + { + "name": "approximately", + "type": "pure-token", + "display_text": "approximately", + "token": "~" + } + ] + }, + { + "name": "threshold", + "type": "string", + "display_text": "threshold" + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "LIMIT", + "since": "6.2.0", + "optional": true + } + ] + } + ], + "command_flags": [ + "write" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "ZADD": { + "summary": "Adds one or more members to a sorted set, or updates their scores. Creates the key if it doesn't exist.", + "since": "1.2.0", + "group": "sorted-set", + "complexity": "O(log(N)) for each item added, where N is the number of elements in the sorted set.", + "history": [ + [ + "2.4.0", + "Accepts multiple elements." + ], + [ + "3.0.2", + "Added the `XX`, `NX`, `CH` and `INCR` options." + ], + [ + "6.2.0", + "Added the `GT` and `LT` options." 
+ ] + ], + "acl_categories": [ + "@write", + "@sortedset", + "@fast" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "condition", + "type": "oneof", + "since": "3.0.2", + "optional": true, + "arguments": [ + { + "name": "nx", + "type": "pure-token", + "display_text": "nx", + "token": "NX" + }, + { + "name": "xx", + "type": "pure-token", + "display_text": "xx", + "token": "XX" + } + ] + }, + { + "name": "comparison", + "type": "oneof", + "since": "6.2.0", + "optional": true, + "arguments": [ + { + "name": "gt", + "type": "pure-token", + "display_text": "gt", + "token": "GT" + }, + { + "name": "lt", + "type": "pure-token", + "display_text": "lt", + "token": "LT" + } + ] + }, + { + "name": "change", + "type": "pure-token", + "display_text": "change", + "token": "CH", + "since": "3.0.2", + "optional": true + }, + { + "name": "increment", + "type": "pure-token", + "display_text": "increment", + "token": "INCR", + "since": "3.0.2", + "optional": true + }, + { + "name": "data", + "type": "block", + "multiple": true, + "arguments": [ + { + "name": "score", + "type": "double", + "display_text": "score" + }, + { + "name": "member", + "type": "string", + "display_text": "member" + } + ] + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "ZCARD": { + "summary": "Returns the number of members in a sorted set.", + "since": "1.2.0", + "group": "sorted-set", + "complexity": "O(1)", + "acl_categories": [ + "@read", + "@sortedset", + "@fast" + ], + "arity": 2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "ZCOUNT": { + "summary": "Returns the count of members in a sorted set that have scores within a range.", + "since": "2.0.0", + "group": "sorted-set", + "complexity": "O(log(N)) with N being the number of elements in the sorted set.", + "acl_categories": [ + "@read", + "@sortedset", + "@fast" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "min", + "type": "double", + "display_text": "min" + }, + { + "name": "max", + "type": "double", + "display_text": "max" + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "ZDIFF": { + "summary": "Returns the difference between multiple sorted sets.", + "since": "6.2.0", + "group": "sorted-set", + "complexity": "O(L + (N-K)log(N)) worst case where L is the total number of elements in all the sets, N is the size of the first set, and K is the size of the result set.", + "acl_categories": [ + "@read", + "@sortedset", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + 
"type": "keynum", + "spec": { + "keynumidx": 0, + "firstkey": 1, + "keystep": 1 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + }, + { + "name": "withscores", + "type": "pure-token", + "display_text": "withscores", + "token": "WITHSCORES", + "optional": true + } + ], + "command_flags": [ + "readonly", + "movablekeys" + ] + }, + "ZDIFFSTORE": { + "summary": "Stores the difference of multiple sorted sets in a key.", + "since": "6.2.0", + "group": "sorted-set", + "complexity": "O(L + (N-K)log(N)) worst case where L is the total number of elements in all the sets, N is the size of the first set, and K is the size of the result set.", + "acl_categories": [ + "@write", + "@sortedset", + "@slow" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "keynum", + "spec": { + "keynumidx": 0, + "firstkey": 1, + "keystep": 1 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "destination", + "type": "key", + "display_text": "destination", + "key_spec_index": 0 + }, + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 1, + "multiple": true + } + ], + "command_flags": [ + "write", + "denyoom", + "movablekeys" + ] + }, + "ZINCRBY": { + "summary": "Increments the score of a member in a sorted set.", + "since": "1.2.0", + "group": "sorted-set", + "complexity": "O(log(N)) where N is the number of elements in the sorted set.", + "acl_categories": [ + "@write", + "@sortedset", + "@fast" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "update": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "increment", + "type": "integer", + "display_text": "increment" + }, + { + "name": "member", + "type": "string", + "display_text": "member" + } + ], + "command_flags": [ + "write", + "denyoom", + "fast" + ] + }, + "ZINTER": { + "summary": "Returns the intersect of multiple sorted sets.", + "since": "6.2.0", + "group": "sorted-set", + "complexity": "O(N*K)+O(M*log(M)) worst case with N being the smallest input sorted set, K being the number of input sorted sets and M being the number of elements in the resulting sorted set.", + "acl_categories": [ + "@read", + "@sortedset", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "keynum", + "spec": { + "keynumidx": 0, + "firstkey": 1, + "keystep": 1 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + }, + { + "name": "weight", + "type": "integer", + "display_text": 
"weight", + "token": "WEIGHTS", + "optional": true, + "multiple": true + }, + { + "name": "aggregate", + "type": "oneof", + "token": "AGGREGATE", + "optional": true, + "arguments": [ + { + "name": "sum", + "type": "pure-token", + "display_text": "sum", + "token": "SUM" + }, + { + "name": "min", + "type": "pure-token", + "display_text": "min", + "token": "MIN" + }, + { + "name": "max", + "type": "pure-token", + "display_text": "max", + "token": "MAX" + } + ] + }, + { + "name": "withscores", + "type": "pure-token", + "display_text": "withscores", + "token": "WITHSCORES", + "optional": true + } + ], + "command_flags": [ + "readonly", + "movablekeys" + ] + }, + "ZINTERCARD": { + "summary": "Returns the number of members of the intersect of multiple sorted sets.", + "since": "7.0.0", + "group": "sorted-set", + "complexity": "O(N*K) worst case with N being the smallest input sorted set, K being the number of input sorted sets.", + "acl_categories": [ + "@read", + "@sortedset", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "keynum", + "spec": { + "keynumidx": 0, + "firstkey": 1, + "keystep": 1 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + }, + { + "name": "limit", + "type": "integer", + "display_text": "limit", + "token": "LIMIT", + "optional": true + } + ], + "command_flags": [ + "readonly", + "movablekeys" + ] + }, + "ZINTERSTORE": { + "summary": "Stores the intersect of multiple sorted sets in a key.", + "since": "2.0.0", + "group": "sorted-set", + "complexity": "O(N*K)+O(M*log(M)) worst case with N being the smallest input sorted set, K being the number of input sorted sets and M being the number of elements in the resulting sorted set.", + "acl_categories": [ + "@write", + "@sortedset", + "@slow" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "keynum", + "spec": { + "keynumidx": 0, + "firstkey": 1, + "keystep": 1 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "destination", + "type": "key", + "display_text": "destination", + "key_spec_index": 0 + }, + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 1, + "multiple": true + }, + { + "name": "weight", + "type": "integer", + "display_text": "weight", + "token": "WEIGHTS", + "optional": true, + "multiple": true + }, + { + "name": "aggregate", + "type": "oneof", + "token": "AGGREGATE", + "optional": true, + "arguments": [ + { + "name": "sum", + "type": "pure-token", + "display_text": "sum", + "token": "SUM" + }, + { + "name": "min", + "type": "pure-token", + "display_text": "min", + "token": "MIN" + }, + { + "name": "max", + "type": "pure-token", + "display_text": "max", + "token": "MAX" + } + ] + } + ], + "command_flags": [ + "write", + "denyoom", + "movablekeys" + ] + }, + "ZLEXCOUNT": { + "summary": "Returns the number of members in a sorted set within a lexicographical range.", + "since": 
"2.8.9", + "group": "sorted-set", + "complexity": "O(log(N)) with N being the number of elements in the sorted set.", + "acl_categories": [ + "@read", + "@sortedset", + "@fast" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "min", + "type": "string", + "display_text": "min" + }, + { + "name": "max", + "type": "string", + "display_text": "max" + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "ZMPOP": { + "summary": "Returns the highest- or lowest-scoring members from one or more sorted sets after removing them. Deletes the sorted set if the last member was popped.", + "since": "7.0.0", + "group": "sorted-set", + "complexity": "O(K) + O(M*log(N)) where K is the number of provided keys, N being the number of elements in the sorted set, and M being the number of elements popped.", + "acl_categories": [ + "@write", + "@sortedset", + "@slow" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "keynum", + "spec": { + "keynumidx": 0, + "firstkey": 1, + "keystep": 1 + } + }, + "RW": true, + "access": true, + "delete": true + } + ], + "arguments": [ + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + }, + { + "name": "where", + "type": "oneof", + "arguments": [ + { + "name": "min", + "type": "pure-token", + "display_text": "min", + "token": "MIN" + }, + { + "name": "max", + "type": "pure-token", + "display_text": "max", + "token": "MAX" + } + ] + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT", + "optional": true + } + ], + "command_flags": [ + "write", + "movablekeys" + ] + }, + "ZMSCORE": { + "summary": "Returns the score of one or more members in a sorted set.", + "since": "6.2.0", + "group": "sorted-set", + "complexity": "O(N) where N is the number of members being requested.", + "acl_categories": [ + "@read", + "@sortedset", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "member", + "type": "string", + "display_text": "member", + "multiple": true + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "ZPOPMAX": { + "summary": "Returns the highest-scoring members from a sorted set after removing them. 
Deletes the sorted set if the last member was popped.", + "since": "5.0.0", + "group": "sorted-set", + "complexity": "O(log(N)*M) with N being the number of elements in the sorted set, and M being the number of elements popped.", + "acl_categories": [ + "@write", + "@sortedset", + "@fast" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "optional": true + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "ZPOPMIN": { + "summary": "Returns the lowest-scoring members from a sorted set after removing them. Deletes the sorted set if the last member was popped.", + "since": "5.0.0", + "group": "sorted-set", + "complexity": "O(log(N)*M) with N being the number of elements in the sorted set, and M being the number of elements popped.", + "acl_categories": [ + "@write", + "@sortedset", + "@fast" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "access": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "optional": true + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "ZRANDMEMBER": { + "summary": "Returns one or more random members from a sorted set.", + "since": "6.2.0", + "group": "sorted-set", + "complexity": "O(N) where N is the number of members returned", + "acl_categories": [ + "@read", + "@sortedset", + "@slow" + ], + "arity": -2, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "options", + "type": "block", + "optional": true, + "arguments": [ + { + "name": "count", + "type": "integer", + "display_text": "count" + }, + { + "name": "withscores", + "type": "pure-token", + "display_text": "withscores", + "token": "WITHSCORES", + "optional": true + } + ] + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "ZRANGE": { + "summary": "Returns members in a sorted set within a range of indexes.", + "since": "1.2.0", + "group": "sorted-set", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements returned.", + "history": [ + [ + "6.2.0", + "Added the `REV`, `BYSCORE`, `BYLEX` and `LIMIT` options." 
+ ] + ], + "acl_categories": [ + "@read", + "@sortedset", + "@slow" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "start", + "type": "string", + "display_text": "start" + }, + { + "name": "stop", + "type": "string", + "display_text": "stop" + }, + { + "name": "sortby", + "type": "oneof", + "since": "6.2.0", + "optional": true, + "arguments": [ + { + "name": "byscore", + "type": "pure-token", + "display_text": "byscore", + "token": "BYSCORE" + }, + { + "name": "bylex", + "type": "pure-token", + "display_text": "bylex", + "token": "BYLEX" + } + ] + }, + { + "name": "rev", + "type": "pure-token", + "display_text": "rev", + "token": "REV", + "since": "6.2.0", + "optional": true + }, + { + "name": "limit", + "type": "block", + "token": "LIMIT", + "since": "6.2.0", + "optional": true, + "arguments": [ + { + "name": "offset", + "type": "integer", + "display_text": "offset" + }, + { + "name": "count", + "type": "integer", + "display_text": "count" + } + ] + }, + { + "name": "withscores", + "type": "pure-token", + "display_text": "withscores", + "token": "WITHSCORES", + "optional": true + } + ], + "command_flags": [ + "readonly" + ] + }, + "ZRANGEBYLEX": { + "summary": "Returns members in a sorted set within a lexicographical range.", + "since": "2.8.9", + "group": "sorted-set", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).", + "deprecated_since": "6.2.0", + "replaced_by": "`ZRANGE` with the `BYLEX` argument", + "acl_categories": [ + "@read", + "@sortedset", + "@slow" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "min", + "type": "string", + "display_text": "min" + }, + { + "name": "max", + "type": "string", + "display_text": "max" + }, + { + "name": "limit", + "type": "block", + "token": "LIMIT", + "optional": true, + "arguments": [ + { + "name": "offset", + "type": "integer", + "display_text": "offset" + }, + { + "name": "count", + "type": "integer", + "display_text": "count" + } + ] + } + ], + "command_flags": [ + "readonly" + ], + "doc_flags": [ + "deprecated" + ] + }, + "ZRANGEBYSCORE": { + "summary": "Returns members in a sorted set within a range of scores.", + "since": "1.0.5", + "group": "sorted-set", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).", + "deprecated_since": "6.2.0", + "replaced_by": "`ZRANGE` with the `BYSCORE` argument", + "history": [ + [ + "2.0.0", + "Added the `WITHSCORES` modifier." 
+ ] + ], + "acl_categories": [ + "@read", + "@sortedset", + "@slow" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "min", + "type": "double", + "display_text": "min" + }, + { + "name": "max", + "type": "double", + "display_text": "max" + }, + { + "name": "withscores", + "type": "pure-token", + "display_text": "withscores", + "token": "WITHSCORES", + "since": "2.0.0", + "optional": true + }, + { + "name": "limit", + "type": "block", + "token": "LIMIT", + "optional": true, + "arguments": [ + { + "name": "offset", + "type": "integer", + "display_text": "offset" + }, + { + "name": "count", + "type": "integer", + "display_text": "count" + } + ] + } + ], + "command_flags": [ + "readonly" + ], + "doc_flags": [ + "deprecated" + ] + }, + "ZRANGESTORE": { + "summary": "Stores a range of members from sorted set in a key.", + "since": "6.2.0", + "group": "sorted-set", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements stored into the destination key.", + "acl_categories": [ + "@write", + "@sortedset", + "@slow" + ], + "arity": -5, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "dst", + "type": "key", + "display_text": "dst", + "key_spec_index": 0 + }, + { + "name": "src", + "type": "key", + "display_text": "src", + "key_spec_index": 1 + }, + { + "name": "min", + "type": "string", + "display_text": "min" + }, + { + "name": "max", + "type": "string", + "display_text": "max" + }, + { + "name": "sortby", + "type": "oneof", + "optional": true, + "arguments": [ + { + "name": "byscore", + "type": "pure-token", + "display_text": "byscore", + "token": "BYSCORE" + }, + { + "name": "bylex", + "type": "pure-token", + "display_text": "bylex", + "token": "BYLEX" + } + ] + }, + { + "name": "rev", + "type": "pure-token", + "display_text": "rev", + "token": "REV", + "optional": true + }, + { + "name": "limit", + "type": "block", + "token": "LIMIT", + "optional": true, + "arguments": [ + { + "name": "offset", + "type": "integer", + "display_text": "offset" + }, + { + "name": "count", + "type": "integer", + "display_text": "count" + } + ] + } + ], + "command_flags": [ + "write", + "denyoom" + ] + }, + "ZRANK": { + "summary": "Returns the index of a member in a sorted set ordered by ascending scores.", + "since": "2.0.0", + "group": "sorted-set", + "complexity": "O(log(N))", + "history": [ + [ + "7.2.0", + "Added the optional `WITHSCORE` argument." 
+ ] + ], + "acl_categories": [ + "@read", + "@sortedset", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "member", + "type": "string", + "display_text": "member" + }, + { + "name": "withscore", + "type": "pure-token", + "display_text": "withscore", + "token": "WITHSCORE", + "optional": true + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "ZREM": { + "summary": "Removes one or more members from a sorted set. Deletes the sorted set if all members were removed.", + "since": "1.2.0", + "group": "sorted-set", + "complexity": "O(M*log(N)) with N being the number of elements in the sorted set and M the number of elements to be removed.", + "history": [ + [ + "2.4.0", + "Accepts multiple elements." + ] + ], + "acl_categories": [ + "@write", + "@sortedset", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "member", + "type": "string", + "display_text": "member", + "multiple": true + } + ], + "command_flags": [ + "write", + "fast" + ] + }, + "ZREMRANGEBYLEX": { + "summary": "Removes members in a sorted set within a lexicographical range. Deletes the sorted set if all members were removed.", + "since": "2.8.9", + "group": "sorted-set", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements removed by the operation.", + "acl_categories": [ + "@write", + "@sortedset", + "@slow" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "min", + "type": "string", + "display_text": "min" + }, + { + "name": "max", + "type": "string", + "display_text": "max" + } + ], + "command_flags": [ + "write" + ] + }, + "ZREMRANGEBYRANK": { + "summary": "Removes members in a sorted set within a range of indexes. 
Deletes the sorted set if all members were removed.", + "since": "2.0.0", + "group": "sorted-set", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements removed by the operation.", + "acl_categories": [ + "@write", + "@sortedset", + "@slow" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "start", + "type": "integer", + "display_text": "start" + }, + { + "name": "stop", + "type": "integer", + "display_text": "stop" + } + ], + "command_flags": [ + "write" + ] + }, + "ZREMRANGEBYSCORE": { + "summary": "Removes members in a sorted set within a range of scores. Deletes the sorted set if all members were removed.", + "since": "1.2.0", + "group": "sorted-set", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements removed by the operation.", + "acl_categories": [ + "@write", + "@sortedset", + "@slow" + ], + "arity": 4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RW": true, + "delete": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "min", + "type": "double", + "display_text": "min" + }, + { + "name": "max", + "type": "double", + "display_text": "max" + } + ], + "command_flags": [ + "write" + ] + }, + "ZREVRANGE": { + "summary": "Returns members in a sorted set within a range of indexes in reverse order.", + "since": "1.2.0", + "group": "sorted-set", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements returned.", + "deprecated_since": "6.2.0", + "replaced_by": "`ZRANGE` with the `REV` argument", + "acl_categories": [ + "@read", + "@sortedset", + "@slow" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "start", + "type": "integer", + "display_text": "start" + }, + { + "name": "stop", + "type": "integer", + "display_text": "stop" + }, + { + "name": "withscores", + "type": "pure-token", + "display_text": "withscores", + "token": "WITHSCORES", + "optional": true + } + ], + "command_flags": [ + "readonly" + ], + "doc_flags": [ + "deprecated" + ] + }, + "ZREVRANGEBYLEX": { + "summary": "Returns members in a sorted set within a lexicographical range in reverse order.", + "since": "2.8.9", + "group": "sorted-set", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. 
always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).", + "deprecated_since": "6.2.0", + "replaced_by": "`ZRANGE` with the `REV` and `BYLEX` arguments", + "acl_categories": [ + "@read", + "@sortedset", + "@slow" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "max", + "type": "string", + "display_text": "max" + }, + { + "name": "min", + "type": "string", + "display_text": "min" + }, + { + "name": "limit", + "type": "block", + "token": "LIMIT", + "optional": true, + "arguments": [ + { + "name": "offset", + "type": "integer", + "display_text": "offset" + }, + { + "name": "count", + "type": "integer", + "display_text": "count" + } + ] + } + ], + "command_flags": [ + "readonly" + ], + "doc_flags": [ + "deprecated" + ] + }, + "ZREVRANGEBYSCORE": { + "summary": "Returns members in a sorted set within a range of scores in reverse order.", + "since": "2.2.0", + "group": "sorted-set", + "complexity": "O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).", + "deprecated_since": "6.2.0", + "replaced_by": "`ZRANGE` with the `REV` and `BYSCORE` arguments", + "history": [ + [ + "2.1.6", + "`min` and `max` can be exclusive." + ] + ], + "acl_categories": [ + "@read", + "@sortedset", + "@slow" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "max", + "type": "double", + "display_text": "max" + }, + { + "name": "min", + "type": "double", + "display_text": "min" + }, + { + "name": "withscores", + "type": "pure-token", + "display_text": "withscores", + "token": "WITHSCORES", + "optional": true + }, + { + "name": "limit", + "type": "block", + "token": "LIMIT", + "optional": true, + "arguments": [ + { + "name": "offset", + "type": "integer", + "display_text": "offset" + }, + { + "name": "count", + "type": "integer", + "display_text": "count" + } + ] + } + ], + "command_flags": [ + "readonly" + ], + "doc_flags": [ + "deprecated" + ] + }, + "ZREVRANK": { + "summary": "Returns the index of a member in a sorted set ordered by descending scores.", + "since": "2.0.0", + "group": "sorted-set", + "complexity": "O(log(N))", + "history": [ + [ + "7.2.0", + "Added the optional `WITHSCORE` argument." 
+ ] + ], + "acl_categories": [ + "@read", + "@sortedset", + "@fast" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "member", + "type": "string", + "display_text": "member" + }, + { + "name": "withscore", + "type": "pure-token", + "display_text": "withscore", + "token": "WITHSCORE", + "optional": true + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "ZSCAN": { + "summary": "Iterates over members and scores of a sorted set.", + "since": "2.8.0", + "group": "sorted-set", + "complexity": "O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection.", + "acl_categories": [ + "@read", + "@sortedset", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "cursor", + "type": "integer", + "display_text": "cursor" + }, + { + "name": "pattern", + "type": "pattern", + "display_text": "pattern", + "token": "MATCH", + "optional": true + }, + { + "name": "count", + "type": "integer", + "display_text": "count", + "token": "COUNT", + "optional": true + } + ], + "command_flags": [ + "readonly" + ], + "hints": [ + "nondeterministic_output" + ] + }, + "ZSCORE": { + "summary": "Returns the score of a member in a sorted set.", + "since": "1.2.0", + "group": "sorted-set", + "complexity": "O(1)", + "acl_categories": [ + "@read", + "@sortedset", + "@fast" + ], + "arity": 3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0 + }, + { + "name": "member", + "type": "string", + "display_text": "member" + } + ], + "command_flags": [ + "readonly", + "fast" + ] + }, + "ZUNION": { + "summary": "Returns the union of multiple sorted sets.", + "since": "6.2.0", + "group": "sorted-set", + "complexity": "O(N)+O(M*log(M)) with N being the sum of the sizes of the input sorted sets, and M being the number of elements in the resulting sorted set.", + "acl_categories": [ + "@read", + "@sortedset", + "@slow" + ], + "arity": -3, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "keynum", + "spec": { + "keynumidx": 0, + "firstkey": 1, + "keystep": 1 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 0, + "multiple": true + }, + { + "name": "weight", + "type": "integer", + "display_text": "weight", + "token": "WEIGHTS", + "optional": true, + "multiple": true + }, + { + "name": "aggregate", + "type": "oneof", + "token": "AGGREGATE", + 
"optional": true, + "arguments": [ + { + "name": "sum", + "type": "pure-token", + "display_text": "sum", + "token": "SUM" + }, + { + "name": "min", + "type": "pure-token", + "display_text": "min", + "token": "MIN" + }, + { + "name": "max", + "type": "pure-token", + "display_text": "max", + "token": "MAX" + } + ] + }, + { + "name": "withscores", + "type": "pure-token", + "display_text": "withscores", + "token": "WITHSCORES", + "optional": true + } + ], + "command_flags": [ + "readonly", + "movablekeys" + ] + }, + "ZUNIONSTORE": { + "summary": "Stores the union of multiple sorted sets in a key.", + "since": "2.0.0", + "group": "sorted-set", + "complexity": "O(N)+O(M log(M)) with N being the sum of the sizes of the input sorted sets, and M being the number of elements in the resulting sorted set.", + "acl_categories": [ + "@write", + "@sortedset", + "@slow" + ], + "arity": -4, + "key_specs": [ + { + "begin_search": { + "type": "index", + "spec": { + "index": 1 + } + }, + "find_keys": { + "type": "range", + "spec": { + "lastkey": 0, + "keystep": 1, + "limit": 0 + } + }, + "OW": true, + "update": true + }, + { + "begin_search": { + "type": "index", + "spec": { + "index": 2 + } + }, + "find_keys": { + "type": "keynum", + "spec": { + "keynumidx": 0, + "firstkey": 1, + "keystep": 1 + } + }, + "RO": true, + "access": true + } + ], + "arguments": [ + { + "name": "destination", + "type": "key", + "display_text": "destination", + "key_spec_index": 0 + }, + { + "name": "numkeys", + "type": "integer", + "display_text": "numkeys" + }, + { + "name": "key", + "type": "key", + "display_text": "key", + "key_spec_index": 1, + "multiple": true + }, + { + "name": "weight", + "type": "integer", + "display_text": "weight", + "token": "WEIGHTS", + "optional": true, + "multiple": true + }, + { + "name": "aggregate", + "type": "oneof", + "token": "AGGREGATE", + "optional": true, + "arguments": [ + { + "name": "sum", + "type": "pure-token", + "display_text": "sum", + "token": "SUM" + }, + { + "name": "min", + "type": "pure-token", + "display_text": "min", + "token": "MIN" + }, + { + "name": "max", + "type": "pure-token", + "display_text": "max", + "token": "MAX" + } + ] + } + ], + "command_flags": [ + "write", + "denyoom", + "movablekeys" + ] + } } diff --git a/commands/_index.md b/commands/_index.md new file mode 100644 index 0000000000..c9636236e4 --- /dev/null +++ b/commands/_index.md @@ -0,0 +1,4 @@ +--- +title: "Redis Commands" +linkTitle: "Commands" +--- diff --git a/commands/acl-cat.md b/commands/acl-cat.md new file mode 100644 index 0000000000..97e35d16fa --- /dev/null +++ b/commands/acl-cat.md @@ -0,0 +1,78 @@ +The command shows the available ACL categories if called without arguments. +If a category name is given, the command shows all the Redis commands in +the specified category. + +ACL categories are very useful in order to create ACL rules that include or +exclude a large set of commands at once, without specifying every single +command. For instance, the following rule will let the user `karin` perform +everything but the most dangerous operations that may affect the server +stability: + + ACL SETUSER karin on +@all -@dangerous + +We first add all the commands to the set of commands that `karin` is able +to execute, but then we remove all the dangerous commands. 
+
+Checking for all the available categories is as simple as:
+
+```
+> ACL CAT
+ 1) "keyspace"
+ 2) "read"
+ 3) "write"
+ 4) "set"
+ 5) "sortedset"
+ 6) "list"
+ 7) "hash"
+ 8) "string"
+ 9) "bitmap"
+10) "hyperloglog"
+11) "geo"
+12) "stream"
+13) "pubsub"
+14) "admin"
+15) "fast"
+16) "slow"
+17) "blocking"
+18) "dangerous"
+19) "connection"
+20) "transaction"
+21) "scripting"
+```
+
+Then we may want to know what commands are part of a given category:
+
+```
+> ACL CAT dangerous
+ 1) "flushdb"
+ 2) "acl"
+ 3) "slowlog"
+ 4) "debug"
+ 5) "role"
+ 6) "keys"
+ 7) "pfselftest"
+ 8) "client"
+ 9) "bgrewriteaof"
+10) "replicaof"
+11) "monitor"
+12) "restore-asking"
+13) "latency"
+14) "replconf"
+15) "pfdebug"
+16) "bgsave"
+17) "sync"
+18) "config"
+19) "flushall"
+20) "cluster"
+21) "info"
+22) "lastsave"
+23) "slaveof"
+24) "swapdb"
+25) "module"
+26) "restore"
+27) "migrate"
+28) "save"
+29) "shutdown"
+30) "psync"
+31) "sort"
+```
diff --git a/commands/acl-deluser.md b/commands/acl-deluser.md
new file mode 100644
index 0000000000..620183db8f
--- /dev/null
+++ b/commands/acl-deluser.md
@@ -0,0 +1,12 @@
+Delete all the specified ACL users and terminate all the connections that are
+authenticated with those users. Note: the special `default` user cannot be
+removed from the system, as it is the user that every new connection
+is authenticated with. The list of users may include usernames that do not
+exist, in which case no operation is performed for the non-existing users.
+
+@examples
+
+```
+> ACL DELUSER antirez
+1
+```
diff --git a/commands/acl-dryrun.md b/commands/acl-dryrun.md
new file mode 100644
index 0000000000..78cca1e76d
--- /dev/null
+++ b/commands/acl-dryrun.md
@@ -0,0 +1,13 @@
+Simulate the execution of a given command by a given user.
+This command can be used to test the permissions of a given user without having to enable the user or cause the side effects of running the command.
+
+@examples
+
+```
+> ACL SETUSER VIRGINIA +SET ~*
+"OK"
+> ACL DRYRUN VIRGINIA SET foo bar
+"OK"
+> ACL DRYRUN VIRGINIA GET foo
+"User VIRGINIA has no permissions to run the 'get' command"
+```
diff --git a/commands/acl-genpass.md b/commands/acl-genpass.md
new file mode 100644
index 0000000000..0b7a274334
--- /dev/null
+++ b/commands/acl-genpass.md
@@ -0,0 +1,39 @@
+ACL users need a solid password in order to authenticate to the server without
+security risks. Such a password does not need to be remembered by humans, but
+only by computers, so it can be very long and strong (unguessable by an
+external attacker). The `ACL GENPASS` command generates a password starting
+from /dev/urandom if available, otherwise (in systems without /dev/urandom) it
+uses a weaker system that is likely still better than picking a weak password
+by hand.
+
+By default (if /dev/urandom is available) the password is strong and
+can be used for other purposes in the context of a Redis application, for
+instance in order to create unique session identifiers or other kinds of
+unguessable, non-colliding IDs. The password generation is also very cheap
+because we don't really ask /dev/urandom for bits at every execution. At
+startup Redis creates a seed using /dev/urandom, then it will use SHA256
+in counter mode, with HMAC-SHA256(seed,counter) as primitive, in order to
+create more random bytes as needed. This means that the application developer
+should feel free to use `ACL GENPASS` to create as many secure
+pseudorandom strings as needed.
+
+The command output is a hexadecimal representation of a binary string.
+By default it emits 256 bits (so 64 hex characters). The user can provide
+an argument in the form of the number of bits to emit, from 1 to 1024, in order
+to change the output length. Note that the number of bits provided is always
+rounded to the next multiple of 4. So, for instance, asking for just a 1-bit
+password will result in 4 bits being emitted, in the form of a single
+hex character.
+
+@examples
+
+```
+> ACL GENPASS
+"dd721260bfe1b3d9601e7fbab36de6d04e2e67b0ef1c53de59d45950db0dd3cc"
+
+> ACL GENPASS 32
+"355ef3dd"
+
+> ACL GENPASS 5
+"90"
+```
diff --git a/commands/acl-getuser.md b/commands/acl-getuser.md
new file mode 100644
index 0000000000..c952e6483a
--- /dev/null
+++ b/commands/acl-getuser.md
@@ -0,0 +1,39 @@
+The command returns all the rules defined for an existing ACL user.
+
+Specifically, it lists the user's ACL flags, password hashes, commands, key patterns, channel patterns (added in version 6.2) and selectors (added in version 7.0).
+Additional information may be returned in the future if more metadata is added to the user.
+
+Command rules are always returned in the same format as the one used in the `ACL SETUSER` command.
+Before version 7.0, keys and channels were returned as an array of patterns; as of version 7.0 they are also returned in the same format as the one used in the `ACL SETUSER` command.
+Note: This description of command rules reflects the user's effective permissions, so while it may not be identical to the set of rules used to configure the user, it is still functionally identical.
+
+Selectors are listed in the order they were applied to the user, and include information about commands, key patterns, and channel patterns.
+
+@examples
+
+Here's an example configuration for a user:
+
+```
+> ACL SETUSER sample on nopass +GET allkeys &* (+SET ~key2)
+"OK"
+> ACL GETUSER sample
+1) "flags"
+2) 1) "on"
+   2) "allkeys"
+   3) "nopass"
+3) "passwords"
+4) (empty array)
+5) "commands"
+6) "+@all"
+7) "keys"
+8) "~*"
+9) "channels"
+10) "&*"
+11) "selectors"
+12) 1) 1) "commands"
+       2) "+SET"
+       3) "keys"
+       4) "~key2"
+       5) "channels"
+       6) "&*"
+```
diff --git a/commands/acl-help.md b/commands/acl-help.md
new file mode 100644
index 0000000000..bccd37b243
--- /dev/null
+++ b/commands/acl-help.md
@@ -0,0 +1,2 @@
+The `ACL HELP` command returns a helpful text describing the different subcommands.
+
diff --git a/commands/acl-list.md b/commands/acl-list.md
new file mode 100644
index 0000000000..df7cecde26
--- /dev/null
+++ b/commands/acl-list.md
@@ -0,0 +1,13 @@
+The command shows the currently active ACL rules in the Redis server. Each
+line in the returned array defines a different user, and the format is the
+same as the one used in the redis.conf file or the external ACL file, so you can
+cut and paste what is returned by the ACL LIST command directly inside a
+configuration file if you wish (but make sure to check `ACL SAVE`).
+
+@examples
+
+```
+> ACL LIST
+1) "user antirez on #9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 ~objects:* &* +@all -@admin -@dangerous"
+2) "user default on nopass ~* &* +@all"
+```
diff --git a/commands/acl-load.md b/commands/acl-load.md
new file mode 100644
index 0000000000..f425937ef5
--- /dev/null
+++ b/commands/acl-load.md
@@ -0,0 +1,17 @@
+When Redis is configured to use an ACL file (with the `aclfile` configuration
+option), this command will reload the ACLs from the file, replacing all
+the current ACL rules with the ones defined in the file. The command makes
+sure to have an *all or nothing* behavior, that is:
+
+* If every line in the file is valid, all the ACLs are loaded.
+* If one or more lines in the file are not valid, nothing is loaded, and the old ACL rules defined in the server memory continue to be used.
+
+@examples
+
+```
+> ACL LOAD
++OK
+
+> ACL LOAD
+-ERR /tmp/foo:1: Unknown command or category name in ACL...
+```
diff --git a/commands/acl-log.md b/commands/acl-log.md
new file mode 100644
index 0000000000..eb102f66a6
--- /dev/null
+++ b/commands/acl-log.md
@@ -0,0 +1,50 @@
+The command shows a list of recent ACL security events:
+
+1. Failures to authenticate connections with `AUTH` or `HELLO`.
+2. Commands denied because they are against the current ACL rules.
+3. Commands denied because they access keys that are not allowed by the current ACL rules.
+
+The optional argument specifies how many entries to show. By default
+up to ten failures are returned. The special `RESET` argument clears the log.
+Entries are displayed starting from the most recent.
+
+@examples
+
+```
+> AUTH someuser wrongpassword
+(error) WRONGPASS invalid username-password pair
+> ACL LOG 1
+1) 1) "count"
+   2) (integer) 1
+   3) "reason"
+   4) "auth"
+   5) "context"
+   6) "toplevel"
+   7) "object"
+   8) "AUTH"
+   9) "username"
+   10) "someuser"
+   11) "age-seconds"
+   12) "8.038"
+   13) "client-info"
+   14) "id=3 addr=127.0.0.1:57275 laddr=127.0.0.1:6379 fd=8 name= age=16 idle=0 flags=N db=0 sub=0 psub=0 ssub=0 multi=-1 qbuf=48 qbuf-free=16842 argv-mem=25 multi-mem=0 rbs=1024 rbp=0 obl=0 oll=0 omem=0 tot-mem=18737 events=r cmd=auth user=default redir=-1 resp=2"
+   15) "entry-id"
+   16) (integer) 0
+   17) "timestamp-created"
+   18) (integer) 1675361492408
+   19) "timestamp-last-updated"
+   20) (integer) 1675361492408
+```
+
+Each log entry is composed of the following fields:
+
+1. `count`: The number of security events detected within a 60 second period that are represented by this entry.
+2. `reason`: The reason that the security events were logged. Either `command`, `key`, `channel`, or `auth`.
+3. `context`: The context that the security events were detected in. Either `toplevel`, `multi`, `lua`, or `module`.
+4. `object`: The resource that the user had insufficient permissions to access. `auth` when the reason is `auth`.
+5. `username`: The username that executed the command that caused the security events, or the username that had a failed authentication attempt.
+6. `age-seconds`: Age of the log entry in seconds.
+7. `client-info`: Displays the client info of a client which caused one of the security events.
+8. `entry-id`: The sequence number of the entry (starting at 0) since the server process started. Can also be used to check if items were "lost", if they fell between periods.
+9. `timestamp-created`: A UNIX timestamp in milliseconds at the time the entry was first created.
+10. `timestamp-last-updated`: A UNIX timestamp in milliseconds at the time the entry was last updated.
\ No newline at end of file
diff --git a/commands/acl-save.md b/commands/acl-save.md
new file mode 100644
index 0000000000..bfa59a5969
--- /dev/null
+++ b/commands/acl-save.md
@@ -0,0 +1,12 @@
+When Redis is configured to use an ACL file (with the `aclfile` configuration
+option), this command will save the currently defined ACLs from the server memory to the ACL file.
+
+@examples
+
+```
+> ACL SAVE
++OK
+
+> ACL SAVE
+-ERR There was an error trying to save the ACLs. Please check the server logs for more information
+```
diff --git a/commands/acl-setuser.md b/commands/acl-setuser.md
new file mode 100644
index 0000000000..3bc31cc0ee
--- /dev/null
+++ b/commands/acl-setuser.md
@@ -0,0 +1,95 @@
+Create an ACL user with the specified rules or modify the rules of an
+existing user.
+
+Manipulate Redis ACL users interactively.
+If the username does not exist, the command creates the username without any privilege.
+It then reads from left to right all the [rules](#acl-rules) provided as successive arguments, setting the user ACL rules as specified.
+If the user already exists, the provided ACL rules are simply applied
+*in addition* to the rules already set. For example:
+
+    ACL SETUSER virginia on allkeys +set
+
+The above command creates a user called `virginia` who is active (the _on_ rule), can access any key (the _allkeys_ rule), and can call the `SET` command (the _+set_ rule).
+Then, you can use another `ACL SETUSER` call to modify the user rules:
+
+    ACL SETUSER virginia +get
+
+The above command applies the new rule to the user `virginia`, so other than `SET`, the user `virginia` can now also use the `GET` command.
+
+Starting from Redis 7.0, ACL rules can also be grouped into multiple distinct sets of rules, called _selectors_.
+Selectors are added by wrapping the rules in parentheses and providing them just like any other rule.
+In order to execute a command, either the root permissions (rules defined outside of parentheses) or any of the selectors (rules defined inside parentheses) must match the given command.
+For example:
+
+    ACL SETUSER virginia on +GET allkeys (+SET ~app1*)
+
+This sets a user with two sets of permissions, one defined on the user and one defined with a selector.
+The root permissions only allow executing the `GET` command, which can be executed on any key.
+The selector then grants a secondary set of permissions: access to the `SET` command, to be executed on any key that starts with `app1`.
+Using multiple selectors allows you to grant permissions that are different depending on what keys are being accessed.
+
+When we want to be sure to define a user from scratch, regardless of any
+rules previously associated with it, we can use the special rule
+`reset` as the first rule, in order to flush all the other existing rules:
+
+    ACL SETUSER antirez reset [... other rules ...]
+
+After resetting a user, its ACL rules revert to the default: inactive, passwordless, can't execute any command nor access any key or channel:
+
+    > ACL SETUSER antirez reset
+    +OK
+    > ACL LIST
+    1) "user antirez off -@all"
+
+ACL rules are either words like "on", "off", "reset", "allkeys", or are
+special rules that start with a special character, and are followed by
+another string (without any space in between), like "+SET".
+
+The following documentation is a reference manual about the capabilities of this command; however, our [ACL tutorial](/topics/acl) may be a gentler introduction to how the ACL system works in general.
+
+## ACL rules
+
+Redis ACL rules are split into two categories: rules that define command permissions, or _command rules_, and rules that define the user state, or _user management rules_.
+This is a list of all the supported Redis ACL rules:
+
+### Command rules
+
+* `~<pattern>`: Adds the specified key pattern (glob style pattern, like in the `KEYS` command) to the list of key patterns accessible by the user. This grants both read and write permissions to keys that match the pattern. You can add multiple key patterns to the same user. Example: `~objects:*`
+* `%R~<pattern>`: (Available in Redis 7.0 and later) Adds the specified read key pattern. This behaves similarly to the regular key pattern but only grants permission to read from keys that match the given pattern. See [key permissions](/topics/acl#key-permissions) for more information.
+* `%W~<pattern>`: (Available in Redis 7.0 and later) Adds the specified write key pattern. This behaves similarly to the regular key pattern but only grants permission to write to keys that match the given pattern. See [key permissions](/topics/acl#key-permissions) for more information.
+* `%RW~<pattern>`: (Available in Redis 7.0 and later) Alias for `~<pattern>`.
+* `allkeys`: Alias for `~*`, it allows the user to access all the keys.
+* `resetkeys`: Removes all the key patterns from the list of key patterns the user can access.
+* `&<pattern>`: (Available in Redis 6.2 and later) Adds the specified glob style pattern to the list of Pub/Sub channel patterns accessible by the user. You can add multiple channel patterns to the same user. Example: `&chatroom:*`
+* `allchannels`: Alias for `&*`, it allows the user to access all Pub/Sub channels.
+* `resetchannels`: Removes all channel patterns from the list of Pub/Sub channel patterns the user can access.
+* `+<command>`: Adds the command to the list of commands the user can call. Can be used with `|` for allowing subcommands (e.g., "+config|get").
+* `+@<category>`: Adds all the commands in the specified category to the list of commands the user is able to execute. Example: `+@string` (adds all the string commands). For a list of categories, check the `ACL CAT` command.
+* `+<command>|first-arg`: Allows a specific first argument of an otherwise disabled command. It is only supported on commands with no sub-commands, and is not allowed in the negative form like "-SELECT|1", only in the additive form starting with "+". This feature is deprecated and may be removed in the future.
+* `allcommands`: Alias of `+@all`. Adds all the commands present in the server, including *future commands* loaded via modules, to the set of commands this user can execute.
+* `-<command>`: Removes the command from the list of commands the user can call. Starting with Redis 7.0, it can be used with `|` for blocking subcommands (e.g., "-config|set").
+* `-@<category>`: Like `+@<category>` but removes all the commands in the category instead of adding them.
+* `nocommands`: Alias for `-@all`. Removes all the commands, and the user is no longer able to execute anything.
+
+### User management rules
+
+* `on`: Sets the user as active; it will be possible to authenticate as this user using `AUTH <username> <password>`.
+* `off`: Sets the user as not active; it will be impossible to log in as this user. Please note that if a user gets disabled (set to off) while there are connections already authenticated as that user, the connections will continue to work as expected. To also kill the old connections you can use `CLIENT KILL` with the user option. An alternative is to delete the user with `ACL DELUSER`, which will disconnect all the connections authenticated as the deleted user.
+* `nopass`: The user is set as a _no password_ user. It means that it will be possible to authenticate as this user with any password. By default, the `default` special user is set as "nopass". The `nopass` rule also resets all the configured passwords for the user.
+* `>password`: Adds the specified clear text password as a hashed password in the list of the user's passwords. Every user can have many active passwords, so that password rotation will be simpler. The specified password is not stored as clear text inside the server. Example: `>mypassword`.
+* `#<hash>`: Adds the specified hashed password to the list of user passwords. A Redis hashed password is hashed with SHA256 and translated into a hexadecimal string. Example: `#c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2`.
+* `<password`: Like `>password` but removes the password instead of adding it.
+* `!<hash>`: Like `#<hash>` but removes the password instead of adding it.
+* `(<rule list>)`: (Available in Redis 7.0 and later) Creates a new selector to match rules against. Selectors are evaluated after the user permissions, and are evaluated according to the order they are defined. If a command matches either the user permissions or any selector, it is allowed. See [selectors](/docs/management/security/acl#selectors) for more information.
+* `clearselectors`: (Available in Redis 7.0 and later) Deletes all of the selectors attached to the user.
+* `reset`: Removes any capability from the user. The user is set to off, without passwords, unable to execute any command, and unable to access any key.
+
+@examples
+
+```
+> ACL SETUSER alan allkeys +@string +@set -SADD >alanpassword
++OK
+
+> ACL SETUSER antirez heeyyyy
+(error) ERR Error in ACL SETUSER modifier 'heeyyyy': Syntax error
+```
diff --git a/commands/acl-users.md b/commands/acl-users.md
new file mode 100644
index 0000000000..b8a40c4c16
--- /dev/null
+++ b/commands/acl-users.md
@@ -0,0 +1,11 @@
+The command shows a list of all the usernames of the currently configured
+users in the Redis ACL system.
+
+@examples
+
+```
+> ACL USERS
+1) "anna"
+2) "antirez"
+3) "default"
+```
diff --git a/commands/acl-whoami.md b/commands/acl-whoami.md
new file mode 100644
index 0000000000..04a759477b
--- /dev/null
+++ b/commands/acl-whoami.md
@@ -0,0 +1,10 @@
+Return the username the current connection is authenticated with.
+New connections are authenticated with the "default" user. They
+can change the user using `AUTH`.
+
+@examples
+
+```
+> ACL WHOAMI
+"default"
+```
diff --git a/commands/acl.md b/commands/acl.md
new file mode 100644
index 0000000000..7e60f2e6b0
--- /dev/null
+++ b/commands/acl.md
@@ -0,0 +1,3 @@
+This is a container command for [Access Control List](/docs/management/security/acl/) commands.
+
+To see the list of available commands you can call `ACL HELP`.
diff --git a/commands/append.md b/commands/append.md
index bb28ed562d..2f10c8c1e8 100644
--- a/commands/append.md
+++ b/commands/append.md
@@ -1,23 +1,52 @@
-@complexity
+If `key` already exists and is a string, this command appends the `value` at the
+end of the string.
+If `key` does not exist it is created and set as an empty string, so `APPEND`
+will be similar to `SET` in this special case.
-O(1). The amortized time complexity is O(1) assuming the appended value is
-small and the already present value is of any size, since the dynamic string
-library used by Redis will double the free space available on every
-reallocation.
+@examples
-If `key` already exists and is a string, this command appends the `value` at
-the end of the string. If `key` does not exist it is created and set as an
-empty string, so `APPEND` will be similar to `SET` in this special case.
+```cli
+EXISTS mykey
+APPEND mykey "Hello"
+APPEND mykey " World"
+GET mykey
+```
-@return
+## Pattern: Time series
-@integer-reply: the length of the string after the append operation.
+The `APPEND` command can be used to create a very compact representation of a
+list of fixed-size samples, usually referred to as _time series_.
+Every time a new sample arrives we can store it using the command
+
+```
+APPEND timeseries "fixed-size sample"
+```
+
+Accessing individual elements in the time series is not hard:
+
+* `STRLEN` can be used in order to obtain the number of samples.
+* `GETRANGE` allows for random access of elements.
+  If our time series has associated time information, we can easily implement
+  a binary search to get a range, combining `GETRANGE` with the Lua scripting
+  engine available in Redis 2.6.
+* `SETRANGE` can be used to overwrite an existing time series.
+
+The limitation of this pattern is that we are forced into an append-only mode
+of operation; there is no way to easily cut the time series to a given size,
+because Redis currently lacks a command able to trim string objects.
+However, the space efficiency of time series stored in this way is remarkable.
+
+Hint: it is possible to switch to a different key based on the current Unix
+time; this way it is possible to keep just a relatively small number of
+samples per key, to avoid dealing with very big keys, and to make this pattern
+friendlier to distribute across many Redis instances.
+
+An example sampling the temperature of a sensor using fixed-size strings (using
+a binary format is better in real implementations):
+
+```cli
+APPEND ts "0043"
+APPEND ts "0035"
+GETRANGE ts 0 3
+GETRANGE ts 4 7
+```
diff --git a/commands/asking.md b/commands/asking.md
new file mode 100644
index 0000000000..39b0acb72b
--- /dev/null
+++ b/commands/asking.md
@@ -0,0 +1,6 @@
+When a cluster client receives an `-ASK` redirect, the `ASKING` command is sent to the target node, followed by the command which was redirected.
+This is normally done automatically by cluster clients.
+
+If an `-ASK` redirect is received during a transaction, only one `ASKING` command needs to be sent to the target node before sending the complete transaction.
+
+See [ASK redirection in the Redis Cluster Specification](/topics/cluster-spec#ask-redirection) for details.
diff --git a/commands/auth.md b/commands/auth.md
index 4340d76575..144aec9a21 100644
--- a/commands/auth.md
+++ b/commands/auth.md
@@ -1,19 +1,32 @@
-@description
+The AUTH command authenticates the current connection in two cases:
-Request for authentication in a password protected Redis server.
-Redis can be instructed to require a password before allowing clients
-to execute commands. This is done using the `requirepass` directive in the
-configuration file.
+1. If the Redis server is password protected via the `requirepass` option.
+2. A Redis 6.0 instance, or greater, is using the [Redis ACL system](/topics/acl).
-If `password` matches the password in the configuration file, the server replies with
-the `OK` status code and starts accepting commands.
+Redis versions prior to Redis 6 were only able to understand the one-argument
+version of the command:
+
+    AUTH <password>
+
+This form just authenticates against the password set with `requirepass`.
+In this configuration Redis will deny any command executed by the newly
+connected clients, unless the connection gets authenticated via `AUTH`.
+
+If the password provided via AUTH matches the password in the configuration file, the server replies with the `OK` status code and starts accepting commands. Otherwise, an error is returned and the client needs to try a new password.
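+
+For example, a quick check from redis-cli might look like this (a hypothetical session; the exact error text can vary between Redis versions):
+
+    > AUTH wrongpassword
+    (error) WRONGPASS invalid username-password pair or user is disabled.
+    > AUTH correctpassword
+    OK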
-**Note**: because of the high performance nature of Redis, it is possible to try
-a lot of passwords in parallel in very short time, so make sure to generate
-a strong and very long password so that this attack is infeasible.
+When Redis ACLs are used, the command should be given in an extended way:
+
+    AUTH <username> <password>
+
+This form authenticates the current connection with one of the users defined in the ACL list (see `ACL SETUSER` and the official [ACL guide](/topics/acl) for more information).
-@return
+When ACLs are used, the single argument form of the command, where only the password is specified, assumes that the implicit username is "default".
-@status-reply
+## Security notice
+Because of the high performance nature of Redis, it is possible to try
+a lot of passwords in parallel in very short time, so make sure to generate a
+strong and very long password so that this attack is infeasible.
+A good way to generate strong passwords is via the `ACL GENPASS` command.
diff --git a/commands/bgrewriteaof.md b/commands/bgrewriteaof.md
index 10bfed18c9..2424a8631f 100644
--- a/commands/bgrewriteaof.md
+++ b/commands/bgrewriteaof.md
@@ -1,7 +1,25 @@
-Rewrites the [append-only file](/topics/persistence#append-only-file) to reflect the current dataset in memory.
+Instruct Redis to start an [Append Only File][tpaof] rewrite process.
+The rewrite will create a small optimized version of the current Append Only
+File.
+
+[tpaof]: /topics/persistence#append-only-file
 If `BGREWRITEAOF` fails, no data gets lost as the old AOF will be untouched.
-@return
+The rewrite will only be triggered by Redis if there is not already a background
+process doing persistence.
+
+Specifically:
+
+* If a Redis child is creating a snapshot on disk, the AOF rewrite is _scheduled_ but not started until the saving child producing the RDB file terminates. In this case `BGREWRITEAOF` will still return a positive status reply, but with an appropriate message. You can check if an AOF rewrite is scheduled by looking at the `INFO` command as of Redis 2.6 or successive versions.
+* If an AOF rewrite is already in progress the command returns an error and no
+  AOF rewrite will be scheduled for a later time.
+* If the AOF rewrite could start, but the attempt at starting it fails (for instance because of an error in creating the child process), an error is returned to the caller.
+
+Since Redis 2.4 the AOF rewrite is automatically triggered by Redis, however the
+`BGREWRITEAOF` command can be used to trigger a rewrite at any time.
+
+Please refer to the [persistence documentation][tp] for detailed information.
+
+[tp]: /topics/persistence
-@status-reply: always `OK`.
diff --git a/commands/bgsave.md b/commands/bgsave.md
index 3fd2ee45db..f6f676326e 100644
--- a/commands/bgsave.md
+++ b/commands/bgsave.md
@@ -1,10 +1,21 @@
+Save the DB in background.
+Normally the OK code is immediately returned.
+Redis forks: the parent continues to serve the clients, while the child saves
+the DB on disk and then exits.
-Save the DB in background. The OK code is immediately returned.
-Redis forks, the parent continues to server the clients, the child
-saves the DB on disk then exit. A client my be able to check if the
-operation succeeded using the `LASTSAVE` command.
+An error is returned if there is already a background save running or if there
+is another non-background-save process running, specifically an in-progress AOF
+rewrite.
+
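+For example (a hypothetical session; the exact error text may differ across Redis versions):
+
+    > BGSAVE
+    Background saving started
+    > BGSAVE
+    (error) ERR Another child process is active (AOF?): can't BGSAVE right now. Use BGSAVE SCHEDULE in order to schedule a BGSAVE whenever possible.
+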
-@return +If `BGSAVE SCHEDULE` is used, the command will immediately return `OK` when an +AOF rewrite is in progress and schedule the background save to run at the next +opportunity. + +A client may be able to check if the operation succeeded using the `LASTSAVE` +command. + +Please refer to the [persistence documentation][tp] for detailed information. + +[tp]: /topics/persistence -@status-reply diff --git a/commands/bitcount.md b/commands/bitcount.md new file mode 100644 index 0000000000..3b33703751 --- /dev/null +++ b/commands/bitcount.md @@ -0,0 +1,69 @@ +Count the number of set bits (population counting) in a string. + +By default all the bytes contained in the string are examined. +It is possible to specify the counting operation only in an interval passing the +additional arguments _start_ and _end_. + +Like for the `GETRANGE` command start and end can contain negative values in +order to index bytes starting from the end of the string, where -1 is the last +byte, -2 is the penultimate, and so forth. + +Non-existent keys are treated as empty strings, so the command will return zero. + +By default, the additional arguments _start_ and _end_ specify a byte index. +We can use an additional argument `BIT` to specify a bit index. +So 0 is the first bit, 1 is the second bit, and so forth. +For negative values, -1 is the last bit, -2 is the penultimate, and so forth. + +@examples + +```cli +SET mykey "foobar" +BITCOUNT mykey +BITCOUNT mykey 0 0 +BITCOUNT mykey 1 1 +BITCOUNT mykey 1 1 BYTE +BITCOUNT mykey 5 30 BIT +``` + +## Pattern: real-time metrics using bitmaps + +Bitmaps are a very space-efficient representation of certain kinds of +information. +One example is a Web application that needs the history of user visits, so that +for instance it is possible to determine what users are good targets of beta +features. + +Using the `SETBIT` command this is trivial to accomplish, identifying every day +with a small progressive integer. +For instance day 0 is the first day the application was put online, day 1 the +next day, and so forth. + +Every time a user performs a page view, the application can register that in +the current day the user visited the web site using the `SETBIT` command setting +the bit corresponding to the current day. + +Later it will be trivial to know the number of single days the user visited the +web site simply calling the `BITCOUNT` command against the bitmap. + +A similar pattern where user IDs are used instead of days is described +in the article called "[Fast easy realtime metrics using Redis +bitmaps][hbgc212fermurb]". + +[hbgc212fermurb]: http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps + +## Performance considerations + +In the above example of counting days, even after 10 years the application is +online we still have just `365*10` bits of data per user, that is just 456 bytes +per user. +With this amount of data `BITCOUNT` is still as fast as any other O(1) Redis +command like `GET` or `INCR`. + +When the bitmap is big, there are two alternatives: + +* Taking a separated key that is incremented every time the bitmap is modified. + This can be very efficient and atomic using a small Redis Lua script. +* Running the bitmap incrementally using the `BITCOUNT` _start_ and _end_ + optional parameters, accumulating the results client-side, and optionally + caching the result into a key. 
diff --git a/commands/bitfield.md b/commands/bitfield.md
new file mode 100644
index 0000000000..0a549acff9
--- /dev/null
+++ b/commands/bitfield.md
@@ -0,0 +1,110 @@
+The command treats a Redis string as an array of bits, and is capable of addressing specific integer fields of varying bit widths at arbitrary, not necessarily aligned, offsets. In practical terms, using this command you can set, for example, a signed 5-bit integer at bit offset 1234 to a specific value, and retrieve a 31-bit unsigned integer from offset 4567. Similarly the command handles increments and decrements of the specified integers, providing guaranteed and well-specified overflow and underflow behavior that the user can configure.
+
+`BITFIELD` is able to operate with multiple bit fields in the same command call. It takes a list of operations to perform, and returns an array of replies, where each reply matches the corresponding operation in the list of arguments.
+
+For example the following command increments a 5-bit signed integer at bit offset 100, and gets the value of the 4-bit unsigned integer at bit offset 0:
+
+    > BITFIELD mykey INCRBY i5 100 1 GET u4 0
+    1) (integer) 1
+    2) (integer) 0
+
+Note that:
+
+1. Addressing with `!GET` bits outside the current string length (including the case the key does not exist at all) results in the operation being performed as if the missing part consisted entirely of bits set to 0.
+2. Addressing with `!SET` or `!INCRBY` bits outside the current string length will enlarge the string, zero-padding it as needed, to the minimal length required by the farthest bit touched.
+
+## Supported subcommands and integer encoding
+
+The following is the list of supported commands.
+
+* **GET** `<encoding>` `<offset>` -- Returns the specified bit field.
+* **SET** `<encoding>` `<offset>` `<value>` -- Sets the specified bit field and returns its old value.
+* **INCRBY** `<encoding>` `<offset>` `<increment>` -- Increments or decrements (if a negative increment is given) the specified bit field and returns the new value.
+
+There is another subcommand that only changes the behavior of successive
+`!INCRBY` and `!SET` subcommand calls by setting the overflow behavior:
+
+* **OVERFLOW** `[WRAP|SAT|FAIL]`
+
+Where an integer encoding is expected, it can be composed of `i` for signed integers or `u` for unsigned integers, followed by the number of bits of our integer encoding. So for example `u8` is an unsigned integer of 8 bits and `i16` is a
+signed integer of 16 bits.
+
+The supported encodings are up to 64 bits for signed integers, and up to 63 bits for
+unsigned integers. This limitation with unsigned integers is due to the fact
+that currently the Redis protocol is unable to return 64-bit unsigned integers
+as replies.
+
+## Bits and positional offsets
+
+There are two ways to specify offsets in the bitfield command.
+If a number without any prefix is specified, it is used just as a zero-based
+bit offset inside the string.
+
+However, if the offset is prefixed with a `#` character, the specified offset
+is multiplied by the integer encoding's width, so for example:
+
+    BITFIELD mystring SET i8 #0 100 SET i8 #1 200
+
+will set the first i8 integer at offset 0 and the second at offset 8.
+This way you don't have to do the math yourself inside your client if what
+you want is a plain array of integers of a given size.
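+
+For instance, continuing the example above (a hypothetical run against a fresh key; note that under the default WRAP overflow policy, described in the next section, the value 200 does not fit in a signed 8-bit field and wraps around to -56):
+
+    > BITFIELD mystring SET i8 #0 100 SET i8 #1 200
+    1) (integer) 0
+    2) (integer) 0
+    > BITFIELD mystring GET i8 #0 GET i8 #1
+    1) (integer) 100
+    2) (integer) -56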
+
+## Overflow control
+
+Using the `OVERFLOW` command the user is able to fine-tune the behavior of
+the increment or decrement overflow (or underflow) by specifying one of
+the following behaviors:
+
+* **WRAP**: wrap around, both with signed and unsigned integers. In the case of unsigned integers, wrapping is like performing the operation modulo the maximum value the integer can contain (the C standard behavior). With signed integers instead, wrapping means that overflows restart towards the most negative value and underflows towards the most positive one, so for example if an `i8` integer is set to the value 127, incrementing it by 1 will yield `-128`.
+* **SAT**: uses saturation arithmetic, that is, on underflows the value is set to the minimum integer value, and on overflows to the maximum integer value. For example, incrementing an `i8` integer starting from value 120 with an increment of 10 will result in the value 127, and further increments will always keep the value at 127. The same happens on underflows, but in the other direction: the value is blocked at the most negative value.
+* **FAIL**: in this mode no operation is performed on overflows or underflows that are detected. The corresponding return value is set to NULL to signal the condition to the caller.
+
+Note that each `OVERFLOW` statement only affects the `!INCRBY` and `!SET`
+commands that follow it in the list of subcommands, up to the next `OVERFLOW`
+statement.
+
+By default, **WRAP** is used if not otherwise specified.
+
+    > BITFIELD mykey incrby u2 100 1 OVERFLOW SAT incrby u2 102 1
+    1) (integer) 1
+    2) (integer) 1
+    > BITFIELD mykey incrby u2 100 1 OVERFLOW SAT incrby u2 102 1
+    1) (integer) 2
+    2) (integer) 2
+    > BITFIELD mykey incrby u2 100 1 OVERFLOW SAT incrby u2 102 1
+    1) (integer) 3
+    2) (integer) 3
+    > BITFIELD mykey incrby u2 100 1 OVERFLOW SAT incrby u2 102 1
+    1) (integer) 0
+    2) (integer) 3
+
+The following is an example of `OVERFLOW FAIL` returning NULL.
+
+    > BITFIELD mykey OVERFLOW FAIL incrby u2 102 1
+    1) (nil)
+
+## Motivations
+
+The motivation for this command is that the ability to store many small integers
+as a single large bitmap (or segmented over a few keys to avoid having huge keys) is extremely memory efficient, and opens new use cases for Redis, especially in the field of real-time analytics. These use cases are supported by the ability to specify the overflow behavior in a controlled way.
+
+Fun fact: Reddit's 2017 April fools' project [r/place](https://reddit.com/r/place) was [built using the Redis BITFIELD command](https://redditblog.com/2017/04/13/how-we-built-rplace/) in order to store an in-memory representation of the collaborative canvas.
+
+## Performance considerations
+
+Usually `BITFIELD` is a fast command, however note that addressing far bits of currently short strings will trigger an allocation that may be more costly than executing the command on already existing bits.
+
+## Orders of bits
+
+The representation used by `BITFIELD` considers the bitmap as having the
+bit number 0 to be the most significant bit of the first byte, and so forth, so
+for example setting a 5-bit unsigned integer to value 23 at offset 7 into a
+bitmap previously set to all zeroes will produce the following representation:
+
+    +--------+--------+
+    |00000001|01110000|
+    +--------+--------+
+
+When offsets and integer sizes are aligned to byte boundaries, this is the
+same as big endian; however, when such alignment does not exist, it's important
+to also understand how the bits inside a byte are ordered.
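+
+A quick way to verify the layout described above (a hypothetical redis-cli session; how non-printable bytes are rendered may vary by client) is to set the same field on an empty key and read the raw string back: byte 0 becomes `0x01` and byte 1 becomes `0x70`, which is the ASCII character `p`:
+
+    > BITFIELD bits SET u5 7 23
+    1) (integer) 0
+    > GET bits
+    "\x01p"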
diff --git a/commands/bitfield_ro.md b/commands/bitfield_ro.md
new file mode 100644
index 0000000000..26ec064ded
--- /dev/null
+++ b/commands/bitfield_ro.md
@@ -0,0 +1,15 @@
+Read-only variant of the `BITFIELD` command.
+It is like the original `BITFIELD` but only accepts the `!GET` subcommand, and can safely be used in read-only replicas.
+
+Since the original `BITFIELD` has `!SET` and `!INCRBY` options, it is technically flagged as a writing command in the Redis command table.
+For this reason read-only replicas in a Redis Cluster will redirect it to the master instance even if the connection is in read-only mode (see the `READONLY` command of Redis Cluster).
+
+Since Redis 6.2, the `BITFIELD_RO` variant was introduced in order to allow `BITFIELD` behavior in read-only replicas without breaking compatibility on command flags.
+
+See the original `BITFIELD` command for more details.
+
+@examples
+
+```
+BITFIELD_RO hello GET i8 16
+```
diff --git a/commands/bitop.md b/commands/bitop.md
new file mode 100644
index 0000000000..e679b449bf
--- /dev/null
+++ b/commands/bitop.md
@@ -0,0 +1,55 @@
+Perform a bitwise operation between multiple keys (containing string values) and
+store the result in the destination key.
+
+The `BITOP` command supports four bitwise operations: **AND**, **OR**, **XOR**
+and **NOT**, thus the valid forms to call the command are:
+
+
+* `BITOP AND destkey srckey1 srckey2 srckey3 ... srckeyN`
+* `BITOP OR destkey srckey1 srckey2 srckey3 ... srckeyN`
+* `BITOP XOR destkey srckey1 srckey2 srckey3 ... srckeyN`
+* `BITOP NOT destkey srckey`
+
+As you can see **NOT** is special as it only takes a single input key, because it
+performs inversion of bits, so it only makes sense as a unary operator.
+
+The result of the operation is always stored at `destkey`.
+
+## Handling of strings with different lengths
+
+When an operation is performed between strings having different lengths, all the
+strings shorter than the longest string in the set are treated as if they were
+zero-padded up to the length of the longest string.
+
+The same holds true for non-existent keys, which are considered as a stream of
+zero bytes up to the length of the longest string.
+
+@examples
+
+```cli
+SET key1 "foobar"
+SET key2 "abcdef"
+BITOP AND dest key1 key2
+GET dest
+```
+
+## Pattern: real time metrics using bitmaps
+
+`BITOP` is a good complement to the pattern documented in the `BITCOUNT` command
+documentation.
+Different bitmaps can be combined in order to obtain a target bitmap where
+the population counting operation is performed.
+
+See the article called "[Fast easy realtime metrics using Redis
+bitmaps][hbgc212fermurb]" for an interesting use case.
+
+[hbgc212fermurb]: http://blog.getspool.com/2011/11/29/fast-easy-realtime-metrics-using-redis-bitmaps
+
+## Performance considerations
+
+`BITOP` is a potentially slow command as it runs in O(N) time.
+Care should be taken when running it against long input strings.
+
+For real-time metrics and statistics involving large inputs a good approach is
+to use a replica (with the replica-read-only option enabled) where the bit-wise
+operations are performed, to avoid blocking the master instance.
diff --git a/commands/bitpos.md b/commands/bitpos.md
new file mode 100644
index 0000000000..689926dbc0
--- /dev/null
+++ b/commands/bitpos.md
@@ -0,0 +1,38 @@
+Return the position of the first bit set to 1 or 0 in a string.
+
+The position is returned, thinking of the string as an array of bits from left to
+right, where the first byte's most significant bit is at position 0, the second
+byte's most significant bit is at position 8, and so forth.
+
+The same bit position convention is followed by `GETBIT` and `SETBIT`.
+
+By default, all the bytes contained in the string are examined.
+It is possible to look for bits only in a specified interval by passing the additional arguments _start_ and _end_ (it is possible to pass just _start_, in which case the operation assumes that the end is the last byte of the string; however there are semantic differences, as explained later).
+By default, the range is interpreted as a range of bytes and not a range of bits, so `start=0` and `end=2` means to look at the first three bytes.
+
+You can use the optional `BIT` modifier to specify that the range should be interpreted as a range of bits.
+So `start=0` and `end=2` means to look at the first three bits.
+
+Note that bit positions are always returned as absolute values starting from bit zero, even when _start_ and _end_ are used to specify a range.
+
+As with the `GETRANGE` command, _start_ and _end_ can contain negative values in
+order to index bytes starting from the end of the string, where -1 is the last
+byte, -2 is the penultimate, and so forth. When `BIT` is specified, -1 is the last
+bit, -2 is the penultimate, and so forth.
+
+Non-existent keys are treated as empty strings.
+
+@examples
+
+```cli
+SET mykey "\xff\xf0\x00"
+BITPOS mykey 0
+SET mykey "\x00\xff\xf0"
+BITPOS mykey 1 0
+BITPOS mykey 1 2
+BITPOS mykey 1 2 -1 BYTE
+BITPOS mykey 1 7 15 BIT
+SET mykey "\x00\x00\x00"
+BITPOS mykey 1
+BITPOS mykey 1 7 -3 BIT
+```
diff --git a/commands/blmove.md b/commands/blmove.md
new file mode 100644
index 0000000000..7a45b13e89
--- /dev/null
+++ b/commands/blmove.md
@@ -0,0 +1,19 @@
+`BLMOVE` is the blocking variant of `LMOVE`.
+When `source` contains elements, this command behaves exactly like `LMOVE`.
+When used inside a `MULTI`/`EXEC` block, this command behaves exactly like `LMOVE`.
+When `source` is empty, Redis will block the connection until another client
+pushes to it or until `timeout` (a double value specifying the maximum number of seconds to block) is reached.
+A `timeout` of zero can be used to block indefinitely.
+
+This command comes in place of the now deprecated `BRPOPLPUSH`; calling `BLMOVE` with `RIGHT LEFT` is equivalent.
+
+See `LMOVE` for more information.
+
+## Pattern: Reliable queue
+
+Please see the pattern description in the `LMOVE` documentation.
+
+## Pattern: Circular list
+
+Please see the pattern description in the `LMOVE` documentation.
diff --git a/commands/blmpop.md b/commands/blmpop.md
new file mode 100644
index 0000000000..73bac9d284
--- /dev/null
+++ b/commands/blmpop.md
@@ -0,0 +1,8 @@
+`BLMPOP` is the blocking variant of `LMPOP`.
+
+When any of the lists contains elements, this command behaves exactly like `LMPOP`.
+When used inside a `MULTI`/`EXEC` block, this command behaves exactly like `LMPOP`.
+When all lists are empty, Redis will block the connection until another client pushes to it or until the `timeout` (a double value specifying the maximum number of seconds to block) elapses.
+A `timeout` of zero can be used to block indefinitely.
+
+See `LMPOP` for more information.
diff --git a/commands/blpop.md b/commands/blpop.md
index 82984e8be5..5ce6ffb447 100644
--- a/commands/blpop.md
+++ b/commands/blpop.md
@@ -1,24 +1,23 @@
-@complexity
-
-O(1)
-
-
-`BLPOP` is a blocking list pop primitive. It is the blocking version of `LPOP`
-because it blocks the connection when there are no elements to pop from any of
-the given lists. An element is popped from the head of the first list that is
-non-empty, with the given keys being checked in the order that they are given.
+`BLPOP` is a blocking list pop primitive.
+It is the blocking version of `LPOP` because it blocks the connection when there
+are no elements to pop from any of the given lists.
+An element is popped from the head of the first list that is non-empty, with the
+given keys being checked in the order that they are given.
 
 ## Non-blocking behavior
 
-When `BLPOP` is called, if at least one of the specified keys contain a
+When `BLPOP` is called, if at least one of the specified keys contains a
 non-empty list, an element is popped from the head of the list and returned to
 the caller together with the `key` it was popped from.
 
-Keys are checked in the order that they are given. Let's say that the key
-`list1` doesn't exist and `list2` and `list3` hold non-empty lists. Consider
-the following command:
+Keys are checked in the order that they are given.
+Let's say that the key `list1` doesn't exist and `list2` and `list3` hold
+non-empty lists.
+Consider the following command:
 
-    BLPOP list1 list2 list3 0
+```
+BLPOP list1 list2 list3 0
+```
 
 `BLPOP` guarantees to return an element from the list stored at `list2` (since
 it is the first non empty list when checking `list1`, `list2` and `list3` in
@@ -26,45 +25,107 @@ that order).
 
 ## Blocking behavior
 
-If none of the specified keys exist, `BLPOP` blocks
-the connection until another client performs an `LPUSH` or `RPUSH` operation
-against one of the keys.
+If none of the specified keys exist, `BLPOP` blocks the connection until another
+client performs an `LPUSH` or `RPUSH` operation against one of the keys.
 
 Once new data is present on one of the lists, the client returns with the name
 of the key unblocking it and the popped value.
 
-When `BLPOP` causes a client to block and a non-zero timeout is specified, the
-client will unblock returning a `nil` multi-bulk value when the specified
+When `BLPOP` causes a client to block and a non-zero timeout is specified,
+the client will unblock returning a `nil` multi-bulk value when the specified
 timeout has expired without a push operation against at least one of the
 specified keys.
 
-The timeout argument is interpreted as an integer value. A timeout of zero can
-be used to block indefinitely.
+**The timeout argument is interpreted as a double value specifying the maximum number of seconds to block**. A timeout of zero can be used to block indefinitely.
+
+## What key is served first? What client? What element? Priority ordering details.
+
+* If the client tries to block for multiple keys, but at least one key contains elements, the returned key / element pair is the first key from left to right that has one or more elements. In this case the client is not blocked. So for instance `BLPOP key1 key2 key3 key4 0`, assuming that both `key2` and `key4` are non-empty, will always return an element from `key2`.
+* If multiple clients are blocked for the same key, the first client to be served is the one that has been waiting the longest (the first that blocked for the key). Once a client is unblocked it does not retain any priority: when it blocks again with the next call to `BLPOP`, it will be served according to the number of clients already blocked for the same key, which will all be served before it (from the first to the last that blocked).
+* When a client is blocking for multiple keys at the same time, and elements are available at the same time in multiple keys (because a transaction or a Lua script added elements to multiple lists), the client will be unblocked using the first key that received a push operation (assuming it has enough elements to serve our client, as there may be other clients as well waiting for this key). Basically after the execution of every command Redis will process the list of all the keys that received data AND that have at least one client blocked. The list is ordered by new element arrival time, from the first key that received data to the last. For every key processed, Redis will serve all the clients waiting for that key in a FIFO fashion, as long as there are elements in this key. When the key is empty or there are no longer clients waiting for this key, the next key that received new data in the previous command / transaction / script is processed, and so forth.
+
+## Behavior of `!BLPOP` when multiple elements are pushed inside a list.
+
+There are times when a list can receive multiple elements in the context of the same conceptual command:
+
+* Variadic push operations such as `LPUSH mylist a b c`.
+* After an `EXEC` of a `MULTI` block with multiple push operations against the same list.
+* Executing a Lua Script with Redis 2.6 or newer.
+
+When multiple elements are pushed inside a list where there are clients blocking, the behavior is different for Redis 2.4 and Redis 2.6 or newer.
+
+For Redis 2.6 what happens is that the command performing multiple pushes is executed, and *only after* the execution of the command are the blocked clients served. Consider this sequence of commands:
+
+    Client A: BLPOP foo 0
+    Client B: LPUSH foo a b c
+
+If the above condition happens using a Redis 2.6 server or greater, Client **A** will be served with the `c` element, because after the `LPUSH` command the list contains `c,b,a`, so taking an element from the left means returning `c`.
+
+Instead Redis 2.4 works in a different way: clients are served *in the context* of the push operation, so as soon as `LPUSH foo a b c` starts pushing the first element to the list, it is delivered to Client **A**, which will receive `a` (the first element pushed).
+
+The behavior of Redis 2.4 creates a lot of problems when replicating or persisting data into the AOF file, so the much more generic and semantically simpler behavior was introduced in Redis 2.6 to prevent problems.
+
+Note that for the same reason a Lua script or a `MULTI/EXEC` block may push elements into a list and afterward **delete the list**. In this case the blocked clients will not be served at all and will continue to be blocked as long as no data is present on the list after the execution of a single command, transaction, or script.
+
+## `!BLPOP` inside a `!MULTI` / `!EXEC` transaction
+
+`BLPOP` can be used with pipelining (sending multiple commands and
+reading the replies in batch), however this setup makes sense almost solely
+when it is the last command of the pipeline.
+
+Using `BLPOP` inside a `MULTI` / `EXEC` block does not make a lot of sense
+as it would require blocking the entire server in order to execute the block
+atomically, which in turn does not allow other clients to perform a push
+operation. For this reason the behavior of `BLPOP` inside `MULTI` / `EXEC` when the list is empty is to return a `nil` multi-bulk reply, which is the same
+thing that happens when the timeout is reached.
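+
+For example, a minimal sketch of this behavior (assuming `mylist` is empty):
+
+```
+redis> MULTI
+OK
+redis> BLPOP mylist 0
+QUEUED
+redis> EXEC
+1) (nil)
+```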
+
+If you like science fiction, think of time flowing at infinite speed inside a
+`MULTI` / `EXEC` block...
+
+@examples
+
+```
+redis> DEL list1 list2
+(integer) 0
+redis> RPUSH list1 a b c
+(integer) 3
+redis> BLPOP list1 list2 0
+1) "list1"
+2) "a"
+```
 
-## Multiple clients blocking for the same keys
+## Reliable queues
 
-Multiple clients can block for the same key. They are put into
-a queue, so the first to be served will be the one that started to wait
-earlier, in a first-`!BLPOP` first-served fashion.
+When `BLPOP` returns an element to the client, it also removes the element from the list. This means that the element only exists in the context of the client: if the client crashes while processing the returned element, it is lost forever.
 
-## `!BLPOP` inside a `!MULTI`/`!EXEC` transaction
+This can be a problem with some applications where we want a more reliable messaging system. When this is the case, please check the `BRPOPLPUSH` command, which is a variant of `BLPOP` that adds the returned element to a target list before returning it to the client.
 
-`BLPOP` can be used with pipelining (sending multiple commands and reading the
-replies in batch), but it does not make sense to use `BLPOP` inside a
-`MULTI`/`EXEC` block. This would require blocking the entire server in order to
-execute the block atomically, which in turn does not allow other clients to
-perform a push operation.
+## Pattern: Event notification
 
-The behavior of `BLPOP` inside `MULTI`/`EXEC` when the list is empty is to
-return a `nil` multi-bulk reply, which is the same thing that happens when the
-timeout is reached. If you like science fiction, think of time flowing at
-infinite speed inside a `MULTI`/`EXEC` block.
+Using blocking list operations it is possible to build different blocking
+primitives.
+For instance in some applications you may need to block waiting for elements
+in a Redis Set, so that as soon as a new element is added to the Set, it is
+possible to retrieve it without resorting to polling.
+This would require a blocking version of `SPOP` that is not available, but using
+blocking list operations we can easily accomplish this task.
 
-@return
+The consumer will do:
 
-@multi-bulk-reply: specifically:
+```
+LOOP forever
+    WHILE SPOP(key) returns elements
+        ... process elements ...
+    END
+    BRPOP helper_key
+END
+```
 
-* A `nil` multi-bulk when no element could be popped and the timeout expired.
-* A two-element multi-bulk with the first element being the name of the key where an element
-  was popped and the second element being the value of the popped element.
+While on the producer side we'll simply use:
+```
+MULTI
+SADD key element
+LPUSH helper_key x
+EXEC
+```
diff --git a/commands/brpop.md b/commands/brpop.md
index 98c9607982..08806cff53 100644
--- a/commands/brpop.md
+++ b/commands/brpop.md
@@ -1,21 +1,23 @@
-@complexity
-
-O(1)
-
-
-`BRPOP` is a blocking list pop primitive. It is the blocking version of `RPOP`
-because it blocks the connection when there are no elements to pop from any of
-the given lists. An element is popped from the tail of the first list that is
-non-empty, with the given keys being checked in the order that they are given.
-
-See `BLPOP` for the exact semantics. `BRPOP` is identical to `BLPOP`, apart
-from popping from the tail of a list instead of the head of a list.
-
-@return
-
-@multi-bulk-reply: specifically:
-
-* A `nil` multi-bulk when no element could be popped and the timeout expired.
-* A two-element multi-bulk with the first element being the name of the key where an element
-  was popped and the second element being the value of the popped element.
-
+`BRPOP` is a blocking list pop primitive.
+It is the blocking version of `RPOP` because it blocks the connection when there
+are no elements to pop from any of the given lists.
+An element is popped from the tail of the first list that is non-empty, with the
+given keys being checked in the order that they are given.
+
+See the [BLPOP documentation][cb] for the exact semantics, since `BRPOP` is
+identical to `BLPOP` with the only difference being that it pops elements from
+the tail of a list instead of popping from the head.
+
+[cb]: /commands/blpop
+
+@examples
+
+```
+redis> DEL list1 list2
+(integer) 0
+redis> RPUSH list1 a b c
+(integer) 3
+redis> BRPOP list1 list2 0
+1) "list1"
+2) "c"
+```
diff --git a/commands/brpoplpush.md b/commands/brpoplpush.md
index 85fc6a6349..3989fd67fe 100644
--- a/commands/brpoplpush.md
+++ b/commands/brpoplpush.md
@@ -1,16 +1,16 @@
-@complexity
+`BRPOPLPUSH` is the blocking variant of `RPOPLPUSH`.
+When `source` contains elements, this command behaves exactly like `RPOPLPUSH`.
+When used inside a `MULTI`/`EXEC` block, this command behaves exactly like `RPOPLPUSH`.
+When `source` is empty, Redis will block the connection until another client
+pushes to it or until `timeout` is reached.
+A `timeout` of zero can be used to block indefinitely.
 
-O(1).
+See `RPOPLPUSH` for more information.
 
-`BRPOPLPUSH` is the blocking variant of `RPOPLPUSH`. When `source`
-contains elements, this command behaves exactly like `RPOPLPUSH`. When
-`source` is empty, Redis will block the connection until another client
-pushes to it or until `timeout` is reached. A `timeout` of zero can be
-used to block indefinitely.
+## Pattern: Reliable queue
 
-See `RPOPLPUSH` for more information.
+Please see the pattern description in the `RPOPLPUSH` documentation.
 
-@return
+## Pattern: Circular list
 
-@bulk-reply: the element being popped from `source` and pushed to
-`destination`. If `timeout` is reached, a @nil-reply is returned.
+Please see the pattern description in the `RPOPLPUSH` documentation.
diff --git a/commands/bzmpop.md b/commands/bzmpop.md
new file mode 100644
index 0000000000..50c25709c0
--- /dev/null
+++ b/commands/bzmpop.md
@@ -0,0 +1,8 @@
+`BZMPOP` is the blocking variant of `ZMPOP`.
+
+When any of the sorted sets contains elements, this command behaves exactly like `ZMPOP`.
+When used inside a `MULTI`/`EXEC` block, this command behaves exactly like `ZMPOP`.
+When all sorted sets are empty, Redis will block the connection until another client adds members to one of the keys or until the `timeout` (a double value specifying the maximum number of seconds to block) elapses.
+A `timeout` of zero can be used to block indefinitely.
+
+See `ZMPOP` for more information.
diff --git a/commands/bzpopmax.md b/commands/bzpopmax.md
new file mode 100644
index 0000000000..c6d726f4db
--- /dev/null
+++ b/commands/bzpopmax.md
@@ -0,0 +1,28 @@
+`BZPOPMAX` is the blocking variant of the sorted set `ZPOPMAX` primitive.
+
+It is the blocking version because it blocks the connection when there are no
+members to pop from any of the given sorted sets.
+A member with the highest score is popped from the first sorted set that is
+non-empty, with the given keys being checked in the order that they are given.
+
+The `timeout` argument is interpreted as a double value specifying the maximum
+number of seconds to block. A timeout of zero can be used to block indefinitely.
+
+See the [BZPOPMIN documentation][cb] for the exact semantics, since `BZPOPMAX`
+is identical to `BZPOPMIN` with the only difference being that it pops members
+with the highest scores instead of popping the ones with the lowest scores.
+
+[cb]: /commands/bzpopmin
+
+@examples
+
+```
+redis> DEL zset1 zset2
+(integer) 0
+redis> ZADD zset1 0 a 1 b 2 c
+(integer) 3
+redis> BZPOPMAX zset1 zset2 0
+1) "zset1"
+2) "c"
+3) "2"
+```
diff --git a/commands/bzpopmin.md b/commands/bzpopmin.md
new file mode 100644
index 0000000000..936154d051
--- /dev/null
+++ b/commands/bzpopmin.md
@@ -0,0 +1,28 @@
+`BZPOPMIN` is the blocking variant of the sorted set `ZPOPMIN` primitive.
+
+It is the blocking version because it blocks the connection when there are no
+members to pop from any of the given sorted sets.
+A member with the lowest score is popped from the first sorted set that is
+non-empty, with the given keys being checked in the order that they are given.
+
+The `timeout` argument is interpreted as a double value specifying the maximum
+number of seconds to block. A timeout of zero can be used to block indefinitely.
+
+See the [BLPOP documentation][cl] for the exact semantics, since `BZPOPMIN` is
+identical to `BLPOP` with the only difference being the data structure being
+popped from.
+
+[cl]: /commands/blpop
+
+@examples
+
+```
+redis> DEL zset1 zset2
+(integer) 0
+redis> ZADD zset1 0 a 1 b 2 c
+(integer) 3
+redis> BZPOPMIN zset1 zset2 0
+1) "zset1"
+2) "a"
+3) "0"
+```
diff --git a/commands/client-caching.md b/commands/client-caching.md
new file mode 100644
index 0000000000..e3bf90ee86
--- /dev/null
+++ b/commands/client-caching.md
@@ -0,0 +1,18 @@
+This command controls the tracking of the keys in the next command executed
+by the connection, when tracking is enabled in `OPTIN` or `OPTOUT` mode.
+Please check the
+[client side caching documentation](/topics/client-side-caching) for
+background information.
+
+When tracking is enabled in Redis, using the `CLIENT TRACKING` command, it is
+possible to specify the `OPTIN` or `OPTOUT` options, so that keys
+in read-only commands are not automatically remembered by the server to
+be invalidated later. When we are in `OPTIN` mode, we can enable the
+tracking of the keys in the next command by calling `CLIENT CACHING yes`
+immediately before it. Similarly, when we are in `OPTOUT` mode, and keys
+are normally tracked, we can prevent the keys in the next command from
+being tracked by calling `CLIENT CACHING no`.
+
+Basically the command sets a state in the connection, valid only
+for the next command execution, that modifies the behavior of client
+tracking.
diff --git a/commands/client-getname.md b/commands/client-getname.md
new file mode 100644
index 0000000000..0991c6eea9
--- /dev/null
+++ b/commands/client-getname.md
@@ -0,0 +1 @@
+The `CLIENT GETNAME` command returns the name of the current connection as set by `CLIENT SETNAME`. Since every new connection starts without an associated name, if no name was assigned a null bulk reply is returned.
diff --git a/commands/client-getredir.md b/commands/client-getredir.md
new file mode 100644
index 0000000000..dcc623c1a8
--- /dev/null
+++ b/commands/client-getredir.md
@@ -0,0 +1,7 @@
+This command returns the client ID we are redirecting our
+[tracking](/topics/client-side-caching) notifications to. We set a client
+to redirect to when using `CLIENT TRACKING` to enable tracking. However, in
+order to avoid forcing client library implementations to remember the
+ID notifications are redirected to, this command exists in order to improve
+introspection and allow clients to check later if redirection is active
+and towards which client ID.
diff --git a/commands/client-help.md b/commands/client-help.md
new file mode 100644
index 0000000000..6745f6cf24
--- /dev/null
+++ b/commands/client-help.md
@@ -0,0 +1 @@
+The `CLIENT HELP` command returns a helpful text describing the different subcommands.
diff --git a/commands/client-id.md b/commands/client-id.md
new file mode 100644
index 0000000000..53fcac5016
--- /dev/null
+++ b/commands/client-id.md
@@ -0,0 +1,14 @@
+The command just returns the ID of the current connection. Every connection
+ID has certain guarantees:
+
+1. It is never repeated, so if `CLIENT ID` returns the same number, the caller can be sure that the underlying client did not disconnect and reconnect the connection, but it is still the same connection.
+2. The ID is monotonically incremental. If the ID of a connection is greater than the ID of another connection, it is guaranteed that the second connection was established with the server at a later time.
+
+This command is especially useful together with `CLIENT UNBLOCK`, which was
+also introduced in Redis 5 together with `CLIENT ID`. Check the `CLIENT UNBLOCK` command page for a pattern involving the two commands.
+
+@examples
+
+```cli
+CLIENT ID
+```
diff --git a/commands/client-info.md b/commands/client-info.md
new file mode 100644
index 0000000000..5eea826ba8
--- /dev/null
+++ b/commands/client-info.md
@@ -0,0 +1,9 @@
+The command returns information and statistics about the current client connection in a mostly human-readable format.
+
+The reply format is identical to that of `CLIENT LIST`, and the content consists only of information about the current client.
+
+@examples
+
+```cli
+CLIENT INFO
+```
diff --git a/commands/client-kill.md b/commands/client-kill.md
new file mode 100644
index 0000000000..e54529a7c8
--- /dev/null
+++ b/commands/client-kill.md
@@ -0,0 +1,44 @@
+The `CLIENT KILL` command closes a given client connection. This command supports two formats, the old format:
+
+    CLIENT KILL addr:port
+
+The `ip:port` should match a line returned by the `CLIENT LIST` command (`addr` field).
+
+The new format:
+
+    CLIENT KILL <filter> <value> ... <filter> <value>
+
+With the new form it is possible to kill clients by different attributes
+instead of killing just by address. The following filters are available:
+
+* `CLIENT KILL ADDR ip:port`. This is exactly the same as the old three-arguments behavior.
+* `CLIENT KILL LADDR ip:port`. Kill all clients connected to the specified local (bind) address.
+* `CLIENT KILL ID client-id`. Allows killing a client by its unique `ID` field. Client `ID`'s are retrieved using the `CLIENT LIST` command.
+* `CLIENT KILL TYPE type`, where *type* is one of `normal`, `master`, `replica` and `pubsub`. This closes the connections of **all the clients** in the specified class. Note that clients blocked by the `MONITOR` command are considered to belong to the `normal` class.
+* `CLIENT KILL USER username`. Closes all the connections that are authenticated with the specified [ACL](/topics/acl) username; however, it returns an error if the username does not map to an existing ACL user.
+* `CLIENT KILL SKIPME yes/no`. By default this option is set to `yes`, that is, the client calling the command is not killed; setting this option to `no` has the effect of also killing the client calling the command.
+* `CLIENT KILL MAXAGE maxage`. Closes all the connections that are older than the specified age, in seconds.
+
+It is possible to provide multiple filters at the same time. The command will handle multiple filters via logical AND. For example:
+
+    CLIENT KILL addr 127.0.0.1:12345 type pubsub
+
+is valid and will kill only a pubsub client with the specified address. This format containing multiple filters is rarely useful currently.
+
+When the new form is used the command no longer returns `OK` or an error, but instead the number of killed clients, which may be zero.
+
+## CLIENT KILL and Redis Sentinel
+
+Recent versions of Redis Sentinel (Redis 2.8.12 or greater) use CLIENT KILL
+in order to kill clients when an instance is reconfigured, forcing
+clients to perform the handshake with one Sentinel again and update
+their configuration.
+
+## Notes
+
+Due to the single-threaded nature of Redis, it is not possible to
+kill a client connection while it is executing a command. From
+the client point of view, the connection can never be closed
+in the middle of the execution of a command. However, the client
+will notice the connection has been closed only when the
+next command is sent (and results in a network error).
diff --git a/commands/client-list.md b/commands/client-list.md
new file mode 100644
index 0000000000..653e75a8ef
--- /dev/null
+++ b/commands/client-list.md
@@ -0,0 +1,74 @@
+The `CLIENT LIST` command returns information and statistics about the client
+connections to the server in a mostly human-readable format.
+
+You can use one of the optional subcommands to filter the list. The `TYPE type` subcommand filters the list by clients' type, where *type* is one of `normal`, `master`, `replica`, and `pubsub`. Note that clients blocked by the `MONITOR` command belong to the `normal` class.
+
+The `ID` filter only returns entries for clients with IDs matching the `client-id` arguments.
+
+Here is the meaning of the fields:
+
+* `id`: a unique 64-bit client ID
+* `addr`: address/port of the client
+* `laddr`: address/port of the local address the client connected to (bind address)
+* `fd`: file descriptor corresponding to the socket
+* `name`: the name set by the client with `CLIENT SETNAME`
+* `age`: total duration of the connection in seconds
+* `idle`: idle time of the connection in seconds
+* `flags`: client flags (see below)
+* `db`: current database ID
+* `sub`: number of channel subscriptions
+* `psub`: number of pattern matching subscriptions
+* `ssub`: number of shard channel subscriptions. Added in Redis 7.0.3
+* `multi`: number of commands in a MULTI/EXEC context
+* `watch`: number of keys this client is currently watching. Added in Redis 8.0
+* `qbuf`: query buffer length (0 means no query pending)
+* `qbuf-free`: free space of the query buffer (0 means the buffer is full)
+* `argv-mem`: incomplete arguments for the next command (already extracted from the query buffer)
+* `multi-mem`: memory used up by buffered multi commands. Added in Redis 7.0
+* `obl`: output buffer length
+* `oll`: output list length (replies are queued in this list when the buffer is full)
+* `omem`: output buffer memory usage
+* `tot-mem`: total memory consumed by this client in its various buffers
+* `events`: file descriptor events (see below)
+* `cmd`: last command played
+* `user`: the authenticated username of the client
+* `redir`: client ID of the current client tracking redirection
+* `resp`: client RESP protocol version. Added in Redis 7.0
+
+The client flags can be a combination of:
+
+```
+A: connection to be closed ASAP
+b: the client is waiting in a blocking operation
+c: connection to be closed after writing entire reply
+d: a watched key has been modified - EXEC will fail
+e: the client is excluded from the client eviction mechanism
+i: the client is waiting for a VM I/O (deprecated)
+M: the client is a master
+N: no specific flag set
+O: the client is a client in MONITOR mode
+P: the client is a Pub/Sub subscriber
+r: the client is in readonly mode against a cluster node
+S: the client is a replica node connection to this instance
+u: the client is unblocked
+U: the client is connected via a Unix domain socket
+x: the client is in a MULTI/EXEC context
+t: the client enabled keys tracking in order to perform client side caching
+T: the client will not touch the LRU/LFU of the keys it accesses
+R: the client tracking target client is invalid
+B: the client enabled broadcast tracking mode
+```
+
+The file descriptor events can be:
+
+```
+r: the client socket is readable (event loop)
+w: the client socket is writable (event loop)
+```
+
+## Notes
+
+New fields are regularly added for debugging purposes. Some could be removed
+in the future. A version-safe Redis client using this command should parse
+the output accordingly (i.e. gracefully handling missing fields, skipping
+unknown fields).
diff --git a/commands/client-no-evict.md b/commands/client-no-evict.md
new file mode 100644
index 0000000000..2a18898e41
--- /dev/null
+++ b/commands/client-no-evict.md
@@ -0,0 +1,7 @@
+The `CLIENT NO-EVICT` command sets the [client eviction](/topics/clients#client-eviction) mode for the current connection.
+
+When turned on and client eviction is configured, the current connection will be excluded from the client eviction process even if we're above the configured client eviction threshold.
+
+When turned off, the current client will be re-included in the pool of potential clients to be evicted (and evicted if needed).
+
+See [client eviction](/topics/clients#client-eviction) for more details.
diff --git a/commands/client-no-touch.md b/commands/client-no-touch.md
new file mode 100644
index 0000000000..da723058bc
--- /dev/null
+++ b/commands/client-no-touch.md
@@ -0,0 +1,5 @@
+The `CLIENT NO-TOUCH` command controls whether commands sent by the client will alter the LRU/LFU of the keys they access.
+
+When turned on, the current client will not change LFU/LRU stats, unless it sends the `TOUCH` command.
+
+When turned off, the client touches LFU/LRU stats just as a normal client.
diff --git a/commands/client-pause.md b/commands/client-pause.md
new file mode 100644
index 0000000000..d7c381f9c0
--- /dev/null
+++ b/commands/client-pause.md
@@ -0,0 +1,40 @@
+`CLIENT PAUSE` is a connections control command able to suspend all the Redis clients for the specified amount of time (in milliseconds).
+
+The command performs the following actions:
+
+* It stops processing all the pending commands from normal and pub/sub clients for the given mode. However, interactions with replicas will continue normally. Note that clients are formally paused when they try to execute a command, so no work is done on the server side for inactive clients.
+* However it returns OK to the caller ASAP, so the execution of `CLIENT PAUSE` itself is not paused.
+* When the specified amount of time has elapsed, all the clients are unblocked: this will trigger the processing of all the commands accumulated in the query buffer of every client during the pause.
+
+Client pause currently supports two modes:
+
+* `ALL`: This is the default mode. All client commands are blocked.
+* `WRITE`: Clients are only blocked if they attempt to execute a write command.
+
+For the `WRITE` mode, some commands have special behavior:
+
+* `EVAL`/`EVALSHA`: Will block client for all scripts.
+* `PUBLISH`: Will block client.
+* `PFCOUNT`: Will block client.
+* `WAIT`: Acknowledgments will be delayed, so this command will appear blocked.
+
+This command is useful as it makes it possible to switch clients from one Redis instance to another in a controlled way. For example during an instance upgrade the system administrator could do the following:
+
+* Pause the clients using `CLIENT PAUSE`
+* Wait a few seconds to make sure the replicas processed the latest replication stream from the master.
+* Turn one of the replicas into a master.
+* Reconfigure clients to connect with the new master.
+
+Since Redis 6.2, the recommended mode for client pause is `WRITE`. This mode will stop all replication traffic, can be
+aborted with the `CLIENT UNPAUSE` command, and allows reconfiguring the old master without risking accepting writes after the
+failover. This is also the mode used during cluster failover.
+
+For versions before 6.2, it is possible to send `CLIENT PAUSE` in a MULTI/EXEC block together with the `INFO replication` command in order to get the current master offset at the time the clients are blocked. This way it is possible to wait for a specific offset on the replica side in order to make sure all the replication stream was processed.
+
+Since Redis 3.2.10 / 4.0.0, this command also prevents keys from being evicted or
+expired during the time clients are paused. This way the dataset is guaranteed
+to be static not just from the point of view of clients not being able to write, but also from the point of view of internal operations.
+
+## Behavior change history
+
+* `>= 3.2.0`: Client pause prevents key expiration and key eviction as well.
\ No newline at end of file
diff --git a/commands/client-reply.md b/commands/client-reply.md
new file mode 100644
index 0000000000..63608e60a9
--- /dev/null
+++ b/commands/client-reply.md
@@ -0,0 +1,7 @@
+Sometimes it can be useful for clients to completely disable replies from the Redis server. For example when the client sends fire-and-forget commands or performs a mass loading of data, or in caching contexts where new data is streamed constantly. In such contexts, using server time and bandwidth to send back replies to clients, which are going to be ignored, is considered wasteful.
+
+The `CLIENT REPLY` command controls whether the server will reply to the client's commands. The following modes are available:
+
+* `ON`. This is the default mode in which the server returns a reply to every command.
+* `OFF`. In this mode the server will not reply to client commands.
+* `SKIP`. This mode skips the reply of the command immediately after it.
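+
+The following sketch illustrates a hypothetical session (when replies are disabled the server sends nothing back, so the client must not wait for a reply):
+
+    CLIENT REPLY OFF     (no reply is sent, not even for this command)
+    SET key1 "hello"     (no reply is sent)
+    CLIENT REPLY ON      (replies are re-enabled; the server replies OK)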
diff --git a/commands/client-setinfo.md b/commands/client-setinfo.md
new file mode 100644
index 0000000000..64a66f3734
--- /dev/null
+++ b/commands/client-setinfo.md
@@ -0,0 +1,12 @@
+The `CLIENT SETINFO` command assigns various info attributes to the current connection which are displayed in the output of `CLIENT LIST` and `CLIENT INFO`.
+
+Client libraries are expected to pipeline this command after authentication on all connections
+and ignore failures since they could be connected to an older version that doesn't support them.
+
+Currently the supported attributes are:
+* `lib-name` - meant to hold the name of the client library that's in use.
+* `lib-ver` - meant to hold the client library's version.
+
+There is no limit to the length of these attributes. However it is not possible to use spaces, newlines, or other non-printable characters that would violate the format of the `CLIENT LIST` reply.
+
+Note that these attributes are **not** cleared by the RESET command.
diff --git a/commands/client-setname.md b/commands/client-setname.md
new file mode 100644
index 0000000000..f0a147921b
--- /dev/null
+++ b/commands/client-setname.md
@@ -0,0 +1,15 @@
+The `CLIENT SETNAME` command assigns a name to the current connection.
+
+The assigned name is displayed in the output of `CLIENT LIST` so that it is possible to identify the client that performed a given connection.
+
+For instance when Redis is used in order to implement a queue, producers and consumers of messages may want to set the name of the connection according to their role.
+
+There is no limit to the length of the name that can be assigned, other than the usual limits of the Redis string type (512 MB). However it is not possible to use spaces in the connection name as this would violate the format of the `CLIENT LIST` reply.
+
+It is possible to entirely remove the connection name by setting it to the empty string, which is not a valid connection name since it serves this specific purpose.
+
+The connection name can be inspected using `CLIENT GETNAME`.
+
+Every new connection starts without an assigned name.
+
+Tip: setting names to connections is a good way to debug connection leaks due to bugs in the application using Redis.
diff --git a/commands/client-tracking.md b/commands/client-tracking.md
new file mode 100644
index 0000000000..503d4dca62
--- /dev/null
+++ b/commands/client-tracking.md
@@ -0,0 +1,29 @@
+This command enables the tracking feature of the Redis server, which is used
+for [server assisted client side caching](/topics/client-side-caching).
+
+When tracking is enabled Redis remembers the keys that the connection
+requested, in order to send later invalidation messages when such keys are
+modified. Invalidation messages are sent in the same connection (only available
+when the RESP3 protocol is used) or redirected in a different connection
+(available also with RESP2 and Pub/Sub). A special *broadcasting* mode is
+available where clients participating in this protocol receive every
+notification by just subscribing to given key prefixes, regardless of the
+keys that they requested. Given the complexity of the arguments, please
+refer to [the main client side caching documentation](/topics/client-side-caching) for the details. This manual page is only a reference for the options of this subcommand.
+
+In order to enable tracking, use:
+
+    CLIENT TRACKING on ... options ...
+
+The feature will remain active in the current connection for all its life,
+unless tracking is turned off with `CLIENT TRACKING off` at some point.
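+
+As a minimal illustrative sketch (assuming a RESP3 connection, where invalidation messages are pushed on the same connection):
+
+    CLIENT TRACKING on
+    GET user:1
+    ... the server now remembers this connection requested user:1 ...
+    ... when another client modifies user:1, an invalidation message
+        for that key is pushed to this connection ...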
+
+The following is the list of options that modify the behavior of the
+command when enabling tracking:
+
+* `REDIRECT <id>`: send invalidation messages to the connection with the specified ID. The connection must exist. You can get the ID of a connection using `CLIENT ID`. If the connection we are redirecting to is terminated, when in RESP3 mode the connection with tracking enabled will receive `tracking-redir-broken` push messages in order to signal the condition.
+* `BCAST`: enable tracking in broadcasting mode. In this mode invalidation messages are reported for all the prefixes specified, regardless of the keys requested by the connection. Instead when the broadcasting mode is not enabled, Redis will track which keys are fetched using read-only commands, and will report invalidation messages only for such keys.
+* `PREFIX <prefix>`: for broadcasting, register a given key prefix, so that notifications will be provided only for keys starting with this string. This option can be given multiple times to register multiple prefixes. If broadcasting is enabled without this option, Redis will send notifications for every key. You can't delete a single prefix, but you can delete all prefixes by disabling and re-enabling tracking. Using this option adds the additional time complexity of O(N^2), where N is the total number of prefixes tracked.
+* `OPTIN`: when broadcasting is NOT active, normally don't track keys in read-only commands, unless they are called immediately after a `CLIENT CACHING yes` command.
+* `OPTOUT`: when broadcasting is NOT active, normally track keys in read-only commands, unless they are called immediately after a `CLIENT CACHING no` command.
+* `NOLOOP`: don't send notifications about keys modified by this connection itself.
diff --git a/commands/client-trackinginfo.md b/commands/client-trackinginfo.md
new file mode 100644
index 0000000000..6cc0df14ab
--- /dev/null
+++ b/commands/client-trackinginfo.md
@@ -0,0 +1,16 @@
+The command returns information about the current client connection's use of the [server assisted client side caching](/topics/client-side-caching) feature.
+
+Here's the list of tracking information sections and their respective values:
+
+* **flags**: A list of tracking flags used by the connection. The flags and their meanings are as follows:
+  * `off`: The connection isn't using server assisted client side caching.
+  * `on`: Server assisted client side caching is enabled for the connection.
+  * `bcast`: The client uses broadcasting mode.
+  * `optin`: The client does not cache keys by default.
+  * `optout`: The client caches keys by default.
+  * `caching-yes`: The next command will cache keys (exists only together with `optin`).
+  * `caching-no`: The next command won't cache keys (exists only together with `optout`).
+  * `noloop`: The client isn't notified about keys modified by itself.
+  * `broken_redirect`: The client ID used for redirection isn't valid anymore.
+* **redirect**: The client ID used for notifications redirection, or -1 when none.
+* **prefixes**: A list of key prefixes for which notifications are sent to the client.
diff --git a/commands/client-unblock.md b/commands/client-unblock.md
new file mode 100644
index 0000000000..36e70b2924
--- /dev/null
+++ b/commands/client-unblock.md
@@ -0,0 +1,51 @@
+This command can unblock, from a different connection, a client blocked in a blocking operation, such as, for instance, `BRPOP`, `XREAD` or `WAIT`.
+
+By default the client is unblocked as if the timeout of the command was
+reached; however, if an additional (and optional) argument is passed, it is possible to specify the unblocking behavior, which can be **TIMEOUT** (the default) or **ERROR**. If **ERROR** is specified, the behavior is to unblock the client returning as an error the fact that the client was force-unblocked. Specifically the client will receive the following error:
+
+    -UNBLOCKED client unblocked via CLIENT UNBLOCK
+
+Note: as usual, it is not guaranteed that the error text remains
+the same; however, the error code will remain `-UNBLOCKED`.
+
+This command is especially useful when we are monitoring many keys with
+a limited number of connections. For instance we may want to monitor multiple
+streams with `XREAD` without using more than N connections. However at some
+point the consumer process is informed that there is one more stream key
+to monitor. In order to avoid using more connections, the best behavior would
+be to stop the blocking command in one of the connections in the pool, add
+the new key, and issue the blocking command again.
+
+To obtain this behavior the following pattern is used. The process uses
+an additional *control connection* in order to send the `CLIENT UNBLOCK` command
+if needed. In the meantime, before running the blocking operation on the other
+connections, the process runs `CLIENT ID` in order to get the ID associated
+with that connection. When a new key should be added, or when a key should
+no longer be monitored, the relevant connection blocking command is aborted
+by sending `CLIENT UNBLOCK` in the control connection. The blocking command
+will return and can be finally reissued.
+
+This example shows the application in the context of Redis streams, however
+the pattern is a general one and can be applied to other cases.
+
+@examples
+
+```
+Connection A (blocking connection):
+> CLIENT ID
+2934
+> BRPOP key1 key2 key3 0
+(client is blocked)
+
+... Now we want to add a new key ...
+
+Connection B (control connection):
+> CLIENT UNBLOCK 2934
+1
+
+Connection A (blocking connection):
+... BRPOP reply with timeout ...
+NULL
+> BRPOP key1 key2 key3 key4 0
+(client is blocked again)
+```
diff --git a/commands/client-unpause.md b/commands/client-unpause.md
new file mode 100644
index 0000000000..9878ee1b3c
--- /dev/null
+++ b/commands/client-unpause.md
@@ -0,0 +1 @@
+`CLIENT UNPAUSE` is used to resume command processing for all clients that were paused by `CLIENT PAUSE`.
diff --git a/commands/client.md b/commands/client.md
new file mode 100644
index 0000000000..fdfd0e8b84
--- /dev/null
+++ b/commands/client.md
@@ -0,0 +1,3 @@
+This is a container command for client connection commands.
+
+To see the list of available commands you can call `CLIENT HELP`.
\ No newline at end of file
diff --git a/commands/cluster-addslots.md b/commands/cluster-addslots.md
new file mode 100644
index 0000000000..101f671506
--- /dev/null
+++ b/commands/cluster-addslots.md
@@ -0,0 +1,47 @@
+This command is useful in order to modify a node's view of the cluster
+configuration. Specifically it assigns a set of hash slots to the node
+receiving the command. If the command is successful, the node will map
+the specified hash slots to itself, and will start broadcasting the new
+configuration.
+
+However note that:
+
+1. The command only works if all the specified slots are, from the point of view of the node receiving the command, currently not assigned. A node will refuse to take ownership of slots that already belong to some other node (including itself).
+2. The command fails if the same slot is specified multiple times.
+3. As a side effect of the command execution, if a slot among the ones specified as argument is set as `importing`, this state gets cleared once the node assigns the (previously unbound) slot to itself.
+
+## Example
+
+For example the following command assigns slots 1 2 3 to the node receiving
+the command:
+
+    > CLUSTER ADDSLOTS 1 2 3
+    OK
+
+However trying to execute it again results in an error since the slots
+are already assigned:
+
+    > CLUSTER ADDSLOTS 1 2 3
+    ERR Slot 1 is already busy
+
+## Usage in Redis Cluster
+
+This command only works in cluster mode and is useful in the following
+Redis Cluster operations:
+
+1. To create a new cluster, `CLUSTER ADDSLOTS` is used to initially set up master nodes, splitting the available hash slots among them.
+2. In order to fix a broken cluster where certain slots are unassigned.
+
+## Information about slots propagation and warnings
+
+Note that once a node assigns a set of slots to itself, it will start
+propagating this information in heartbeat packet headers. However the
+other nodes will accept the information only if the slot is not
+already bound to another node, or if the configuration epoch of the
+node advertising the new hash slot is greater than that of the node currently listed
+in the table.
+
+This means that this command should be used with care only by applications
+orchestrating Redis Cluster, like `redis-cli`; if used
+out of the right context, the command can leave the cluster in a wrong state or cause
+data loss.
diff --git a/commands/cluster-addslotsrange.md b/commands/cluster-addslotsrange.md
new file mode 100644
index 0000000000..1fc1cd3d8b
--- /dev/null
+++ b/commands/cluster-addslotsrange.md
@@ -0,0 +1,23 @@
+The `CLUSTER ADDSLOTSRANGE` command is similar to the `CLUSTER ADDSLOTS` command in that they both assign hash slots to nodes.
+
+The difference between the two commands is that `CLUSTER ADDSLOTS` takes a list of slots to assign to the node, while `CLUSTER ADDSLOTSRANGE` takes a list of slot ranges (specified by start and end slots) to assign to the node.
+
+## Example
+
+To assign slots 1 2 3 4 5 to the node, the `CLUSTER ADDSLOTS` command is:
+
+    > CLUSTER ADDSLOTS 1 2 3 4 5
+    OK
+
+The same operation can be completed with the following `CLUSTER ADDSLOTSRANGE` command:
+
+    > CLUSTER ADDSLOTSRANGE 1 5
+    OK
+
+
+## Usage in Redis Cluster
+
+This command only works in cluster mode and is useful in the following Redis Cluster operations:
+
+1. To create a new cluster, `CLUSTER ADDSLOTSRANGE` is used to initially set up master nodes, splitting the available hash slots among them.
+2. In order to fix a broken cluster where certain slots are unassigned.
diff --git a/commands/cluster-bumpepoch.md b/commands/cluster-bumpepoch.md
new file mode 100644
index 0000000000..68bce48797
--- /dev/null
+++ b/commands/cluster-bumpepoch.md
@@ -0,0 +1,5 @@
+Advances the cluster config epoch.
+
+The `CLUSTER BUMPEPOCH` command triggers an increment to the cluster's config epoch from the connected node. The epoch will be incremented if the node's config epoch is zero, or if it is less than the cluster's greatest epoch.
+
+**Note:** config epoch management is performed internally by the cluster, and relies on obtaining a consensus of nodes. The `CLUSTER BUMPEPOCH` command attempts to increment the config epoch **WITHOUT** getting the consensus, so using it may violate the "last failover wins" rule. Use it with caution.
diff --git a/commands/cluster-count-failure-reports.md b/commands/cluster-count-failure-reports.md
new file mode 100644
index 0000000000..399a6bc000
--- /dev/null
+++ b/commands/cluster-count-failure-reports.md
@@ -0,0 +1,18 @@
+The command returns the number of *failure reports* for the specified node.
+Failure reports are the mechanism Redis Cluster uses in order to promote a
+`PFAIL` state, meaning that a node is not reachable, to a `FAIL` state,
+meaning that the majority of masters in the cluster agreed within
+a window of time that the node is not reachable.
+
+A few more details:
+
+* A node flags another node with `PFAIL` when the node is not reachable for a time greater than the configured *node timeout*, which is a fundamental configuration parameter of a Redis Cluster.
+* Nodes in `PFAIL` state are provided in gossip sections of heartbeat packets.
+* Every time a node processes gossip packets from other nodes, it creates (and refreshes the TTL if needed) **failure reports**, remembering that a given node said another given node is in `PFAIL` condition.
+* Each failure report has a time to live of two times the *node timeout* time.
+* If at a given time a node has another node flagged with `PFAIL`, and at the same time collected the majority of other master nodes' *failure reports* about this node (including itself if it is a master), then it elevates the failure state of the node from `PFAIL` to `FAIL`, and broadcasts a message forcing all the nodes that can be reached to flag the node as `FAIL`.
+
+This command returns the number of failure reports for the specified node which are currently not expired (so received within two times the *node timeout* time). The count does not include what the node we are asking believes about the node ID we pass as an argument; the count *only* includes the failure reports the node received from other nodes.
+
+This command is mainly useful for debugging, when the failure detector of
+Redis Cluster is not operating as we believe it should.
diff --git a/commands/cluster-countkeysinslot.md b/commands/cluster-countkeysinslot.md
new file mode 100644
index 0000000000..2a6e6af4f8
--- /dev/null
+++ b/commands/cluster-countkeysinslot.md
@@ -0,0 +1,9 @@
+Returns the number of keys in the specified Redis Cluster hash slot. The
+command only queries the local data set, so contacting a node
+that is not serving the specified hash slot will always result in a count of
+zero being returned.
+
+```
+> CLUSTER COUNTKEYSINSLOT 7000
+(integer) 50341
+```
diff --git a/commands/cluster-delslots.md b/commands/cluster-delslots.md
new file mode 100644
index 0000000000..4fdb4a5180
--- /dev/null
+++ b/commands/cluster-delslots.md
@@ -0,0 +1,43 @@
+In Redis Cluster, each node keeps track of which master is serving
+a particular hash slot.
+
+The `CLUSTER DELSLOTS` command asks a particular Redis Cluster node to
+forget which master is serving the hash slots specified as arguments.
+
+In the context of a node that has received a `CLUSTER DELSLOTS` command and
+has consequently removed the associations for the passed hash slots,
+we say those hash slots are *unbound*. Note that the existence of
+unbound hash slots occurs naturally when a node has not been
+configured to handle them (something that can be done with the
+`CLUSTER ADDSLOTS` command) and has not received any information about
+who owns those hash slots (something that it can learn from heartbeat
+or update messages).
+
+If a node with unbound hash slots receives a heartbeat packet from
+another node that claims to be the owner of some of those hash
+slots, the association is established instantly. Moreover, if a
+heartbeat or update message is received with a configuration epoch
+greater than the node's own, the association is re-established.
+
+However, note that:
+
+1. The command only works if all the specified slots are already
+associated with some node.
+2. The command fails if the same slot is specified multiple times.
+3. As a side effect of the command execution, the node may go into
+*down* state because not all hash slots are covered.
+
+## Example
+
+The following command removes the association for slots 5000 and
+5001 from the node receiving the command:
+
+    > CLUSTER DELSLOTS 5000 5001
+    OK
+
+## Usage in Redis Cluster
+
+This command only works in cluster mode and may be useful for
+debugging and in order to manually orchestrate a cluster configuration
+when a new cluster is created. It is currently not used by `redis-cli`,
+and mainly exists for API completeness.
diff --git a/commands/cluster-delslotsrange.md b/commands/cluster-delslotsrange.md
new file mode 100644
index 0000000000..af902ff586
--- /dev/null
+++ b/commands/cluster-delslotsrange.md
@@ -0,0 +1,27 @@
+The `CLUSTER DELSLOTSRANGE` command is similar to the `CLUSTER DELSLOTS` command in that they both remove hash slots from the node.
+The difference is that `CLUSTER DELSLOTS` takes a list of hash slots to remove from the node, while `CLUSTER DELSLOTSRANGE` takes a list of slot ranges (specified by start and end slots) to remove from the node.
+
+## Example
+
+To remove slots 1 2 3 4 5 from the node, the `CLUSTER DELSLOTS` command is:
+
+    > CLUSTER DELSLOTS 1 2 3 4 5
+    OK
+
+The same operation can be completed with the following `CLUSTER DELSLOTSRANGE` command:
+
+    > CLUSTER DELSLOTSRANGE 1 5
+    OK
+
+However, note that:
+
+1. The command only works if all the specified slots are already associated with the node.
+2. The command fails if the same slot is specified multiple times.
+3. As a side effect of the command execution, the node may go into *down* state because not all hash slots are covered.
+
+## Usage in Redis Cluster
+
+This command only works in cluster mode and may be useful for
+debugging and in order to manually orchestrate a cluster configuration
+when a new cluster is created. It is currently not used by `redis-cli`,
+and mainly exists for API completeness.
diff --git a/commands/cluster-failover.md b/commands/cluster-failover.md
new file mode 100644
index 0000000000..85834cba67
--- /dev/null
+++ b/commands/cluster-failover.md
@@ -0,0 +1,63 @@
+This command, which can only be sent to a Redis Cluster replica node, forces
+the replica to start a manual failover of its master instance.
+
+A manual failover is a special kind of failover that is usually executed when
+there are no actual failures, but we wish to swap the current master with one
+of its replicas (which is the node we send the command to), in a safe way,
+without any window for data loss. It works in the following way:
+
+1. The replica tells the master to stop processing queries from clients.
+2. The master replies to the replica with the current *replication offset*.
+3. The replica waits for the replication offset to match on its side, to make sure it processed all the data from the master before it continues.
+4. The replica starts a failover, obtains a new configuration epoch from the majority of the masters, and broadcasts the new configuration.
+5. The old master receives the configuration update: it unblocks its clients and starts replying with redirection messages so that they'll continue the conversation with the new master.
+
+This way clients are moved away from the old master to the new master
+atomically and only when the replica that is turning into the new master
+has processed all of the replication stream from the old master.
+
+## FORCE option: manual failover when the master is down
+
+The command behavior can be modified by two options: **FORCE** and **TAKEOVER**.
+
+If the **FORCE** option is given, the replica does not perform any handshake
+with the master, which may not be reachable, but instead just starts a
+failover ASAP starting from point 4. This is useful when we want to start
+a manual failover while the master is no longer reachable.
+
+However, even using **FORCE** we still need the majority of masters to be available
+in order to authorize the failover and generate a new configuration epoch
+for the replica that is going to become master.
+
+## TAKEOVER option: manual failover without cluster consensus
+
+There are situations where this is not enough, and we want a replica to failover
+without any agreement with the rest of the cluster. A real-world use case
+for this is to mass promote replicas in a different data center to masters
+in order to perform a data center switch, while all the masters are down
+or partitioned away.
+
+The **TAKEOVER** option implies everything **FORCE** implies, but also does
+not use any cluster authorization in order to failover. A replica receiving
+`CLUSTER FAILOVER TAKEOVER` will instead:
+
+1. Generate a new `configEpoch` unilaterally, just taking the current greatest epoch available and incrementing it if its local configuration epoch is not already the greatest.
+2. Assign itself all the hash slots of its master, and propagate the new configuration to every node which is reachable ASAP, and eventually to every other node.
+
+Note that **TAKEOVER violates the last-failover-wins principle** of Redis Cluster, since the configuration epoch generated by the replica violates the normal generation of configuration epochs in several ways:
+
+1. There is no guarantee that it is actually the highest configuration epoch, since, for example, we can use the **TAKEOVER** option within a minority, and no message exchange is performed to generate the new configuration epoch.
+2. If we generate a configuration epoch which happens to collide with another instance, eventually our configuration epoch, or the one of another instance with our same epoch, will be moved away using the *configuration epoch collision resolution algorithm*.
+
+Because of this the **TAKEOVER** option should be used with care.
+
+## Implementation details and notes
+
+* `CLUSTER FAILOVER`, unless the **TAKEOVER** option is specified, does not execute a failover synchronously.
+  It only *schedules* a manual failover, bypassing the failure detection stage.
+* An `OK` reply is no guarantee that the failover will succeed.
+* A replica can only be promoted to a master if it is known as a replica by a majority of the masters in the cluster.
+  If the replica is a new node that has just been added to the cluster (for example after upgrading it), it may not yet be known to all the masters in the cluster.
+  To check that the masters are aware of a new replica, you can send `CLUSTER NODES` or `CLUSTER REPLICAS` to each of the master nodes and check that it appears as a replica, before sending `CLUSTER FAILOVER` to the replica.
+* To check that the failover has actually happened you can use `ROLE`, `INFO REPLICATION` (which indicates "role:master" after a successful failover), or `CLUSTER NODES` to verify that the state of the cluster has changed sometime after the command was sent.
+* To check if the failover has failed, check the replica's log for "Manual failover timed out", which is logged if the replica has given up after a few seconds.
diff --git a/commands/cluster-flushslots.md b/commands/cluster-flushslots.md
new file mode 100644
index 0000000000..0c8984ff23
--- /dev/null
+++ b/commands/cluster-flushslots.md
@@ -0,0 +1,3 @@
+Deletes all slots from a node.
+
+The `CLUSTER FLUSHSLOTS` command deletes all information about slots from the connected node. It can only be called when the database is empty.
diff --git a/commands/cluster-forget.md b/commands/cluster-forget.md
new file mode 100644
index 0000000000..afc06414df
--- /dev/null
+++ b/commands/cluster-forget.md
@@ -0,0 +1,53 @@
+The command is used in order to remove a node, specified via its node ID,
+from the set of *known nodes* of the Redis Cluster node receiving the command.
+In other words the specified node is removed from the *nodes table* of the
+node receiving the command.
+
+Because when a given node is part of the cluster, all the other nodes
+participating in the cluster know about it, in order for a node to be
+completely removed from a cluster, the `CLUSTER FORGET` command must be
+sent to all the remaining nodes, regardless of whether they are masters
+or replicas.
+
+However the command cannot simply drop the node from the internal node
+table of the node receiving the command; it also implements a ban-list, not
+allowing the same node to be added again as a side effect of processing the
+*gossip section* of the heartbeat packets received from other nodes.
+
+## Details on why the ban-list is needed
+
+In the following example we'll show why the command must not just remove
+a given node from the nodes table, but also prevent it from being re-inserted
+for some time.
+
+Let's assume we have four nodes, A, B, C and D. In order to
+end up with just a three-node cluster A, B, C we may follow these steps:
+
+1. Reshard all the hash slots from D to nodes A, B, C.
+2. D is now empty, but still listed in the nodes table of A, B and C.
+3. We contact A, and send `CLUSTER FORGET D`.
+4. B sends node A a heartbeat packet, where node D is listed.
+5. A no longer knows node D (see step 3), so it starts a handshake with D.
+6. D ends up re-added to the nodes table of A.
+
+As you can see, removing a node this way is fragile; we need to send
+`CLUSTER FORGET` commands to all the nodes ASAP, hoping no
+gossip sections are processed in the meantime. Because of this problem the
+command implements a ban-list with an expire time for each entry.
+
+So what the command really does is:
+
+1. The specified node gets removed from the nodes table.
+2. The node ID of the removed node gets added to the ban-list, for 1 minute.
+3. The node will skip all the node IDs listed in the ban-list when processing gossip sections received in heartbeat packets from other nodes.
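+
+For example, removing node D from the example above might look like the following hypothetical session (the node ID is made up), repeated against A, B and C:
+
+```
+> CLUSTER FORGET b7d8b47b2cbd8dd2ad0a12398ac40c33d0052b9e
+OK
+```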
+
+The one minute ban thus gives us a 60 second window to inform all the nodes in the cluster that
+we want to remove a node.
+
+## Special conditions not allowing the command execution
+
+The command does not succeed and returns an error in the following cases:
+
+1. The specified node ID is not found in the nodes table.
+2. The node receiving the command is a replica, and the specified node ID identifies its current master.
+3. The node ID identifies the same node we are sending the command to.
diff --git a/commands/cluster-getkeysinslot.md b/commands/cluster-getkeysinslot.md
new file mode 100644
index 0000000000..dec113df05
--- /dev/null
+++ b/commands/cluster-getkeysinslot.md
@@ -0,0 +1,16 @@
+The command returns an array of key names stored in the contacted node and
+hashing to the specified hash slot. The maximum number of keys to return
+is specified via the `count` argument, so that it is possible for the user
+of this API to batch-process keys.
+
+The main usage of this command is during rehashing of cluster slots from one
+node to another. The way the rehashing is performed is exposed in the Redis
+Cluster specification, or in a simpler-to-digest form, as an appendix
+of the `CLUSTER SETSLOT` command documentation.
+
+```
+> CLUSTER GETKEYSINSLOT 7000 3
+1) "key_39015"
+2) "key_89793"
+3) "key_92937"
+```
diff --git a/commands/cluster-help.md b/commands/cluster-help.md
new file mode 100644
index 0000000000..85579bd446
--- /dev/null
+++ b/commands/cluster-help.md
@@ -0,0 +1 @@
+The `CLUSTER HELP` command returns a helpful text describing the different subcommands.
diff --git a/commands/cluster-info.md b/commands/cluster-info.md
new file mode 100644
index 0000000000..3f49280aca
--- /dev/null
+++ b/commands/cluster-info.md
@@ -0,0 +1,48 @@
+`CLUSTER INFO` provides `INFO` style information about Redis Cluster vital parameters.
+The following fields are always present in the reply:
+
+```
+cluster_state:ok
+cluster_slots_assigned:16384
+cluster_slots_ok:16384
+cluster_slots_pfail:0
+cluster_slots_fail:0
+cluster_known_nodes:6
+cluster_size:3
+cluster_current_epoch:6
+cluster_my_epoch:2
+cluster_stats_messages_sent:1483972
+cluster_stats_messages_received:1483968
+total_cluster_links_buffer_limit_exceeded:0
+```
+
+* `cluster_state`: State is `ok` if the node is able to receive queries. `fail` if there is at least one hash slot which is unbound (no node associated), in error state (the node serving it is flagged with the FAIL flag), or if the majority of masters can't be reached by this node.
+* `cluster_slots_assigned`: Number of slots which are associated to some node (not unbound). This number should be 16384 for the node to work properly, which means that each hash slot should be mapped to a node.
+* `cluster_slots_ok`: Number of hash slots mapping to a node not in `FAIL` or `PFAIL` state.
+* `cluster_slots_pfail`: Number of hash slots mapping to a node in `PFAIL` state. Note that those hash slots still work correctly, as long as the `PFAIL` state is not promoted to `FAIL` by the failure detection algorithm. `PFAIL` only means that we are currently not able to talk with the node, which may be just a transient error.
+* `cluster_slots_fail`: Number of hash slots mapping to a node in `FAIL` state. If this number is not zero the node is not able to serve queries unless `cluster-require-full-coverage` is set to `no` in the configuration.
+* `cluster_known_nodes`: The total number of known nodes in the cluster, including nodes in `HANDSHAKE` state that may not currently be proper members of the cluster.
+* `cluster_size`: The number of master nodes serving at least one hash slot in the cluster.
+* `cluster_current_epoch`: The local `Current Epoch` variable. This is used in order to create unique increasing version numbers during fail overs.
+* `cluster_my_epoch`: The `Config Epoch` of the node we are talking with. This is the current configuration version assigned to this node.
+* `cluster_stats_messages_sent`: Number of messages sent via the cluster node-to-node binary bus.
+* `cluster_stats_messages_received`: Number of messages received via the cluster node-to-node binary bus.
+* `total_cluster_links_buffer_limit_exceeded`: Accumulated count of cluster links freed due to exceeding the `cluster-link-sendbuf-limit` configuration.
+
+The following message-related fields may be included in the reply if their value is not 0.
+Each message type includes statistics on the number of messages sent and received.
+Here is an explanation of these fields:
+
+* `cluster_stats_messages_ping_sent` and `cluster_stats_messages_ping_received`: Cluster bus PING (not to be confused with the client command `PING`).
+* `cluster_stats_messages_pong_sent` and `cluster_stats_messages_pong_received`: PONG (reply to PING).
+* `cluster_stats_messages_meet_sent` and `cluster_stats_messages_meet_received`: Handshake message sent to a new node, either through gossip or `CLUSTER MEET`.
+* `cluster_stats_messages_fail_sent` and `cluster_stats_messages_fail_received`: Mark node xxx as failing.
+* `cluster_stats_messages_publish_sent` and `cluster_stats_messages_publish_received`: Pub/Sub Publish propagation, see [Pubsub](/topics/pubsub#pubsub).
+* `cluster_stats_messages_auth-req_sent` and `cluster_stats_messages_auth-req_received`: Replica initiated leader election to replace its master.
+* `cluster_stats_messages_auth-ack_sent` and `cluster_stats_messages_auth-ack_received`: Message indicating a vote during leader election.
+* `cluster_stats_messages_update_sent` and `cluster_stats_messages_update_received`: Another node's slot configuration.
+* `cluster_stats_messages_mfstart_sent` and `cluster_stats_messages_mfstart_received`: Pause clients for manual failover.
+* `cluster_stats_messages_module_sent` and `cluster_stats_messages_module_received`: Module cluster API message.
+* `cluster_stats_messages_publishshard_sent` and `cluster_stats_messages_publishshard_received`: Pub/Sub Publish shard propagation, see [Sharded Pubsub](/topics/pubsub#sharded-pubsub).
+
+More information about the Current Epoch and Config Epoch variables is available in the [Redis Cluster specification document](/topics/cluster-spec#cluster-current-epoch).
diff --git a/commands/cluster-keyslot.md b/commands/cluster-keyslot.md
new file mode 100644
index 0000000000..1556874255
--- /dev/null
+++ b/commands/cluster-keyslot.md
@@ -0,0 +1,20 @@
+Returns an integer identifying the hash slot the specified key hashes to.
+This command is mainly useful for debugging and testing, since it exposes
+via an API the underlying Redis implementation of the hashing algorithm.
+Example use cases for this command:
+
+1. Client libraries may use Redis in order to test their own hashing algorithm, generating random keys and hashing them both with their local implementation and with the Redis `CLUSTER KEYSLOT` command, then checking that the results are the same.
+2. Humans may use this command in order to check which hash slot, and therefore which Redis Cluster node, is responsible for a given key.
+
+## Example
+
+```
+> CLUSTER KEYSLOT somekey
+(integer) 11058
+> CLUSTER KEYSLOT foo{hash_tag}
+(integer) 2515
+> CLUSTER KEYSLOT bar{hash_tag}
+(integer) 2515
+```
+
+Note that the command implements the full hashing algorithm, including support for **hash tags**: the special property of the Redis Cluster key hashing algorithm of hashing just what is between `{` and `}`, if such a pattern is found inside the key name, in order to force multiple keys to be handled by the same node.
diff --git a/commands/cluster-links.md b/commands/cluster-links.md
new file mode 100644
index 0000000000..12e7630e68
--- /dev/null
+++ b/commands/cluster-links.md
@@ -0,0 +1,44 @@
+Each node in a Redis Cluster maintains a pair of long-lived TCP links with each peer in the cluster: one for sending outbound messages towards the peer and one for receiving inbound messages from the peer.
+
+`CLUSTER LINKS` outputs information about all such peer links as an array, where each array element is a map that contains attributes and their values for an individual link.
+
+@examples
+
+The following is an example output:
+
+```
+> CLUSTER LINKS
+1) 1) "direction"
+   2) "to"
+   3) "node"
+   4) "8149d745fa551e40764fecaf7cab9dbdf6b659ae"
+   5) "create-time"
+   6) (integer) 1639442739375
+   7) "events"
+   8) "rw"
+   9) "send-buffer-allocated"
+   10) (integer) 4512
+   11) "send-buffer-used"
+   12) (integer) 0
+2) 1) "direction"
+   2) "from"
+   3) "node"
+   4) "8149d745fa551e40764fecaf7cab9dbdf6b659ae"
+   5) "create-time"
+   6) (integer) 1639442739411
+   7) "events"
+   8) "r"
+   9) "send-buffer-allocated"
+   10) (integer) 0
+   11) "send-buffer-used"
+   12) (integer) 0
+```
+
+Each map is composed of the following attributes of the corresponding cluster link and their values:
+
+1. `direction`: This link is established by the local node `to` the peer, or accepted by the local node `from` the peer.
+2. `node`: The node id of the peer.
+3. `create-time`: Creation time of the link. (In the case of a `to` link, this is the time when the TCP link is created by the local node, not the time when it is actually established.)
+4. `events`: Events currently registered for the link. `r` means readable event, `w` means writable event.
+5. `send-buffer-allocated`: Allocated size of the link's send buffer, which is used to buffer outgoing messages toward the peer.
+6. `send-buffer-used`: Size of the portion of the link's send buffer that is currently holding data (messages).
diff --git a/commands/cluster-meet.md b/commands/cluster-meet.md
new file mode 100644
index 0000000000..0ce7c212f6
--- /dev/null
+++ b/commands/cluster-meet.md
@@ -0,0 +1,38 @@
+`CLUSTER MEET` is used in order to connect different Redis nodes with cluster
+support enabled, into a working cluster.
+
+The basic idea is that nodes by default don't trust each other, and are
+considered unknown, so that it is unlikely that different cluster nodes will
+mix into a single one because of system administration errors or network
+address modifications.
+
+So in order for a given node to accept another one into the list of nodes
+composing a Redis Cluster, there are only two ways:
+
+1. The system administrator sends a `CLUSTER MEET` command to force a node to meet another one.
+2. An already known node sends a list of nodes in the gossip section that we are not aware of. If the receiving node trusts the sending node as a known node, it will process the gossip section and send a handshake to the nodes that are still not known.
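+
+For example, the first of the two ways above could look like the following hypothetical session (the address and port are made up):
+
+```
+> CLUSTER MEET 192.168.1.5 6379
+OK
+```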
+
+Note that Redis Cluster needs to form a full mesh (each node is connected with each other node), but in order to create a cluster, there is no need to send all the `CLUSTER MEET` commands needed to form the full mesh. What matters is to send enough `CLUSTER MEET` messages so that each node can reach each other node through a *chain of known nodes*. Thanks to the exchange of gossip information in heartbeat packets, the missing links will be created.
+
+So, if we link node A with node B via `CLUSTER MEET`, and B with C, A and C will find their way to handshake and create a link.
+
+Another example: if we imagine a cluster formed of the following four nodes called A, B, C and D, we may send just the following set of commands to A:
+
+1. `CLUSTER MEET B-ip B-port`
+2. `CLUSTER MEET C-ip C-port`
+3. `CLUSTER MEET D-ip D-port`
+
+As a side effect of `A` knowing and being known by all the other nodes, it will send gossip sections in the heartbeat packets that will allow each other node to create a link with each other one, forming a full mesh in a matter of seconds, even if the cluster is large.
+
+Moreover `CLUSTER MEET` does not need to be reciprocal. If I send the command to A in order to join B, I don't need to also send it to B in order to join A.
+
+If the optional `cluster_bus_port` argument is not provided, the default of port + 10000 will be used.
+
+## Implementation details: MEET and PING packets
+
+When a given node receives a `CLUSTER MEET` message, the node specified in the
+command still does not know the node we sent the command to. So in order for
+the node to force the receiver to accept it as a trusted node, it sends a
+`MEET` packet instead of a `PING` packet. The two packets have exactly the
+same format, but the former forces the receiver to acknowledge the node as
+trusted.
diff --git a/commands/cluster-myid.md b/commands/cluster-myid.md
new file mode 100644
index 0000000000..594c28b3f4
--- /dev/null
+++ b/commands/cluster-myid.md
@@ -0,0 +1,3 @@
+Returns the node's id.
+
+The `CLUSTER MYID` command returns the unique, auto-generated identifier that is associated with the connected cluster node.
diff --git a/commands/cluster-myshardid.md b/commands/cluster-myshardid.md
new file mode 100644
index 0000000000..6ffb3c5b9b
--- /dev/null
+++ b/commands/cluster-myshardid.md
@@ -0,0 +1,3 @@
+Returns the node's shard id.
+
+The `CLUSTER MYSHARDID` command returns the unique, auto-generated identifier that is associated with the shard to which the connected cluster node belongs.
diff --git a/commands/cluster-nodes.md b/commands/cluster-nodes.md
new file mode 100644
index 0000000000..a802fe7cff
--- /dev/null
+++ b/commands/cluster-nodes.md
@@ -0,0 +1,108 @@
+Each node in a Redis Cluster has its view of the current cluster configuration,
+given by the set of known nodes, the state of the connection we have with such
+nodes, their flags, properties and assigned slots, and so forth.
+
+`CLUSTER NODES` provides all this information, that is, the current cluster
+configuration of the node we are contacting, in a serialization format which
+happens to be exactly the same as the one used by Redis Cluster itself in
+order to store the cluster state on disk (however the on-disk cluster state
+has a few additional pieces of information appended at the end).
+
+Note that normally clients willing to fetch the map between Cluster
+hash slots and node addresses should use `CLUSTER SLOTS` instead.
+`CLUSTER NODES`, which provides more information, should be used for
+administrative tasks, debugging, and configuration inspections.
+It is also used by `redis-cli` in order to manage a cluster.
+
+## Serialization format
+
+The output of the command is just a space-separated CSV string, where
+each line represents a node in the cluster. The following
+is an example of output on Redis 7.2.0.
+
+```
+07c37dfeb235213a872192d90877d0cd55635b91 127.0.0.1:30004@31004,hostname4 slave e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca 0 1426238317239 4 connected
+67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1 127.0.0.1:30002@31002,hostname2 master - 0 1426238316232 2 connected 5461-10922
+292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f 127.0.0.1:30003@31003,hostname3 master - 0 1426238318243 3 connected 10923-16383
+6ec23923021cf3ffec47632106199cb7f496ce01 127.0.0.1:30005@31005,hostname5 slave 67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1 0 1426238316232 5 connected
+824fe116063bc5fcf9f4ffd895bc17aee7731ac3 127.0.0.1:30006@31006,hostname6 slave 292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f 0 1426238317741 6 connected
+e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca 127.0.0.1:30001@31001,hostname1 myself,master - 0 0 1 connected 0-5460
+```
+
+Each line is composed of the following fields:
+
+```
+<id> <ip:port@cport> <hostname> <flags> <master> <ping-sent> <pong-recv> <config-epoch> <link-state> <slot> <slot> ... <slot>
+```
+
+The meaning of each field is the following:
+
+1. `id`: The node ID, a 40-character globally unique string generated when a node is created and never changed again (unless `CLUSTER RESET HARD` is used).
+2. `ip:port@cport`: The node address that clients should contact to run queries, along with the used cluster bus port.
+   `:0@0` can be expected when the address is no longer known for this node ID, hence flagged with `noaddr`.
+3. `hostname`: A human readable string that can be configured via the `cluster-announce-hostname` setting. The max length of the string is 256 characters, excluding the null terminator. The name can contain ASCII alphanumeric characters, '-', and '.' only.
+4. `flags`: A list of comma separated flags: `myself`, `master`, `slave`, `fail?`, `fail`, `handshake`, `noaddr`, `nofailover`, `noflags`. Flags are explained below.
+5. `master`: If the node is a replica, and the primary is known, the primary node ID, otherwise the "-" character.
+6. `ping-sent`: Unix time at which the currently active ping was sent, or zero if there are no pending pings, in milliseconds.
+7. `pong-recv`: Unix time the last pong was received, in milliseconds.
+8. `config-epoch`: The configuration epoch (or version) of the current node (or of the current primary if the node is a replica). Each time there is a failover, a new, unique, monotonically increasing configuration epoch is created. If multiple nodes claim to serve the same hash slots, the one with the higher configuration epoch wins.
+9. `link-state`: The state of the link used for the node-to-node cluster bus. Use this link to communicate with the node. Can be `connected` or `disconnected`.
+10. `slot`: A hash slot number or range. Starting from argument number 9, but there may be up to 16384 entries in total (limit never reached). This is the list of hash slots served by this node. If the entry is just a number, it is parsed as such. If it is a range, it is in the form `start-end`, and means that the node is responsible for all the hash slots from `start` to `end` including the start and end values.
+
+Flags are:
+
+* `myself`: The node you are contacting.
+* `master`: Node is a primary.
+* `slave`: Node is a replica.
+* `fail?`: Node is in `PFAIL` state. Not reachable from the node you are contacting, but still logically reachable (not in `FAIL` state).
+* `fail`: Node is in `FAIL` state. It was not reachable for multiple nodes that promoted the `PFAIL` state to `FAIL`.
+* `handshake`: Untrusted node, we are handshaking.
+* `noaddr`: No address known for this node.
+* `nofailover`: Replica will not try to failover.
+* `noflags`: No flags at all.
+
+## Notes on published config epochs
+
+Replicas broadcast their primary's config epochs (in order to get an `UPDATE`
+message if they are found to be stale), so the real config epoch of the
+replica (which is more or less meaningless, since replicas don't serve hash slots)
+can only be obtained by checking the node flagged as `myself`, which is the entry
+of the node we are asking to generate the `CLUSTER NODES` output. The other
+replicas' epochs reflect what they publish in heartbeat packets, that is, the
+configuration epoch of the primaries they are currently replicating.
+
+## Special slot entries
+
+Normally hash slots associated with a given node are in one of the following formats,
+as already explained above:
+
+1. Single number: 3894
+2. Range: 3900-4000
+
+However node hash slots can be in a special state, used in order to communicate errors after a node restart (mismatch between the keys in the AOF/RDB file, and the node hash slots configuration), or when there is a resharding operation in progress. These two states are **importing** and **migrating**.
+
+The meaning of the two states is explained in the Redis Cluster specification, however the gist of the two states is the following:
+
+* **Importing** slots are not yet part of the node's hash slots; there is a migration in progress. The node will accept queries about these slots only if they are preceded by an `ASKING` command.
+* **Migrating** slots are assigned to the node, but are being migrated to some other node. The node will accept queries if all the keys in the command exist already, otherwise it will emit what is called an **ASK redirection**, to force new key creation directly in the importing node.
+
+Importing and migrating slots are emitted in the `CLUSTER NODES` output as follows:
+
+* **Importing slot:** `[slot_number-<-importing_from_node_id]`
+* **Migrating slot:** `[slot_number->-migrating_to_node_id]`
+
+The following are a few examples of importing and migrating slots:
+
+* `[93-<-292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f]`
+* `[1002-<-67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1]`
+* `[77->-e7d1eecce10fd6bb5eb35b9f99a514335d9ba9ca]`
+* `[16311->-292f8b365bb7edb5e285caf0b7e6ddc7265d2f4f]`
+
+Note that the format does not contain any space, so the `CLUSTER NODES` output format remains plain CSV with space as separator even when these special slots are emitted. However a complete parser for the format should be able to handle them.
+
+Note that:
+
+1. Migration and importing slots are only added to the node flagged as `myself`. This information is local to a node, for its own slots.
+2. Importing and migrating slots are provided as **additional info**. If the node has a given hash slot assigned, it will also appear as a plain number in the list of hash slots, so clients that don't have a clue about hash slot migrations can just skip these special fields.
+
+**A note about the word slave used in this man page and command name**: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave.
+Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API is naturally deprecated.
diff --git a/commands/cluster-replicas.md b/commands/cluster-replicas.md
new file mode 100644
index 0000000000..6d0e63708e
--- /dev/null
+++ b/commands/cluster-replicas.md
@@ -0,0 +1,11 @@
+The command provides a list of replica nodes replicating from the specified
+master node. The list is provided in the same format used by `CLUSTER NODES` (please refer to its documentation for the specification of the format).
+
+The command will fail if the specified node is not known or if it is not
+a master according to the node table of the node receiving the command.
+
+Note that if a replica is added, moved, or removed from a given master node,
+and we send `CLUSTER REPLICAS` to a node that has not yet received the
+configuration update, it may show stale information. However eventually
+(in a matter of seconds if there are no network partitions) all the nodes
+will agree about the set of nodes associated with a given master.
diff --git a/commands/cluster-replicate.md b/commands/cluster-replicate.md
new file mode 100644
index 0000000000..9d3c36d280
--- /dev/null
+++ b/commands/cluster-replicate.md
@@ -0,0 +1,22 @@
+The command reconfigures a node as a replica of the specified master.
+If the node receiving the command is an *empty master*, as a side effect
+of the command, the node role is changed from master to replica.
+
+Once a node is turned into the replica of another master node, there is no need
+to inform the other cluster nodes about the change: heartbeat packets exchanged
+between nodes will propagate the new configuration automatically.
+
+A replica will always accept the command, assuming that:
+
+1. The specified node ID exists in its nodes table.
+2. The specified node ID does not identify the instance we are sending the command to.
+3. The specified node ID is a master.
+
+If the node receiving the command is not already a replica, but is a master,
+the command will succeed, and the node will be converted into a replica,
+only if the following additional conditions are met:
+
+1. The node is not serving any hash slots.
+2. The node is empty; no keys are stored at all in the key space.
+
+If the command succeeds the new replica will immediately try to contact its master in order to replicate from it.
diff --git a/commands/cluster-reset.md b/commands/cluster-reset.md
new file mode 100644
index 0000000000..1d76229342
--- /dev/null
+++ b/commands/cluster-reset.md
@@ -0,0 +1,21 @@
+Reset a Redis Cluster node, in a more or less drastic way depending on the
+reset type, which can be **hard** or **soft**. Note that this command
+**does not work for masters if they hold one or more keys**; in that case,
+to completely reset a master node, the keys must be removed first, e.g. by using `FLUSHALL`,
+and then `CLUSTER RESET`.
+
+Effects on the node:
+
+1. All the other nodes in the cluster are forgotten.
+2. All the assigned / open slots are reset, so the slots-to-nodes mapping is totally cleared.
+3. If the node is a replica it is turned into an (empty) master. Its dataset is flushed, so at the end the node will be an empty master.
+4. **Hard reset only**: a new Node ID is generated.
+5. **Hard reset only**: the `currentEpoch` and `configEpoch` vars are set to 0.
+6. The new configuration is persisted on disk in the node cluster configuration file.
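+
+For example, a hypothetical session that fully resets a master that still holds keys might look like this:
+
+```
+> FLUSHALL
+OK
+> CLUSTER RESET HARD
+OK
+```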
+
+This command is mainly useful to re-provision a Redis Cluster node
+in order to be used in the context of a new, different cluster. The command
+is also extensively used by the Redis Cluster testing framework in order to
+reset the state of the cluster every time a new test unit is executed.
+
+If no reset type is specified, the default is **soft**.
diff --git a/commands/cluster-saveconfig.md b/commands/cluster-saveconfig.md
new file mode 100644
index 0000000000..3f23701963
--- /dev/null
+++ b/commands/cluster-saveconfig.md
@@ -0,0 +1,11 @@
+Forces a node to save the `nodes.conf` configuration on disk. Before returning,
+the command calls `fsync(2)` in order to make sure the configuration is
+flushed to the computer's disk.
+
+This command is mainly used in the event a `nodes.conf` node state file
+gets lost / deleted for some reason, and we want to generate it again from
+scratch. It can also be useful in case of mundane alterations of a node cluster
+configuration via the `CLUSTER` command, in order to ensure the new configuration
+is persisted on disk. However, all such commands should normally be able to
+automatically schedule the configuration to be persisted on disk when it is
+important to do so for the correctness of the system in the event of a restart.
diff --git a/commands/cluster-set-config-epoch.md b/commands/cluster-set-config-epoch.md
new file mode 100644
index 0000000000..5eb7fca262
--- /dev/null
+++ b/commands/cluster-set-config-epoch.md
@@ -0,0 +1,21 @@
+This command sets a specific *config epoch* in a fresh node. It only works when:
+
+1. The nodes table of the node is empty.
+2. The node's current *config epoch* is zero.
+
+These prerequisites are needed because manually altering the
+configuration epoch of a node is usually unsafe: we want to be sure that the node with
+the higher configuration epoch value (that is, the last that failed over) wins
+over other nodes in claiming hash slot ownership.
+
+However there is an exception to this rule, and it is when a new
+cluster is created from scratch. The Redis Cluster *config epoch collision
+resolution* algorithm can deal with new nodes all configured with the
+same configuration at startup, but this process is slow and should be
+the exception, only to make sure that whatever happens, two or more
+nodes eventually always move away from the state of having the same
+configuration epoch.
+
+So, using `CLUSTER SET-CONFIG-EPOCH`, when a new cluster is created, we can
+assign a different progressive configuration epoch to each node before
+joining the cluster together.
diff --git a/commands/cluster-setslot.md b/commands/cluster-setslot.md
new file mode 100644
index 0000000000..ab32c6772c
--- /dev/null
+++ b/commands/cluster-setslot.md
@@ -0,0 +1,81 @@
+`CLUSTER SETSLOT` is responsible for changing the state of a hash slot in the receiving node in different ways. It can, depending on the subcommand used:
+
+1. `MIGRATING` subcommand: Set a hash slot in *migrating* state.
+2. `IMPORTING` subcommand: Set a hash slot in *importing* state.
+3. `STABLE` subcommand: Clear any importing / migrating state from hash slot.
+4. `NODE` subcommand: Bind the hash slot to a different node.
+
+The command with its set of subcommands is useful in order to start and end cluster live resharding operations, which are accomplished by setting a hash slot in migrating state in the source node, and importing state in the destination node.
+
+Each subcommand is documented below. At the end you'll find a description of how live resharding is performed using this command and other related commands.
+
+## CLUSTER SETSLOT `<slot>` MIGRATING `<destination-node-id>`
+
+This subcommand sets a slot to *migrating* state. In order to set a slot
+in this state, the node receiving the command must be the hash slot owner,
+otherwise an error is returned.
+
+When a slot is set in migrating state, the node changes behavior in the
+following way:
+
+1. If a command is received about an existing key, the command is processed as usual.
+2. If a command is received about a key that does not exist, an `ASK` redirection is emitted by the node, asking the client to retry only that specific query on `destination-node`. In this case the client should not update its hash slot to node mapping.
+3. If the command contains multiple keys, the behavior is the same as point 2 if none exist, and the same as point 1 if all exist. However, if only some of the keys exist, the command emits a `TRYAGAIN` error, so that the keys involved can finish migrating to the target node and the multi-key command can then be executed.
+
+## CLUSTER SETSLOT `<slot>` IMPORTING `<source-node-id>`
+
+This subcommand is the reverse of `MIGRATING`, and prepares the destination
+node to import keys from the specified source node. The command only works if
+the node is not already the owner of the specified hash slot.
+
+When a slot is set in importing state, the node changes behavior in the following way:
+
+1. Commands about this hash slot are refused and a `MOVED` redirection is generated as usual, but if the command follows an `ASKING` command, it is executed.
+
+In this way when a node in migrating state generates an `ASK` redirection, the client contacts the target node, sends `ASKING`, and immediately after sends the command. This way commands about non-existing keys in the old node, or keys already migrated to the target node, are executed in the target node, so that:
+
+1. New keys are always created in the target node. During a hash slot migration we'll have to move only old keys, not new ones.
+2. Commands about keys already migrated are correctly processed in the context of the node which is the target of the migration, the new hash slot owner, in order to guarantee consistency.
+3. Without `ASKING` the behavior is the same as usual. This guarantees that clients with a broken hash slot mapping will not write to the target node by mistake, creating a new version of a key that has yet to be migrated.
+
+## CLUSTER SETSLOT `<slot>` STABLE
+
+This subcommand just clears the migrating / importing state from the slot. It is
+mainly used to fix a cluster stuck in a wrong state by `redis-cli --cluster fix`.
+Normally the two states are cleared automatically at the end of the migration
+using the `SETSLOT ... NODE ...` subcommand as explained in the next section.
+
+## CLUSTER SETSLOT `<slot>` NODE `<node-id>`
+
+The `NODE` subcommand is the one with the most complex semantics. It
+associates the hash slot with the specified node, however the command works
+only in specific situations and has different side effects depending on the
+slot state. The following is the set of pre-conditions and side effects of the
+command:
+
+1. If the current hash slot owner is the node receiving the command, but as an effect of the command the slot would be assigned to a different node, the command will return an error if there are still keys for that hash slot in the node receiving the command.
+2. If the slot is in *migrating* state, the state gets cleared when the slot is assigned to another node.
+3. If the slot was in *importing* state in the node receiving the command, and the command assigns the slot to this node (which happens in the target node at the end of the resharding of a hash slot from one node to another), the command has the following side effects: A) the *importing* state is cleared. B) If the node's config epoch is not already the greatest in the cluster, it generates a new one and assigns the new config epoch to itself. This way its new hash slot ownership will win over any past configuration created by previous failovers or slot migrations.
+
+It is important to note that step 3 is the only time when a Redis Cluster node will create a new config epoch without agreement from other nodes. This only happens when a manual configuration is performed. However it is impossible that this creates a non-transient setup where two nodes have the same config epoch, since Redis Cluster uses a config epoch collision resolution algorithm.
+
+## Redis Cluster live resharding explained
+
+The `CLUSTER SETSLOT` command is an important piece used by Redis Cluster in order to migrate all the keys contained in one hash slot from one node to another. This is how the migration is orchestrated, with the help of other commands as well. We'll call the node that has the current ownership of the hash slot the `source` node, and the node where we want to migrate to the `destination` node.
+
+1. Set the destination node slot to *importing* state using `CLUSTER SETSLOT <slot> IMPORTING <source-node-id>`.
+2. Set the source node slot to *migrating* state using `CLUSTER SETSLOT <slot> MIGRATING <destination-node-id>`.
+3. Get keys from the source node with the `CLUSTER GETKEYSINSLOT` command and move them into the destination node using the `MIGRATE` command.
+4. Send `CLUSTER SETSLOT <slot> NODE <destination-node-id>` to the destination node.
+5. Send `CLUSTER SETSLOT <slot> NODE <destination-node-id>` to the source node.
+6. Send `CLUSTER SETSLOT <slot> NODE <destination-node-id>` to the other master nodes (optional).
+
+Notes:
+
+* The order of steps 1 and 2 is important. We want the destination node to be ready to accept `ASK` redirections when the source node is configured to redirect.
+* The order of steps 4 and 5 is important.
+  The destination node is responsible for propagating the change to the rest of the cluster.
+  If the source node is informed before the destination node and the destination node crashes before it is set as new slot owner, the slot is left with no owner, even after a successful failover.
+* Step 6, sending `SETSLOT` to the nodes not involved in the resharding, is not technically necessary since the configuration will eventually propagate itself.
+  However, it is a good idea to do so in order to stop nodes from pointing to the wrong node for the moved hash slot as soon as possible, resulting in fewer redirections to find the right node.
diff --git a/commands/cluster-shards.md b/commands/cluster-shards.md
new file mode 100644
index 0000000000..a6989a3910
--- /dev/null
+++ b/commands/cluster-shards.md
@@ -0,0 +1,153 @@
+`CLUSTER SHARDS` returns details about the shards of the cluster.
+A shard is defined as a collection of nodes that serve the same set of slots and that replicate from each other.
+A shard may only have a single master at a given time, but may have multiple or no replicas.
+It is possible for a shard to not be serving any slots while still having replicas.
+
+This command replaces the `CLUSTER SLOTS` command, by providing a more efficient and extensible representation of the cluster.
+
+The command is suitable to be used by Redis Cluster client libraries in order to understand the topology of the cluster.
+A client should issue this command on startup in order to retrieve the map associating cluster *hash slots* with actual node information.
+This map should be used to direct commands to the node that is likely serving the slot associated with a given command.
+In the event a command is sent to the wrong node and a '-MOVED' redirect is received, this command can then be used to update the client's view of the cluster topology.
+
+The command returns an array of shards, with each shard containing two fields, 'slots' and 'nodes'.
+
+The 'slots' field is a list of slot ranges served by this shard, stored as pairs of integers representing the inclusive start and end slots of the ranges.
+For example, if a node owns the slots 1, 2, 3, 5, 7, 8 and 9, the slot ranges would be stored as [1-3], [5-5], [7-9].
+The slots field would therefore be represented by the following list of integers.
+
+```
+1) 1) "slots"
+   2) 1) (integer) 1
+      2) (integer) 3
+      3) (integer) 5
+      4) (integer) 5
+      5) (integer) 7
+      6) (integer) 9
+```
+
+The 'nodes' field contains a list of all nodes within the shard.
+Each individual node is a map of attributes that describe the node.
+Some attributes are optional and more attributes may be added in the future.
+The current list of attributes:
+
+* id: The unique node id for this particular node.
+* endpoint: The preferred endpoint to reach the node, see below for more information about the possible values of this field.
+* ip: The IP address to send requests to for this node.
+* hostname (optional): The announced hostname to send requests to for this node.
+* port (optional): The TCP (non-TLS) port of the node. At least one of port or tls-port will be present.
+* tls-port (optional): The TLS port of the node. At least one of port or tls-port will be present.
+* role: The replication role of this node.
+* replication-offset: The replication offset of this node. This information can be used to send commands to the most up to date replicas.
+* health: Either `online`, `failed`, or `loading`. This information should be used to determine which nodes should be sent traffic. The `loading` health state should be used to know that a node is not currently eligible to serve traffic, but may be eligible in the future.
+
+The endpoint, along with the port, defines the location that clients should use to send requests for a given slot.
+A NULL value for the endpoint indicates the node has an unknown endpoint and the client should connect to the same endpoint it used to send the `CLUSTER SHARDS` command but with the port returned from the command.
+This unknown endpoint configuration is useful when the Redis nodes are behind a load balancer whose endpoint Redis doesn't know.
+Which endpoint is set is determined by the `cluster-preferred-endpoint-type` config.
+An empty string `""` is another abnormal value of the endpoint field, as well as of the ip field; it is returned if the node doesn't know its own IP address.
+This can happen in a cluster that consists of only one node, or when the node has not yet been joined with the rest of the cluster.
+The value `?` is displayed if the node is incorrectly configured to use announced hostnames but no hostname is configured using `cluster-announce-hostname`.
+Clients may treat the empty string in the same way as NULL, that is, as the same endpoint it used to send the current command to, while `"?"` should be treated as an unknown node, not necessarily the same node as the one serving the current command.
+
+@examples
+
+```
+> CLUSTER SHARDS
+1) 1) "slots"
+   2) 1) (integer) 0
+      2) (integer) 5460
+   3) "nodes"
+   4) 1) 1) "id"
+         2) "e10b7051d6bf2d5febd39a2be297bbaea6084111"
+         3) "port"
+         4) (integer) 30001
+         5) "ip"
+         6) "127.0.0.1"
+         7) "endpoint"
+         8) "127.0.0.1"
+         9) "role"
+         10) "master"
+         11) "replication-offset"
+         12) (integer) 72156
+         13) "health"
+         14) "online"
+      2) 1) "id"
+         2) "1901f5962d865341e81c85f9f596b1e7160c35ce"
+         3) "port"
+         4) (integer) 30006
+         5) "ip"
+         6) "127.0.0.1"
+         7) "endpoint"
+         8) "127.0.0.1"
+         9) "role"
+         10) "replica"
+         11) "replication-offset"
+         12) (integer) 72156
+         13) "health"
+         14) "online"
+2) 1) "slots"
+   2) 1) (integer) 10923
+      2) (integer) 16383
+   3) "nodes"
+   4) 1) 1) "id"
+         2) "fd20502fe1b32fc32c15b69b0a9537551f162f1f"
+         3) "port"
+         4) (integer) 30003
+         5) "ip"
+         6) "127.0.0.1"
+         7) "endpoint"
+         8) "127.0.0.1"
+         9) "role"
+         10) "master"
+         11) "replication-offset"
+         12) (integer) 72156
+         13) "health"
+         14) "online"
+      2) 1) "id"
+         2) "6daa25c08025a0c7e4cc0d1ab255949ce6cee902"
+         3) "port"
+         4) (integer) 30005
+         5) "ip"
+         6) "127.0.0.1"
+         7) "endpoint"
+         8) "127.0.0.1"
+         9) "role"
+         10) "replica"
+         11) "replication-offset"
+         12) (integer) 72156
+         13) "health"
+         14) "online"
+3) 1) "slots"
+   2) 1) (integer) 5461
+      2) (integer) 10922
+   3) "nodes"
+   4) 1) 1) "id"
+         2) "a4a3f445ead085eb3eb9ee7d8c644ec4481ec9be"
+         3) "port"
+         4) (integer) 30002
+         5) "ip"
+         6) "127.0.0.1"
+         7) "endpoint"
+         8) "127.0.0.1"
+         9) "role"
+         10) "master"
+         11) "replication-offset"
+         12) (integer) 72156
+         13) "health"
+         14) "online"
+      2) 1) "id"
+         2) "da6d5847aa019e9b9d2a8aa24a75f856fd3456cc"
+         3) "port"
+         4) (integer) 30004
+         5) "ip"
+         6) "127.0.0.1"
+         7) "endpoint"
+         8) "127.0.0.1"
+         9) "role"
+         10) "replica"
+         11) "replication-offset"
+         12) (integer) 72156
+         13) "health"
+         14) "online"
+```
diff --git a/commands/cluster-slaves.md b/commands/cluster-slaves.md
new file mode 100644
index 0000000000..2f4f9628af
--- /dev/null
+++ b/commands/cluster-slaves.md
@@ -0,0 +1,13 @@
+**A note about the word slave used in this man page and command name**: starting with Redis version 5, if not for backward compatibility, the Redis project no longer uses the word slave. Please use the new command `CLUSTER REPLICAS`. The command `CLUSTER SLAVES` will continue to work for backward compatibility.
+
+The command provides a list of replica nodes replicating from the specified
+master node. The list is provided in the same format used by `CLUSTER NODES` (please refer to its documentation for the specification of the format).
+
+The command will fail if the specified node is not known or if it is not
+a master according to the node table of the node receiving the command.
+
+Note that if a replica is added, moved, or removed from a given master node,
+and we send `CLUSTER SLAVES` to a node that has not yet received the
+configuration update, it may show stale information. However eventually
+(in a matter of seconds if there are no network partitions) all the nodes
+will agree about the set of nodes associated with a given master.
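+
+A hypothetical invocation (the node IDs are made up) might look like this; each reply element is a line in the `CLUSTER NODES` format:
+
+```
+> CLUSTER SLAVES 67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1
+1) "6ec23923021cf3ffec47632106199cb7f496ce01 127.0.0.1:30005@31005,hostname5 slave 67ed2db8d677e59ec4a4cefb06858cf2a1a89fa1 0 1426238316232 5 connected"
+```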
diff --git a/commands/cluster-slots.md b/commands/cluster-slots.md
new file mode 100644
index 0000000000..be07af62e8
--- /dev/null
+++ b/commands/cluster-slots.md
@@ -0,0 +1,90 @@
+`CLUSTER SLOTS` returns details about which cluster slots map to which Redis instances.
+The command is suitable to be used by Redis Cluster client library implementations in order to retrieve (or update when a redirection is received) the map associating cluster *hash slots* with actual node network information, so that when a command is received, it can be sent to what is likely the right instance for the keys specified in the command.
+
+The networking information for each node is an array containing the following elements:
+
+* Preferred endpoint (either an IP address, hostname, or NULL)
+* Port number
+* The node ID
+* A map of additional networking metadata
+
+The preferred endpoint, along with the port, defines the location that clients should use to send requests for a given slot.
+A NULL value for the endpoint indicates the node has an unknown endpoint and the client should connect to the same endpoint it used to send the `CLUSTER SLOTS` command but with the port returned from the command.
+This unknown endpoint configuration is useful when the Redis nodes are behind a load balancer whose endpoint Redis doesn't know.
+Which endpoint is set as preferred is determined by the `cluster-preferred-endpoint-type` config.
+An empty string `""` is another abnormal value of the endpoint field, as well as of the ip field; it is returned if the node doesn't know its own IP address.
+This can happen in a cluster that consists of only one node, or when the node has not yet been joined with the rest of the cluster.
+The value `?` is displayed if the node is incorrectly configured to use announced hostnames but no hostname is configured using `cluster-announce-hostname`.
+Clients may treat the empty string in the same way as NULL, that is, as the same endpoint it used to send the current command to, while `"?"` should be treated as an unknown node, not necessarily the same node as the one serving the current command.
+
+Additional networking metadata is provided as a map in the fourth element for each node.
+The following networking metadata may be returned:
+
+* IP: When the preferred endpoint is not set to IP.
+* Hostname: When a node has an announced hostname but the primary endpoint is not set to hostname.
+
+## Nested Result Array
+Each nested result is:
+
+  - Start slot range
+  - End slot range
+  - Master for slot range represented as nested networking information
+  - First replica of master for slot range
+  - Second replica
+  - ...continues until all replicas for this master are returned.
+
+Each result includes all active replicas of the master instance
+for the listed slot range. Failed replicas are not returned.
+
+The third nested reply is guaranteed to be the networking information of the master instance for the slot range.
+All networking information after the third nested reply consists of replicas of the master.
+
+If a cluster instance has non-contiguous slots (e.g. 1-400,900,1800-6000) then master and replica networking information results will be duplicated for each top-level slot range reply.
+
+```
+> CLUSTER SLOTS
+1) 1) (integer) 0
+   2) (integer) 5460
+   3) 1) "127.0.0.1"
+      2) (integer) 30001
+      3) "09dbe9720cda62f7865eabc5fd8857c5d2678366"
+      4) 1) hostname
+         2) "host-1.redis.example.com"
+   4) 1) "127.0.0.1"
+      2) (integer) 30004
+      3) "821d8ca00d7ccf931ed3ffc7e3db0599d2271abf"
+      4) 1) hostname
+         2) "host-2.redis.example.com"
+2) 1) (integer) 5461
+   2) (integer) 10922
+   3) 1) "127.0.0.1"
+      2) (integer) 30002
+      3) "c9d93d9f2c0c524ff34cc11838c2003d8c29e013"
+      4) 1) hostname
+         2) "host-3.redis.example.com"
+   4) 1) "127.0.0.1"
+      2) (integer) 30005
+      3) "faadb3eb99009de4ab72ad6b6ed87634c7ee410f"
+      4) 1) hostname
+         2) "host-4.redis.example.com"
+3) 1) (integer) 10923
+   2) (integer) 16383
+   3) 1) "127.0.0.1"
+      2) (integer) 30003
+      3) "044ec91f325b7595e76dbcb18cc688b6a5b434a1"
+      4) 1) hostname
+         2) "host-5.redis.example.com"
+   4) 1) "127.0.0.1"
+      2) (integer) 30006
+      3) "58e6e48d41228013e5d9c1c37c5060693925e97e"
+      4) 1) hostname
+         2) "host-6.redis.example.com"
+```
+
+**Warning:** In future versions there could be more elements describing the node in more detail.
+In general a client implementation should just rely on the fact that certain parameters are at fixed positions as specified, but more parameters may follow and should be ignored.
+Similarly a client library should try, if possible, to cope with the fact that older versions may just have the primary endpoint and port parameters.
+
+## Behavior change history
+
+* `>= 7.0.0`: Added support for hostnames and unknown endpoints in first field of node response.
diff --git a/commands/cluster.md b/commands/cluster.md
new file mode 100644
index 0000000000..86d5c00ca4
--- /dev/null
+++ b/commands/cluster.md
@@ -0,0 +1,3 @@
+This is a container command for Redis Cluster commands.
+
+To see the list of available commands you can call `CLUSTER HELP`.
diff --git a/commands/command-count.md b/commands/command-count.md
new file mode 100644
index 0000000000..2eee36f22e
--- /dev/null
+++ b/commands/command-count.md
@@ -0,0 +1,7 @@
+Returns @integer-reply of the total number of commands in this Redis server.
+
+@examples
+
+```cli
+COMMAND COUNT
+```
diff --git a/commands/command-docs.md b/commands/command-docs.md
new file mode 100644
index 0000000000..98943d73a7
--- /dev/null
+++ b/commands/command-docs.md
@@ -0,0 +1,51 @@
+Return documentary information about commands.
+
+By default, the reply includes all of the server's commands.
+You can use the optional _command-name_ argument to specify the names of one or more commands.
+
+The reply includes a map for each returned command.
+The following keys may be included in the mapped reply:
+
+* **summary:** short command description.
+* **since:** the Redis version that added the command (or for module commands, the module version).
+* **group:** the functional group to which the command belongs.
+  Possible values are:
+  - _bitmap_
+  - _cluster_
+  - _connection_
+  - _generic_
+  - _geo_
+  - _hash_
+  - _hyperloglog_
+  - _list_
+  - _module_
+  - _pubsub_
+  - _scripting_
+  - _sentinel_
+  - _server_
+  - _set_
+  - _sorted-set_
+  - _stream_
+  - _string_
+  - _transactions_
+* **complexity:** a short explanation about the command's time complexity.
+* **doc_flags:** an array of documentation flags.
+  Possible values are:
+  - _deprecated:_ the command is deprecated.
+  - _syscmd:_ a system command that isn't meant to be called by users.
+* **deprecated_since:** the Redis version that deprecated the command (or for module commands, the module version).
+* **replaced_by:** the alternative for a deprecated command.
+* **history:** an array of historical notes describing changes to the command's output or arguments. It should not contain information about behavioral changes.
+  Each entry is an array itself, made up of two elements:
+  1. The Redis version that the entry applies to.
+  2. The description of the change.
+* **arguments:** an array of maps that describe the command's arguments.
+  Please refer to the [Redis command arguments][td] page for more information.
+
+[td]: /topics/command-arguments
+
+@examples
+
+```cli
+COMMAND DOCS SET
+```
diff --git a/commands/command-getkeys.md b/commands/command-getkeys.md
new file mode 100644
index 0000000000..6e9b756adf
--- /dev/null
+++ b/commands/command-getkeys.md
@@ -0,0 +1,16 @@
+Returns @array-reply of keys from a full Redis command.
+
+`COMMAND GETKEYS` is a helper command to let you find the keys
+from a full Redis command.
+
+`COMMAND` provides information on how to find the key names of each command (see `firstkey`, [key specifications](/topics/key-specs#logical-operation-flags), and `movablekeys`),
+but in some cases it's not possible to find the keys of certain commands without parsing the entire command to discover some / all key names.
+You can use `COMMAND GETKEYS` or `COMMAND GETKEYSANDFLAGS` to discover key names directly from how Redis parses the commands.
+
+@examples
+
+```cli
+COMMAND GETKEYS MSET a b c d e f
+COMMAND GETKEYS EVAL "not consulted" 3 key1 key2 key3 arg1 arg2 arg3 argN
+COMMAND GETKEYS SORT mylist ALPHA STORE outlist
+```
diff --git a/commands/command-getkeysandflags.md b/commands/command-getkeysandflags.md
new file mode 100644
index 0000000000..0f83afa7db
--- /dev/null
+++ b/commands/command-getkeysandflags.md
@@ -0,0 +1,17 @@
+Returns @array-reply of keys from a full Redis command and their usage flags.
+
+`COMMAND GETKEYSANDFLAGS` is a helper command to let you find the keys from a full Redis command together with flags indicating what each key is used for.
+
+`COMMAND` provides information on how to find the key names of each command (see `firstkey`, [key specifications](/topics/key-specs#logical-operation-flags), and `movablekeys`),
+but in some cases it's not possible to find the keys of certain commands without parsing the entire command to discover some / all key names.
+You can use `COMMAND GETKEYS` or `COMMAND GETKEYSANDFLAGS` to discover key names directly from how Redis parses the commands.
+
+Refer to [key specifications](/topics/key-specs#logical-operation-flags) for information about the meaning of the key flags.
+
+@examples
+
+```cli
+COMMAND GETKEYS MSET a b c d e f
+COMMAND GETKEYS EVAL "not consulted" 3 key1 key2 key3 arg1 arg2 arg3 argN
+COMMAND GETKEYSANDFLAGS LMOVE mylist1 mylist2 left left
+```
diff --git a/commands/command-help.md b/commands/command-help.md
new file mode 100644
index 0000000000..80aa033dea
--- /dev/null
+++ b/commands/command-help.md
@@ -0,0 +1 @@
+The `COMMAND HELP` command returns a helpful text describing the different subcommands.
diff --git a/commands/command-info.md b/commands/command-info.md
new file mode 100644
index 0000000000..568f70a47b
--- /dev/null
+++ b/commands/command-info.md
@@ -0,0 +1,14 @@
+Returns @array-reply of details about multiple Redis commands.
+
+Same result format as `COMMAND` except you can specify which commands
+get returned.
+
+If you request details about non-existing commands, their return
+position will be nil.
+ +@examples + +```cli +COMMAND INFO get set eval +COMMAND INFO foo evalsha config bar +``` diff --git a/commands/command-list.md b/commands/command-list.md new file mode 100644 index 0000000000..60b6981b43 --- /dev/null +++ b/commands/command-list.md @@ -0,0 +1,7 @@ +Return an array of the server's command names. + +You can use the optional _FILTERBY_ modifier to apply one of the following filters: + + - **MODULE module-name**: get the commands that belong to the module specified by _module-name_. + - **ACLCAT category**: get the commands in the [ACL category](/docs/management/security/acl/#command-categories) specified by _category_. + - **PATTERN pattern**: get the commands that match the given glob-like _pattern_. diff --git a/commands/command.md b/commands/command.md new file mode 100644 index 0000000000..9a66f6cd07 --- /dev/null +++ b/commands/command.md @@ -0,0 +1,235 @@ +Return an array with details about every Redis command. + +The `COMMAND` command is introspective. +Its reply describes all commands that the server can process. +Redis clients can call it to obtain the server's runtime capabilities during the handshake. + +`COMMAND` also has several subcommands. +Please refer to its subcommands for further details. + +**Cluster note:** +this command is especially beneficial for cluster-aware clients. +Such clients must identify the names of keys in commands to route requests to the correct shard. +Although most commands accept a single key as their first argument, there are many exceptions to this rule. +You can call `COMMAND` and then keep the mapping between commands and their respective key specification rules cached in the client. + +The reply it returns is an array with an element per command. +Each element that describes a Redis command is represented as an array by itself. + +The command's array consists of a fixed number of elements. +The exact number of elements in the array depends on the server's version. + +1. Name +1. Arity +1. Flags +1. First key +1. Last key +1. Step +1. [ACL categories][ta] (as of Redis 6.0) +1. [Tips][tb] (as of Redis 7.0) +1. [Key specifications][td] (as of Redis 7.0) +1. Subcommands (as of Redis 7.0) + +## Name + +This is the command's name in lowercase. + +**Note:** +Redis command names are case-insensitive. + +## Arity + +Arity is the number of arguments a command expects. +It follows a simple pattern: + +* A positive integer means a fixed number of arguments. +* A negative integer means a minimal number of arguments. + +Command arity _always includes_ the command's name itself (and the subcommand when applicable). + +Examples: + +* `GET`'s arity is _2_ since the command only accepts one argument and always has the format `GET _key_`. +* `MGET`'s arity is _-2_ since the command accepts at least one argument, but possibly multiple ones: `MGET _key1_ [key2] [key3] ...`. + +## Flags + +Command flags are an array. It can contain the following simple strings (status reply): + +* **admin:** the command is an administrative command. +* **asking:** the command is allowed even during hash slot migration. + This flag is relevant in Redis Cluster deployments. +* **blocking:** the command may block the requesting client. +* **denyoom**: the command is rejected if the server's memory usage is too high (see the _maxmemory_ configuration directive). +* **fast:** the command operates in constant or log(N) time. + This flag is used for monitoring latency with the `LATENCY` command. +* **loading:** the command is allowed while the database is loading. 
+* **movablekeys:** the _first key_, _last key_, and _step_ values don't determine all key positions.
  Clients need to use `COMMAND GETKEYS` or [key specifications][td] in this case.
  See below for more details.
* **no_auth:** executing the command doesn't require authentication.
* **no_async_loading:** the command is denied during asynchronous loading (that is when a replica uses disk-less `SWAPDB SYNC`, and allows access to the old dataset).
* **no_mandatory_keys:** the command may accept key name arguments, but these aren't mandatory.
* **no_multi:** the command isn't allowed inside the context of a [transaction](/topics/transactions).
* **noscript:** the command can't be called from [scripts](/topics/eval-intro) or [functions](/topics/functions-intro).
* **pubsub:** the command is related to [Redis Pub/Sub](/topics/pubsub).
* **random**: the command returns random results, which is a concern with verbatim script replication.
  As of Redis 7.0, this flag is a [command tip][tb].
* **readonly:** the command doesn't modify data.
* **sort_for_script:** the command's output is sorted when called from a script.
* **skip_monitor:** the command is not shown in `MONITOR`'s output.
* **skip_slowlog:** the command is not shown in `SLOWLOG`'s output.
  As of Redis 7.0, this flag is a [command tip][tb].
* **stale:** the command is allowed while a replica has stale data.
* **write:** the command may modify data.

### Movablekeys

Consider `SORT`:

```
1) 1) "sort"
   2) (integer) -2
   3) 1) write
      2) denyoom
      3) movablekeys
   4) (integer) 1
   5) (integer) 1
   6) (integer) 1
   ...
```

The keys of some Redis commands have no predetermined positions, or are not easy to find.
For those commands, the _movablekeys_ flag indicates that the _first key_, _last key_, and _step_ values are insufficient to find all the keys.

Here are several examples of commands that have the _movablekeys_ flag:

* `SORT`: the optional _STORE_, _BY_, and _GET_ modifiers are followed by names of keys.
* `ZUNION`: the _numkeys_ argument specifies the number of key name arguments.
* `MIGRATE`: the keys appear after the _KEYS_ keyword, and only when the second argument is the empty string.

Redis Cluster clients need to use other measures, as follows, to locate the keys for such commands.

You can use the `COMMAND GETKEYS` command and have your Redis server report all keys of a given command's invocation.

As of Redis 7.0, clients can use the [key specifications](#key-specifications) to identify the positions of key names.
For clients that parse key specifications, the only commands that require using `COMMAND GETKEYS` are `SORT` and `MIGRATE`.

For more information, please refer to the [key specifications page][tr].

## First key

The position of the command's first key name argument.
For most commands, the first key's position is 1.
Position 0 is always the command name itself.

## Last key

The position of the command's last key name argument.
Redis commands usually accept one, two, or an arbitrary number of keys.

Commands that accept a single key have both _first key_ and _last key_ set to 1.

Commands that accept two key name arguments, e.g. `BRPOPLPUSH`, `SMOVE` and `RENAME`, have this value set to the position of their second key.

Multi-key commands that accept an arbitrary number of keys, such as `MSET`, use the value -1.

## Step

The step, or increment, between the _first key_ and the position of the next key.
+ +Consider the following two examples: + +``` +1) 1) "mset" + 2) (integer) -3 + 3) 1) write + 2) denyoom + 4) (integer) 1 + 5) (integer) -1 + 6) (integer) 2 + ... +``` + +``` +1) 1) "mget" + 2) (integer) -2 + 3) 1) readonly + 2) fast + 4) (integer) 1 + 5) (integer) -1 + 6) (integer) 1 + ... +``` + +The step count allows us to find keys' positions. +For example `MSET`: Its syntax is `MSET _key1_ _val1_ [key2] [val2] [key3] [val3]...`, so the keys are at every other position (step value of _2_). +Unlike `MGET`, which uses a step value of _1_. + +## ACL categories + +This is an array of simple strings that are the ACL categories to which the command belongs. +Please refer to the [Access Control List][ta] page for more information. + +## Command tips + +Helpful information about the command. +To be used by clients/proxies. + +Please check the [Command tips][tb] page for more information. + +## Key specifications + +This is an array consisting of the command's key specifications. +Each element in the array is a map describing a method for locating keys in the command's arguments. + +For more information please check the [key specifications page][td]. + +## Subcommands + +This is an array containing all of the command's subcommands, if any. +Some Redis commands have subcommands (e.g., the `REWRITE` subcommand of `CONFIG`). +Each element in the array represents one subcommand and follows the same specifications as those of `COMMAND`'s reply. + +[ta]: /topics/acl +[tb]: /topics/command-tips +[td]: /topics/key-specs +[tr]: /topics/key-specs + +@examples + +The following is `COMMAND`'s output for the `GET` command: + +``` +1) 1) "get" + 2) (integer) 2 + 3) 1) readonly + 2) fast + 4) (integer) 1 + 5) (integer) 1 + 6) (integer) 1 + 7) 1) @read + 2) @string + 3) @fast + 8) (empty array) + 9) 1) 1) "flags" + 2) 1) read + 3) "begin_search" + 4) 1) "type" + 2) "index" + 3) "spec" + 4) 1) "index" + 2) (integer) 1 + 5) "find_keys" + 6) 1) "type" + 2) "range" + 3) "spec" + 4) 1) "lastkey" + 2) (integer) 0 + 3) "keystep" + 4) (integer) 1 + 5) "limit" + 6) (integer) 0 + 10) (empty array) +... +``` diff --git a/commands/config get.md b/commands/config get.md deleted file mode 100644 index e5c0ff6a30..0000000000 --- a/commands/config get.md +++ /dev/null @@ -1,44 +0,0 @@ -@complexity - -Not applicable. - -@description - -The `CONFIG GET` command is used to read the configuration parameters of a running -Redis server. Not all the configuration parameters are supported. -The symmetric command used to alter the configuration at run time is -`CONFIG SET`. - -`CONFIG GET` takes a single argument, that is glob style pattern. All the -configuration parameters matching this parameter are reported as a -list of key-value pairs. Example: - - redis> config get *max-*-entries* - 1) "hash-max-zipmap-entries" - 2) "512" - 3) "list-max-ziplist-entries" - 4) "512" - 5) "set-max-intset-entries" - 6) "512" - -You can obtain a list of all the supported configuration parameters typing -`CONFIG GET *` in an open `redis-cli` prompt. - -All the supported parameters have the same meaning of the equivalent -configuration parameter used in the [redis.conf](http://github.com/antirez/redis/raw/2.2/redis.conf) file, with the following important differences: - -* Where bytes or other quantities are specified, it is not possible to use the redis.conf abbreviated form (10k 2gb ... and so forth), everything should be specified as a well formed 64 bit integer, in the base unit of the configuration directive. 
-* The save parameter is a single string of space separated integers. Every pair of integers represent a seconds/modifications threshold. - -For instance what in redis.conf looks like: - - save 900 1 - save 300 10 - -that means, save after 900 seconds if there is at least 1 change to the -dataset, and after 300 seconds if there are at least 10 changes to the -datasets, will be reported by `CONFIG GET` as "900 1 300 10". - -@return - -The return type of the command is a @bulk-reply. diff --git a/commands/config resetstat.md b/commands/config resetstat.md deleted file mode 100644 index 7e7a6002d2..0000000000 --- a/commands/config resetstat.md +++ /dev/null @@ -1,17 +0,0 @@ -@complexity - -O(1). - -Resets the statistics reported by Redis using the `INFO` command. - -These are the counters that are reset: - -* Keyspace hits -* Keyspace misses -* Number of commands processed -* Number of connections received -* Number of expired keys - -@return - -@status-reply: always `OK`. diff --git a/commands/config set.md b/commands/config set.md deleted file mode 100644 index 0c1d41aa1d..0000000000 --- a/commands/config set.md +++ /dev/null @@ -1,50 +0,0 @@ -@complexity - -Not applicable. - -@description - -The `CONFIG SET` command is used in order to reconfigure the server at runtime -without the need to restart Redis. You can change both trivial parameters or -switch from one to another persistence option using this command. - -The list of configuration parameters supported by `CONFIG SET` can be -obtained issuing a `CONFIG GET *` command, that is the symmetrical command -used to obtain information about the configuration of a running -Redis instance. - -All the configuration parameters set using `CONFIG SET` are immediately loaded -by Redis that will start acting as specified starting from the next command -executed. - -All the supported parameters have the same meaning of the equivalent -configuration parameter used in the [redis.conf](http://github.com/antirez/redis/raw/2.2/redis.conf) file, with the following important differences: - -* Where bytes or other quantities are specified, it is not possible to use the redis.conf abbreviated form (10k 2gb ... and so forth), everything should be specified as a well formed 64 bit integer, in the base unit of the configuration directive. -* The save parameter is a single string of space separated integers. Every pair of integers represent a seconds/modifications threshold. - -For instance what in redis.conf looks like: - - save 900 1 - save 300 10 - -that means, save after 900 seconds if there is at least 1 change to the -dataset, and after 300 seconds if there are at least 10 changes to the -datasets, should be set using `CONFIG SET` as "900 1 300 10". - -It is possible to switch persistence form .rdb snapshotting to append only file -(and the other way around) using the `CONFIG SET` command. For more information -about how to do that please check [persistence page](/topics/persistence). - -In general what you should know is that setting the *appendonly* parameter to -*yes* will start a background process to save the initial append only file -(obtained from the in memory data set), and will append all the subsequent -commands on the append only file, thus obtaining exactly the same effect of -a Redis server that started with AOF turned on since the start. - -You can have both the AOF enabled with .rdb snapshotting if you want, the -two options are not mutually exclusive. - -@return - -@status-reply: `OK` when the configuration was set properly. 
Otherwise an error is returned. diff --git a/commands/config-get.md b/commands/config-get.md new file mode 100644 index 0000000000..312abd4137 --- /dev/null +++ b/commands/config-get.md @@ -0,0 +1,40 @@ +The `CONFIG GET` command is used to read the configuration parameters of a
+running Redis server.
+Not all the configuration parameters are supported in Redis 2.4, while Redis 2.6
+can read the whole configuration of a server using this command.
+
+The symmetric command used to alter the configuration at run time is `CONFIG
+SET`.
+
+`CONFIG GET` takes multiple arguments, which are glob-style patterns.
+Any configuration parameters matching any of the patterns are reported as a list
+of key-value pairs.
+Example:
+
+```
+redis> config get *max-*-entries* maxmemory
+ 1) "maxmemory"
+ 2) "0"
+ 3) "hash-max-listpack-entries"
+ 4) "512"
+ 5) "hash-max-ziplist-entries"
+ 6) "512"
+ 7) "set-max-intset-entries"
+ 8) "512"
+ 9) "zset-max-listpack-entries"
+10) "128"
+11) "zset-max-ziplist-entries"
+12) "128"
+```
+
+You can obtain a list of all the supported configuration parameters by typing
+`CONFIG GET *` in an open `redis-cli` prompt.
+
+All the supported parameters have the same meaning as the equivalent
+configuration parameter used in the [redis.conf][hgcarr22rc] file:
+
+[hgcarr22rc]: http://github.com/redis/redis/raw/unstable/redis.conf
+
+Note that you should look at the redis.conf file relevant to the version you're
+working with, as configuration options might change between versions. The link
+above is to the latest development version. diff --git a/commands/config-help.md b/commands/config-help.md new file mode 100644 index 0000000000..b45bebd154 --- /dev/null +++ b/commands/config-help.md @@ -0,0 +1 @@ +The `CONFIG HELP` command returns a helpful text describing the different subcommands. diff --git a/commands/config-resetstat.md b/commands/config-resetstat.md new file mode 100644 index 0000000000..0c8789e7d0 --- /dev/null +++ b/commands/config-resetstat.md @@ -0,0 +1,10 @@ +Resets the statistics reported by Redis using the `INFO` and `LATENCY HISTOGRAM` commands.
+
+The following is a non-exhaustive list of values that are reset:
+
+* Keyspace hits and misses
+* Number of expired keys
+* Command and error statistics
+* Connections received, rejected and evicted
+* Persistence statistics
+* Active defragmentation statistics diff --git a/commands/config-rewrite.md b/commands/config-rewrite.md new file mode 100644 index 0000000000..f4714975fd --- /dev/null +++ b/commands/config-rewrite.md @@ -0,0 +1,15 @@ +The `CONFIG REWRITE` command rewrites the `redis.conf` file the server was started with, applying the minimal changes needed to make it reflect the configuration currently used by the server, which may be different compared to the original one because of the use of the `CONFIG SET` command.
+
+The rewrite is performed in a very conservative way:
+
+* Comments and the overall structure of the original redis.conf are preserved as much as possible.
+* If an option already exists in the old redis.conf file, it will be rewritten at the same position (line number).
+* If an option was not already present, but it is set to its default value, it is not added by the rewrite process.
+* If an option was not already present, but it is set to a non-default value, it is appended at the end of the file.
+* Unused lines are blanked.
For instance, if you used to have multiple `save` directives, but the current configuration has fewer or none because you disabled RDB persistence, all of those lines will be blanked.
+
+CONFIG REWRITE is also able to rewrite the configuration file from scratch if the original one no longer exists for some reason. However, if the server was started without a configuration file at all, `CONFIG REWRITE` will just return an error.
+
+## Atomic rewrite process
+
+In order to make sure the redis.conf file is always consistent, that is, that on errors or crashes you always end up with either the old file or the new one, the rewrite is performed with a single `write(2)` call that has enough content to be at least as big as the old file. Sometimes additional padding in the form of comments is added in order to make sure the resulting file is big enough, and later the file gets truncated to remove the padding at the end. diff --git a/commands/config-set.md b/commands/config-set.md new file mode 100644 index 0000000000..02576fadc4 --- /dev/null +++ b/commands/config-set.md @@ -0,0 +1,36 @@ +The `CONFIG SET` command is used in order to reconfigure the server at run time
+without the need to restart Redis.
+You can change trivial parameters, or switch from one persistence option to
+another, using this command.
+
+The list of configuration parameters supported by `CONFIG SET` can be obtained
+by issuing a `CONFIG GET *` command, which is the symmetrical command used to
+obtain information about the configuration of a running Redis instance.
+
+All the configuration parameters set using `CONFIG SET` are immediately loaded
+by Redis and will take effect starting with the next command executed.
+
+All the supported parameters have the same meaning as the equivalent
+configuration parameter used in the [redis.conf][hgcarr22rc] file.
+
+[hgcarr22rc]: http://github.com/redis/redis/raw/unstable/redis.conf
+
+Note that you should look at the redis.conf file relevant to the version you're
+working with, as configuration options might change between versions. The link
+above is to the latest development version.
+
+It is possible to switch persistence from RDB snapshotting to append-only file
+(and the other way around) using the `CONFIG SET` command.
+For more information about how to do that please check the [persistence
+page][tp].
+
+[tp]: /topics/persistence
+
+In general what you should know is that setting the `appendonly` parameter to
+`yes` will start a background process to save the initial append-only file
+(obtained from the in-memory data set), and will append all the subsequent
+commands to the append-only file, thus obtaining exactly the same effect as a
+Redis server that started with AOF turned on from the start.
+
+You can have AOF enabled together with RDB snapshotting if you want; the two
+options are not mutually exclusive. diff --git a/commands/config.md b/commands/config.md new file mode 100644 index 0000000000..d4b37e90c7 --- /dev/null +++ b/commands/config.md @@ -0,0 +1,3 @@ +This is a container command for runtime configuration commands. + +To see the list of available commands you can call `CONFIG HELP`. diff --git a/commands/copy.md b/commands/copy.md new file mode 100644 index 0000000000..247c6c6478 --- /dev/null +++ b/commands/copy.md @@ -0,0 +1,17 @@ +This command copies the value stored at the `source` key to the `destination`
+key.
+
+By default, the `destination` key is created in the logical database used by the
+connection. The `DB` option allows specifying an alternative logical database
+index for the destination key.
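As an illustration of the `DB` option, here is a sketch of a hypothetical session that copies a value into database 1 (replies shown are illustrative):

```
> SET dolly "sheep"
OK
> COPY dolly clone DB 1
(integer) 1
> SELECT 1
OK
> GET clone
"sheep"
```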
The command returns zero when the `destination` key already exists. The
+`REPLACE` option removes the `destination` key before copying the value to it.
+
+@examples
+
+```
+SET dolly "sheep"
+COPY dolly clone
+GET clone
+``` diff --git a/commands/dbsize.md b/commands/dbsize.md index 468ac7d678..7aa2fb857e 100644 --- a/commands/dbsize.md +++ b/commands/dbsize.md @@ -1,7 +1 @@ - - -Return the number of keys in the currently selected database. - -@return - -@integer-reply +Return the number of keys in the currently-selected database. diff --git a/commands/debug object.md b/commands/debug object.md deleted file mode 100644 index 8dafb3c81c..0000000000 --- a/commands/debug object.md +++ /dev/null @@ -1,7 +0,0 @@ -@complexity - -@description - -@examples - -@return \ No newline at end of file diff --git a/commands/debug segfault.md b/commands/debug segfault.md deleted file mode 100644 index 8dafb3c81c..0000000000 --- a/commands/debug segfault.md +++ /dev/null @@ -1,7 +0,0 @@ -@complexity - -@description - -@examples - -@return \ No newline at end of file diff --git a/commands/debug.md b/commands/debug.md new file mode 100644 index 0000000000..fc3c3f347b --- /dev/null +++ b/commands/debug.md @@ -0,0 +1,2 @@ +The `DEBUG` command is an internal command. +It is meant to be used for developing and testing Redis. \ No newline at end of file diff --git a/commands/decr.md b/commands/decr.md index 385bbea9e7..ca6150a40e 100644 --- a/commands/decr.md +++ b/commands/decr.md @@ -1,23 +1,16 @@ -@complexity - -O(1) - - Decrements the number stored at `key` by one.
-If the key does not exist, it is set to `0` before performing the operation. An
-error is returned if the key contains a value of the wrong type or contains a
-string that is not representable as integer. This operation is limited to 64
-bit signed integers.
+If the key does not exist, it is set to `0` before performing the operation.
+An error is returned if the key contains a value of the wrong type or contains a
+string that cannot be represented as an integer.
+This operation is limited to **64 bit signed integers**.

See `INCR` for extra information on increment/decrement operations.

-@return
-
-@integer-reply: the value of `key` after the decrement
-
@examples

- @cli
- SET mykey "10"
- DECR mykey
-
+```cli
+SET mykey "10"
+DECR mykey
+SET mykey "234293482390480948029348230948"
+DECR mykey
+``` diff --git a/commands/decrby.md b/commands/decrby.md index 773d07f039..b0b4ebadb3 100644 --- a/commands/decrby.md +++ b/commands/decrby.md @@ -1,23 +1,14 @@ -@complexity - -O(1) - - Decrements the number stored at `key` by `decrement`.
-If the key does not exist, it is set to `0` before performing the operation. An
-error is returned if the key contains a value of the wrong type or contains a
-string that is not representable as integer. This operation is limited to 64
-bit signed integers.
+If the key does not exist, it is set to `0` before performing the operation.
+An error is returned if the key contains a value of the wrong type or contains a
+string that cannot be represented as an integer.
+This operation is limited to 64 bit signed integers.

See `INCR` for extra information on increment/decrement operations.
-@return - -@integer-reply: the value of `key` after the decrement - @examples - @cli - SET mykey "10" - DECRBY mykey 5 - +```cli +SET mykey "10" +DECRBY mykey 3 +``` diff --git a/commands/del.md b/commands/del.md index 0a05f3642f..b20b1e863f 100644 --- a/commands/del.md +++ b/commands/del.md @@ -1,20 +1,10 @@ -@complexity - -O(N) where N is the number of keys that will be removed. When a key to remove -holds a value other than a string, the individual complexity for this key is -O(M) where M is the number of elements in the list, set, sorted set or hash. -Removing a single key that holds a string value is O(1). - -Removes the specified keys. A key is ignored if it does not exist. - -@return - -@integer-reply: The number of keys that were removed. +Removes the specified keys. +A key is ignored if it does not exist. @examples - @cli - SET key1 "Hello" - SET key2 "World" - DEL key1 key2 key3 - +```cli +SET key1 "Hello" +SET key2 "World" +DEL key1 key2 key3 +``` diff --git a/commands/discard.md b/commands/discard.md index 27640342b3..a4064ddd74 100644 --- a/commands/discard.md +++ b/commands/discard.md @@ -1,9 +1,6 @@ -Flushes all previously queued commands in a -[transaction](/topics/transactions) and restores the connection state to -normal. +Flushes all previously queued commands in a [transaction][tt] and restores the +connection state to normal. -If `WATCH` was used, `DISCARD` unwatches all keys. +[tt]: /topics/transactions -@return - -@status-reply: always `OK`. +If `WATCH` was used, `DISCARD` unwatches all keys watched by the connection. diff --git a/commands/dump.md b/commands/dump.md new file mode 100644 index 0000000000..e06b501911 --- /dev/null +++ b/commands/dump.md @@ -0,0 +1,31 @@ +Serialize the value stored at key in a Redis-specific format and return it to +the user. +The returned value can be synthesized back into a Redis key using the `RESTORE` +command. + +The serialization format is opaque and non-standard, however it has a few +semantic characteristics: + +* It contains a 64-bit checksum that is used to make sure errors will be + detected. + The `RESTORE` command makes sure to check the checksum before synthesizing a + key using the serialized value. +* Values are encoded in the same format used by RDB. +* An RDB version is encoded inside the serialized value, so that different Redis + versions with incompatible RDB formats will refuse to process the serialized + value. + +The serialized value does NOT contain expire information. +In order to capture the time to live of the current value the `PTTL` command +should be used. + +If `key` does not exist a nil bulk reply is returned. + +@examples + +``` +> SET mykey 10 +OK +> DUMP mykey +"\x00\xc0\n\n\x00n\x9fWE\x0e\xaec\xbb" +``` diff --git a/commands/echo.md b/commands/echo.md index b06a94bdf0..e158e8910d 100644 --- a/commands/echo.md +++ b/commands/echo.md @@ -1,13 +1,7 @@ -@description - Returns `message`. -@return - -@bulk-reply - @examples - @cli - ECHO "Hello World!" - +```cli +ECHO "Hello World!" +``` diff --git a/commands/eval.md b/commands/eval.md index b4cce17bef..4dd5c8f452 100644 --- a/commands/eval.md +++ b/commands/eval.md @@ -1,453 +1,30 @@ -@complexity +Invoke the execution of a server-side Lua script. -Looking up the script both with `EVAL` or `EVALSHA` is an O(1) business. The -additional complexity is up to the script you execute. +The first argument is the script's source code. +Scripts are written in [Lua](https://lua.org) and executed by the embedded [Lua 5.1](/topics/lua-api) interpreter in Redis. 
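As a quick taste of the calling convention described next, here is a classic invocation, borrowed from earlier revisions of this page, that passes two key names and two additional arguments:

```
> EVAL "return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}" 2 key1 key2 first second
1) "key1"
2) "key2"
3) "first"
4) "second"
```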
-Warning ---- +The second argument is the number of input key name arguments, followed by all the keys accessed by the script. +These names of input keys are available to the script as the [_KEYS_ global runtime variable](/topics/lua-api#the-keys-global-variable) +Any additional input arguments **should not** represent names of keys. -Redis scripting support is currently a work in progress. This feature -will be shipped as stable with the release of Redis 2.6. The information -in this document reflects what is currently implemented, but it is -possible that changes will be made before the release of the stable -version. +**Important:** +to ensure the correct execution of scripts, both in standalone and clustered deployments, all names of keys that a script accesses must be explicitly provided as input key arguments. +The script **should only** access keys whose names are given as input arguments. +Scripts **should never** access keys with programmatically-generated names or based on the contents of data structures stored in the database. -Introduction to EVAL ---- +**Note:** +in some cases, users will abuse Lua EVAL by embedding values in the script instead of providing them as argument, and thus generating a different script on each call to EVAL. +These are added to the Lua interpreter and cached to redis-server, consuming a large amount of memory over time. +Starting from Redis 8.0, scripts loaded with `EVAL` or `EVAL_RO` will be deleted from redis after a certain number (least recently used order). +The number of evicted scripts can be viewed through `INFO`'s `evicted_scripts`. -`EVAL` and `EVALSHA` are used to evaluate scripts using the Lua interpreter -built into Redis starting from version 2.6.0. +Please refer to the [Redis Programmability](/topics/programmability) and [Introduction to Eval Scripts](/topics/eval-intro) for more information about Lua scripts. -The first argument of `EVAL` itself is a Lua script. The script does not need -to define a Lua function, it is just a Lua program that will run in the context -of the Redis server. +@examples -The second argument of `EVAL` is the number of arguments that follows -(starting from the third argument) that represent Redis key names. -This arguments can be accessed by Lua using the `KEYS` global variable in -the form of a one-based array (so `KEYS[1]`, `KEYS[2]`, ...). +The following example will run a script that returns the first argument that it gets. -All the additional arguments that should not represent key names can -be accessed by Lua using the `ARGV` global variable, very similarly to -what happens with keys (so `ARGV[1]`, `ARGV[2]`, ...). - -The following example can clarify what stated above: - - > eval "return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}" 2 key1 key2 first second - 1) "key1" - 2) "key2" - 3) "first" - 4) "second" - -Note: as you can see Lua arrays are returned as Redis multi bulk -replies, that is a Redis return type that your client library will -likely convert into an Array in your programming language. - -It is possible to call Redis program from a Lua script using two different -Lua functions: - -* `redis.call()` -* `redis.pcall()` - -`redis.call()` is similar to `redis.pcall()`, the only difference is that if a -Redis command call will result into an error, `redis.call()` will raise a -Lua error that in turn will make `EVAL` to fail, while `redis.pcall` will trap -the error returning a Lua table representing the error. 
- -The arguments of the `redis.call()` and `redis.pcall()` functions are simply -all the arguments of a well formed Redis command: - - > eval "return redis.call('set','foo','bar')" 0 - OK - -The above script works and will set the key `foo` to the string "bar". -However it violates the `EVAL` command semantics as all the keys that the -script uses should be passed using the KEYS array, in the following way: - - > eval "return redis.call('set',KEYS[1],'bar')" 1 foo - OK - -The reason for passing keys in the proper way is that, before of `EVAL` all -the Redis commands could be analyzed before execution in order to -establish what are the keys the command will operate on. - -In order for this to be true for `EVAL` also keys must be explicit. -This is useful in many ways, but especially in order to make sure Redis Cluster -is able to forward your request to the appropriate cluster node (Redis -Cluster is a work in progress, but the scripting feature was designed -in order to play well with it). - -Lua scripts can return a value that is converted from Lua to the Redis protocol -using a set of conversion rules. - -Conversion between Lua and Redis data types ---- - -Redis return values are converted into Lua data types when Lua calls a -Redis command using call() or pcall(). Similarly Lua data types are -converted into Redis data types when a script returns some value, that -we need to use as the `EVAL` reply. - -This conversion between data types is designed in a way that if -a Redis type is converted into a Lua type, and then the result is converted -back into a Redis type, the result is the same as of the initial value. - -In other words there is a one to one conversion between Lua and Redis types. -The following table shows you all the conversions rules: - -**Redis to Lua** conversion table. - -* Redis integer reply -> Lua number -* Redis bulk reply -> Lua string -* Redis multi bulk reply -> Lua table (may have other Redis data types nested) -* Redis status reply -> Lua table with a single `ok` field containing the status -* Redis error reply -> Lua table with a single `err` field containing the error -* Redis Nil bulk reply and Nil multi bulk reply -> Lua false boolean type - -**Lua to Redis** conversion table. - -* Lua number -> Redis integer reply -* Lua string -> Redis bulk reply -* Lua table (array) -> Redis multi bulk reply -* Lua table with a single `ok` field -> Redis status reply -* Lua table with a single `err` field -> Redis error reply -* Lua boolean false -> Redis Nil bulk reply. - -There is an additional Lua to Redis conversion that has no corresponding -Redis to Lua conversion: - - * Lua boolean true -> Redis integer reply with value of 1. - -The followings are a few conversion examples: - - > eval "return 10" 0 - (integer) 10 - - > eval "return {1,2,{3,'Hello World!'}}" 0 - 1) (integer) 1 - 2) (integer) 2 - 3) 1) (integer) 3 - 2) "Hello World!" - - > eval "return redis.call('get','foo')" 0 - "bar" - -The last example shows how it is possible to directly return from Lua -the return value of `redis.call()` and `redis.pcall()` with the result of -returning exactly what the called command would return if called directly. - -Atomicity of scripts ---- - -Redis uses the same Lua interpreter to run all the commands. Also Redis -guarantees that a script is executed in an atomic way: no other script -or Redis command will be executed while a script is being executed. -This semantics is very similar to the one of `MULTI` / `EXEC`. 
- -However this also means that executing slow scripts is not a good idea. -It is not hard to create fast scripts, as the script overhead is very low, -but if you are going to use slow scripts you should be aware that while the -script is running no other client can execute commands since the server -is busy. - -Error handling ---- - -As already stated calls to `redis.call()` resulting into a Redis command error -will stop the execution of the script and will return that error back, in a -way that makes it obvious the error was generated by a script: - - > del foo - (integer) 1 - > lpush foo a - (integer) 1 - > eval "return redis.call('get','foo')" 0 - (error) ERR Error running script (call to f_6b1bf486c81ceb7edf3c093f4c48582e38c0e791): ERR Operation against a key holding the wrong kind of value - -Using the `redis.pcall()` command no error is raised, but an error object -is returned in the format specified above (as a Lua table with an `err` -field). The user can later return this error to the user just returning the -error object returned by `redis.pcall()`. - -Bandwidth and EVALSHA ---- - -The `EVAL` command forces you to send the script body again and again, even if -it does not need to recompile the script every time as it uses an internal -caching mechanism. However paying the cost of the additional bandwidth may -not be optimal in all the contexts. - -On the other hand defining commands using a special command or via `redis.conf` -would be a problem for a few reasons: - -* Different instances may have different versions of a command -implementation. - -* Deployment is hard if there is to make sure all the instances contain -a given command, especially in a distributed environment. - -* Reading an application code the full semantic could not be clear since -the application would call commands defined server side. - -In order to avoid the above three problems and at the same time don't incur -in the bandwidth penalty Redis implements the `EVALSHA` command. - -`EVALSHA` works exactly as `EVAL`, but instead of having a script as first argument -it has the SHA1 sum of a script. The behavior is the following: - -* If the server still remembers a script whose SHA1 sum was the one -specified, the script is executed. - -* If the server does not remember a script with this SHA1 sum, a special -error is returned that will tell the client to use `EVAL` instead. - -Example: - - > set foo bar - OK - > eval "return redis.call('get','foo')" 0 - "bar" - > evalsha 6b1bf486c81ceb7edf3c093f4c48582e38c0e791 0 - "bar" - > evalsha ffffffffffffffffffffffffffffffffffffffff 0 - (error) `NOSCRIPT` No matching script. Please use `EVAL`. - -The client library implementation can always optimistically send `EVALSHA` under -the hoods even when the client actually called `EVAL`, in the hope the script -was already seen by the server. If the `NOSCRIPT` error is returned `EVAL` will be -used instead. Passing keys and arguments as `EVAL` additional arguments is also -very useful in this context as the script string remains constant and can be -efficiently cached by Redis. - -Script cache semantics ---- - -Executed scripts are guaranteed to be in the script cache forever. -This means that if an `EVAL` is performed against a Redis instance all the -subsequent `EVALSHA` calls will succeed. - -The only way to flush the script cache is by explicitly calling the -SCRIPT FLUSH command, that will flush the scripts cache. 
This is usually -needed only when the instance is going to be instantiated for another -customer in a cloud environment. - -The reason why scripts can be cached for long time is that it is unlikely -for a well written application to have so many different scripts to create -memory problems. Every script is conceptually like the implementation of -a new command, and even a large application will likely have just a few -hundreds of that. Even if the application is modified many times and -scripts will change, still the memory used is negligible. - -The fact that the user can count on Redis not removing scripts -is semantically a very good thing. For instance an application taking -a persistent connection to Redis can stay sure that if a script was -sent once it is still in memory, thus for instance can use EVALSHA -against those scripts in a pipeline without the change that an error -will be generated since the script is not known (we'll see this problem -in its details later). - -The SCRIPT command ---- - -Redis offers a SCRIPT command that can be used in order to control -the scripting subsystem. SCRIPT currently accepts three different commands: - -* SCRIPT FLUSH. This command is the only way to force Redis to flush the -scripts cache. It is mostly useful in a cloud environment where the same -instance can be reassigned to a different user. It is also useful for -testing client libraries implementations of the scripting feature. - -* SCRIPT EXISTS *sha1* *sha2* ... *shaN*. Given a list of SHA1 digests -as arguments this command returns an array of 1 or 0, where 1 means the -specific SHA1 is recognized as a script already present in the scripting -cache, while 0 means that a script with this SHA1 was never seen before -(or at least never seen after the latest SCRIPT FLUSH command). - -* SCRIPT LOAD *script*. This command registers the specified script in -the Redis script cache. The command is useful in all the contexts where -we want to make sure that `EVALSHA` will not fail (for instance during a -pipeline or MULTI/EXEC operation). - -* SCRIPT KILL. This command is the only wait to interrupt a long running -script that reached the configured maximum execution time for scripts. -The SCRIPT KILL command can only be used with scripts that did not modified -the dataset during their execution (since stopping a read only script does -not violate the scripting engine guaranteed atomicity). -See the next sections for more information about long running scripts. - -Scripts as pure functions ---- - -A very important part of scripting is writing scripts that are pure functions. -Scripts executed in a Redis instance are replicated on slaves sending the -same script, instead of the resulting commands. The same happens for the -Append Only File. The reason is that scripts are much faster than sending -commands one after the other to a Redis instance, so if the client is -taking the master very busy sending scripts, turning this scripts into single -commands for the slave / AOF would result in too much load for the replication -link or the Append Only File. - -The only drawback with this approach is that scripts are required to -have the following property: - -* The script always evaluates the same Redis *write* commands with the -same arguments given the same input data set. 
Operations performed by -the script cannot depend on any hidden information or state that may -change as script execution proceeds or between different executions of -the script, nor can it depend on any external input from I/O devices. - -Things like using the system time, calling Redis random commands like -RANDOMKEY, or using Lua random number generator, could result into scripts -that will not evaluate always in the same way. - -In order to enforce this behavior in scripts Redis does the following: - -* Lua does not export commands to access the system time or other -external state. - -* Redis will block the script with an error if a script will call a -Redis command able to alter the data set **after** a Redis random -command like RANDOMKEY or SRANDMEMBER. This means that if a script is -read only and does not modify the data set it is free to call those -commands. - -* Lua pseudo random number generation functions `math.random` and -`math.randomseed` are modified in order to always have the same seed every -time a new script is executed. This means that calling `math.random` will -always generate the same sequence of numbers every time a script is -executed if `math.randomseed` is not used. - -However the user is still able to write commands with random behaviors -using the following simple trick. For example I want to write a Redis -script that will populate a list with N random integers. - -I can start writing the following script, using a small Ruby program: - - require 'rubygems' - require 'redis' - - r = Redis.new - - RandomPushScript = < 0) do - res = redis.call('lpush',KEYS[1],math.random()) - i = i-1 - end - return res - EOF - - r.del(:mylist) - puts r.eval(RandomPushScript,1,:mylist,10) - -Every time this script executed the resulting list will have exactly the -following elements: - - > lrange mylist 0 -1 - 1) "0.74509509873814" - 2) "0.87390407681181" - 3) "0.36876626981831" - 4) "0.6921941534114" - 5) "0.7857992587545" - 6) "0.57730350670279" - 7) "0.87046522734243" - 8) "0.09637165539729" - 9) "0.74990198051087" - 10) "0.17082803611217" - -In order to make it a pure function, but still making sure that every -invocation of the script will result in a different random elements, we can -simply add an additional argument to the script, that will be used in order to -seed the Lua PRNG. The new script will be like the following: - - RandomPushScript = < 0) do - res = redis.call('lpush',KEYS[1],math.random()) - i = i-1 - end - return res - EOF - - r.del(:mylist) - puts r.eval(RandomPushScript,1,:mylist,10,rand(2**32)) - -What we are doing here is to send the seed of the PRNG as one of the -arguments. This way the script output will be the same given the same -arguments, but we are changing one of the argument at every invocation, -generating the random seed client side. The seed will be propagated as -one of the arguments both in the replication link and in the Append Only -File, guaranteeing that the same changes will be generated when the AOF -is reloaded or when the slave will process the script. - -Note: an important part of this behavior is that the PRNG that Redis implements -as `math.random` and `math.randomseed` is guaranteed to have the same output -regardless of the architecture of the system running Redis. 32 or 64 bit systems -like big or little endian systems will still produce the same output. - -Available libraries ---- - -The Redis Lua interpreter loads the following Lua libraries: - -* Base lib. -* Table lib. -* String lib. -* Math lib. -* Debug lib. 
-* CJSON lib. - -Every Redis instance is *guaranteed* to have all the above libraries so you -can be sure that the environment for your Redis scripts is always the same. - -The CJSON library allows to manipulate JSON data in a very fast way from Lua. -All the other libraries are standard Lua libraries. - -Sandbox and maximum execution time ---- - -Scripts should never try to access the external system, like the file system, -nor calling any other system call. A script should just do its work operating -on Redis data, starting form Redis data. - -Scripts also are subject to a maximum execution time of five seconds. -This default timeout is huge since a script should run usually in a sub -millisecond amount of time. The limit is mostly needed in order to avoid -problems when developing scripts that may loop forever for a programming -error. - -It is possible to modify the maximum time a script can be executed -with milliseconds precision, either via `redis.conf` or using the -CONFIG GET / CONFIG SET command. The configuration parameter -affecting max execution time is called `lua-time-limit`. - -When a script reaches the timeout it is not automatically terminated by -Redis since this violates the contract Redis has with the scripting engine -to ensure that scripts are atomic in nature. Stopping a script half-way means -to possibly leave the dataset with half-written data inside. -For this reasons when a script executes for more than the specified time -the following happens: - -* Redis logs that a script that is running for too much time is still in execution. -* It starts accepting commands again from other clients, but will reply with a BUSY error to all the clients sending normal commands. The only allowed commands in this status are `SCRIPT KILL` and `SHUTDOWN NOSAVE`. -* It is possible to terminate a script that executed only read-only commands using the `SCRIPT KILL` command. This does not violate the scripting semantic as no data was yet written on the dataset by the script. -* If the script already called write commands against the data set the only allowed command becomes `SHUTDOWN NOSAVE` that stops the server not saving the current data set on disk (basically the server is aborted). - -EVALSHA in the context of pipelining ---- - -Care should be taken when executing `EVALSHA` in the context of a pipelined -request, since even in a pipeline the order of execution of commands must -be guaranteed. If `EVALSHA` will return a `NOSCRIPT` error the command can not -be reissued later otherwise the order of execution is violated. - -The client library implementation should take one of the following -approaches: - -* Always use plain `EVAL` when in the context of a pipeline. - -* Accumulate all the commands to send into the pipeline, then check for -`EVAL` commands and use the SCRIPT EXISTS command to check if all the -scripts are already defined. If not add SCRIPT LOAD commands on top of -the pipeline as required, and use `EVALSHA` for all the `EVAL` calls. +``` +> EVAL "return ARGV[1]" 0 hello +"hello" +``` diff --git a/commands/eval_ro.md b/commands/eval_ro.md new file mode 100644 index 0000000000..bc166c84dd --- /dev/null +++ b/commands/eval_ro.md @@ -0,0 +1,18 @@ +This is a read-only variant of the `EVAL` command that cannot execute commands that modify data. + +For more information about when to use this command vs `EVAL`, please refer to [Read-only scripts](/docs/manual/programmability/#read-only-scripts). 
+ +For more information about `EVAL` scripts please refer to [Introduction to Eval Scripts](/topics/eval-intro). + +@examples + +``` +> SET mykey "Hello" +OK + +> EVAL_RO "return redis.call('GET', KEYS[1])" 1 mykey +"Hello" + +> EVAL_RO "return redis.call('DEL', KEYS[1])" 1 mykey +(error) ERR Error running script (call to b0d697da25b13e49157b2c214a4033546aba2104): @user_script:1: @user_script: 1: Write commands are not allowed from read-only scripts. +``` diff --git a/commands/evalsha.md b/commands/evalsha.md new file mode 100644 index 0000000000..c8b2329b5e --- /dev/null +++ b/commands/evalsha.md @@ -0,0 +1,6 @@ +Evaluate a script from the server's cache by its SHA1 digest. + +The server caches scripts by using the `SCRIPT LOAD` command. +The command is otherwise identical to `EVAL`. + +Please refer to the [Redis Programmability](/topics/programmability) and [Introduction to Eval Scripts](/topics/eval-intro) for more information about Lua scripts. diff --git a/commands/evalsha_ro.md b/commands/evalsha_ro.md new file mode 100644 index 0000000000..b6164b3303 --- /dev/null +++ b/commands/evalsha_ro.md @@ -0,0 +1,5 @@ +This is a read-only variant of the `EVALSHA` command that cannot execute commands that modify data. + +For more information about when to use this command vs `EVALSHA`, please refer to [Read-only scripts](/docs/manual/programmability/#read-only-scripts). + +For more information about `EVALSHA` scripts please refer to [Introduction to Eval Scripts](/topics/eval-intro). diff --git a/commands/exec.md b/commands/exec.md index 2fd1157589..2dada72b72 100644 --- a/commands/exec.md +++ b/commands/exec.md @@ -1,15 +1,9 @@ -Executes all previously queued commands in a -[transaction](/topics/transactions) and restores the connection state to -normal. +Executes all previously queued commands in a [transaction][tt] and restores the +connection state to normal. -When using `WATCH`, `EXEC` will execute commands only if the -watched keys were not modified, allowing for a [check-and-set -mechanism](/topics/transactions#cas). +[tt]: /topics/transactions -@return +When using `WATCH`, `EXEC` will execute commands only if the watched keys were +not modified, allowing for a [check-and-set mechanism][ttc]. -@multi-bulk-reply: each element being the reply to each of the commands -in the atomic transaction. - -When using `WATCH`, `EXEC` can return a @nil-reply if the execution was -aborted. +[ttc]: /topics/transactions#cas diff --git a/commands/exists.md b/commands/exists.md index 5dab4f67b6..c8a20239d7 100644 --- a/commands/exists.md +++ b/commands/exists.md @@ -1,21 +1,13 @@ -@complexity - -O(1) - - Returns if `key` exists. -@return - -@integer-reply, specifically: - -* `1` if the key exists. -* `0` if the key does not exist. +The user should be aware that if the same existing key is mentioned in the arguments multiple times, it will be counted multiple times. So if `somekey` exists, `EXISTS somekey somekey` will return 2. @examples - @cli - SET key1 "Hello" - EXISTS key1 - EXISTS key2 - +```cli +SET key1 "Hello" +EXISTS key1 +EXISTS nosuchkey +SET key2 "World" +EXISTS key1 key2 nosuchkey +``` diff --git a/commands/expire.md b/commands/expire.md index 878bce9a32..7aca62f81f 100644 --- a/commands/expire.md +++ b/commands/expire.md @@ -1,40 +1,182 @@ -@complexity +Set a timeout on `key`. +After the timeout has expired, the key will automatically be deleted. +A key with an associated timeout is often said to be _volatile_ in Redis +terminology. 
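For example, the following sketch creates a volatile key and inspects its remaining time to live with `TTL` (replies shown are illustrative):

```
> SET mykey "Hello"
OK
> EXPIRE mykey 10
(integer) 1
> TTL mykey
(integer) 10
```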
-O(1)

+The timeout will only be cleared by commands that delete or overwrite the
+contents of the key, including `DEL`, `SET`, `GETSET` and all the `*STORE`
+commands.
+This means that all the operations that conceptually _alter_ the value stored at
+the key without replacing it with a new one will leave the timeout untouched.
+For instance, incrementing the value of a key with `INCR`, pushing a new value
+into a list with `LPUSH`, or altering the field value of a hash with `HSET` are
+all operations that will leave the timeout untouched.
+The timeout can also be cleared, turning the key back into a persistent key,
+using the `PERSIST` command.

-Set a timeout on `key`. After the timeout has expired, the key will
-automatically be deleted. A key with an associated timeout is said to be
-_volatile_ in Redis terminology.

+If a key is renamed with `RENAME`, the associated time to live is transferred to
+the new key name.

-If `key` is updated before the timeout has expired, then the timeout is removed
-as if the `PERSIST` command was invoked on `key`.

+If a key is overwritten by `RENAME`, like in the case of an existing key `Key_A`
+that is overwritten by a call like `RENAME Key_B Key_A`, it does not matter if
+the original `Key_A` had a timeout associated or not, the new key `Key_A` will
+inherit all the characteristics of `Key_B`.

-For Redis versions **< 2.1.3**, existing timeouts cannot be overwritten. So, if
-`key` already has an associated timeout, it will do nothing and return `0`.
-Since Redis **2.1.3**, you can update the timeout of a key. It is also possible
-to remove the timeout using the `PERSIST` command. See the page on [key expiry][1]
-for more information.

+Note that calling `EXPIRE`/`PEXPIRE` with a non-positive timeout or
+`EXPIREAT`/`PEXPIREAT` with a time in the past will result in the key being
+[deleted][del] rather than expired (accordingly, the emitted [key event][ntf]
+will be `del`, not `expired`).

-Note that in Redis 2.4 the expire might not be pin-point accurate, and
-it could be between zero to one seconds out. Development versions of
-Redis fixed this bug and Redis 2.6 will feature a millisecond precision
-`EXPIRE`.

+[del]: /commands/del
+[ntf]: /topics/notifications

-[1]: /topics/expire

+## Options

-@return

+The `EXPIRE` command supports a set of options:

-@integer-reply, specifically:

+* `NX` -- Set expiry only when the key has no expiry
+* `XX` -- Set expiry only when the key has an existing expiry
+* `GT` -- Set expiry only when the new expiry is greater than the current one
+* `LT` -- Set expiry only when the new expiry is less than the current one

-* `1` if the timeout was set.
-* `0` if `key` does not exist or the timeout could not be set.

+A non-volatile key is treated as an infinite TTL for the purpose of `GT` and `LT`.
+The `GT`, `LT` and `NX` options are mutually exclusive.
+
+## Refreshing expires
+
+It is possible to call `EXPIRE` with a key that already has an existing expire
+set.
+In this case the time to live of a key is _updated_ to the new value.
+There are many useful applications for this; an example is documented in the
+_Navigation session_ pattern section below.
+
+## Differences in Redis versions prior to 2.1.3
+
+In Redis versions prior to **2.1.3**, altering a key with an expire set using a
+command that alters its value had the effect of removing the key entirely.
+These semantics were needed because of limitations in the replication layer that
+have since been fixed.
+
+`EXPIRE` would return 0 and not alter the timeout for a key with a timeout set.
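To illustrate the `GT` and `LT` options described above, the following sketch shows a shorter timeout being rejected by `GT` and then accepted by `LT` (replies shown are illustrative):

```
> SET mykey "Hello"
OK
> EXPIRE mykey 100
(integer) 1
> EXPIRE mykey 50 GT
(integer) 0
> TTL mykey
(integer) 100
> EXPIRE mykey 50 LT
(integer) 1
> TTL mykey
(integer) 50
```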
@examples

```cli
SET mykey "Hello"
EXPIRE mykey 10
TTL mykey
SET mykey "Hello World"
TTL mykey
EXPIRE mykey 10 XX
TTL mykey
EXPIRE mykey 10 NX
TTL mykey
```

## Pattern: Navigation session

Imagine you have a web service and you are interested in the latest N pages
_recently_ visited by your users, such that each adjacent page view was not
performed more than 60 seconds after the previous.
Conceptually you may consider this set of page views as a _Navigation session_
of your user, which may contain interesting information about what kind of
products they are currently looking for, so that you can recommend related
products.

You can easily model this pattern in Redis using the following strategy: every
time the user does a page view you call the following commands:

```
MULTI
RPUSH pageviews.user: http://.....
EXPIRE pageviews.user: 60
EXEC
```

If the user is idle for more than 60 seconds, the key will be deleted and only
subsequent page views that are less than 60 seconds apart will be
recorded.

This pattern is easily modified to use counters with `INCR` instead of lists
with `RPUSH`.

## Appendix: Redis expires

## Keys with an expire

Normally Redis keys are created without an associated time to live.
The key will simply live forever, unless it is removed by the user in an
explicit way, for instance using the `DEL` command.

The `EXPIRE` family of commands is able to associate an expire to a given key,
at the cost of some additional memory used by the key.
When a key has an expire set, Redis will make sure to remove the key when the
specified amount of time has elapsed.

The key time to live can be updated or entirely removed using the `EXPIRE` and
`PERSIST` commands (or other strictly related commands).

## Expire accuracy

In Redis 2.4 the expire might not be pin-point accurate, and it could be between
zero and one seconds out.

Since Redis 2.6 the expire error is from 0 to 1 milliseconds.

## Expires and persistence

Key expiry information is stored as absolute Unix timestamps (in milliseconds
in case of Redis version 2.6 or greater).
This means that the time is flowing even when the Redis instance is not active.

For expires to work well, the computer time must be stable.
If you move an RDB file between two computers with a big desync in their clocks,
funny things may happen (like all the keys being expired at loading
time).

Even running instances will always check the computer clock, so for instance if
you set a key with a time to live of 1000 seconds, and then set your computer
time 2000 seconds in the future, the key will be expired immediately, instead of
lasting for 1000 seconds.

## How Redis expires keys

Redis keys are expired in two ways: a passive way, and an active way.

A key is passively expired simply when some client tries to access it, and the
key is found to be timed out.

Of course this is not enough as there are expired keys that will never be
accessed again.
These keys should be expired anyway, so periodically Redis tests a few keys at
random among keys with an expire set.
All the keys that are already expired are deleted from the keyspace.

Specifically this is what Redis does 10 times per second:

1. Test 20 random keys from the set of keys with an associated expire.
2. Delete all the keys found expired.
3.
+3. If more than 25% of keys were expired, start again from step 1.
+
+This is a trivial probabilistic algorithm; the basic assumption is that our
+sample is representative of the whole key space, so we continue to expire until
+the percentage of keys that are likely to be expired is under 25%.
+
+This means that at any given moment the maximum number of already expired keys
+that are still using memory is at most equal to the maximum number of write
+operations per second divided by 4.
+
+## How expires are handled in the replication link and AOF file
+
+In order to obtain correct behavior without sacrificing consistency, when a
+key expires, a `DEL` operation is synthesized in both the AOF file and the
+replication stream to all the attached replica nodes.
+This way the expiration process is centralized in the master instance, and there
+is no chance of consistency errors.
+
+However, while the replicas connected to a master will not expire keys
+independently (but will wait for the `DEL` coming from the master), they'll
+still keep the full state of the expires existing in the dataset, so when a
+replica is elected to master it will be able to expire the keys independently,
+fully acting as a master.
diff --git a/commands/expireat.md b/commands/expireat.md
index f5773faab3..fed6bff7a8 100644
--- a/commands/expireat.md
+++ b/commands/expireat.md
@@ -1,40 +1,37 @@
-@complexity
-
-O(1)
-
-
-Set a timeout on `key`. After the timeout has expired, the key will
-automatically be deleted. A key with an associated timeout is said to be
-_volatile_ in Redis terminology.
-
 `EXPIREAT` has the same effect and semantic as `EXPIRE`, but instead of
 specifying the number of seconds representing the TTL (time to live), it takes
-an absolute [UNIX timestamp][2] (seconds since January 1, 1970).
+an absolute [Unix timestamp][hewowu] (seconds since January 1, 1970). A
+timestamp in the past will delete the key immediately.

-As in the case of `EXPIRE` command, if `key` is updated before the timeout has
-expired, then the timeout is removed as if the `PERSIST` command was invoked on
-`key`.
+[hewowu]: http://en.wikipedia.org/wiki/Unix_time

-[2]: http://en.wikipedia.org/wiki/Unix_time
+For the specific semantics of the command, please refer to the documentation of
+`EXPIRE`.

 ## Background

 `EXPIREAT` was introduced in order to convert relative timeouts to absolute
-timeouts for the AOF persistence mode. Of course, it can be used directly to
-specify that a given key should expire at a given time in the future.
+timeouts for the AOF persistence mode.
+Of course, it can be used directly to specify that a given key should expire at
+a given time in the future.

-@return
+## Options

-@integer-reply, specifically:
+The `EXPIREAT` command supports a set of options:

-* `1` if the timeout was set.
-* `0` if `key` does not exist or the timeout could not be set (see: `EXPIRE`).
+* `NX` -- Set expiry only when the key has no expiry
+* `XX` -- Set expiry only when the key has an existing expiry
+* `GT` -- Set expiry only when the new expiry is greater than the current one
+* `LT` -- Set expiry only when the new expiry is less than the current one

-@examples
+A non-volatile key is treated as an infinite TTL for the purpose of `GT` and `LT`.
+The `GT`, `LT` and `NX` options are mutually exclusive.
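+
+As noted above, passing a timestamp that is already in the past deletes the key
+right away (a hypothetical session):
+
+```
+redis> SET mykey "Hello"
+OK
+redis> EXPIREAT mykey 1
+(integer) 1
+redis> EXISTS mykey
+(integer) 0
+```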
-    @cli
-    SET mykey "Hello"
-    EXISTS mykey
-    EXPIREAT mykey 1293840000
-    EXISTS mykey
+@examples
+```cli
+SET mykey "Hello"
+EXISTS mykey
+EXPIREAT mykey 1293840000
+EXISTS mykey
+```
diff --git a/commands/expiretime.md b/commands/expiretime.md
new file mode 100644
index 0000000000..afcd37b8b9
--- /dev/null
+++ b/commands/expiretime.md
@@ -0,0 +1,11 @@
+Returns the absolute Unix timestamp (since January 1, 1970) in seconds at which the given key will expire.
+
+See also the `PEXPIRETIME` command, which returns the same information with milliseconds resolution.
+
+@examples
+
+```cli
+SET mykey "Hello"
+EXPIREAT mykey 33177117420
+EXPIRETIME mykey
+```
diff --git a/commands/failover.md b/commands/failover.md
new file mode 100644
index 0000000000..dd4ce43399
--- /dev/null
+++ b/commands/failover.md
@@ -0,0 +1,44 @@
+This command will start a coordinated failover between the currently-connected-to master and one of its replicas.
+The failover is not synchronous; instead, a background task will handle coordinating the failover.
+It is designed to limit data loss and unavailability of the cluster during the failover.
+This command is analogous to the `CLUSTER FAILOVER` command for non-clustered Redis, and is similar to the failover support provided by Sentinel.
+
+The specific details of the default failover flow are as follows:
+
+1. The master will internally start a `CLIENT PAUSE WRITE`, which will pause incoming writes and prevent the accumulation of new data in the replication stream.
+2. The master will monitor its replicas, waiting for a replica to indicate that it has fully consumed the replication stream. If the master has multiple replicas, it will only wait for the first replica to catch up.
+3. The master will then demote itself to a replica. This is done to prevent any dual master scenarios. NOTE: The master will not discard its data, so it will be able to roll back if the replica rejects the failover request in the next step.
+4. The previous master will send a special PSYNC request to the target replica, `PSYNC FAILOVER`, instructing the target replica to become a master.
+5. Once the previous master receives acknowledgement that the `PSYNC FAILOVER` was accepted, it will unpause its clients. If the PSYNC request is rejected, the master will abort the failover and return to normal.
+
+The field `master_failover_state` in `INFO replication` can be used to track the current state of the failover, which has the following values:
+
+* `no-failover`: There is no ongoing coordinated failover.
+* `waiting-for-sync`: The master is waiting for the replica to catch up to its replication offset.
+* `failover-in-progress`: The master has demoted itself, and is attempting to hand off ownership to a target replica.
+
+If the previous master had additional replicas attached to it, they will continue replicating from it as chained replicas. You will need to manually execute a `REPLICAOF` on these replicas to start replicating directly from the new master.
+
+## Optional arguments
+The following optional arguments exist to modify the behavior of the failover flow:
+
+* `TIMEOUT` *milliseconds* -- This option allows specifying a maximum time a master will wait in the `waiting-for-sync` state before aborting the failover attempt and rolling back.
+This is intended to set an upper bound on the write outage the Redis cluster can experience.
+Failovers typically happen in less than a second, but could take longer if there is a large amount of write traffic or the replica is already behind in consuming the replication stream.
+If this value is not specified, the timeout can be considered to be "infinite".
+
+* `TO` *HOST* *PORT* -- This option allows designating a specific replica, by its host and port, to fail over to. The master will wait specifically for this replica to catch up to its replication offset, and then fail over to it.
+
+* `FORCE` -- If both the `TIMEOUT` and `TO` options are set, the force flag can also be used to designate that once the timeout has elapsed, the master should fail over to the target replica instead of rolling back.
+This can be used for a best-effort attempt at a failover without data loss, while limiting the write outage.
+
+NOTE: The master will always roll back if the `PSYNC FAILOVER` request is rejected by the target replica.
+
+## Failover abort
+
+The failover command is intended to be safe from data loss and corruption, but it can encounter scenarios from which it cannot automatically recover, and may get stuck.
+For this purpose, the `FAILOVER ABORT` command exists, which will abort an ongoing failover and return the master to its normal state.
+The command has no side effects if issued in the `waiting-for-sync` state but can introduce multi-master scenarios in the `failover-in-progress` state.
+If a multi-master scenario is encountered, you will need to manually identify which master has the latest data, designate it as the master, and have the other replicas replicate from it.
+
+NOTE: `REPLICAOF` is disabled while a failover is in progress; this is to prevent unintended interactions with the failover that might cause data loss.
diff --git a/commands/fcall.md b/commands/fcall.md
new file mode 100644
index 0000000000..30e17518ad
--- /dev/null
+++ b/commands/fcall.md
@@ -0,0 +1,28 @@
+Invoke a function.
+
+Functions are loaded into the server with the `FUNCTION LOAD` command.
+The first argument is the name of a loaded function.
+
+The second argument is the number of input key name arguments, followed by all the keys accessed by the function.
+In Lua, these names of input keys are available to the function as a table that is the callback's first argument.
+
+**Important:**
+To ensure the correct execution of functions, both in standalone and clustered deployments, all names of keys that a function accesses must be explicitly provided as input key arguments.
+The function **should only** access keys whose names are given as input arguments.
+Functions **should never** access keys with programmatically-generated names or based on the contents of data structures stored in the database.
+
+Any additional input arguments **should not** represent names of keys.
+These are regular arguments and are passed in a Lua table as the callback's second argument.
+
+For more information please refer to the [Redis Programmability](/topics/programmability) and [Introduction to Redis Functions](/topics/functions-intro) pages.
+
+@examples
+
+The following example will create a library named `mylib` with a single function, `myfunc`, that returns the first argument it gets.
+ +``` +redis> FUNCTION LOAD "#!lua name=mylib \n redis.register_function('myfunc', function(keys, args) return args[1] end)" +"mylib" +redis> FCALL myfunc 0 hello +"hello" +``` diff --git a/commands/fcall_ro.md b/commands/fcall_ro.md new file mode 100644 index 0000000000..576b140674 --- /dev/null +++ b/commands/fcall_ro.md @@ -0,0 +1,5 @@ +This is a read-only variant of the `FCALL` command that cannot execute commands that modify data. + +For more information about when to use this command vs `FCALL`, please refer to [Read-only scripts](/docs/manual/programmability/#read-only_scripts). + +For more information please refer to [Introduction to Redis Functions](/topics/functions-intro). diff --git a/commands/flushall.md b/commands/flushall.md index e5f18817ff..68ec53c0b5 100644 --- a/commands/flushall.md +++ b/commands/flushall.md @@ -1,7 +1,16 @@ +Delete all the keys of all the existing databases, not just the currently selected one. +This command never fails. +By default, `FLUSHALL` will synchronously flush all the databases. +Starting with Redis 6.2, setting the **lazyfree-lazy-user-flush** configuration directive to "yes" changes the default flush mode to asynchronous. -Delete all the keys of all the existing databases, not just the currently selected one. This command never fails. +It is possible to use one of the following modifiers to dictate the flushing mode explicitly: -@return +* `ASYNC`: flushes the databases asynchronously +* `!SYNC`: flushes the databases synchronously -@status-reply +Note: an asynchronous `FLUSHALL` command only deletes keys that were present at the time the command was invoked. Keys created during an asynchronous flush will be unaffected. + +## Behavior change history + +* `>= 6.2.0`: Default flush behavior now configurable by the **lazyfree-lazy-user-flush** configuration directive. \ No newline at end of file diff --git a/commands/flushdb.md b/commands/flushdb.md index f233e3e764..112a9db3eb 100644 --- a/commands/flushdb.md +++ b/commands/flushdb.md @@ -1,7 +1,16 @@ +Delete all the keys of the currently selected DB. +This command never fails. +By default, `FLUSHDB` will synchronously flush all keys from the database. +Starting with Redis 6.2, setting the **lazyfree-lazy-user-flush** configuration directive to "yes" changes the default flush mode to asynchronous. -Delete all the keys of the currently selected DB. This command never fails. +It is possible to use one of the following modifiers to dictate the flushing mode explicitly: -@return +* `ASYNC`: flushes the database asynchronously +* `!SYNC`: flushes the database synchronously -@status-reply +Note: an asynchronous `FLUSHDB` command only deletes keys that were present at the time the command was invoked. Keys created during an asynchronous flush will be unaffected. + +## Behavior change history + +* `>= 6.2.0`: Default flush behavior now configurable by the **lazyfree-lazy-user-flush** configuration directive. \ No newline at end of file diff --git a/commands/function-delete.md b/commands/function-delete.md new file mode 100644 index 0000000000..557ce4d1c3 --- /dev/null +++ b/commands/function-delete.md @@ -0,0 +1,19 @@ +Delete a library and all its functions. + +This command deletes the library called _library-name_ and all functions in it. +If the library doesn't exist, the server returns an error. + +For more information please refer to [Introduction to Redis Functions](/topics/functions-intro). 
+
+@examples
+
+```
+redis> FUNCTION LOAD "#!lua name=mylib \n redis.register_function('myfunc', function(keys, args) return 'hello' end)"
+"mylib"
+redis> FCALL myfunc 0
+"hello"
+redis> FUNCTION DELETE mylib
+OK
+redis> FCALL myfunc 0
+(error) ERR Function not found
+```
diff --git a/commands/function-dump.md b/commands/function-dump.md
new file mode 100644
index 0000000000..167001c3cb
--- /dev/null
+++ b/commands/function-dump.md
@@ -0,0 +1,32 @@
+Return the serialized payload of loaded libraries.
+You can restore the serialized payload later with the `FUNCTION RESTORE` command.
+
+For more information please refer to [Introduction to Redis Functions](/topics/functions-intro).
+
+@examples
+
+The following example shows how to dump loaded libraries using `FUNCTION DUMP`, and then calls `FUNCTION FLUSH` to delete all the libraries.
+Then, it restores the original libraries from the serialized payload with `FUNCTION RESTORE`.
+
+```
+redis> FUNCTION LOAD "#!lua name=mylib \n redis.register_function('myfunc', function(keys, args) return args[1] end)"
+"mylib"
+redis> FUNCTION DUMP
+"\xf5\xc3@X@]\x1f#!lua name=mylib \n redis.registe\rr_function('my@\x0b\x02', @\x06`\x12\nkeys, args) 6\x03turn`\x0c\a[1] end)\x0c\x00\xba\x98\xc2\xa2\x13\x0e$\a"
+redis> FUNCTION FLUSH
+OK
+redis> FUNCTION RESTORE "\xf5\xc3@X@]\x1f#!lua name=mylib \n redis.registe\rr_function('my@\x0b\x02', @\x06`\x12\nkeys, args) 6\x03turn`\x0c\a[1] end)\x0c\x00\xba\x98\xc2\xa2\x13\x0e$\a"
+OK
+redis> FUNCTION LIST
+1) 1) "library_name"
+   2) "mylib"
+   3) "engine"
+   4) "LUA"
+   5) "functions"
+   6) 1) 1) "name"
+         2) "myfunc"
+         3) "description"
+         4) (nil)
+         5) "flags"
+         6) (empty array)
+```
diff --git a/commands/function-flush.md b/commands/function-flush.md
new file mode 100644
index 0000000000..7d9a2836a0
--- /dev/null
+++ b/commands/function-flush.md
@@ -0,0 +1,8 @@
+Deletes all the libraries.
+
+Unless called with the optional mode argument, the `lazyfree-lazy-user-flush` configuration directive sets the effective behavior. Valid modes are:
+
+* `ASYNC`: Asynchronously flush the libraries.
+* `!SYNC`: Synchronously flush the libraries.
+
+For more information please refer to [Introduction to Redis Functions](/topics/functions-intro).
diff --git a/commands/function-help.md b/commands/function-help.md
new file mode 100644
index 0000000000..9190a9b082
--- /dev/null
+++ b/commands/function-help.md
@@ -0,0 +1 @@
+The `FUNCTION HELP` command returns a helpful text describing the different subcommands.
diff --git a/commands/function-kill.md b/commands/function-kill.md
new file mode 100644
index 0000000000..1a61c9eb91
--- /dev/null
+++ b/commands/function-kill.md
@@ -0,0 +1,6 @@
+Kill a function that is currently executing.
+
+
+The `FUNCTION KILL` command can be used only on functions that did not modify the dataset during their execution (since stopping a read-only function does not violate the scripting engine's guaranteed atomicity).
+
+For more information please refer to [Introduction to Redis Functions](/topics/functions-intro).
diff --git a/commands/function-list.md b/commands/function-list.md
new file mode 100644
index 0000000000..7cd3853d7f
--- /dev/null
+++ b/commands/function-list.md
@@ -0,0 +1,17 @@
+Return information about the functions and libraries.
+
+You can use the optional `LIBRARYNAME` argument to specify a pattern for matching library names.
+The optional `WITHCODE` modifier will cause the server to include the library's source implementation in the reply.
+
+The following information is provided for each of the libraries in the response:
+
+* **library_name:** the name of the library.
+* **engine:** the engine of the library.
+* **functions:** the list of functions in the library.
+  Each function has the following fields:
+  * **name:** the name of the function.
+  * **description:** the function's description.
+  * **flags:** an array of [function flags](/docs/manual/programmability/functions-intro/#function-flags).
+* **library_code:** the library's source code (when given the `WITHCODE` modifier).
+
+For more information please refer to [Introduction to Redis Functions](/topics/functions-intro).
diff --git a/commands/function-load.md b/commands/function-load.md
new file mode 100644
index 0000000000..2bb36d3e71
--- /dev/null
+++ b/commands/function-load.md
@@ -0,0 +1,32 @@
+Load a library into Redis.
+
+The command gets a single mandatory parameter, which is the source code that implements the library.
+The library payload must start with a Shebang statement that provides metadata about the library (like the engine to use and the library name).
+Shebang format: `#!<engine name> name=<library name>`. Currently, the engine name must be `lua`.
+
+For the Lua engine, the implementation should declare one or more entry points to the library with the [`redis.register_function()` API](/topics/lua-api#redis.register_function).
+Once loaded, you can call the functions in the library with the `FCALL` (or `FCALL_RO` when applicable) command.
+
+When attempting to load a library with a name that already exists, the Redis server returns an error.
+The `REPLACE` modifier changes this behavior and overwrites the existing library with the new contents.
+
+The command will return an error in the following circumstances:
+
+* An invalid _engine-name_ was provided.
+* The library's name already exists without the `REPLACE` modifier.
+* A function in the library is created with a name that already exists in another library (even when `REPLACE` is specified).
+* The engine failed in creating the library's functions (due to a compilation error, for example).
+* No functions were declared by the library.
+
+For more information please refer to [Introduction to Redis Functions](/topics/functions-intro).
+
+@examples
+
+The following example will create a library named `mylib` with a single function, `myfunc`, that returns the first argument it gets.
+
+```
+redis> FUNCTION LOAD "#!lua name=mylib \n redis.register_function('myfunc', function(keys, args) return args[1] end)"
+mylib
+redis> FCALL myfunc 0 hello
+"hello"
+```
diff --git a/commands/function-restore.md b/commands/function-restore.md
new file mode 100644
index 0000000000..f50edda58f
--- /dev/null
+++ b/commands/function-restore.md
@@ -0,0 +1,11 @@
+Restore libraries from the serialized payload.
+
+You can use the optional _policy_ argument to provide a policy for handling existing libraries.
+The following policies are allowed:
+
+* **APPEND:** appends the restored libraries to the existing libraries and aborts on collision.
+  This is the default policy.
+* **FLUSH:** deletes all existing libraries before restoring the payload.
+* **REPLACE:** appends the restored libraries to the existing libraries, replacing any existing ones in case of name collisions. Note that this policy doesn't prevent function name collisions, only library name collisions.
+
+For more information please refer to [Introduction to Redis Functions](/topics/functions-intro).
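+
+@examples
+
+A sketch of restoring a payload with the `FLUSH` policy, discarding the current
+libraries first (the payload is abbreviated here; a real one comes verbatim
+from `FUNCTION DUMP`):
+
+```
+redis> FUNCTION RESTORE "\xf5\xc3..." FLUSH
+OK
+```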
diff --git a/commands/function-stats.md b/commands/function-stats.md
new file mode 100644
index 0000000000..1746d0e0e7
--- /dev/null
+++ b/commands/function-stats.md
@@ -0,0 +1,17 @@
+Return information about the function that's currently running and information about the available execution engines.
+
+The reply is a map with two keys:
+
+1. `running_script`: information about the running script.
+   If there's no in-flight function, the server replies with a _nil_.
+   Otherwise, this is a map with the following keys:
+   * **name:** the name of the function.
+   * **command:** the command and arguments used for invoking the function.
+   * **duration_ms:** the function's runtime duration in milliseconds.
+2. `engines`: this is a map of maps. Each entry in the map represents a single engine.
+   Each engine map contains statistics about the engine, such as the number of functions and the number of libraries.
+
+
+You can use this command to inspect the invocation of a long-running function and decide whether to kill it with the `FUNCTION KILL` command.
+
+For more information please refer to [Introduction to Redis Functions](/topics/functions-intro).
diff --git a/commands/function.md b/commands/function.md
new file mode 100644
index 0000000000..36ccd9b61e
--- /dev/null
+++ b/commands/function.md
@@ -0,0 +1,3 @@
+This is a container command for function commands.
+
+To see the list of available commands you can call `FUNCTION HELP`.
\ No newline at end of file
diff --git a/commands/geoadd.md b/commands/geoadd.md
new file mode 100644
index 0000000000..fdc696d277
--- /dev/null
+++ b/commands/geoadd.md
@@ -0,0 +1,49 @@
+Adds the specified geospatial items (longitude, latitude, name) to the specified key. Data is stored into the key as a sorted set, in a way that makes it possible to query the items with the `GEOSEARCH` command.
+
+The command takes arguments in the standard format x,y, so the longitude must be specified before the latitude. There are limits to the coordinates that can be indexed: areas very near to the poles are not indexable.
+
+The exact limits, as specified by EPSG:900913 / EPSG:3785 / OSGEO:41001, are the following:
+
+* Valid longitudes are from -180 to 180 degrees.
+* Valid latitudes are from -85.05112878 to 85.05112878 degrees.
+
+The command will report an error when the user attempts to index coordinates outside the specified ranges.
+
+**Note:** there is no **GEODEL** command because you can use `ZREM` to remove elements. The Geo index structure is just a sorted set.
+
+## GEOADD options
+
+`GEOADD` also provides the following options:
+
+* **XX**: Only update elements that already exist. Never add elements.
+* **NX**: Don't update already existing elements. Always add new elements.
+* **CH**: Modify the return value from the number of new elements added, to the total number of elements changed (CH is an abbreviation of *changed*). Changed elements are **new elements added** and elements already existing for which **the coordinates were updated**. So elements specified in the command line having the same score as they had in the past are not counted. Note: normally, the return value of `GEOADD` only counts the number of new elements added.
+
+Note: The **XX** and **NX** options are mutually exclusive.
+
+How does it work?
+---
+
+The sorted set is populated using a technique called
+[Geohash](https://en.wikipedia.org/wiki/Geohash). Latitude and Longitude
+bits are interleaved to form a unique 52-bit integer.
+We know that a sorted set double score can represent a 52-bit integer without
+losing precision.
+
+This format allows for bounding box and radius querying by checking the 1+8 areas needed to cover the whole shape and discarding elements outside it. The areas are checked by calculating the range of the box covered, removing enough bits from the less significant part of the sorted set score, and computing the score range to query in the sorted set for each area.
+
+What Earth model does it use?
+---
+
+The model assumes that the Earth is a sphere since it uses the Haversine formula to calculate distance. This formula is only an approximation when applied to the Earth, which is not a perfect sphere.
+The introduced errors are not an issue when used, for example, by social networks and similar applications requiring this type of querying.
+However, in the worst case, the error may be up to 0.5%, so you may want to consider other systems for error-critical applications.
+
+@examples
+
+```cli
+GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
+GEODIST Sicily Palermo Catania
+GEORADIUS Sicily 15 37 100 km
+GEORADIUS Sicily 15 37 200 km
+```
diff --git a/commands/geodist.md b/commands/geodist.md
new file mode 100644
index 0000000000..257d97f7f3
--- /dev/null
+++ b/commands/geodist.md
@@ -0,0 +1,24 @@
+Return the distance between two members in the geospatial index represented by the sorted set.
+
+Given a sorted set representing a geospatial index, populated using the `GEOADD` command, the command returns the distance between the two specified members in the specified unit.
+
+If one or both of the members are missing, the command returns NULL.
+
+The unit must be one of the following, and defaults to meters:
+
+* **m** for meters.
+* **km** for kilometers.
+* **mi** for miles.
+* **ft** for feet.
+
+The distance is computed assuming that the Earth is a perfect sphere, so errors up to 0.5% are possible in edge cases.
+
+@examples
+
+```cli
+GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
+GEODIST Sicily Palermo Catania
+GEODIST Sicily Palermo Catania km
+GEODIST Sicily Palermo Catania mi
+GEODIST Sicily Foo Bar
+```
diff --git a/commands/geohash.md b/commands/geohash.md
new file mode 100644
index 0000000000..0d43056747
--- /dev/null
+++ b/commands/geohash.md
@@ -0,0 +1,26 @@
+Return valid [Geohash](https://en.wikipedia.org/wiki/Geohash) strings representing the position of one or more elements in a sorted set value representing a geospatial index (where elements were added using `GEOADD`).
+
+Normally Redis represents positions of elements using a variation of the Geohash
+technique where positions are encoded using 52 bit integers. The encoding is
+also different compared to the standard because the initial min and max
+coordinates used during the encoding and decoding process are different. This
+command however **returns a standard Geohash** in the form of a string as
+described in the [Wikipedia article](https://en.wikipedia.org/wiki/Geohash) and compatible with the [geohash.org](http://geohash.org) web site.
+
+Geohash string properties
+---
+
+The command returns 11-character Geohash strings, so no precision is lost
+compared to the Redis internal 52 bit representation. The returned Geohashes
+have the following properties:
+
+1. They can be shortened by removing characters from the right. This loses precision but still points to the same area.
+2. It is possible to use them in `geohash.org` URLs such as `http://geohash.org/<geohash>`.
+This is an [example of such URL](http://geohash.org/sqdtr74hyu0).
+3. Strings with a similar prefix are nearby, but the contrary is not true: strings with different prefixes may be nearby too.
+
+@examples
+
+```cli
+GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
+GEOHASH Sicily Palermo Catania
+```
diff --git a/commands/geopos.md b/commands/geopos.md
new file mode 100644
index 0000000000..c23a879978
--- /dev/null
+++ b/commands/geopos.md
@@ -0,0 +1,12 @@
+Return the positions (longitude,latitude) of all the specified members of the geospatial index represented by the sorted set at *key*.
+
+Given a sorted set representing a geospatial index, populated using the `GEOADD` command, it is often useful to retrieve the coordinates of specified members. When the geospatial index is populated via `GEOADD` the coordinates are converted into a 52 bit geohash, so the coordinates returned may not be exactly the ones used to add the elements, as small errors may be introduced.
+
+The command can accept a variable number of arguments, so it always returns an array of positions even when a single element is specified.
+
+@examples
+
+```cli
+GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
+GEOPOS Sicily Palermo Catania NonExisting
+```
diff --git a/commands/georadius.md b/commands/georadius.md
new file mode 100644
index 0000000000..27e9fdc78b
--- /dev/null
+++ b/commands/georadius.md
@@ -0,0 +1,49 @@
+Return the members of a sorted set populated with geospatial information using `GEOADD`, which are within the borders of the area specified with the center location and the maximum distance from the center (the radius).
+
+This manual page also covers the `GEORADIUS_RO` and `GEORADIUSBYMEMBER_RO` variants (see the section below for more information).
+
+The common use case for this command is to retrieve geospatial items near a specified point not farther than a given amount of meters (or other units). This allows, for example, suggesting nearby places to mobile users of an application.
+
+The radius is specified in one of the following units:
+
+* **m** for meters.
+* **km** for kilometers.
+* **mi** for miles.
+* **ft** for feet.
+
+The command optionally returns additional information using the following options:
+
+* `WITHDIST`: Also return the distance of the returned items from the specified center. The distance is returned in the same unit as the unit specified as the radius argument of the command.
+* `WITHCOORD`: Also return the longitude,latitude coordinates of the matching items.
+* `WITHHASH`: Also return the raw geohash-encoded sorted set score of the item, in the form of a 52 bit unsigned integer. This is only useful for low level hacks or debugging and is otherwise of little interest for the general user.
+
+The command default is to return unsorted items. Two different sorting methods can be invoked using the following two options:
+
+* `ASC`: Sort returned items from the nearest to the farthest, relative to the center.
+* `DESC`: Sort returned items from the farthest to the nearest, relative to the center.
+
+By default all the matching items are returned. It is possible to limit the results to the first N matching items by using the **COUNT `<count>`** option.
+When `ANY` is provided the command will return as soon as enough matches are found,
+so the results may not be the ones closest to the specified point, but on the other hand, the effort invested by the server is significantly lower.
+When `ANY` is not provided, the command will perform an effort that is proportional to the number of items matching the specified area and sort them,
+so querying very large areas with a very small `COUNT` option may be slow even if just a few results are returned.
+
+By default the command returns the items to the client. It is possible to store the results with one of these options:
+
+* `!STORE`: Store the items in a sorted set populated with their geospatial information.
+* `!STOREDIST`: Store the items in a sorted set populated with their distance from the center as a floating point number, in the same unit specified in the radius.
+
+## Read-only variants
+
+Since `GEORADIUS` and `GEORADIUSBYMEMBER` have a `STORE` and `STOREDIST` option, they are technically flagged as writing commands in the Redis command table. For this reason, read-only replicas will flag them, and Redis Cluster replicas will redirect them to the master instance even if the connection is in read-only mode (see the `READONLY` command of Redis Cluster).
+
+Breaking compatibility with the past was considered but rejected, at least for Redis 4.0, so instead two read-only variants of the commands were added. They are exactly like the original commands but refuse the `STORE` and `STOREDIST` options. The two variants are called `GEORADIUS_RO` and `GEORADIUSBYMEMBER_RO`, and can safely be used in replicas.
+
+@examples
+
+```cli
+GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
+GEORADIUS Sicily 15 37 200 km WITHDIST
+GEORADIUS Sicily 15 37 200 km WITHCOORD
+GEORADIUS Sicily 15 37 200 km WITHDIST WITHCOORD
+```
diff --git a/commands/georadius_ro.md b/commands/georadius_ro.md
new file mode 100644
index 0000000000..df6c42b147
--- /dev/null
+++ b/commands/georadius_ro.md
@@ -0,0 +1,3 @@
+Read-only variant of the `GEORADIUS` command.
+
+This command is identical to the `GEORADIUS` command, except that it doesn't support the optional `STORE` and `STOREDIST` parameters.
diff --git a/commands/georadiusbymember.md b/commands/georadiusbymember.md
new file mode 100644
index 0000000000..5eab55d831
--- /dev/null
+++ b/commands/georadiusbymember.md
@@ -0,0 +1,16 @@
+This command is exactly like `GEORADIUS` with the sole difference that instead
+of taking, as the center of the area to query, a longitude and latitude value, it takes the name of a member already existing inside the geospatial index represented by the sorted set.
+
+The position of the specified member is used as the center of the query.
+
+Please check the example below and the `GEORADIUS` documentation for more information about the command and its options.
+
+Note that `GEORADIUSBYMEMBER_RO` is also available since Redis 3.2.10 and Redis 4.0.0 in order to provide a read-only command that can be used in replicas. See the `GEORADIUS` page for more information.
+
+@examples
+
+```cli
+GEOADD Sicily 13.583333 37.316667 "Agrigento"
+GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
+GEORADIUSBYMEMBER Sicily Agrigento 100 km
+```
diff --git a/commands/georadiusbymember_ro.md b/commands/georadiusbymember_ro.md
new file mode 100644
index 0000000000..94a57a815e
--- /dev/null
+++ b/commands/georadiusbymember_ro.md
@@ -0,0 +1,3 @@
+Read-only variant of the `GEORADIUSBYMEMBER` command.
+
+This command is identical to the `GEORADIUSBYMEMBER` command, except that it doesn't support the optional `STORE` and `STOREDIST` parameters.
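+
+@examples
+
+Assuming the `Sicily` dataset from the `GEORADIUSBYMEMBER` example above, a
+hypothetical read-only query might look like this (a sketch; actual replies
+depend on the data loaded):
+
+```
+redis> GEORADIUSBYMEMBER_RO Sicily Agrigento 100 km
+1) "Agrigento"
+2) "Palermo"
+```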
diff --git a/commands/geosearch.md b/commands/geosearch.md
new file mode 100644
index 0000000000..b094a93bce
--- /dev/null
+++ b/commands/geosearch.md
@@ -0,0 +1,38 @@
+Return the members of a sorted set populated with geospatial information using `GEOADD`, which are within the borders of the area specified by a given shape. This command extends the `GEORADIUS` command, so in addition to searching within circular areas, it supports searching within rectangular areas.
+
+This command should be used in place of the deprecated `GEORADIUS` and `GEORADIUSBYMEMBER` commands.
+
+The query's center point is provided by one of these mandatory options:
+
+* `FROMMEMBER`: Use the position of the given existing `<member>` in the sorted set.
+* `FROMLONLAT`: Use the given `<longitude>` and `<latitude>` position.
+
+The query's shape is provided by one of these mandatory options:
+
+* `BYRADIUS`: Similar to `GEORADIUS`, search inside a circular area according to the given `<radius>`.
+* `BYBOX`: Search inside an axis-aligned rectangle, determined by `<width>` and `<height>`.
+
+The command optionally returns additional information using the following options:
+
+* `WITHDIST`: Also return the distance of the returned items from the specified center point. The distance is returned in the same unit as specified for the radius or height and width arguments.
+* `WITHCOORD`: Also return the longitude and latitude of the matching items.
+* `WITHHASH`: Also return the raw geohash-encoded sorted set score of the item, in the form of a 52 bit unsigned integer. This is only useful for low level hacks or debugging and is otherwise of little interest for the general user.
+
+Matching items are returned unsorted by default. To sort them, use one of the following two options:
+
+* `ASC`: Sort returned items from the nearest to the farthest, relative to the center point.
+* `DESC`: Sort returned items from the farthest to the nearest, relative to the center point.
+
+All matching items are returned by default. To limit the results to the first N matching items, use the **COUNT `<count>`** option.
+When the `ANY` option is used, the command returns as soon as enough matches are found. This means that the results returned may not be the ones closest to the specified point, but the effort invested by the server to generate them is significantly less.
+When `ANY` is not provided, the command will perform an effort that is proportional to the number of items matching the specified area and sort them,
+so querying very large areas with a very small `COUNT` option may be slow even if just a few results are returned.
+
+@examples
+
+```cli
+GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania"
+GEOADD Sicily 12.758489 38.788135 "edge1" 17.241510 38.788135 "edge2"
+GEOSEARCH Sicily FROMLONLAT 15 37 BYRADIUS 200 km ASC
+GEOSEARCH Sicily FROMLONLAT 15 37 BYBOX 400 400 km ASC WITHCOORD WITHDIST
+```
diff --git a/commands/geosearchstore.md b/commands/geosearchstore.md
new file mode 100644
index 0000000000..b27d40125a
--- /dev/null
+++ b/commands/geosearchstore.md
@@ -0,0 +1,18 @@
+This command is like `GEOSEARCH`, but stores the result in the destination key.
+
+This command replaces the now deprecated `GEORADIUS` and `GEORADIUSBYMEMBER`.
+
+By default, it stores the results in the `destination` sorted set with their geospatial information.
+
+When using the `STOREDIST` option, the command stores the items in a sorted set populated with their distance from the center of the circle or box, as a floating-point number, in the same unit specified for that shape.
+ +@examples + +```cli +GEOADD Sicily 13.361389 38.115556 "Palermo" 15.087269 37.502669 "Catania" +GEOADD Sicily 12.758489 38.788135 "edge1" 17.241510 38.788135 "edge2" +GEOSEARCHSTORE key1 Sicily FROMLONLAT 15 37 BYBOX 400 400 km ASC COUNT 3 +GEOSEARCH key1 FROMLONLAT 15 37 BYBOX 400 400 km ASC WITHCOORD WITHDIST WITHHASH +GEOSEARCHSTORE key2 Sicily FROMLONLAT 15 37 BYBOX 400 400 km ASC COUNT 3 STOREDIST +ZRANGE key2 0 -1 WITHSCORES +``` diff --git a/commands/get.md b/commands/get.md index cfabc62cb2..2706dec616 100644 --- a/commands/get.md +++ b/commands/get.md @@ -1,20 +1,16 @@ -@complexity - -O(1) - - -Get the value of `key`. If the key does not exist the special value `nil` is returned. +Get the value of `key`. +If the key does not exist the special value `nil` is returned. An error is returned if the value stored at `key` is not a string, because `GET` only handles string values. -@return - -@bulk-reply: the value of `key`, or `nil` when `key` does not exist. - @examples - @cli - GET nonexisting - SET mykey "Hello" - GET mykey +```cli +GET nonexisting +SET mykey "Hello" +GET mykey +``` + +### Code examples +{{< clients-example set_and_get />}} diff --git a/commands/getbit.md b/commands/getbit.md index 9dd3554cb2..ec0d7414cd 100644 --- a/commands/getbit.md +++ b/commands/getbit.md @@ -1,24 +1,16 @@ -@complexity - -O(1) - - Returns the bit value at _offset_ in the string value stored at _key_. When _offset_ is beyond the string length, the string is assumed to be a -contiguous space with 0 bits. When _key_ does not exist it is assumed to be an -empty string, so _offset_ is always out of range and the value is also assumed -to be a contiguous space with 0 bits. - -@return - -@integer-reply: the bit value stored at _offset_. +contiguous space with 0 bits. +When _key_ does not exist it is assumed to be an empty string, so _offset_ is +always out of range and the value is also assumed to be a contiguous space with +0 bits. @examples - @cli - SETBIT mykey 7 1 - GETBIT mykey 0 - GETBIT mykey 7 - GETBIT mykey 100 - +```cli +SETBIT mykey 7 1 +GETBIT mykey 0 +GETBIT mykey 7 +GETBIT mykey 100 +``` diff --git a/commands/getdel.md b/commands/getdel.md new file mode 100644 index 0000000000..68867d3490 --- /dev/null +++ b/commands/getdel.md @@ -0,0 +1,10 @@ +Get the value of `key` and delete the key. +This command is similar to `GET`, except for the fact that it also deletes the key on success (if and only if the key's value type is a string). + +@examples + +```cli +SET mykey "Hello" +GETDEL mykey +GET mykey +``` diff --git a/commands/getex.md b/commands/getex.md new file mode 100644 index 0000000000..4b11b47384 --- /dev/null +++ b/commands/getex.md @@ -0,0 +1,22 @@ +Get the value of `key` and optionally set its expiration. +`GETEX` is similar to `GET`, but is a write command with additional options. + +## Options + +The `GETEX` command supports a set of options that modify its behavior: + +* `EX` *seconds* -- Set the specified expire time, in seconds. +* `PX` *milliseconds* -- Set the specified expire time, in milliseconds. +* `EXAT` *timestamp-seconds* -- Set the specified Unix time at which the key will expire, in seconds. +* `PXAT` *timestamp-milliseconds* -- Set the specified Unix time at which the key will expire, in milliseconds. +* `PERSIST` -- Remove the time to live associated with the key. 
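+
+For instance, the `PERSIST` option makes it possible to fetch a value and drop
+its time to live in a single atomic step (a hypothetical session):
+
+```
+redis> SET mykey "Hello" EX 100
+OK
+redis> GETEX mykey PERSIST
+"Hello"
+redis> TTL mykey
+(integer) -1
+```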
+ +@examples + +```cli +SET mykey "Hello" +GETEX mykey +TTL mykey +GETEX mykey EX 60 +TTL mykey +``` diff --git a/commands/getrange.md b/commands/getrange.md index f4c1bd017c..c188f95494 100644 --- a/commands/getrange.md +++ b/commands/getrange.md @@ -1,29 +1,18 @@ -@complexity - -O(N) where N is the length of the returned string. The complexity is ultimately -determined by the returned length, but because creating a substring from an -existing string is very cheap, it can be considered O(1) for small strings. - -**Warning**: this command was renamed to `GETRANGE`, it is called `SUBSTR` in Redis versions `<= 2.0`. - Returns the substring of the string value stored at `key`, determined by the -offsets `start` and `end` (both are inclusive). Negative offsets can be used in -order to provide an offset starting from the end of the string. So -1 means the -last character, -2 the penultimate and so forth. +offsets `start` and `end` (both are inclusive). +Negative offsets can be used in order to provide an offset starting from the end +of the string. +So -1 means the last character, -2 the penultimate and so forth. The function handles out of range requests by limiting the resulting range to the actual length of the string. -@return - -@bulk-reply - @examples - @cli - SET mykey "This is a string" - GETRANGE mykey 0 3 - GETRANGE mykey -3 -1 - GETRANGE mykey 0 -1 - GETRANGE mykey 10 100 - +```cli +SET mykey "This is a string" +GETRANGE mykey 0 3 +GETRANGE mykey -3 -1 +GETRANGE mykey 0 -1 +GETRANGE mykey 10 100 +``` diff --git a/commands/getset.md b/commands/getset.md index 64d537ba5b..ba3f82ac51 100644 --- a/commands/getset.md +++ b/commands/getset.md @@ -1,31 +1,26 @@ -@complexity - -O(1) - - Atomically sets `key` to `value` and returns the old value stored at `key`. -Returns an error when `key` exists but does not hold a string value. +Returns an error when `key` exists but does not hold a string value. Any +previous time to live associated with the key is discarded on successful +`SET` operation. ## Design pattern -`GETSET` can be used together with `INCR` for counting with atomic reset. For -example: a process may call `INCR` against the key `mycounter` every time some -event occurs, but from time to time we need to get the value of the counter and -reset it to zero atomically. This can be done using `GETSET mycounter "0"`: +`GETSET` can be used together with `INCR` for counting with atomic reset. +For example: a process may call `INCR` against the key `mycounter` every time +some event occurs, but from time to time we need to get the value of the counter +and reset it to zero atomically. +This can be done using `GETSET mycounter "0"`: - @cli - INCR mycounter - GETSET mycounter "0" - GET mycounter - -@return - -@bulk-reply: the old value stored at `key`, or `nil` when `key` did not exist. +```cli +INCR mycounter +GETSET mycounter "0" +GET mycounter +``` @examples - @cli - SET mykey "Hello" - GETSET mykey "World" - GET mykey - +```cli +SET mykey "Hello" +GETSET mykey "World" +GET mykey +``` diff --git a/commands/hdel.md b/commands/hdel.md index 8f6aec6c27..b0dceb2baf 100644 --- a/commands/hdel.md +++ b/commands/hdel.md @@ -1,28 +1,12 @@ -@complexity - -O(N) where N is the number of fields to be removed. - - -Removes the specified fields from the hash stored at `key`. Specified fields -that do not exist within this hash are ignored. +Removes the specified fields from the hash stored at `key`. +Specified fields that do not exist within this hash are ignored. 
If `key` does not exist, it is treated as an empty hash and this command
returns `0`.

-@return
-
-@integer-reply: the number of fields that were removed from the hash, not including specified but non existing fields.
-
-@history
-
-* `>= 2.4`: Accepts multiple `field` arguments. Redis versions older than 2.4 can only remove a field per call.
-
-  To remove multiple fields from a hash in an atomic fashion in earlier
-  versions, use a `MULTI`/`EXEC` block.
-
 @examples

-    @cli
-    HSET myhash field1 "foo"
-    HDEL myhash field1
-    HDEL myhash field2
-
+```cli
+HSET myhash field1 "foo"
+HDEL myhash field1
+HDEL myhash field2
+```
diff --git a/commands/hello.md b/commands/hello.md
new file mode 100644
index 0000000000..92c6604214
--- /dev/null
+++ b/commands/hello.md
@@ -0,0 +1,57 @@
+Switch to a different protocol, optionally authenticating and setting the
+connection's name, or provide a contextual client report.
+
+Redis version 6 and above supports two protocols: the old protocol, RESP2, and
+a new one introduced with Redis 6, RESP3. RESP3 has certain advantages since
+when the connection is in this mode, Redis is able to reply with more semantic
+replies: for instance, `HGETALL` will return a *map type*, so a client library
+implementation no longer needs to know in advance that it should translate the
+array into a hash before returning it to the caller. For a full coverage of
+RESP3, please check the [RESP3 specification](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md).
+
+In Redis 6 connections start in RESP2 mode, so clients implementing RESP2 do
+not need to be updated or changed. There are no short-term plans to drop support
+for RESP2, although future versions may default to RESP3.
+
+`HELLO` always replies with a list of current server and connection properties,
+such as: versions, modules loaded, client ID, replication role and so forth.
+When called without any arguments in Redis 6.2, using the default RESP2
+protocol, the reply looks like this:
+
+    > HELLO
+    1) "server"
+    2) "redis"
+    3) "version"
+    4) "255.255.255"
+    5) "proto"
+    6) (integer) 2
+    7) "id"
+    8) (integer) 5
+    9) "mode"
+    10) "standalone"
+    11) "role"
+    12) "master"
+    13) "modules"
+    14) (empty array)
+
+Clients that want to handshake using the RESP3 mode need to call the `HELLO`
+command and specify the value "3" as the `protover` argument, like so:
+
+    > HELLO 3
+    1# "server" => "redis"
+    2# "version" => "6.0.0"
+    3# "proto" => (integer) 3
+    4# "id" => (integer) 10
+    5# "mode" => "standalone"
+    6# "role" => "master"
+    7# "modules" => (empty array)
+
+Because `HELLO` replies with useful information, and given that `protover` is
+optional or can be set to "2", client library authors may consider using this
+command instead of the canonical `PING` when setting up the connection.
+
+When called with the optional `protover` argument, this command switches the
+protocol to the specified version and also accepts the following options:
+
+* `AUTH <username> <password>`: directly authenticate the connection in addition to switching to the specified protocol version. This makes calling `AUTH` before `HELLO` unnecessary when setting up a new connection. Note that the `username` can be set to "default" to authenticate against a server that does not use ACLs, but rather the simpler `requirepass` mechanism of Redis prior to version 6.
+* `SETNAME <clientname>`: this is the equivalent of calling `CLIENT SETNAME`.
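+
+For example, a client could perform the protocol switch, authentication, and
+connection naming in a single round trip (the credentials and connection name
+here are hypothetical):
+
+    > HELLO 3 AUTH default mypassword SETNAME myclient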
diff --git a/commands/hexists.md b/commands/hexists.md index ea6a213ca7..b63b63f443 100644 --- a/commands/hexists.md +++ b/commands/hexists.md @@ -1,21 +1,9 @@ -@complexity - -O(1) - - Returns if `field` is an existing field in the hash stored at `key`. -@return - -@integer-reply, specifically: - -* `1` if the hash contains `field`. -* `0` if the hash does not contain `field`, or `key` does not exist. - @examples - @cli - HSET myhash field1 "foo" - HEXISTS myhash field1 - HEXISTS myhash field2 - +```cli +HSET myhash field1 "foo" +HEXISTS myhash field1 +HEXISTS myhash field2 +``` diff --git a/commands/hget.md b/commands/hget.md index ef07f9d58c..d6bef72f81 100644 --- a/commands/hget.md +++ b/commands/hget.md @@ -1,19 +1,9 @@ -@complexity - -O(1) - - Returns the value associated with `field` in the hash stored at `key`. -@return - -@bulk-reply: the value associated with `field`, or `nil` when `field` is not -present in the hash or `key` does not exist. - @examples - @cli - HSET myhash field1 "foo" - HGET myhash field1 - HGET myhash field2 - +```cli +HSET myhash field1 "foo" +HGET myhash field1 +HGET myhash field2 +``` diff --git a/commands/hgetall.md b/commands/hgetall.md index e027ea6478..4fbd625f84 100644 --- a/commands/hgetall.md +++ b/commands/hgetall.md @@ -1,20 +1,11 @@ -@complexity - -O(N) where N is the size of the hash. - -Returns all fields and values of the hash stored at `key`. In the returned -value, every field name is followed by its value, so the length +Returns all fields and values of the hash stored at `key`. +In the returned value, every field name is followed by its value, so the length of the reply is twice the size of the hash. -@return - -@multi-bulk-reply: list of fields and their values stored in the hash, or an -empty list when `key` does not exist. - @examples - @cli - HSET myhash field1 "Hello" - HSET myhash field2 "World" - HGETALL myhash - +```cli +HSET myhash field1 "Hello" +HSET myhash field2 "World" +HGETALL myhash +``` diff --git a/commands/hincrby.md b/commands/hincrby.md index bf69ea7cd1..c2f1b63960 100644 --- a/commands/hincrby.md +++ b/commands/hincrby.md @@ -1,28 +1,19 @@ -@complexity - -O(1) - - Increments the number stored at `field` in the hash stored at `key` by -`increment`. If `key` does not exist, a new key holding a hash is created. If -`field` does not exist the value is set to `0` before the operation is +`increment`. +If `key` does not exist, a new key holding a hash is created. +If `field` does not exist the value is set to `0` before the operation is performed. -The range of values supported by `HINCRBY` is limited to 64 bit signed -integers. - -@return - -@integer-reply: the value at `field` after the increment operation. +The range of values supported by `HINCRBY` is limited to 64 bit signed integers. @examples Since the `increment` argument is signed, both increment and decrement operations can be performed: - @cli - HSET myhash field 5 - HINCRBY myhash field 1 - HINCRBY myhash field -1 - HINCRBY myhash field -10 - +```cli +HSET myhash field 5 +HINCRBY myhash field 1 +HINCRBY myhash field -1 +HINCRBY myhash field -10 +``` diff --git a/commands/hincrbyfloat.md b/commands/hincrbyfloat.md new file mode 100644 index 0000000000..f83d7d124d --- /dev/null +++ b/commands/hincrbyfloat.md @@ -0,0 +1,29 @@ +Increment the specified `field` of a hash stored at `key`, and representing a +floating point number, by the specified `increment`. 
If the increment value
+is negative, the result is to have the hash field value **decremented** instead of incremented.
+If the field does not exist, it is set to `0` before performing the operation.
+An error is returned if one of the following conditions occurs:
+
+* The key contains a value of the wrong type (not a hash).
+* The current field content or the specified increment are not parsable as a
+  double precision floating point number.
+
+The exact behavior of this command is identical to that of the `INCRBYFLOAT`
+command; please refer to the documentation of `INCRBYFLOAT` for further
+information.
+
+@examples
+
+```cli
+HSET mykey field 10.50
+HINCRBYFLOAT mykey field 0.1
+HINCRBYFLOAT mykey field -5
+HSET mykey field 5.0e3
+HINCRBYFLOAT mykey field 2.0e2
+```
+
+## Implementation details
+
+The command is always propagated in the replication link and the Append Only
+File as an `HSET` operation, so that differences in the underlying floating
+point math implementation will not be sources of inconsistency.
diff --git a/commands/hkeys.md b/commands/hkeys.md
index 07d15e7203..945b8f6204 100644
--- a/commands/hkeys.md
+++ b/commands/hkeys.md
@@ -1,18 +1,9 @@
-@complexity
-
-O(N) where N is the size of the hash.
-
 Returns all field names in the hash stored at `key`.

-@return
-
-@multi-bulk-reply: list of fields in the hash, or an empty list when `key` does
-not exist.
-
 @examples

-    @cli
-    HSET myhash field1 "Hello"
-    HSET myhash field2 "World"
-    HKEYS myhash
-
+```cli
+HSET myhash field1 "Hello"
+HSET myhash field2 "World"
+HKEYS myhash
+```
diff --git a/commands/hlen.md b/commands/hlen.md
index 9d79c3ced0..ab19a35656 100644
--- a/commands/hlen.md
+++ b/commands/hlen.md
@@ -1,18 +1,9 @@
-@complexity
-
-O(1)
-
-
 Returns the number of fields contained in the hash stored at `key`.

-@return
-
-@integer-reply: number of fields in the hash, or `0` when `key` does not exist.
-
 @examples

-    @cli
-    HSET myhash field1 "Hello"
-    HSET myhash field2 "World"
-    HLEN myhash
-
+```cli
+HSET myhash field1 "Hello"
+HSET myhash field2 "World"
+HLEN myhash
+```
diff --git a/commands/hmget.md b/commands/hmget.md
index 7cc47b3afc..ff322a15d8 100644
--- a/commands/hmget.md
+++ b/commands/hmget.md
@@ -1,21 +1,12 @@
-@complexity
-
-O(N) where N is the number of fields being requested.
-
 Returns the values associated with the specified `fields` in the hash stored at
 `key`.

 For every `field` that does not exist in the hash, a `nil` value is returned.
-Because a non-existing keys are treated as empty hashes, running `HMGET`
-against a non-existing `key` will return a list of `nil` values.
-
-@return
-
-@multi-bulk-reply: list of values associated with the given fields, in the same
-order as they are requested.
-
-    @cli
-    HSET myhash field1 "Hello"
-    HSET myhash field2 "World"
-    HMGET myhash field1 field2 nofield
-
+Because non-existing keys are treated as empty hashes, running `HMGET` against
+a non-existing `key` will return a list of `nil` values.
+
+```cli
+HSET myhash field1 "Hello"
+HSET myhash field2 "World"
+HMGET myhash field1 field2 nofield
+```
diff --git a/commands/hmset.md b/commands/hmset.md
index 2dc715a90a..b89a13fc5e 100644
--- a/commands/hmset.md
+++ b/commands/hmset.md
@@ -1,19 +1,12 @@
-@complexity
-
-O(N) where N is the number of fields being set.
-
-Sets the specified fields to their respective values in the hash
-stored at `key`. This command overwrites any existing fields in the hash.
+Sets the specified fields to their respective values in the hash stored at
+`key`.
+This command overwrites any specified fields already existing in the hash. If `key` does not exist, a new key holding a hash is created. -@return - -@status-reply - @examples - @cli - HMSET myhash field1 "Hello" field2 "World" - HGET myhash field1 - HGET myhash field2 - +```cli +HMSET myhash field1 "Hello" field2 "World" +HGET myhash field1 +HGET myhash field2 +``` diff --git a/commands/hrandfield.md b/commands/hrandfield.md new file mode 100644 index 0000000000..e019e8a946 --- /dev/null +++ b/commands/hrandfield.md @@ -0,0 +1,32 @@ +When called with just the `key` argument, return a random field from the hash value stored at `key`. + +If the provided `count` argument is positive, return an array of **distinct fields**. +The array's length is either `count` or the hash's number of fields (`HLEN`), whichever is lower. + +If called with a negative `count`, the behavior changes and the command is allowed to return the **same field multiple times**. +In this case, the number of returned fields is the absolute value of the specified `count`. + +The optional `WITHVALUES` modifier changes the reply so it includes the respective values of the randomly selected hash fields. + +@examples + +```cli +HSET coin heads obverse tails reverse edge null +HRANDFIELD coin +HRANDFIELD coin +HRANDFIELD coin -5 WITHVALUES +``` + +## Specification of the behavior when count is passed + +When the `count` argument is a positive value this command behaves as follows: + +* No repeated fields are returned. +* If `count` is bigger than the number of fields in the hash, the command will only return the whole hash without additional fields. +* The order of fields in the reply is not truly random, so it is up to the client to shuffle them if needed. + +When the `count` is a negative value, the behavior changes as follows: + +* Repeating fields are possible. +* Exactly `count` fields, or an empty array if the hash is empty (non-existing key), are always returned. +* The order of fields in the reply is truly random. diff --git a/commands/hscan.md b/commands/hscan.md new file mode 100644 index 0000000000..9ab261616a --- /dev/null +++ b/commands/hscan.md @@ -0,0 +1 @@ +See `SCAN` for `HSCAN` documentation. diff --git a/commands/hset.md b/commands/hset.md index a453b0f25d..92c34f3d30 100644 --- a/commands/hset.md +++ b/commands/hset.md @@ -1,22 +1,15 @@ -@complexity +Sets the specified fields to their respective values in the hash stored at `key`. -O(1) - - -Sets `field` in the hash stored at `key` to `value`. If `key` does not exist, a -new key holding a hash is created. If `field` already exists in the hash, it -is overwritten. - -@return - -@integer-reply, specifically: - -* `1` if `field` is a new field in the hash and `value` was set. -* `0` if `field` already exists in the hash and the value was updated. +This command overwrites the values of specified fields that exist in the hash. +If `key` doesn't exist, a new key holding a hash is created. @examples - @cli - HSET myhash field1 "Hello" - HGET myhash field1 - +```cli +HSET myhash field1 "Hello" +HGET myhash field1 +HSET myhash field2 "Hi" field3 "World" +HGET myhash field2 +HGET myhash field3 +HGETALL myhash +``` diff --git a/commands/hsetnx.md b/commands/hsetnx.md index 6e09ea835a..cc2dbdc020 100644 --- a/commands/hsetnx.md +++ b/commands/hsetnx.md @@ -1,23 +1,12 @@ -@complexity - -O(1) - - Sets `field` in the hash stored at `key` to `value`, only if `field` does not -yet exist. If `key` does not exist, a new key holding a hash is created. 
If
-`field` already exists, this operation has no effect.
-
-@return
-
-@integer-reply, specifically:
-
-* `1` if `field` is a new field in the hash and `value` was set.
-* `0` if `field` already exists in the hash and no operation was performed.
+yet exist.
+If `key` does not exist, a new key holding a hash is created.
+If `field` already exists, this operation has no effect.
 
 @examples
 
-    @cli
-    HSETNX myhash field "Hello"
-    HSETNX myhash field "World"
-    HGET myhash field
-
+```cli
+HSETNX myhash field "Hello"
+HSETNX myhash field "World"
+HGET myhash field
+```
diff --git a/commands/hstrlen.md b/commands/hstrlen.md
new file mode 100644
index 0000000000..e473ecffeb
--- /dev/null
+++ b/commands/hstrlen.md
@@ -0,0 +1,10 @@
+Returns the string length of the value associated with `field` in the hash stored at `key`. If the `key` or the `field` does not exist, 0 is returned.
+
+@examples
+
+```cli
+HSET myhash f1 HelloWorld f2 99 f3 -256
+HSTRLEN myhash f1
+HSTRLEN myhash f2
+HSTRLEN myhash f3
+```
diff --git a/commands/hvals.md b/commands/hvals.md
index 9e3b5c0231..f54f780519 100644
--- a/commands/hvals.md
+++ b/commands/hvals.md
@@ -1,18 +1,9 @@
-@complexity
-
-O(N) where N is the size of the hash.
-
 Returns all values in the hash stored at `key`.
 
-@return
-
-@multi-bulk-reply: list of values in the hash, or an empty list when `key` does
-not exist.
-
 @examples
 
-    @cli
-    HSET myhash field1 "Hello"
-    HSET myhash field2 "World"
-    HVALS myhash
-
+```cli
+HSET myhash field1 "Hello"
+HSET myhash field2 "World"
+HVALS myhash
+```
diff --git a/commands/incr.md b/commands/incr.md
index 351ac0aa6b..e8aae005d0 100644
--- a/commands/incr.md
+++ b/commands/incr.md
@@ -1,30 +1,159 @@
-@complexity
-
-O(1)
-
-
 Increments the number stored at `key` by one.
-If the key does not exist, it is set to `0` before performing the operation. An
-error is returned if the key contains a value of the wrong type or contains a
-string that is not representable as integer. This operation is limited to 64
-bit signed integers.
+If the key does not exist, it is set to `0` before performing the operation.
+An error is returned if the key contains a value of the wrong type or contains a
+string that cannot be represented as an integer.
+This operation is limited to 64 bit signed integers.
 
 **Note**: this is a string operation because Redis does not have a dedicated
-integer type. The the string stored at the key is interpreted as a base-10 64
-bit signed integer to execute the operation.
+integer type.
+The string stored at the key is interpreted as a base-10 **64 bit signed
+integer** to execute the operation.
 
 Redis stores integers in their integer representation, so for string values
-that actually hold an integer, there is no overhead for storing the
-string representation of the integer.
+that actually hold an integer, there is no overhead for storing the string
+representation of the integer.
 
-@return
+@examples
 
-@integer-reply: the value of `key` after the increment
+```cli
+SET mykey "10"
+INCR mykey
+GET mykey
+```
 
-@examples
+## Pattern: Counter
+
+The counter pattern is the most obvious thing you can do with Redis atomic
+increment operations.
+The idea is to simply send an `INCR` command to Redis every time an operation
+occurs.
+For instance, in a web application we may want to know how many page views a
+user performed on each day of the year.
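+
+As an illustrative sketch (the key names are hypothetical), the daily page-view
+counter for user 42 could be bumped like this:
+
+```
+> INCR views:user:42:2023-01-30
+(integer) 1
+> INCR views:user:42:2023-01-30
+(integer) 2
+```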
+
+To do so the web application may simply increment a key every time the user
+performs a page view, creating the key name by concatenating the User ID and a
+string representing the current date.
+
+This simple pattern can be extended in many ways:
+
+* It is possible to use `INCR` and `EXPIRE` together at every page view to have
+  a counter that only counts the latest N page views separated by less than the
+  specified number of seconds.
+* A client may use `GETSET` in order to atomically get the current counter value
+  and reset it to zero.
+* Using other atomic increment/decrement commands, like `DECR` or `INCRBY`, it
+  is possible to handle values that may get bigger or smaller depending on the
+  operations performed by the user.
+  Imagine for instance the score of different users in an online game.
+
+## Pattern: Rate limiter
+
+The rate limiter pattern is a special counter that is used to limit the rate at
+which an operation can be performed.
+The classical materialization of this pattern involves limiting the number of
+requests that can be performed against a public API.
+
+We provide two implementations of this pattern using `INCR`, where we assume
+that the problem to solve is limiting the number of API calls to a maximum of
+_ten requests per second per IP address_.
+
+## Pattern: Rate limiter 1
+
+The simplest and most direct implementation of this pattern is the following:
+
+```
+FUNCTION LIMIT_API_CALL(ip)
+ts = CURRENT_UNIX_TIME()
+keyname = ip+":"+ts
+MULTI
+    INCR(keyname)
+    EXPIRE(keyname,10)
+EXEC
+current = RESPONSE_OF_INCR_WITHIN_MULTI
+IF current > 10 THEN
+    ERROR "too many requests per second"
+ELSE
+    PERFORM_API_CALL()
+END
+```
+
+Basically we have a counter for every IP, for every different second.
+But these counters are always incremented with an expire of 10 seconds set, so
+that they'll be removed by Redis automatically when the current second is a
+different one.
+
+Note the use of `MULTI` and `EXEC` in order to make sure that we'll both
+increment and set the expire at every API call.
+
+## Pattern: Rate limiter 2
+
+An alternative implementation uses a single counter, but it is a bit more
+complex to get right without race conditions.
+We'll examine different variants.
+
+```
+FUNCTION LIMIT_API_CALL(ip):
+current = GET(ip)
+IF current != NULL AND current > 10 THEN
+    ERROR "too many requests per second"
+ELSE
+    value = INCR(ip)
+    IF value == 1 THEN
+        EXPIRE(ip,1)
+    END
+    PERFORM_API_CALL()
+END
+```
+
+The counter is created in a way that it will only survive one second, starting
+from the first request performed in the current second.
+If there are more than 10 requests in the same second the counter will reach a
+value greater than 10, otherwise it will expire and start again from 0.
+
+**In the above code there is a race condition**.
+If for some reason the client performs the `INCR` command but does not perform
+the `EXPIRE`, the key will be leaked until we see the same IP address again.
+
+This can be fixed easily by turning the `INCR` with optional `EXPIRE` into a Lua
+script that is sent using the `EVAL` command (only available since Redis version
+2.6).
+
+```
+local current
+current = redis.call("incr",KEYS[1])
+if current == 1 then
+    redis.call("expire",KEYS[1],1)
+end
+```
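+
+As an illustrative sketch (the file name is hypothetical), if the script above
+is saved as `ratelimit.lua`, a client could invoke it passing the IP address as
+the single key:
+
+```
+redis-cli --eval ratelimit.lua 127.0.0.1
+```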
+
+There is a different way to fix this issue without using scripting, by using
+Redis lists instead of counters.
+The implementation is more complex and uses more advanced features but has the
+advantage of remembering the IP addresses of the clients currently performing an
+API call, which may or may not be useful depending on the application.
+
+```
+FUNCTION LIMIT_API_CALL(ip)
+current = LLEN(ip)
+IF current > 10 THEN
+    ERROR "too many requests per second"
+ELSE
+    IF EXISTS(ip) == FALSE
+        MULTI
+            RPUSH(ip,ip)
+            EXPIRE(ip,1)
+        EXEC
+    ELSE
+        RPUSHX(ip,ip)
+    END
+    PERFORM_API_CALL()
+END
+```
+
+The `RPUSHX` command only pushes the element if the key already exists.
+Note that we have a race here, but it is not a problem: `EXISTS` may return
+false but the key may be created by another client before we create it inside
+the `MULTI` / `EXEC` block.
+However this race will just miss an API call under rare conditions, so the rate
+limiting will still work correctly.
diff --git a/commands/incrby.md b/commands/incrby.md
index 58e587ff89..d67a2dae54 100644
--- a/commands/incrby.md
+++ b/commands/incrby.md
@@ -1,23 +1,14 @@
-@complexity
-
-O(1)
-
-
 Increments the number stored at `key` by `increment`.
-If the key does not exist, it is set to `0` before performing the operation. An
-error is returned if the key contains a value of the wrong type or contains a
-string that is not representable as integer. This operation is limited to 64
-bit signed integers.
+If the key does not exist, it is set to `0` before performing the operation.
+An error is returned if the key contains a value of the wrong type or contains a
+string that cannot be represented as an integer.
+This operation is limited to 64 bit signed integers.
 
 See `INCR` for extra information on increment/decrement operations.
 
-@return
-
-@integer-reply: the value of `key` after the increment
-
 @examples
 
-    @cli
-    SET mykey "10"
-    INCRBY mykey 5
-
+```cli
+SET mykey "10"
+INCRBY mykey 5
+```
diff --git a/commands/incrbyfloat.md b/commands/incrbyfloat.md
new file mode 100644
index 0000000000..d44bec435d
--- /dev/null
+++ b/commands/incrbyfloat.md
@@ -0,0 +1,40 @@
+Increment the string representing a floating point number stored at `key` by the
+specified `increment`. By using a negative `increment` value, the result is
+that the value stored at the key is decremented (by the obvious properties
+of addition).
+If the key does not exist, it is set to `0` before performing the operation.
+An error is returned if one of the following conditions occurs:
+
+* The key contains a value of the wrong type (not a string).
+* The current key content or the specified increment are not parsable as a
+  double precision floating point number.
+
+If the command is successful the new incremented value is stored as the new
+value of the key (replacing the old one), and returned to the caller as a
+string.
+
+Both the value already contained in the string key and the increment argument
+can be optionally provided in exponential notation; however, the value computed
+after the increment is stored consistently in the same format, that is, an
+integer number followed (if needed) by a dot, and a variable number of digits
+representing the decimal part of the number.
+Trailing zeroes are always removed.
+
+The precision of the output is fixed at 17 digits after the decimal point
+regardless of the actual internal precision of the computation.
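+
+As an illustrative sketch of the output format, note how trailing zeroes are
+dropped from the reply even when the operands carry them:
+
+```
+> SET mykey 10.0
+OK
+> INCRBYFLOAT mykey 0.10
+"10.1"
+```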
+
+@examples
+
+```cli
+SET mykey 10.50
+INCRBYFLOAT mykey 0.1
+INCRBYFLOAT mykey -5
+SET mykey 5.0e3
+INCRBYFLOAT mykey 2.0e2
+```
+
+## Implementation details
+
+The command is always propagated in the replication link and the Append Only
+File as a `SET` operation, so that differences in the underlying floating point
+math implementation will not be sources of inconsistency.
diff --git a/commands/info.md b/commands/info.md
index 6ec2304b2c..117a5da638 100644
--- a/commands/info.md
+++ b/commands/info.md
@@ -1,46 +1,466 @@
-The `INFO` command returns information and statistics about the server
-in format that is simple to parse by computers and easy to red by humans.
+The `INFO` command returns information and statistics about the server in a
+format that is simple to parse by computers and easy to read by humans.
 
-@return
+The optional parameter can be used to select a specific section of information:
 
-@bulk-reply: in the following format (compacted for brevity):
+* `server`: General information about the Redis server
+* `clients`: Client connections section
+* `memory`: Memory consumption related information
+* `persistence`: RDB and AOF related information
+* `stats`: General statistics
+* `replication`: Master/replica replication information
+* `cpu`: CPU consumption statistics
+* `commandstats`: Redis command statistics
+* `latencystats`: Redis command latency percentile distribution statistics
+* `sentinel`: Redis Sentinel section (only applicable to Sentinel instances)
+* `cluster`: Redis Cluster section
+* `modules`: Modules section
+* `keyspace`: Database related statistics
+* `errorstats`: Redis error statistics
 
-    redis_version:2.2.2
-    uptime_in_seconds:148
-    used_cpu_sys:0.01
-    used_cpu_user:0.03
-    used_memory:768384
-    used_memory_rss:1536000
-    mem_fragmentation_ratio:2.00
-    changes_since_last_save:118
-    keyspace_hits:174
-    keyspace_misses:37
-    allocation_stats:4=56,8=312,16=1498,...
-    db0:keys=1240,expires=0
+It can also take the following values:
 
-All the fields are in the form of `field:value` terminated by `\r\n`.
+* `all`: Return all sections (excluding module generated ones)
+* `default`: Return only the default set of sections
+* `everything`: Includes `all` and `modules`
+
+When no parameter is provided, the `default` option is assumed.
+
+```cli
+INFO
+```
 
 ## Notes
 
-* `used_memory` is the total number of bytes allocated by Redis using its
-  allocator (either standard `libc` `malloc`, or an alternative allocator such as
-  [`tcmalloc`][1]
+Please note that, depending on the version of Redis, some of the fields may
+have been added or removed. A robust client application should therefore parse
+the result of this command by skipping unknown properties and gracefully
+handling missing fields.
+
+Here is the description of fields for Redis >= 2.4.
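+
+As a minimal sketch of such robust parsing (pseudocode; the function names are
+illustrative), a client can split the reply on `\r\n`, skip blank lines and `#`
+section headers, and keep whatever `field:value` pairs it finds:
+
+```
+FUNCTION PARSE_INFO(reply)
+fields = {}
+FOR line IN SPLIT(reply, "\r\n")
+    IF line == "" OR STARTS_WITH(line, "#") THEN CONTINUE
+    field, value = SPLIT_FIRST(line, ":")
+    fields[field] = value
+END
+RETURN fields
+```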
+ + +Here is the meaning of all fields in the **server** section: + +* `redis_version`: Version of the Redis server +* `redis_git_sha1`: Git SHA1 +* `redis_git_dirty`: Git dirty flag +* `redis_build_id`: The build id +* `redis_mode`: The server's mode ("standalone", "sentinel" or "cluster") +* `os`: Operating system hosting the Redis server +* `arch_bits`: Architecture (32 or 64 bits) +* `multiplexing_api`: Event loop mechanism used by Redis +* `atomicvar_api`: Atomicvar API used by Redis +* `gcc_version`: Version of the GCC compiler used to compile the Redis server +* `process_id`: PID of the server process +* `process_supervised`: Supervised system ("upstart", "systemd", "unknown" or "no") +* `run_id`: Random value identifying the Redis server (to be used by Sentinel + and Cluster) +* `tcp_port`: TCP/IP listen port +* `server_time_usec`: Epoch-based system time with microsecond precision +* `uptime_in_seconds`: Number of seconds since Redis server start +* `uptime_in_days`: Same value expressed in days +* `hz`: The server's current frequency setting +* `configured_hz`: The server's configured frequency setting +* `lru_clock`: Clock incrementing every minute, for LRU management +* `executable`: The path to the server's executable +* `config_file`: The path to the config file +* `io_threads_active`: Flag indicating if I/O threads are active +* `shutdown_in_milliseconds`: The maximum time remaining for replicas to catch up the replication before completing the shutdown sequence. + This field is only present during shutdown. + +Here is the meaning of all fields in the **clients** section: + +* `connected_clients`: Number of client connections (excluding connections + from replicas) +* `cluster_connections`: An approximation of the number of sockets used by the + cluster's bus +* `maxclients`: The value of the `maxclients` configuration directive. This is + the upper limit for the sum of `connected_clients`, `connected_slaves` and + `cluster_connections`. +* `client_recent_max_input_buffer`: Biggest input buffer among current client connections +* `client_recent_max_output_buffer`: Biggest output buffer among current client connections +* `blocked_clients`: Number of clients pending on a blocking call (`BLPOP`, + `BRPOP`, `BRPOPLPUSH`, `BLMOVE`, `BZPOPMIN`, `BZPOPMAX`) +* `tracking_clients`: Number of clients being tracked (`CLIENT TRACKING`) +* `pubsub_clients`: Number of clients in pubsub mode (`SUBSCRIBE`, `PSUBSCRIBE`, `SSUBSCRIBE`). Added in Redis 8.0 +* `watching_clients`: Number of clients in watching mode (`WATCH`). Added in Redis 8.0 +* `clients_in_timeout_table`: Number of clients in the clients timeout table +* `total_watched_keys`: Number of watched keys. Added in Redis 8.0. +* `total_blocking_keys`: Number of blocking keys. Added in Redis 7.2. +* `total_blocking_keys_on_nokey`: Number of blocking keys that one or more clients that would like to be unblocked when the key is deleted. Added in Redis 7.2. + +Here is the meaning of all fields in the **memory** section: + +* `used_memory`: Total number of bytes allocated by Redis using its + allocator (either standard **libc**, **jemalloc**, or an alternative + allocator such as [**tcmalloc**][hcgcpgp]) +* `used_memory_human`: Human readable representation of previous value +* `used_memory_rss`: Number of bytes that Redis allocated as seen by the + operating system (a.k.a resident set size). 
This is the number reported by
+  tools such as `top(1)` and `ps(1)`
+* `used_memory_rss_human`: Human readable representation of previous value
+* `used_memory_peak`: Peak memory consumed by Redis (in bytes)
+* `used_memory_peak_human`: Human readable representation of previous value
+* `used_memory_peak_perc`: The percentage of `used_memory_peak` out of
+  `used_memory`
+* `used_memory_overhead`: The sum in bytes of all overheads that the server
+  allocated for managing its internal data structures
+* `used_memory_startup`: Initial amount of memory consumed by Redis at startup
+  in bytes
+* `used_memory_dataset`: The size in bytes of the dataset
+  (`used_memory_overhead` subtracted from `used_memory`)
+* `used_memory_dataset_perc`: The percentage of `used_memory_dataset` out of
+  the net memory usage (`used_memory` minus `used_memory_startup`)
+* `total_system_memory`: The total amount of memory that the Redis host has
+* `total_system_memory_human`: Human readable representation of previous value
+* `used_memory_lua`: Number of bytes used by the Lua engine for EVAL scripts. Deprecated in Redis 7.0, renamed to `used_memory_vm_eval`
+* `used_memory_vm_eval`: Number of bytes used by the script VM engines for EVAL framework (not part of used_memory). Added in Redis 7.0
+* `used_memory_lua_human`: Human readable representation of previous value. Deprecated in Redis 7.0
+* `used_memory_scripts_eval`: Number of bytes overhead by the EVAL scripts (part of used_memory). Added in Redis 7.0
+* `number_of_cached_scripts`: The number of EVAL scripts cached by the server. Added in Redis 7.0
+* `number_of_functions`: The number of functions. Added in Redis 7.0
+* `number_of_libraries`: The number of libraries. Added in Redis 7.0
+* `used_memory_vm_functions`: Number of bytes used by the script VM engines for Functions framework (not part of used_memory). Added in Redis 7.0
+* `used_memory_vm_total`: `used_memory_vm_eval` + `used_memory_vm_functions` (not part of used_memory). Added in Redis 7.0
+* `used_memory_vm_total_human`: Human readable representation of previous value.
+* `used_memory_functions`: Number of bytes overhead by Function scripts (part of used_memory). Added in Redis 7.0
+* `used_memory_scripts`: `used_memory_scripts_eval` + `used_memory_functions` (part of used_memory). Added in Redis 7.0
+* `used_memory_scripts_human`: Human readable representation of previous value
+* `maxmemory`: The value of the `maxmemory` configuration directive
+* `maxmemory_human`: Human readable representation of previous value
+* `maxmemory_policy`: The value of the `maxmemory-policy` configuration
+  directive
+* `mem_fragmentation_ratio`: Ratio between `used_memory_rss` and `used_memory`.
+  Note that this doesn't only include fragmentation, but also other process overheads (see the `allocator_*` metrics), and also overheads like code, shared libraries, stack, etc.
+* `mem_fragmentation_bytes`: Delta between `used_memory_rss` and `used_memory`.
+  Note that when the total fragmentation bytes is low (few megabytes), a high ratio (e.g. 1.5 and above) is not an indication of an issue.
+* `allocator_frag_ratio`: Ratio between `allocator_active` and `allocator_allocated`. This is the true (external) fragmentation metric (not `mem_fragmentation_ratio`).
+* `allocator_frag_bytes`: Delta between `allocator_active` and `allocator_allocated`. See note about `mem_fragmentation_bytes`.
+* `allocator_rss_ratio`: Ratio between `allocator_resident` and `allocator_active`. This usually indicates pages that the allocator can and probably will soon release back to the OS.
+* `allocator_rss_bytes`: Delta between `allocator_resident` and `allocator_active`
+* `rss_overhead_ratio`: Ratio between `used_memory_rss` (the process RSS) and `allocator_resident`. This includes RSS overheads that are not allocator or heap related.
+* `rss_overhead_bytes`: Delta between `used_memory_rss` (the process RSS) and `allocator_resident`
+* `allocator_allocated`: Total bytes allocated from the allocator, including internal-fragmentation. Normally the same as `used_memory`.
+* `allocator_active`: Total bytes in the allocator active pages; this includes external-fragmentation.
+* `allocator_resident`: Total bytes resident (RSS) in the allocator; this includes pages that can be released to the OS (by `MEMORY PURGE`, or just waiting).
+* `allocator_muzzy`: Total bytes of 'muzzy' memory (RSS) in the allocator. Muzzy memory is memory that has been freed, but not yet fully returned to the operating system. It can be reused immediately when needed or reclaimed by the OS when system pressure increases.
+* `mem_not_counted_for_evict`: Used memory that's not counted for key eviction. This is basically transient replica and AOF buffers.
+* `mem_clients_slaves`: Memory used by replica clients - Starting with Redis 7.0, replica buffers share memory with the replication backlog, so this field can show 0 when replicas don't trigger an increase of memory usage.
+* `mem_clients_normal`: Memory used by normal clients
+* `mem_cluster_links`: Memory used by links to peers on the cluster bus when cluster mode is enabled.
+* `mem_aof_buffer`: Transient memory used for AOF and AOF rewrite buffers
+* `mem_replication_backlog`: Memory used by replication backlog
+* `mem_total_replication_buffers`: Total memory consumed for replication buffers - Added in Redis 7.0.
+* `mem_allocator`: Memory allocator, chosen at compile time.
+* `mem_overhead_db_hashtable_rehashing`: Temporary memory overhead of database dictionaries currently being rehashed - Added in 8.0.
+* `active_defrag_running`: When `activedefrag` is enabled, this indicates whether defragmentation is currently active, and the CPU percentage it intends to utilize.
+* `lazyfree_pending_objects`: The number of objects waiting to be freed (as a
+  result of calling `UNLINK`, or `FLUSHDB` and `FLUSHALL` with the **ASYNC**
+  option)
+* `lazyfreed_objects`: The number of objects that have been lazy freed.
+
+Ideally, the `used_memory_rss` value should be only slightly higher than
+`used_memory`.
+When rss >> used, a large difference may mean there is (external) memory fragmentation, which can be evaluated by checking
+`allocator_frag_ratio`, `allocator_frag_bytes`.
+When used >> rss, it means part of Redis memory has been swapped off by the
+operating system: expect some significant latencies.
+
+Because Redis does not have control over how its allocations are mapped to
+memory pages, high `used_memory_rss` is often the result of a spike in memory
+usage.
+
+When Redis frees memory, the memory is given back to the allocator, and the
+allocator may or may not give the memory back to the system. There may be
+a discrepancy between the `used_memory` value and memory consumption as
+reported by the operating system. It may be due to the fact that memory has been
+used and released by Redis, but not given back to the system. The
+`used_memory_peak` value is generally useful to check this point.
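+
+For example, a quick way to eyeball these fields on a live instance (an
+illustrative sketch using the `redis-cli` section argument and `grep`):
+
+```
+redis-cli INFO memory | grep -E "^(used_memory|used_memory_rss|mem_fragmentation_ratio):"
+```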
+ +Additional introspective information about the server's memory can be obtained +by referring to the `MEMORY STATS` command and the `MEMORY DOCTOR`. + +Here is the meaning of all fields in the **persistence** section: + +* `loading`: Flag indicating if the load of a dump file is on-going +* `async_loading`: Currently loading replication data-set asynchronously while serving old data. This means `repl-diskless-load` is enabled and set to `swapdb`. Added in Redis 7.0. +* `current_cow_peak`: The peak size in bytes of copy-on-write memory + while a child fork is running +* `current_cow_size`: The size in bytes of copy-on-write memory + while a child fork is running +* `current_cow_size_age`: The age, in seconds, of the `current_cow_size` value. +* `current_fork_perc`: The percentage of progress of the current fork process. For AOF and RDB forks it is the percentage of `current_save_keys_processed` out of `current_save_keys_total`. +* `current_save_keys_processed`: Number of keys processed by the current save operation +* `current_save_keys_total`: Number of keys at the beginning of the current save operation +* `rdb_changes_since_last_save`: Number of changes since the last dump +* `rdb_bgsave_in_progress`: Flag indicating a RDB save is on-going +* `rdb_last_save_time`: Epoch-based timestamp of last successful RDB save +* `rdb_last_bgsave_status`: Status of the last RDB save operation +* `rdb_last_bgsave_time_sec`: Duration of the last RDB save operation in + seconds +* `rdb_current_bgsave_time_sec`: Duration of the on-going RDB save operation + if any +* `rdb_last_cow_size`: The size in bytes of copy-on-write memory during + the last RDB save operation +* `rdb_last_load_keys_expired`: Number of volatile keys deleted during the last RDB loading. Added in Redis 7.0. +* `rdb_last_load_keys_loaded`: Number of keys loaded during the last RDB loading. Added in Redis 7.0. +* `aof_enabled`: Flag indicating AOF logging is activated +* `aof_rewrite_in_progress`: Flag indicating a AOF rewrite operation is + on-going +* `aof_rewrite_scheduled`: Flag indicating an AOF rewrite operation + will be scheduled once the on-going RDB save is complete. +* `aof_last_rewrite_time_sec`: Duration of the last AOF rewrite operation in + seconds +* `aof_current_rewrite_time_sec`: Duration of the on-going AOF rewrite + operation if any +* `aof_last_bgrewrite_status`: Status of the last AOF rewrite operation +* `aof_last_write_status`: Status of the last write operation to the AOF +* `aof_last_cow_size`: The size in bytes of copy-on-write memory during + the last AOF rewrite operation +* `module_fork_in_progress`: Flag indicating a module fork is on-going +* `module_fork_last_cow_size`: The size in bytes of copy-on-write memory + during the last module fork operation +* `aof_rewrites`: Number of AOF rewrites performed since startup +* `rdb_saves`: Number of RDB snapshots performed since startup + +`rdb_changes_since_last_save` refers to the number of operations that produced +some kind of changes in the dataset since the last time either `SAVE` or +`BGSAVE` was called. + +If AOF is activated, these additional fields will be added: + +* `aof_current_size`: AOF current file size +* `aof_base_size`: AOF file size on latest startup or rewrite +* `aof_pending_rewrite`: Flag indicating an AOF rewrite operation + will be scheduled once the on-going RDB save is complete. +* `aof_buffer_length`: Size of the AOF buffer +* `aof_rewrite_buffer_length`: Size of the AOF rewrite buffer. 
Note this field was removed in Redis 7.0 +* `aof_pending_bio_fsync`: Number of fsync pending jobs in background I/O + queue +* `aof_delayed_fsync`: Delayed fsync counter + +If a load operation is on-going, these additional fields will be added: + +* `loading_start_time`: Epoch-based timestamp of the start of the load + operation +* `loading_total_bytes`: Total file size +* `loading_rdb_used_mem`: The memory usage of the server that had generated + the RDB file at the time of the file's creation +* `loading_loaded_bytes`: Number of bytes already loaded +* `loading_loaded_perc`: Same value expressed as a percentage +* `loading_eta_seconds`: ETA in seconds for the load to be complete + +Here is the meaning of all fields in the **stats** section: + +* `total_connections_received`: Total number of connections accepted by the + server +* `total_commands_processed`: Total number of commands processed by the server +* `instantaneous_ops_per_sec`: Number of commands processed per second +* `total_net_input_bytes`: The total number of bytes read from the network +* `total_net_output_bytes`: The total number of bytes written to the network +* `total_net_repl_input_bytes`: The total number of bytes read from the network for replication purposes +* `total_net_repl_output_bytes`: The total number of bytes written to the network for replication purposes +* `instantaneous_input_kbps`: The network's read rate per second in KB/sec +* `instantaneous_output_kbps`: The network's write rate per second in KB/sec +* `instantaneous_input_repl_kbps`: The network's read rate per second in KB/sec for replication purposes +* `instantaneous_output_repl_kbps`: The network's write rate per second in KB/sec for replication purposes +* `rejected_connections`: Number of connections rejected because of + `maxclients` limit +* `sync_full`: The number of full resyncs with replicas +* `sync_partial_ok`: The number of accepted partial resync requests +* `sync_partial_err`: The number of denied partial resync requests +* `expired_keys`: Total number of key expiration events +* `expired_stale_perc`: The percentage of keys probably expired +* `expired_time_cap_reached_count`: The count of times that active expiry cycles have stopped early +* `expire_cycle_cpu_milliseconds`: The cumulative amount of time spent on active expiry cycles +* `evicted_keys`: Number of evicted keys due to `maxmemory` limit +* `evicted_clients`: Number of evicted clients due to `maxmemory-clients` limit. Added in Redis 7.0. +* `evicted_scripts`: Number of evicted EVAL scripts due to LRU policy, see `EVAL` for more details. Added in Redis 8.0. +* `total_eviction_exceeded_time`: Total time `used_memory` was greater than `maxmemory` since server startup, in milliseconds +* `current_eviction_exceeded_time`: The time passed since `used_memory` last rose above `maxmemory`, in milliseconds +* `keyspace_hits`: Number of successful lookup of keys in the main dictionary +* `keyspace_misses`: Number of failed lookup of keys in the main dictionary +* `pubsub_channels`: Global number of pub/sub channels with client + subscriptions +* `pubsub_patterns`: Global number of pub/sub pattern with client + subscriptions +* `pubsubshard_channels`: Global number of pub/sub shard channels with client subscriptions. 
Added in Redis 7.0.3
+* `latest_fork_usec`: Duration of the latest fork operation in microseconds
+* `total_forks`: Total number of fork operations since the server start
+* `migrate_cached_sockets`: The number of sockets open for `MIGRATE` purposes
+* `slave_expires_tracked_keys`: The number of keys tracked for expiry purposes
+  (applicable only to writable replicas)
+* `active_defrag_hits`: Number of value reallocations performed by the active
+  defragmentation process
+* `active_defrag_misses`: Number of aborted value reallocations started by the
+  active defragmentation process
+* `active_defrag_key_hits`: Number of keys that were actively defragmented
+* `active_defrag_key_misses`: Number of keys that were skipped by the active
+  defragmentation process
+* `total_active_defrag_time`: Total time memory fragmentation was over the limit, in milliseconds
+* `current_active_defrag_time`: The time passed since memory fragmentation last was over the limit, in milliseconds
+* `tracking_total_keys`: Number of keys being tracked by the server
+* `tracking_total_items`: Number of items, that is, the sum of the number of
+  clients for each key, that are being tracked
+* `tracking_total_prefixes`: Number of tracked prefixes in server's prefix table
+  (only applicable for broadcast mode)
+* `unexpected_error_replies`: Number of unexpected error replies, that are types
+  of errors from an AOF load or replication
+* `total_error_replies`: Total number of issued error replies, that is the sum of
+  rejected commands (errors prior to command execution) and
+  failed commands (errors within the command execution)
+* `dump_payload_sanitizations`: Total number of dump payload deep integrity validations (see `sanitize-dump-payload` config).
+* `total_reads_processed`: Total number of read events processed
+* `total_writes_processed`: Total number of write events processed
+* `io_threaded_reads_processed`: Number of read events processed by the main and I/O threads
+* `io_threaded_writes_processed`: Number of write events processed by the main and I/O threads
+* `client_query_buffer_limit_disconnections`: Total number of disconnections due to client reaching query buffer limit
+* `client_output_buffer_limit_disconnections`: Total number of disconnections due to client reaching output buffer limit
+* `reply_buffer_shrinks`: Total number of output buffer shrinks
+* `reply_buffer_expands`: Total number of output buffer expands
+* `eventloop_cycles`: Total number of eventloop cycles
+* `eventloop_duration_sum`: Total time spent in the eventloop in microseconds (including I/O and command processing)
+* `eventloop_duration_cmd_sum`: Total time spent on executing commands in microseconds
+* `instantaneous_eventloop_cycles_per_sec`: Number of eventloop cycles per second
+* `instantaneous_eventloop_duration_usec`: Average time spent in a single eventloop cycle in microseconds
+* `acl_access_denied_auth`: Number of authentication failures
+* `acl_access_denied_cmd`: Number of commands rejected because of access denied to the command
+* `acl_access_denied_key`: Number of commands rejected because of access denied to a key
+* `acl_access_denied_channel`: Number of commands rejected because of access denied to a channel
+
+Here is the meaning of all fields in the **replication** section:
+
+* `role`: Value is "master" if the instance is a replica of no one, or "slave" if the instance is a replica of some master instance.
+  Note that a replica can be master of another replica (chained replication).
+* `master_failover_state`: The state of an ongoing failover, if any. +* `master_replid`: The replication ID of the Redis server. +* `master_replid2`: The secondary replication ID, used for PSYNC after a failover. +* `master_repl_offset`: The server's current replication offset +* `second_repl_offset`: The offset up to which replication IDs are accepted +* `repl_backlog_active`: Flag indicating replication backlog is active +* `repl_backlog_size`: Total size in bytes of the replication backlog buffer +* `repl_backlog_first_byte_offset`: The master offset of the replication + backlog buffer +* `repl_backlog_histlen`: Size in bytes of the data in the replication backlog + buffer + +If the instance is a replica, these additional fields are provided: + +* `master_host`: Host or IP address of the master +* `master_port`: Master listening TCP port +* `master_link_status`: Status of the link (up/down) +* `master_last_io_seconds_ago`: Number of seconds since the last interaction + with master +* `master_sync_in_progress`: Indicate the master is syncing to the replica +* `slave_read_repl_offset`: The read replication offset of the replica instance. +* `slave_repl_offset`: The replication offset of the replica instance +* `slave_priority`: The priority of the instance as a candidate for failover +* `slave_read_only`: Flag indicating if the replica is read-only +* `replica_announced`: Flag indicating if the replica is announced by Sentinel. + +If a SYNC operation is on-going, these additional fields are provided: + +* `master_sync_total_bytes`: Total number of bytes that need to be + transferred. this may be 0 when the size is unknown (for example, when + the `repl-diskless-sync` configuration directive is used) +* `master_sync_read_bytes`: Number of bytes already transferred +* `master_sync_left_bytes`: Number of bytes left before syncing is complete + (may be negative when `master_sync_total_bytes` is 0) +* `master_sync_perc`: The percentage `master_sync_read_bytes` from + `master_sync_total_bytes`, or an approximation that uses + `loading_rdb_used_mem` when `master_sync_total_bytes` is 0 +* `master_sync_last_io_seconds_ago`: Number of seconds since last transfer I/O + during a SYNC operation + +If the link between master and replica is down, an additional field is provided: + +* `master_link_down_since_seconds`: Number of seconds since the link is down + +The following field is always provided: + +* `connected_slaves`: Number of connected replicas + +If the server is configured with the `min-slaves-to-write` (or starting with Redis 5 with the `min-replicas-to-write`) directive, an additional field is provided: + +* `min_slaves_good_slaves`: Number of replicas currently considered good + +For each replica, the following line is added: + +* `slaveXXX`: id, IP address, port, state, offset, lag + +Here is the meaning of all fields in the **cpu** section: + +* `used_cpu_sys`: System CPU consumed by the Redis server, which is the sum of system CPU consumed by all threads of the server process (main thread and background threads) +* `used_cpu_user`: User CPU consumed by the Redis server, which is the sum of user CPU consumed by all threads of the server process (main thread and background threads) +* `used_cpu_sys_children`: System CPU consumed by the background processes +* `used_cpu_user_children`: User CPU consumed by the background processes +* `used_cpu_sys_main_thread`: System CPU consumed by the Redis server main thread +* `used_cpu_user_main_thread`: User CPU consumed by the Redis server main 
thread + +The **commandstats** section provides statistics based on the command type, + including the number of calls that reached command execution (not rejected), + the total CPU time consumed by these commands, the average CPU consumed + per command execution, the number of rejected calls + (errors prior command execution), and the number of failed calls + (errors within the command execution). + +For each command type, the following line is added: + +* `cmdstat_XXX`: `calls=XXX,usec=XXX,usec_per_call=XXX,rejected_calls=XXX,failed_calls=XXX` + +The **latencystats** section provides latency percentile distribution statistics based on the command type. + + By default, the exported latency percentiles are the p50, p99, and p999. + If you need to change the exported percentiles, use `CONFIG SET latency-tracking-info-percentiles "50.0 99.0 99.9"`. + + This section requires the extended latency monitoring feature to be enabled (by default it's enabled). + If you need to enable it, use `CONFIG SET latency-tracking yes`. + +For each command type, the following line is added: + +* `latency_percentiles_usec_XXX: p=,p=,...` + +The **errorstats** section enables keeping track of the different errors that occurred within Redis, + based upon the reply error prefix ( The first word after the "-", up to the first space. Example: `ERR` ). + +For each error type, the following line is added: + +* `errorstat_XXX`: `count=XXX` + +The **sentinel** section is only available in Redis Sentinel instances. It consists of the following fields: + +* `sentinel_masters`: Number of Redis masters monitored by this Sentinel instance +* `sentinel_tilt`: A value of 1 means this sentinel is in TILT mode +* `sentinel_tilt_since_seconds`: Duration in seconds of current TILT, or -1 if not TILTed. Added in Redis 7.0.0 +* `sentinel_running_scripts`: The number of scripts this Sentinel is currently executing +* `sentinel_scripts_queue_length`: The length of the queue of user scripts that are pending execution +* `sentinel_simulate_failure_flags`: Flags for the `SENTINEL SIMULATE-FAILURE` command + +The **cluster** section currently only contains a unique field: + +* `cluster_enabled`: Indicate Redis cluster is enabled + +The **modules** section contains additional information about loaded modules if the modules provide it. The field part of properties lines in this section is always prefixed with the module's name. + +The **keyspace** section provides statistics on the main dictionary of each +database. +The statistics are the number of keys, and the number of keys with an expiration. + +For each database, the following line is added: + +* `dbXXX`: `keys=XXX,expires=XXX` + +The **debug** section contains experimental metrics, which might change or get removed in future versions. +It won't be included when `INFO` or `INFO ALL` are called, and it is returned only when `INFO DEBUG` is used. -* `used_memory_rss` is the number of bytes that Redis allocated as seen by the - operating system. Optimally, this number is close to `used_memory` and there - is little memory fragmentation. This is the number reported by tools such as - `top` and `ps`. A large difference between these numbers means there is - memory fragmentation. Because Redis does not have control over how its - allocations are mapped to memory pages, `used_memory_rss` is often the result - of a spike in memory usage. The ratio between `used_memory_rss` and - `used_memory` is given as `mem_fragmentation_ratio`. 
+* `eventloop_duration_aof_sum`: Total time spent on flushing AOF in eventloop in microseconds
+* `eventloop_duration_cron_sum`: Total time consumption of cron in microseconds (including serverCron and beforeSleep, but excluding IO and AOF flushing)
+* `eventloop_duration_max`: The maximal time spent in a single eventloop cycle in microseconds
+* `eventloop_cmd_per_cycle_max`: The maximal number of commands processed in a single eventloop cycle
 
-* `changes_since_last_save` refers to the number of operations that produced
-  some kind of change in the dataset since the last time either `SAVE` or
-  `BGSAVE` was called.
+[hcgcpgp]: http://code.google.com/p/google-perftools/
 
-* `allocation_stats` holds a histogram containing the number of allocations of
-  a certain size (up to 256). This provides a means of introspection for the
-  type of allocations performed by Redis at run time.
+**A note about the word slave used in this man page**: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated.
 
-[1]: http://code.google.com/p/google-perftools/
+**Modules generated sections**: Starting with Redis 6, modules can inject their info into the `INFO` command; these are excluded by default even when the `all` argument is provided (it will include a list of loaded modules but not their generated info fields). To get these you must use either the `modules` argument or `everything`.
diff --git a/commands/keys.md b/commands/keys.md
index aa1907688c..f51e0ea51a 100644
--- a/commands/keys.md
+++ b/commands/keys.md
@@ -1,38 +1,43 @@
-@complexity
-
-O(N) with N being the number of keys in the database, under the assumption that
-the key names in the database and the given pattern have limited length.
-
 Returns all keys matching `pattern`.
 
-While the time complexity for this operation is O(N), the constant
-times are fairly low. For example, Redis running on an entry level laptop can
-scan a 1 million key database in 40 milliseconds.
+While the time complexity for this operation is O(N), the constant times are
+fairly low.
+For example, Redis running on an entry level laptop can scan a 1 million key
+database in 40 milliseconds.
+
+**Warning**: consider `KEYS` as a command that should only be used in production
+environments with extreme care.
+It may ruin performance when it is executed against large databases.
+This command is intended for debugging and special operations, such as changing
+your keyspace layout.
+Don't use `KEYS` in your regular application code.
+If you're looking for a way to find keys in a subset of your keyspace, consider
+using `SCAN` or [sets][tdts].
 
-**Warning**: consider `KEYS` as a command that should only be used in
-production environments with extreme care. It may ruin performance when it is
-executed against large databases. This command is intended for debugging and
-special operations, such as changing your keyspace layout. Don't use `KEYS`
-in your regular application code. If you're looking for a way to find keys in
-a subset of your keyspace, consider using [sets](/topics/data-types#sets).
+[tdts]: /topics/data-types#sets
 
 Supported glob-style patterns:
 
 * `h?llo` matches `hello`, `hallo` and `hxllo`
 * `h*llo` matches `hllo` and `heeeello`
 * `h[ae]llo` matches `hello` and `hallo,` but not `hillo`
+* `h[^e]llo` matches `hallo`, `hbllo`, ... but not `hello`
+* `h[a-b]llo` matches `hallo` and `hbllo`
 
 Use `\` to escape special characters if you want to match them verbatim.
 
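+For example, assuming a key literally named `h?llo` exists (the key name is
+illustrative), escaping the `?` matches it verbatim:
+
+```
+> SET h?llo 1
+OK
+> KEYS h\?llo
+1) "h?llo"
+```
+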
-@return
-
-@multi-bulk-reply: list of keys matching `pattern`.
+When using [Redis Cluster](/docs/management/scaling/), the search is optimized for patterns that imply a single slot.
+If a pattern can only match keys of one slot,
+Redis only iterates over keys in that slot, rather than the whole database,
+when searching for keys matching the pattern.
+For example, with the pattern `{a}h*llo`, Redis would only try to match it with the keys in slot 15495, which hash tag `{a}` implies.
+To use pattern with hash tag, see [Hash tags](/docs/reference/cluster-spec/#hash-tags) in the Cluster specification for more information.
 
 @examples
 
-    @cli
-    MSET one 1 two 2 three 3 four 4
-    KEYS *o*
-    KEYS t??
-    KEYS *
-
+```cli
+MSET firstname Jack lastname Stuntman age 35
+KEYS *name*
+KEYS a??
+KEYS *
+```
diff --git a/commands/lastsave.md b/commands/lastsave.md
index 62f2de6e07..1e38f6f626 100644
--- a/commands/lastsave.md
+++ b/commands/lastsave.md
@@ -1,10 +1,4 @@
-
-
 Return the UNIX TIME of the last DB save executed with success.
-A client may check if a `BGSAVE` command succeeded reading the `LASTSAVE`
-value, then issuing a `BGSAVE` command and checking at regular intervals
-every N seconds if `LASTSAVE` changed.
-
-@return
-
-@integer-reply: an UNIX time stamp.
+A client may check if a `BGSAVE` command succeeded by reading the `LASTSAVE` value,
+then issuing a `BGSAVE` command and checking at regular intervals every N
+seconds if `LASTSAVE` changed. Redis considers the database saved successfully at startup.
diff --git a/commands/latency-doctor.md b/commands/latency-doctor.md
new file mode 100644
index 0000000000..6693eff2f5
--- /dev/null
+++ b/commands/latency-doctor.md
@@ -0,0 +1,41 @@
+The `LATENCY DOCTOR` command reports about different latency-related issues and advises about possible remedies.
+
+This command is the most powerful analysis tool in the latency monitoring
+framework, and is able to provide additional statistical data like the average
+period between latency spikes, the median deviation, and a human-readable
+analysis of the event. For certain events, like `fork`, additional information
+is provided, like the rate at which the system forks processes.
+
+This is the output you should post in the Redis mailing list if you are
+looking for help about latency-related issues.
+
+@examples
+
+```
+127.0.0.1:6379> latency doctor
+
+Dave, I have observed latency spikes in this Redis instance.
+You don't mind talking about it, do you Dave?
+
+1. command: 5 latency spikes (average 300ms, mean deviation 120ms,
+   period 73.40 sec). Worst all time event 500ms.
+
+I have a few advices for you:
+
+- Your current Slow Log configuration only logs events that are
+  slower than your configured latency monitor threshold. Please
+  use 'CONFIG SET slowlog-log-slower-than 1000'.
+- Check your Slow Log to understand what are the commands you are
+  running which are too slow to execute. Please check
+  http://redis.io/commands/slowlog for more information.
+- Deleting, expiring or evicting (because of maxmemory policy)
+  large objects is a blocking operation. If you have very large
+  objects that are often deleted, expired, or evicted, try to
+  fragment those objects into multiple smaller objects.
+```
+
+**Note:** the doctor has erratic psychological behaviors, so we recommend interacting with it carefully.
+
+For more information refer to the [Latency Monitoring Framework page][lm].
+
+[lm]: /topics/latency-monitor
diff --git a/commands/latency-graph.md b/commands/latency-graph.md
new file mode 100644
index 0000000000..285b2488ea
--- /dev/null
+++ b/commands/latency-graph.md
@@ -0,0 +1,60 @@
+Produces an ASCII-art style graph for the specified event.
+
+`LATENCY GRAPH` lets you intuitively understand the latency trend of an `event` via state-of-the-art visualization. It can be used for quickly grasping the situation before resorting to means such as parsing the raw data from `LATENCY HISTORY` or external tooling.
+
+Valid values for `event` are:
+* `active-defrag-cycle`
+* `aof-fsync-always`
+* `aof-stat`
+* `aof-rewrite-diff-write`
+* `aof-rename`
+* `aof-write`
+* `aof-write-active-child`
+* `aof-write-alone`
+* `aof-write-pending-fsync`
+* `command`
+* `expire-cycle`
+* `eviction-cycle`
+* `eviction-del`
+* `fast-command`
+* `fork`
+* `rdb-unlink-temp-file`
+
+@examples
+
+```
+127.0.0.1:6379> latency reset command
+(integer) 0
+127.0.0.1:6379> debug sleep .1
+OK
+127.0.0.1:6379> debug sleep .2
+OK
+127.0.0.1:6379> debug sleep .3
+OK
+127.0.0.1:6379> debug sleep .5
+OK
+127.0.0.1:6379> debug sleep .4
+OK
+127.0.0.1:6379> latency graph command
+command - high 500 ms, low 101 ms (all time high 500 ms)
+--------------------------------------------------------------------------------
+   #_
+  _||
+ _|||
+_||||
+
+11186
+542ss
+sss
+```
+
+The vertical labels under each graph column represent the amount of seconds,
+minutes, hours or days ago the event happened. For example "15s" means that the
+first graphed event happened 15 seconds ago.
+
+The graph is normalized in the min-max scale so that the zero (the underscore
+in the lower row) is the minimum, and a # in the higher row is the maximum.
+
+For more information refer to the [Latency Monitoring Framework page][lm].
+
+[lm]: /topics/latency-monitor
diff --git a/commands/latency-help.md b/commands/latency-help.md
new file mode 100644
index 0000000000..59f3999370
--- /dev/null
+++ b/commands/latency-help.md
@@ -0,0 +1,6 @@
+The `LATENCY HELP` command returns a helpful text describing the different
+subcommands.
+
+For more information refer to the [Latency Monitoring Framework page][lm].
+
+[lm]: /topics/latency-monitor
diff --git a/commands/latency-histogram.md b/commands/latency-histogram.md
new file mode 100644
index 0000000000..9d984bc9e4
--- /dev/null
+++ b/commands/latency-histogram.md
@@ -0,0 +1,36 @@
+`LATENCY HISTOGRAM` returns a cumulative distribution of commands' latencies in histogram format.
+
+By default, all available latency histograms are returned.
+You can filter the reply by providing specific command names.
+
+Each histogram consists of the following fields:
+
+* Command name
+* The total calls for that command
+* A map of time buckets:
+  * Each bucket represents a latency range
+  * Each bucket covers twice the previous bucket's range
+  * Empty buckets are excluded from the reply
+  * The tracked latencies are between 1 microsecond and roughly 1 second
+  * Everything above 1 second is considered +Inf
+  * At max, there will be log2(1,000,000,000)=30 buckets
+
+This command requires the extended latency monitoring feature to be enabled, which is the default.
+If you need to enable it, call `CONFIG SET latency-tracking yes`.
+
+To delete the latency histograms' data use the `CONFIG RESETSTAT` command.
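+
+For example, to verify that the feature is enabled and then clear the
+collected histograms (an illustrative session):
+
+```
+127.0.0.1:6379> CONFIG GET latency-tracking
+1) "latency-tracking"
+2) "yes"
+127.0.0.1:6379> CONFIG RESETSTAT
+OK
+```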
+ +@examples + +``` +127.0.0.1:6379> LATENCY HISTOGRAM set +1# "set" => + 1# "calls" => (integer) 100000 + 2# "histogram_usec" => + 1# (integer) 1 => (integer) 99583 + 2# (integer) 2 => (integer) 99852 + 3# (integer) 4 => (integer) 99914 + 4# (integer) 8 => (integer) 99940 + 5# (integer) 16 => (integer) 99968 + 6# (integer) 33 => (integer) 100000 +``` diff --git a/commands/latency-history.md b/commands/latency-history.md new file mode 100644 index 0000000000..4207727a1a --- /dev/null +++ b/commands/latency-history.md @@ -0,0 +1,37 @@ +The `LATENCY HISTORY` command returns the raw data of the `event`'s latency spikes time series. + +This is useful to an application that wants to fetch raw data in order to perform monitoring, display graphs, and so forth. + +The command will return up to 160 timestamp-latency pairs for the `event`. + +Valid values for `event` are: +* `active-defrag-cycle` +* `aof-fsync-always` +* `aof-stat` +* `aof-rewrite-diff-write` +* `aof-rename` +* `aof-write` +* `aof-write-active-child` +* `aof-write-alone` +* `aof-write-pending-fsync` +* `command` +* `expire-cycle` +* `eviction-cycle` +* `eviction-del` +* `fast-command` +* `fork` +* `rdb-unlink-temp-file` + +@examples + +``` +127.0.0.1:6379> latency history command +1) 1) (integer) 1405067822 + 2) (integer) 251 +2) 1) (integer) 1405067941 + 2) (integer) 1001 +``` + +For more information refer to the [Latency Monitoring Framework page][lm]. + +[lm]: /topics/latency-monitor diff --git a/commands/latency-latest.md b/commands/latency-latest.md new file mode 100644 index 0000000000..26a1e4d4ed --- /dev/null +++ b/commands/latency-latest.md @@ -0,0 +1,30 @@ +The `LATENCY LATEST` command reports the latest latency events logged. + +Each reported event has the following fields: + +* Event name. +* Unix timestamp of the latest latency spike for the event. +* Latest event latency in millisecond. +* All-time maximum latency for this event. + +"All-time" means the maximum latency since the Redis instance was +started, or the time that events were reset `LATENCY RESET`. + +@examples + +``` +127.0.0.1:6379> debug sleep 1 +OK +(1.00s) +127.0.0.1:6379> debug sleep .25 +OK +127.0.0.1:6379> latency latest +1) 1) "command" + 2) (integer) 1405067976 + 3) (integer) 251 + 4) (integer) 1001 +``` + +For more information refer to the [Latency Monitoring Framework page][lm]. + +[lm]: /topics/latency-monitor diff --git a/commands/latency-reset.md b/commands/latency-reset.md new file mode 100644 index 0000000000..f2d87cc121 --- /dev/null +++ b/commands/latency-reset.md @@ -0,0 +1,30 @@ +The `LATENCY RESET` command resets the latency spikes time series of all, or only some, events. + +When the command is called without arguments, it resets all the +events, discarding the currently logged latency spike events, and resetting +the maximum event time register. + +It is possible to reset only specific events by providing the `event` names +as arguments. + +Valid values for `event` are: +* `active-defrag-cycle` +* `aof-fsync-always` +* `aof-stat` +* `aof-rewrite-diff-write` +* `aof-rename` +* `aof-write` +* `aof-write-active-child` +* `aof-write-alone` +* `aof-write-pending-fsync` +* `command` +* `expire-cycle` +* `eviction-cycle` +* `eviction-del` +* `fast-command` +* `fork` +* `rdb-unlink-temp-file` + +For more information refer to the [Latency Monitoring Framework page][lm]. 
+
+[lm]: /topics/latency-monitor
diff --git a/commands/latency.md b/commands/latency.md
new file mode 100644
index 0000000000..fd5c95de26
--- /dev/null
+++ b/commands/latency.md
@@ -0,0 +1,3 @@
+This is a container command for latency diagnostics commands.
+
+To see the list of available commands you can call `LATENCY HELP`.
\ No newline at end of file
diff --git a/commands/lcs.md b/commands/lcs.md
new file mode 100644
index 0000000000..09e9048534
--- /dev/null
+++ b/commands/lcs.md
@@ -0,0 +1,72 @@
+
+The LCS command implements the longest common subsequence algorithm. Note that this is different from the longest common string algorithm, since matching characters in the strings do not need to be contiguous.
+
+For instance, the LCS between "foo" and "fao" is "fo", since scanning the two strings from left to right, the longest common set of characters is composed of the first "f" and then the "o".
+
+LCS is very useful in order to evaluate how similar two strings are. Strings can represent many things. For instance if two strings are DNA sequences, the LCS will provide a measure of similarity between the two DNA sequences. If the strings represent some text edited by some user, the LCS could represent how different the new text is compared to the old one, and so forth.
+
+Note that this algorithm runs in `O(N*M)` time, where N is the length of the first string and M is the length of the second string. So either spin up a different Redis instance in order to run this algorithm, or make sure to run it against very small strings.
+
+```
+> MSET key1 ohmytext key2 mynewtext
+OK
+> LCS key1 key2
+"mytext"
+```
+
+Sometimes we need just the length of the match:
+
+```
+> LCS key1 key2 LEN
+(integer) 6
+```
+
+However, what is often very useful is to know the match position in each string:
+
+```
+> LCS key1 key2 IDX
+1) "matches"
+2) 1) 1) 1) (integer) 4
+         2) (integer) 7
+      2) 1) (integer) 5
+         2) (integer) 8
+   2) 1) 1) (integer) 2
+         2) (integer) 3
+      2) 1) (integer) 0
+         2) (integer) 1
+3) "len"
+4) (integer) 6
+```
+
+Matches are produced from the last one to the first one, since this is how
+the algorithm works, and it is more efficient to emit things in the same order.
+The above array means that the first match (second element of the array)
+is between positions 2-3 of the first string and 0-1 of the second.
+Then there is another match between 4-7 and 5-8.
+
+To restrict the list of matches to the ones of a given minimal length:
+
+```
+> LCS key1 key2 IDX MINMATCHLEN 4
+1) "matches"
+2) 1) 1) 1) (integer) 4
+         2) (integer) 7
+      2) 1) (integer) 5
+         2) (integer) 8
+3) "len"
+4) (integer) 6
+```
+
+Finally, to also have the match length:
+
+```
+> LCS key1 key2 IDX MINMATCHLEN 4 WITHMATCHLEN
+1) "matches"
+2) 1) 1) 1) (integer) 4
+         2) (integer) 7
+      2) 1) (integer) 5
+         2) (integer) 8
+      3) (integer) 4
+3) "len"
+4) (integer) 6
+```
diff --git a/commands/lindex.md b/commands/lindex.md
index e7d00a1850..0f6438e42e 100644
--- a/commands/lindex.md
+++ b/commands/lindex.md
@@ -1,27 +1,18 @@
-@complexity
-
-O(N) where N is the number of elements to traverse to get to the element
-at `index`. This makes asking for the first or the last
-element of the list O(1).
-
 Returns the element at index `index` in the list stored at `key`.
-The index is zero-based, so `0` means the first element, `1` the second
-element and so on. Negative indices can be used to designate elements
-starting at the tail of the list. Here, `-1` means the last element, `-2` means
-the penultimate and so forth.
+The index is zero-based, so `0` means the first element, `1` the second element +and so on. +Negative indices can be used to designate elements starting at the tail of the +list. +Here, `-1` means the last element, `-2` means the penultimate and so forth. When the value at `key` is not a list, an error is returned. -@return - -@bulk-reply: the requested element, or `nil` when `index` is out of range. - @examples - @cli - LPUSH mylist "World" - LPUSH mylist "Hello" - LINDEX mylist 0 - LINDEX mylist -1 - LINDEX mylist 3 - +```cli +LPUSH mylist "World" +LPUSH mylist "Hello" +LINDEX mylist 0 +LINDEX mylist -1 +LINDEX mylist 3 +``` diff --git a/commands/linsert.md b/commands/linsert.md index 801a996bbd..fdbdaf9c8c 100644 --- a/commands/linsert.md +++ b/commands/linsert.md @@ -1,27 +1,16 @@ -@complexity - -O(N) where N is the number of elements to traverse before seeing the value -`pivot`. This means that inserting somewhere on the left end on the list (head) -can be considered O(1) and inserting somewhere on the right end (tail) is O(N). - -Inserts `value` in the list stored at `key` either before or after the -reference value `pivot`. +Inserts `element` in the list stored at `key` either before or after the reference +value `pivot`. When `key` does not exist, it is considered an empty list and no operation is performed. An error is returned when `key` exists but does not hold a list value. -@return - -@integer-reply: the length of the list after the insert operation, or `-1` when -the value `pivot` was not found. - @examples - @cli - RPUSH mylist "Hello" - RPUSH mylist "World" - LINSERT mylist BEFORE "World" "There" - LRANGE mylist 0 -1 - +```cli +RPUSH mylist "Hello" +RPUSH mylist "World" +LINSERT mylist BEFORE "World" "There" +LRANGE mylist 0 -1 +``` diff --git a/commands/llen.md b/commands/llen.md index 4bcabdfc77..4c9a7862ee 100644 --- a/commands/llen.md +++ b/commands/llen.md @@ -1,20 +1,11 @@ -@complexity - -O(1) - - Returns the length of the list stored at `key`. If `key` does not exist, it is interpreted as an empty list and `0` is returned. An error is returned when the value stored at `key` is not a list. -@return - -@integer-reply: the length of the list at `key`. - @examples - @cli - LPUSH mylist "World" - LPUSH mylist "Hello" - LLEN mylist - +```cli +LPUSH mylist "World" +LPUSH mylist "Hello" +LLEN mylist +``` diff --git a/commands/lmove.md b/commands/lmove.md new file mode 100644 index 0000000000..7dd02fa4e2 --- /dev/null +++ b/commands/lmove.md @@ -0,0 +1,77 @@ +Atomically returns and removes the first/last element (head/tail depending on +the `wherefrom` argument) of the list stored at `source`, and pushes the +element at the first/last element (head/tail depending on the `whereto` +argument) of the list stored at `destination`. + +For example: consider `source` holding the list `a,b,c`, and `destination` +holding the list `x,y,z`. +Executing `LMOVE source destination RIGHT LEFT` results in `source` holding +`a,b` and `destination` holding `c,x,y,z`. + +If `source` does not exist, the value `nil` is returned and no operation is +performed. +If `source` and `destination` are the same, the operation is equivalent to +removing the first/last element from the list and pushing it as first/last +element of the list, so it can be considered as a list rotation command (or a +no-op if `wherefrom` is the same as `whereto`). + +This command comes in place of the now deprecated `RPOPLPUSH`. Doing +`LMOVE RIGHT LEFT` is equivalent. 
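+For reference, a minimal sketch of that equivalence (key names are placeholders): + +``` +RPOPLPUSH source destination +LMOVE source destination RIGHT LEFT +``` + +Both calls pop the tail of `source` and push the popped element onto the head of `destination`.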
+ +@examples + +```cli +RPUSH mylist "one" +RPUSH mylist "two" +RPUSH mylist "three" +LMOVE mylist myotherlist RIGHT LEFT +LMOVE mylist myotherlist LEFT RIGHT +LRANGE mylist 0 -1 +LRANGE myotherlist 0 -1 +``` + +## Pattern: Reliable queue + +Redis is often used as a messaging server to implement processing of background +jobs or other kinds of messaging tasks. +A simple form of queue is often obtained by pushing values into a list on the +producer side, and waiting for these values on the consumer side using `RPOP` +(with polling), or `BRPOP` if the client is better served by a blocking +operation. + +However in this context the obtained queue is not _reliable_ as messages can +be lost, for example if there is a network problem or if the consumer +crashes just after the message is received but before it is processed. + +`LMOVE` (or `BLMOVE` for the blocking variant) offers a way to avoid +this problem: the consumer fetches the message and at the same time pushes it +into a _processing_ list. +It will use the `LREM` command in order to remove the message from the +_processing_ list once the message has been processed. + +An additional client may monitor the _processing_ list for items that remain +there for too long, and push those timed-out items into the queue +again if needed. + +## Pattern: Circular list + +Using `LMOVE` with the same source and destination key, a client can visit +all the elements of an N-element list, one after the other, in O(N) without +transferring the full list from the server to the client using a single `LRANGE` +operation. + +The above pattern works even in the following conditions: + +* There are multiple clients rotating the list: they'll fetch different + elements, until all the elements of the list are visited, and the process + restarts. +* Other clients are actively pushing new items at the end of the list. + +The above makes it very simple to implement a system where a set of items must +be processed by N workers continuously as fast as possible. +An example is a monitoring system that must check that a set of web sites are +reachable, with the smallest delay possible, using a number of parallel workers. + +Note that this implementation of workers is trivially scalable and reliable, +because even if a message is lost the item is still in the queue and will be +processed at the next iteration. diff --git a/commands/lmpop.md b/commands/lmpop.md new file mode 100644 index 0000000000..ee053e2378 --- /dev/null +++ b/commands/lmpop.md @@ -0,0 +1,28 @@ +Pops one or more elements from the first non-empty list key from the list of provided key names. + +`LMPOP` and `BLMPOP` are similar to the following, more limited, commands: + +- `LPOP` or `RPOP` which take only one key, and can return multiple elements. +- `BLPOP` or `BRPOP` which take multiple keys, but return only one element from just one key. + +See `BLMPOP` for the blocking variant of this command. + +Elements are popped from either the left or right of the first non-empty list based on the passed argument. +The number of returned elements is limited to the lower of the non-empty list's length and the count argument (which defaults to 1).
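+As a sketch of the general form (the key names here are placeholders): + +``` +LMPOP numkeys key [key ...] <LEFT | RIGHT> [COUNT count] +``` + +For example, `LMPOP 2 mylist mylist2 LEFT COUNT 3` pops up to three elements from the head of the first non-empty list among `mylist` and `mylist2`.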
+ +@examples + +```cli +LMPOP 2 non1 non2 LEFT COUNT 10 +LPUSH mylist "one" "two" "three" "four" "five" +LMPOP 1 mylist LEFT +LRANGE mylist 0 -1 +LMPOP 1 mylist RIGHT COUNT 10 +LPUSH mylist "one" "two" "three" "four" "five" +LPUSH mylist2 "a" "b" "c" "d" "e" +LMPOP 2 mylist mylist2 right count 3 +LRANGE mylist 0 -1 +LMPOP 2 mylist mylist2 right count 5 +LMPOP 2 mylist mylist2 right count 10 +EXISTS mylist mylist2 +``` diff --git a/commands/lolwut.md b/commands/lolwut.md new file mode 100644 index 0000000000..027c9b177e --- /dev/null +++ b/commands/lolwut.md @@ -0,0 +1,25 @@ +The LOLWUT command displays the Redis version: however as a side effect of +doing so, it also creates a piece of generative computer art that is different +with each version of Redis. The command was introduced in Redis 5 and announced +with this [blog post](http://antirez.com/news/123). + +By default the `LOLWUT` command will display the piece corresponding to the +current Redis version, however it is possible to display a specific version +using the following form: + + LOLWUT VERSION 5 ... other optional arguments ... + +Of course the "5" above is an example. Each LOLWUT version takes a different +set of arguments in order to change the output. The user is encouraged to +play with it to discover how the output changes adding more numerical +arguments. + +LOLWUT wants to be a reminder that there is more in programming than just +putting some code together in order to create something useful. Every +LOLWUT version should have the following properties: + +1. It should display some computer art. There are no limits as long as the output works well in a normal terminal display. However the output should not be limited to graphics (like LOLWUT 5 and 6 actually do), but can be generative poetry and other non graphical things. +2. LOLWUT output should be completely useless. Displaying some useful Redis internal metrics does not count as a valid LOLWUT. +3. LOLWUT output should be fast to generate so that the command can be called in production instances without issues. It should remain fast even when the user experiments with odd parameters. +4. LOLWUT implementations should be safe and carefully checked for security, and resist to untrusted inputs if they take arguments. +5. LOLWUT must always display the Redis version at the end. diff --git a/commands/lpop.md b/commands/lpop.md index ee88737580..ed7dc9cf4e 100644 --- a/commands/lpop.md +++ b/commands/lpop.md @@ -1,20 +1,14 @@ -@complexity +Removes and returns the first elements of the list stored at `key`. -O(1) - - -Removes and returns the first element of the list stored at `key`. - -@return - -@bulk-reply: the value of the first element, or `nil` when `key` does not exist. +By default, the command pops a single element from the beginning of the list. +When provided with the optional `count` argument, the reply will consist of up +to `count` elements, depending on the list's length. @examples - @cli - RPUSH mylist "one" - RPUSH mylist "two" - RPUSH mylist "three" - LPOP mylist - LRANGE mylist 0 -1 - +```cli +RPUSH mylist "one" "two" "three" "four" "five" +LPOP mylist +LPOP mylist 2 +LRANGE mylist 0 -1 +``` diff --git a/commands/lpos.md b/commands/lpos.md new file mode 100644 index 0000000000..b4485acb56 --- /dev/null +++ b/commands/lpos.md @@ -0,0 +1,66 @@ +The command returns the index of matching elements inside a Redis list. +By default, when no options are given, it will scan the list from head to tail, +looking for the first match of "element". 
If the element is found, its index (the zero-based position in the list) is returned. Otherwise, if no match is found, `nil` is returned. + +``` +> RPUSH mylist a b c 1 2 3 c c +> LPOS mylist c +2 +``` + +The optional arguments and options can modify the command's behavior. +The `RANK` option specifies the "rank" of the first element to return, in case there are multiple matches. A rank of 1 means to return the first match, 2 to return the second match, and so forth. + +For instance, in the above example the element "c" is present multiple times; if I want the index of the second match, I'll write: + +``` +> LPOS mylist c RANK 2 +6 +``` + +That is, the second occurrence of "c" is at position 6. +A negative "rank" as the `RANK` argument tells `LPOS` to invert the search direction, starting from the tail to the head. + +So, if we want the first match starting from the tail of the list: + +``` +> LPOS mylist c RANK -1 +7 +``` + +Note that the indexes are still reported in the "natural" way, that is, considering the first element starting from the head of the list at index 0, the next element at index 1, and so forth. This basically means that the returned indexes are stable whether the rank is positive or negative. + +Sometimes we want to return not just the Nth matching element, but the position of all the first N matching elements. This can be achieved using the `COUNT` option. + +``` +> LPOS mylist c COUNT 2 +[2,6] +``` + +We can combine `COUNT` and `RANK`, so that `COUNT` will try to return up to the specified number of matches, but starting from the Nth match, as specified by the `RANK` option. + +``` +> LPOS mylist c RANK -1 COUNT 2 +[7,6] +``` + +When `COUNT` is used, it is possible to specify 0 as the number of matches, as a way to tell the command we want all the matches found returned as an array of indexes. This is better than giving a very large `COUNT` option because it is more general. + +``` +> LPOS mylist c COUNT 0 +[2,6,7] +``` + +When `COUNT` is used and no match is found, an empty array is returned. However when `COUNT` is not used and there are no matches, the command returns `nil`. + +Finally, the `MAXLEN` option tells the command to compare the provided element only with a given maximum number of list items. So for instance specifying `MAXLEN 1000` will make sure that the command performs only 1000 comparisons, effectively running the algorithm on a subset of the list (the first part or the last part depending on whether we use a positive or negative rank). This is useful to limit the maximum complexity of the command. It is also useful when we expect the match to be found very early, but want to be sure that in case this is not true, the command does not take too much time to run. + +When `MAXLEN` is used, it is possible to specify 0 as the maximum number of comparisons, as a way to tell the command we want unlimited comparisons. This is better than giving a very large `MAXLEN` option because it is more general. + +@examples + +```cli +RPUSH mylist a b c d 1 2 3 4 3 3 3 +LPOS mylist 3 +LPOS mylist 3 COUNT 0 RANK 2 +``` diff --git a/commands/lpush.md b/commands/lpush.md index aaeb51debe..c1d198bad9 100644 --- a/commands/lpush.md +++ b/commands/lpush.md @@ -1,27 +1,19 @@ -@complexity - -O(1) - - Insert all the specified values at the head of the list stored at `key`. -If `key` does not exist, it is created as empty list before performing -the push operations. +If `key` does not exist, it is created as an empty list before performing the push +operations.
When `key` holds a value that is not a list, an error is returned. -It is possible to push multiple elements using a single command call just specifying multiple arguments at the end of the command. Elements are inserted one after the other to the head of the list, from the leftmost element to the rightmost element. So for instance the command `LPUSH mylist a b c` will result into a list containing `c` as first element, `b` as second element and `a` as third element. - -@return - -@integer-reply: the length of the list after the push operations. - -@history - -* `>= 2.4`: Accepts multiple `value` arguments. In Redis versions older than 2.4 it was possible to push a single value per command. +It is possible to push multiple elements using a single command call just +specifying multiple arguments at the end of the command. +Elements are inserted one after the other to the head of the list, from the +leftmost element to the rightmost element. +So for instance the command `LPUSH mylist a b c` will result in a list +containing `c` as first element, `b` as second element and `a` as third element. @examples - @cli - LPUSH mylist "world" - LPUSH mylist "hello" - LRANGE mylist 0 -1 - +```cli +LPUSH mylist "world" +LPUSH mylist "hello" +LRANGE mylist 0 -1 +``` diff --git a/commands/lpushx.md b/commands/lpushx.md index acf61c7bbf..69182df8a8 100644 --- a/commands/lpushx.md +++ b/commands/lpushx.md @@ -1,22 +1,14 @@ -@complexity - -O(1) - - -Inserts `value` at the head of the list stored at `key`, only if `key` -already exists and holds a list. In contrary to `LPUSH`, no operation will -be performed when `key` does not yet exist. - -@return - -@integer-reply: the length of the list after the push operation. +Inserts specified values at the head of the list stored at `key`, only if `key` +already exists and holds a list. +Contrary to `LPUSH`, no operation will be performed when `key` does not yet +exist. @examples - @cli - LPUSH mylist "World" - LPUSHX mylist "Hello" - LPUSHX myotherlist "Hello" - LRANGE mylist 0 -1 - LRANGE myotherlist 0 -1 - +```cli +LPUSH mylist "World" +LPUSHX mylist "Hello" +LPUSHX myotherlist "Hello" +LRANGE mylist 0 -1 +LRANGE myotherlist 0 -1 +``` diff --git a/commands/lrange.md b/commands/lrange.md index d570458c56..c59de57e1b 100644 --- a/commands/lrange.md +++ b/commands/lrange.md @@ -1,43 +1,36 @@ -@complexity - -O(S+N) where S is the `start` offset and N is the number of elements in the -specified range. - -Returns the specified elements of the list stored at `key`. The offsets -`start` and `stop` are zero-based indexes, with `0` being the first element of -the list (the head of the list), `1` being the next element and so on. +Returns the specified elements of the list stored at `key`. +The offsets `start` and `stop` are zero-based indexes, with `0` being the first +element of the list (the head of the list), `1` being the next element and so +on. These offsets can also be negative numbers indicating offsets starting at the -end of the list. For example, `-1` is the last element of the list, `-2` the -penultimate, and so on. +end of the list. +For example, `-1` is the last element of the list, `-2` the penultimate, and so +on. ## Consistency with range functions in various programming languages Note that if you have a list of numbers from 0 to 100, `LRANGE list 0 10` will -return 11 elements, that is, the rightmost item is included.
This **may or may -not** be consistent with behavior of range-related functions in your -programming language of choice (think Ruby's `Range.new`, `Array#slice` or -Python's `range()` function). +return 11 elements, that is, the rightmost item is included. +This **may or may not** be consistent with behavior of range-related functions +in your programming language of choice (think Ruby's `Range.new`, `Array#slice` +or Python's `range()` function). ## Out-of-range indexes -Out of range indexes will not produce an error. If `start` is larger than the -end of the list, an empty list is returned. If `stop` is -larger than the actual end of the list, Redis will treat it like the last -element of the list. - -@return - -@multi-bulk-reply: list of elements in the specified range. +Out of range indexes will not produce an error. +If `start` is larger than the end of the list, an empty list is returned. +If `stop` is larger than the actual end of the list, Redis will treat it like +the last element of the list. @examples - @cli - RPUSH mylist "one" - RPUSH mylist "two" - RPUSH mylist "three" - LRANGE mylist 0 0 - LRANGE mylist -3 2 - LRANGE mylist -100 100 - LRANGE mylist 5 10 - +```cli +RPUSH mylist "one" +RPUSH mylist "two" +RPUSH mylist "three" +LRANGE mylist 0 0 +LRANGE mylist -3 2 +LRANGE mylist -100 100 +LRANGE mylist 5 10 +``` diff --git a/commands/lrem.md b/commands/lrem.md index 42c5fc64ab..dd2f7e7fbc 100644 --- a/commands/lrem.md +++ b/commands/lrem.md @@ -1,32 +1,24 @@ -@complexity +Removes the first `count` occurrences of elements equal to `element` from the list +stored at `key`. +The `count` argument influences the operation in the following ways: -O(N) where N is the length of the list. - -Removes the first `count` occurrences of elements equal to `value` from the -list stored at `key`. The `count` argument influences the operation in the -following ways: - -* `count > 0`: Remove elements equal to `value` moving from head to tail. -* `count < 0`: Remove elements equal to `value` moving from tail to head. -* `count = 0`: Remove all elements equal to `value`. +* `count > 0`: Remove elements equal to `element` moving from head to tail. +* `count < 0`: Remove elements equal to `element` moving from tail to head. +* `count = 0`: Remove all elements equal to `element`. For example, `LREM list -2 "hello"` will remove the last two occurrences of `"hello"` in the list stored at `list`. -Note that non-existing keys are treated like empty lists, so when `key` does -not exist, the command will always return `0`. - -@return - -@integer-reply: the number of removed elements. +Note that non-existing keys are treated like empty lists, so when `key` does not +exist, the command will always return `0`. @examples - @cli - RPUSH mylist "hello" - RPUSH mylist "hello" - RPUSH mylist "foo" - RPUSH mylist "hello" - LREM mylist -2 "hello" - LRANGE mylist 0 -1 - +```cli +RPUSH mylist "hello" +RPUSH mylist "hello" +RPUSH mylist "foo" +RPUSH mylist "hello" +LREM mylist -2 "hello" +LRANGE mylist 0 -1 +``` diff --git a/commands/lset.md b/commands/lset.md index 87cae024a7..c6fb635758 100644 --- a/commands/lset.md +++ b/commands/lset.md @@ -1,24 +1,15 @@ -@complexity - -O(N) where N is the length of the list. Setting either the first or the last -element of the list is O(1). - -Sets the list element at `index` to `value`. For more information on the -`index` argument, see `LINDEX`. +Sets the list element at `index` to `element`. +For more information on the `index` argument, see `LINDEX`. 
An error is returned for out of range indexes. -@return - -@status-reply - @examples - @cli - RPUSH mylist "one" - RPUSH mylist "two" - RPUSH mylist "three" - LSET mylist 0 "four" - LSET mylist -2 "five" - LRANGE mylist 0 -1 - +```cli +RPUSH mylist "one" +RPUSH mylist "two" +RPUSH mylist "three" +LSET mylist 0 "four" +LSET mylist -2 "five" +LRANGE mylist 0 -1 +``` diff --git a/commands/ltrim.md b/commands/ltrim.md index de8f564e34..61ae3f6e26 100644 --- a/commands/ltrim.md +++ b/commands/ltrim.md @@ -1,10 +1,7 @@ -@complexity - -O(N) where N is the number of elements to be removed by the operation. - Trim an existing list so that it will contain only the specified range of -elements specified. Both `start` and `stop` are zero-based indexes, where `0` -is the first element of the list (the head), `1` the next element and so on. +elements specified. +Both `start` and `stop` are zero-based indexes, where `0` is the first element +of the list (the head), `1` the next element and so on. For example: `LTRIM foobar 0 2` will modify the list stored at `foobar` so that only the first three elements of the list will remain. @@ -15,30 +12,31 @@ element and so on. Out of range indexes will not produce an error: if `start` is larger than the end of the list, or `start > end`, the result will be an empty list (which -causes `key` to be removed). If `end` is larger than the end of the list, -Redis will treat it like the last element of the list. +causes `key` to be removed). +If `end` is larger than the end of the list, Redis will treat it like the last +element of the list. -A common use of `LTRIM` is together with `LPUSH`/`RPUSH`. For example: +A common use of `LTRIM` is together with `LPUSH` / `RPUSH`. +For example: - LPUSH mylist someelement - LTRIM mylist 0 99 +``` +LPUSH mylist someelement +LTRIM mylist 0 99 +``` This pair of commands will push a new element on the list, while making sure -that the list will not grow larger than 100 elements. This is very useful when -using Redis to store logs for example. It is important to note that when used -in this way `LTRIM` is an O(1) operation because in the average case just one -element is removed from the tail of the list. - -@return - -@status-reply +that the list will not grow larger than 100 elements. +This is very useful when using Redis to store logs for example. +It is important to note that when used in this way `LTRIM` is an O(1) operation +because in the average case just one element is removed from the tail of the +list. @examples - @cli - RPUSH mylist "one" - RPUSH mylist "two" - RPUSH mylist "three" - LTRIM mylist 1 -1 - LRANGE mylist 0 -1 - +```cli +RPUSH mylist "one" +RPUSH mylist "two" +RPUSH mylist "three" +LTRIM mylist 1 -1 +LRANGE mylist 0 -1 +``` diff --git a/commands/memory-doctor.md b/commands/memory-doctor.md new file mode 100644 index 0000000000..8a61604542 --- /dev/null +++ b/commands/memory-doctor.md @@ -0,0 +1,2 @@ +The `MEMORY DOCTOR` command reports about different memory-related issues that +the Redis server experiences, and advises about possible remedies. diff --git a/commands/memory-help.md b/commands/memory-help.md new file mode 100644 index 0000000000..1b86c43a32 --- /dev/null +++ b/commands/memory-help.md @@ -0,0 +1,2 @@ +The `MEMORY HELP` command returns a helpful text describing the different +subcommands. 
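+A hypothetical session, with output abridged to the subcommands documented here (the exact wording and ordering vary by Redis version): + +``` +127.0.0.1:6379> MEMORY HELP +1) MEMORY DOCTOR +2) MEMORY MALLOC-STATS +3) MEMORY PURGE +4) MEMORY STATS +5) MEMORY USAGE <key> [SAMPLES <count>] +```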
diff --git a/commands/memory-malloc-stats.md b/commands/memory-malloc-stats.md new file mode 100644 index 0000000000..f0b645ca38 --- /dev/null +++ b/commands/memory-malloc-stats.md @@ -0,0 +1,5 @@ +The `MEMORY MALLOC-STATS` command provides an internal statistics report from +the memory allocator. + +This command is currently implemented only when using **jemalloc** as an +allocator, and evaluates to a benign NOOP for all others. diff --git a/commands/memory-purge.md b/commands/memory-purge.md new file mode 100644 index 0000000000..947a4fea45 --- /dev/null +++ b/commands/memory-purge.md @@ -0,0 +1,5 @@ +The `MEMORY PURGE` command attempts to purge dirty pages so these can be +reclaimed by the allocator. + +This command is currently implemented only when using **jemalloc** as an +allocator, and evaluates to a benign NOOP for all others. diff --git a/commands/memory-stats.md b/commands/memory-stats.md new file mode 100644 index 0000000000..3d88350dad --- /dev/null +++ b/commands/memory-stats.md @@ -0,0 +1,57 @@ +The `MEMORY STATS` command returns an @array-reply about the memory usage of the +server. + +The information about memory usage is provided as metrics and their respective +values. The following metrics are reported: + +* `peak.allocated`: Peak memory consumed by Redis in bytes (see `INFO`'s + `used_memory_peak`) +* `total.allocated`: Total number of bytes allocated by Redis using its + allocator (see `INFO`'s `used_memory`) +* `startup.allocated`: Initial amount of memory consumed by Redis at startup + in bytes (see `INFO`'s `used_memory_startup`) +* `replication.backlog`: Size in bytes of the replication backlog (see + `INFO`'s `repl_backlog_active`) +* `clients.slaves`: The total size in bytes of all replicas overheads (output + and query buffers, connection contexts) +* `clients.normal`: The total size in bytes of all clients overheads (output + and query buffers, connection contexts) +* `cluster.links`: Memory usage by cluster links (Added in Redis 7.0, see `INFO`'s `mem_cluster_links`). +* `aof.buffer`: The summed size in bytes of AOF related buffers. +* `lua.caches`: the summed size in bytes of the overheads of the Lua scripts' + caches +* `functions.caches`: the summed size in bytes of the overheads of the Function scripts' + caches +* `dbXXX`: For each of the server's databases, the overheads of the main and + expiry dictionaries (`overhead.hashtable.main` and + `overhead.hashtable.expires`, respectively) are reported in bytes +* `overhead.db.hashtable.lut`: Total overhead of dictionary buckets in databases (Added in Redis 8.0) +* `overhead.db.hashtable.rehashing`: Temporary memory overhead of database dictionaries currently being rehashed (Added in Redis 8.0) +* `overhead.total`: The sum of all overheads, i.e. `startup.allocated`, + `replication.backlog`, `clients.slaves`, `clients.normal`, `aof.buffer` and + those of the internal data structures that are used in managing the + Redis keyspace (see `INFO`'s `used_memory_overhead`) +* `db.dict.rehashing.count`: Number of DB dictionaries currently being rehashed (Added in Redis 8.0) +* `keys.count`: The total number of keys stored across all databases in the + server +* `keys.bytes-per-key`: The ratio between `dataset.bytes` and `keys.count` +* `dataset.bytes`: The size in bytes of the dataset, i.e. 
`overhead.total` + subtracted from `total.allocated` (see `INFO`'s `used_memory_dataset`) +* `dataset.percentage`: The percentage of `dataset.bytes` out of the total + memory usage +* `peak.percentage`: The percentage of `total.allocated` out of + `peak.allocated` +* `allocator.allocated`: See `INFO`'s `allocator_allocated` +* `allocator.active`: See `INFO`'s `allocator_active` +* `allocator.resident`: See `INFO`'s `allocator_resident` +* `allocator.muzzy`: See `INFO`'s `allocator_muzzy` +* `allocator-fragmentation.ratio`: See `INFO`'s `allocator_frag_ratio` +* `allocator-fragmentation.bytes`: See `INFO`'s `allocator_frag_bytes` +* `allocator-rss.ratio`: See `INFO`'s `allocator_rss_ratio` +* `allocator-rss.bytes`: See `INFO`'s `allocator_rss_bytes` +* `rss-overhead.ratio`: See `INFO`'s `rss_overhead_ratio` +* `rss-overhead.bytes`: See `INFO`'s `rss_overhead_bytes` +* `fragmentation`: See `INFO`'s `mem_fragmentation_ratio` +* `fragmentation.bytes`: See `INFO`'s `mem_fragmentation_bytes` + +**A note about the word slave used in this man page**: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated. diff --git a/commands/memory-usage.md b/commands/memory-usage.md new file mode 100644 index 0000000000..b436292373 --- /dev/null +++ b/commands/memory-usage.md @@ -0,0 +1,39 @@ +The `MEMORY USAGE` command reports the number of bytes that a key and its value +require to be stored in RAM. + +The reported usage is the total of memory allocations for data and +administrative overheads that a key and its value require. + +For nested data types, the optional `SAMPLES` option can be provided, where +`count` is the number of sampled nested values. The samples are averaged to estimate the total size. +By default, this option is set to `5`. To sample all of the nested values, use `SAMPLES 0`. + +@examples + +With Redis v7.2.0 64-bit and **jemalloc**, the empty string measures as follows: + +``` +> SET "" "" +OK +> MEMORY USAGE "" +(integer) 56 +``` + +These bytes are pure overhead at the moment as no actual data is stored, and are +used for maintaining the internal data structures of the server (including internal allocator fragmentation). Longer keys and +values show asymptotically linear usage. + +``` +> SET foo bar +OK +> MEMORY USAGE foo +(integer) 56 +> SET foo2 mybar +OK +> MEMORY USAGE foo2 +(integer) 64 +> SET foo3 0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789 +OK +> MEMORY USAGE foo3 +(integer) 160 +``` diff --git a/commands/memory.md b/commands/memory.md new file mode 100644 index 0000000000..46bde8da69 --- /dev/null +++ b/commands/memory.md @@ -0,0 +1,3 @@ +This is a container command for memory introspection and management commands. + +To see the list of available commands you can call `MEMORY HELP`. diff --git a/commands/mget.md b/commands/mget.md index b3ee07cbba..97de1cc61d 100644 --- a/commands/mget.md +++ b/commands/mget.md @@ -1,20 +1,12 @@ -@complexity - -O(N) where N is the number of keys to retrieve - - -Returns the values of all specified keys. For every key that does not hold a string value -or does not exist, the special value `nil` is returned. +Returns the values of all specified keys. +For every key that does not hold a string value or does not exist, the special +value `nil` is returned.
+ Because of this, the operation never fails. -@return - -@multi-bulk-reply: list of values at the specified keys. - @examples - @cli - SET key1 "Hello" - SET key2 "World" - MGET key1 key2 nonexisting - +```cli +SET key1 "Hello" +SET key2 "World" +MGET key1 key2 nonexisting +``` diff --git a/commands/migrate.md b/commands/migrate.md new file mode 100644 index 0000000000..1bc5913302 --- /dev/null +++ b/commands/migrate.md @@ -0,0 +1,67 @@ +Atomically transfer a key from a source Redis instance to a destination Redis +instance. +On success the key is deleted from the original instance and is guaranteed to +exist in the target instance. + +The command is atomic and blocks the two instances for the time required to +transfer the key. At any given time the key will appear to exist in one +instance or the other, unless a timeout error occurs. In 3.2 and +above, multiple keys can be pipelined in a single call to `MIGRATE` by passing +the empty string ("") as key and adding the `!KEYS` clause. + +The command internally uses `DUMP` to generate the serialized version of the key +value, and `RESTORE` in order to synthesize the key in the target instance. +The source instance acts as a client for the target instance. +If the target instance returns OK to the `RESTORE` command, the source instance +deletes the key using `DEL`. + +The timeout specifies the maximum idle time in any moment of the communication +with the destination instance in milliseconds. +This means that the operation does not need to be completed within the specified +amount of milliseconds, but that the transfer should make progress without +blocking for more than the specified amount of milliseconds. + +`MIGRATE` needs to perform I/O operations and to honor the specified timeout. +When there is an I/O error during the transfer or if the timeout is reached the +operation is aborted and the special error `IOERR` is returned. +When this happens the following two cases are possible: + +* The key may be on both the instances. +* The key may be only in the source instance. + +It is not possible for the key to get lost in the event of a timeout, but the +client calling `MIGRATE`, in the event of a timeout error, should check if the +key is _also_ present in the target instance and act accordingly. + +When any other error is returned (starting with `ERR`) `MIGRATE` guarantees that +the key is still only present in the originating instance (unless a key with the +same name was also _already_ present on the target instance). + +If there are no keys to migrate in the source instance `NOKEY` is returned. +Because missing keys are possible in normal conditions, from expiry for example, +`NOKEY` isn't an error. + +## Migrating multiple keys with a single command call + +Starting with Redis 3.0.6 `MIGRATE` supports a new bulk-migration mode that +uses pipelining in order to migrate multiple keys between instances without +incurring the round trip time latency and other overheads that there are +when moving each key with a single `MIGRATE` call. + +In order to enable this form, the `!KEYS` option is used, and the normal *key* +argument is set to an empty string. The actual key names will be provided +after the `!KEYS` argument itself, as in the following example: + + MIGRATE 192.168.1.34 6379 "" 0 5000 KEYS key1 key2 key3 + +When this form is used the `NOKEY` status code is only returned when none +of the keys is present in the instance, otherwise the command is executed, even if +just a single key exists.
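+For reference, a single-key migration uses the same shape without the `!KEYS` clause (host, port and key name below are placeholders): + +``` +MIGRATE 192.168.1.34 6379 key1 0 5000 +``` + +On success the reply is `OK`; if `key1` is missing on the source instance the reply is `NOKEY`, as described above.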
+ +## Options + +* `!COPY` -- Do not remove the key from the local instance. +* `REPLACE` -- Replace existing key on the remote instance. +* `!KEYS` -- If the key argument is an empty string, the command will instead migrate all the keys that follow the `!KEYS` option (see the above section for more info). +* `!AUTH` -- Authenticate with the given password to the remote instance. +* `AUTH2` -- Authenticate with the given username and password pair (Redis 6 or greater ACL auth style). diff --git a/commands/module-help.md b/commands/module-help.md new file mode 100644 index 0000000000..6759bfe0fe --- /dev/null +++ b/commands/module-help.md @@ -0,0 +1 @@ +The `MODULE HELP` command returns a helpful text describing the different subcommands. diff --git a/commands/module-list.md b/commands/module-list.md new file mode 100644 index 0000000000..3f91414b05 --- /dev/null +++ b/commands/module-list.md @@ -0,0 +1 @@ +Returns information about the modules loaded to the server. diff --git a/commands/module-load.md b/commands/module-load.md new file mode 100644 index 0000000000..7232257a1d --- /dev/null +++ b/commands/module-load.md @@ -0,0 +1,9 @@ +Loads a module from a dynamic library at runtime. + +This command loads and initializes the Redis module from the dynamic library +specified by the `path` argument. The `path` should be the absolute path of the +library, including the full filename. Any additional arguments are passed +unmodified to the module. + +**Note**: modules can also be loaded at server startup with `loadmodule` +configuration directive in `redis.conf`. diff --git a/commands/module-loadex.md b/commands/module-loadex.md new file mode 100644 index 0000000000..a863aae32c --- /dev/null +++ b/commands/module-loadex.md @@ -0,0 +1,11 @@ +Loads a module from a dynamic library at runtime with configuration directives. + +This is an extended version of the `MODULE LOAD` command. + +It loads and initializes the Redis module from the dynamic library specified by the `path` argument. The `path` should be the absolute path of the library, including the full filename. + +You can use the optional `!CONFIG` argument to provide the module with configuration directives. +Any additional arguments that follow the `ARGS` keyword are passed unmodified to the module. + +**Note**: modules can also be loaded at server startup with `loadmodule` +configuration directive in `redis.conf`. diff --git a/commands/module-unload.md b/commands/module-unload.md new file mode 100644 index 0000000000..4214e750f0 --- /dev/null +++ b/commands/module-unload.md @@ -0,0 +1,9 @@ +Unloads a module. + +This command unloads the module specified by `name`. Note that the module's name +is reported by the `MODULE LIST` command, and may differ from the dynamic +library's filename. + +Known limitations: + +* Modules that register custom data types can not be unloaded. diff --git a/commands/module.md b/commands/module.md new file mode 100644 index 0000000000..87fa539b5f --- /dev/null +++ b/commands/module.md @@ -0,0 +1,3 @@ +This is a container command for module management commands. + +To see the list of available commands you can call `MODULE HELP`. diff --git a/commands/monitor.md b/commands/monitor.md index 67811a2a31..5a7a70c575 100644 --- a/commands/monitor.md +++ b/commands/monitor.md @@ -1,38 +1,87 @@ +`MONITOR` is a debugging command that streams back every command processed by +the Redis server. +It can help in understanding what is happening to the database. +This command can both be used via `redis-cli` and via `telnet`. 
- -`MONITOR` is a debugging command that outputs the whole sequence of commands -received by the Redis server. is very handy in order to understand -what is happening into the database. This command is used directly -via telnet. - % telnet 127.0.0.1 6379 - Trying 127.0.0.1... - Connected to segnalo-local.com. - Escape character is '^]'. - MONITOR - +OK - monitor - keys * - dbsize - set x 6 - foobar - get x - del x - get x - set key_x 5 - hello - set key_y 5 - hello - set key_z 5 - hello - set foo_a 5 - hello The ability to see all the requests processed by the server is useful in order -to spot bugs in the application both when using Redis as a database and as -a distributed caching system. +to spot bugs in an application both when using Redis as a database and as a +distributed caching system. + +``` +$ redis-cli monitor +1339518083.107412 [0 127.0.0.1:60866] "keys" "*" +1339518087.877697 [0 127.0.0.1:60866] "dbsize" +1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6" +1339518096.506257 [0 127.0.0.1:60866] "get" "x" +1339518099.363765 [0 127.0.0.1:60866] "eval" "return redis.call('set','x','7')" "0" +1339518100.363799 [0 lua] "set" "x" "7" +1339518100.544926 [0 127.0.0.1:60866] "del" "x" +``` + +Use `SIGINT` (Ctrl-C) to stop a `MONITOR` stream running via `redis-cli`. + +``` +$ telnet localhost 6379 +Trying 127.0.0.1... +Connected to localhost. +Escape character is '^]'. +MONITOR ++OK ++1339518083.107412 [0 127.0.0.1:60866] "keys" "*" ++1339518087.877697 [0 127.0.0.1:60866] "dbsize" ++1339518090.420270 [0 127.0.0.1:60866] "set" "x" "6" ++1339518096.506257 [0 127.0.0.1:60866] "del" "x" ++1339518100.544926 [0 127.0.0.1:60866] "get" "x" +QUIT ++OK +Connection closed by foreign host. +``` + +Manually issue the `QUIT` or `RESET` commands to stop a `MONITOR` stream running +via `telnet`. + +## Commands not logged by MONITOR + +Because of security concerns, no administrative commands are logged +by `MONITOR`'s output and sensitive data is redacted in the command `AUTH`. + +Furthermore, the command `QUIT` is also not logged. + +## Cost of running MONITOR + +Because `MONITOR` streams back **all** commands, its use comes at a cost. +The following (totally unscientific) benchmark numbers illustrate what the cost +of running `MONITOR` can be. + +Benchmark result **without** `MONITOR` running: + +``` +$ src/redis-benchmark -c 10 -n 100000 -q +PING_INLINE: 101936.80 requests per second +PING_BULK: 102880.66 requests per second +SET: 95419.85 requests per second +GET: 104275.29 requests per second +INCR: 93283.58 requests per second +``` + +Benchmark result **with** `MONITOR` running (`redis-cli monitor > /dev/null`): + +``` +$ src/redis-benchmark -c 10 -n 100000 -q +PING_INLINE: 58479.53 requests per second +PING_BULK: 59136.61 requests per second +SET: 41823.50 requests per second +GET: 45330.91 requests per second +INCR: 41771.09 requests per second +``` -In order to end a monitoring session just issue a `QUIT` command by hand. +In this particular case, running a single `MONITOR` client can reduce the +throughput by more than 50%. +Running more `MONITOR` clients will reduce throughput even more. -@return +## Behavior change history -**Non standard return value**, just dumps the received commands in an infinite -flow. \ No newline at end of file +* `>= 6.0.0`: `AUTH` excluded from the command's output. +* `>= 6.2.0`: `RESET` can be called to exit monitor mode.
+* `>= 6.2.4`: `AUTH`, `HELLO`, `EVAL`, `EVAL_RO`, `EVALSHA` and `EVALSHA_RO` included in the command's output. \ No newline at end of file diff --git a/commands/move.md b/commands/move.md index 62f845b8c8..1e9f6cdbf9 100644 --- a/commands/move.md +++ b/commands/move.md @@ -1,17 +1,5 @@ -@complexity - -O(1) - - Move `key` from the currently selected database (see `SELECT`) to the specified -destination database. When `key` already exists in the destination database, or -it does not exist in the source database, it does nothing. It is possible to -use `MOVE` as a locking primitive because of this. - -@return - -@integer-reply, specifically: - -* `1` if `key` was moved. -* `0` if `key` was not moved. - +destination database. +When `key` already exists in the destination database, or it does not exist in +the source database, it does nothing. +It is possible to use `MOVE` as a locking primitive because of this. diff --git a/commands/mset.md b/commands/mset.md index 21c51124d2..d22b0de5da 100644 --- a/commands/mset.md +++ b/commands/mset.md @@ -1,23 +1,15 @@ -@complexity +Sets the given keys to their respective values. +`MSET` replaces existing values with new values, just as regular `SET`. +See `MSETNX` if you don't want to overwrite existing values. -O(N) where N is the number of keys to set - - -Sets the given keys to their respective values. `MSET` replaces existing values -with new values, just as regular `SET`. See `MSETNX` if you don't want to -overwrite existing values. - -`MSET` is atomic, so all given keys are set at once. It is not possible for -clients to see that some of the keys were updated while others are unchanged. - -@return - -@status-reply: always `OK` since `MSET` can't fail. +`MSET` is atomic, so all given keys are set at once. +It is not possible for clients to see that some of the keys were updated while +others are unchanged. @examples - @cli - MSET key1 "Hello" key2 "World" - GET key1 - GET key2 - +```cli +MSET key1 "Hello" key2 "World" +GET key1 +GET key2 +``` diff --git a/commands/msetnx.md b/commands/msetnx.md index d8f395e021..71b4117c30 100644 --- a/commands/msetnx.md +++ b/commands/msetnx.md @@ -1,29 +1,19 @@ -@complexity - -O(N) where N is the number of keys to set - - -Sets the given keys to their respective values. `MSETNX` will not perform any -operation at all even if just a single key already exists. +Sets the given keys to their respective values. +`MSETNX` will not perform any operation at all even if just a single key already +exists. Because of this semantic `MSETNX` can be used in order to set different keys -representing different fields of an unique logic object in a way that -ensures that either all the fields or none at all are set. - -`MSETNX` is atomic, so all given keys are set at once. It is not possible for -clients to see that some of the keys were updated while others are unchanged. +representing different fields of a unique logic object in a way that ensures +that either all the fields or none at all are set. -@return - -@integer-reply, specifically: - -* `1` if the all the keys were set. -* `0` if no key was set (at least one key already existed). +`MSETNX` is atomic, so all given keys are set at once. +It is not possible for clients to see that some of the keys were updated while +others are unchanged.
@examples - @cli - MSETNX key1 "Hello" key2 "there" - MSETNX key2 "there" key3 "world" - MGET key1 key2 key3 - +```cli +MSETNX key1 "Hello" key2 "there" +MSETNX key2 "new" key3 "world" +MGET key1 key2 key3 +``` diff --git a/commands/multi.md b/commands/multi.md index a53664ddc1..1b2ba22659 100644 --- a/commands/multi.md +++ b/commands/multi.md @@ -1,7 +1,4 @@ -Marks the start of a [transaction](/topics/transactions) -block. Subsequent commands will be queued for atomic execution using -`EXEC`. +Marks the start of a [transaction][tt] block. +Subsequent commands will be queued for atomic execution using `EXEC`. -@return - -@status-reply: always `OK`. +[tt]: /topics/transactions diff --git a/commands/object-encoding.md b/commands/object-encoding.md new file mode 100644 index 0000000000..685debf912 --- /dev/null +++ b/commands/object-encoding.md @@ -0,0 +1,42 @@ +Returns the internal encoding for the Redis object stored at `<key>`. + +Redis objects can be encoded in different ways: + +* Strings can be encoded as: + + - `raw`, normal string encoding. + - `int`, strings representing integers in a 64-bit signed interval, encoded in this way to save space. + - `embstr`, an embedded string, which is an object where the internal simple dynamic string, `sds`, is an unmodifiable string allocated in the same chunk as the object itself. + `embstr` can be strings with lengths up to the hardcoded limit of `OBJ_ENCODING_EMBSTR_SIZE_LIMIT` or 44 bytes. + +* Lists can be encoded as: + + - `linkedlist`, simple list encoding. No longer used, an old list encoding. + - `ziplist`, Redis <= 6.2, a space-efficient encoding used for small lists. + - `listpack`, Redis >= 7.0, a space-efficient encoding used for small lists. + - `quicklist`, encoded as a linked list of ziplists or listpacks. + +* Sets can be encoded as: + + - `hashtable`, normal set encoding. + - `intset`, a special encoding used for small sets composed solely of integers. + - `listpack`, Redis >= 7.2, a space-efficient encoding used for small sets. + +* Hashes can be encoded as: + + - `zipmap`, no longer used, an old hash encoding. + - `hashtable`, normal hash encoding. + - `ziplist`, Redis <= 6.2, a space-efficient encoding used for small hashes. + - `listpack`, Redis >= 7.0, a space-efficient encoding used for small hashes. + +* Sorted Sets can be encoded as: + + - `skiplist`, normal sorted set encoding. + - `ziplist`, Redis <= 6.2, a space-efficient encoding used for small sorted sets. + - `listpack`, Redis >= 7.0, a space-efficient encoding used for small sorted sets. + +* Streams can be encoded as: + + - `stream`, encoded as a radix tree of listpacks. + +All the specially encoded types are automatically converted to the general type once you perform an operation that makes it impossible for Redis to retain the space saving encoding (see the illustrative session below). diff --git a/commands/object-freq.md b/commands/object-freq.md new file mode 100644 index 0000000000..5c75bfb787 --- /dev/null +++ b/commands/object-freq.md @@ -0,0 +1,3 @@ +This command returns the logarithmic access frequency counter of a Redis object stored at `<key>`. + +The command is only available when the `maxmemory-policy` configuration directive is set to one of the LFU policies. diff --git a/commands/object-help.md b/commands/object-help.md new file mode 100644 index 0000000000..d528d40751 --- /dev/null +++ b/commands/object-help.md @@ -0,0 +1 @@ +The `OBJECT HELP` command returns a helpful text describing the different subcommands.
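+As an illustration of the automatic conversions described under `OBJECT ENCODING` above, a short session (adapted from the example on the former `OBJECT` page; replies are illustrative): + +``` +> SET foo 1000 +OK +> OBJECT ENCODING foo +"int" +> APPEND foo bar +(integer) 7 +> OBJECT ENCODING foo +"raw" +```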
diff --git a/commands/object-idletime.md b/commands/object-idletime.md new file mode 100644 index 0000000000..791d8fb00a --- /dev/null +++ b/commands/object-idletime.md @@ -0,0 +1,3 @@ +This command returns the time in seconds since the last access to the value stored at `<key>`. + +The command is only available when the `maxmemory-policy` configuration directive is not set to one of the LFU policies. diff --git a/commands/object-refcount.md b/commands/object-refcount.md new file mode 100644 index 0000000000..e526681640 --- /dev/null +++ b/commands/object-refcount.md @@ -0,0 +1 @@ +This command returns the reference count of the value stored at `<key>`. diff --git a/commands/object.md b/commands/object.md index 86f12059f2..887ab9d5fd 100644 --- a/commands/object.md +++ b/commands/object.md @@ -1,59 +1,3 @@ -@complexity - -O(1) for all the currently implemented subcommands. - -The `OBJECT` command allows to inspect the internals of Redis Objects associated -with keys. It is useful for debugging or to understand if your keys are using -the specially encoded data types to save space. Your application may also use -the information reported by the `OBJECT` command to implement application level -key eviction policies when using Redis as a Cache. - -The `OBJECT` command supports multiple sub commands: - -* `OBJECT REFCOUNT ` returns the number of references of the value associated with the specified key. This command is mainly useful for debugging. -* `OBJECT ENCODING ` returns the kind of internal representation used in order to store the value associated with a key. -* `OBJECT IDLETIME ` returns the number of seconds since the object stored at the specified key is idle (not requested by read or write operations). While the value is returned in seconds the actual resolution of this timer is 10 seconds, but may vary in future implementations. - -Objects can be encoded in different ways: - -* Strings can be encoded as `raw` (normal string encoding) or `int` (strings representing integers in a 64 bit signed interval are encoded in this way in order to save space). -* Lists can be encoded as `ziplist` or `linkedlist`. The `ziplist` is the special representation that is used to save space for small lists. -* Sets can be encoded as `intset` or `hashtable`. The `intset` is a special encoding used for small sets composed solely of integers. -* Hashes can be encoded as `zipmap` or `hashtable`. The `zipmap` is a special encoding used for small hashes. -* Sorted Sets can be encoded as `ziplist` or `skiplist` format. As for the List type small sorted sets can be specially encoded using `ziplist`, while the `skiplist` encoding is the one that works with sorted sets of any size. - -All the specially encoded types are automatically converted to the general type once you perform an operation that makes it no possible for Redis to retain the space saving encoding. - -@return - -Different return values are used for different subcommands. - -* Subcommands `refcount` and `idletime` returns integers. -* Subcommand `encoding` returns a bulk reply. - -If the object you try to inspect is missing, a null bulk reply is returned. - -@examples - - redis> lpush mylist "Hello World" - (integer) 4 - redis> object refcount mylist - (integer) 1 - redis> object encoding mylist - "ziplist" - redis> object idletime mylist - (integer) 10 - -In the following example you can see how the encoding changes once Redis is no longer able to use the space saving encoding.
- - redis> set foo 1000 - OK - redis> object encoding foo - "int" - redis> append foo bar - (integer) 7 - redis> get foo - "1000bar" - redis> object encoding foo - "raw" +This is a container command for object introspection commands. +To see the list of available commands you can call `OBJECT HELP`. diff --git a/commands/persist.md b/commands/persist.md index fda4c1f789..44f067d7d8 100644 --- a/commands/persist.md +++ b/commands/persist.md @@ -1,23 +1,13 @@ -@complexity - -O(1) - - -Remove the existing timeout on `key`. - -@return - -@integer-reply, specifically: - -* `1` if the timeout was removed. -* `0` if `key` does not exist or does not have an associated timeout. +Remove the existing timeout on `key`, turning the key from _volatile_ (a key +with an expire set) to _persistent_ (a key that will never expire as no timeout +is associated). @examples - @cli - SET mykey "Hello" - EXPIRE mykey 10 - TTL mykey - PERSIST mykey - TTL mykey - +```cli +SET mykey "Hello" +EXPIRE mykey 10 +TTL mykey +PERSIST mykey +TTL mykey +``` diff --git a/commands/pexpire.md b/commands/pexpire.md new file mode 100644 index 0000000000..2e0df07063 --- /dev/null +++ b/commands/pexpire.md @@ -0,0 +1,27 @@ +This command works exactly like `EXPIRE` but the time to live of the key is +specified in milliseconds instead of seconds. + +## Options + +The `PEXPIRE` command supports a set of options since Redis 7.0: + +* `NX` -- Set expiry only when the key has no expiry +* `XX` -- Set expiry only when the key has an existing expiry +* `GT` -- Set expiry only when the new expiry is greater than current one +* `LT` -- Set expiry only when the new expiry is less than current one + +A non-volatile key is treated as an infinite TTL for the purpose of `GT` and `LT`. +The `GT`, `LT` and `NX` options are mutually exclusive. + +@examples + +```cli +SET mykey "Hello" +PEXPIRE mykey 1500 +TTL mykey +PTTL mykey +PEXPIRE mykey 1000 XX +TTL mykey +PEXPIRE mykey 1000 NX +TTL mykey +``` diff --git a/commands/pexpireat.md b/commands/pexpireat.md new file mode 100644 index 0000000000..748d491a72 --- /dev/null +++ b/commands/pexpireat.md @@ -0,0 +1,23 @@ +`PEXPIREAT` has the same effect and semantic as `EXPIREAT`, but the Unix time at +which the key will expire is specified in milliseconds instead of seconds. + +## Options + +The `PEXPIREAT` command supports a set of options since Redis 7.0: + +* `NX` -- Set expiry only when the key has no expiry +* `XX` -- Set expiry only when the key has an existing expiry +* `GT` -- Set expiry only when the new expiry is greater than current one +* `LT` -- Set expiry only when the new expiry is less than current one + +A non-volatile key is treated as an infinite TTL for the purpose of `GT` and `LT`. +The `GT`, `LT` and `NX` options are mutually exclusive. + +@examples + +```cli +SET mykey "Hello" +PEXPIREAT mykey 1555555555005 +TTL mykey +PTTL mykey +``` diff --git a/commands/pexpiretime.md b/commands/pexpiretime.md new file mode 100644 index 0000000000..ffde6be48b --- /dev/null +++ b/commands/pexpiretime.md @@ -0,0 +1,9 @@ +`PEXPIRETIME` has the same semantic as `EXPIRETIME`, but returns the absolute Unix expiration timestamp in milliseconds instead of seconds. 
+ +@examples + +```cli +SET mykey "Hello" +PEXPIREAT mykey 33177117420000 +PEXPIRETIME mykey +``` diff --git a/commands/pfadd.md b/commands/pfadd.md new file mode 100644 index 0000000000..f621a00d4e --- /dev/null +++ b/commands/pfadd.md @@ -0,0 +1,16 @@ +Adds all the element arguments to the HyperLogLog data structure stored at the variable name specified as first argument. + +As a side effect of this command the HyperLogLog internals may be updated to reflect a different estimation of the number of unique items added so far (the cardinality of the set). + +If the approximated cardinality estimated by the HyperLogLog changed after executing the command, `PFADD` returns 1, otherwise 0 is returned. The command automatically creates an empty HyperLogLog structure (that is, a Redis String of a specified length and with a given encoding) if the specified key does not exist. + +Calling the command with just the variable name and no elements is valid; this will result in no operation being performed if the variable already exists, or just the creation of the data structure if the key does not exist (in the latter case 1 is returned). + +For an introduction to the HyperLogLog data structure, check the `PFCOUNT` command page. + +@examples + +```cli +PFADD hll a b c d e f g +PFCOUNT hll +``` diff --git a/commands/pfcount.md b/commands/pfcount.md new file mode 100644 index 0000000000..ac6f712a6a --- /dev/null +++ b/commands/pfcount.md @@ -0,0 +1,55 @@ +When called with a single key, returns the approximated cardinality computed by the HyperLogLog data structure stored at the specified variable, which is 0 if the variable does not exist. + +When called with multiple keys, returns the approximated cardinality of the union of the HyperLogLogs passed, by internally merging the HyperLogLogs stored at the provided keys into a temporary HyperLogLog. + +The HyperLogLog data structure can be used in order to count **unique** elements in a set using just a small constant amount of memory, specifically 12k bytes for every HyperLogLog (plus a few bytes for the key itself). + +The returned cardinality of the observed set is not exact, but approximated with a standard error of 0.81%. + +For example, in order to take the count of all the unique search queries performed in a day, a program needs to call `PFADD` every time a query is processed. The estimated number of unique queries can be retrieved with `PFCOUNT` at any time. + +Note: as a side effect of calling this function, it is possible that the HyperLogLog is modified, since the last 8 bytes encode the latest computed cardinality +for caching purposes. So `PFCOUNT` is technically a write command. + +@examples + +```cli +PFADD hll foo bar zap +PFADD hll zap zap zap +PFADD hll foo bar +PFCOUNT hll +PFADD some-other-hll 1 2 3 +PFCOUNT hll some-other-hll +``` + +Performance +--- + +When `PFCOUNT` is called with a single key, performance is excellent even if, +in theory, the constant times to process a dense HyperLogLog are high. This is +possible because `PFCOUNT` uses caching in order to remember the cardinality +previously computed, which rarely changes because most `PFADD` operations will +not update any register. Hundreds of operations per second are possible. + +When `PFCOUNT` is called with multiple keys, an on-the-fly merge of the +HyperLogLogs is performed, which is slow; moreover, the cardinality of the union +can't be cached, so when used with multiple keys `PFCOUNT` may take time on +the order of milliseconds, and should not be abused.
+ +The user should keep in mind that single-key and multiple-key executions of +this command are semantically different and have different performance characteristics. + +HyperLogLog representation +--- + +Redis HyperLogLogs are represented using a double representation: the *sparse* representation suitable for HLLs counting a small number of elements (resulting in a small number of registers set to non-zero value), and a *dense* representation suitable for higher cardinalities. Redis automatically switches from the sparse to the dense representation when needed. + +The sparse representation uses a run-length encoding optimized to efficiently store a big number of registers set to zero. The dense representation is a Redis string of 12288 bytes in order to store 16384 6-bit counters. The need for the double representation comes from the fact that using 12k (which is the dense representation memory requirement) to encode just a few registers for smaller cardinalities is extremely suboptimal. + +Both representations are prefixed with a 16-byte header that includes a magic string, an encoding/version field, and the cached cardinality estimation, stored in little endian format (the most significant bit is 1 if the estimation is invalid because the HyperLogLog was updated since the cardinality was computed). + +The HyperLogLog, being a Redis string, can be retrieved with `GET` and restored with `SET`. Calling `PFADD`, `PFCOUNT` or `PFMERGE` commands with a corrupted HyperLogLog is never a problem: it may return random values but does not affect the stability of the server. Most of the time, when a sparse representation is corrupted, the server recognizes the corruption and returns an error. + +The representation is neutral from the point of view of the processor word size and endianness, so the same representation is used by 32-bit and 64-bit processors, big endian or little endian alike. + +More details about the Redis HyperLogLog implementation can be found in [this blog post](http://antirez.com/news/75). The source code of the implementation in the `hyperloglog.c` file is also easy to read and understand, and includes a full specification for the exact encoding used for the sparse and dense representations. diff --git a/commands/pfdebug.md b/commands/pfdebug.md new file mode 100644 index 0000000000..b7cceea5f4 --- /dev/null +++ b/commands/pfdebug.md @@ -0,0 +1,2 @@ +The `PFDEBUG` command is an internal command. +It is meant to be used for developing and testing Redis. \ No newline at end of file diff --git a/commands/pfmerge.md b/commands/pfmerge.md new file mode 100644 index 0000000000..4eb1b90966 --- /dev/null +++ b/commands/pfmerge.md @@ -0,0 +1,19 @@ +Merge multiple HyperLogLog values into a single value that will approximate +the cardinality of the union of the observed Sets of the source HyperLogLog +structures. + +The computed merged HyperLogLog is set to the destination variable, which is +created if it does not exist (defaulting to an empty HyperLogLog). + +If the destination variable exists, it is treated as one of the source sets +and its cardinality will be included in the cardinality of the computed +HyperLogLog. + +@examples + +```cli +PFADD hll1 foo bar zap a +PFADD hll2 a b c foo +PFMERGE hll3 hll1 hll2 +PFCOUNT hll3 +``` diff --git a/commands/pfselftest.md b/commands/pfselftest.md new file mode 100644 index 0000000000..bdc1e61da0 --- /dev/null +++ b/commands/pfselftest.md @@ -0,0 +1,2 @@ +The `PFSELFTEST` command is an internal command. +It is meant to be used for developing and testing Redis.
\ No newline at end of file diff --git a/commands/ping.md b/commands/ping.md index b572a36c68..f1631d4a2c 100644 --- a/commands/ping.md +++ b/commands/ping.md @@ -1,14 +1,19 @@ -@description +Returns `PONG` if no argument is provided, otherwise returns a copy of the +argument as a bulk. +This command is useful for: +1. Testing whether a connection is still alive. +1. Verifying the server's ability to serve data - an error is returned when this isn't the case (e.g., during load from persistence or accessing a stale replica). +1. Measuring latency. -Returns `PONG`. This command is often used to test if a connection is still -alive, or to measure latency. - -@return - -@status-reply +If the client is subscribed to a channel or a pattern, it will instead return a +multi-bulk with a "pong" in the first position and an empty bulk in the second +position, unless an argument is provided in which case it returns a copy +of the argument. @examples - @cli - PING +```cli +PING +PING "hello world" +``` diff --git a/commands/psetex.md b/commands/psetex.md new file mode 100644 index 0000000000..3e9988eff9 --- /dev/null +++ b/commands/psetex.md @@ -0,0 +1,10 @@ +`PSETEX` works exactly like `SETEX` with the sole difference that the expire +time is specified in milliseconds instead of seconds. + +@examples + +```cli +PSETEX mykey 1000 "Hello" +PTTL mykey +GET mykey +``` diff --git a/commands/psubscribe.md b/commands/psubscribe.md index ee6a6842eb..abb80b9975 100644 --- a/commands/psubscribe.md +++ b/commands/psubscribe.md @@ -1,5 +1,18 @@ -@complexity +Subscribes the client to the given patterns. -O(N) where N is the number of patterns the client is already subscribed to. +Supported glob-style patterns: -Subscribes the client to the given patterns. +* `h?llo` subscribes to `hello`, `hallo` and `hxllo` +* `h*llo` subscribes to `hllo` and `heeeello` +* `h[ae]llo` subscribes to `hello` and `hallo`, but not `hillo` + +Use `\` to escape special characters if you want to match them verbatim. + +Once the client enters the subscribed state it is not supposed to issue any other commands, except for additional `SUBSCRIBE`, `SSUBSCRIBE`, `PSUBSCRIBE`, `UNSUBSCRIBE`, `SUNSUBSCRIBE`, `PUNSUBSCRIBE`, `PING`, `RESET` and `QUIT` commands. +However, if RESP3 is used (see `HELLO`) it is possible for a client to issue any commands while in subscribed state. + +For more information, see [Pub/sub](/docs/interact/pubsub/). + +## Behavior change history + +* `>= 6.2.0`: `RESET` can be called to exit subscribed state. diff --git a/commands/psync.md b/commands/psync.md new file mode 100644 index 0000000000..d9eac2ab9a --- /dev/null +++ b/commands/psync.md @@ -0,0 +1,9 @@ +Initiates a replication stream from the master. + +The `PSYNC` command is called by Redis replicas for initiating a replication +stream from the master. + +For more information about replication in Redis please check the +[replication page][tr]. + +[tr]: /topics/replication diff --git a/commands/pttl.md b/commands/pttl.md new file mode 100644 index 0000000000..302194b46a --- /dev/null +++ b/commands/pttl.md @@ -0,0 +1,18 @@ +Like `TTL` this command returns the remaining time to live of a key that has an +expire set, with the sole difference that `TTL` returns the amount of remaining +time in seconds while `PTTL` returns it in milliseconds. + +In Redis 2.6 or older the command returns `-1` if the key does not exist or if the key exists but has no associated expire.
+ +Starting with Redis 2.8 the return value in case of error changed: + +* The command returns `-2` if the key does not exist. +* The command returns `-1` if the key exists but has no associated expire. + +@examples + +```cli +SET mykey "Hello" +EXPIRE mykey 1 +PTTL mykey +``` diff --git a/commands/publish.md b/commands/publish.md index fa55e3702f..cd61d04678 100644 --- a/commands/publish.md +++ b/commands/publish.md @@ -1,11 +1,5 @@ -@complexity - -O(N+M) where N is the number of clients subscribed to the receiving -channel and M is the total number of subscribed patterns (by any -client). - Posts a message to the given channel. -@return - -@integer-reply: the number of clients that received the message. +In a Redis Cluster clients can publish to every node. The cluster makes sure +that published messages are forwarded as needed, so clients can subscribe to any +channel by connecting to any one of the nodes. diff --git a/commands/pubsub-channels.md b/commands/pubsub-channels.md new file mode 100644 index 0000000000..8e0b3e36fd --- /dev/null +++ b/commands/pubsub-channels.md @@ -0,0 +1,7 @@ +Lists the currently *active channels*. + +An active channel is a Pub/Sub channel with one or more subscribers (excluding clients subscribed to patterns). + +If no `pattern` is specified, all the channels are listed; otherwise only channels matching the specified glob-style pattern are listed. + +Cluster note: in a Redis Cluster clients can subscribe to every node, and can also publish to every other node. The cluster will make sure that published messages are forwarded as needed. That said, `PUBSUB`'s replies in a cluster only report information from the node's Pub/Sub context, rather than the entire cluster. diff --git a/commands/pubsub-help.md b/commands/pubsub-help.md new file mode 100644 index 0000000000..f711c27db2 --- /dev/null +++ b/commands/pubsub-help.md @@ -0,0 +1 @@ +The `PUBSUB HELP` command returns a helpful text describing the different subcommands. diff --git a/commands/pubsub-numpat.md b/commands/pubsub-numpat.md new file mode 100644 index 0000000000..2a3282c5c6 --- /dev/null +++ b/commands/pubsub-numpat.md @@ -0,0 +1,5 @@ +Returns the number of unique patterns that are subscribed to by clients (as performed using the `PSUBSCRIBE` command). + +Note that this isn't the count of clients subscribed to patterns, but the total number of unique patterns all the clients are subscribed to. + +Cluster note: in a Redis Cluster clients can subscribe to every node, and can also publish to every other node. The cluster will make sure that published messages are forwarded as needed. That said, `PUBSUB`'s replies in a cluster only report information from the node's Pub/Sub context, rather than the entire cluster. diff --git a/commands/pubsub-numsub.md b/commands/pubsub-numsub.md new file mode 100644 index 0000000000..604c317900 --- /dev/null +++ b/commands/pubsub-numsub.md @@ -0,0 +1,5 @@ +Returns the number of subscribers (exclusive of clients subscribed to patterns) for the specified channels. + +Note that it is valid to call this command without channels. In this case it will just return an empty list. + +Cluster note: in a Redis Cluster clients can subscribe to every node, and can also publish to every other node. The cluster will make sure that published messages are forwarded as needed. That said, `PUBSUB`'s replies in a cluster only report information from the node's Pub/Sub context, rather than the entire cluster.
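Tying the introspection subcommands together, a small Python sketch (assuming the redis-py client, which exposes them as `pubsub_channels`, `pubsub_numsub` and `pubsub_numpat`; the channel names are hypothetical):

```python
import redis

r = redis.Redis(decode_responses=True)

# Active channels (channels with at least one non-pattern subscriber).
print(r.pubsub_channels("news.*"))

# (channel, subscriber-count) pairs for specific channels.
print(r.pubsub_numsub("news.sports", "news.tech"))

# Number of unique patterns subscribed to with PSUBSCRIBE.
print(r.pubsub_numpat())
```

In a Redis Cluster, remember that these replies describe only the queried node's Pub/Sub context, not the whole cluster.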
diff --git a/commands/pubsub-shardchannels.md b/commands/pubsub-shardchannels.md new file mode 100644 index 0000000000..c6c460bd79 --- /dev/null +++ b/commands/pubsub-shardchannels.md @@ -0,0 +1,16 @@ +Lists the currently *active shard channels*. + +An active shard channel is a Pub/Sub shard channel with one or more subscribers. + +If no `pattern` is specified, all the channels are listed; otherwise only channels matching the specified glob-style pattern are listed. + +The information returned about the active shard channels is at the shard level and not at the cluster level. + +@examples + +``` +> PUBSUB SHARDCHANNELS +1) "orders" +> PUBSUB SHARDCHANNELS o* +1) "orders" +``` diff --git a/commands/pubsub-shardnumsub.md b/commands/pubsub-shardnumsub.md new file mode 100644 index 0000000000..8119fe9b56 --- /dev/null +++ b/commands/pubsub-shardnumsub.md @@ -0,0 +1,13 @@ +Returns the number of subscribers for the specified shard channels. + +Note that it is valid to call this command without channels; in this case it will just return an empty list. + +Cluster note: in a Redis Cluster, `PUBSUB`'s replies only report information from the node's Pub/Sub context, rather than the entire cluster. + +@examples + +``` +> PUBSUB SHARDNUMSUB orders +1) "orders" +2) (integer) 1 +``` diff --git a/commands/pubsub.md b/commands/pubsub.md new file mode 100644 index 0000000000..fa10a9edc4 --- /dev/null +++ b/commands/pubsub.md @@ -0,0 +1,3 @@ +This is a container command for Pub/Sub introspection commands. + +To see the list of available commands you can call `PUBSUB HELP`. diff --git a/commands/punsubscribe.md b/commands/punsubscribe.md index 4f5cd4a7e3..af8ee7e021 100644 --- a/commands/punsubscribe.md +++ b/commands/punsubscribe.md @@ -1,12 +1,7 @@ -@complexity +Unsubscribes the client from the given patterns, or from all of them if none is +given. -O(N+M) where N is the number of patterns the client is already -subscribed and M is the number of total patterns subscribed in the -system (by any client). - -Unsubscribes the client from the given patterns, or from all of them if -none is given. - -When no patters are specified, the client is unsubscribed from all -the previously subscribed patterns. In this case, a message for every -unsubscribed pattern will be sent to the client. +When no patterns are specified, the client is unsubscribed from all the +previously subscribed patterns. +In this case, a message for every unsubscribed pattern will be sent to the +client. diff --git a/commands/quit.md b/commands/quit.md index 333ddc696d..e0507d43d2 100644 --- a/commands/quit.md +++ b/commands/quit.md @@ -1,9 +1,7 @@ -@description - -Ask the server to close the connection. The connection is closed as soon as all -pending replies have been written to the client. - -@return - -@status-reply: always OK. +Ask the server to close the connection. +The connection is closed as soon as all pending replies have been written to the +client. + +**Note:** Clients should not use this command. +Instead, clients should simply close the connection when it is no longer needed. +Terminating a connection on the client side is preferable, as it eliminates `TIME_WAIT` lingering sockets on the server side. diff --git a/commands/randomkey.md b/commands/randomkey.md index 2bd29212be..37a2d759ab 100644 --- a/commands/randomkey.md +++ b/commands/randomkey.md @@ -1,11 +1 @@ -@complexity - -O(1) - - Return a random key from the currently selected database.
- -@return - -@bulk-reply: the random key, or `nil` when the database is empty. - diff --git a/commands/readonly.md b/commands/readonly.md new file mode 100644 index 0000000000..9bd4f96a7e --- /dev/null +++ b/commands/readonly.md @@ -0,0 +1,15 @@ +Enables read queries for a connection to a Redis Cluster replica node. + +Normally replica nodes will redirect clients to the authoritative master for +the hash slot involved in a given command; however, clients can use replicas +in order to scale reads using the `READONLY` command. + +`READONLY` tells a Redis Cluster replica node that the client is willing to +read possibly stale data and is not interested in running write queries. + +When the connection is in readonly mode, the cluster will send a redirection +to the client only if the operation involves keys not served by the replica's +master node. This may happen because: + +1. The client sent a command about hash slots never served by the master of this replica. +2. The cluster was reconfigured (for example resharded) and the replica is no longer able to serve commands for a given hash slot. diff --git a/commands/readwrite.md b/commands/readwrite.md new file mode 100644 index 0000000000..9b50eefd80 --- /dev/null +++ b/commands/readwrite.md @@ -0,0 +1,6 @@ +Disables read queries for a connection to a Redis Cluster replica node. + +Read queries against a Redis Cluster replica node are disabled by default, +but you can use the `READONLY` command to change this behavior on a +per-connection basis. The `READWRITE` command resets the readonly mode flag +of a connection back to readwrite. diff --git a/commands/rename.md b/commands/rename.md index 329e0ad274..82c503c219 100644 --- a/commands/rename.md +++ b/commands/rename.md @@ -1,20 +1,17 @@ -@complexity +Renames `key` to `newkey`. +It returns an error when `key` does not exist. +If `newkey` already exists it is overwritten; when this happens `RENAME` executes an implicit `DEL` operation, so if the deleted key contains a very big value it may cause high latency even though `RENAME` itself is usually a constant-time operation. -O(1) - - -Renames `key` to `newkey`. It returns an error when the source and destination -names are the same, or when `key` does not exist. If `newkey` already exists it -is overwritten. - -@return - -@status-reply +In Cluster mode, both `key` and `newkey` must be in the same **hash slot**, meaning that in practice only keys that have the same hash tag can be reliably renamed in a cluster. @examples - @cli - SET mykey "Hello" - RENAME mykey myotherkey - GET myotherkey +```cli +SET mykey "Hello" +RENAME mykey myotherkey +GET myotherkey +``` + +## Behavior change history +* `>= 3.2.0`: The command no longer returns an error when source and destination names are the same. \ No newline at end of file diff --git a/commands/renamenx.md b/commands/renamenx.md index 8bb75909c3..15b951e4ca 100644 --- a/commands/renamenx.md +++ b/commands/renamenx.md @@ -1,23 +1,13 @@ -@complexity - -O(1) - - Renames `key` to `newkey` if `newkey` does not yet exist. -It returns an error under the same conditions as `RENAME`. +It returns an error when `key` does not exist. -@return - -@integer-reply, specifically: - -* `1` if `key` was renamed to `newkey`. -* `0` if `newkey` already exists. +In Cluster mode, both `key` and `newkey` must be in the same **hash slot**, meaning that in practice only keys that have the same hash tag can be reliably renamed in a cluster.
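To make the hash-slot constraint concrete, a short Python sketch (assuming the redis-py client; the `{user:42}` key names are hypothetical) where both keys carry the same hash tag and therefore map to the same slot, so the rename is safe in Cluster mode:

```python
import redis

r = redis.Redis()

# Both keys share the hash tag {user:42}, so they hash to the same slot.
r.set("{user:42}:session", "abc")

# RENAMENX succeeds only if the destination does not exist yet.
ok = r.renamenx("{user:42}:session", "{user:42}:session:archived")
print(ok)  # True if renamed, False if the destination already existed
```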
@examples - @cli - SET mykey "Hello" - SET myotherkey "World" - RENAMENX mykey myotherkey - GET myotherkey - +```cli +SET mykey "Hello" +SET myotherkey "World" +RENAMENX mykey myotherkey +GET myotherkey +``` diff --git a/commands/replconf.md b/commands/replconf.md new file mode 100644 index 0000000000..fc34549cd6 --- /dev/null +++ b/commands/replconf.md @@ -0,0 +1,2 @@ +The `REPLCONF` command is an internal command. +It is used by a Redis master to configure a connected replica. \ No newline at end of file diff --git a/commands/replicaof.md b/commands/replicaof.md new file mode 100644 index 0000000000..8351f1453e --- /dev/null +++ b/commands/replicaof.md @@ -0,0 +1,17 @@ +The `REPLICAOF` command can change the replication settings of a replica on the fly. + +If a Redis server is already acting as a replica, the command `REPLICAOF NO ONE` will turn off the replication, turning the Redis server into a MASTER. In the proper form, `REPLICAOF hostname port` will make the server a replica of another server listening at the specified hostname and port. + +If a server is already a replica of some master, `REPLICAOF hostname port` will stop the replication against the old server and start the synchronization against the new one, discarding the old dataset. + +The form `REPLICAOF NO ONE` will stop replication, turning the server into a MASTER, but will not discard the already replicated dataset. So, if the old master stops working, it is possible to turn the replica into a master and set the application to use this new master in read/write. Later when the other Redis server is fixed, it can be reconfigured to work as a replica. + +@examples + +``` +> REPLICAOF NO ONE +"OK" + +> REPLICAOF 127.0.0.1 6799 +"OK" +``` diff --git a/commands/reset.md b/commands/reset.md new file mode 100644 index 0000000000..78434755a4 --- /dev/null +++ b/commands/reset.md @@ -0,0 +1,21 @@ +This command performs a full reset of the connection's server-side context, +mimicking the effect of disconnecting and reconnecting again. + +When the command is called from a regular client connection, it does the +following: + +* Discards the current `MULTI` transaction block, if one exists. +* Unwatches all keys `WATCH`ed by the connection. +* Disables `CLIENT TRACKING`, if in use. +* Sets the connection to `READWRITE` mode. +* Cancels the connection's `ASKING` mode, if previously set. +* Sets `CLIENT REPLY` to `ON`. +* Sets the protocol version to RESP2. +* `SELECT`s database 0. +* Exits `MONITOR` mode, when applicable. +* Aborts Pub/Sub's subscription state (`SUBSCRIBE` and `PSUBSCRIBE`), when + appropriate. +* Deauthenticates the connection, requiring a call to `AUTH` to reauthenticate when + authentication is enabled. +* Turns off `NO-EVICT` mode. +* Turns off `NO-TOUCH` mode. diff --git a/commands/restore-asking.md b/commands/restore-asking.md new file mode 100644 index 0000000000..16488054ec --- /dev/null +++ b/commands/restore-asking.md @@ -0,0 +1,2 @@ +The `RESTORE-ASKING` command is an internal command. +It is used by a Redis cluster master during slot migration. \ No newline at end of file diff --git a/commands/restore.md b/commands/restore.md new file mode 100644 index 0000000000..d756a33bb4 --- /dev/null +++ b/commands/restore.md @@ -0,0 +1,36 @@ +Create a key associated with a value that is obtained by deserializing the +provided serialized value (obtained via `DUMP`). + +If `ttl` is 0 the key is created without any expire, otherwise the specified +expire time (in milliseconds) is set.
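A minimal Python round-trip sketch (assuming the redis-py client; key names are hypothetical): serialize a key with `DUMP`, then recreate it under another name with `RESTORE`:

```python
import redis

r = redis.Redis()

r.rpush("mylist", "1", "2", "3")

# DUMP returns the opaque serialized value (bytes).
payload = r.dump("mylist")

# RESTORE recreates it; ttl=0 means no expire, replace=True overwrites
# an existing key instead of failing with a "busy" error.
r.restore("mylist:copy", 0, payload, replace=True)
print(r.lrange("mylist:copy", 0, -1))
```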
+ +If the `ABSTTL` modifier was used, `ttl` should represent an absolute +[Unix timestamp][hewowu] (in milliseconds) at which the key will expire. + +[hewowu]: http://en.wikipedia.org/wiki/Unix_time + +For eviction purposes, you may use the `IDLETIME` or `FREQ` modifiers. See +`OBJECT` for more information. + +`!RESTORE` will return a "Target key name is busy" error when `key` already +exists unless you use the `REPLACE` modifier. + +`!RESTORE` checks the RDB version and data checksum. +If they don't match an error is returned. + +@examples + +``` +redis> DEL mykey +0 +redis> RESTORE mykey 0 "\n\x17\x17\x00\x00\x00\x12\x00\x00\x00\x03\x00\ + x00\xc0\x01\x00\x04\xc0\x02\x00\x04\xc0\x03\x00\ + xff\x04\x00u#<\xc0;.\xe9\xdd" +OK +redis> TYPE mykey +list +redis> LRANGE mykey 0 -1 +1) "1" +2) "2" +3) "3" +``` diff --git a/commands/role.md b/commands/role.md new file mode 100644 index 0000000000..08bf21df1f --- /dev/null +++ b/commands/role.md @@ -0,0 +1,78 @@ +Provides information on the role of a Redis instance in the context of replication, by returning whether the instance is currently a `master`, `slave`, or `sentinel`. The command also returns additional information about the state of the replication (if the role is master or slave) or the list of monitored master names (if the role is sentinel). + +## Output format + +The command returns an array of elements. The first element is the role of +the instance, as one of the following three strings: + +* "master" +* "slave" +* "sentinel" + +The additional elements of the array depend on the role. + +## Master output + +An example of output when `ROLE` is called in a master instance: + +``` +1) "master" +2) (integer) 3129659 +3) 1) 1) "127.0.0.1" + 2) "9001" + 3) "3129242" + 2) 1) "127.0.0.1" + 2) "9002" + 3) "3129543" +``` + +The master output is composed of the following parts: + +1. The string `master`. +2. The current master replication offset, which is an offset that masters and replicas share to understand, in partial resynchronizations, the part of the replication stream the replica needs to fetch to continue. +3. An array of three-element arrays representing the connected replicas. Every sub-array contains the replica IP, port, and the last acknowledged replication offset. + +## Output of the command on replicas + +An example of output when `ROLE` is called in a replica instance: + +``` +1) "slave" +2) "127.0.0.1" +3) (integer) 9000 +4) "connected" +5) (integer) 3167038 +``` + +The replica output is composed of the following parts: + +1. The string `slave`, because of backward compatibility (see note at the end of this page). +2. The IP of the master. +3. The port number of the master. +4. The state of the replication from the point of view of the master, which can be `connect` (the instance needs to connect to its master), `connecting` (the master-replica connection is in progress), `sync` (the master and replica are trying to perform the synchronization), `connected` (the replica is online). +5. The amount of data received by the replica so far, in terms of master replication offset. + +## Sentinel output + +An example of Sentinel output: + +``` +1) "sentinel" +2) 1) "resque-master" + 2) "html-fragments-master" + 3) "stats-master" + 4) "metadata-master" +``` + +The sentinel output is composed of the following parts: + +1. The string `sentinel`. +2. An array of master names monitored by this Sentinel instance.
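A short Python sketch that branches on the first element of the `ROLE` reply (assuming the redis-py client; `execute_command` is used since the raw reply shape is what matters here):

```python
import redis

r = redis.Redis(decode_responses=True)

reply = r.execute_command("ROLE")

if reply[0] == "master":
    offset, replicas = reply[1], reply[2]
    print(f"master at offset {offset} with {len(replicas)} replica(s)")
elif reply[0] == "slave":
    host, port, state = reply[1], reply[2], reply[3]
    print(f"replica of {host}:{port}, link state {state}")
else:  # "sentinel"
    print("sentinel monitoring:", reply[1])
```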
+ +@examples + +```cli +ROLE +``` + +**A note about the word slave used in this man page**: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated. diff --git a/commands/rpop.md b/commands/rpop.md index 85299770f1..1f66fe2711 100644 --- a/commands/rpop.md +++ b/commands/rpop.md @@ -1,20 +1,14 @@ -@complexity +Removes and returns the last elements of the list stored at `key`. -O(1) - - -Removes and returns the last element of the list stored at `key`. - -@return - -@bulk-reply: the value of the last element, or `nil` when `key` does not exist. +By default, the command pops a single element from the end of the list. +When provided with the optional `count` argument, the reply will consist of up +to `count` elements, depending on the list's length. @examples - @cli - RPUSH mylist "one" - RPUSH mylist "two" - RPUSH mylist "three" - RPOP mylist - LRANGE mylist 0 -1 - +```cli +RPUSH mylist "one" "two" "three" "four" "five" +RPOP mylist +RPOP mylist 2 +LRANGE mylist 0 -1 +``` diff --git a/commands/rpoplpush.md b/commands/rpoplpush.md index b02c59b4a8..55f38bb555 100644 --- a/commands/rpoplpush.md +++ b/commands/rpoplpush.md @@ -1,57 +1,71 @@ -@complexity - -O(1) - - Atomically returns and removes the last element (tail) of the list stored at `source`, and pushes the element at the first element (head) of the list stored at `destination`. For example: consider `source` holding the list `a,b,c`, and `destination` -holding the list `x,y,z`. Executing `RPOPLPUSH` results in `source` holding -`a,b` and `destination` holding `c,x,y,z`. +holding the list `x,y,z`. +Executing `RPOPLPUSH` results in `source` holding `a,b` and `destination` +holding `c,x,y,z`. If `source` does not exist, the value `nil` is returned and no operation is -performed. If `source` and `destination` are the same, the operation is -equivalent to removing the last element from the list and pushing it as first -element of the list, so it can be considered as a list rotation command. - -@return - -@bulk-reply: the element being popped and pushed. +performed. +If `source` and `destination` are the same, the operation is equivalent to +removing the last element from the list and pushing it as first element of the +list, so it can be considered as a list rotation command. @examples - @cli - RPUSH mylist "one" - RPUSH mylist "two" - RPUSH mylist "three" - RPOPLPUSH mylist myotherlist - LRANGE mylist 0 -1 - LRANGE myotherlist 0 -1 - -## Design pattern: safe queues - -Redis lists are often used as queues in order to exchange messages between -different programs. A program can add a message performing an `LPUSH` operation -against a Redis list (we call this program the _Producer_), while another program -(that we call _Consumer_) can process the messages performing an `RPOP` command -in order to start reading the messages starting at the oldest. - -Unfortunately, if a _Consumer_ crashes just after an `RPOP` operation, the message -is lost. `RPOPLPUSH` solves this problem since the returned message is -added to another backup list. The _Consumer_ can later remove the message -from the backup list using the `LREM` command when the message was correctly -processed. - -Another process (that we call _Helper_), can monitor the backup list to check for -timed out entries to re-push against the main queue. 
- -## Design pattern: server-side O(N) list traversal - -Using `RPOPLPUSH` with the same source and destination key, a process can -visit all the elements of an N-elements list in O(N) without transferring -the full list from the server to the client in a single `LRANGE` operation. -Note that a process can traverse the list even while other processes -are actively pushing to the list, and still no element will be skipped. - +```cli +RPUSH mylist "one" +RPUSH mylist "two" +RPUSH mylist "three" +RPOPLPUSH mylist myotherlist +LRANGE mylist 0 -1 +LRANGE myotherlist 0 -1 +``` + +## Pattern: Reliable queue + +Redis is often used as a messaging server to implement processing of background +jobs or other kinds of messaging tasks. +A simple form of queue is often obtained by pushing values into a list on the +producer side, and waiting for these values on the consumer side using `RPOP` +(with polling), or `BRPOP` if the client is better served by a blocking +operation. + +However in this context the obtained queue is not _reliable_ as messages can +be lost, for example when there is a network problem or when the consumer +crashes just after the message is received but before it can be processed. + +`RPOPLPUSH` (or `BRPOPLPUSH` for the blocking variant) offers a way to avoid +this problem: the consumer fetches the message and at the same time pushes it +into a _processing_ list. +It will use the `LREM` command in order to remove the message from the +_processing_ list once the message has been processed. + +An additional client may monitor the _processing_ list for items that remain +there for too long, pushing timed-out items into the queue +again if needed (a consumer sketch in Python appears a little further below). + +## Pattern: Circular list + +Using `RPOPLPUSH` with the same source and destination key, a client can visit +all the elements of an N-elements list, one after the other, in O(N) without +transferring the full list from the server to the client in a single `LRANGE` +operation. + +The above pattern works even if one or both of the following conditions occur: + +* There are multiple clients rotating the list: they'll fetch different + elements, until all the elements of the list are visited, and the process + restarts. +* Other clients are actively pushing new items at the end of the list. + +The above makes it very simple to implement a system where a set of items must +be processed by N workers continuously as fast as possible. +An example is a monitoring system that must check that a set of web sites are +reachable, with the smallest delay possible, using a number of parallel workers. + +Note that this implementation of workers is trivially scalable and reliable, +because even if a message is lost the item is still in the queue and will be +processed at the next iteration. diff --git a/commands/rpush.md b/commands/rpush.md index df832bff25..9b50a44450 100644 --- a/commands/rpush.md +++ b/commands/rpush.md @@ -1,27 +1,19 @@ -@complexity - -O(1) - - Insert all the specified values at the tail of the list stored at `key`. -If `key` does not exist, it is created as empty list before performing the -push operation. +If `key` does not exist, it is created as an empty list before performing the push +operation. When `key` holds a value that is not a list, an error is returned. -It is possible to push multiple elements using a single command call just specifying multiple arguments at the end of the command. Elements are inserted one after the other to the tail of the list, from the leftmost element to the rightmost element.
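The consumer side of the reliable-queue pattern described under `RPOPLPUSH` above, as a minimal Python sketch (assuming the redis-py client; the queue names and the `process` function are hypothetical):

```python
import redis

r = redis.Redis(decode_responses=True)

def process(message: str) -> None:
    print("handling", message)  # hypothetical work

while True:
    # Atomically move the next message to the processing list
    # (BRPOPLPUSH blocks until a message is available).
    message = r.brpoplpush("queue", "queue:processing", timeout=0)
    try:
        process(message)
    except Exception:
        continue  # leave it in queue:processing for the recovery monitor
    # Acknowledge: remove the processed message from the backup list.
    r.lrem("queue:processing", 1, message)
```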
So for instance the command `RPUSH mylist a b c` will result into a list containing `a` as first element, `b` as second element and `c` as third element. - -@return - -@integer-reply: the length of the list after the push operation. - -@history - -* `>= 2.4`: Accepts multiple `value` arguments. In Redis versions older than 2.4 it was possible to push a single value per command. +It is possible to push multiple elements using a single command call by just +specifying multiple arguments at the end of the command. +Elements are inserted one after the other to the tail of the list, from the +leftmost element to the rightmost element. +So for instance the command `RPUSH mylist a b c` will result in a list +containing `a` as first element, `b` as second element and `c` as third element. @examples - @cli - RPUSH mylist "hello" - RPUSH mylist "world" - LRANGE mylist 0 -1 - +```cli +RPUSH mylist "hello" +RPUSH mylist "world" +LRANGE mylist 0 -1 +``` diff --git a/commands/rpushx.md b/commands/rpushx.md index 94aa6a6bc0..88e0da657e 100644 --- a/commands/rpushx.md +++ b/commands/rpushx.md @@ -1,22 +1,14 @@ -@complexity - -O(1) - - -Inserts `value` at the tail of the list stored at `key`, only if `key` -already exists and holds a list. In contrary to `RPUSH`, no operation will -be performed when `key` does not yet exist. - -@return - -@integer-reply: the length of the list after the push operation. +Inserts specified values at the tail of the list stored at `key`, only if `key` +already exists and holds a list. +Contrary to `RPUSH`, no operation will be performed when `key` does not yet +exist. @examples - @cli - RPUSH mylist "Hello" - RPUSHX mylist "World" - RPUSHX myotherlist "World" - LRANGE mylist 0 -1 - LRANGE myotherlist 0 -1 - +```cli +RPUSH mylist "Hello" +RPUSHX mylist "World" +RPUSHX myotherlist "World" +LRANGE mylist 0 -1 +LRANGE myotherlist 0 -1 +``` diff --git a/commands/sadd.md b/commands/sadd.md index 419f13c46c..93ad38d388 100644 --- a/commands/sadd.md +++ b/commands/sadd.md @@ -1,27 +1,15 @@ -@complexity - -O(N) where N is the number of members to be added. - - -Add the specified members to the set stored at `key`. Specified members that -are already a member of this set are ignored. If `key` does not exist, a new -set is created before adding the specified members. +Add the specified members to the set stored at `key`. +Specified members that are already a member of this set are ignored. +If `key` does not exist, a new set is created before adding the specified +members. An error is returned when the value stored at `key` is not a set. -@return - -@integer-reply: the number of elements that were added to the set, not including all the elements already present into the set. - -@history - -* `>= 2.4`: Accepts multiple `member` arguments. Redis versions before 2.4 are only able to add a single member per call. - @examples - @cli - SADD myset "Hello" - SADD myset "World" - SADD myset "World" - SMEMBERS myset - +```cli +SADD myset "Hello" +SADD myset "World" +SADD myset "World" +SMEMBERS myset +``` diff --git a/commands/save.md b/commands/save.md index 8dafb3c81c..e2f32aa4d8 100644 --- a/commands/save.md +++ b/commands/save.md @@ -1,7 +1,14 @@ -@complexity +The `SAVE` command performs a **synchronous** save of the dataset, producing a +_point in time_ snapshot of all the data inside the Redis instance, in the form +of an RDB file. -@description +You almost never want to call `SAVE` in production environments where it will +block all the other clients. +Instead, `BGSAVE` is usually used.
+However, in case of issues preventing Redis from creating the background saving child +(for instance, errors in the fork(2) system call), the `SAVE` command can be a +good last resort to perform the dump of the latest dataset. -@examples +Please refer to the [persistence documentation][tp] for detailed information. -@return \ No newline at end of file +[tp]: /topics/persistence diff --git a/commands/scan.md b/commands/scan.md new file mode 100644 index 0000000000..7ef81e47e9 --- /dev/null +++ b/commands/scan.md @@ -0,0 +1,248 @@ +The `SCAN` command and the closely related commands `SSCAN`, `HSCAN` and `ZSCAN` are used in order to incrementally iterate over a collection of elements. + +* `SCAN` iterates the set of keys in the currently selected Redis database. +* `SSCAN` iterates elements of Sets types. +* `HSCAN` iterates fields of Hash types and their associated values. +* `ZSCAN` iterates elements of Sorted Set types and their associated scores. + +Since these commands allow for incremental iteration, returning only a small number of elements per call, they can be used in production without the downside of commands like `KEYS` or `SMEMBERS` that may block the server for a long time (even several seconds) when called against big collections of keys or elements. + +However, while blocking commands like `SMEMBERS` are able to provide all the elements that are part of a Set at a given moment, the `SCAN` family of commands only offers limited guarantees about the returned elements, since the collection that we incrementally iterate can change during the iteration process. + +Note that `SCAN`, `SSCAN`, `HSCAN` and `ZSCAN` all work very similarly, so this documentation covers all four commands. However an obvious difference is that in the case of `SSCAN`, `HSCAN` and `ZSCAN` the first argument is the name of the key holding the Set, Hash or Sorted Set value. The `SCAN` command does not need any key name argument as it iterates keys in the current database, so the iterated object is the database itself. + +## SCAN basic usage + +`SCAN` is a cursor-based iterator. This means that at every call of the command, the server returns an updated cursor that the user needs to use as the cursor argument in the next call. + +An iteration starts when the cursor is set to 0, and terminates when the cursor returned by the server is 0. The following is an example of SCAN iteration: + +``` +redis 127.0.0.1:6379> scan 0 +1) "17" +2) 1) "key:12" + 2) "key:8" + 3) "key:4" + 4) "key:14" + 5) "key:16" + 6) "key:17" + 7) "key:15" + 8) "key:10" + 9) "key:3" + 10) "key:7" + 11) "key:1" +redis 127.0.0.1:6379> scan 17 +1) "0" +2) 1) "key:5" + 2) "key:18" + 3) "key:0" + 4) "key:2" + 5) "key:19" + 6) "key:13" + 7) "key:6" + 8) "key:9" + 9) "key:11" +``` + +In the example above, the first call uses zero as a cursor, to start the iteration. The second call uses the cursor returned by the previous call as the first element of the reply, that is, 17. + +As you can see the **SCAN return value** is an array of two values: the first value is the new cursor to use in the next call, the second value is an array of elements. + +Since in the second call the returned cursor is 0, the server signaled to the caller that the iteration finished, and the collection was completely explored. Starting an iteration with a cursor value of 0, and calling `SCAN` until the returned cursor is 0 again is called a **full iteration**.
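The same full iteration expressed as a minimal Python loop (assuming the redis-py client, whose `scan` method returns the updated cursor and a batch of keys):

```python
import redis

r = redis.Redis(decode_responses=True)

cursor = 0
keys = []
while True:
    # Each call returns the next cursor plus a batch of keys.
    cursor, batch = r.scan(cursor=cursor)
    keys.extend(batch)
    if cursor == 0:  # the server signals the end of the full iteration
        break
print(len(keys), "keys seen (duplicates are possible)")
```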
+ +## Return value + +`SCAN`, `SSCAN`, `HSCAN` and `ZSCAN` return a two-element multi-bulk reply, where the first element is a string representing an unsigned 64 bit number (the cursor), and the second element is a multi-bulk with an array of elements. + +* `SCAN` array of elements is a list of keys. +* `SSCAN` array of elements is a list of Set members. +* `HSCAN` array of elements contains two elements, a field and a value, for every returned element of the Hash. +* `ZSCAN` array of elements contains two elements, a member and its associated score, for every returned element of the Sorted Set. + +## Scan guarantees + +The `SCAN` command, and the other commands in the `SCAN` family, are able to provide to the user a set of guarantees associated with full iterations. + +* A full iteration always retrieves all the elements that were present in the collection from the start to the end of a full iteration. This means that if a given element is inside the collection when an iteration is started, and is still there when an iteration terminates, then at some point `SCAN` returned it to the user. +* A full iteration never returns any element that was NOT present in the collection from the start to the end of a full iteration. So if an element was removed before the start of an iteration, and is never added back to the collection for all the time an iteration lasts, `SCAN` ensures that this element will never be returned. + +However because `SCAN` has very little state associated (just the cursor) it has the following drawbacks: + +* A given element may be returned multiple times. It is up to the application to handle the case of duplicated elements, for example only using the returned elements in order to perform operations that are safe when re-applied multiple times. +* Elements that were not constantly present in the collection during a full iteration may be returned or not: it is undefined. + +## Number of elements returned at every SCAN call + +`SCAN` family functions do not guarantee that the number of elements returned per call is in a given range. The commands are also allowed to return zero elements, and the client should not consider the iteration complete as long as the returned cursor is not zero. + +However the number of returned elements is reasonable, that is, in practical terms `SCAN` may return a maximum number of elements in the order of a few tens of elements when iterating a large collection, or may return all the elements of the collection in a single call when the iterated collection is small enough to be internally represented as an encoded data structure (this happens for small Sets, Hashes and Sorted Sets). + +However there is a way for the user to tune the order of magnitude of the number of returned elements per call using the **COUNT** option. + +## The COUNT option + +While `SCAN` does not provide guarantees about the number of elements returned at every iteration, it is possible to empirically adjust the behavior of `SCAN` using the **COUNT** option. Basically with COUNT the user specifies the *amount of work that should be done at every call in order to retrieve elements from the collection*. This is **just a hint** for the implementation, however generally speaking this is what you could expect most of the time from the implementation. + +* The default `COUNT` value is 10.
+* When iterating the key space, or a Set, Hash or Sorted Set that is big enough to be represented by a hash table, assuming no **MATCH** option is used, the server will usually return *count* or a few more than *count* elements per call. Please check the *why SCAN may return all the elements at once* section later in this document. +* When iterating Sets encoded as intsets (small sets composed of just integers), or Hashes and Sorted Sets encoded as ziplists (small hashes and sets composed of small individual values), usually all the elements are returned in the first `SCAN` call regardless of the `COUNT` value. + +Important: **there is no need to use the same COUNT value** for every iteration. The caller is free to change the count from one iteration to the other as required, as long as the cursor passed in the next call is the one obtained in the previous call to the command. + +## The MATCH option + +It is possible to only iterate elements matching a given glob-style pattern, similarly to the behavior of the `KEYS` command that takes a pattern as its only argument. + +To do so, just append the `MATCH <pattern>` arguments at the end of the `SCAN` command (it works with all the `SCAN` family commands). + +This is an example of iteration using **MATCH**: + +``` +redis 127.0.0.1:6379> sadd myset 1 2 3 foo foobar feelsgood +(integer) 6 +redis 127.0.0.1:6379> sscan myset 0 match f* +1) "0" +2) 1) "foo" + 2) "feelsgood" + 3) "foobar" +redis 127.0.0.1:6379> +``` + +It is important to note that the **MATCH** filter is applied after elements are retrieved from the collection, just before returning data to the client. This means that if the pattern matches very few elements inside the collection, `SCAN` will likely return no elements in most iterations. An example is shown below: + +``` +redis 127.0.0.1:6379> scan 0 MATCH *11* +1) "288" +2) 1) "key:911" +redis 127.0.0.1:6379> scan 288 MATCH *11* +1) "224" +2) (empty list or set) +redis 127.0.0.1:6379> scan 224 MATCH *11* +1) "80" +2) (empty list or set) +redis 127.0.0.1:6379> scan 80 MATCH *11* +1) "176" +2) (empty list or set) +redis 127.0.0.1:6379> scan 176 MATCH *11* COUNT 1000 +1) "0" +2) 1) "key:611" + 2) "key:711" + 3) "key:118" + 4) "key:117" + 5) "key:311" + 6) "key:112" + 7) "key:111" + 8) "key:110" + 9) "key:113" + 10) "key:211" + 11) "key:411" + 12) "key:115" + 13) "key:116" + 14) "key:114" + 15) "key:119" + 16) "key:811" + 17) "key:511" + 18) "key:11" +redis 127.0.0.1:6379> +``` + +As you can see, most of the calls returned zero elements, but in the last call a `COUNT` of 1000 was used in order to force the command to do more scanning for that iteration. + +When using [Redis Cluster](/docs/management/scaling/), the search is optimized for patterns that imply a single slot. +If a pattern can only match keys of one slot, +Redis only iterates over keys in that slot, rather than the whole database, +when searching for keys matching the pattern. +For example, with the pattern `{a}h*llo`, Redis would only try to match it with the keys in slot 15495, which the hash tag `{a}` implies. +To use a pattern with a hash tag, see [Hash tags](/docs/reference/cluster-spec/#hash-tags) in the Cluster specification for more information. + +## The TYPE option + +You can use the `!TYPE` option to ask `SCAN` to only return objects that match a given `type`, allowing you to iterate through the database looking for keys of a specific type. The **TYPE** option is only available on the whole-database `SCAN`, not `HSCAN` or `ZSCAN` etc.
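Combining the options, a brief Python sketch (assuming the redis-py client; recent versions expose the TYPE filter through a `_type` keyword on `scan_iter`, which also hides the cursor bookkeeping — the key pattern is hypothetical):

```python
import redis

r = redis.Redis(decode_responses=True)

# MATCH and COUNT together: filter server-side per batch, with COUNT as a
# work hint; scan_iter keeps calling SCAN until the cursor returns to 0.
for key in r.scan_iter(match="user:*", count=100, _type="zset"):
    print(key)
```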
+ +The `type` argument is the same string name that the `TYPE` command returns. Note a quirk where some Redis types, such as GeoHashes, HyperLogLogs, Bitmaps, and Bitfields, may internally be implemented using other Redis types, such as a string or zset, and so can't be distinguished from other keys of that same type by `SCAN`. For example, a ZSET and GEOHASH: + +``` +redis 127.0.0.1:6379> GEOADD geokey 0 0 value +(integer) 1 +redis 127.0.0.1:6379> ZADD zkey 1000 value +(integer) 1 +redis 127.0.0.1:6379> TYPE geokey +zset +redis 127.0.0.1:6379> TYPE zkey +zset +redis 127.0.0.1:6379> SCAN 0 TYPE zset +1) "0" +2) 1) "geokey" + 2) "zkey" +``` + +It is important to note that the **TYPE** filter is also applied after elements are retrieved from the database, so the option does not reduce the amount of work the server has to do to complete a full iteration, and for rare types you may receive no elements in many iterations. + +## The NOVALUES option + +When using `HSCAN`, you can use the `NOVALUES` option to make Redis return only the keys in the hash table without their corresponding values. + +``` +redis 127.0.0.1:6379> HSET myhash a 1 b 2 +(integer) 2 +redis 127.0.0.1:6379> HSCAN myhash 0 +1) "0" +2) 1) "a" + 2) "1" + 3) "b" + 4) "2" +redis 127.0.0.1:6379> HSCAN myhash 0 NOVALUES +1) "0" +2) 1) "a" + 2) "b" +``` + +## Multiple parallel iterations + +It is possible for an infinite number of clients to iterate the same collection at the same time, as the full state of the iterator is in the cursor, which is obtained and returned to the client at every call. No server-side state is kept at all. + +## Terminating iterations in the middle + +Since there is no server-side state, and the full state is captured by the cursor, the caller is free to terminate an iteration half-way without signaling this to the server in any way. An infinite number of iterations can be started and never terminated without any issue. + +## Calling SCAN with a corrupted cursor + +Calling `SCAN` with a broken, negative, out-of-range, or otherwise invalid cursor will result in undefined behavior but never in a crash. What will be undefined is that the guarantees about the returned elements can no longer be ensured by the `SCAN` implementation. + +The only valid cursors to use are: + +* The cursor value of 0 when starting an iteration. +* The cursor returned by the previous call to SCAN in order to continue the iteration. + +## Guarantee of termination + +The `SCAN` algorithm is guaranteed to terminate only if the size of the iterated collection remains bounded to a given maximum size; otherwise iterating a collection that always grows may result in `SCAN` never terminating a full iteration. + +This is easy to see intuitively: if the collection grows there is more and more work to do in order to visit all the possible elements, and the ability to terminate the iteration depends on the number of calls to `SCAN` and its COUNT option value compared with the rate at which the collection grows. + +## Why SCAN may return all the items of an aggregate data type in a single call? + +In the `COUNT` option documentation, we state that sometimes this family of commands may return all the elements of a Set, Hash or Sorted Set at once in a single call, regardless of the `COUNT` option value. The reason why this happens is that the cursor-based iterator can be implemented, and is useful, only when the aggregate data type that we are scanning is represented as a hash table.
However Redis uses a [memory optimization](/topics/memory-optimization) where small aggregate data types, until they reach a given number of items or a given max size of single elements, are represented using a compact single-allocation packed encoding. When this is the case, `SCAN` has no meaningful cursor to return, and must iterate the whole data structure at once, so the only sane behavior it has is to return everything in a call. + +However once the data structures are bigger and are promoted to use real hash tables, the `SCAN` family of commands will revert to the normal behavior. Note that since this special behavior of returning all the elements is true only for small aggregates, it has no effect on the command complexity or latency. However the exact limits to get converted into real hash tables are [user configurable](/topics/memory-optimization), so the maximum number of elements you can see returned in a single call depends on how big an aggregate data type could be and still use the packed representation. + +Also note that this behavior is specific to `SSCAN`, `HSCAN` and `ZSCAN`. `SCAN` itself never shows this behavior because the key space is always represented by hash tables. + +## Further reading + +For more information about managing keys, please refer to the [Redis Keyspace](/docs/manual/keyspace) tutorial. + +## Additional examples + +Iteration of a Hash value. + +``` +redis 127.0.0.1:6379> hmset hash name Jack age 33 +OK +redis 127.0.0.1:6379> hscan hash 0 +1) "0" +2) 1) "name" + 2) "Jack" + 3) "age" + 4) "33" +``` diff --git a/commands/scard.md b/commands/scard.md index 0a4a29f5b3..1bbbc0c8bd 100644 --- a/commands/scard.md +++ b/commands/scard.md @@ -1,19 +1,9 @@ -@complexity - -O(1) - - Returns the set cardinality (number of elements) of the set stored at `key`. -@return - -@integer-reply: the cardinality (number of elements) of the set, or `0` if -`key` does not exist. - @examples - @cli - SADD myset "Hello" - SADD myset "World" - SCARD myset - +```cli +SADD myset "Hello" +SADD myset "World" +SCARD myset +``` diff --git a/commands/script-debug.md b/commands/script-debug.md new file mode 100644 index 0000000000..5a2e845f69 --- /dev/null +++ b/commands/script-debug.md @@ -0,0 +1,22 @@ +Set the debug mode for subsequent scripts executed with `EVAL`. Redis includes a +complete Lua debugger, codenamed LDB, that can be used to make the task of +writing complex scripts much simpler. In debug mode Redis acts as a remote +debugging server and a client, such as `redis-cli`, can execute scripts step by +step, set breakpoints, inspect variables and more - for additional information +about LDB refer to the [Redis Lua debugger](/topics/ldb) page. + +**Important note:** avoid debugging Lua scripts using your Redis production +server. Use a development server instead. + +LDB can be enabled in one of two modes: asynchronous or synchronous. In +asynchronous mode the server creates a forked debugging session that does not +block and all changes to the data are **rolled back** after the session +finishes, so debugging can be restarted using the same initial state. The +alternative synchronous debug mode blocks the server while the debugging session +is active and retains all changes to the data set once it ends. + +* `YES`. Enable non-blocking asynchronous debugging of Lua scripts (changes are discarded). +* `!SYNC`. Enable blocking synchronous debugging of Lua scripts (saves changes to data). +* `NO`. Disables script debug mode.
+ +For more information about `EVAL` scripts please refer to [Introduction to Eval Scripts](/topics/eval-intro). diff --git a/commands/script-exists.md b/commands/script-exists.md new file mode 100644 index 0000000000..5681628f25 --- /dev/null +++ b/commands/script-exists.md @@ -0,0 +1,11 @@ +Returns information about the existence of the scripts in the script cache. + +This command accepts one or more SHA1 digests and returns a list of ones or +zeros to signal whether or not the scripts are already defined inside the script +cache. +This can be useful before a pipelining operation to ensure that scripts are +loaded (and if not, to load them using `SCRIPT LOAD`) so that the pipelining +operation can be performed solely using `EVALSHA` instead of `EVAL` to save +bandwidth. + +For more information about `EVAL` scripts please refer to [Introduction to Eval Scripts](/topics/eval-intro). diff --git a/commands/script-flush.md b/commands/script-flush.md new file mode 100644 index 0000000000..d1e65a109d --- /dev/null +++ b/commands/script-flush.md @@ -0,0 +1,15 @@ +Flush the Lua scripts cache. + +By default, `SCRIPT FLUSH` will synchronously flush the cache. +Starting with Redis 6.2, setting the **lazyfree-lazy-user-flush** configuration directive to "yes" changes the default flush mode to asynchronous. + +It is possible to use one of the following modifiers to dictate the flushing mode explicitly: + +* `ASYNC`: flushes the cache asynchronously +* `!SYNC`: flushes the cache synchronously + +For more information about `EVAL` scripts please refer to [Introduction to Eval Scripts](/topics/eval-intro). + +## Behavior change history + +* `>= 6.2.0`: Default flush behavior now configurable by the **lazyfree-lazy-user-flush** configuration directive. \ No newline at end of file diff --git a/commands/script-help.md b/commands/script-help.md new file mode 100644 index 0000000000..ed745e1389 --- /dev/null +++ b/commands/script-help.md @@ -0,0 +1 @@ +The `SCRIPT HELP` command returns a helpful text describing the different subcommands. diff --git a/commands/script-kill.md b/commands/script-kill.md new file mode 100644 index 0000000000..b6dae1486e --- /dev/null +++ b/commands/script-kill.md @@ -0,0 +1,15 @@ +Kills the currently executing `EVAL` script, assuming no write operation was yet +performed by the script. + +This command is mainly useful to kill a script that is running for too long +(for instance, because it entered an infinite loop due to a bug). +The script will be killed, and the client currently blocked in `EVAL` will see +the command returning with an error. + +If the script has already performed write operations, it cannot be killed in this +way because it would violate Lua's script atomicity contract. +In such a case, only `SHUTDOWN NOSAVE` can kill the script, killing +the Redis process in a hard way and preventing it from persisting half-written +information. + +For more information about `EVAL` scripts please refer to [Introduction to Eval Scripts](/topics/eval-intro). diff --git a/commands/script-load.md b/commands/script-load.md new file mode 100644 index 0000000000..e85c6de628 --- /dev/null +++ b/commands/script-load.md @@ -0,0 +1,12 @@ +Load a script into the scripts cache, without executing it. +After the specified script is loaded into the script cache it will be callable +using `EVALSHA` with the correct SHA1 digest of the script, exactly like after +the first successful invocation of `EVAL`.
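Putting `SCRIPT LOAD`, `SCRIPT EXISTS` and `EVALSHA` together, a minimal Python sketch (assuming the redis-py client; the script body and key name are hypothetical):

```python
import redis

r = redis.Redis(decode_responses=True)

script = "return redis.call('GET', KEYS[1])"

# Load once; SCRIPT LOAD returns the SHA1 digest without executing.
sha = r.script_load(script)

# Before a pipeline of EVALSHA calls, make sure the script is cached.
if not r.script_exists(sha)[0]:
    sha = r.script_load(script)

r.set("mykey", "Hello")
print(r.evalsha(sha, 1, "mykey"))  # EVALSHA: 1 key, then the key names
```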
+ +The script is guaranteed to stay in the script cache forever (unless `SCRIPT +FLUSH` is called). + +The command works in the same way even if the script was already present in the +script cache. + +For more information about `EVAL` scripts please refer to [Introduction to Eval Scripts](/topics/eval-intro). diff --git a/commands/script.md b/commands/script.md new file mode 100644 index 0000000000..a7a41d8ac6 --- /dev/null +++ b/commands/script.md @@ -0,0 +1,3 @@ +This is a container command for script management commands. + +To see the list of available commands you can call `SCRIPT HELP`. diff --git a/commands/sdiff.md b/commands/sdiff.md index 3dcc2b83cf..7815877126 100644 --- a/commands/sdiff.md +++ b/commands/sdiff.md @@ -1,31 +1,25 @@ -@complexity - -O(N) where N is the total number of elements in all given sets. - Returns the members of the set resulting from the difference between the first set and all the successive sets. For example: - key1 = {a,b,c,d} - key2 = {c} - key3 = {a,c,e} - SDIFF key1 key2 key3 = {b,d} +``` +key1 = {a,b,c,d} +key2 = {c} +key3 = {a,c,e} +SDIFF key1 key2 key3 = {b,d} +``` Keys that do not exist are considered to be empty sets. -@return - -@multi-bulk-reply: list with members of the resulting set. - @examples - @cli - SADD key1 "a" - SADD key1 "b" - SADD key1 "c" - SADD key2 "c" - SADD key2 "d" - SADD key2 "e" - SDIFF key1 key2 - +```cli +SADD key1 "a" +SADD key1 "b" +SADD key1 "c" +SADD key2 "c" +SADD key2 "d" +SADD key2 "e" +SDIFF key1 key2 +``` diff --git a/commands/sdiffstore.md b/commands/sdiffstore.md index 51a25c98aa..23cd591532 100644 --- a/commands/sdiffstore.md +++ b/commands/sdiffstore.md @@ -1,12 +1,17 @@ -@complexity - -O(N) where N is the total number of elements in all given sets. - -This command is equal to `SDIFF`, but instead of returning the resulting set, -it is stored in `destination`. +This command is equal to `SDIFF`, but instead of returning the resulting set, it +is stored in `destination`. If `destination` already exists, it is overwritten. -@return +@examples -@integer-reply: the number of elements in the resulting set. +```cli +SADD key1 "a" +SADD key1 "b" +SADD key1 "c" +SADD key2 "c" +SADD key2 "d" +SADD key2 "e" +SDIFFSTORE key key1 key2 +SMEMBERS key +``` diff --git a/commands/select.md b/commands/select.md index b0fc4f74df..5f76dedbab 100644 --- a/commands/select.md +++ b/commands/select.md @@ -1,9 +1,10 @@ -@description +Select the Redis logical database having the specified zero-based numeric index. +New connections always use the database 0. -Select the DB with having the specified zero-based numeric index. -New connections always use DB 0. +Selectable Redis databases are a form of namespacing: all databases are still persisted in the same RDB / AOF file. However different databases can have keys with the same name, and commands like `FLUSHDB`, `SWAPDB` or `RANDOMKEY` work on specific databases. -@return +In practical terms, Redis databases should be used to separate different keys belonging to the same application (if needed), and not to use a single Redis instance for multiple unrelated applications. -@status-reply +When using Redis Cluster, the `SELECT` command cannot be used, since Redis Cluster only supports database zero. In the case of a Redis Cluster, having multiple databases would be useless and an unnecessary source of complexity. Commands operating atomically on a single database would not be possible with the Redis Cluster design and goals. 
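As an illustration, client libraries typically pin a connection to one logical database via a connection parameter instead of issuing `SELECT` manually; a brief Python sketch (assuming the redis-py client; key names are hypothetical):

```python
import redis

# Each connection (pool) is pinned to one logical database.
r0 = redis.Redis(db=0)
r3 = redis.Redis(db=3)

r0.set("samename", "in db 0")
r3.set("samename", "in db 3")

# Same key name, different namespaces.
print(r0.get("samename"), r3.get("samename"))
```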
+Since the currently selected database is a property of the connection, clients should track the currently selected database and re-select it on reconnection. While there is no command to query the selected database on the current connection, the `CLIENT LIST` output shows, for each client, the currently selected database. diff --git a/commands/set.md b/commands/set.md index 4edab2faf8..1dc7322e96 100644 --- a/commands/set.md +++ b/commands/set.md @@ -1,18 +1,59 @@ -@complexity +Set `key` to hold the string `value`. +If `key` already holds a value, it is overwritten, regardless of its type. +Any previous time to live associated with the key is discarded on a successful `SET` operation. -O(1) +## Options +The `SET` command supports a set of options that modify its behavior: -Set `key` to hold the string `value`. If `key` already holds a value, it is -overwritten, regardless of its type. +* `EX` *seconds* -- Set the specified expire time, in seconds (a positive integer). +* `PX` *milliseconds* -- Set the specified expire time, in milliseconds (a positive integer). +* `EXAT` *timestamp-seconds* -- Set the specified Unix time at which the key will expire, in seconds (a positive integer). +* `PXAT` *timestamp-milliseconds* -- Set the specified Unix time at which the key will expire, in milliseconds (a positive integer). +* `NX` -- Only set the key if it does not already exist. +* `XX` -- Only set the key if it already exists. +* `KEEPTTL` -- Retain the time to live associated with the key. +* `!GET` -- Return the old string stored at key, or nil if key did not exist. An error is returned and `SET` aborted if the value stored at key is not a string. -@return - -@status-reply: always `OK` since `SET` can't fail. +Note: Since the `SET` command options can replace `SETNX`, `SETEX`, `PSETEX`, `GETSET`, it is possible that in future versions of Redis these commands will be deprecated and finally removed. @examples - @cli - SET mykey "Hello" - GET mykey +```cli +SET mykey "Hello" +GET mykey + +SET anotherkey "will expire in a minute" EX 60 +``` + +### Code examples + +{{< clients-example set_and_get />}} + +## Patterns + +**Note:** The following pattern is discouraged in favor of [the Redlock algorithm](https://redis.io/topics/distlock) which is only a bit more complex to implement, but offers better guarantees and is fault tolerant. + +The command `SET resource-name anystring NX EX max-lock-time` is a simple way to implement a locking system with Redis. + +A client can acquire the lock if the above command returns `OK` (or retry after some time if the command returns Nil), and remove the lock just using `DEL`. + +The lock will be auto-released after the expire time is reached. + +It is possible to make this system more robust by modifying the unlock schema as follows: + +* Instead of setting a fixed string, set a non-guessable large random string, called a token. +* Instead of releasing the lock with `DEL`, send a script that only removes the key if the value matches. + +This prevents a client from releasing the lock after the expire time has passed, deleting a key that was created by another client that acquired the lock later. + +An example unlock script would be similar to the following: + + if redis.call("get",KEYS[1]) == ARGV[1] + then + return redis.call("del",KEYS[1]) + else + return 0 + end +The script should be called with `EVAL ...script... 
1 resource-name token-value` diff --git a/commands/setbit.md b/commands/setbit.md index bf4cb1ebcd..a2e27ae840 100644 --- a/commands/setbit.md +++ b/commands/setbit.md @@ -1,35 +1,157 @@ -@complexity - -O(1) - - Sets or clears the bit at _offset_ in the string value stored at _key_. The bit is either set or cleared depending on _value_, which can be either 0 or -1. When _key_ does not exist, a new string value is created. The string is -grown to make sure it can hold a bit at _offset_. The _offset_ argument is -required to be greater than or equal to 0, and smaller than 2^32 (this -limits bitmaps to 512MB). When the string at _key_ is grown, added -bits are set to 0. +1. + +When _key_ does not exist, a new string value is created. +The string is grown to make sure it can hold a bit at _offset_. +The _offset_ argument is required to be greater than or equal to 0, and smaller +than 2^32 (this limits bitmaps to 512MB). +When the string at _key_ is grown, added bits are set to 0. **Warning**: When setting the last possible bit (_offset_ equal to 2^32 -1) and the string value stored at _key_ does not yet hold a string value, or holds a small string value, Redis needs to allocate all intermediate memory which can -block the server for some time. On a 2010 MacBook Pro, setting bit number -2^32 -1 (512MB allocation) takes ~300ms, setting bit number 2^30 -1 (128MB -allocation) takes ~80ms, setting bit number 2^28 -1 (32MB allocation) takes -~30ms and setting bit number 2^26 -1 (8MB allocation) takes ~8ms. Note that -once this first allocation is done, subsequent calls to `SETBIT` for the same -_key_ will not have the allocation overhead. +block the server for some time. +On a 2010 MacBook Pro, setting bit number 2^32 -1 (512MB allocation) takes +~300ms, setting bit number 2^30 -1 (128MB allocation) takes ~80ms, setting bit +number 2^28 -1 (32MB allocation) takes ~30ms and setting bit number 2^26 -1 (8MB +allocation) takes ~8ms. +Note that once this first allocation is done, subsequent calls to `SETBIT` for +the same _key_ will not have the allocation overhead. -@return +@examples -@integer-reply: the original bit value stored at _offset_. +```cli +SETBIT mykey 7 1 +SETBIT mykey 7 0 +GET mykey +``` -@examples +## Pattern: accessing the entire bitmap + +There are cases when you need to set all the bits of a single bitmap at once, for +example when initializing it to a default non-zero value. It is possible to do +this with multiple calls to the `SETBIT` command, one for each bit that needs to +be set. However, as an optimization you can use a single `SET` command to set +the entire bitmap. + +Bitmaps are not an actual data type, but a set of bit-oriented operations +defined on the String type (for more information refer to the +[Bitmaps section of the Data Types Introduction page][ti]). This means that +bitmaps can be used with string commands, and most importantly with `SET` and +`GET`. + +Because Redis' strings are binary-safe, a bitmap is trivially encoded as a byte +stream. The first byte of the string corresponds to offsets 0..7 of +the bitmap, the second byte to the 8..15 range, and so forth.
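+The single-`SET` initialization mentioned above can look like this (a sketch;
+`\xff` is one byte with all bits set, so this initializes the first 16 bits to 1):
+
+```
+> SET mybitmap "\xff\xff"
+OK
+> GETBIT mybitmap 0
+(integer) 1
+> GETBIT mybitmap 15
+(integer) 1
+```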
+ +For example, after setting a few bits, getting the string value of the bitmap +would look like this: + +``` +> SETBIT bitmapsarestrings 2 1 +> SETBIT bitmapsarestrings 3 1 +> SETBIT bitmapsarestrings 5 1 +> SETBIT bitmapsarestrings 10 1 +> SETBIT bitmapsarestrings 11 1 +> SETBIT bitmapsarestrings 14 1 +> GET bitmapsarestrings +"42" +``` + +By getting the string representation of a bitmap, the client can then parse the +response's bytes by extracting the bit values using native bit operations in its +native programming language. Symmetrically, it is also possible to set an entire +bitmap by performing the bits-to-bytes encoding in the client and calling `SET` +with the resultant string. + +[ti]: /topics/data-types-intro#bitmaps + +## Pattern: setting multiple bits + +`SETBIT` excels at setting single bits, and can be called several times when +multiple bits need to be set. To optimize this operation you can replace +multiple `SETBIT` calls with a single call to the variadic `BITFIELD` command +and the use of fields of type `u1`. + +For example, the example above could be replaced by: + +``` +> BITFIELD bitsinabitmap SET u1 2 1 SET u1 3 1 SET u1 5 1 SET u1 10 1 SET u1 11 1 SET u1 14 1 +``` + +## Advanced Pattern: accessing bitmap ranges + +It is also possible to use the `GETRANGE` and `SETRANGE` string commands to +efficiently access a range of bit offsets in a bitmap. Below is a sample +implementation in idiomatic Redis Lua scripting that can be run with the `EVAL` +command: + +``` +--[[ +Sets a bitmap range + +Bitmaps are stored as Strings in Redis. A range spans one or more bytes, +so we can call `SETRANGE` when entire bytes need to be set instead of flipping +individual bits. Also, to avoid multiple internal memory allocations in +Redis, we traverse in reverse. +Expected input: + KEYS[1] - bitfield key + ARGV[1] - start offset (0-based, inclusive) + ARGV[2] - end offset (same, should be bigger than start, no error checking) + ARGV[3] - value (should be 0 or 1, no error checking) +]]-- + +-- A helper function to stringify a binary string to semi-binary format +local function tobits(str) + local r = '' + for i = 1, string.len(str) do + local c = string.byte(str, i) + local b = ' ' + for j = 0, 7 do + b = tostring(bit.band(c, 1)) .. b + c = bit.rshift(c, 1) + end + r = r .. b + end + return r +end + +-- Main +local k = KEYS[1] +local s, e, v = tonumber(ARGV[1]), tonumber(ARGV[2]), tonumber(ARGV[3]) + +-- First treat the dangling bits in the last byte +local ms, me = s % 8, (e + 1) % 8 +if me > 0 then + local t = math.max(e - me + 1, s) + for i = e, t, -1 do + redis.call('SETBIT', k, i, v) + end + e = t +end + +-- Then the danglings in the first byte +if ms > 0 then + local t = math.min(s - ms + 7, e) + for i = s, t, 1 do + redis.call('SETBIT', k, i, v) + end + s = t + 1 +end - @cli - SETBIT mykey 7 1 - SETBIT mykey 7 0 - GET mykey +-- Set a range accordingly, if at all +local rs, re = s / 8, (e + 1) / 8 +local rl = re - rs +if rl > 0 then + local b = '\255' + if 0 == v then + b = '\0' + end + redis.call('SETRANGE', k, rs, string.rep(b, rl)) +end +``` +**Note:** the implementation for getting a range of bit offsets from a bitmap is +left as an exercise to the reader. diff --git a/commands/setex.md b/commands/setex.md index 64adf98d9d..c56718a11a 100644 --- a/commands/setex.md +++ b/commands/setex.md @@ -1,30 +1,20 @@ -@complexity - -O(1) - - Set `key` to hold the string `value` and set `key` to timeout after a given -number of seconds. 
This command is equivalent to executing the following -commands: +number of seconds. +This command is equivalent to: - SET mykey value - EXPIRE mykey seconds - -`SETEX` is atomic, and can be reproduced by using the previous two commands -inside an `MULTI`/`EXEC` block. It is provided as a faster alternative to the -given sequence of operations, because this operation is very common when Redis -is used as a cache. +``` +SET key value EX seconds +``` An error is returned when `seconds` is invalid. -@return - -@status-reply - @examples - @cli - SETEX mykey 10 "Hello" - TTL mykey - GET mykey +```cli +SETEX mykey 10 "Hello" +TTL mykey +GET mykey +``` +## See also +`TTL` \ No newline at end of file diff --git a/commands/setnx.md b/commands/setnx.md index 100490b51d..72f5ac6a63 100644 --- a/commands/setnx.md +++ b/commands/setnx.md @@ -1,50 +1,47 @@ -@complexity - -O(1) - - Set `key` to hold string `value` if `key` does not exist. -In that case, it is equal to `SET`. When `key` already holds -a value, no operation is performed. +In that case, it is equal to `SET`. +When `key` already holds a value, no operation is performed. `SETNX` is short for "**SET** if **N**ot e**X**ists". -@return - -@integer-reply, specifically: - -* `1` if the key was set -* `0` if the key was not set - @examples - @cli - SETNX mykey "Hello" - SETNX mykey "World" - GET mykey +```cli +SETNX mykey "Hello" +SETNX mykey "World" +GET mykey +``` ## Design pattern: Locking with `!SETNX` -`SETNX` can be used as a locking primitive. For example, to acquire -the lock of the key `foo`, the client could try the following: +**Please note that:** - SETNX lock.foo +1. The following pattern is discouraged in favor of [the Redlock algorithm](https://redis.io/topics/distlock) which is only a bit more complex to implement, but offers better guarantees and is fault tolerant. +2. We document the old pattern anyway because certain existing implementations link to this page as a reference. Moreover it is an interesting example of how Redis commands can be used in order to mount programming primitives. +3. Anyway even assuming a single-instance locking primitive, starting with 2.6.12 it is possible to create a much simpler locking primitive, equivalent to the one discussed here, using the `SET` command to acquire the lock, and a simple Lua script to release the lock. The pattern is documented in the `SET` command page. -If `SETNX` returns `1` the client acquired the lock, setting the `lock.foo` -key to the Unix time at which the lock should no longer be considered valid. +That said, `SETNX` can be used, and was historically used, as a locking primitive. For example, to acquire the lock of the key `foo`, the client could try the +following: + +``` +SETNX lock.foo +``` + +If `SETNX` returns `1` the client acquired the lock, setting the `lock.foo` key +to the Unix time at which the lock should no longer be considered valid. The client will later use `DEL lock.foo` in order to release the lock. -If `SETNX` returns `0` the key is already locked by some other client. We can -either return to the caller if it's a non blocking lock, or enter a -loop retrying to hold the lock until we succeed or some kind of timeout -expires. +If `SETNX` returns `0` the key is already locked by some other client. +We can either return to the caller if it's a non blocking lock, or enter a loop +retrying to hold the lock until we succeed or some kind of timeout expires. 
### Handling deadlocks In the above locking algorithm there is a problem: what happens if a client fails, crashes, or is otherwise not able to release the lock? -It's possible to detect this condition because the lock key contains a -UNIX timestamp. If such a timestamp is equal to the current Unix time the lock -is no longer valid. +It's possible to detect this condition because the lock key contains a UNIX +timestamp. +If such a timestamp is equal to the current Unix time the lock is no longer +valid. When this happens we can't just call `DEL` against the key to remove the lock and then try to issue a `SETNX`, as there is a race condition here, when @@ -62,25 +59,34 @@ multiple clients detected an expired lock and are trying to release it. Fortunately, it's possible to avoid this issue using the following algorithm. Let's see how C4, our sane client, uses the good algorithm: -* C4 sends `SETNX lock.foo` in order to acquire the lock -* The crashed client C3 still holds it, so Redis will reply with `0` to C4. -* C4 sends `GET lock.foo` to check if the lock expired. If it is not, it will - sleep for some time and retry from the start. -* Instead, if the lock is expired because the Unix time at `lock.foo` is older - than the current Unix time, C4 tries to perform: +* C4 sends `SETNX lock.foo` in order to acquire the lock + +* The crashed client C3 still holds it, so Redis will reply with `0` to C4. + +* C4 sends `GET lock.foo` to check if the lock expired. + If it is not, it will sleep for some time and retry from the start. + +* Instead, if the lock is expired because the Unix time at `lock.foo` is older + than the current Unix time, C4 tries to perform: - GETSET lock.foo + ``` + GETSET lock.foo + ``` -* Because of the `GETSET` semantic, C4 can check if the old value stored - at `key` is still an expired timestamp. If it is, the lock was acquired. -* If another client, for instance C5, was faster than C4 and acquired - the lock with the `GETSET` operation, the C4 `GETSET` operation will return a non - expired timestamp. C4 will simply restart from the first step. Note that even - if C4 set the key a bit a few seconds in the future this is not a problem. +* Because of the `GETSET` semantic, C4 can check if the old value stored at + `key` is still an expired timestamp. + If it is, the lock was acquired. -**Important note**: In order to make this locking algorithm more robust, a client -holding a lock should always check the timeout didn't expire before unlocking -the key with `DEL` because client failures can be complex, not just crashing -but also blocking a lot of time against some operations and trying to issue -`DEL` after a lot of time (when the LOCK is already held by another client). +* If another client, for instance C5, was faster than C4 and acquired the lock + with the `GETSET` operation, the C4 `GETSET` operation will return a non + expired timestamp. + C4 will simply restart from the first step. + Note that even if C4 set the key a few seconds in the future this is + not a problem. +In order to make this locking algorithm more robust, a +client holding a lock should always check that the timeout didn't expire before +unlocking the key with `DEL`, because client failures can be complex, not just +crashing but also blocking for a long time on some operations and trying +to issue `DEL` much later (when the lock is already held by another +client).
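+A sketch of the takeover step from the algorithm above (times are illustrative):
+
+```
+> GET lock.foo
+"1700000000"
+> GETSET lock.foo 1700000120
+"1700000000"
+```
+
+Since the old value returned by `GETSET` is still the expired timestamp, no
+other client raced us, and the lock is acquired.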
diff --git a/commands/setrange.md b/commands/setrange.md index e9f61f3be9..3c7aa05b01 100644 --- a/commands/setrange.md +++ b/commands/setrange.md @@ -1,52 +1,44 @@ -@complexity - -O(1), not counting the time taken to copy the new string in place. Usually, -this string is very small so the amortized complexity is O(1). Otherwise, -complexity is O(M) with M being the length of the _value_ argument. - -Overwrites part of the string stored at _key_, starting at the specified -offset, for the entire length of _value_. If the offset is larger than the -current length of the string at _key_, the string is padded with zero-bytes to -make _offset_ fit. Non-existing keys are considered as empty strings, so this -command will make sure it holds a string large enough to be able to set _value_ -at _offset_. +Overwrites part of the string stored at _key_, starting at the specified offset, +for the entire length of _value_. +If the offset is larger than the current length of the string at _key_, the +string is padded with zero-bytes to make _offset_ fit. +Non-existing keys are considered as empty strings, so this command will make +sure it holds a string large enough to be able to set _value_ at _offset_. Note that the maximum offset that you can set is 2^29 -1 (536870911), as Redis -Strings are limited to 512 megabytes. If you need to grow beyond this size, you -can use multiple keys. +Strings are limited to 512 megabytes. +If you need to grow beyond this size, you can use multiple keys. **Warning**: When setting the last possible byte and the string value stored at _key_ does not yet hold a string value, or holds a small string value, Redis needs to allocate all intermediate memory which can block the server for some -time. On a 2010 MacBook Pro, setting byte number 536870911 (512MB allocation) -takes ~300ms, setting byte number 134217728 (128MB allocation) takes ~80ms, -setting bit number 33554432 (32MB allocation) takes ~30ms and setting bit -number 8388608 (8MB allocation) takes ~8ms. Note that once this first -allocation is done, subsequent calls to `SETRANGE` for the same _key_ will not -have the allocation overhead. +time. +On a 2010 MacBook Pro, setting byte number 536870911 (512MB allocation) takes +~300ms, setting byte number 134217728 (128MB allocation) takes ~80ms, setting +byte number 33554432 (32MB allocation) takes ~30ms and setting byte number 8388608 +(8MB allocation) takes ~8ms. +Note that once this first allocation is done, subsequent calls to `SETRANGE` for +the same _key_ will not have the allocation overhead. ## Patterns -Thanks to `SETRANGE` and the analogous `GETRANGE` commands, you can use Redis strings -as a linear array with O(1) random access. This is a very fast and -efficient storage in many real world use cases. - -@return - -@integer-reply: the length of the string after it was modified by the command. +Thanks to `SETRANGE` and the analogous `GETRANGE` commands, you can use Redis +strings as a linear array with O(1) random access. +This is very fast and efficient storage in many real-world use cases.
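+For instance, with fixed-size four-byte records, record `i` lives at offset
+`i*4` (a sketch; the key name is illustrative):
+
+```
+> SETRANGE recs 8 "DATA"
+(integer) 12
+> GETRANGE recs 8 11
+"DATA"
+```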
@examples Basic usage: - @cli - SET key1 "Hello World" - SETRANGE key1 6 "Redis" - GET key1 +```cli +SET key1 "Hello World" +SETRANGE key1 6 "Redis" +GET key1 +``` Example of zero padding: - @cli - SETRANGE key2 6 "Redis" - GET key2 - +```cli +SETRANGE key2 6 "Redis" +GET key2 +``` diff --git a/commands/shutdown.md b/commands/shutdown.md index a3abdd1d81..8bef39752d 100644 --- a/commands/shutdown.md +++ b/commands/shutdown.md @@ -1,21 +1,67 @@ The command behavior is the following: +* If there are any replicas lagging behind in replication: + * Pause clients attempting to write by performing a `CLIENT PAUSE` with the `WRITE` option. + * Wait up to the configured `shutdown-timeout` (default 10 seconds) for replicas to catch up with the replication offset. * Stop all the clients. * Perform a blocking SAVE if at least one **save point** is configured. * Flush the Append Only File if AOF is enabled. * Quit the server. -If persistence is enabled this commands makes sure that Redis is switched -off without the lost of any data. This is not guaranteed if the client uses -simply `SAVE` and then `QUIT` because other clients may alter the DB data -between the two commands. +If persistence is enabled this command makes sure that Redis is switched off +without any data loss. -Note: A Redis instance that is configured for not persisting on disk -(no AOF configured, nor "save" directive) will not dump the RDB file on -`SHUTDOWN`, as usually you don't want Redis instances used only for caching -to block on when shutting down. +Note: A Redis instance that is configured for not persisting on disk (no AOF +configured, nor "save" directive) will not dump the RDB file on `SHUTDOWN`, as +usually you don't want Redis instances used only for caching to block when +shutting down. -@return +Also note: If Redis receives one of the signals `SIGTERM` and `SIGINT`, the same shutdown sequence is performed. +See also [Signal Handling](/topics/signals). -@status-reply on error. On success nothing is returned since the server -quits and the connection is closed. +## Modifiers + +It is possible to specify optional modifiers to alter the behavior of the command. +Specifically: + +* **SAVE** will force a DB saving operation even if no save points are configured. +* **NOSAVE** will prevent a DB saving operation even if one or more save points are configured. +* **NOW** skips waiting for lagging replicas, i.e. it bypasses the first step in the shutdown sequence. +* **FORCE** ignores any errors that would normally prevent the server from exiting. + For details, see the following section. +* **ABORT** cancels an ongoing shutdown and cannot be combined with other flags. + +## Conditions where a SHUTDOWN fails + +When a save point is configured or the **SAVE** modifier is specified, the shutdown may fail if the RDB file can't be saved. +In that case, the server continues to run in order to ensure no data loss. +This may be bypassed using the **FORCE** modifier, causing the server to exit anyway. + +When the Append Only File is enabled the shutdown may fail because the +system is in a state that does not allow it to safely persist +to disk immediately. + +Normally if there is an AOF child process performing an AOF rewrite, Redis +will simply kill it and exit. +However, there are situations where it is unsafe to do so and, unless the **FORCE** modifier is specified, the **SHUTDOWN** command will be refused with an error instead.
+This happens in the following situations: + +* The user just turned on AOF, and the server triggered the first AOF rewrite in order to create the initial AOF file. In this context, stopping will result in losing the dataset entirely: once restarted, the server will potentially have AOF enabled without having any AOF file at all. +* A replica with AOF enabled, reconnected with its master, performed a full resynchronization, and restarted the AOF file, triggering the initial AOF creation process. In this case not completing the AOF rewrite is dangerous because the latest dataset received from the master would be lost. The new master can actually even be a different instance (if the **REPLICAOF** or **SLAVEOF** command was used in order to reconfigure the replica), so it is important to finish the AOF rewrite and start with the correct dataset, representing the data that was in memory when the server was terminated. + +There are situations when we just want to terminate a Redis instance ASAP, regardless of its content. +In such a case, the command **SHUTDOWN NOW NOSAVE FORCE** can be used. +In versions before 7.0, where the **NOW** and **FORCE** flags are not available, the right combination of commands is to send a **CONFIG appendonly no** followed by a **SHUTDOWN NOSAVE**. +The first command will turn off the AOF if needed, and will terminate the AOF rewriting child if there is one active. +The second command will execute without any problem since the AOF is no longer enabled. + +## Minimize the risk of data loss + +Since Redis 7.0, the server waits for lagging replicas up to a configurable `shutdown-timeout`, by default 10 seconds, before shutting down. +This provides a best effort to minimize the risk of data loss in a situation where no save points are configured and AOF is disabled. +Before version 7.0, shutting down a heavily loaded master node in a diskless setup was more likely to result in data loss. +To minimize the risk of data loss in such setups, it's advised to trigger a manual `FAILOVER` (or `CLUSTER FAILOVER`) to demote the master to a replica and promote one of the replicas to be the new master, before shutting down a master node. + +## Behavior change history + +* `>= 7.0.0`: Introduced waiting for lagging replicas before exiting. \ No newline at end of file diff --git a/commands/sinter.md b/commands/sinter.md index c137fe0bf1..e0b665b8a3 100644 --- a/commands/sinter.md +++ b/commands/sinter.md @@ -1,34 +1,27 @@ -@complexity - -O(N\*M) worst case where N is the cardinality of the smallest set and M is the -number of sets. - Returns the members of the set resulting from the intersection of all the given sets. For example: - key1 = {a,b,c,d} - key2 = {c} - key3 = {a,c,e} - SINTER key1 key2 key3 = {c} - -Keys that do not exist are considered to be empty sets. With one of the keys -being an empty set, the resulting set is also empty (since set intersection -with an empty set always results in an empty set). +``` +key1 = {a,b,c,d} +key2 = {c} +key3 = {a,c,e} +SINTER key1 key2 key3 = {c} +``` -@return - -@multi-bulk-reply: list with members of the resulting set. +Keys that do not exist are considered to be empty sets. +With one of the keys being an empty set, the resulting set is also empty (since +set intersection with an empty set always results in an empty set).
@examples - @cli - SADD key1 "a" - SADD key1 "b" - SADD key1 "c" - SADD key2 "c" - SADD key2 "d" - SADD key2 "e" - SINTER key1 key2 - +```cli +SADD key1 "a" +SADD key1 "b" +SADD key1 "c" +SADD key2 "c" +SADD key2 "d" +SADD key2 "e" +SINTER key1 key2 +``` diff --git a/commands/sintercard.md b/commands/sintercard.md new file mode 100644 index 0000000000..464e7bded5 --- /dev/null +++ b/commands/sintercard.md @@ -0,0 +1,24 @@ +This command is similar to `SINTER`, but instead of returning the result set, it returns just the cardinality of the result. +Returns the cardinality of the set which would result from the intersection of all the given sets. + +Keys that do not exist are considered to be empty sets. +With one of the keys being an empty set, the resulting set is also empty (since set intersection with an empty set always results in an empty set). + +By default, the command calculates the cardinality of the intersection of all given sets. +When provided with the optional `LIMIT` argument (which defaults to 0 and means unlimited), if the intersection cardinality reaches the limit partway through the computation, the algorithm exits and yields the limit as the cardinality. +Such an implementation ensures a significant speedup for queries where the limit is lower than the actual intersection cardinality. + +@examples + +```cli +SADD key1 "a" +SADD key1 "b" +SADD key1 "c" +SADD key1 "d" +SADD key2 "c" +SADD key2 "d" +SADD key2 "e" +SINTER key1 key2 +SINTERCARD 2 key1 key2 +SINTERCARD 2 key1 key2 LIMIT 1 +``` diff --git a/commands/sinterstore.md b/commands/sinterstore.md index 9f5ceba25e..e3e712f036 100644 --- a/commands/sinterstore.md +++ b/commands/sinterstore.md @@ -1,13 +1,17 @@ -@complexity - -O(N*M) worst case where N is the cardinality of the smallest set and M is the -number of sets. - This command is equal to `SINTER`, but instead of returning the resulting set, it is stored in `destination`. If `destination` already exists, it is overwritten. -@return +@examples -@integer-reply: the number of elements in the resulting set. +```cli +SADD key1 "a" +SADD key1 "b" +SADD key1 "c" +SADD key2 "c" +SADD key2 "d" +SADD key2 "e" +SINTERSTORE key key1 key2 +SMEMBERS key +``` diff --git a/commands/sismember.md b/commands/sismember.md index 995b8f4681..08b1c6c51f 100644 --- a/commands/sismember.md +++ b/commands/sismember.md @@ -1,21 +1,9 @@ -@complexity - -O(1) - - Returns if `member` is a member of the set stored at `key`. -@return - -@integer-reply, specifically: - -* `1` if the element is a member of the set. -* `0` if the element is not a member of the set, or if `key` does not exist. - @examples - @cli - SADD myset "one" - SISMEMBER myset "one" - SISMEMBER myset "two" - +```cli +SADD myset "one" +SISMEMBER myset "one" +SISMEMBER myset "two" +``` diff --git a/commands/slaveof.md b/commands/slaveof.md index 04eb0c8ffa..b42dc06160 100644 --- a/commands/slaveof.md +++ b/commands/slaveof.md @@ -1,20 +1,18 @@ +**A note about the word slave used in this man page and command name**: starting with Redis version 5, the Redis project no longer uses the word slave, except for backward compatibility. Please use the new command `REPLICAOF`. The command `SLAVEOF` will continue to work for backward compatibility. -The `SLAVEOF` command can change the replication settings of a slave on the fly. -If a Redis server is already acting as slave, the command `SLAVEOF` NO ONE -will turn off the replication turning the Redis server into a MASTER.
-In the proper form `SLAVEOF` hostname port will make the server a slave of the -specific server listening at the specified hostname and port. +The `SLAVEOF` command can change the replication settings of a replica on the fly. +If a Redis server is already acting as replica, the command `SLAVEOF` NO ONE will +turn off the replication, turning the Redis server into a MASTER. +In the proper form `SLAVEOF` hostname port will make the server a replica of +another server listening at the specified hostname and port. -If a server is already a slave of some master, `SLAVEOF` hostname port will -stop the replication against the old server and start the synchronization -against the new one discarding the old dataset. +If a server is already a replica of some master, `SLAVEOF` hostname port will stop +the replication against the old server and start the synchronization against the +new one, discarding the old dataset. -The form `SLAVEOF` no one will stop replication turning the server into a -MASTER but will not discard the replication. So if the old master stop working -it is possible to turn the slave into a master and set the application to -use the new master in read/write. Later when the other Redis server will be -fixed it can be configured in order to work as slave. - -@return - -@status-reply +The form `SLAVEOF` NO ONE will stop replication, turning the server into a +MASTER, but will not discard the already replicated dataset. +So, if the old master stops working, it is possible to turn the replica into a +master and set the application to use this new master in read/write. +Later when the other Redis server is fixed, it can be reconfigured to work as a +replica. diff --git a/commands/slowlog-get.md b/commands/slowlog-get.md new file mode 100644 index 0000000000..4b8f0c58d5 --- /dev/null +++ b/commands/slowlog-get.md @@ -0,0 +1,22 @@ +The `SLOWLOG GET` command returns entries from the slow log in chronological order. + +The Redis Slow Log is a system to log queries that exceeded a specified execution time. +The execution time does not include I/O operations like talking with the client, sending the reply and so forth, but just the time needed to actually execute the command (this is the only stage of command execution where the thread is blocked and cannot serve other requests in the meantime). + +A new entry is added to the slow log whenever a command exceeds the execution time threshold defined by the `slowlog-log-slower-than` configuration directive. +The maximum number of entries in the slow log is governed by the `slowlog-max-len` configuration directive. + +By default, the command returns the latest ten entries in the log. The optional `count` argument limits the number of returned entries, so the command returns at most `count` entries; the special value -1 means return all entries. + +Each entry from the slow log consists of the following six values: + +1. A unique progressive identifier for every slow log entry. +2. The Unix timestamp at which the logged command was processed. +3. The amount of time needed for its execution, in microseconds. +4. The array composing the arguments of the command. +5. Client IP address and port. +6. Client name if set via the `CLIENT SETNAME` command. + +The entry's unique ID can be used in order to avoid processing slow log entries multiple times (for instance you may have a script sending you an email alert for every new slow log entry). +The ID is never reset in the course of the Redis server execution, only a server +restart will reset it.
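+An example of what the output might look like in redis-cli (all values are
+illustrative):
+
+```
+> SLOWLOG GET 2
+1) 1) (integer) 14
+   2) (integer) 1309448221
+   3) (integer) 15
+   4) 1) "ping"
+   5) "127.0.0.1:58217"
+   6) "worker-1"
+2) 1) (integer) 13
+   2) (integer) 1309448128
+   3) (integer) 30
+   4) 1) "slowlog"
+      2) "get"
+      3) "100"
+   5) "127.0.0.1:58217"
+   6) "worker-1"
+```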
diff --git a/commands/slowlog-help.md b/commands/slowlog-help.md new file mode 100644 index 0000000000..86bf5b39a2 --- /dev/null +++ b/commands/slowlog-help.md @@ -0,0 +1 @@ +The `SLOWLOG HELP` command returns a helpful text describing the different subcommands. diff --git a/commands/slowlog-len.md b/commands/slowlog-len.md new file mode 100644 index 0000000000..9515bb9473 --- /dev/null +++ b/commands/slowlog-len.md @@ -0,0 +1,6 @@ +This command returns the current number of entries in the slow log. + +A new entry is added to the slow log whenever a command exceeds the execution time threshold defined by the `slowlog-log-slower-than` configuration directive. +The maximum number of entries in the slow log is governed by the `slowlog-max-len` configuration directive. +Once the slow log reaches its maximum size, the oldest entry is removed whenever a new entry is created. +The slow log can be cleared with the `SLOWLOG RESET` command. diff --git a/commands/slowlog-reset.md b/commands/slowlog-reset.md new file mode 100644 index 0000000000..a860f4433f --- /dev/null +++ b/commands/slowlog-reset.md @@ -0,0 +1,3 @@ +This command resets the slow log, clearing all entries in it. + +Once deleted the information is lost forever. diff --git a/commands/slowlog.md b/commands/slowlog.md index 25939c0404..26e5bb700d 100644 --- a/commands/slowlog.md +++ b/commands/slowlog.md @@ -1,74 +1,3 @@ -This command is used in order to read and reset the Redis slow queries log. +This is a container command for slow log management commands. -## Redis slow log overview - -The Redis Slow Log is a system to log queries that exceeded a specified -execution time. The execution time does not include I/O operations -like talking with the client, sending the reply and so forth, -but just the time needed to actually execute the command (this is the only -stage of command execution where the thread is blocked and can not serve -other requests in the meantime). - -You can configure the slow log with two parameters: one tells Redis -what is the execution time, in microseconds, to exceed in order for the -command to get logged, and the other parameter is the length of the -slow log. When a new command is logged and the slow log is already at its -maximum length, the oldest one is removed from the queue of logged commands -in order to make space. - -The configuration can be done both editing the redis.conf file or -while the server is running using -the [CONFIG GET](/commands/config-get) and [CONFIG SET](/commands/config-set) -commands. - -## Reading the slow log - -The slow log is accumulated in memory, so no file is written with information -about the slow command executions. This makes the slow log remarkably fast -at the point that you can enable the logging of all the commands (setting the -*slowlog-log-slower-than* config parameter to zero) with minor performance -hit. - -To read the slow log the **SLOWLOG GET** command is used, that returns every -entry in the slow log. It is possible to return only the N most recent entries -passing an additional argument to the command (for instance **SLOWLOG GET 10**). - -Note that you need a recent version of redis-cli in order to read the slow -log output, since it uses some features of the protocol that were not -formerly implemented in redis-cli (deeply nested multi bulk replies).
- -## Output format - - redis 127.0.0.1:6379> slowlog get 2 - 1) 1) (integer) 14 - 2) (integer) 1309448221 - 3) (integer) 15 - 4) 1) "ping" - 2) 1) (integer) 13 - 2) (integer) 1309448128 - 3) (integer) 30 - 4) 1) "slowlog" - 2) "get" - 3) "100" - -Every entry is composed of four fields: -* A unique progressive identifier for every slow log entry. -* The unix timestamp at which the logged command was processed. -* The amount of time needed for its execution, in microseconds. -* The array composing the arguments of the command. - -The entry's unique ID can be used in order to avoid processing slow log entries -multiple times (for instance you may have a script sending you an email -alert for every new slow log entry). - -The ID is never reset in the course of the Redis server execution, only a -server restart will reset it. - -## Obtaining the current length of the slow log - -It is possible to get just the length of the slow log using the command **SLOWLOG LEN**. - -## Resetting the slow log. - -You can reset the slow log using the **SLOWLOG RESET** command. -Once deleted the information is lost forever. +To see the list of available commands you can call `SLOWLOG HELP`. diff --git a/commands/smembers.md b/commands/smembers.md index 399de68cb6..20a8e23872 100644 --- a/commands/smembers.md +++ b/commands/smembers.md @@ -1,19 +1,11 @@ -@complexity - -O(N) where N is the set cardinality. - Returns all the members of the set value stored at `key`. This has the same effect as running `SINTER` with one argument `key`. -@return - -@multi-bulk-reply: all elements of the set. - @examples - @cli - SADD myset "Hello" - SADD myset "World" - SMEMBERS myset - +```cli +SADD myset "Hello" +SADD myset "World" +SMEMBERS myset +``` diff --git a/commands/smismember.md b/commands/smismember.md new file mode 100644 index 0000000000..4b8885996c --- /dev/null +++ b/commands/smismember.md @@ -0,0 +1,11 @@ +Returns whether each `member` is a member of the set stored at `key`. + +For every `member`, `1` is returned if the value is a member of the set, or `0` if the element is not a member of the set or if `key` does not exist. + +@examples + +```cli +SADD myset "one" +SADD myset "one" +SMISMEMBER myset "one" "notamember" +``` diff --git a/commands/smove.md b/commands/smove.md index 14803f0834..241125cd2b 100644 --- a/commands/smove.md +++ b/commands/smove.md @@ -1,34 +1,24 @@ -@complexity - -O(1) - - -Move `member` from the set at `source` to the set at `destination`. This -operation is atomic. In every given moment the element will appear to be a -member of `source` **or** `destination` for other clients. +Move `member` from the set at `source` to the set at `destination`. +This operation is atomic. +In every given moment the element will appear to be a member of `source` **or** +`destination` for other clients. If the source set does not exist or does not contain the specified element, no -operation is performed and `0` is returned. Otherwise, the element is removed -from the source set and added to the destination set. When the specified -element already exists in the destination set, it is only removed from the -source set. +operation is performed and `0` is returned. +Otherwise, the element is removed from the source set and added to the +destination set. +When the specified element already exists in the destination set, it is only +removed from the source set. An error is returned if `source` or `destination` does not hold a set value. -@return - -@integer-reply, specifically: - -* `1` if the element is moved. 
-* `0` if the element is not a member of `source` and no operation was performed. - @examples - @cli - SADD myset "one" - SADD myset "two" - SADD myotherset "three" - SMOVE myset myotherset "two" - SMEMBERS myset - SMEMBERS myotherset - +```cli +SADD myset "one" +SADD myset "two" +SADD myotherset "three" +SMOVE myset myotherset "two" +SMEMBERS myset +SMEMBERS myotherset +``` diff --git a/commands/sort.md b/commands/sort.md index 0b788c2b39..a5843d86e7 100644 --- a/commands/sort.md +++ b/commands/sort.md @@ -1,25 +1,34 @@ -@complexity +Returns or stores the elements contained in the [list][tdtl], [set][tdts] or +[sorted set][tdtss] at `key`. -O(N+M\*log(M)) where N is the number of elements in the list or set to sort, and M the number of returned elements. When the elements are not sorted, complexity is currently O(N) as there is a copy step that will be avoided in next releases. +There is also the `SORT_RO` read-only variant of this command. -Returns or stores the elements contained in the -[list](/topics/data-types#lists), [set](/topics/data-types#set) or [sorted -set](/topics/data-types#sorted-sets) at `key`. By default, sorting is numeric -and elements are compared by their value interpreted as double precision -floating point number. This is `SORT` in its simplest form: +By default, sorting is numeric and elements are compared by their value +interpreted as double precision floating point number. +This is `SORT` in its simplest form: - SORT mylist +[tdtl]: /topics/data-types#lists +[tdts]: /topics/data-types#set +[tdtss]: /topics/data-types#sorted-sets + +``` +SORT mylist +``` Assuming `mylist` is a list of numbers, this command will return the same list -with the elements sorted from small to large. In order to sort the numbers from -large to small, use the `!DESC` modifier: +with the elements sorted from small to large. +In order to sort the numbers from large to small, use the `!DESC` modifier: - SORT mylist DESC +``` +SORT mylist DESC +``` -When `mylist` contains string values and you want to sort them lexicographically, -use the `!ALPHA` modifier: +When `mylist` contains string values and you want to sort them +lexicographically, use the `!ALPHA` modifier: - SORT mylist ALPHA +``` +SORT mylist ALPHA +``` Redis is UTF-8 aware, assuming you correctly set the `!LC_COLLATE` environment variable. @@ -27,87 +36,117 @@ variable. The number of returned elements can be limited using the `!LIMIT` modifier. This modifier takes the `offset` argument, specifying the number of elements to skip and the `count` argument, specifying the number of elements to return from -starting at `offset`. The following example will return 10 elements of the -sorted version of `mylist`, starting at element 0 (`offset` is zero-based): +starting at `offset`. +The following example will return 10 elements of the sorted version of `mylist`, +starting at element 0 (`offset` is zero-based): - SORT mylist LIMIT 0 10 +``` +SORT mylist LIMIT 0 10 +``` -Almost all modifiers can be used together. The following example will return -the first 5 elements, lexicographically sorted in descending order: +Almost all modifiers can be used together. +The following example will return the first 5 elements, lexicographically sorted +in descending order: - SORT mylist LIMIT 0 5 ALPHA DESC +``` +SORT mylist LIMIT 0 5 ALPHA DESC +``` ## Sorting by external keys Sometimes you want to sort elements using external keys as weights to compare -instead of comparing the actual elements in the list, set or sorted set. 
Let's -say the list `mylist` contains the elements `1`, `2` and `3` representing -unique IDs of objects stored in `object_1`, `object_2` and `object_3`. When -these objects have associated weights stored in `weight_1`, `weight_2` and +instead of comparing the actual elements in the list, set or sorted set. +Let's say the list `mylist` contains the elements `1`, `2` and `3` representing +unique IDs of objects stored in `object_1`, `object_2` and `object_3`. +When these objects have associated weights stored in `weight_1`, `weight_2` and `weight_3`, `SORT` can be instructed to use these weights to sort `mylist` with the following statement: - SORT mylist BY weight_* +``` +SORT mylist BY weight_* +``` The `BY` option takes a pattern (equal to `weight_*` in this example) that is -used to generate the keys that are used for sorting. These key names are -obtained substituting the first occurrence of `*` with the actual value of the -element in the list (`1`, `2` and `3` in this example). +used to generate the keys that are used for sorting. +These key names are obtained substituting the first occurrence of `*` with the +actual value of the element in the list (`1`, `2` and `3` in this example). ## Skip sorting the elements The `!BY` option can also take a non-existent key, which causes `SORT` to skip -the sorting operation. This is useful if you want to retrieve external keys -(see the `!GET` option below) without the overhead of sorting. +the sorting operation. +This is useful if you want to retrieve external keys (see the `!GET` option +below) without the overhead of sorting. - SORT mylist BY nosort +``` +SORT mylist BY nosort +``` ## Retrieving external keys -Our previous example returns just the sorted IDs. In some cases, it is more -useful to get the actual objects instead of their IDs (`object_1`, `object_2` -and `object_3`). Retrieving external keys based on the elements in a list, set -or sorted set can be done with the following command: +Our previous example returns just the sorted IDs. +In some cases, it is more useful to get the actual objects instead of their IDs +(`object_1`, `object_2` and `object_3`). +Retrieving external keys based on the elements in a list, set or sorted set can +be done with the following command: - SORT mylist BY weight_* GET object_* +``` +SORT mylist BY weight_* GET object_* +``` -The `!GET` option can be used multiple times in order to get more keys for -every element of the original list, set or sorted set. +The `!GET` option can be used multiple times in order to get more keys for every +element of the original list, set or sorted set. It is also possible to `!GET` the element itself using the special pattern `#`: - SORT mylist BY weight_* GET object_* GET # +``` +SORT mylist BY weight_* GET object_* GET # +``` + +## Restrictions for using external keys + +Before Redis 8.0, when cluster mode is enabled, there is no way to guarantee that the external keys exist on the node the command is processed on. In this case, any use of `GET` or `BY` that references an external key pattern will cause the command to fail with an error. + +Starting from Redis 8.0, a pattern with a hash tag can be mapped to a slot, so in cluster mode the use of `BY` or `GET` is allowed when the pattern contains a hash tag that implies a specific slot which the key is also in; any key matching this pattern must then be in the same slot as the sorted key, and therefore on the same node.
For example, in cluster mode, `{mylist}weight_*` is acceptable as a pattern when sorting `mylist`, while the pattern `{abc}weight_*` will be denied, causing the command to fail with an error. + +To use a pattern with a hash tag, see [Hash tags](/docs/reference/cluster-spec/#hash-tags) for more information. + +Starting from Redis 7.0, any use of `GET` or `BY` that references an external key pattern will only be allowed if the current user running the command has full key read permissions. +Full key read permissions can be set for the user by, for example, specifying `'%R~*'` or `'~*'` with the relevant command access rules. +You can check the `ACL SETUSER` command manual for more information on setting ACL access rules. +If full key read permissions aren't set, the command will fail with an error. ## Storing the result of a SORT operation -By default, `SORT` returns the sorted elements to the client. With the `!STORE` -option, the result will be stored as a list at the specified key instead of -being returned to the client. +By default, `SORT` returns the sorted elements to the client. +With the `!STORE` option, the result will be stored as a list at the specified +key instead of being returned to the client. - SORT mylist BY weight_* STORE resultkey +``` +SORT mylist BY weight_* STORE resultkey +``` An interesting pattern using `SORT ... STORE` consists in associating an `EXPIRE` timeout to the resulting key so that in applications where the result -of a `SORT` operation can be cached for some time. Other clients will use the -cached list instead of calling `SORT` for every request. When the key will -timeout, an updated version of the cache can be created by calling `SORT ... STORE` again. +of a `SORT` operation can be cached for some time. +Other clients will use the cached list instead of calling `SORT` for every +request. +When the key times out, an updated version of the cache can be created by +calling `SORT ... STORE` again. -Note that for correctly implementing this pattern it is important to avoid multiple -clients rebuilding the cache at the same time. Some kind of locking is needed here -(for instance using `SETNX`). +Note that for correctly implementing this pattern it is important to avoid +multiple clients rebuilding the cache at the same time. +Some kind of locking is needed here (for instance using `SETNX`). ## Using hashes in `!BY` and `!GET` It is possible to use `!BY` and `!GET` options against hash fields with the following syntax: - SORT mylist BY weight_*->fieldname GET object_*->fieldname +``` +SORT mylist BY weight_*->fieldname GET object_*->fieldname +``` The string `->` is used to separate the key name from the hash field name. -The key is substituted as documented above, and the hash stored at the -resulting key is accessed to retrieve the specified hash field. - -@return - -@multi-bulk-reply: list of sorted elements. - +The key is substituted as documented above, and the hash stored at the resulting +key is accessed to retrieve the specified hash field. diff --git a/commands/sort_ro.md b/commands/sort_ro.md new file mode 100644 index 0000000000..82dc85ee8d --- /dev/null +++ b/commands/sort_ro.md @@ -0,0 +1,13 @@ +Read-only variant of the `SORT` command. It is exactly like the original `SORT` but refuses the `STORE` option and can safely be used in read-only replicas. + +Since the original `SORT` has a `STORE` option it is technically flagged as a writing command in the Redis command table.
For this reason read-only replicas in a Redis Cluster will redirect it to the master instance even if the connection is in read-only mode (see the `READONLY` command of Redis Cluster). + +The `SORT_RO` variant was introduced in order to allow `SORT` behavior in read-only replicas without breaking compatibility on command flags. + +See original `SORT` for more details. + +@examples + +``` +SORT_RO mylist BY weight_*->fieldname GET object_*->fieldname +``` diff --git a/commands/spop.md b/commands/spop.md index d1ba8fdec9..5ebfaed881 100644 --- a/commands/spop.md +++ b/commands/spop.md @@ -1,23 +1,24 @@ -@complexity +Removes and returns one or more random members from the set value stored at `key`. -O(1) +This operation is similar to `SRANDMEMBER`, which returns one or more random elements from a set but does not remove them. - -Removes and returns a random element from the set value stored at `key`. - -This operation is similar to `SRANDMEMBER`, that returns a random -element from a set but does not remove it. - -@return - -@bulk-reply: the removed element, or `nil` when `key` does not exist. +By default, the command pops a single member from the set. When provided with +the optional `count` argument, the reply will consist of up to `count` members, +depending on the set's cardinality. @examples - @cli - SADD myset "one" - SADD myset "two" - SADD myset "three" - SPOP myset - SMEMBERS myset - +```cli +SADD myset "one" +SADD myset "two" +SADD myset "three" +SPOP myset +SMEMBERS myset +SADD myset "four" +SADD myset "five" +SPOP myset 3 +SMEMBERS myset +``` +## Distribution of returned elements + +Note that this command is not suitable when you need a guaranteed uniform distribution of the returned elements. For more information about the algorithms used for `SPOP`, look up both the Knuth sampling and Floyd sampling algorithms. diff --git a/commands/spublish.md b/commands/spublish.md new file mode 100644 index 0000000000..0b8e65d09a --- /dev/null +++ b/commands/spublish.md @@ -0,0 +1,16 @@ +Posts a message to the given shard channel. + +In Redis Cluster, shard channels are assigned to slots by the same algorithm used to assign keys to slots. +A shard message must be sent to a node that owns the slot the shard channel is hashed to. +The cluster makes sure that published shard messages are forwarded to all the nodes in the shard, so clients can subscribe to a shard channel by connecting to any one of the nodes in the shard. + +For more information about sharded pubsub, see [Sharded Pubsub](/topics/pubsub#sharded-pubsub). + +@examples + +For example, the following command publishes to the shard channel `orders`, with a subscriber already waiting for messages. + +``` +> spublish orders hello +(integer) 1 +``` diff --git a/commands/srandmember.md b/commands/srandmember.md index 1ad4e0a5ce..b90dd23b7b 100644 --- a/commands/srandmember.md +++ b/commands/srandmember.md @@ -1,23 +1,40 @@ -@complexity +When called with just the `key` argument, return a random element from the set value stored at `key`. -O(1) +If the provided `count` argument is positive, return an array of **distinct elements**. +The array's length is either `count` or the set's cardinality (`SCARD`), whichever is lower. +If called with a negative `count`, the behavior changes and the command is allowed to return the **same element multiple times**. +In this case, the number of returned elements is the absolute value of the specified `count`. -Return a random element from the set value stored at `key`.
+@examples -This operation is similar to `SPOP`, however while `SPOP` also removes the -randomly selected element from the set, `SRANDMEMBER` will just return a random -element without altering the original set in any way. +```cli +SADD myset one two three +SRANDMEMBER myset +SRANDMEMBER myset 2 +SRANDMEMBER myset -5 +``` -@return +## Specification of the behavior when count is passed -@bulk-reply: the randomly selected element, or `nil` when `key` does not exist. +When the `count` argument is a positive value this command behaves as follows: -@examples +* No repeated elements are returned. +* If `count` is bigger than the set's cardinality, the command will only return the whole set without additional elements. +* The order of elements in the reply is not truly random, so it is up to the client to shuffle them if needed. + +When the `count` is a negative value, the behavior changes as follows: + +* Repeating elements are possible. +* Exactly `count` elements, or an empty array if the set is empty (non-existing key), are always returned. +* The order of elements in the reply is truly random. + +## Distribution of returned elements + +Note: this section is relevant only for Redis 5 or below, as Redis 6 implements a fairer algorithm. + +The distribution of the returned elements is far from perfect when the number of elements in the set is small; this is due to the fact that we use an approximated random element function that does not really guarantee a good distribution. - @cli - SADD myset "one" - SADD myset "two" - SADD myset "three" - SRANDMEMBER myset +The algorithm used, which is implemented inside dict.c, samples the hash table buckets to find a non-empty one. Once a non-empty bucket is found, since we use chaining in our hash table implementation, the number of elements inside the bucket is checked and a random element is selected. +This means that if you have two non-empty buckets in the entire hash table, and one has three elements while one has just one, the element that is alone in its bucket will be returned with much higher probability. diff --git a/commands/srem.md b/commands/srem.md index 32df527ddb..2a85a99e6b 100644 --- a/commands/srem.md +++ b/commands/srem.md @@ -1,29 +1,17 @@ -@complexity - -O(N) where N is the number of members to be removed. - - -Remove the specified members from the set stored at `key`. Specified members -that are not a member of this set are ignored. If `key` does not exist, it is -treated as an empty set and this command returns `0`. +Remove the specified members from the set stored at `key`. +Specified members that are not a member of this set are ignored. +If `key` does not exist, it is treated as an empty set and this command returns +`0`. An error is returned when the value stored at `key` is not a set. -@return - -@integer-reply: the number of members that were removed from the set, not including non existing members. - -@history - -* `>= 2.4`: Accepts multiple `member` arguments. Redis versions older than 2.4 can only remove a set member per call. - @examples - @cli - SADD myset "one" - SADD myset "two" - SADD myset "three" - SREM myset "one" - SREM myset "four" - SMEMBERS myset - +```cli +SADD myset "one" +SADD myset "two" +SADD myset "three" +SREM myset "one" +SREM myset "four" +SMEMBERS myset +``` diff --git a/commands/sscan.md b/commands/sscan.md new file mode 100644 index 0000000000..c19f3b1bf3 --- /dev/null +++ b/commands/sscan.md @@ -0,0 +1 @@ +See `SCAN` for `SSCAN` documentation.
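+A minimal iteration sketch (element order is unspecified; small sets are often
+returned in a single call, with the returned cursor `0` marking the end of the
+iteration):
+
+```
+> SADD myset "a" "b" "c"
+(integer) 3
+> SSCAN myset 0
+1) "0"
+2) 1) "a"
+   2) "c"
+   3) "b"
+```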
diff --git a/commands/ssubscribe.md b/commands/ssubscribe.md new file mode 100644 index 0000000000..bf7d30e859 --- /dev/null +++ b/commands/ssubscribe.md @@ -0,0 +1,21 @@ +Subscribes the client to the specified shard channels. + +In a Redis cluster, shard channels are assigned to slots by the same algorithm used to assign keys to slots. +Clients can subscribe to a node covering a slot (primary or replica) to receive the messages published. +All the shard channels specified in a given `SSUBSCRIBE` call need to belong to a single slot. +A client can subscribe to channels across different slots via separate `SSUBSCRIBE` calls. + +For more information about sharded Pub/Sub, see [Sharded Pub/Sub](/topics/pubsub#sharded-pubsub). + +@examples + +``` +> ssubscribe orders +Reading messages... (press Ctrl-C to quit) +1) "ssubscribe" +2) "orders" +3) (integer) 1 +1) "smessage" +2) "orders" +3) "hello" +``` diff --git a/commands/strlen.md b/commands/strlen.md index 0605ffeaf1..f4b1fee52d 100644 --- a/commands/strlen.md +++ b/commands/strlen.md @@ -1,19 +1,10 @@ -@complexity - -O(1) - - Returns the length of the string value stored at `key`. An error is returned when `key` holds a non-string value. -@return - -@integer-reply: the length of the string at `key`, or `0` when `key` does not exist. @examples - @cli - SET mykey "Hello world" - STRLEN mykey - STRLEN nonexisting - +```cli +SET mykey "Hello world" +STRLEN mykey +STRLEN nonexisting +``` diff --git a/commands/subscribe.md b/commands/subscribe.md index 4fce765ae0..1bde8ebe9c 100644 --- a/commands/subscribe.md +++ b/commands/subscribe.md @@ -1,9 +1,12 @@ -@complexity +Subscribes the client to the specified channels. -O(N) where N is the number of channels to subscribe to. +Once the client enters the subscribed state it is not supposed to issue any +other commands, except for additional `SUBSCRIBE`, `SSUBSCRIBE`, `PSUBSCRIBE`, `UNSUBSCRIBE`, `SUNSUBSCRIBE`, +`PUNSUBSCRIBE`, `PING`, `RESET` and `QUIT` commands. +However, if RESP3 is used (see `HELLO`) it is possible for a client to issue any commands while in the subscribed state. -Subscribes the client to the specified channels. +For more information, see [Pub/sub](/docs/interact/pubsub/). + +## Behavior change history -Once the client enters the subscribed state it is not supposed to issue -any other commands, except for additional `SUBSCRIBE`, `PSUBSCRIBE`, -`UNSUBSCRIBE` and `PUNSUBSCRIBE` commands. +* `>= 6.2.0`: `RESET` can be called to exit subscribed state. diff --git a/commands/substr.md b/commands/substr.md new file mode 100644 index 0000000000..c188f95494 --- /dev/null +++ b/commands/substr.md @@ -0,0 +1,18 @@ +Returns the substring of the string value stored at `key`, determined by the +offsets `start` and `end` (both are inclusive). +Negative offsets can be used in order to provide an offset starting from the end +of the string. +So -1 means the last character, -2 the penultimate and so forth. + +The function handles out-of-range requests by limiting the resulting range to +the actual length of the string. + +@examples + +```cli +SET mykey "This is a string" +GETRANGE mykey 0 3 +GETRANGE mykey -3 -1 +GETRANGE mykey 0 -1 +GETRANGE mykey 10 100 +``` diff --git a/commands/sunion.md b/commands/sunion.md index 4d59b85816..31315484b6 100644 --- a/commands/sunion.md +++ b/commands/sunion.md @@ -1,31 +1,24 @@ -@complexity - -O(N) where N is the total number of elements in all given sets. - -Returns the members of the set resulting from the union of all the -given sets.
+Returns the members of the set resulting from the union of all the given sets. For example: - key1 = {a,b,c,d} - key2 = {c} - key3 = {a,c,e} - SUNION key1 key2 key3 = {a,b,c,d,e} +``` +key1 = {a,b,c,d} +key2 = {c} +key3 = {a,c,e} +SUNION key1 key2 key3 = {a,b,c,d,e} +``` Keys that do not exist are considered to be empty sets. -@return - -@multi-bulk-reply: list with members of the resulting set. - @examples - @cli - SADD key1 "a" - SADD key1 "b" - SADD key1 "c" - SADD key2 "c" - SADD key2 "d" - SADD key2 "e" - SUNION key1 key2 - +```cli +SADD key1 "a" +SADD key1 "b" +SADD key1 "c" +SADD key2 "c" +SADD key2 "d" +SADD key2 "e" +SUNION key1 key2 +``` diff --git a/commands/sunionstore.md b/commands/sunionstore.md index 4db793cc0f..28b752640a 100644 --- a/commands/sunionstore.md +++ b/commands/sunionstore.md @@ -1,12 +1,17 @@ -@complexity - -O(N) where N is the total number of elements in all given sets. - This command is equal to `SUNION`, but instead of returning the resulting set, it is stored in `destination`. If `destination` already exists, it is overwritten. -@return +@examples -@integer-reply: the number of elements in the resulting set. +```cli +SADD key1 "a" +SADD key1 "b" +SADD key1 "c" +SADD key2 "c" +SADD key2 "d" +SADD key2 "e" +SUNIONSTORE key key1 key2 +SMEMBERS key +``` diff --git a/commands/sunsubscribe.md b/commands/sunsubscribe.md new file mode 100644 index 0000000000..7ce76c333e --- /dev/null +++ b/commands/sunsubscribe.md @@ -0,0 +1,8 @@ +Unsubscribes the client from the given shard channels, or from all of them if none is given. + +When no shard channels are specified, the client is unsubscribed from all the previously subscribed shard channels. +In this case, a message for every unsubscribed shard channel will be sent to the client. + +Note: The global channels and shard channels need to be unsubscribed from separately. + +For more information about sharded Pub/Sub, see [Sharded Pub/Sub](/topics/pubsub#sharded-pubsub). diff --git a/commands/swapdb.md b/commands/swapdb.md new file mode 100644 index 0000000000..7037f477f0 --- /dev/null +++ b/commands/swapdb.md @@ -0,0 +1,13 @@ +This command swaps two Redis databases, so that all the +clients connected to a given database will immediately see the data of the other database, and +the other way around. Example: + + SWAPDB 0 1 + +This will swap database 0 with database 1. All the clients connected with database 0 will immediately see the new data, exactly like all the clients connected with database 1 will see the data that was formerly of database 0. + +@examples + +``` +SWAPDB 0 1 +``` diff --git a/commands/sync.md b/commands/sync.md index 8dafb3c81c..5d9ad047f9 100644 --- a/commands/sync.md +++ b/commands/sync.md @@ -1,7 +1,10 @@ -@complexity +Initiates a replication stream from the master. -@description +The `SYNC` command is called by Redis replicas for initiating a replication +stream from the master. It has been replaced in newer versions of Redis by +`PSYNC`. -@examples +For more information about replication in Redis, please check the +[replication page][tr]. -@return \ No newline at end of file +[tr]: /topics/replication diff --git a/commands/time.md b/commands/time.md new file mode 100644 index 0000000000..b7adb8f164 --- /dev/null +++ b/commands/time.md @@ -0,0 +1,11 @@ +The `TIME` command returns the current server time as a two-item list: a Unix +timestamp and the number of microseconds already elapsed in the current second. +Basically the interface is very similar to that of the `gettimeofday` system +call.
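+ +As a sketch of the reply shape (the actual values depend on the server clock; the first element is the Unix timestamp in seconds, the second the microseconds within the current second): + +``` +> TIME +1) "1658854752" +2) "994518" +```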
+ +@examples + +```cli +TIME +TIME +``` diff --git a/commands/touch.md b/commands/touch.md new file mode 100644 index 0000000000..ba5bc9b0db --- /dev/null +++ b/commands/touch.md @@ -0,0 +1,10 @@ +Alters the last access time of one or more keys. +A key is ignored if it does not exist. + +@examples + +```cli +SET key1 "Hello" +SET key2 "World" +TOUCH key1 key2 +``` diff --git a/commands/ttl.md b/commands/ttl.md index 7e06c4ae90..1cc0a86f90 100644 --- a/commands/ttl.md +++ b/commands/ttl.md @@ -1,20 +1,20 @@ -@complexity - -O(1) +Returns the remaining time to live of a key that has a timeout. +This introspection capability allows a Redis client to check how many seconds a +given key will continue to be part of the dataset. +In Redis 2.6 or older the command returns `-1` if the key does not exist or if the key exists but has no associated expire. -Returns the remaining time to live of a key that has a timeout. This -introspection capability allows a Redis client to check how many seconds a -given key will continue to be part of the dataset. +Starting with Redis 2.8 the return value in case of error changed: -@return +* The command returns `-2` if the key does not exist. +* The command returns `-1` if the key exists but has no associated expire. -@integer-reply: TTL in seconds or `-1` when `key` does not exist or does not have a timeout. +See also the `PTTL` command that returns the same information with millisecond resolution (only available in Redis 2.6 or greater). @examples - @cli - SET mykey "Hello" - EXPIRE mykey 10 - TTL mykey - +```cli +SET mykey "Hello" +EXPIRE mykey 10 +TTL mykey +``` diff --git a/commands/type.md b/commands/type.md index 46a6d8bb14..7bdd48407d 100644 --- a/commands/type.md +++ b/commands/type.md @@ -1,23 +1,14 @@ -@complexity - -O(1) - - Returns the string representation of the type of the value stored at `key`. -The different types that can be returned are: `string`, `list`, `set`, `zset` -and `hash`. - -@return - -@status-reply: type of `key`, or `none` when `key` does not exist. +The different types that can be returned are: `string`, `list`, `set`, `zset`, +`hash` and `stream`. @examples - @cli - SET key1 "value" - LPUSH key2 "value" - SADD key3 "value" - TYPE key1 - TYPE key2 - TYPE key3 - +```cli +SET key1 "value" +LPUSH key2 "value" +SADD key3 "value" +TYPE key1 +TYPE key2 +TYPE key3 +``` diff --git a/commands/unlink.md b/commands/unlink.md new file mode 100644 index 0000000000..0d5ade0217 --- /dev/null +++ b/commands/unlink.md @@ -0,0 +1,14 @@ +This command is very similar to `DEL`: it removes the specified keys. +Just like `DEL`, a key is ignored if it does not exist. However, the command +performs the actual memory reclaiming in a different thread, so it is not +blocking, while `DEL` is. This is where the command name comes from: the +command just **unlinks** the keys from the keyspace. The actual removal +will happen later asynchronously. + +@examples + +```cli +SET key1 "Hello" +SET key2 "World" +UNLINK key1 key2 key3 +``` diff --git a/commands/unsubscribe.md b/commands/unsubscribe.md index a1b5e8b882..7bdf1d15e5 100644 --- a/commands/unsubscribe.md +++ b/commands/unsubscribe.md @@ -1,10 +1,7 @@ -@complexity +Unsubscribes the client from the given channels, or from all of them if none is +given. -O(N) where N is the number of clients already subscribed to a channel. -Unsubscribes the client from the given channels, or from all of them if -none is given. -When no channels are specified, the client is unsubscribed from all -the previously subscribed channels.
In this case, a message for every -unsubscribed channel will be sent to the client. +When no channels are specified, the client is unsubscribed from all the +previously subscribed channels. +In this case, a message for every unsubscribed channel will be sent to the +client. diff --git a/commands/unwatch.md b/commands/unwatch.md index 32655426ce..dcdda08dab 100644 --- a/commands/unwatch.md +++ b/commands/unwatch.md @@ -1,11 +1,5 @@ -@complexity +Flushes all the previously watched keys for a [transaction][tt]. -O(1). - -Flushes all the previously watched keys for a [transaction](/topics/transactions). +[tt]: /topics/transactions If you call `EXEC` or `DISCARD`, there's no need to manually call `UNWATCH`. - -@return - -@status-reply: always `OK`. diff --git a/commands/wait.md b/commands/wait.md new file mode 100644 index 0000000000..b671faff4e --- /dev/null +++ b/commands/wait.md @@ -0,0 +1,51 @@ +This command blocks the current client until all the previous write commands +are successfully transferred and acknowledged by at least the specified number +of replicas. If the timeout, specified in milliseconds, is reached, the command +returns even if the specified number of replicas was not yet reached. + +The command **will always return** the number of replicas that acknowledged +the write commands sent by the current client before the `WAIT` command, both +when the specified number of replicas is reached and when the timeout is reached. + +A few remarks: + +1. When `WAIT` returns, all the previous write commands sent in the context of the current connection are guaranteed to be received by the number of replicas returned by `WAIT`. +2. If the command is sent as part of a `MULTI` transaction (or, since Redis 7.0, any other context that does not allow blocking, such as inside scripts), the command does not block, but instead returns immediately the number of replicas that acknowledged the previous write commands. +3. A timeout of 0 means to block forever. +4. Since `WAIT` returns the number of replicas reached both in case of failure and success, the client should check that the returned value is equal to or greater than the replication level it demanded. + +Consistency and WAIT +--- + +Note that `WAIT` does not make Redis a strongly consistent store: while synchronous replication is part of a replicated state machine, it is not the only thing needed. However, in the context of Sentinel or Redis Cluster failover, `WAIT` improves real-world data safety. + +Specifically, if a given write is transferred to one or more replicas, it is more likely (but not guaranteed) that if the master fails, we'll be able to promote, during a failover, a replica that received the write: both Sentinel and Redis Cluster will do a best-effort attempt to promote the best replica among the set of available replicas. + +However this is just a best-effort attempt so it is possible to still lose a write synchronously replicated to multiple replicas. + +Implementation details +--- + +Since the introduction of partial resynchronization with replicas (PSYNC feature) Redis replicas asynchronously ping their master with the offset they already processed in the replication stream. This is used in multiple ways: + +1. Detect timed-out replicas. +2. Perform a partial resynchronization after a disconnection. +3. Implement `WAIT`.
+ +In the specific case of the implementation of `WAIT`, Redis remembers, for each client, the replication offset of the produced replication stream when a given +write command was executed in the context of that client. When `WAIT` is +called Redis checks if the specified number of replicas already acknowledged +this offset or a greater one. + +@examples + +``` +> SET foo bar +OK +> WAIT 1 0 +(integer) 1 +> WAIT 2 1000 +(integer) 1 +``` + +In the above example the first call to `WAIT` does not use a timeout and asks for the write to reach 1 replica. It returns with success. In the second attempt we instead specify a timeout, and ask for the write to be replicated to two replicas. Since there is a single replica available, after one second `WAIT` unblocks and returns 1, the number of replicas reached. diff --git a/commands/waitaof.md b/commands/waitaof.md new file mode 100644 index 0000000000..a6269e2375 --- /dev/null +++ b/commands/waitaof.md @@ -0,0 +1,59 @@ +This command blocks the current client until all previous write commands by that client are acknowledged as having been fsynced to the AOF of the local Redis and/or at least the specified number of replicas. + +`numlocal` represents the number of local fsyncs required to be confirmed before proceeding. +When `numlocal` is set to 1, the command blocks until the data written to the Redis instance is confirmed to be persisted to the local AOF file. +The value 0 disables this check. + +If the timeout, specified in milliseconds, is reached, the command returns even if the specified number of acknowledgments has not been met. + +The command **will always return** the number of masters and replicas that have fsynced all write commands sent by the current client before the `WAITAOF` command, both when the specified thresholds were met and when the timeout is reached. + +A few remarks: + +1. When `WAITAOF` returns, all the previous write commands sent in the context of the current connection are guaranteed to be fsynced to the AOF of at least the number of masters and replicas returned by `WAITAOF`. +2. If the command is sent as part of a `MULTI` transaction (or any other context that does not allow blocking, such as inside scripts), the command does not block but instead returns immediately the number of masters and replicas that fsynced all previous write commands. +3. A timeout of 0 means to block forever. +4. Since `WAITAOF` returns the number of fsyncs completed both in case of success and timeout, the client should check that the returned values are equal to or greater than the persistence level required. +5. `WAITAOF` cannot be used on replica instances, and the `numlocal` argument cannot be non-zero if the local Redis does not have AOF enabled. + +Limitations +--- +It is possible to write a module or Lua script that propagates writes to the AOF but not the replication stream. +(For modules, this is done using the `fmt` argument to `RedisModule_Call` or `RedisModule_Replicate`; for Lua scripts, this is achieved using `redis.set_repl`.) + +These features are incompatible with the `WAITAOF` command as it is currently implemented, and using them in combination may result in incorrect behavior. + +Consistency and WAITAOF +--- + +Note that, similarly to `WAIT`, `WAITAOF` does not make Redis a strongly-consistent store. +Unless waiting for all members of a cluster to fsync writes to disk, data can still be lost during a failover or a Redis restart. +However, `WAITAOF` does improve real-world data safety.
+ +Implementation details +--- + +Since Redis 7.2, Redis tracks and increments the replication offset even when no replicas are configured (as long as AOF exists). + +In addition, Redis replicas asynchronously ping their master with two replication offsets: the offset they have processed in the replication stream, and the offset they have fsynced to their AOF. + +Redis remembers, for each client, the replication offset of the produced replication stream when the last write command was executed in the context of that client. +When `WAITAOF` is called, Redis checks if the local Redis and/or the specified number of replicas have confirmed fsyncing this offset or a greater one to their AOF. + +@examples + +``` +> SET foo bar +OK +> WAITAOF 1 0 0 +1) (integer) 1 +2) (integer) 0 +> WAITAOF 0 1 1000 +1) (integer) 1 +2) (integer) 0 +``` + +In the above example, the first call to `WAITAOF` does not use a timeout and asks for the write to be fsynced to the local Redis only; it returns with [1, 0] when this is completed. + +In the second attempt we instead specify a timeout, and ask for the write to be confirmed as fsynced by a single replica. +Since there are no connected replicas, the `WAITAOF` command unblocks after one second and again returns [1, 0], indicating the write has been fsynced on the local Redis but not on any replicas. diff --git a/commands/watch.md b/commands/watch.md index a44c711091..2121bd5cb5 100644 --- a/commands/watch.md +++ b/commands/watch.md @@ -1,9 +1,4 @@ -@complexity +Marks the given keys to be watched for conditional execution of a +[transaction][tt]. -O(1) for every key. - -Marks the given keys to be watched for conditional execution of a [transaction](/topics/transactions). - -@return - -@status-reply: always `OK`. +[tt]: /topics/transactions diff --git a/commands/xack.md b/commands/xack.md new file mode 100644 index 0000000000..daeb7c21c8 --- /dev/null +++ b/commands/xack.md @@ -0,0 +1,22 @@ +The `XACK` command removes one or multiple messages from the +*Pending Entries List* (PEL) of a stream consumer group. A message is pending, +and as such stored inside the PEL, when it was delivered to some consumer, +normally as a side effect of calling `XREADGROUP`, or when a consumer took +ownership of a message by calling `XCLAIM`. The pending message was delivered to +some consumer but the server is not yet sure it was processed at least once. +So new calls to `XREADGROUP` to grab the message history for a consumer +(for instance, using an ID of 0) will return such messages. +Similarly, the pending message will be listed by the `XPENDING` command, +which inspects the PEL. + +Once a consumer *successfully* processes a message, it should call `XACK` +so that the message does not get processed again, and as a side effect, +the PEL entry about this message is also purged, releasing memory from the +Redis server. + +@examples + +``` +redis> XACK mystream mygroup 1526569495631-0 +(integer) 1 +``` diff --git a/commands/xadd.md b/commands/xadd.md new file mode 100644 index 0000000000..3f431a133b --- /dev/null +++ b/commands/xadd.md @@ -0,0 +1,82 @@ +Appends the specified stream entry to the stream at the specified key. +If the key does not exist, as a side effect of running this command the +key is created with a stream value. The creation of the stream's key can be +disabled with the `NOMKSTREAM` option. + +An entry is composed of a list of field-value pairs. +The field-value pairs are stored in the same order they are given by the user.
+Commands that read the stream, such as `XRANGE` or `XREAD`, are guaranteed to return the fields and values exactly in the same order they were added by `XADD`. + +`XADD` is the *only Redis command* that can add data to a stream, but +there are other commands, such as `XDEL` and `XTRIM`, that are able to +remove data from a stream. + +## Specifying a Stream ID as an argument + +A stream entry ID identifies a given entry inside a stream. + +The `XADD` command will auto-generate a unique ID for you if the ID argument +specified is the `*` character (asterisk ASCII character). However, while +useful only in very rare cases, it is possible to specify a well-formed ID, so +that the new entry will be added exactly with the specified ID. + +IDs are specified by two numbers separated by a `-` character: + + 1526919030474-55 + +Both quantities are 64-bit numbers. When an ID is auto-generated, the +first part is the Unix time in milliseconds of the Redis instance generating +the ID. The second part is just a sequence number and is used in order to +distinguish IDs generated in the same millisecond. + +You can also specify an incomplete ID that consists only of the milliseconds part, which is interpreted as a zero value for the sequence part. +To have only the sequence part automatically generated, specify the milliseconds part followed by the `-` separator and the `*` character: + +``` +> XADD mystream 1526919030474-55 message "Hello," +"1526919030474-55" +> XADD mystream 1526919030474-* message " World!" +"1526919030474-56" +``` + +IDs are guaranteed to be always incremental: if you compare the ID of the +entry just inserted it will be greater than any other past ID, so entries +are totally ordered inside a stream. In order to guarantee this property, +if the current top ID in the stream has a time greater than the current +local time of the instance, the top entry time will be used instead, and +the sequence part of the ID incremented. This may happen when, for instance, +the local clock jumps backward, or if after a failover the new master has +a different absolute time. + +When a user specifies an explicit ID to `XADD`, the minimum valid ID is +`0-1`, and the user *must* specify an ID which is greater than any other +ID currently inside the stream, otherwise the command will fail and return an error. Usually +resorting to specific IDs is useful only if you have another system generating +unique IDs (for instance an SQL table) and you really want the Redis stream +IDs to match the ones of this other system. + +## Capped streams + +`XADD` incorporates the same semantics as the `XTRIM` command; refer to its documentation page for more information. +This allows adding new entries and keeping the stream's size in check with a single call to `XADD`, effectively capping the stream with an arbitrary threshold. +Although exact trimming is possible and is the default, due to the internal representation of streams it is more efficient to add an entry and trim the stream with `XADD` using **almost exact** trimming (the `~` argument). + +For example, calling `XADD` in the following form: + + XADD mystream MAXLEN ~ 1000 * ... entry fields here ... + +will add a new entry but will also evict old entries so that the stream will contain only 1000 entries, or at most a few tens more. + +## Additional information about streams + +For further information about Redis streams please check our +[introduction to Redis Streams document](/topics/streams-intro).
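+ +As a small sketch of the `NOMKSTREAM` option mentioned above (assuming the key `nosuchstream` does not exist), the command returns a nil reply instead of creating the key: + +``` +> XADD nosuchstream NOMKSTREAM * field value +(nil) +> EXISTS nosuchstream +(integer) 0 +```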
+ +@examples + +```cli +XADD mystream * name Sara surname OConnor +XADD mystream * field1 value1 field2 value2 field3 value3 +XLEN mystream +XRANGE mystream - + +``` diff --git a/commands/xautoclaim.md b/commands/xautoclaim.md new file mode 100644 index 0000000000..4b65eba3ae --- /dev/null +++ b/commands/xautoclaim.md @@ -0,0 +1,42 @@ +This command transfers ownership of pending stream entries that match the specified criteria. Conceptually, `XAUTOCLAIM` is equivalent to calling `XPENDING` and then `XCLAIM`, +but provides a more straightforward way to deal with message delivery failures via `SCAN`-like semantics. + +Like `XCLAIM`, the command operates on the stream entries at `` and in the context of the provided ``. +It transfers ownership to `` of messages pending for more than `` milliseconds and having an equal or greater ID than ``. + +The optional `` argument, which defaults to 100, is the upper limit of the number of entries that the command attempts to claim. +Internally, the command begins scanning the consumer group's Pending Entries List (PEL) from `` and filters out entries having an idle time less than or equal to ``. +The maximum number of pending entries that the command scans is the product of multiplying ``'s value by 10 (hard-coded). +It is possible, therefore, that the number of entries claimed will be less than the specified value. + +The optional `JUSTID` argument changes the reply to return just an array of IDs of messages successfully claimed, without returning the actual message. +Using this option means the retry counter is not incremented. + +The command returns the claimed entries as an array. It also returns a stream ID intended for cursor-like use as the `` argument for its subsequent call. +When there are no remaining PEL entries, the command returns the special `0-0` ID to signal completion. +However, note that you may want to continue calling `XAUTOCLAIM` even after the scan is complete with the `0-0` as `` ID, because enough time passed, so older pending entries may now be eligible for claiming. + +Note that only messages that are idle longer than `` are claimed, and claiming a message resets its idle time. +This ensures that only a single consumer can successfully claim a given pending message at a specific instant of time and trivially reduces the probability of processing the same message multiple times. + +While iterating the PEL, if `XAUTOCLAIM` stumbles upon a message which doesn't exist in the stream anymore (either trimmed or deleted by `XDEL`) it does not claim it, and deletes it from the PEL in which it was found. This feature was introduced in Redis 7.0. +These message IDs are returned to the caller as a part of `XAUTOCLAIM`s reply. + +Lastly, claiming a message with `XAUTOCLAIM` also increments the attempted deliveries count for that message, unless the `JUSTID` option has been specified (which only delivers the message ID, not the message itself). +Messages that cannot be processed for some reason - for example, because consumers systematically crash when processing them - will exhibit high attempted delivery counts that can be detected by monitoring. + +@examples + +``` +> XAUTOCLAIM mystream mygroup Alice 3600000 0-0 COUNT 25 +1) "0-0" +2) 1) 1) "1609338752495-0" + 2) 1) "field" + 2) "value" +3) (empty array) +``` + +In the above example, we attempt to claim up to 25 entries that are pending and idle (not having been acknowledged or claimed) for at least an hour, starting at the stream's beginning. 
+The consumer "Alice" from the "mygroup" group acquires ownership of these messages. +Note that the stream ID returned in the example is `0-0`, indicating that the entire stream was scanned. +We can also see that `XAUTOCLAIM` did not stumble upon any deleted messages (the third reply element is an empty array). diff --git a/commands/xclaim.md b/commands/xclaim.md new file mode 100644 index 0000000000..50fd8d23c2 --- /dev/null +++ b/commands/xclaim.md @@ -0,0 +1,47 @@ +In the context of a stream consumer group, this command changes the ownership +of a pending message, so that the new owner is the consumer specified as the +command argument. Normally this is what happens: + +1. There is a stream with an associated consumer group. +2. Some consumer A reads a message via `XREADGROUP` from a stream, in the context of that consumer group. +3. As a side effect a pending message entry is created in the Pending Entries List (PEL) of the consumer group: it means the message was delivered to a given consumer, but it was not yet acknowledged via `XACK`. +4. Then suddenly that consumer fails forever. +5. Other consumers may inspect the list of pending messages, that are stale for quite some time, using the `XPENDING` command. In order to continue processing such messages, they use `XCLAIM` to acquire the ownership of the message and continue. Consumers can also use the `XAUTOCLAIM` command to automatically scan and claim stale pending messages. + +This dynamic is clearly explained in the [Stream intro documentation](/topics/streams-intro). + +Note that the message is claimed only if its idle time is greater than the minimum idle time we specify when calling `XCLAIM`. Because as a side effect `XCLAIM` will also reset the idle time (since this is a new attempt at processing the message), two consumers trying to claim a message at the same time will never both succeed: only one will successfully claim the message. This avoids that we process a given message multiple times in a trivial way (yet multiple processing is possible and unavoidable in the general case). + +Moreover, as a side effect, `XCLAIM` will increment the count of attempted deliveries of the message unless the `JUSTID` option has been specified (which only delivers the message ID, not the message itself). In this way messages that cannot be processed for some reason, for instance because the consumers crash attempting to process them, will start to have a larger counter and can be detected inside the system. + +`XCLAIM` will not claim a message in the following cases: + +1. The message doesn't exist in the group PEL (i.e. it was never read by any consumer) +2. The message exists in the group PEL but not in the stream itself (i.e. the message was read but never acknowledged, and then was deleted from the stream, either by trimming or by `XDEL`) + +In both cases the reply will not contain a corresponding entry to that message (i.e. the length of the reply array may be smaller than the number of IDs provided to `XCLAIM`). +In the latter case, the message will also be deleted from the PEL in which it was found. This feature was introduced in Redis 7.0. + +## Command options + +The command has multiple options, however most are mainly for internal use in +order to transfer the effects of `XCLAIM` or other commands to the AOF file +and to propagate the same effects to the replicas, and are unlikely to be +useful to normal users: + +1. `IDLE `: Set the idle time (last time it was delivered) of the message. 
If IDLE is not specified, an IDLE of 0 is assumed, that is, the time count is reset because the message now has a new owner trying to process it. +2. `TIME <ms-unix-time>`: This is the same as IDLE but instead of a relative amount of milliseconds, it sets the idle time to a specific Unix time (in milliseconds). This is useful in order to rewrite the AOF file generating `XCLAIM` commands. +3. `RETRYCOUNT <count>`: Set the retry counter to the specified value. This counter is incremented every time a message is delivered again. Normally `XCLAIM` does not alter this counter, which is simply reported to clients when the `XPENDING` command is called: this way clients can detect anomalies, like messages that are never processed for some reason after a big number of delivery attempts. +4. `FORCE`: Creates the pending message entry in the PEL even if certain specified IDs are not already in the PEL assigned to a different client. However, the message must exist in the stream, otherwise the IDs of non-existing messages are ignored. +5. `JUSTID`: Return just an array of IDs of messages successfully claimed, without returning the actual message. Using this option means the retry counter is not incremented. + +@examples + +``` +> XCLAIM mystream mygroup Alice 3600000 1526569498055-0 +1) 1) 1526569498055-0 + 2) 1) "message" + 2) "orange" +``` + +In the above example we claim the message with ID `1526569498055-0`, only if the message is idle for at least one hour without the original consumer or some other consumer making progress (acknowledging or claiming it), and assign the ownership to the consumer `Alice`. diff --git a/commands/xdel.md b/commands/xdel.md new file mode 100644 index 0000000000..57b9a8ba47 --- /dev/null +++ b/commands/xdel.md @@ -0,0 +1,47 @@ +Removes the specified entries from a stream, and returns the number of entries +deleted. This number may be less than the number of IDs passed to the command in +the case where some of the specified IDs do not exist in the stream. + +Normally you may think of a Redis stream as an append-only data structure, +however Redis streams are represented in memory, so we are also able to +delete entries. This may be useful, for instance, in order to comply with +certain privacy policies. + +## Understanding the low-level details of entry deletion + +Redis streams are represented in a way that makes them memory efficient: +a radix tree is used in order to index macro-nodes that pack linearly tens +of stream entries. Normally what happens when you delete an entry from a stream +is that the entry is not *really* evicted, it just gets marked as deleted. + +Eventually, if all the entries in a macro-node are marked as deleted, the whole +node is destroyed and the memory reclaimed. This means that if you delete +a large amount of entries from a stream, for instance more than 50% of the +entries appended to the stream, the memory usage per entry may increase, since +what happens is that the stream will become fragmented. However the stream +performance will remain the same. + +In future versions of Redis it is possible that we'll trigger a node garbage +collection in case a given macro-node reaches a given amount of deleted +entries. Currently with the usage we anticipate for this data structure, it is +not a good idea to add such complexity.
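+ +As a minimal sketch of the return value semantics (the first ID below is assumed to exist, while the second is hypothetical and absent from the stream, so only one entry is deleted): + +``` +> XDEL somestream 1526985054069-0 9999999999999-0 +(integer) 1 +```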
+ +@examples + +``` +> XADD mystream * a 1 +1538561698944-0 +> XADD mystream * b 2 +1538561700640-0 +> XADD mystream * c 3 +1538561701744-0 +> XDEL mystream 1538561700640-0 +(integer) 1 +> XRANGE mystream - + +1) 1) 1538561698944-0 + 2) 1) "a" + 2) "1" +2) 1) 1538561701744-0 + 2) 1) "c" + 2) "3" +``` diff --git a/commands/xgroup-create.md b/commands/xgroup-create.md new file mode 100644 index 0000000000..124a8a14f4 --- /dev/null +++ b/commands/xgroup-create.md @@ -0,0 +1,21 @@ +Create a new consumer group uniquely identified by `<groupname>` for the stream stored at `<key>`. + +Every group has a unique name in a given stream. +When a consumer group with the same name already exists, the command returns a `-BUSYGROUP` error. + +The command's `<id>` argument specifies the last delivered entry in the stream from the new group's perspective. +The special ID `$` is the ID of the last entry in the stream, but you can substitute it with any valid ID. + +For example, if you want the group's consumers to fetch the entire stream from the beginning, use zero as the starting ID for the consumer group: + + XGROUP CREATE mystream mygroup 0 + +By default, the `XGROUP CREATE` command expects that the target stream exists, and returns an error when it doesn't. +If a stream does not exist, you can create it automatically with a length of 0 by using the optional `MKSTREAM` subcommand as the last argument after the `<id>`: + + XGROUP CREATE mystream mygroup $ MKSTREAM + +To enable consumer group lag tracking, specify the optional `entries_read` named argument with an arbitrary ID. +An arbitrary ID is any ID that isn't the ID of the stream's first entry, last entry, or zero ("0-0") ID. +Use it to find out how many entries are between the arbitrary ID (excluding it) and the stream's last entry. +Set `entries_read` to the stream's `entries_added` minus the number of entries. diff --git a/commands/xgroup-createconsumer.md b/commands/xgroup-createconsumer.md new file mode 100644 index 0000000000..f81d468a02 --- /dev/null +++ b/commands/xgroup-createconsumer.md @@ -0,0 +1,4 @@ +Create a consumer named `<consumername>` in the consumer group `<groupname>` of the stream that's stored at `<key>`. + +Consumers are also created automatically whenever an operation, such as `XREADGROUP`, references a consumer that doesn't exist. +This is valid for `XREADGROUP` only when there is data in the stream. diff --git a/commands/xgroup-delconsumer.md b/commands/xgroup-delconsumer.md new file mode 100644 index 0000000000..57c71adb12 --- /dev/null +++ b/commands/xgroup-delconsumer.md @@ -0,0 +1,6 @@ +The `XGROUP DELCONSUMER` command deletes a consumer from the consumer group. + +Sometimes it may be useful to remove old consumers since they are no longer used. + +Note, however, that any pending messages that the consumer had will become unclaimable after it is deleted. +It is strongly recommended, therefore, that any pending messages are claimed or acknowledged prior to deleting the consumer from the group. diff --git a/commands/xgroup-destroy.md b/commands/xgroup-destroy.md new file mode 100644 index 0000000000..29af5f73cb --- /dev/null +++ b/commands/xgroup-destroy.md @@ -0,0 +1,3 @@ +The `XGROUP DESTROY` command completely destroys a consumer group. + +The consumer group will be destroyed even if there are active consumers and pending messages, so make sure to call this command only when really needed.
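+ +A minimal sketch (assuming the stream `mystream` and group `mygroup` exist); the reply is the number of destroyed consumer groups: + +``` +> XGROUP DESTROY mystream mygroup +(integer) 1 +```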
diff --git a/commands/xgroup-help.md b/commands/xgroup-help.md new file mode 100644 index 0000000000..405008ac80 --- /dev/null +++ b/commands/xgroup-help.md @@ -0,0 +1 @@ +The `XGROUP HELP` command returns a helpful text describing the different subcommands. diff --git a/commands/xgroup-setid.md b/commands/xgroup-setid.md new file mode 100644 index 0000000000..5ea91accdc --- /dev/null +++ b/commands/xgroup-setid.md @@ -0,0 +1,12 @@ +Set the **last delivered ID** for a consumer group. + +Normally, a consumer group's last delivered ID is set when the group is created with `XGROUP CREATE`. +The `XGROUP SETID` command allows modifying the group's last delivered ID, without having to delete and recreate the group. +For instance, if you want the consumers in a consumer group to re-process all the messages in a stream, you may want to set its next ID to 0: + + XGROUP SETID mystream mygroup 0 + +The optional `entries_read` argument can be specified to enable consumer group lag tracking for an arbitrary ID. +An arbitrary ID is any ID that isn't the ID of the stream's first entry, its last entry or the zero ("0-0") ID. +This can be useful when you know exactly how many entries are between the arbitrary ID (excluding it) and the stream's last entry. +In such cases, the `entries_read` can be set to the stream's `entries_added` minus the number of entries. diff --git a/commands/xgroup.md b/commands/xgroup.md new file mode 100644 index 0000000000..e7b517a978 --- /dev/null +++ b/commands/xgroup.md @@ -0,0 +1,3 @@ +This is a container command for stream consumer group management commands. + +To see the list of available commands you can call `XGROUP HELP`. diff --git a/commands/xinfo-consumers.md b/commands/xinfo-consumers.md new file mode 100644 index 0000000000..855a308ced --- /dev/null +++ b/commands/xinfo-consumers.md @@ -0,0 +1,33 @@ +This command returns the list of consumers that belong to the `<groupname>` consumer group of the stream stored at `<key>`. + +The following information is provided for each consumer in the group: + +* **name**: the consumer's name +* **pending**: the number of entries in the PEL: pending messages for the consumer, which are messages that were delivered but are yet to be acknowledged +* **idle**: the number of milliseconds that have passed since the consumer's last attempted interaction (Examples: `XREADGROUP`, `XCLAIM`, `XAUTOCLAIM`) +* **inactive**: the number of milliseconds that have passed since the consumer's last successful interaction (Examples: `XREADGROUP` that actually read some entries into the PEL, `XCLAIM`/`XAUTOCLAIM` that actually claimed some entries) + +Note that before Redis 7.2.0, **idle** used to denote the time passed since the last successful interaction. +In 7.2.0, **inactive** was added and **idle** was changed to denote the time passed since the last attempted interaction. + +@examples + +``` +> XINFO CONSUMERS mystream mygroup +1) 1) name + 2) "Alice" + 3) pending + 4) (integer) 1 + 5) idle + 6) (integer) 9104628 + 7) inactive + 8) (integer) 18104698 +2) 1) name + 2) "Bob" + 3) pending + 4) (integer) 1 + 5) idle + 6) (integer) 83841983 + 7) inactive + 8) (integer) 993841998 +``` diff --git a/commands/xinfo-groups.md b/commands/xinfo-groups.md new file mode 100644 index 0000000000..3074725158 --- /dev/null +++ b/commands/xinfo-groups.md @@ -0,0 +1,70 @@ +This command returns the list of all consumer groups of the stream stored at `<key>`.
+ +By default, only the following information is provided for each of the groups: + +* **name**: the consumer group's name +* **consumers**: the number of consumers in the group +* **pending**: the length of the group's pending entries list (PEL), which are messages that were delivered but are yet to be acknowledged +* **last-delivered-id**: the ID of the last entry delivered to the group's consumers +* **entries-read**: the logical "read counter" of the last entry delivered to the group's consumers +* **lag**: the number of entries in the stream that are still waiting to be delivered to the group's consumers, or a NULL when that number can't be determined. + +### Consumer group lag + +The lag of a given consumer group is the number of entries in the range between the group's `entries_read` and the stream's `entries_added`. +Put differently, it is the number of entries that are yet to be delivered to the group's consumers. + +The values and trends of this metric are helpful in making scaling decisions about the consumer group. +You can address high lag values by adding more consumers to the group, whereas low values may indicate that you can remove consumers from the group to scale it down. + +Redis reports the lag of a consumer group by keeping two counters: the number of all entries added to the stream and the number of logical reads made by the consumer group. +The lag is the difference between these two. + +The stream's counter (the `entries_added` field of the `XINFO STREAM` command) is incremented by one with every `XADD` and counts all of the entries added to the stream during its lifetime. + +The consumer group's counter, `entries_read`, is the logical counter of entries the group had read. +It is important to note that this counter is only a heuristic rather than an accurate counter, and therefore the use of the term "logical". +The counter attempts to reflect the number of entries that the group **should have read** to get to its current `last-delivered-id`. +The `entries_read` counter is accurate only in a perfect world, where a consumer group starts at the stream's first entry and processes all of its entries (i.e., no entries deleted before processing). + +There are two special cases in which this mechanism is unable to report the lag: + +1. A consumer group is created or set with an arbitrary last delivered ID (the `XGROUP CREATE` and `XGROUP SETID` commands, respectively). + An arbitrary ID is any ID that isn't the ID of the stream's first entry, its last entry or the zero ("0-0") ID. +2. One or more entries between the group's `last-delivered-id` and the stream's `last-generated-id` were deleted (with `XDEL` or a trimming operation). + +In both cases, the group's read counter is considered invalid, and the returned value is set to NULL to signal that the lag isn't currently available. + +However, the lag is only temporarily unavailable. +It is restored automatically during regular operation as consumers keep processing messages. +Once the consumer group delivers the last message in the stream to its members, it will be set with the correct logical read counter, and tracking its lag can be resumed. 
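+ +As a worked sketch of the arithmetic with hypothetical counters: for a stream whose `entries-added` is 2, a group that has read both entries reports a lag of 0, while a group that has read only one entry reports a lag of 1: + +``` +lag = entries_added - entries_read +    = 2 - 1 +    = 1 +```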
+ +@examples + +``` +> XINFO GROUPS mystream +1) 1) "name" + 2) "mygroup" + 3) "consumers" + 4) (integer) 2 + 5) "pending" + 6) (integer) 2 + 7) "last-delivered-id" + 8) "1638126030001-0" + 9) "entries-read" + 10) (integer) 2 + 11) "lag" + 12) (integer) 0 +2) 1) "name" + 2) "some-other-group" + 3) "consumers" + 4) (integer) 1 + 5) "pending" + 6) (integer) 0 + 7) "last-delivered-id" + 8) "1638126028070-0" + 9) "entries-read" + 10) (integer) 1 + 11) "lag" + 12) (integer) 1 +``` diff --git a/commands/xinfo-help.md b/commands/xinfo-help.md new file mode 100644 index 0000000000..34ad659b9c --- /dev/null +++ b/commands/xinfo-help.md @@ -0,0 +1 @@ +The `XINFO HELP` command returns a helpful text describing the different subcommands. diff --git a/commands/xinfo-stream.md b/commands/xinfo-stream.md new file mode 100644 index 0000000000..4fcbe1e2ee --- /dev/null +++ b/commands/xinfo-stream.md @@ -0,0 +1,145 @@ +This command returns information about the stream stored at `<key>`. + +The informative details provided by this command are: + +* **length**: the number of entries in the stream (see `XLEN`) +* **radix-tree-keys**: the number of keys in the underlying radix data structure +* **radix-tree-nodes**: the number of nodes in the underlying radix data structure +* **groups**: the number of consumer groups defined for the stream +* **last-generated-id**: the ID of the last entry that was added to the stream +* **max-deleted-entry-id**: the maximal entry ID that was deleted from the stream +* **entries-added**: the count of all entries added to the stream during its lifetime +* **first-entry**: the ID and field-value tuples of the first entry in the stream +* **last-entry**: the ID and field-value tuples of the last entry in the stream + +### The `FULL` modifier + +The optional `FULL` modifier provides a more verbose reply. +When provided, the `FULL` reply includes an **entries** array that consists of the stream entries (ID and field-value tuples) in ascending order. +Furthermore, **groups** is also an array, and for each of the consumer groups it consists of the information reported by `XINFO GROUPS` and `XINFO CONSUMERS`. + +The following information is provided for each of the groups: + +* **name**: the consumer group's name +* **last-delivered-id**: the ID of the last entry delivered to the group's consumers +* **entries-read**: the logical "read counter" of the last entry delivered to the group's consumers +* **lag**: the number of entries in the stream that are still waiting to be delivered to the group's consumers, or a NULL when that number can't be determined. +* **pel-count**: the length of the group's pending entries list (PEL), which are messages that were delivered but are yet to be acknowledged +* **pending**: an array with pending entries information (see below) +* **consumers**: an array with consumers information (see below) + +The following information is provided for each pending entry: + +1. The ID of the message. +2. The name of the consumer that fetched the message and still has to acknowledge it. We call it the current *owner* of the message. +3. The UNIX timestamp of when the message was delivered to this consumer. +4. The number of times this message was delivered.
+ +The following information is provided for each consumer: + +* **name**: the consumer's name +* **seen-time**: the UNIX timestamp of the last attempted interaction (Examples: `XREADGROUP`, `XCLAIM`, `XAUTOCLAIM`) +* **active-time**: the UNIX timestamp of the last successful interaction (Examples: `XREADGROUP` that actually read some entries into the PEL, `XCLAIM`/`XAUTOCLAIM` that actually claimed some entries) +* **pel-count**: the number of entries in the PEL: pending messages for the consumer, which are messages that were delivered but are yet to be acknowledged +* **pending**: an array with pending entries information, has the same structure as described above, except the consumer name is omitted (redundant, since anyway we are in a specific consumer context) + +Note that before Redis 7.2.0, **seen-time** used to denote the last successful interaction. +In 7.2.0, **active-time** was added and **seen-time** was changed to denote the last attempted interaction. + +The `COUNT` option can be used to limit the number of stream and PEL entries that are returned (The first `` entries are returned). +The default `COUNT` is 10 and a `COUNT` of 0 means that all entries will be returned (execution time may be long if the stream has a lot of entries). + +@examples + +Default reply: + +``` +> XINFO STREAM mystream + 1) "length" + 2) (integer) 2 + 3) "radix-tree-keys" + 4) (integer) 1 + 5) "radix-tree-nodes" + 6) (integer) 2 + 7) "last-generated-id" + 8) "1638125141232-0" + 9) "max-deleted-entry-id" +10) "0-0" +11) "entries-added" +12) (integer) 2 +13) "groups" +14) (integer) 1 +15) "first-entry" +16) 1) "1638125133432-0" + 2) 1) "message" + 2) "apple" +17) "last-entry" +18) 1) "1638125141232-0" + 2) 1) "message" + 2) "banana" +``` + +Full reply: + +``` +> XADD mystream * foo bar +"1638125133432-0" +> XADD mystream * foo bar2 +"1638125141232-0" +> XGROUP CREATE mystream mygroup 0-0 +OK +> XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS mystream > +1) 1) "mystream" + 2) 1) 1) "1638125133432-0" + 2) 1) "foo" + 2) "bar" +> XINFO STREAM mystream FULL + 1) "length" + 2) (integer) 2 + 3) "radix-tree-keys" + 4) (integer) 1 + 5) "radix-tree-nodes" + 6) (integer) 2 + 7) "last-generated-id" + 8) "1638125141232-0" + 9) "max-deleted-entry-id" +10) "0-0" +11) "entries-added" +12) (integer) 2 +13) "entries" +14) 1) 1) "1638125133432-0" + 2) 1) "foo" + 2) "bar" + 2) 1) "1638125141232-0" + 2) 1) "foo" + 2) "bar2" +15) "groups" +16) 1) 1) "name" + 2) "mygroup" + 3) "last-delivered-id" + 4) "1638125133432-0" + 5) "entries-read" + 6) (integer) 1 + 7) "lag" + 8) (integer) 1 + 9) "pel-count" + 10) (integer) 1 + 11) "pending" + 12) 1) 1) "1638125133432-0" + 2) "Alice" + 3) (integer) 1638125153423 + 4) (integer) 1 + 13) "consumers" + 14) 1) 1) "name" + 2) "Alice" + 3) "seen-time" + 4) (integer) 1638125133422 + 5) "active-time" + 6) (integer) 1638125133432 + 7) "pel-count" + 8) (integer) 1 + 9) "pending" + 10) 1) 1) "1638125133432-0" + 2) (integer) 1638125133432 + 3) (integer) 1 +``` diff --git a/commands/xinfo.md b/commands/xinfo.md new file mode 100644 index 0000000000..93fe9a2947 --- /dev/null +++ b/commands/xinfo.md @@ -0,0 +1,3 @@ +This is a container command for stream introspection commands. + +To see the list of available commands you can call `XINFO HELP`. diff --git a/commands/xlen.md b/commands/xlen.md new file mode 100644 index 0000000000..e996f7ee18 --- /dev/null +++ b/commands/xlen.md @@ -0,0 +1,18 @@ +Returns the number of entries inside a stream. 
If the specified key does not +exist the command returns zero, as if the stream was empty. +However, note that unlike other Redis types, zero-length streams are +possible, so you should call `TYPE` or `EXISTS` in order to check if +a key exists or not. + +Streams are not auto-deleted once they have no entries inside (for instance +after an `XDEL` call), because the stream may have consumer groups +associated with it. + +@examples + +```cli +XADD mystream * item 1 +XADD mystream * item 2 +XADD mystream * item 3 +XLEN mystream +``` diff --git a/commands/xpending.md b/commands/xpending.md new file mode 100644 index 0000000000..c22ae8e38d --- /dev/null +++ b/commands/xpending.md @@ -0,0 +1,129 @@ +Fetching data from a stream via a consumer group, and not acknowledging +such data, has the effect of creating *pending entries*. This is +well explained in the `XREADGROUP` command, and even better in our +[introduction to Redis Streams](/topics/streams-intro). The `XACK` command +will immediately remove the pending entry from the Pending Entries List (PEL) +since once a message is successfully processed, there is no longer any need +for the consumer group to track it and to remember the current owner +of the message. + +The `XPENDING` command is the interface to inspect the list of pending +messages, and is thus a very important command in order to observe +and understand what is happening with a stream's consumer groups: what +clients are active, what messages are pending to be consumed, or to see +if there are idle messages. Moreover, this command, together with `XCLAIM`, +is used in order to implement recovery of consumers that have been failing +for a long time, and as a result certain messages are not processed: a +different consumer can claim the message and continue. This is better +explained in the [streams intro](/topics/streams-intro) and in the +`XCLAIM` command page, and is not covered here. + +## Summary form of XPENDING + +When `XPENDING` is called with just a key name and a consumer group +name, it just outputs a summary about the pending messages in a given +consumer group. In the following example, we create a consumer group and +immediately create a pending message by reading from the group with +`XREADGROUP`. + +``` +> XGROUP CREATE mystream group55 0-0 +OK + +> XREADGROUP GROUP group55 consumer-123 COUNT 1 STREAMS mystream > +1) 1) "mystream" + 2) 1) 1) 1526984818136-0 + 2) 1) "duration" + 2) "1532" + 3) "event-id" + 4) "5" + 5) "user-id" + 6) "7782813" +``` + +We expect the pending entries list for the consumer group `group55` to +have a message right now: the consumer named `consumer-123` fetched the +message without acknowledging its processing. The simple `XPENDING` +form will give us this information: + +``` +> XPENDING mystream group55 +1) (integer) 1 +2) 1526984818136-0 +3) 1526984818136-0 +4) 1) 1) "consumer-123" + 2) "1" +``` + +In this form, the command outputs the total number of pending messages for this +consumer group, which is one, followed by the smallest and greatest ID among the +pending messages, and then lists every consumer in the consumer group with +at least one pending message, and the number of pending messages it has. + +## Extended form of XPENDING + +The summary provides a good overview, but sometimes we are interested in the +details.
In order to see all the pending messages with more associated +information we need to also pass a range of IDs, in a similar way as we do with +`XRANGE`, and a non-optional *count* argument, to limit the number +of messages returned per call: + +``` +> XPENDING mystream group55 - + 10 +1) 1) 1526984818136-0 + 2) "consumer-123" + 3) (integer) 196415 + 4) (integer) 1 +``` + +In the extended form we no longer see the summary information; instead, there +is detailed information for each message in the pending entries list. For +each message four attributes are returned: + +1. The ID of the message. +2. The name of the consumer that fetched the message and still has to acknowledge it. We call it the current *owner* of the message. +3. The number of milliseconds that elapsed since the last time this message was delivered to this consumer. +4. The number of times this message was delivered. + +The deliveries counter, which is the fourth element in the array, is incremented +when some other consumer *claims* the message with `XCLAIM`, or when the +message is delivered again via `XREADGROUP`, when accessing the history +of a consumer in a consumer group (see the `XREADGROUP` page for more info). + +It is possible to pass an additional argument to the command, in order +to see the messages having a specific owner: + +``` +> XPENDING mystream group55 - + 10 consumer-123 +``` + +But in the above case the output would be the same, since we have pending +messages only for a single consumer. However, what is important to keep in +mind is that this operation, filtering by a specific consumer, is not +inefficient even when there are many pending messages from many consumers: +we have a pending entries list data structure both globally, and for +every consumer, so we can very efficiently show just messages pending for +a single consumer. + +## Idle time filter + +It is also possible to filter pending stream entries by their idle-time, +given in milliseconds (useful for `XCLAIM`ing entries that have not been +processed for some time): + +``` +> XPENDING mystream group55 IDLE 9000 - + 10 +> XPENDING mystream group55 IDLE 9000 - + 10 consumer-123 +``` + +The first case will return the first 10 (or fewer) PEL entries of the entire group +that are idle for over 9 seconds, whereas in the second case only those of +`consumer-123`. + +## Exclusive ranges and iterating the PEL + +The `XPENDING` command allows iterating over the pending entries just like +`XRANGE` and `XREVRANGE` allow for the stream's entries. You can do this by +prefixing the ID of the last-read pending entry with the `(` character that +denotes an open (exclusive) range, and providing it to the subsequent call to the +command. diff --git a/commands/xrange.md b/commands/xrange.md new file mode 100644 index 0000000000..a8d4e89b67 --- /dev/null +++ b/commands/xrange.md @@ -0,0 +1,213 @@ +The command returns the stream entries matching a given range of IDs. +The range is specified by a minimum and maximum ID. All the entries having +an ID between the two specified IDs, or exactly equal to one of the two IDs +(closed interval), are returned. + +The `XRANGE` command has a number of applications: + +* Returning items in a specific time range. This is possible because + Stream IDs are [related to time](/topics/streams-intro). +* Iterating a stream incrementally, returning just + a few items at every iteration. However, it is semantically much more + robust than the `SCAN` family of functions.
+* Fetching a single entry from a stream, providing the ID of the entry
+  to fetch twice: as the start and end of the query interval.
+
+The command also has a reciprocal command returning items in the
+reverse order, called `XREVRANGE`, which is otherwise identical.
+
+## `-` and `+` special IDs
+
+The `-` and `+` special IDs mean respectively the minimum ID possible
+and the maximum ID possible inside a stream, so the following command
+will just return every entry in the stream:
+
+```
+> XRANGE somestream - +
+1) 1) 1526985054069-0
+   2) 1) "duration"
+      2) "72"
+      3) "event-id"
+      4) "9"
+      5) "user-id"
+      6) "839248"
+2) 1) 1526985069902-0
+   2) 1) "duration"
+      2) "415"
+      3) "event-id"
+      4) "2"
+      5) "user-id"
+      6) "772213"
+... other entries here ...
+```
+
+These special IDs are equivalent to passing the smallest and greatest
+possible IDs explicitly; they are simply nicer to type.
+
+## Incomplete IDs
+
+Stream IDs are composed of two parts, a Unix millisecond time stamp and a
+sequence number for entries inserted in the same millisecond. It is possible
+to use `XRANGE` specifying just the first part of the ID, the millisecond time,
+like in the following example:
+
+```
+> XRANGE somestream 1526985054069 1526985055069
+```
+
+In this case, `XRANGE` will auto-complete the start interval with `-0`
+and the end interval with `-18446744073709551615`, in order to return all the
+entries that were generated between a given millisecond and the end of
+the other specified millisecond. This also means that by repeating the same
+millisecond twice, we get all the entries within that millisecond,
+because the sequence number range will be from zero to the maximum.
+
+Used in this way `XRANGE` works as a range query command to obtain entries
+in a specified time range. This is very handy in order to access the history
+of past events in a stream.
+
+## Exclusive ranges
+
+The range is closed (inclusive) by default, meaning that the reply can include
+entries with IDs matching the query's start and end intervals. It is possible
+to specify an open interval (exclusive) by prefixing the ID with the
+character `(`. This is useful for iterating the stream, as explained below.
+
+## Returning a maximum number of entries
+
+Using the **COUNT** option it is possible to reduce the number of entries
+reported. This is a very important feature even if it may look marginal,
+because it allows you, for instance, to model operations such as *give me
+the entry greater than or equal to the following*:
+
+```
+> XRANGE somestream 1526985054069-0 + COUNT 1
+1) 1) 1526985054069-0
+   2) 1) "duration"
+      2) "72"
+      3) "event-id"
+      4) "9"
+      5) "user-id"
+      6) "839248"
+```
+
+In the above case the entry `1526985054069-0` happens to exist; otherwise
+the server would have returned the next one. Using `COUNT` is also the basis
+for using `XRANGE` as an iterator.
+
+## Iterating a stream
+
+In order to iterate a stream, we can proceed as follows. Let's assume that
+we want two elements per iteration. We start fetching the first two
+elements, which is trivial:
+
+```
+> XRANGE writers - + COUNT 2
+1) 1) 1526985676425-0
+   2) 1) "name"
+      2) "Virginia"
+      3) "surname"
+      4) "Woolf"
+2) 1) 1526985685298-0
+   2) 1) "name"
+      2) "Jane"
+      3) "surname"
+      4) "Austen"
+```
+
+Then, instead of starting the iteration again from `-`, as the start
+of the range we use the entry ID of the *last* entry returned by the
+previous `XRANGE` call, making the range exclusive.
+ +The ID of the last entry is `1526985685298-0`, so we just prefix it +with a '(', and continue our iteration: + +``` +> XRANGE writers (1526985685298-0 + COUNT 2 +1) 1) 1526985691746-0 + 2) 1) "name" + 2) "Toni" + 3) "surname" + 4) "Morrison" +2) 1) 1526985712947-0 + 2) 1) "name" + 2) "Agatha" + 3) "surname" + 4) "Christie" +``` + +And so forth. Eventually this will allow to visit all the entries in the +stream. Obviously, we can start the iteration from any ID, or even from +a specific time, by providing a given incomplete start ID. Moreover, we +can limit the iteration to a given ID or time, by providing an end +ID or incomplete ID instead of `+`. + +The command `XREAD` is also able to iterate the stream. +The command `XREVRANGE` can iterate the stream reverse, from higher IDs +(or times) to lower IDs (or times). + +### Iterating with earlier versions of Redis + +While exclusive range intervals are only available from Redis 6.2, it is still +possible to use a similar stream iteration pattern with earlier versions. You +start fetching from the stream the same way as described above to obtain the +first entries. + +For the subsequent calls, you'll need to programmatically advance the last +entry's ID returned. Most Redis client should abstract this detail, but the +implementation can also be in the application if needed. In the example above, +this means incrementing the sequence of `1526985685298-0` by one, from 0 to 1. +The second call would, therefore, be: + +``` +> XRANGE writers 1526985685298-1 + COUNT 2 +1) 1) 1526985691746-0 + 2) 1) "name" + 2) "Toni" +... +``` + +Also, note that once the sequence part of the last ID equals +18446744073709551615, you'll need to increment the timestamp and reset the +sequence part to 0. For example, incrementing the ID +`1526985685298-18446744073709551615` should result in `1526985685299-0`. + +A symmetrical pattern applies to iterating the stream with `XREVRANGE`. The +only difference is that the client needs to decrement the ID for the subsequent +calls. When decrementing an ID with a sequence part of 0, the timestamp needs +to be decremented by 1 and the sequence set to 18446744073709551615. + +## Fetching single items + +If you look for an `XGET` command you'll be disappointed because `XRANGE` +is effectively the way to go in order to fetch a single entry from a +stream. All you have to do is to specify the ID two times in the arguments +of XRANGE: + +``` +> XRANGE mystream 1526984818136-0 1526984818136-0 +1) 1) 1526984818136-0 + 2) 1) "duration" + 2) "1532" + 3) "event-id" + 4) "5" + 5) "user-id" + 6) "7782813" +``` + +## Additional information about streams + +For further information about Redis streams please check our +[introduction to Redis Streams document](/topics/streams-intro). + +@examples + +```cli +XADD writers * name Virginia surname Woolf +XADD writers * name Jane surname Austen +XADD writers * name Toni surname Morrison +XADD writers * name Agatha surname Christie +XADD writers * name Ngozi surname Adichie +XLEN writers +XRANGE writers - + COUNT 2 +``` diff --git a/commands/xread.md b/commands/xread.md new file mode 100644 index 0000000000..0a6741519f --- /dev/null +++ b/commands/xread.md @@ -0,0 +1,202 @@ +Read data from one or multiple streams, only returning entries with an +ID greater than the last received ID reported by the caller. +This command has an option to block if items are not available, in a similar +fashion to `BRPOP` or `BZPOPMIN` and others. 
+ +Please note that before reading this page, if you are new to streams, +we recommend to read [our introduction to Redis Streams](/topics/streams-intro). + +## Non-blocking usage + +If the **BLOCK** option is not used, the command is synchronous, and can +be considered somewhat related to `XRANGE`: it will return a range of items +inside streams, however it has two fundamental differences compared to `XRANGE` +even if we just consider the synchronous usage: + +* This command can be called with multiple streams if we want to read at + the same time from a number of keys. This is a key feature of `XREAD` because + especially when blocking with **BLOCK**, to be able to listen with a single + connection to multiple keys is a vital feature. +* While `XRANGE` returns items in a range of IDs, `XREAD` is more suited in + order to consume the stream starting from the first entry which is greater + than any other entry we saw so far. So what we pass to `XREAD` is, for each + stream, the ID of the last element that we received from that stream. + +For example, if I have two streams `mystream` and `writers`, and I want to +read data from both the streams starting from the first element they contain, +I could call `XREAD` like in the following example. + +Note: we use the **COUNT** option in the example, so that for each stream +the call will return at maximum two elements per stream. + +``` +> XREAD COUNT 2 STREAMS mystream writers 0-0 0-0 +1) 1) "mystream" + 2) 1) 1) 1526984818136-0 + 2) 1) "duration" + 2) "1532" + 3) "event-id" + 4) "5" + 5) "user-id" + 6) "7782813" + 2) 1) 1526999352406-0 + 2) 1) "duration" + 2) "812" + 3) "event-id" + 4) "9" + 5) "user-id" + 6) "388234" +2) 1) "writers" + 2) 1) 1) 1526985676425-0 + 2) 1) "name" + 2) "Virginia" + 3) "surname" + 4) "Woolf" + 2) 1) 1526985685298-0 + 2) 1) "name" + 2) "Jane" + 3) "surname" + 4) "Austen" +``` + +The **STREAMS** option is mandatory and MUST be the final option because +such option gets a variable length of argument in the following format: + + STREAMS key_1 key_2 key_3 ... key_N ID_1 ID_2 ID_3 ... ID_N + +So we start with a list of keys, and later continue with all the associated +IDs, representing *the last ID we received for that stream*, so that the +call will serve us only greater IDs from the same stream. + +For instance in the above example, the last items that we received +for the stream `mystream` has ID `1526999352406-0`, while for the +stream `writers` has the ID `1526985685298-0`. + +To continue iterating the two streams I'll call: + +``` +> XREAD COUNT 2 STREAMS mystream writers 1526999352406-0 1526985685298-0 +1) 1) "mystream" + 2) 1) 1) 1526999626221-0 + 2) 1) "duration" + 2) "911" + 3) "event-id" + 4) "7" + 5) "user-id" + 6) "9488232" +2) 1) "writers" + 2) 1) 1) 1526985691746-0 + 2) 1) "name" + 2) "Toni" + 3) "surname" + 4) "Morrison" + 2) 1) 1526985712947-0 + 2) 1) "name" + 2) "Agatha" + 3) "surname" + 4) "Christie" +``` + +And so forth. Eventually, the call will not return any item, but just an +empty array, then we know that there is nothing more to fetch from our +stream (and we would have to retry the operation, hence this command +also supports a blocking mode). + +## Incomplete IDs + +To use incomplete IDs is valid, like it is valid for `XRANGE`. 
However,
+here the sequence part of the ID, if missing, is always interpreted as
+zero, so the command:
+
+```
+> XREAD COUNT 2 STREAMS mystream writers 0 0
+```
+
+is exactly equivalent to
+
+```
+> XREAD COUNT 2 STREAMS mystream writers 0-0 0-0
+```
+
+## Blocking for data
+
+In its synchronous form, the command can get new data as long as there
+are more items available. However, at some point, we'll have to wait for
+producers of data to use `XADD` to push new entries inside the streams
+we are consuming. In order to avoid polling at a fixed or adaptive interval,
+the command is able to block if it cannot return any data, according
+to the specified streams and IDs, and automatically unblocks once one of
+the requested keys accepts data.
+
+It is important to understand that this command *fans out* to all the
+clients that are waiting for the same range of IDs, so every consumer will
+get a copy of the data, unlike what happens when blocking list pop
+operations are used.
+
+In order to block, the **BLOCK** option is used, together with the number
+of milliseconds we want to block before timing out. Normally Redis blocking
+commands take timeouts in seconds, however this command takes a millisecond
+timeout, even if normally the server has a timeout resolution near
+0.1 seconds. This way it is possible to block for a shorter time in
+certain use cases, and if the server internals improve over time, it is
+possible that the resolution of timeouts will improve as well.
+
+When the **BLOCK** option is passed, but there is data to return in at
+least one of the streams passed, the command is executed synchronously,
+*exactly as if the BLOCK option were missing*.
+
+This is an example of a blocking invocation, where the command later returns
+a null reply because the timeout has elapsed without new data arriving:
+
+```
+> XREAD BLOCK 1000 STREAMS mystream 1526999626221-0
+(nil)
+```
+
+## The special `$` ID
+
+When blocking, sometimes we want to receive just entries that are added
+to the stream via `XADD` starting from the moment we block. In such a case
+we are not interested in the history of already added entries. For
+this use case, we would have to check the stream top element ID, and use
+such ID in the `XREAD` command line. This is not clean and requires calling
+other commands, so instead it is possible to use the special `$`
+ID to signal that we want only the new entries.
+
+It is **very important** to understand that you should use the `$`
+ID only for the first call to `XREAD`. Later the ID should be the one
+of the last reported item in the stream, otherwise you could miss all
+the entries that are added in between.
+
+This is what a typical `XREAD` call looks like in the first iteration
+of a consumer willing to consume only new entries:
+
+```
+> XREAD BLOCK 5000 COUNT 100 STREAMS mystream $
+```
+
+Once we get some replies, the next call will be something like:
+
+```
+> XREAD BLOCK 5000 COUNT 100 STREAMS mystream 1526999644174-3
+```
+
+And so forth.
+
+## How multiple clients blocked on a single stream are served
+
+Blocking operations on lists or sorted sets have a *pop* behavior.
+Basically, the element is removed from the list or sorted set in order
+to be returned to the client. In this scenario you want the items
+to be consumed in a fair way, depending on the moment clients blocked
+on a given key arrived. Normally Redis uses FIFO semantics in these
+use cases.
+
+However note that with streams this is not a problem: stream entries
+are not removed from the stream when clients are served, so every
+client waiting will be served as soon as an `XADD` command provides
+data to the stream.
+
+Reading the [Redis Streams introduction](/topics/streams-intro) is highly
+suggested in order to understand more about the streams overall behavior
+and semantics.
\ No newline at end of file
diff --git a/commands/xreadgroup.md b/commands/xreadgroup.md
new file mode 100644
index 0000000000..e94d2bb619
--- /dev/null
+++ b/commands/xreadgroup.md
@@ -0,0 +1,134 @@
+The `XREADGROUP` command is a special version of the `XREAD` command
+with support for consumer groups. You will probably need to understand the
+`XREAD` command before this page makes sense.
+
+Moreover, if you are new to streams, we recommend reading our
+[introduction to Redis Streams](/topics/streams-intro).
+Make sure to understand the concept of a consumer group in the introduction,
+so that following how this command works will be simpler.
+
+## Consumer groups in 30 seconds
+
+The difference between this command and the vanilla `XREAD` is that this
+one supports consumer groups.
+
+Without consumer groups, just using `XREAD`, all the clients are served with all the entries arriving in a stream. Instead, using consumer groups with `XREADGROUP`, it is possible to create groups of clients that consume different parts of the messages arriving in a given stream. If, for instance, the stream gets the new entries A, B, and C and there are two consumers reading via a consumer group, one client will get, for instance, the messages A and C, and the other the message B, and so forth.
+
+Within a consumer group, a given consumer (that is, just a client consuming messages from the stream) has to identify itself with a unique *consumer name*, which is just a string.
+
+One of the guarantees of consumer groups is that a given consumer can only see the history of messages that were delivered to it, so a message has just a single owner. However there is a special feature called *message claiming* that allows other consumers to claim messages in case there is a non-recoverable failure of some consumer. In order to implement such semantics, consumer groups require explicit acknowledgment of the messages successfully processed by the consumer, via the `XACK` command. This is needed because the stream will track, for each consumer group, who is processing what message.
+
+This is how to decide whether you need a consumer group or not:
+
+1. If you have a stream and multiple clients, and you want all the clients to get all the messages, you do not need a consumer group.
+2. If you have a stream and multiple clients, and you want the stream to be *partitioned* or *sharded* across your clients, so that each client will get a subset of the messages arriving in a stream, you need a consumer group.
+
+## Differences between XREAD and XREADGROUP
+
+From the point of view of the syntax, the commands are almost the same,
+however `XREADGROUP` *requires* a special and mandatory option:
+
+    GROUP <group-name> <consumer-name>
+
+The group name is just the name of a consumer group associated with the stream.
+The group is created using the `XGROUP` command. The consumer name is the
+string that is used by the client to identify itself inside the group.
+The consumer is auto-created inside the consumer group the first time it
+is seen. Different clients should select a different consumer name.
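+
+For example, a minimal session (the stream, group, and consumer names here
+are only illustrative) might look like this:
+
+```
+> XGROUP CREATE mystream mygroup $ MKSTREAM
+OK
+> XADD mystream * field value
+"1526999626221-0"
+> XREADGROUP GROUP mygroup consumer-1 COUNT 1 STREAMS mystream >
+1) 1) "mystream"
+   2) 1) 1) 1526999626221-0
+         2) 1) "field"
+            2) "value"
+```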
+
+When you read with `XREADGROUP`, the server will *remember* that a given
+message was delivered to you: the message will be stored inside the
+consumer group in what is called a Pending Entries List (PEL), that is
+a list of message IDs delivered but not yet acknowledged.
+
+The client will have to acknowledge the message processing using `XACK`
+in order for the pending entry to be removed from the PEL. The PEL
+can be inspected using the `XPENDING` command.
+
+The `NOACK` option can be used to avoid adding the message to the PEL in
+cases where reliability is not a requirement and the occasional message loss
+is acceptable. This is equivalent to acknowledging the message when it is read.
+
+The ID to specify in the **STREAMS** option when using `XREADGROUP` can
+be one of the following two:
+
+* The special `>` ID, which means that the consumer wants to receive only messages that were *never delivered to any other consumer*. It just means: give me new messages.
+* Any other ID, that is, 0 or any other valid ID or incomplete ID (just the millisecond time part), will have the effect of returning entries that are pending for the consumer sending the command with IDs greater than the one provided. So basically if the ID is not `>`, then the command will just let the client access its pending entries: messages delivered to it, but not yet acknowledged. Note that in this case, both `BLOCK` and `NOACK` are ignored.
+
+Like `XREAD`, the `XREADGROUP` command can be used in a blocking way. There
+are no differences in this regard.
+
+## What happens when a message is delivered to a consumer?
+
+Two things:
+
+1. If the message was never delivered to anyone, that is, if we are talking about a new message, then a PEL (Pending Entries List) is created.
+2. If instead the message was already delivered to this consumer, and it is just re-fetching the same message again, then the *last delivery time* is updated to the current time, and the *number of deliveries* is incremented by one. You can access those message properties using the `XPENDING` command.
+
+## Usage example
+
+Normally you use the command like this in order to get new messages and
+process them. In pseudo-code:
+
+```
+WHILE true
+    entries = XREADGROUP GROUP $GroupName $ConsumerName BLOCK 2000 COUNT 10 STREAMS mystream >
+    if entries == nil
+        puts "Timeout... try again"
+        CONTINUE
+    end
+
+    FOREACH entries AS stream_entries
+        FOREACH stream_entries as message
+            process_message(message.id,message.fields)
+
+            # ACK the message as processed
+            XACK mystream $GroupName message.id
+        END
+    END
+END
+```
+
+In this way the example consumer code will fetch only new messages, process
+them, and acknowledge them via `XACK`. However the example code above is
+not complete, because it does not handle recovering after a crash. If we
+crash in the middle of processing messages, our messages will remain in the
+pending entries list, so we can access our history by initially giving
+`XREADGROUP` an ID of 0 and performing the same loop. Once the reply to an
+ID of 0 is an empty set of messages, we know that we have processed and
+acknowledged all the pending messages: we can start using `>` as the ID, in
+order to get the new messages and rejoin the consumers that are processing
+new things.
+
+To see how the command actually replies, please check the `XREAD` command page.
+
+## What happens when a pending message is deleted?
+
+Entries may be deleted from the stream due to trimming or explicit calls to `XDEL` at any time.
+By design, Redis doesn't prevent the deletion of entries that are present in the stream's PELs.
+When this happens, the PELs retain the deleted entries' IDs, but the actual entry payload is no longer available.
+Therefore, when reading such PEL entries, Redis will return a null value in place of their respective data.
+
+Example:
+
+```
+> XADD mystream 1 myfield mydata
+"1-0"
+> XGROUP CREATE mystream mygroup 0
+OK
+> XREADGROUP GROUP mygroup myconsumer STREAMS mystream >
+1) 1) "mystream"
+   2) 1) 1) "1-0"
+         2) 1) "myfield"
+            2) "mydata"
+> XDEL mystream 1-0
+(integer) 1
+> XREADGROUP GROUP mygroup myconsumer STREAMS mystream 0
+1) 1) "mystream"
+   2) 1) 1) "1-0"
+         2) (nil)
+```
+
+Reading the [Redis Streams introduction](/topics/streams-intro) is highly
+suggested in order to understand more about the streams overall behavior
+and semantics.
diff --git a/commands/xrevrange.md b/commands/xrevrange.md
new file mode 100644
index 0000000000..ab10209082
--- /dev/null
+++ b/commands/xrevrange.md
@@ -0,0 +1,27 @@
+This command is exactly like `XRANGE`, but with the notable difference of
+returning the entries in reverse order, and also taking the start-end
+range in reverse order: in `XREVRANGE` you need to state the *end* ID
+and later the *start* ID, and the command will produce all the elements
+between (or exactly matching) the two IDs, starting from the *end* side.
+
+So for instance, to get all the elements from the highest ID to the lowest
+ID one could use:
+
+    XREVRANGE somestream + -
+
+Similarly, to get just the last element added to the stream, it is
+enough to send:
+
+    XREVRANGE somestream + - COUNT 1
+
+@examples
+
+```cli
+XADD writers * name Virginia surname Woolf
+XADD writers * name Jane surname Austen
+XADD writers * name Toni surname Morrison
+XADD writers * name Agatha surname Christie
+XADD writers * name Ngozi surname Adichie
+XLEN writers
+XREVRANGE writers + - COUNT 1
+```
diff --git a/commands/xsetid.md b/commands/xsetid.md
new file mode 100644
index 0000000000..39b593cbe3
--- /dev/null
+++ b/commands/xsetid.md
@@ -0,0 +1,2 @@
+The `XSETID` command is an internal command.
+It is used by a Redis master to replicate the last delivered ID of streams.
\ No newline at end of file
diff --git a/commands/xtrim.md b/commands/xtrim.md
new file mode 100644
index 0000000000..b26fec018e
--- /dev/null
+++ b/commands/xtrim.md
@@ -0,0 +1,53 @@
+`XTRIM` trims the stream by evicting older entries (entries with lower IDs) if needed.
+
+Trimming the stream can be done using one of these strategies:
+
+* `MAXLEN`: Evicts entries as long as the stream's length exceeds the specified `threshold`, where `threshold` is a positive integer.
+* `MINID`: Evicts entries with IDs lower than `threshold`, where `threshold` is a stream ID.
+
+For example, this will trim the stream to exactly the latest 1000 items:
+
+```
+XTRIM mystream MAXLEN 1000
+```
+
+Whereas in this example, all entries that have an ID lower than 649085820-0 will be evicted:
+
+```
+XTRIM mystream MINID 649085820
+```
+
+By default, or when provided with the optional `=` argument, the command performs exact trimming.
+
+Depending on the strategy, exact trimming means:
+
+* `MAXLEN`: the trimmed stream's length will be exactly the minimum between its original length and the specified `threshold`.
+* `MINID`: the oldest ID in the stream will be exactly the maximum between its original oldest ID and the specified `threshold` (see the sketch below).
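+
+For instance, a sketch of exact `MINID` trimming (the key and the explicit
+entry IDs below are made up for illustration):
+
+```
+> XADD mystream 100-0 field a
+"100-0"
+> XADD mystream 200-0 field b
+"200-0"
+> XADD mystream 300-0 field c
+"300-0"
+> XTRIM mystream MINID 200
+(integer) 1
+> XLEN mystream
+(integer) 2
+```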
+
+Nearly exact trimming
+---
+
+Because exact trimming may require additional effort from the Redis server, the optional `~` argument can be provided to make it more efficient.
+
+For example:
+
+```
+XTRIM mystream MAXLEN ~ 1000
+```
+
+The `~` argument between the `MAXLEN` strategy and the `threshold` means that the user is requesting to trim the stream so its length is **at least** the `threshold`, but possibly slightly more.
+In this case, Redis will stop trimming early when performance can be gained (for example, when a whole macro node in the data structure can't be removed).
+This makes trimming much more efficient, and it is usually what you want, although after trimming, the stream may have a few tens of additional entries over the `threshold`.
+
+Another way to control the amount of work done by the command when using the `~` argument is the `LIMIT` clause.
+When used, it specifies the maximum `count` of entries that will be evicted.
+When `LIMIT` and `count` aren't specified, the default value of 100 * the number of entries in a macro node will be implicitly used as the `count`.
+Specifying the value 0 as `count` disables the limiting mechanism entirely.
+
+@examples
+
+```cli
+XADD mystream * field1 A field2 B field3 C field4 D
+XTRIM mystream MAXLEN 2
+XRANGE mystream - +
+```
diff --git a/commands/zadd.md b/commands/zadd.md
index fc82a2984b..1f11c6ed28 100644
--- a/commands/zadd.md
+++ b/commands/zadd.md
@@ -1,34 +1,68 @@
-@complexity
+Adds all the specified members with the specified scores to the sorted set
+stored at `key`.
+It is possible to specify multiple score / member pairs.
+If a specified member is already a member of the sorted set, the score is
+updated and the element reinserted at the right position to ensure the correct
+ordering.
-O(log(N)) where N is the number of elements in the sorted set.
+If `key` does not exist, a new sorted set with the specified members as sole
+members is created, as if the sorted set was empty. If the key exists but does not hold a sorted set, an error is returned.
-Adds all the specified members with the specified scores to the sorted set stored at `key`. It is possible to specify multiple score/member pairs.
-If a specified member is already a member of the sorted set, the score is updated and the element reinserted at the right position to ensure the correct ordering. If `key` does not exist, a new sorted set with the specified members as sole
-members is created, like if the sorted set was empty.
-If the key exists but does not hold a sorted set, an error is returned.
+The score values should be the string representation of a double precision floating point number. `+inf` and `-inf` values are valid values as well.
-The score values should be the string representation of a numeric value, and
-accepts double precision floating point numbers.
+ZADD options
+---
+
+ZADD supports a list of options, specified after the name of the key and before
+the first score argument. Options are:
+
+* **XX**: Only update elements that already exist. Don't add new elements.
+* **NX**: Only add new elements. Don't update already existing elements.
+* **LT**: Only update existing elements if the new score is **less than** the current score. This flag doesn't prevent adding new elements.
+* **GT**: Only update existing elements if the new score is **greater than** the current score. This flag doesn't prevent adding new elements.
+* **CH**: Modify the return value from the number of new elements added to the total number of elements changed (CH is an abbreviation of *changed*). Changed elements are **new elements added** and elements already existing for which **the score was updated**. So elements specified in the command line having the same score as they had in the past are not counted. Note: normally the return value of `ZADD` only counts the number of new elements added.
+* **INCR**: When this option is specified `ZADD` acts like `ZINCRBY`. Only one score-element pair can be specified in this mode.
+
+Note: The **GT**, **LT** and **NX** options are mutually exclusive.
+
+Range of integer scores that can be expressed precisely
+---
+
+Redis sorted sets use a *double 64-bit floating point number* to represent the score. In all the architectures we support, this is represented as an **IEEE 754 floating point number**, that is able to represent precisely integer numbers between `-(2^53)` and `+(2^53)` included. In more practical terms, all the integers between -9007199254740992 and 9007199254740992 are perfectly representable. Larger integers, or fractions, are internally represented in exponential form, so it is possible that you get only an approximation of the decimal number, or of the very big integer, that you set as score.
+
+Sorted sets 101
+---
+
+Sorted sets are sorted by their score in an ascending way.
+The same element only exists a single time; no repeated elements are
+permitted. The score can be modified both by `ZADD`, which will update the
+element score (and, as a side effect, its position in the sorted set), and
+by `ZINCRBY`, which can be used in order to update the score relative to its
+previous value.
+
+The current score of an element can be retrieved using the `ZSCORE` command,
+which can also be used to verify if an element already exists or not.
 
 For an introduction to sorted sets, see the data types page on [sorted
-sets](/topics/data-types#sorted-sets).
+sets][tdtss].
-@return
+[tdtss]: /topics/data-types#sorted-sets
-@integer-reply, specifically:
+Elements with the same score
+---
-* The number of elements added to the sorted sets, not including elements already existing for which the score was updated.
+While the same element can't be repeated in a sorted set since every element
+is unique, it is possible to add multiple different elements *having the same score*. When multiple elements have the same score, they are *ordered lexicographically* (they are still ordered by score as a first key, however, locally, all the elements with the same score are relatively ordered lexicographically).
-@history
+The lexicographic ordering used is binary; it compares strings as arrays of bytes.
-* `>= 2.4`: Accepts multiple elements. In Redis versions older than 2.4 it was possible to add or update a single member per call.
+If the user inserts all the elements in a sorted set with the same score (for example 0), all the elements of the sorted set are sorted lexicographically, and range queries on elements are possible using the command `ZRANGEBYLEX` (Note: it is also possible to query sorted sets by range of scores using `ZRANGEBYSCORE`).
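+
+As an illustration of how the `GT` and `CH` options described above interact
+(the key and member are arbitrary), a session might look like this:
+
+```
+> ZADD myzset GT CH 5 "member"
+(integer) 1
+> ZADD myzset GT CH 3 "member"
+(integer) 0
+> ZADD myzset GT CH 7 "member"
+(integer) 1
+```
+
+The first call adds a new element (`GT` doesn't prevent adding new elements),
+the second changes nothing because 3 is not greater than 5, and the third
+updates the score, so `CH` reports one changed element.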
@examples - @cli - ZADD myzset 1 "one" - ZADD myzset 1 "uno" - ZADD myzset 2 "two" - ZADD myzset 3 "two" - ZRANGE myzset 0 -1 WITHSCORES - +```cli +ZADD myzset 1 "one" +ZADD myzset 1 "uno" +ZADD myzset 2 "two" 3 "three" +ZRANGE myzset 0 -1 WITHSCORES +``` diff --git a/commands/zcard.md b/commands/zcard.md index 044eace628..bacabd2883 100644 --- a/commands/zcard.md +++ b/commands/zcard.md @@ -1,20 +1,10 @@ -@complexity - -O(1) - - -Returns the sorted set cardinality (number of elements) of the sorted set -stored at `key`. - -@return - -@integer-reply: the cardinality (number of elements) of the sorted set, or `0` -if `key` does not exist. +Returns the sorted set cardinality (number of elements) of the sorted set stored +at `key`. @examples - @cli - ZADD myzset 1 "one" - ZADD myzset 2 "two" - ZCARD myzset - +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZCARD myzset +``` diff --git a/commands/zcount.md b/commands/zcount.md index 9efbebbec2..a90d7c4b37 100644 --- a/commands/zcount.md +++ b/commands/zcount.md @@ -1,24 +1,17 @@ -@complexity +Returns the number of elements in the sorted set at `key` with a score between +`min` and `max`. -O(log(N)+M) with N being the number of elements in the -sorted set and M being the number of elements between `min` and `max`. +The `min` and `max` arguments have the same semantic as described for +`ZRANGEBYSCORE`. -Returns the number of elements in the sorted set at `key` with -a score between `min` and `max`. - -The `min` and `max` arguments have the same semantic as described -for `ZRANGEBYSCORE`. - -@return - -@integer-reply: the number of elements in the specified score range. +Note: the command has a complexity of just O(log(N)) because it uses elements ranks (see `ZRANK`) to get an idea of the range. Because of this there is no need to do a work proportional to the size of the range. @examples - @cli - ZADD myzset 1 "one" - ZADD myzset 2 "two" - ZADD myzset 3 "three" - ZCOUNT myzset -inf +inf - ZCOUNT myzset (1 3 - +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZADD myzset 3 "three" +ZCOUNT myzset -inf +inf +ZCOUNT myzset (1 3 +``` diff --git a/commands/zdiff.md b/commands/zdiff.md new file mode 100644 index 0000000000..e2df52e424 --- /dev/null +++ b/commands/zdiff.md @@ -0,0 +1,14 @@ +This command is similar to `ZDIFFSTORE`, but instead of storing the resulting +sorted set, it is returned to the client. + +@examples + +```cli +ZADD zset1 1 "one" +ZADD zset1 2 "two" +ZADD zset1 3 "three" +ZADD zset2 1 "one" +ZADD zset2 2 "two" +ZDIFF 2 zset1 zset2 +ZDIFF 2 zset1 zset2 WITHSCORES +``` diff --git a/commands/zdiffstore.md b/commands/zdiffstore.md new file mode 100644 index 0000000000..d9d2e2cac3 --- /dev/null +++ b/commands/zdiffstore.md @@ -0,0 +1,19 @@ +Computes the difference between the first and all successive input sorted sets +and stores the result in `destination`. The total number of input keys is +specified by `numkeys`. + +Keys that do not exist are considered to be empty sets. + +If `destination` already exists, it is overwritten. + +@examples + +```cli +ZADD zset1 1 "one" +ZADD zset1 2 "two" +ZADD zset1 3 "three" +ZADD zset2 1 "one" +ZADD zset2 2 "two" +ZDIFFSTORE out 2 zset1 zset2 +ZRANGE out 0 -1 WITHSCORES +``` diff --git a/commands/zincrby.md b/commands/zincrby.md index a5fe4ea18a..bbf48716b8 100644 --- a/commands/zincrby.md +++ b/commands/zincrby.md @@ -1,29 +1,21 @@ -@complexity - -O(log(N)) where N is the number of elements in the sorted set. - Increments the score of `member` in the sorted set stored at `key` by -`increment`. 
If `member` does not exist in the sorted set, it is added with -`increment` as its score (as if its previous score was `0.0`). If `key` does -not exist, a new sorted set with the specified `member` as its sole member is -created. +`increment`. +If `member` does not exist in the sorted set, it is added with `increment` as +its score (as if its previous score was `0.0`). +If `key` does not exist, a new sorted set with the specified `member` as its +sole member is created. An error is returned when `key` exists but does not hold a sorted set. The `score` value should be the string representation of a numeric value, and -accepts double precision floating point numbers. It is possible to provide a -negative value to decrement the score. - -@return - -@bulk-reply: the new score of `member` (a double precision floating point -number), represented as string. +accepts double precision floating point numbers. +It is possible to provide a negative value to decrement the score. @examples - @cli - ZADD myzset 1 "one" - ZADD myzset 2 "two" - ZINCRBY myzset 2 "one" - ZRANGE myzset 0 -1 WITHSCORES - +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZINCRBY myzset 2 "one" +ZRANGE myzset 0 -1 WITHSCORES +``` diff --git a/commands/zinter.md b/commands/zinter.md new file mode 100644 index 0000000000..b796517797 --- /dev/null +++ b/commands/zinter.md @@ -0,0 +1,16 @@ +This command is similar to `ZINTERSTORE`, but instead of storing the resulting +sorted set, it is returned to the client. + +For a description of the `WEIGHTS` and `AGGREGATE` options, see `ZUNIONSTORE`. + +@examples + +```cli +ZADD zset1 1 "one" +ZADD zset1 2 "two" +ZADD zset2 1 "one" +ZADD zset2 2 "two" +ZADD zset2 3 "three" +ZINTER 2 zset1 zset2 +ZINTER 2 zset1 zset2 WITHSCORES +``` diff --git a/commands/zintercard.md b/commands/zintercard.md new file mode 100644 index 0000000000..7ee7d1edeb --- /dev/null +++ b/commands/zintercard.md @@ -0,0 +1,21 @@ +This command is similar to `ZINTER`, but instead of returning the result set, it returns just the cardinality of the result. + +Keys that do not exist are considered to be empty sets. +With one of the keys being an empty set, the resulting set is also empty (since set intersection with an empty set always results in an empty set). + +By default, the command calculates the cardinality of the intersection of all given sets. +When provided with the optional `LIMIT` argument (which defaults to 0 and means unlimited), if the intersection cardinality reaches limit partway through the computation, the algorithm will exit and yield limit as the cardinality. +Such implementation ensures a significant speedup for queries where the limit is lower than the actual intersection cardinality. + +@examples + +```cli +ZADD zset1 1 "one" +ZADD zset1 2 "two" +ZADD zset2 1 "one" +ZADD zset2 2 "two" +ZADD zset2 3 "three" +ZINTER 2 zset1 zset2 +ZINTERCARD 2 zset1 zset2 +ZINTERCARD 2 zset1 zset2 LIMIT 1 +``` diff --git a/commands/zinterstore.md b/commands/zinterstore.md index 89056c41b2..1d386aa3a4 100644 --- a/commands/zinterstore.md +++ b/commands/zinterstore.md @@ -1,36 +1,26 @@ -@complexity - -O(N\*K)+O(M\*log(M)) worst case with N being the smallest input sorted set, K -being the number of input sorted sets and M being the number of elements in the -resulting sorted set. - Computes the intersection of `numkeys` sorted sets given by the specified keys, -and stores the result in `destination`. 
It is mandatory to provide the number -of input keys (`numkeys`) before passing the input keys and the other -(optional) arguments. +and stores the result in `destination`. +It is mandatory to provide the number of input keys (`numkeys`) before passing +the input keys and the other (optional) arguments. By default, the resulting score of an element is the sum of its scores in the -sorted sets where it exists. Because intersection requires an element -to be a member of every given sorted set, this results in the score of every -element in the resulting sorted set to be equal to the number of input sorted sets. +sorted sets where it exists. +Because intersection requires an element to be a member of every given sorted +set, this results in the score of every element in the resulting sorted set to +be equal to the number of input sorted sets. For a description of the `WEIGHTS` and `AGGREGATE` options, see `ZUNIONSTORE`. If `destination` already exists, it is overwritten. -@return - -@integer-reply: the number of elements in the resulting sorted set at -`destination`. - @examples - @cli - ZADD zset1 1 "one" - ZADD zset1 2 "two" - ZADD zset2 1 "one" - ZADD zset2 2 "two" - ZADD zset2 3 "three" - ZINTERSTORE out 2 zset1 zset2 WEIGHTS 2 3 - ZRANGE out 0 -1 WITHSCORES - +```cli +ZADD zset1 1 "one" +ZADD zset1 2 "two" +ZADD zset2 1 "one" +ZADD zset2 2 "two" +ZADD zset2 3 "three" +ZINTERSTORE out 2 zset1 zset2 WEIGHTS 2 3 +ZRANGE out 0 -1 WITHSCORES +``` diff --git a/commands/zlexcount.md b/commands/zlexcount.md new file mode 100644 index 0000000000..2eeed5c27d --- /dev/null +++ b/commands/zlexcount.md @@ -0,0 +1,15 @@ +When all the elements in a sorted set are inserted with the same score, in order to force lexicographical ordering, this command returns the number of elements in the sorted set at `key` with a value between `min` and `max`. + +The `min` and `max` arguments have the same meaning as described for +`ZRANGEBYLEX`. + +Note: the command has a complexity of just O(log(N)) because it uses elements ranks (see `ZRANK`) to get an idea of the range. Because of this there is no need to do a work proportional to the size of the range. + +@examples + +```cli +ZADD myzset 0 a 0 b 0 c 0 d 0 e +ZADD myzset 0 f 0 g +ZLEXCOUNT myzset - + +ZLEXCOUNT myzset [b [f +``` diff --git a/commands/zmpop.md b/commands/zmpop.md new file mode 100644 index 0000000000..e3fd6944b3 --- /dev/null +++ b/commands/zmpop.md @@ -0,0 +1,29 @@ +Pops one or more elements, that are member-score pairs, from the first non-empty sorted set in the provided list of key names. + +`ZMPOP` and `BZMPOP` are similar to the following, more limited, commands: + +- `ZPOPMIN` or `ZPOPMAX` which take only one key, and can return multiple elements. +- `BZPOPMIN` or `BZPOPMAX` which take multiple keys, but return only one element from just one key. + +See `BZMPOP` for the blocking variant of this command. + +When the `MIN` modifier is used, the elements popped are those with the lowest scores from the first non-empty sorted set. The `MAX` modifier causes elements with the highest scores to be popped. +The optional `COUNT` can be used to specify the number of elements to pop, and is set to 1 by default. + +The number of popped elements is the minimum from the sorted set's cardinality and `COUNT`'s value. 
+ +@examples + +```cli +ZMPOP 1 notsuchkey MIN +ZADD myzset 1 "one" 2 "two" 3 "three" +ZMPOP 1 myzset MIN +ZRANGE myzset 0 -1 WITHSCORES +ZMPOP 1 myzset MAX COUNT 10 +ZADD myzset2 4 "four" 5 "five" 6 "six" +ZMPOP 2 myzset myzset2 MIN COUNT 10 +ZRANGE myzset 0 -1 WITHSCORES +ZMPOP 2 myzset myzset2 MAX COUNT 10 +ZRANGE myzset2 0 -1 WITHSCORES +EXISTS myzset myzset2 +``` diff --git a/commands/zmscore.md b/commands/zmscore.md new file mode 100644 index 0000000000..0111a4a01b --- /dev/null +++ b/commands/zmscore.md @@ -0,0 +1,11 @@ +Returns the scores associated with the specified `members` in the sorted set stored at `key`. + +For every `member` that does not exist in the sorted set, a `nil` value is returned. + +@examples + +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZMSCORE myzset "one" "two" "nofield" +``` diff --git a/commands/zpopmax.md b/commands/zpopmax.md new file mode 100644 index 0000000000..c0245f28b6 --- /dev/null +++ b/commands/zpopmax.md @@ -0,0 +1,16 @@ +Removes and returns up to `count` members with the highest scores in the sorted +set stored at `key`. + +When left unspecified, the default value for `count` is 1. Specifying a `count` +value that is higher than the sorted set's cardinality will not produce an +error. When returning multiple elements, the one with the highest score will +be the first, followed by the elements with lower scores. + +@examples + +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZADD myzset 3 "three" +ZPOPMAX myzset +``` diff --git a/commands/zpopmin.md b/commands/zpopmin.md new file mode 100644 index 0000000000..2214e57dd6 --- /dev/null +++ b/commands/zpopmin.md @@ -0,0 +1,16 @@ +Removes and returns up to `count` members with the lowest scores in the sorted +set stored at `key`. + +When left unspecified, the default value for `count` is 1. Specifying a `count` +value that is higher than the sorted set's cardinality will not produce an +error. When returning multiple elements, the one with the lowest score will +be the first, followed by the elements with greater scores. + +@examples + +```cli +ZADD myzset 1 "one" +ZADD myzset 2 "two" +ZADD myzset 3 "three" +ZPOPMIN myzset +``` diff --git a/commands/zrandmember.md b/commands/zrandmember.md new file mode 100644 index 0000000000..d1f6ed983e --- /dev/null +++ b/commands/zrandmember.md @@ -0,0 +1,32 @@ +When called with just the `key` argument, return a random element from the sorted set value stored at `key`. + +If the provided `count` argument is positive, return an array of **distinct elements**. +The array's length is either `count` or the sorted set's cardinality (`ZCARD`), whichever is lower. + +If called with a negative `count`, the behavior changes and the command is allowed to return the **same element multiple times**. +In this case, the number of returned elements is the absolute value of the specified `count`. + +The optional `WITHSCORES` modifier changes the reply so it includes the respective scores of the randomly selected elements from the sorted set. + +@examples + +```cli +ZADD dadi 1 uno 2 due 3 tre 4 quattro 5 cinque 6 sei +ZRANDMEMBER dadi +ZRANDMEMBER dadi +ZRANDMEMBER dadi -5 WITHSCORES +``` + +## Specification of the behavior when count is passed + +When the `count` argument is a positive value this command behaves as follows: + +* No repeated elements are returned. +* If `count` is bigger than the cardinality of the sorted set, the command will only return the whole sorted set without additional elements. 
+* The order of elements in the reply is not truly random, so it is up to the client to shuffle them if needed.
+
+When the `count` is a negative value, the behavior changes as follows:
+
+* Repeating elements are possible.
+* Exactly `count` elements, or an empty array if the sorted set is empty (non-existing key), are always returned.
+* The order of elements in the reply is truly random.
diff --git a/commands/zrange.md b/commands/zrange.md
index b31db4dc7c..a928689d71 100644
--- a/commands/zrange.md
+++ b/commands/zrange.md
@@ -1,43 +1,124 @@
-@complexity
+Returns the specified range of elements in the sorted set stored at `<key>`.
-O(log(N)+M) with N being the number of elements in the sorted set and M the
-number of elements returned.
+`ZRANGE` can perform different types of range queries: by index (rank), by the score, or by lexicographical order.
-Returns the specified range of elements in the sorted set stored at `key`. The
-elements are considered to be ordered from the lowest to the highest score.
-Lexicographical order is used for elements with equal score.
+Starting with Redis 6.2.0, this command can replace the following commands: `ZREVRANGE`, `ZRANGEBYSCORE`, `ZREVRANGEBYSCORE`, `ZRANGEBYLEX` and `ZREVRANGEBYLEX`.
-See `ZREVRANGE` when you need the elements ordered from highest to lowest
-score (and descending lexicographical order for elements with equal score).
+## Common behavior and options
-Both `start` and `stop` are zero-based indexes, where `0` is the first element,
-`1` is the next element and so on. They can also be negative numbers indicating
-offsets from the end of the sorted set, with `-1` being the last element of the
-sorted set, `-2` the penultimate element and so on.
+The order of elements is from the lowest to the highest score. Elements with the same score are ordered lexicographically.
-Out of range indexes will not produce an error. If `start` is larger than the
-largest index in the sorted set, or `start > stop`, an empty list is returned.
-If `stop` is larger than the end of the sorted set Redis will treat it like it
-is the last element of the sorted set.
+The optional `REV` argument reverses the ordering, so elements are ordered from highest to lowest score, and score ties are resolved by reverse lexicographical ordering.
-It is possible to pass the `WITHSCORES` option in order to return the scores of
-the elements together with the elements. The returned list will contain
-`value1,score1,...,valueN,scoreN` instead of `value1,...,valueN`. Client
-libraries are free to return a more appropriate data type (suggestion: an array
-with (value, score) arrays/tuples).
+The optional `LIMIT` argument can be used to obtain a sub-range from the matching elements (similar to _SELECT LIMIT offset, count_ in SQL).
+A negative `<count>` returns all elements from the `<offset>`. Keep in mind that if `<offset>` is large, the sorted set needs to be traversed for `<offset>` elements before getting to the elements to return, which can add up to O(N) time complexity.
-@return
+The optional `WITHSCORES` argument supplements the command's reply with the scores of elements returned. The returned list contains `value1,score1,...,valueN,scoreN` instead of `value1,...,valueN`. Client libraries are free to return a more appropriate data type (suggestion: an array with (value, score) arrays/tuples).
-@multi-bulk-reply: list of elements in the specified range (optionally with
-their scores).
+
+## Index ranges
+
+By default, the command performs an index range query.
The `<start>` and `<stop>` arguments represent zero-based indexes, where `0` is the first element, `1` is the next element, and so on. These arguments specify an **inclusive range**, so for example, `ZRANGE myzset 0 1` will return both the first and the second element of the sorted set.
+
+The indexes can also be negative numbers indicating offsets from the end of the sorted set, with `-1` being the last element of the sorted set, `-2` the penultimate element, and so on.
+
+Out of range indexes do not produce an error.
+
+If `<start>` is greater than either the end index of the sorted set or `<stop>`, an empty list is returned.
+
+If `<stop>` is greater than the end index of the sorted set, Redis will use the last element of the sorted set.
+
+## Score ranges
+
+When the `BYSCORE` option is provided, the command behaves like `ZRANGEBYSCORE` and returns the range of elements from the sorted set having scores equal to or between `<start>` and `<stop>`.
+
+`<start>` and `<stop>` can be `-inf` and `+inf`, denoting the negative and positive infinities, respectively. This means that you are not required to know the highest or lowest score in the sorted set to get all elements from or up to a certain score.
+
+By default, the score intervals specified by `<start>` and `<stop>` are closed (inclusive).
+It is possible to specify an open interval (exclusive) by prefixing the score
+with the character `(`.
+
+For example:
+
+```
+ZRANGE zset (1 5 BYSCORE
+```
+
+Will return all elements with `1 < score <= 5` while:
+
+```
+ZRANGE zset (5 (10 BYSCORE
+```
+
+Will return all the elements with `5 < score < 10` (5 and 10 excluded).
+
+## Reverse ranges
+
+Using the `REV` option reverses the sorted set, with index 0 as the element with the highest score.
+
+By default, `<start>` must be less than or equal to `<stop>` to return anything.
+However, if the `BYSCORE` or `BYLEX` options are selected, `<start>` is the highest score to consider, and `<stop>` is the lowest score to consider, therefore `<start>` must be greater than or equal to `<stop>` in order to return anything.
+
+For example:
+
+```
+ZRANGE zset 5 10 REV
+```
+
+Will return the elements between index 5 and 10 in the reversed index.
+
+```
+ZRANGE zset 10 5 REV BYSCORE
+```
+
+Will return all elements with `5 <= score <= 10`, ordered from the highest to the lowest score.
+
+## Lexicographical ranges
+
+When the `BYLEX` option is used, the command behaves like `ZRANGEBYLEX` and returns the range of elements from the sorted set between the `<start>` and `<stop>` lexicographical closed range intervals.
+
+Note that lexicographical ordering relies on all elements having the same score. The reply is unspecified when the elements have different scores.
+
+Valid `<start>` and `<stop>` must start with `(` or `[`, in order to specify
+whether the range interval is exclusive or inclusive, respectively.
+
+The special values of `+` or `-` for `<start>` and `<stop>` mean positive and negative infinite strings, respectively, so for instance the command `ZRANGE myzset - + BYLEX` is guaranteed to return all the elements in the sorted set, providing that all the elements have the same score.
+
+The `REV` option reverses the order of the `<start>` and `<stop>` elements, where `<start>` must be lexicographically greater than `<stop>` to produce a non-empty result.
+
+### Lexicographical comparison of strings
+
+Strings are compared as a binary array of bytes. Because of how the ASCII character set is specified, this means that usually this also has the effect of comparing normal ASCII characters in an obvious dictionary way. However, this is not true if non-plain ASCII strings are used (for example, utf8 strings).
+
+However, the user can apply a transformation to the encoded string so that the first part of the element inserted in the sorted set will compare as the user requires for the specific application. For example, if I want to
+add strings that will be compared in a case-insensitive way, but I still
+want to retrieve the real case when querying, I can add strings in the
+following way:
+
+    ZADD autocomplete 0 foo:Foo 0 bar:BAR 0 zap:zap
+
+Because of the first *normalized* part in every element (before the colon character), we are forcing a given comparison. However, after the range is queried using `ZRANGE ... BYLEX`, the application can display to the user the second part of the string, after the colon.
+
+The binary nature of the comparison allows using sorted sets as a general-purpose index: for example, the first part of the element can be a 64-bit big-endian number. Since big-endian numbers have the most significant bytes in the initial positions, the binary comparison will match the numerical comparison of the numbers. This can be used in order to implement range queries on 64-bit values. In this way, after the first 8 bytes, we can store the value of the element we are indexing.
+
+@examples
+
+```cli
+ZADD myzset 1 "one" 2 "two" 3 "three"
+ZRANGE myzset 0 -1
+ZRANGE myzset 2 3
+ZRANGE myzset -2 -1
+```
+
+The following example using `WITHSCORES` shows how the command always returns an array, but this time populated with *element_1*, *score_1*, *element_2*, *score_2*, ..., *element_N*, *score_N*.
+
+```cli
+ZADD myzset 1 "one" 2 "two" 3 "three"
+ZRANGE myzset 0 1 WITHSCORES
+```
+
+This example shows how to query the sorted set by score, excluding the value `1` and up to infinity, returning only the second element of the result:
+```cli
+ZADD myzset 1 "one" 2 "two" 3 "three"
+ZRANGE myzset (1 +inf BYSCORE LIMIT 1 1
+```
diff --git a/commands/zrangebylex.md b/commands/zrangebylex.md
new file mode 100644
index 0000000000..b4663f674b
--- /dev/null
+++ b/commands/zrangebylex.md
@@ -0,0 +1,57 @@
+When all the elements in a sorted set are inserted with the same score, in order to force lexicographical ordering, this command returns all the elements in the sorted set at `key` with a value between `min` and `max`.
+
+If the elements in the sorted set have different scores, the returned elements are unspecified.
+
+The elements are considered to be ordered from lower to higher strings as compared byte-by-byte using the `memcmp()` C function. Longer strings are considered greater than shorter strings if the common part is identical.
+
+The optional `LIMIT` argument can be used to only get a range of the matching
+elements (similar to _SELECT LIMIT offset, count_ in SQL). A negative `count`
+returns all elements from the `offset`.
+Keep in mind that if `offset` is large, the sorted set needs to be traversed for
+`offset` elements before getting to the elements to return, which can add up to
+O(N) time complexity.
+
+## How to specify intervals
+
+Valid *start* and *stop* must start with `(` or `[`, in order to specify
+if the range item is respectively exclusive or inclusive.
+The special values of `+` or `-` for *start* and *stop* have the special
+meaning of positively infinite and negatively infinite strings, so for
+instance the command **ZRANGEBYLEX myzset - +** is guaranteed to return
+all the elements in the sorted set, if all the elements have the same
+score.
+
+## Details on string comparison
+
+Strings are compared as a binary array of bytes. Because of how the ASCII character
+set is specified, this means that usually this also has the effect of comparing
+normal ASCII characters in an obvious dictionary way. However this is not true
+if non-plain ASCII strings are used (for example utf8 strings).
+
+However the user can apply a transformation to the encoded string so that
+the first part of the element inserted in the sorted set will compare as the
+user requires for the specific application. For example, if I want to
+add strings that will be compared in a case-insensitive way, but I still
+want to retrieve the real case when querying, I can add strings in the
+following way:
+
+    ZADD autocomplete 0 foo:Foo 0 bar:BAR 0 zap:zap
+
+Because of the first *normalized* part in every element (before the colon character), we are forcing a given comparison; however, after the range is queried using `ZRANGEBYLEX`, the application can display to the user the second part of the string, after the colon.
+
+The binary nature of the comparison allows using sorted sets as a general-purpose
+index: for example, the first part of the element can be a 64-bit
+big-endian number. Since big-endian numbers have the most significant bytes
+in the initial positions, the binary comparison will match the numerical
+comparison of the numbers. This can be used in order to implement range
+queries on 64-bit values. In this way, after the first 8 bytes
+we can store the value of the element we are actually indexing.
+
+@examples
+
+```cli
+ZADD myzset 0 a 0 b 0 c 0 d 0 e 0 f 0 g
+ZRANGEBYLEX myzset - [c
+ZRANGEBYLEX myzset - (c
+ZRANGEBYLEX myzset [aaa (g
+```
diff --git a/commands/zrangebyscore.md b/commands/zrangebyscore.md
index 7308d7007b..5c4ea3a9fc 100644
--- a/commands/zrangebyscore.md
+++ b/commands/zrangebyscore.md
@@ -1,26 +1,21 @@
-@complexity
-
-O(log(N)+M) with N being the number of elements in the sorted set and M the
-number of elements being returned. If M is constant (e.g. always asking for the
-first 10 elements with `LIMIT`), you can consider it O(log(N)).
-
 Returns all the elements in the sorted set at `key` with a score between `min`
-and `max` (including elements with score equal to `min` or `max`). The
-elements are considered to be ordered from low to high scores.
+and `max` (including elements with score equal to `min` or `max`).
+The elements are considered to be ordered from low to high scores.
 
 The elements having the same score are returned in lexicographical order (this
 follows from a property of the sorted set implementation in Redis and does not
 involve further computation).
 
 The optional `LIMIT` argument can be used to only get a range of the matching
-elements (similar to _SELECT LIMIT offset, count_ in SQL). Keep in mind that if
-`offset` is large, the sorted set needs to be traversed for `offset` elements
-before getting to the elements to return, which can add up to O(N) time
-complexity.
+elements (similar to _SELECT LIMIT offset, count_ in SQL). A negative `count`
+returns all elements from the `offset`.
+Keep in mind that if `offset` is large, the sorted set needs to be traversed for
+`offset` elements before getting to the elements to return, which can add up to
+O(N) time complexity.

-The optional `WITHSCORES` argument makes the command return both the element
-and its score, instead of the element alone. This option is available since
-Redis 2.0.
+The optional `WITHSCORES` argument makes the command return both the element and
+its score, instead of the element alone.
+This option is available since Redis 2.0.

## Exclusive intervals and infinity

@@ -30,29 +25,73 @@ a certain score.

By default, the interval specified by `min` and `max` is closed (inclusive).
It is possible to specify an open interval (exclusive) by prefixing the score
-with the character `(`. For example:
+with the character `(`.
+For example:

-    ZRANGEBYSCORE zset (1 5
+```
+ZRANGEBYSCORE zset (1 5
+```

Will return all elements with `1 < score <= 5` while:

-    ZRANGEBYSCORE zset (5 (10
+```
+ZRANGEBYSCORE zset (5 (10
+```

Will return all the elements with `5 < score < 10` (5 and 10 excluded).

-@return
+@examples

-@multi-bulk-reply: list of elements in the specified score range (optionally with
-their scores).
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZRANGEBYSCORE myzset -inf +inf
+ZRANGEBYSCORE myzset 1 2
+ZRANGEBYSCORE myzset (1 2
+ZRANGEBYSCORE myzset (1 (2
+```

-@examples
+## Pattern: weighted random selection of an element
+
+Normally `ZRANGEBYSCORE` is simply used in order to get a range of items
+where the score is the indexed integer key; however, it is possible to do less
+obvious things with the command.
+
+For example, a common problem when implementing Markov chains and other algorithms
+is to select an element at random from a set, but different elements may have
+different weights that change how likely they are to be picked.
+
+Here is how to use this command in order to implement such an algorithm:
+
+Imagine you have elements A, B and C with weights 1, 2 and 3.
+You compute the sum of the weights, which is 1+2+3 = 6.
+
+At this point you add all the elements into a sorted set using this algorithm:
+
+```
+SUM = ELEMENTS.TOTAL_WEIGHT // 6 in this case.
+SCORE = 0
+FOREACH ELE in ELEMENTS
+    SCORE += ELE.weight / SUM
+    ZADD KEY SCORE ELE
+END
+```
+
+This means that you set:
+
+```
+A to score 0.166
+B to score 0.5
+C to score 1
+```
+
+Since this involves approximations, in order to avoid the last score being set
+to something like 0.998 instead of exactly 1, we just modify the above algorithm
+to make sure the last score is 1 (left as an exercise for the reader...).

-    @cli
-    ZADD myzset 1 "one"
-    ZADD myzset 2 "two"
-    ZADD myzset 3 "three"
-    ZRANGEBYSCORE myzset -inf +inf
-    ZRANGEBYSCORE myzset 1 2
-    ZRANGEBYSCORE myzset (1 2
-    ZRANGEBYSCORE myzset (1 (2
+At this point, each time you want to get a weighted random element,
+just compute a random number between 0 and 1 (which is like calling
+`rand()` in most languages), so you can just do:
+
+    RANDOM_ELE = ZRANGEBYSCORE key RAND() +inf LIMIT 0 1
diff --git a/commands/zrangestore.md b/commands/zrangestore.md
new file mode 100644
index 0000000000..2a0bbfc9d9
--- /dev/null
+++ b/commands/zrangestore.md
@@ -0,0 +1,9 @@
+This command is like `ZRANGE`, but stores the result in the `<dst>` destination key.
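+
+`ZRANGESTORE` accepts the same `BYSCORE`, `BYLEX`, `REV`, and `LIMIT` modifiers
+as `ZRANGE`. For instance, the following sketch stores only the two
+lowest-scoring members of the source set whose score is greater than 1:
+
+```
+ZRANGESTORE dstzset srczset (1 +inf BYSCORE LIMIT 0 2
+```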
+
+@examples
+
+```cli
+ZADD srczset 1 "one" 2 "two" 3 "three" 4 "four"
+ZRANGESTORE dstzset srczset 2 -1
+ZRANGE dstzset 0 -1
+```
diff --git a/commands/zrank.md b/commands/zrank.md
index d6fdcbf74a..8436883cd0 100644
--- a/commands/zrank.md
+++ b/commands/zrank.md
@@ -1,27 +1,21 @@
-@complexity
-
-O(log(N))
-
-
 Returns the rank of `member` in the sorted set stored at `key`, with the scores
-ordered from low to high. The rank (or index) is 0-based, which means that the
-member with the lowest score has rank `0`.
+ordered from low to high.
+The rank (or index) is 0-based, which means that the member with the lowest
+score has rank `0`.
+
+The optional `WITHSCORE` argument supplements the command's reply with the score of the element returned.

Use `ZREVRANK` to get the rank of an element with the scores ordered from high
to low.

-@return
-
-* If `member` exists in the sorted set, @integer-reply: the rank of `member`.
-* If `member` does not exist in the sorted set or `key` does not exist,
-@bulk-reply: `nil`.
-
@examples

-    @cli
-    ZADD myzset 1 "one"
-    ZADD myzset 2 "two"
-    ZADD myzset 3 "three"
-    ZRANK myzset "three"
-    ZRANK myzset "four"
-
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZRANK myzset "three"
+ZRANK myzset "four"
+ZRANK myzset "three" WITHSCORE
+ZRANK myzset "four" WITHSCORE
+```
diff --git a/commands/zrem.md b/commands/zrem.md
index b466fa1889..642e2874bc 100644
--- a/commands/zrem.md
+++ b/commands/zrem.md
@@ -1,27 +1,14 @@
-@complexity
-
-O(log(N)) with N being the number of elements in the sorted set.
-
-Removes the specified members from the sorted set stored at `key`. Non existing members are ignored.
+Removes the specified members from the sorted set stored at `key`.
+Non-existing members are ignored.

An error is returned when `key` exists and does not hold a sorted set.

-@return
-
-@integer-reply, specifically:
-
-* The number of members removed from the sorted set, not including non existing members.
-
-@history
-
-* `>= 2.4`: Accepts multiple elements. In Redis versions older than 2.4 it was possible to remove a single member per call.
-
@examples

-    @cli
-    ZADD myzset 1 "one"
-    ZADD myzset 2 "two"
-    ZADD myzset 3 "three"
-    ZREM myzset "two"
-    ZRANGE myzset 0 -1 WITHSCORES
-
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZREM myzset "two"
+ZRANGE myzset 0 -1 WITHSCORES
+```
diff --git a/commands/zremrangebylex.md b/commands/zremrangebylex.md
new file mode 100644
index 0000000000..83df974cdd
--- /dev/null
+++ b/commands/zremrangebylex.md
@@ -0,0 +1,13 @@
+When all the elements in a sorted set are inserted with the same score, in order to force lexicographical ordering, this command removes all elements in the sorted set stored at `key` within the lexicographical range specified by `min` and `max`.
+
+The meanings of `min` and `max` are the same as for the `ZRANGEBYLEX` command. Similarly, this command actually removes the same elements that `ZRANGEBYLEX` would return if called with the same `min` and `max` arguments.
+
+@examples
+
+```cli
+ZADD myzset 0 aaaa 0 b 0 c 0 d 0 e
+ZADD myzset 0 foo 0 zap 0 zip 0 ALPHA 0 alpha
+ZRANGE myzset 0 -1
+ZREMRANGEBYLEX myzset [alpha [omega
+ZRANGE myzset 0 -1
+```
diff --git a/commands/zremrangebyrank.md b/commands/zremrangebyrank.md
index 99eb68877b..30a068b673 100644
--- a/commands/zremrangebyrank.md
+++ b/commands/zremrangebyrank.md
@@ -1,25 +1,18 @@
-@complexity
-
-O(log(N)+M) with N being the number of elements in the sorted set and M the
-number of elements removed by the operation.
-
-Removes all elements in the sorted set stored at `key` with rank between
-`start` and `stop`. Both `start` and `stop` are `0`-based indexes with `0`
-being the element with the lowest score. These indexes can be negative numbers,
-where they indicate offsets starting at the element with the highest score. For
-example: `-1` is the element with the highest score, `-2` the element with the
-second highest score and so forth.
-
-@return
-
-@integer-reply: the number of elements removed.
+Removes all elements in the sorted set stored at `key` with rank between `start`
+and `stop`.
+Both `start` and `stop` are `0`-based indexes with `0` being the element with
+the lowest score.
+These indexes can be negative numbers, where they indicate offsets starting at
+the element with the highest score.
+For example: `-1` is the element with the highest score, `-2` the element with
+the second highest score and so forth.

@examples

-    @cli
-    ZADD myzset 1 "one"
-    ZADD myzset 2 "two"
-    ZADD myzset 3 "three"
-    ZREMRANGEBYRANK myzset 0 1
-    ZRANGE myzset 0 -1 WITHSCORES
-
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZREMRANGEBYRANK myzset 0 1
+ZRANGE myzset 0 -1 WITHSCORES
+```
diff --git a/commands/zremrangebyscore.md b/commands/zremrangebyscore.md
index 88e38f94a4..839b17b3c7 100644
--- a/commands/zremrangebyscore.md
+++ b/commands/zremrangebyscore.md
@@ -1,24 +1,12 @@
-@complexity
-
-O(log(N)+M) with N being the number of elements in the sorted set and M the
-number of elements removed by the operation.
-
 Removes all elements in the sorted set stored at `key` with a score between
 `min` and `max` (inclusive).

-Since version 2.1.6, `min` and `max` can be exclusive, following the syntax of
-`ZRANGEBYSCORE`.
-
-@return
-
-@integer-reply: the number of elements removed.
-
@examples

-    @cli
-    ZADD myzset 1 "one"
-    ZADD myzset 2 "two"
-    ZADD myzset 3 "three"
-    ZREMRANGEBYSCORE myzset -inf (2
-    ZRANGE myzset 0 -1 WITHSCORES
-
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZREMRANGEBYSCORE myzset -inf (2
+ZRANGE myzset 0 -1 WITHSCORES
+```
diff --git a/commands/zrevrange.md b/commands/zrevrange.md
index 3888b8a88c..2a36390456 100644
--- a/commands/zrevrange.md
+++ b/commands/zrevrange.md
@@ -1,26 +1,16 @@
-@complexity
-
-O(log(N)+M) with N being the number of elements in the
-sorted set and M the number of elements returned.
-
-Returns the specified range of elements in the sorted set stored at `key`. The
-elements are considered to be ordered from the highest to the lowest score.
+Returns the specified range of elements in the sorted set stored at `key`.
+The elements are considered to be ordered from the highest to the lowest score.
 Descending lexicographical order is used for elements with equal score.

 Apart from the reversed ordering, `ZREVRANGE` is similar to `ZRANGE`.

-@return
-
-@multi-bulk-reply: list of elements in the specified range (optionally with
-their scores).
-
@examples

-    @cli
-    ZADD myzset 1 "one"
-    ZADD myzset 2 "two"
-    ZADD myzset 3 "three"
-    ZREVRANGE myzset 0 -1
-    ZREVRANGE myzset 2 3
-    ZREVRANGE myzset -2 -1
-
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZREVRANGE myzset 0 -1
+ZREVRANGE myzset 2 3
+ZREVRANGE myzset -2 -1
+```
diff --git a/commands/zrevrangebylex.md b/commands/zrevrangebylex.md
new file mode 100644
index 0000000000..eb9ad8f436
--- /dev/null
+++ b/commands/zrevrangebylex.md
@@ -0,0 +1,12 @@
+When all the elements in a sorted set are inserted with the same score, in order to force lexicographical ordering, this command returns all the elements in the sorted set at `key` with a value between `max` and `min`.
+
+Apart from the reversed ordering, `ZREVRANGEBYLEX` is similar to `ZRANGEBYLEX`.
+
+@examples
+
+```cli
+ZADD myzset 0 a 0 b 0 c 0 d 0 e 0 f 0 g
+ZREVRANGEBYLEX myzset [c -
+ZREVRANGEBYLEX myzset (c -
+ZREVRANGEBYLEX myzset (g [aaa
+```
diff --git a/commands/zrevrangebyscore.md b/commands/zrevrangebyscore.md
index 721cf06b31..c6e2e3e537 100644
--- a/commands/zrevrangebyscore.md
+++ b/commands/zrevrangebyscore.md
@@ -1,32 +1,22 @@
-@complexity
-
-O(log(N)+M) with N being the number of elements in the sorted set and M the
-number of elements being returned. If M is constant (e.g. always asking for the
-first 10 elements with `LIMIT`), you can consider it O(log(N)).
-
 Returns all the elements in the sorted set at `key` with a score between `max`
-and `min` (including elements with score equal to `max` or `min`). In contrary
-to the default ordering of sorted sets, for this command the elements are
-considered to be ordered from high to low scores.
+and `min` (including elements with score equal to `max` or `min`).
+Contrary to the default ordering of sorted sets, for this command the
+elements are considered to be ordered from high to low scores.

-The elements having the same score are returned in reverse lexicographical order.
+The elements having the same score are returned in reverse lexicographical
+order.

 Apart from the reversed ordering, `ZREVRANGEBYSCORE` is similar to
 `ZRANGEBYSCORE`.

-@return
-
-@multi-bulk-reply: list of elements in the specified score range (optionally with
-their scores).
-
@examples

-    @cli
-    ZADD myzset 1 "one"
-    ZADD myzset 2 "two"
-    ZADD myzset 3 "three"
-    ZREVRANGEBYSCORE myzset +inf -inf
-    ZREVRANGEBYSCORE myzset 2 1
-    ZREVRANGEBYSCORE myzset 2 (1
-    ZREVRANGEBYSCORE myzset (2 (1
-
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZREVRANGEBYSCORE myzset +inf -inf
+ZREVRANGEBYSCORE myzset 2 1
+ZREVRANGEBYSCORE myzset 2 (1
+ZREVRANGEBYSCORE myzset (2 (1
+```
diff --git a/commands/zrevrank.md b/commands/zrevrank.md
index 46283dc22f..c79868920b 100644
--- a/commands/zrevrank.md
+++ b/commands/zrevrank.md
@@ -1,27 +1,21 @@
-@complexity
-
-O(log(N))
-
-
 Returns the rank of `member` in the sorted set stored at `key`, with the scores
-ordered from high to low. The rank (or index) is 0-based, which means that the
-member with the highest score has rank `0`.
+ordered from high to low.
+The rank (or index) is 0-based, which means that the member with the highest
+score has rank `0`.
+
+The optional `WITHSCORE` argument supplements the command's reply with the score of the element returned.

 Use `ZRANK` to get the rank of an element with the scores ordered from low to
 high.

-@return
-
-* If `member` exists in the sorted set, @integer-reply: the rank of `member`.
-* If `member` does not exist in the sorted set or `key` does not exist,
-@bulk-reply: `nil`.
-
@examples

-    @cli
-    ZADD myzset 1 "one"
-    ZADD myzset 2 "two"
-    ZADD myzset 3 "three"
-    ZREVRANK myzset "one"
-    ZREVRANK myzset "four"
-
+```cli
+ZADD myzset 1 "one"
+ZADD myzset 2 "two"
+ZADD myzset 3 "three"
+ZREVRANK myzset "one"
+ZREVRANK myzset "four"
+ZREVRANK myzset "three" WITHSCORE
+ZREVRANK myzset "four" WITHSCORE
+```
diff --git a/commands/zscan.md b/commands/zscan.md
new file mode 100644
index 0000000000..3926307fbe
--- /dev/null
+++ b/commands/zscan.md
@@ -0,0 +1 @@
+See `SCAN` for `ZSCAN` documentation.
diff --git a/commands/zscore.md b/commands/zscore.md
index a683004f3b..324019d940 100644
--- a/commands/zscore.md
+++ b/commands/zscore.md
@@ -1,21 +1,11 @@
-@complexity
-
-O(1)
-
-
 Returns the score of `member` in the sorted set at `key`.

-If `member` does not exist in the sorted set, or `key` does not exist,
-`nil` is returned.
-
-@return
-
-@bulk-reply: the score of `member` (a double precision floating point number),
-represented as string.
+If `member` does not exist in the sorted set, or `key` does not exist, `nil` is
+returned.

@examples

-    @cli
-    ZADD myzset 1 "one"
-    ZSCORE myzset "one"
-
+```cli
+ZADD myzset 1 "one"
+ZSCORE myzset "one"
+```
diff --git a/commands/zunion.md b/commands/zunion.md
new file mode 100644
index 0000000000..f85bfd821a
--- /dev/null
+++ b/commands/zunion.md
@@ -0,0 +1,16 @@
+This command is similar to `ZUNIONSTORE`, but instead of storing the resulting
+sorted set, it returns the result to the client.
+
+For a description of the `WEIGHTS` and `AGGREGATE` options, see `ZUNIONSTORE`.
+
+@examples
+
+```cli
+ZADD zset1 1 "one"
+ZADD zset1 2 "two"
+ZADD zset2 1 "one"
+ZADD zset2 2 "two"
+ZADD zset2 3 "three"
+ZUNION 2 zset1 zset2
+ZUNION 2 zset1 zset2 WITHSCORES
+```
diff --git a/commands/zunionstore.md b/commands/zunionstore.md
index 426aafc5d4..efa2fbba55 100644
--- a/commands/zunionstore.md
+++ b/commands/zunionstore.md
@@ -1,43 +1,34 @@
-@complexity
-
-O(N)+O(M log(M)) with N being the sum of the sizes of the input sorted sets,
-and M being the number of elements in the resulting sorted set.
-
 Computes the union of `numkeys` sorted sets given by the specified keys, and
-stores the result in `destination`. It is mandatory to provide the number of
-input keys (`numkeys`) before passing the input keys and the other (optional)
-arguments.
+stores the result in `destination`.
+It is mandatory to provide the number of input keys (`numkeys`) before passing
+the input keys and the other (optional) arguments.

 By default, the resulting score of an element is the sum of its scores in the
 sorted sets where it exists.

 Using the `WEIGHTS` option, it is possible to specify a multiplication factor
-for each input sorted set. This means that the score of every element in every
-input sorted set is multiplied by this factor before being passed to the
-aggregation function. When `WEIGHTS` is not given, the multiplication factors
-default to `1`.
+for each input sorted set.
+This means that the score of every element in every input sorted set is
+multiplied by this factor before being passed to the aggregation function.
+When `WEIGHTS` is not given, the multiplication factors default to `1`.

 With the `AGGREGATE` option, it is possible to specify how the results of the
-union are aggregated. This option defaults to `SUM`, where the score of an
-element is summed across the inputs where it exists.
When this option is set to
-either `MIN` or `MAX`, the resulting set will contain the minimum or maximum
-score of an element across the inputs where it exists.
+union are aggregated.
+This option defaults to `SUM`, where the score of an element is summed across
+the inputs where it exists.
+When this option is set to either `MIN` or `MAX`, the resulting set will contain
+the minimum or maximum score of an element across the inputs where it exists.

 If `destination` already exists, it is overwritten.

-@return
-
-@integer-reply: the number of elements in the resulting sorted set at
-`destination`.
-
@examples

-    @cli
-    ZADD zset1 1 "one"
-    ZADD zset1 2 "two"
-    ZADD zset2 1 "one"
-    ZADD zset2 2 "two"
-    ZADD zset2 3 "three"
-    ZUNIONSTORE out 2 zset1 zset2 WEIGHTS 2 3
-    ZRANGE out 0 -1 WITHSCORES
-
+```cli
+ZADD zset1 1 "one"
+ZADD zset1 2 "two"
+ZADD zset2 1 "one"
+ZADD zset2 2 "two"
+ZADD zset2 3 "three"
+ZUNIONSTORE out 2 zset1 zset2 WEIGHTS 2 3
+ZRANGE out 0 -1 WITHSCORES
+```
diff --git a/community/_index.md b/community/_index.md
new file mode 100644
index 0000000000..b67d2a2aaf
--- /dev/null
+++ b/community/_index.md
@@ -0,0 +1,50 @@
+---
+title: Community
+linkTitle: Community
+---
+
+Since 2009, the Redis project has inspired an enthusiastic and active community of users and contributors. We continue to be committed to fostering an open, welcoming, diverse, inclusive, and healthy community.
+
+## Code of Conduct
+
+Redis has adopted the [Contributor Covenant Code of Conduct](https://github.com/redis/redis/blob/unstable/CODE_OF_CONDUCT.md).
+
+## Getting help
+
+### Discord server
+
+On the [Redis Discord server](https://discord.gg/redis), you can chat with members of the Redis community in real time. You'll meet Redis users, contributors, and developer advocates. This is a great place to stop in for quick questions or to share your latest Redis discoveries.
+
+### Mailing list
+
+Join the [Redis mailing list](https://groups.google.com/g/redis-db) to discuss the ongoing development of Redis and to find out about new Redis releases.
+
+### Stack Overflow
+
+Have a question about Redis? Search the [Stack Overflow Redis tag](https://stackoverflow.com/questions/tagged/redis) for answers, or post a question of your own.
+
+## Redis news
+
+For occasional updates on new Redis releases, you can either [subscribe to the Redis mailing list](https://groups.google.com/g/redis-db) or [follow the Redis News Feed Twitter account](https://twitter.com/redisfeed).
+
+To keep up with the latest from Redis Inc., including news on Redis Cloud and Redis Stack, consider [following the Redis Twitter feed](https://twitter.com/redisinc).
+
+## Contributing to Redis
+
+> Future releases of Redis will be dual-licensed under a source-available license. You can choose between the [Redis Source Available License 2.0 (RSALv2)](/docs/about/license) or the Server Side Public License v1 (SSPLv1).
+
+There are many ways to contribute to Redis, from documentation all the way to changes to the Redis server. Here are a few ways you can get involved.
+
+### Contributing to docs
+
+We welcome contributions to the [Redis docs](https://github.com/redis/redis-doc). For small changes and typos, we recommend creating a pull request against the [redis-doc repo](https://github.com/redis/redis-doc/pulls).
+
+For larger doc changes, we ask that you first create an issue describing your proposed changes. This is a good way to get feedback in advance and increases the likelihood that your changes will be accepted.
+
+### Reporting bugs
+
+To report a bug in Redis, create a [Redis Github issue](https://github.com/redis/redis/issues).
+
+### Client libraries
+
+The Redis client libraries are nearly always open source and accepting of contributions. Consult the contribution guidelines for the library you're interested in.
\ No newline at end of file
diff --git a/docs/_index.md b/docs/_index.md
new file mode 100644
index 0000000000..5cd611c468
--- /dev/null
+++ b/docs/_index.md
@@ -0,0 +1,12 @@
+---
+title: "Documentation"
+linkTitle: "Documentation"
+weight: 20
+aliases:
+    - /documentation
+    - /documentation/
+    - /topics
+    - /topics/
+---
+
+Welcome to the Redis documentation.
diff --git a/docs/about/_index.md b/docs/about/_index.md
new file mode 100644
index 0000000000..56dc574a12
--- /dev/null
+++ b/docs/about/_index.md
@@ -0,0 +1,46 @@
+---
+title: Introduction to Redis
+linkTitle: "About"
+weight: 10
+description: Learn about Redis
+aliases:
+  - /topics/introduction
+  - /buzz
+---
+
+Redis is an open source (BSD licensed), in-memory __data structure store__ used as a database, cache, message broker, and streaming engine.
+
+> Future releases of Redis will be dual-licensed under a source-available license. You can choose between the [Redis Source Available License 2.0 (RSALv2)](/docs/about/license) or the Server Side Public License v1 (SSPLv1).
+
+Redis provides [data structures](/docs/data-types/) such as [strings](/docs/data-types/strings/), [hashes](/docs/data-types/hashes/), [lists](/docs/data-types/lists/), [sets](/docs/data-types/sets/), [sorted sets](/docs/data-types/sorted-sets/) with range queries, [bitmaps](/docs/data-types/bitmaps/), [hyperloglogs](/docs/data-types/hyperloglogs/), [geospatial indexes](/docs/data-types/geospatial/), and [streams](/docs/data-types/streams/). Redis has built-in [replication](/topics/replication), [Lua scripting](/commands/eval), [LRU eviction](/docs/reference/eviction/), [transactions](/topics/transactions), and different levels of [on-disk persistence](/topics/persistence), and provides high availability via [Redis Sentinel](/topics/sentinel) and automatic partitioning with [Redis Cluster](/topics/cluster-tutorial).
+
+You can run __atomic operations__
+on these types, like [appending to a string](/commands/append);
+[incrementing the value in a hash](/commands/hincrby); [pushing an element to a
+list](/commands/lpush); [computing set intersection](/commands/sinter),
+[union](/commands/sunion) and [difference](/commands/sdiff);
+or [getting the member with the highest ranking in a sorted set](/commands/zrange).
+
+To achieve top performance, Redis works with an
+**in-memory dataset**. Depending on your use case, Redis can persist your data either
+by periodically [dumping the dataset to disk](/topics/persistence#snapshotting)
+or by [appending each command to a disk-based log](/topics/persistence#append-only-file). You can also disable persistence if you just need a feature-rich, networked, in-memory cache.
+
+Redis supports [asynchronous replication](/topics/replication), with fast non-blocking synchronization and auto-reconnection with partial resynchronization on net split.
+ +Redis also includes: + +* [Transactions](/topics/transactions) +* [Pub/Sub](/topics/pubsub) +* [Lua scripting](/commands/eval) +* [Keys with a limited time-to-live](/commands/expire) +* [LRU eviction of keys](/docs/reference/eviction) +* [Automatic failover](/topics/sentinel) + +You can use Redis from [most programming languages](/clients). + +Redis is written in **ANSI C** and works on most POSIX systems like Linux, +\*BSD, and Mac OS X, without external dependencies. Linux and OS X are the two operating systems where Redis is developed and tested the most, and we **recommend using Linux for deployment**. Redis may work in Solaris-derived systems like SmartOS, but support is *best effort*. +There is no official support for Windows builds. + +
diff --git a/docs/about/license.md b/docs/about/license.md new file mode 100644 index 0000000000..6f78dd1f2a --- /dev/null +++ b/docs/about/license.md @@ -0,0 +1,95 @@ +--- +title: "Redis license" +linkTitle: "License" +weight: 5 +description: > + Redis license and trademark information +aliases: + - /topics/license + - /docs/stack/license/ +--- + + +* Redis is source-available software, available under the terms of the RSALv2 and SSPLv1 licenses. Most of the Redis source code was written and is copyrighted by Salvatore Sanfilippo and Pieter Noordhuis. A list of other contributors can be found in the git history. + + The Redis trademark and logo are owned by Redis Ltd. and can be +used in accordance with the [Redis Trademark Guidelines](https://redis.com/legal/trademark-guidelines/). + +* RedisInsight is licensed under the Server Side Public License (SSPL). + +* Redis Stack Server, which combines open source Redis with Search and Query features, JSON, Time Series, and Probabilistic data structures is dual-licensed under the Redis Source Available License (RSALv2), as described below, and the [Server Side Public License](https://redis.com/legal/server-side-public-license-sspl/) (SSPL). For information about licensing per version, see [Versions and licenses](/docs/about/about-stack/#redis-stack-license). + + +## Licenses: + +### REDIS SOURCE AVAILABLE LICENSE (RSAL) 2.0 + +Last updated: November 15, 2022 + +#### Acceptance + +By using the software, you agree to all of the terms and conditions below. + +#### Copyright License + +The licensor grants you a non-exclusive, royalty-free, worldwide, non-sublicensable, non-transferable license to use, copy, distribute, make available, and prepare derivative works of the software, in each case subject to the limitations and conditions below. + +#### Limitations + +You may not make the functionality of the software or a modified version available to third parties as a service, or distribute the software or a modified version in a manner that makes the functionality of the software available to third parties. Making the functionality of the software or modified version available to third parties includes, without limitation, enabling third parties to interact with the functionality of the software or modified version in distributed form or remotely through a computer network, offering a product or service the value of which entirely or primarily derives from the value of the software or modified version, or offering a product or service that accomplishes for users the primary purpose of the software or modified version. + +You may not alter, remove, or obscure any licensing, copyright, or other notices of the licensor in the software. Any use of the licensor’s trademarks is subject to applicable law. + +#### Patents + +The licensor grants you a license, under any patent claims the licensor can license, or becomes able to license, to make, have made, use, sell, offer for sale, import and have imported the software, in each case subject to the limitations and conditions in this license. This license does not cover any patent claims that you cause to be infringed by modifications or additions to the software. If you or your company make any written claim that the software infringes or contributes to infringement of any patent, your patent license for the software granted under these terms ends immediately. If your company makes such a claim, your patent license ends immediately for work on behalf of your company. 
+ +#### Notices + +You must ensure that anyone who gets a copy of any part of the software from you also gets a copy of these terms. If you modify the software, you must include in any modified copies of the software prominent notices stating that you have modified the software. + +#### No Other Rights + +These terms do not imply any licenses other than those expressly granted in these terms. + +#### Termination + +If you use the software in violation of these terms, such use is not licensed, and your licenses will automatically terminate. If the licensor provides you with a notice of your violation, and you cease all violations of this license no later than 30 days after you receive that notice, your licenses will be reinstated retroactively. However, if you violate these terms after such reinstatement, any additional violation of these terms will cause your licenses to terminate automatically and permanently. + +#### No Liability + +As far as the law allows, the software comes as is, without any warranty or condition, and the licensor will not be liable to you for any damages arising out of these terms or the use or nature of the software, under any kind of legal claim. + +#### Definitions + +The licensor is the entity offering these terms, and the software is the software the licensor makes available under these terms, including any portion of it. + +To modify a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission other than making an exact copy. The resulting work is called a modified version of the earlier work. + +**you** refers to the individual or entity agreeing to these terms. + +**your company** is any legal entity, sole proprietorship, or other kind of organization that you work for, plus all organizations that have control over, are under the control of, or are under common control with that organization. + +**control** means ownership of substantially all the assets of an entity, or the power to direct its management and policies by vote, contract, or otherwise. Control can be direct or indirect. + +**your licenses** are all the licenses granted to you for the software under these terms. + +**use** means anything you do with the software requiring one of your licenses. + +**trademark** means trademarks, service marks, and similar rights. + +#### Third-party files and licenses + +Redis uses source code from third parties. All this code contains a BSD or BSD-compatible license. The following is a list of third-party files and information about their copyright. + +* Redis uses the [LHF compression library](http://oldhome.schmorp.de/marc/liblzf.html). LibLZF is copyright Marc Alexander Lehmann and is released under the terms of the two-clause BSD license. + +* Redis uses the sha1.c file that is copyright by Steve Reid and released under the public domain. This file is extremely popular and used among open source and proprietary code. + +* When compiled on Linux, Redis uses the [Jemalloc allocator](https://github.com/jemalloc/jemalloc), which is copyrighted by Jason Evans, Mozilla Foundation, and Facebook, Inc and released under the two-clause BSD license. + +* Inside Jemalloc, the file pprof is copyrighted by Google Inc. and released under the three-clause BSD license. + +* Inside Jemalloc the files inttypes.h, stdbool.h, stdint.h, strings.h under the msvc_compat directory are copyright Alexander Chemeris and released under the three-clause BSD license. 
+
+* The hiredis and linenoise libraries, also included inside the Redis distribution, are copyright Salvatore Sanfilippo and Pieter Noordhuis and released under the three-clause BSD and two-clause BSD licenses, respectively.
\ No newline at end of file
diff --git a/docs/about/releases.md b/docs/about/releases.md
new file mode 100644
index 0000000000..f437c3360f
--- /dev/null
+++ b/docs/about/releases.md
@@ -0,0 +1,147 @@
+---
+title: "Redis release cycle"
+linkTitle: "Release cycle"
+weight: 4
+description: How are new versions of Redis released?
+aliases:
+  - /topics/releases
+---
+
+Redis is system software, and moreover system software that holds user data, so
+it is among the most critical pieces of a software stack.
+
+For this reason, Redis' release cycle is such that it ensures highly stable
+releases, even at the cost of slower cycles.
+
+New releases are published in the [Redis GitHub repository](http://github.com/redis/redis)
+and are also available for [download](/download). Announcements are sent to the
+[Redis mailing list](http://groups.google.com/group/redis-db) and by
+[@redisfeed on Twitter](https://twitter.com/redisfeed).
+
+## Release cycle
+
+A given version of Redis can be at three different levels of stability:
+
+* Unstable
+* Release Candidate
+* Stable
+
+### Unstable tree
+
+The unstable version of Redis is located in the `unstable` branch in the
+[Redis GitHub repository](http://github.com/redis/redis).
+
+This branch is the source tree where most of the new features are under
+development. `unstable` is not considered production-ready: it may contain
+critical bugs or incomplete features and is potentially unstable.
+
+However, we try hard to make sure that even the unstable branch is usable most
+of the time in a development environment without significant issues.
+
+### Release candidate
+
+New minor and major versions of Redis begin as forks of the `unstable` branch.
+The forked branch's name is the target release.
+
+For example, when Redis 6.0 was released as a release candidate, the `unstable`
+branch was forked into the `6.0` branch. The new branch is the release
+candidate (RC) for that version.
+
+Bug fixes and new features that can be stabilized during the release's time
+frame are committed to the unstable branch and backported to the release
+candidate branch. The `unstable` branch may include additional work that is not
+a part of the release candidate and is scheduled for future releases.
+
+The first release candidate, or RC1, is released once it can be used for
+development purposes and for testing the new version. At this stage, most of
+the new features and changes the new version brings are ready for review, and
+the release's purpose is collecting the public's feedback.
+
+Subsequent release candidates are released every three weeks or so, primarily
+for fixing bugs. These may also add new features and introduce changes, but at
+a decreasing rate and decreasing potential risk towards the final release
+candidate.
+
+### Stable tree
+
+Once development has ended and the frequency of critical bug reports for the
+release candidate wanes, it is ready for the final release. At this point, the
+release is marked as stable and is released with "0" as its patch-level
+version.
+
+## Versioning
+
+Stable releases liberally follow the usual `major.minor.patch` semantic
+versioning schema. The primary goal is to provide explicit guarantees regarding
+backward compatibility.
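+
+For example, in version 6.0.9, `6` is the major version, `0` the minor version,
+and `9` the patch level.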
+ +### Patch-Level versions + +Patches primarily consist of bug fixes and very rarely introduce any +compatibility issues. + +Upgrading from a previous patch-level version is almost always safe and +seamless. + +New features and configuration directives may be added, or default values +changed, as long as these don’t carry significant impacts or introduce +operations-related issues. + +### Minor versions + +Minor versions usually deliver maturity and extended functionality. + +Upgrading between minor versions does not introduce any application-level +compatibility issues. + +Minor releases may include new commands and data types that introduce +operations-related incompatibilities, including changes in data persistence +format and replication protocol. + +### Major versions + +Major versions introduce new capabilities and significant changes. + +Ideally, these don't introduce application-level compatibility issues. + +## Release schedule + +A new major version is planned for release once a year. + +Generally, every major release is followed by a minor version after six months. + +Patches are released as needed to fix high-urgency issues, or once a stable +version accumulates enough fixes to justify it. + +For contacting the core team on sensitive matters and security issues, please +email [redis@redis.io](mailto:redis@redis.io). + +## Support + +As a rule, older versions are not supported as we try very hard to make the +Redis API mostly backward compatible. + +Upgrading to newer versions is the recommended approach and is usually trivial. + +The latest stable release is always fully supported and maintained. + +Two additional versions receive maintenance only, meaning that only fixes for +critical bugs and major security issues are committed and released as patches: + +* The previous minor version of the latest stable release. +* The previous stable major release. + +For example, consider the following hypothetical versions: 1.2, 2.0, 2.2, 3.0, +3.2. + +When version 2.2 is the latest stable release, both 2.0 and 1.2 are maintained. + +Once version 3.0.0 replaces 2.2 as the latest stable, versions 2.0 and 2.2 are +maintained, whereas version 1.x reaches its end of life. + +This process repeats with version 3.2.0, after which only versions 2.2 and 3.0 +are maintained. + +The above are guidelines rather than rules set in stone and will not replace +common sense. + diff --git a/docs/about/users.md b/docs/about/users.md new file mode 100644 index 0000000000..342368905c --- /dev/null +++ b/docs/about/users.md @@ -0,0 +1,19 @@ +--- +title: "Who's using Redis?" +linkTitle: "Who's using Redis?" +weight: 2 +description: > + Select list of organizations running Redis in production +aliases: + - /topics/whos-using-redis +--- + +A list of well known companies using Redis: + +* [Twitter](https://www.infoq.com/presentations/Real-Time-Delivery-Twitter) +* [GitHub](https://github.com/blog/530-how-we-made-github-fast) +* [Snapchat](https://twitter.com/robustcloud/status/448503100056535040) +* [Craigslist](https://blog.zawodny.com/2011/02/26/redis-sharding-at-craigslist/) +* [StackOverflow](https://meta.stackoverflow.com/questions/69164/does-stackoverflow-use-caching-and-if-so-how/69172) + +And **many others**! [techstacks.io](https://techstacks.io) maintains a list of [popular sites using Redis](https://techstacks.io/tech/redis). 
diff --git a/docs/connect/_index.md b/docs/connect/_index.md
new file mode 100644
index 0000000000..a53d05d406
--- /dev/null
+++ b/docs/connect/_index.md
@@ -0,0 +1,39 @@
+---
+title: Connect to Redis
+linkTitle: Connect
+description: Learn how to use user interfaces and client libraries
+weight: 35
+aliases:
+  - /docs/ui
+---
+
+You can connect to Redis in the following ways:
+
+* With the `redis-cli` command line tool
+* With RedisInsight, a graphical user interface
+* With a client library for your programming language
+
+## Redis command line interface
+
+The [Redis command line interface](/docs/connect/cli) (also known as `redis-cli`) is a terminal program that sends commands to and reads replies from the Redis server. It has the following two main modes:
+
+1. An interactive Read Eval Print Loop (REPL) mode where the user types Redis commands and receives replies.
+2. A command mode where `redis-cli` is executed with additional arguments, and the reply is printed to the standard output.
+
+## RedisInsight
+
+[RedisInsight](/docs/connect/insight) combines a graphical user interface with Redis CLI to let you work with any Redis deployment. You can visually browse and interact with data, take advantage of diagnostic tools, learn by example, and much more. Best of all, RedisInsight is free.
+
+## Client libraries
+
+It's easy to connect your application to a Redis database. The official client libraries cover the following languages:
+
+* [C#/.NET](/docs/connect/clients/dotnet)
+* [Go](/docs/connect/clients/go)
+* [Java](/docs/connect/clients/java)
+* [Node.js](/docs/connect/clients/nodejs)
+* [Python](/docs/connect/clients/python)
+
+You can find a complete list of all client libraries, including the community-maintained ones, on the [clients page](/resources/clients/).
+
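+As a quick illustration of the client library route, here is a minimal sketch
+using the `redis-py` library for Python (the host, port, and key name below are
+placeholders for your own setup; install the client with `pip install redis`):
+
+```python
+import redis
+
+# Connect to a Redis server; decode_responses makes replies come back as
+# Python strings instead of raw bytes.
+r = redis.Redis(host="localhost", port=6379, decode_responses=True)
+
+r.set("greeting", "Hello, Redis!")  # same as: SET greeting "Hello, Redis!"
+print(r.get("greeting"))            # same as: GET greeting
+```
+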
diff --git a/docs/connect/cli.md b/docs/connect/cli.md
new file mode 100644
index 0000000000..4a9a4b2e97
--- /dev/null
+++ b/docs/connect/cli.md
@@ -0,0 +1,804 @@
+---
+title: "Redis CLI"
+linkTitle: "CLI"
+weight: 1
+description: >
+    Overview of redis-cli, the Redis command line interface
+aliases:
+  - /docs/manual/cli
+  - /docs/management/cli
+  - /docs/ui/cli
+---
+
+In interactive mode, `redis-cli` has basic line editing capabilities to provide a familiar typing experience.
+
+To launch the program in special modes, you can use several options, including:
+
+* Simulate a replica and print the replication stream it receives from the primary.
+* Check the latency of a Redis server and display statistics.
+* Request an ASCII-art spectrogram of latency samples and frequencies.
+
+This topic covers the different aspects of `redis-cli`, starting from the simplest and ending with the more advanced features.
+
+## Command line usage
+
+To run a Redis command and print its reply to the terminal's standard output, include the command to execute as separate arguments of `redis-cli`:
+
+    $ redis-cli INCR mycounter
+    (integer) 7
+
+The reply of the command is "7". Since Redis replies are typed (strings, arrays, integers, nil, errors, etc.), you see the type of the reply between parentheses. This additional information may not be ideal when the output of `redis-cli` must be used as input to another command or redirected into a file.
+
+`redis-cli` only shows additional information for human readability when it detects the standard output is a tty, or terminal. For all other outputs it will auto-enable the *raw output mode*, as in the following example:
+
+    $ redis-cli INCR mycounter > /tmp/output.txt
+    $ cat /tmp/output.txt
+    8
+
+Note that `(integer)` is omitted from the output because `redis-cli` detects
+the output is no longer written to the terminal. You can force raw output
+even on the terminal with the `--raw` option:
+
+    $ redis-cli --raw INCR mycounter
+    9
+
+You can force human readable output when writing to a file or piping
+to other commands by using `--no-raw`.
+
+## String quoting and escaping
+
+When `redis-cli` parses a command, whitespace characters automatically delimit the arguments.
+In interactive mode, a newline sends the command for parsing and execution.
+To input string values that contain whitespaces or non-printable characters, you can use quoted and escaped strings.
+
+Quoted string values are enclosed in double (`"`) or single (`'`) quotation marks.
+Escape sequences are used to put nonprintable characters in character and string literals.
+
+An escape sequence contains a backslash (`\`) symbol followed by one of the escape sequence characters.
+
+Doubly-quoted strings support the following escape sequences:
+
+* `\"` - double-quote
+* `\n` - newline
+* `\r` - carriage return
+* `\t` - horizontal tab
+* `\b` - backspace
+* `\a` - alert
+* `\\` - backslash
+* `\xhh` - any ASCII character represented by a hexadecimal number (_hh_)
+
+Single quotes assume the string is literal, and allow only the following escape sequences:
+* `\'` - single quote
+* `\\` - backslash
+
+For example, to return `Hello World` on two lines:
+
+```
+127.0.0.1:6379> SET mykey "Hello\nWorld"
+OK
+127.0.0.1:6379> GET mykey
+Hello
+World
+```
+
+When you input strings that contain single or double quotes, as you might in passwords, for example, escape the string, like so:
+
+```
+127.0.0.1:6379> AUTH some_admin_user ">^8T>6Na{u|jp>+v\"55\@_;OU(OR]7mbAYGqsfyu48(j'%hQH7;v*f1H${*gD(Se'"
+```
+
+## Host, port, password, and database
+
+By default, `redis-cli` connects to the server at the address 127.0.0.1 with port 6379.
+You can change the port using several command line options. To specify a different host name or an IP address, use the `-h` option. In order to set a different port, use `-p`.
+
+    $ redis-cli -h redis15.localnet.org -p 6390 PING
+    PONG
+
+If your instance is password protected, the `-a <password>` option will
+perform authentication, saving the need to explicitly use the `AUTH` command:
+
+    $ redis-cli -a myUnguessablePazzzzzword123 PING
+    PONG
+
+**NOTE:** For security reasons, provide the password to `redis-cli` automatically via the
+`REDISCLI_AUTH` environment variable.
+
+Finally, it's possible to send a command that operates on a database number
+other than the default number zero by using the `-n <dbnum>` option:
+
+    $ redis-cli FLUSHALL
+    OK
+    $ redis-cli -n 1 INCR a
+    (integer) 1
+    $ redis-cli -n 1 INCR a
+    (integer) 2
+    $ redis-cli -n 2 INCR a
+    (integer) 1
+
+Some or all of this information can also be provided by using the `-u <uri>`
+option and the URI pattern `redis://user:password@host:port/dbnum`:
+
+    $ redis-cli -u redis://LJenkins:p%40ssw0rd@redis-16379.hosted.com:16379/0 PING
+    PONG
+
+**NOTE:**
+User, password and dbnum are optional.
+For authentication without a username, use username `default`.
+For TLS, use the scheme `rediss`.
+
+## SSL/TLS
+
+By default, `redis-cli` uses a plain TCP connection to connect to Redis.
+You may enable SSL/TLS using the `--tls` option, along with `--cacert` or
+`--cacertdir` to configure a trusted root certificate bundle or directory.
+
+If the target server requires authentication using a client side certificate,
+you can specify a certificate and a corresponding private key using `--cert` and
+`--key`.
+
+## Getting input from other programs
+
+There are two ways you can use `redis-cli` in order to receive input from other
+commands via the standard input. One is to use the target payload as the last argument
+from *stdin*. For example, in order to set the Redis key `net_services`
+to the content of the file `/etc/services` from a local file system, use the `-x`
+option:
+
+    $ redis-cli -x SET net_services < /etc/services
+    OK
+    $ redis-cli GETRANGE net_services 0 50
+    "#\n# Network services, Internet style\n#\n# Note that "
+
+In the first line of the above session, `redis-cli` was executed with the `-x` option and a file was redirected to the CLI's
+standard input as the value to satisfy the `SET net_services` command phrase. This is useful for scripting.
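+
+As another small illustration (the key and value here are arbitrary), the
+payload can just as well come from another command's output. Note that `echo`
+normally appends a trailing newline, which would become part of the stored
+value, so `echo -n` is used to suppress it:
+
+    $ echo -n "Hello" | redis-cli -x SET greeting
+    OK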
+
+A different approach is to feed `redis-cli` a sequence of commands written in a
+text file:
+
+    $ cat /tmp/commands.txt
+    SET item:3374 100
+    INCR item:3374
+    APPEND item:3374 xxx
+    GET item:3374
+    $ cat /tmp/commands.txt | redis-cli
+    OK
+    (integer) 101
+    (integer) 6
+    "101xxx"
+
+All the commands in `commands.txt` are executed consecutively by
+`redis-cli` as if they were typed by the user in interactive mode. Strings can be
+quoted inside the file if needed, so that it's possible to have single
+arguments with spaces, newlines, or other special characters:
+
+    $ cat /tmp/commands.txt
+    SET arg_example "This is a single argument"
+    STRLEN arg_example
+    $ cat /tmp/commands.txt | redis-cli
+    OK
+    (integer) 25
+
+## Continuously run the same command
+
+It is possible to execute a single command a specified number of times
+with a user-selected pause between executions. This is useful in
+different contexts - for example when we want to continuously monitor some
+key content or `INFO` field output, or when we want to simulate some
+recurring write event, such as pushing a new item into a list every 5 seconds.
+
+This feature is controlled by two options: `-r <count>` and `-i <delay>`.
+The `-r` option states how many times to run a command and `-i` sets
+the delay between the different command calls in seconds (with the ability
+to specify values such as 0.1 to represent 100 milliseconds).
+
+By default, the interval (or delay) is set to 0, so commands are just executed
+ASAP:
+
+    $ redis-cli -r 5 INCR counter_value
+    (integer) 1
+    (integer) 2
+    (integer) 3
+    (integer) 4
+    (integer) 5
+
+To run the same command indefinitely, use `-1` as the count value.
+To monitor the RSS memory size over time, it's possible to use the following command:
+
+    $ redis-cli -r -1 -i 1 INFO | grep rss_human
+    used_memory_rss_human:2.71M
+    used_memory_rss_human:2.73M
+    used_memory_rss_human:2.73M
+    used_memory_rss_human:2.73M
+    ... a new line will be printed each second ...
+
+## Mass insertion of data using `redis-cli`
+
+Mass insertion using `redis-cli` is covered in a separate page as it is a
+worthwhile topic itself. Please refer to our [mass insertion guide](/topics/mass-insert).
+
+## CSV output
+
+A CSV (Comma Separated Values) output feature exists within `redis-cli` to export data from Redis to an external program.
+
+    $ redis-cli LPUSH mylist a b c d
+    (integer) 4
+    $ redis-cli --csv LRANGE mylist 0 -1
+    "d","c","b","a"
+
+Note that the `--csv` flag will only work on a single command, not the entirety of a DB as an export.
+
+## Running Lua scripts
+
+The `redis-cli` has extensive support for using the debugging facility
+of Lua scripting, available from Redis 3.2 onwards. For this feature, refer to the [Redis Lua debugger documentation](/topics/ldb).
+
+Even without using the debugger, `redis-cli` can be used to
+run scripts from a file as an argument:
+
+    $ cat /tmp/script.lua
+    return redis.call('SET',KEYS[1],ARGV[1])
+    $ redis-cli --eval /tmp/script.lua location:hastings:temp , 23
+    OK
+
+The Redis `EVAL` command takes the list of keys the script uses, and the
+other non-key arguments, as different arrays. When calling `EVAL` directly, you
+provide the number of keys explicitly.
+
+When calling `redis-cli` with the `--eval` option above, there is no need to specify the number of keys
+explicitly. Instead it uses the convention of separating keys and arguments
+with a comma. This is why in the above call you see `location:hastings:temp , 23` as arguments.
+
+So `location:hastings:temp` will populate the `KEYS` array, and `23` the `ARGV` array.
+
+The `--eval` option is useful when writing simple scripts. For more
+complex work, the Lua debugger is recommended. It is possible to mix the two approaches, since the debugger can also execute scripts from an external file.
+
+Interactive mode
+===
+
+We have explored how to use the Redis CLI as a command line program.
+This is useful for scripts and certain types of testing; however, most
+people will spend the majority of their time in `redis-cli` using its interactive
+mode.
+
+In interactive mode the user types Redis commands at the prompt. The command
+is sent to the server, processed, and the reply is parsed back and rendered
+into a simpler form to read.
+
+Nothing special is needed for running `redis-cli` in interactive mode -
+just execute it without any arguments:
+
+    $ redis-cli
+    127.0.0.1:6379> PING
+    PONG
+
+The string `127.0.0.1:6379>` is the prompt. It displays the connected Redis server instance's hostname and port.
+
+The prompt updates as the connected server changes or when operating on a database different from the database number zero:
+
+    127.0.0.1:6379> SELECT 2
+    OK
+    127.0.0.1:6379[2]> DBSIZE
+    (integer) 1
+    127.0.0.1:6379[2]> SELECT 0
+    OK
+    127.0.0.1:6379> DBSIZE
+    (integer) 503
+
+## Handling connections and reconnections
+
+Using the `CONNECT` command in interactive mode makes it possible to connect
+to a different instance, by specifying the *hostname* and *port* we want
+to connect to:
+
+    127.0.0.1:6379> CONNECT metal 6379
+    metal:6379> PING
+    PONG
+
+As you can see, the prompt changes accordingly when connecting to a different server instance.
+If a connection is attempted to an instance that is unreachable, `redis-cli` goes into disconnected
+mode and attempts to reconnect with each new command:
+
+    127.0.0.1:6379> CONNECT 127.0.0.1 9999
+    Could not connect to Redis at 127.0.0.1:9999: Connection refused
+    not connected> PING
+    Could not connect to Redis at 127.0.0.1:9999: Connection refused
+    not connected> PING
+    Could not connect to Redis at 127.0.0.1:9999: Connection refused
+
+Generally, after a disconnection is detected, `redis-cli` always attempts to
+reconnect transparently; if the attempt fails, it shows the error and
+enters the disconnected state. The following is an example of disconnection
+and reconnection:
+
+    127.0.0.1:6379> INFO SERVER
+    Could not connect to Redis at 127.0.0.1:6379: Connection refused
+    not connected> PING
+    PONG
+    127.0.0.1:6379>
+    (now we are connected again)
+
+When a reconnection is performed, `redis-cli` automatically re-selects the
+last database number selected. However, all other state associated with the
+connection is lost, such as being within a MULTI/EXEC transaction:
+
+    $ redis-cli
+    127.0.0.1:6379> MULTI
+    OK
+    127.0.0.1:6379> PING
+    QUEUED
+
+    ( here the server is manually restarted )
+
+    127.0.0.1:6379> EXEC
+    (error) ERR EXEC without MULTI
+
+This is usually not an issue when using `redis-cli` in interactive mode for
+testing, but this limitation should be known.
+
+## Editing, history, completion and hints
+
+Because `redis-cli` uses the
+[linenoise line editing library](http://github.com/antirez/linenoise), it
+always has line editing capabilities, without depending on `libreadline` or
+other optional libraries.
+
+Command execution history can be accessed in order to avoid retyping commands by pressing the arrow keys (up and down).
+The history is preserved between restarts of the CLI, in a file named
+`.rediscli_history` inside the user home directory, as specified
+by the `HOME` environment variable. It is possible to use a different
+history filename by setting the `REDISCLI_HISTFILE` environment variable,
+and to disable history by setting it to `/dev/null`.
+
+The `redis-cli` is also able to perform command-name completion by pressing the TAB
+key, as in the following example:
+
+    127.0.0.1:6379> Z<TAB>
+    127.0.0.1:6379> ZADD<TAB>
+    127.0.0.1:6379> ZCARD<TAB>
+
+Once a Redis command name has been entered at the prompt, `redis-cli` will display
+syntax hints. Like command history, this behavior can be turned on and off via the `redis-cli` preferences.
+
+## Preferences
+
+There are two ways to customize `redis-cli` behavior. The file `.redisclirc`
+in the home directory is loaded by the CLI on startup. You can override the
+file's default location by setting the `REDISCLI_RCFILE` environment variable to
+an alternative path. Preferences can also be set during a CLI session, in which
+case they will last only the duration of the session.
+
+To set preferences, use the special `:set` command. The following preferences
+can be set, either by typing the command in the CLI or adding it to the
+`.redisclirc` file:
+
+* `:set hints` - enables syntax hints
+* `:set nohints` - disables syntax hints
+
+## Running the same command N times
+
+It is possible to run the same command multiple times in interactive mode by prefixing the command
+name by a number:
+
+    127.0.0.1:6379> 5 INCR mycounter
+    (integer) 1
+    (integer) 2
+    (integer) 3
+    (integer) 4
+    (integer) 5
+
+## Showing help about Redis commands
+
+`redis-cli` provides online help for most Redis [commands](/commands), using the `HELP` command. The command can be used
+in two forms:
+
+* `HELP @<category>` shows all the commands about a given category. The
+categories are:
+    - `@generic`
+    - `@string`
+    - `@list`
+    - `@set`
+    - `@sorted_set`
+    - `@hash`
+    - `@pubsub`
+    - `@transactions`
+    - `@connection`
+    - `@server`
+    - `@scripting`
+    - `@hyperloglog`
+    - `@cluster`
+    - `@geo`
+    - `@stream`
+* `HELP <commandname>` shows specific help for the command given as its argument.
+
+For example, in order to show help for the `PFADD` command, use:
+
+    127.0.0.1:6379> HELP PFADD
+
+    PFADD key element [element ...]
+    summary: Adds the specified elements to the specified HyperLogLog.
+    since: 2.8.9
+
+Note that `HELP` supports TAB completion as well.
+
+## Clearing the terminal screen
+
+Using the `CLEAR` command in interactive mode clears the terminal's screen.
+
+Special modes of operation
+===
+
+So far, we have seen two main modes of `redis-cli`.
+
+* Command line execution of Redis commands.
+* Interactive "REPL" usage.
+
+The CLI performs other auxiliary tasks related to Redis that
+are explained in the next sections:
+
+* Monitoring tool to show continuous stats about a Redis server.
+* Scanning a Redis database for very large keys.
+* Key space scanner with pattern matching.
+* Acting as a [Pub/Sub](/topics/pubsub) client to subscribe to channels.
+* Monitoring the commands executed in a Redis instance.
+* Checking the [latency](/topics/latency) of a Redis server in different ways.
+* Checking the scheduler latency of the local computer.
+* Transferring RDB backups from a remote Redis server locally.
+* Acting as a Redis replica for showing what a replica receives.
+* Simulating [LRU](/topics/lru-cache) workloads for showing stats about key hits.
+* A client for the Lua debugger.
+
+## Continuous stats mode
+
+Continuous stats mode is probably one of the lesser known yet very useful features of `redis-cli` to monitor Redis instances in real time. To enable this mode, the `--stat` option is used.
+The output is very clear about the behavior of the CLI in this mode:
+
+    $ redis-cli --stat
+    ------- data ------ --------------------- load -------------------- - child -
+    keys       mem      clients blocked requests            connections
+    506        1015.00K 1       0       24 (+0)             7
+    506        1015.00K 1       0       25 (+1)             7
+    506        3.40M    51      0       60461 (+60436)      57
+    506        3.40M    51      0       146425 (+85964)     107
+    507        3.40M    51      0       233844 (+87419)     157
+    507        3.40M    51      0       321715 (+87871)     207
+    508        3.40M    51      0       408642 (+86927)     257
+    508        3.40M    51      0       497038 (+88396)     257
+
+In this mode a new line is printed every second with useful information and the
+difference in request values from the previous data point. Memory usage, client
+connection counts, and various other statistics about the connected Redis
+database can be easily understood with this auxiliary `redis-cli` tool.
+
+The `-i <interval>` option in this case works as a modifier in order to
+change the frequency at which new lines are emitted. The default is one
+second.
+
+## Scanning for big keys
+
+In this special mode, `redis-cli` works as a key space analyzer. It scans the
+dataset for big keys, but also provides information about the data types
+that the data set consists of. This mode is enabled with the `--bigkeys` option,
+and produces verbose output:
+
+    $ redis-cli --bigkeys
+
+    # Scanning the entire keyspace to find biggest keys as well as
+    # average sizes per key type.  You can use -i 0.01 to sleep 0.01 sec
+    # per SCAN command (not usually needed).
+
+    [00.00%] Biggest string found so far 'key-419' with 3 bytes
+    [05.14%] Biggest list found so far 'mylist' with 100004 items
+    [35.77%] Biggest string found so far 'counter:__rand_int__' with 6 bytes
+    [73.91%] Biggest hash found so far 'myobject' with 3 fields
+
+    -------- summary -------
+
+    Sampled 506 keys in the keyspace!
+    Total key length in bytes is 3452 (avg len 6.82)
+
+    Biggest string found 'counter:__rand_int__' has 6 bytes
+    Biggest list found 'mylist' has 100004 items
+    Biggest hash found 'myobject' has 3 fields
+
+    504 strings with 1403 bytes (99.60% of keys, avg size 2.78)
+    1 lists with 100004 items (00.20% of keys, avg size 100004.00)
+    0 sets with 0 members (00.00% of keys, avg size 0.00)
+    1 hashs with 3 fields (00.20% of keys, avg size 3.00)
+    0 zsets with 0 members (00.00% of keys, avg size 0.00)
+
+In the first part of the output, each new key encountered that is larger than
+the largest key of the same type seen so far is reported. The summary section
+provides general stats about the data inside the Redis instance.
+
+The program uses the `SCAN` command, so it can be executed against a busy
+server without impacting the operations; however, the `-i` option can be
+used in order to throttle the scanning process, sleeping the specified fraction
+of a second after each `SCAN` command.
+
+For example, `-i 0.01` will slow down the program execution considerably, but will also reduce the load on the server
+to a negligible amount.
+
+Note that the summary also reports in a cleaner form the biggest keys found
+for each type. The initial output is just to provide some interesting info
+ASAP if running against a very large data set.
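+
+For example, to make `--bigkeys` sleep a hundredth of a second between `SCAN`
+calls, as suggested by the tool's own banner:
+
+    $ redis-cli --bigkeys -i 0.01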
+
+## Getting a list of keys
+
+It is also possible to scan the key space, again in a way that does not
+block the Redis server (which does happen when you use a command
+like `KEYS *`), and print all the key names, or filter them for specific
+patterns. This mode, like the `--bigkeys` option, uses the `SCAN` command,
+so keys may be reported multiple times if the dataset is changing, but no
+key will ever be missing if it was present from the start of the
+iteration. Because of the command that it uses, this option is called `--scan`.
+
+    $ redis-cli --scan | head -10
+    key-419
+    key-71
+    key-236
+    key-50
+    key-38
+    key-458
+    key-453
+    key-499
+    key-446
+    key-371
+
+Note that `head -10` is used in order to print only the first ten lines of the
+output.
+
+Scanning is able to use the underlying pattern matching capability of
+the `SCAN` command with the `--pattern` option.
+
+    $ redis-cli --scan --pattern '*-11*'
+    key-114
+    key-117
+    key-118
+    key-113
+    key-115
+    key-112
+    key-119
+    key-11
+    key-111
+    key-110
+    key-116
+
+Piping the output through the `wc` command can be used to count specific
+kinds of objects, by key name:
+
+    $ redis-cli --scan --pattern 'user:*' | wc -l
+    3829433
+
+You can use `-i 0.01` to add a delay between calls to the `SCAN` command.
+This will make the command slower but will significantly reduce the load on the server.
+
+## Pub/sub mode
+
+The CLI is able to publish messages to Redis Pub/Sub channels using
+the `PUBLISH` command. Subscribing to channels in order to receive
+messages is different - the terminal is blocked and waits for
+messages, so this is implemented as a special mode in `redis-cli`. Unlike
+other special modes, this mode is not enabled by using a special option,
+but simply by using the `SUBSCRIBE` or `PSUBSCRIBE` command, both of which are available in
+interactive and command-line modes:
+
+    $ redis-cli PSUBSCRIBE '*'
+    Reading messages... (press Ctrl-C to quit)
+    1) "PSUBSCRIBE"
+    2) "*"
+    3) (integer) 1
+
+The *reading messages* message shows that we entered Pub/Sub mode.
+When another client publishes a message to some channel, such as with the command `redis-cli PUBLISH mychannel mymessage`, the CLI in Pub/Sub mode will show something such as:
+
+    1) "pmessage"
+    2) "*"
+    3) "mychannel"
+    4) "mymessage"
+
+This is very useful for debugging Pub/Sub issues.
+To exit Pub/Sub mode, just press `CTRL-C`.
+
+## Monitoring commands executed in Redis
+
+Similarly to the Pub/Sub mode, the monitoring mode is entered automatically
+once you use the `MONITOR` command. All commands received by the active Redis instance will be printed to the standard output:
+
+    $ redis-cli MONITOR
+    OK
+    1460100081.165665 [0 127.0.0.1:51706] "set" "shipment:8000736522714:status" "sorting"
+    1460100083.053365 [0 127.0.0.1:51707] "get" "shipment:8000736522714:status"
+
+Note that it is possible to pipe the output, so you can monitor
+for specific patterns using tools such as `grep`.
+
+## Monitoring the latency of Redis instances
+
+Redis is often used in contexts where latency is very critical. Latency
+involves multiple moving parts within the application, from the client library
+to the network stack, to the Redis instance itself.
+
+The basic latency-checking tool is the `--latency` option.
Using this
+option the CLI runs a loop where the `PING` command is sent to the Redis
+instance and the time to receive a reply is measured. This happens 100
+times per second, and stats are updated in real time in the console:
+
+    $ redis-cli --latency
+    min: 0, max: 1, avg: 0.19 (427 samples)
+
+The stats are provided in milliseconds. Usually, the average latency of
+a very fast instance tends to be overestimated a bit because of the
+latency due to the kernel scheduler of the system running `redis-cli`
+itself, so the average latency of 0.19 above may easily be 0.01 or less.
+However, this is usually not a big problem, since most developers are interested in
+events of a few milliseconds or more.
+
+Sometimes it is useful to study how the maximum and average latencies
+evolve over time. The `--latency-history` option is used for that
+purpose: it works exactly like `--latency`, but every 15 seconds (by
+default) a new sampling session is started from scratch:
+
+    $ redis-cli --latency-history
+    min: 0, max: 1, avg: 0.14 (1314 samples) -- 15.01 seconds range
+    min: 0, max: 1, avg: 0.18 (1299 samples) -- 15.00 seconds range
+    min: 0, max: 1, avg: 0.20 (113 samples)^C
+
+The length of the sampling sessions can be changed with the `-i <interval>` option.
+
+The most advanced latency study tool, but also the most complex to
+interpret for inexperienced users, is the ability to use color terminals
+to show a spectrum of latencies. You'll see a colored output that indicates the
+different percentages of samples, and different ASCII characters that indicate
+different latency figures. This mode is enabled using the `--latency-dist`
+option:
+
+    $ redis-cli --latency-dist
+    (output not displayed, requires a color terminal, try it!)
+
+There is another pretty unusual latency tool implemented inside `redis-cli`.
+It does not check the latency of a Redis instance, but the latency of the
+computer running `redis-cli`. This latency is intrinsic to the kernel scheduler,
+the hypervisor in case of virtualized instances, and so forth.
+
+Redis calls it *intrinsic latency* because it's mostly opaque to the programmer.
+If the Redis instance has high latency regardless of all the obvious things
+that may be the cause, it's worth checking the best your system
+can do by running `redis-cli` in this special mode directly on the system where you
+are running the Redis servers.
+
+By measuring the intrinsic latency, you know that this is the baseline,
+and Redis cannot outdo your system. In order to run the CLI
+in this mode, use `--intrinsic-latency <test-time>`. Note that the test time is in seconds and dictates how long the test should run.
+
+    $ ./redis-cli --intrinsic-latency 5
+    Max latency so far: 1 microseconds.
+    Max latency so far: 7 microseconds.
+    Max latency so far: 9 microseconds.
+    Max latency so far: 11 microseconds.
+    Max latency so far: 13 microseconds.
+    Max latency so far: 15 microseconds.
+    Max latency so far: 34 microseconds.
+    Max latency so far: 82 microseconds.
+    Max latency so far: 586 microseconds.
+    Max latency so far: 739 microseconds.
+
+    65433042 total runs (avg latency: 0.0764 microseconds / 764.14 nanoseconds per run).
+    Worst run took 9671x longer than the average latency.
+
+IMPORTANT: this command must be executed on the computer that runs the Redis server instance, not on a different host. It does not connect to a Redis instance; it performs the test locally.
+
+In the above case, the system cannot do better than 739 microseconds of worst
+case latency, so one can expect certain queries to occasionally run in a bit less than 1 millisecond.
+
+## Remote backups of RDB files
+
+During a Redis replication's first synchronization, the primary and the replica
+exchange the whole data set in the form of an RDB file. This feature is exploited
+by `redis-cli` in order to provide a remote backup facility that allows a
+transfer of an RDB file from any Redis instance to the local computer running
+`redis-cli`. To use this mode, call the CLI with the `--rdb <dest-filename>`
+option:
+
+    $ redis-cli --rdb /tmp/dump.rdb
+    SYNC sent to master, writing 13256 bytes to '/tmp/dump.rdb'
+    Transfer finished with success.
+
+This is a simple but effective way to make sure disaster recovery
+RDB backups of your Redis instance exist. When using this option in
+scripts or `cron` jobs, make sure to check the return value of the command.
+If it is non-zero, an error occurred, as in the following example:
+
+    $ redis-cli --rdb /tmp/dump.rdb
+    SYNC with master failed: -ERR Can't SYNC while not connected with my master
+    $ echo $?
+    1
+
+## Replica mode
+
+The replica mode of the CLI is an advanced feature useful for
+Redis developers and for debugging operations.
+It allows you to inspect the content a primary sends to its replicas in the
+replication stream in order to propagate the writes. The option
+name is simply `--replica`. The following is a working example:
+
+    $ redis-cli --replica
+    SYNC with master, discarding 13256 bytes of bulk transfer...
+    SYNC done. Logging commands from master.
+    "PING"
+    "SELECT","0"
+    "SET","last_name","Enigk"
+    "PING"
+    "INCR","mycounter"
+
+The command begins by discarding the RDB file of the first synchronization
+and then logs each command received, in CSV format.
+
+If you think some of the commands are not replicated correctly to your replicas,
+this is a good way to check what's happening, and the output is also useful
+information for improving a bug report.
+
+## Performing an LRU simulation
+
+Redis is often used as a cache with [LRU eviction](/topics/lru-cache).
+Depending on the number of keys and the amount of memory allocated for the
+cache (specified via the `maxmemory` directive), the amount of cache hits
+and misses will change. Sometimes, simulating the rate of hits is very
+useful to correctly provision your cache.
+
+The `redis-cli` has a special mode where it performs a simulation of GET and SET
+operations, using an 80-20% power law distribution in the requests pattern.
+This means that 20% of keys will be requested 80% of the time, which is a
+common distribution in caching scenarios.
+
+Theoretically, given the distribution of the requests and the Redis memory
+overhead, it should be possible to compute the hit rate analytically
+with a mathematical formula. However, Redis can be configured with
+different LRU settings (number of samples), and the LRU implementation, which
+is approximated in Redis, changes a lot between different versions. Similarly,
+the amount of memory per key may change between versions. That is why this
+tool was built: its main motivation was testing the quality of Redis' LRU
+implementation, but it is now also useful for testing how a given version
+behaves with the settings originally intended for deployment.
+
+To use this mode, specify the number of keys in the test and configure a sensible `maxmemory` setting as a first attempt.
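+As a sketch, such a first attempt could use a `redis.conf` fragment like the following (the values are illustrative assumptions, not recommendations):
+
+    maxmemory 100mb
+    maxmemory-policy allkeys-lru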
+
+IMPORTANT NOTE: Configuring the `maxmemory` setting in the Redis configuration
+is crucial: if there is no cap on the maximum memory usage, the hit rate will
+eventually be 100%, since all the keys can be stored in memory. If too many keys are specified relative to the maximum memory, eventually all of the computer's RAM will be used. You also need to configure an appropriate
+*maxmemory policy*; most of the time `allkeys-lru` is selected.
+
+In the following example, a memory limit of 100MB is configured and the LRU
+simulation uses 10 million keys.
+
+WARNING: the test uses pipelining and will stress the server; don't use it
+with production instances.
+
+    $ ./redis-cli --lru-test 10000000
+    156000 Gets/sec | Hits: 4552 (2.92%) | Misses: 151448 (97.08%)
+    153750 Gets/sec | Hits: 12906 (8.39%) | Misses: 140844 (91.61%)
+    159250 Gets/sec | Hits: 21811 (13.70%) | Misses: 137439 (86.30%)
+    151000 Gets/sec | Hits: 27615 (18.29%) | Misses: 123385 (81.71%)
+    145000 Gets/sec | Hits: 32791 (22.61%) | Misses: 112209 (77.39%)
+    157750 Gets/sec | Hits: 42178 (26.74%) | Misses: 115572 (73.26%)
+    154500 Gets/sec | Hits: 47418 (30.69%) | Misses: 107082 (69.31%)
+    151250 Gets/sec | Hits: 51636 (34.14%) | Misses: 99614 (65.86%)
+
+The program shows stats every second. In the first seconds the cache starts to be populated. The miss rate later stabilizes at the actual figure that can be expected:
+
+    120750 Gets/sec | Hits: 48774 (40.39%) | Misses: 71976 (59.61%)
+    122500 Gets/sec | Hits: 49052 (40.04%) | Misses: 73448 (59.96%)
+    127000 Gets/sec | Hits: 50870 (40.06%) | Misses: 76130 (59.94%)
+    124250 Gets/sec | Hits: 50147 (40.36%) | Misses: 74103 (59.64%)
+
+A miss rate of 59% may not be acceptable for certain use cases, in which case
+100MB of memory is not enough. Observe an example using half a gigabyte of memory instead. After several
+minutes the output stabilizes to the following figures:
+
+    140000 Gets/sec | Hits: 135376 (96.70%) | Misses: 4624 (3.30%)
+    141250 Gets/sec | Hits: 136523 (96.65%) | Misses: 4727 (3.35%)
+    140250 Gets/sec | Hits: 135457 (96.58%) | Misses: 4793 (3.42%)
+    140500 Gets/sec | Hits: 135947 (96.76%) | Misses: 4553 (3.24%)
+
+With 500MB there is sufficient space for the key quantity (10 million) and distribution (80-20 style).
diff --git a/docs/connect/clients/_index.md b/docs/connect/clients/_index.md
new file mode 100644
index 0000000000..d6e8fcff39
--- /dev/null
+++ b/docs/connect/clients/_index.md
@@ -0,0 +1,28 @@
+---
+title: "Connect with Redis clients"
+linkTitle: "Clients"
+description: Connect your application to a Redis database and try an example
+weight: 45
+aliases:
+  - /docs/redis-clients
+  - /docs/stack/get-started/clients/
+  - /docs/clients/
+---
+
+Here, you will learn how to connect your application to a Redis database. If you're new to Redis, you might first want to [install Redis with Redis Stack and RedisInsight](/docs/getting-started/install-stack/).
+
+For more Redis topics, see [Using](/docs/manual/) and [Managing](/docs/management/) Redis.
+
+If you're ready to get started, see the following guides for the official client libraries you can use with Redis. For a complete list of community-driven clients, see [Clients](/resources/clients/).
+
+
+## High-level client libraries
+
+The Redis OM client libraries let you use the document modeling, indexing, and querying capabilities of Redis Stack much as you would use an [ORM](https://en.wikipedia.org/wiki/Object%E2%80%93relational_mapping).
The following Redis OM libraries support Redis Stack: + +* [Redis OM .NET](/docs/clients/om-clients/stack-dotnet/) +* [Redis OM Node](/docs/clients/om-clients/stack-node/) +* [Redis OM Python](/docs/clients/om-clients/stack-python/) +* [Redis OM Spring](/docs/clients/om-clients/stack-spring/) + +
\ No newline at end of file
diff --git a/docs/connect/clients/dotnet.md b/docs/connect/clients/dotnet.md
new file mode 100644
index 0000000000..c38c7e3f3b
--- /dev/null
+++ b/docs/connect/clients/dotnet.md
@@ -0,0 +1,273 @@
+---
+title: "C#/.NET guide"
+linkTitle: "C#/.NET"
+description: Connect your .NET application to a Redis database
+weight: 1
+aliases:
+  - /docs/clients/dotnet/
+  - /docs/redis-clients/dotnet/
+---
+
+Install Redis and the Redis client, then connect your .NET application to a Redis database.
+
+## NRedisStack
+
+[NRedisStack](https://github.com/redis/NRedisStack) is a .NET client for Redis.
+`NRedisStack` requires a running Redis or [Redis Stack](https://redis.io/docs/getting-started/install-stack/) server. See [Getting started](/docs/getting-started/) for Redis installation instructions.
+
+### Install
+
+Using the `dotnet` CLI, run:
+
+```
+dotnet add package NRedisStack
+```
+
+### Connect
+
+Connect to localhost on port 6379.
+
+```csharp
+using NRedisStack;
+using NRedisStack.RedisStackCommands;
+using StackExchange.Redis;
+//...
+ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("localhost");
+IDatabase db = redis.GetDatabase();
+```
+
+Store and retrieve a simple string.
+
+```csharp
+db.StringSet("foo", "bar");
+Console.WriteLine(db.StringGet("foo")); // prints bar
+```
+
+Store and retrieve a HashMap.
+
+```csharp
+var hash = new HashEntry[] {
+    new HashEntry("name", "John"),
+    new HashEntry("surname", "Smith"),
+    new HashEntry("company", "Redis"),
+    new HashEntry("age", "29"),
+};
+db.HashSet("user-session:123", hash);
+
+var hashFields = db.HashGetAll("user-session:123");
+Console.WriteLine(String.Join("; ", hashFields));
+// Prints:
+// name: John; surname: Smith; company: Redis; age: 29
+```
+
+To access Redis Stack capabilities, use the appropriate interface, like this:
+
+```csharp
+IBloomCommands bf = db.BF();
+ICuckooCommands cf = db.CF();
+ICmsCommands cms = db.CMS();
+IGraphCommands graph = db.GRAPH();
+ITopKCommands topk = db.TOPK();
+ITdigestCommands tdigest = db.TDIGEST();
+ISearchCommands ft = db.FT();
+IJsonCommands json = db.JSON();
+ITimeSeriesCommands ts = db.TS();
+```
+
+#### Connect to a Redis cluster
+
+To connect to a Redis cluster, you just need to specify one or all cluster endpoints in the client configuration:
+
+```csharp
+ConfigurationOptions options = new ConfigurationOptions
+{
+    // list of available nodes of the cluster along with the endpoint port.
+    EndPoints = {
+        { "localhost", 16379 },
+        { "localhost", 16380 },
+        // ...
+    },
+};
+
+ConnectionMultiplexer cluster = ConnectionMultiplexer.Connect(options);
+IDatabase db = cluster.GetDatabase();
+
+db.StringSet("foo", "bar");
+Console.WriteLine(db.StringGet("foo")); // prints bar
+```
+
+#### Connect to your production Redis with TLS
+
+When you deploy your application, use TLS and follow the [Redis security](/docs/management/security/) guidelines.
+
+Before connecting your application to the TLS-enabled Redis server, ensure that your certificates and private keys are in the correct format.
+
+To convert a user certificate and private key from the PEM format to `pfx`, use this command:
+
+```bash
+openssl pkcs12 -inkey redis_user_private.key -in redis_user.crt -export -out redis.pfx
+```
+
+Enter a password to protect your `pfx` file.
+
+Establish a secure connection with your Redis database using this snippet.
+
+```csharp
+ConfigurationOptions options = new ConfigurationOptions
+{
+    EndPoints = { { "my-redis.cloud.redislabs.com", 6379 } },
+    User = "default",    // use your Redis user. More info https://redis.io/docs/management/security/acl/
+    Password = "secret", // use your Redis password
+    Ssl = true,
+    SslProtocols = System.Security.Authentication.SslProtocols.Tls12
+};
+
+options.CertificateSelection += delegate
+{
+    return new X509Certificate2("redis.pfx", "secret"); // use the password you specified for the pfx file
+};
+options.CertificateValidation += ValidateServerCertificate;
+
+bool ValidateServerCertificate(
+    object sender,
+    X509Certificate? certificate,
+    X509Chain? chain,
+    SslPolicyErrors sslPolicyErrors)
+{
+    if (certificate == null) {
+        return false;
+    }
+
+    var ca = new X509Certificate2("redis_ca.pem");
+    bool verdict = (certificate.Issuer == ca.Subject);
+    if (verdict) {
+        return true;
+    }
+    Console.WriteLine("Certificate error: {0}", sslPolicyErrors);
+    return false;
+}
+
+ConnectionMultiplexer muxer = ConnectionMultiplexer.Connect(options);
+
+// Create the connection to the DB
+IDatabase conn = muxer.GetDatabase();
+
+// send the SET command
+conn.StringSet("foo", "bar");
+
+// send the GET command and print the value
+Console.WriteLine(conn.StringGet("foo"));
+```
+
+### Example: Indexing and querying JSON documents
+
+This example shows how to convert Redis search results to JSON format using `NRedisStack`.
+
+Make sure that you have Redis Stack and `NRedisStack` installed.
+
+Import dependencies and connect to the Redis server:
+
+```csharp
+using NRedisStack;
+using NRedisStack.RedisStackCommands;
+using NRedisStack.Search;
+using NRedisStack.Search.Aggregation;
+using NRedisStack.Search.Literals.Enums;
+using StackExchange.Redis;
+
+// ...
+
+ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("localhost");
+```
+
+Get references to the database and to the search and JSON command interfaces.
+
+```csharp
+var db = redis.GetDatabase();
+var ft = db.FT();
+var json = db.JSON();
+```
+
+Let's create some test data to add to your database.
+
+```csharp
+var user1 = new {
+    name = "Paul John",
+    email = "paul.john@example.com",
+    age = 42,
+    city = "London"
+};
+
+var user2 = new {
+    name = "Eden Zamir",
+    email = "eden.zamir@example.com",
+    age = 29,
+    city = "Tel Aviv"
+};
+
+var user3 = new {
+    name = "Paul Zamir",
+    email = "paul.zamir@example.com",
+    age = 35,
+    city = "Tel Aviv"
+};
+```
+
+Create an index. In this example, all JSON documents with the key prefix `user:` are indexed. For more information, see [Query syntax](/docs/interact/search-and-query/query/).
+
+```csharp
+var schema = new Schema()
+    .AddTextField(new FieldName("$.name", "name"))
+    .AddTagField(new FieldName("$.city", "city"))
+    .AddNumericField(new FieldName("$.age", "age"));
+
+ft.Create(
+    "idx:users",
+    new FTCreateParams().On(IndexDataType.JSON).Prefix("user:"),
+    schema);
+```
+
+Use `JSON.SET` to set each user value at the specified path.
+
+```csharp
+json.Set("user:1", "$", user1);
+json.Set("user:2", "$", user2);
+json.Set("user:3", "$", user3);
+```
+
+Let's find user `Paul` and filter the results by age.
+
+```csharp
+var res = ft.Search("idx:users", new Query("Paul @age:[30 40]")).Documents.Select(x => x["json"]);
+Console.WriteLine(string.Join("\n", res));
+// Prints: {"name":"Paul Zamir","email":"paul.zamir@example.com","age":35,"city":"Tel Aviv"}
+```
+
+Return only the `city` field.
+
+```csharp
+var res_cities = ft.Search("idx:users", new Query("Paul").ReturnFields(new FieldName("$.city", "city"))).Documents.Select(x => x["city"]);
+Console.WriteLine(string.Join(", ", res_cities));
+// Prints: London, Tel Aviv
+```
+
+Count all users in the same city.
+
+```csharp
+var request = new AggregationRequest("*").GroupBy("@city", Reducers.Count().As("count"));
+var result = ft.Aggregate("idx:users", request);
+
+for (var i=0; i:@localhost:6379/")
+if err != nil {
+    panic(err)
+}
+
+client := redis.NewClient(opt)
+```
+
+Store and retrieve a simple string.
+
+```go
+ctx := context.Background()
+
+err := client.Set(ctx, "foo", "bar", 0).Err()
+if err != nil {
+    panic(err)
+}
+
+val, err := client.Get(ctx, "foo").Result()
+if err != nil {
+    panic(err)
+}
+fmt.Println("foo", val)
+```
+
+Store and retrieve a map.
+
+```go
+session := map[string]string{"name": "John", "surname": "Smith", "company": "Redis", "age": "29"}
+for k, v := range session {
+    err := client.HSet(ctx, "user-session:123", k, v).Err()
+    if err != nil {
+        panic(err)
+    }
+}
+
+userSession := client.HGetAll(ctx, "user-session:123").Val()
+fmt.Println(userSession)
+```
+
+#### Connect to a Redis cluster
+
+To connect to a Redis cluster, use `NewClusterClient`.
+
+```go
+client := redis.NewClusterClient(&redis.ClusterOptions{
+    Addrs: []string{":16379", ":16380", ":16381", ":16382", ":16383", ":16384"},
+
+    // To route commands by latency or randomly, enable one of the following.
+    //RouteByLatency: true,
+    //RouteRandomly: true,
+})
+```
+
+#### Connect to your production Redis with TLS
+
+When you deploy your application, use TLS and follow the [Redis security](/docs/management/security/) guidelines.
+
+Establish a secure connection with your Redis database using this snippet.
+
+```go
+// Load client cert
+cert, err := tls.LoadX509KeyPair("redis_user.crt", "redis_user_private.key")
+if err != nil {
+    log.Fatal(err)
+}
+
+// Load CA cert
+caCert, err := os.ReadFile("redis_ca.pem")
+if err != nil {
+    log.Fatal(err)
+}
+caCertPool := x509.NewCertPool()
+caCertPool.AppendCertsFromPEM(caCert)
+
+client := redis.NewClient(&redis.Options{
+    Addr:     "my-redis.cloud.redislabs.com:6379",
+    Username: "default", // use your Redis user. More info https://redis.io/docs/management/security/acl/
+    Password: "secret",  // use your Redis password
+    TLSConfig: &tls.Config{
+        MinVersion:   tls.VersionTLS12,
+        Certificates: []tls.Certificate{cert},
+        RootCAs:      caCertPool,
+    },
+})
+
+// send SET command
+err = client.Set(ctx, "foo", "bar", 0).Err()
+if err != nil {
+    panic(err)
+}
+
+// send GET command and print the value
+val, err := client.Get(ctx, "foo").Result()
+if err != nil {
+    panic(err)
+}
+fmt.Println("foo", val)
+```
+
+
+#### dial tcp: i/o timeout
+
+You get a `dial tcp: i/o timeout` error when `go-redis` can't connect to the Redis server, for example, when the server is down or the port is protected by a firewall. To check if the Redis server is listening on the port, run the `telnet` command on the host where the `go-redis` client is running.
+
+```
+telnet localhost 6379
+Trying 127.0.0.1...
+telnet: Unable to connect to remote host: Connection refused
+```
+
+If you use Docker, Istio, or any other service mesh/sidecar, make sure the app starts after the container is fully available, for example, by configuring healthchecks with Docker and `holdApplicationUntilProxyStarts` with Istio.
+For more information, see [Healthcheck](https://docs.docker.com/engine/reference/run/#healthcheck).
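+Beyond `telnet`, you can also verify reachability from the application itself by sending a `PING` with a short timeout. This is a minimal sketch reusing the `client` from the examples above (the three-second budget is an arbitrary assumption):
+
+```go
+ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
+defer cancel()
+
+if err := client.Ping(ctx).Err(); err != nil {
+    // Fails with the same root cause as "dial tcp: i/o timeout" when the server is unreachable.
+    log.Fatal(err)
+}
+```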
+
+### Learn more
+
+* [Documentation](https://redis.uptrace.dev/guide/)
+* [GitHub](https://github.com/redis/go-redis)
+
diff --git a/docs/connect/clients/java/_index.md b/docs/connect/clients/java/_index.md
new file mode 100644
index 0000000000..21a90ffe4b
--- /dev/null
+++ b/docs/connect/clients/java/_index.md
@@ -0,0 +1,11 @@
+---
+title: "Connect with Redis Java clients"
+linkTitle: "Java"
+description: Connect your application to a Redis database using Java and try an example
+weight: 3
+---
+
+You have two choices of Java clients that you can use with Redis:
+
+- Jedis, for synchronous applications.
+- Lettuce, for asynchronous and reactive applications.
diff --git a/docs/connect/clients/java/jedis.md b/docs/connect/clients/java/jedis.md
new file mode 100644
index 0000000000..7384fa6c77
--- /dev/null
+++ b/docs/connect/clients/java/jedis.md
@@ -0,0 +1,310 @@
+---
+title: "Jedis guide"
+linkTitle: "Jedis"
+description: Connect your Java application to a Redis database
+weight: 1
+aliases:
+  - /docs/clients/java/
+  - /docs/redis-clients/java/
+---
+
+Install Redis and the Redis client, then connect your Java application to a Redis database.
+
+## Jedis
+
+[Jedis](https://github.com/redis/jedis) is a Java client for Redis designed for performance and ease of use.
+
+### Install
+
+To include `Jedis` as a dependency in your application, edit the dependency file, as follows.
+
+* If you use **Maven**:
+
+    ```xml
+    <dependency>
+        <groupId>redis.clients</groupId>
+        <artifactId>jedis</artifactId>
+        <version>5.1.2</version>
+    </dependency>
+    ```
+
+* If you use **Gradle**:
+
+    ```
+    repositories {
+        mavenCentral()
+    }
+    //...
+    dependencies {
+        implementation 'redis.clients:jedis:5.1.2'
+        //...
+    }
+    ```
+
+* If you use the JAR files, download the latest Jedis and Apache Commons Pool2 JAR files from [Maven Central](https://central.sonatype.com/) or any other Maven repository.
+
+* Build from [source](https://github.com/redis/jedis)
+
+### Connect
+
+For many applications, it's best to use a connection pool. You can instantiate and use a `Jedis` connection pool like so:
+
+```java
+package org.example;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import redis.clients.jedis.Jedis;
+import redis.clients.jedis.JedisPool;
+
+public class Main {
+    public static void main(String[] args) {
+        JedisPool pool = new JedisPool("localhost", 6379);
+
+        try (Jedis jedis = pool.getResource()) {
+            // Store & Retrieve a simple string
+            jedis.set("foo", "bar");
+            System.out.println(jedis.get("foo")); // prints bar
+
+            // Store & Retrieve a HashMap
+            Map<String, String> hash = new HashMap<>();
+            hash.put("name", "John");
+            hash.put("surname", "Smith");
+            hash.put("company", "Redis");
+            hash.put("age", "29");
+            jedis.hset("user-session:123", hash);
+            System.out.println(jedis.hgetAll("user-session:123"));
+            // Prints: {name=John, surname=Smith, company=Redis, age=29}
+        }
+    }
+}
+```
+
+Because adding a `try-with-resources` block for each command can be cumbersome, consider using `JedisPooled` as an easier way to pool connections.
+
+```java
+import redis.clients.jedis.JedisPooled;
+
+//...
+
+JedisPooled jedis = new JedisPooled("localhost", 6379);
+jedis.set("foo", "bar");
+System.out.println(jedis.get("foo")); // prints "bar"
+```
+
+#### Connect to a Redis cluster
+
+To connect to a Redis cluster, use `JedisCluster`.
+
+```java
+import java.util.HashSet;
+import java.util.Set;
+
+import redis.clients.jedis.JedisCluster;
+import redis.clients.jedis.HostAndPort;
+
+//...
+
+Set<HostAndPort> jedisClusterNodes = new HashSet<>();
+jedisClusterNodes.add(new HostAndPort("127.0.0.1", 7379));
+jedisClusterNodes.add(new HostAndPort("127.0.0.1", 7380));
+JedisCluster jedis = new JedisCluster(jedisClusterNodes);
+```
+
+#### Connect to your production Redis with TLS
+
+When you deploy your application, use TLS and follow the [Redis security](/docs/management/security/) guidelines.
+
+Before connecting your application to the TLS-enabled Redis server, ensure that your certificates and private keys are in the correct format.
+
+To convert a user certificate and private key from the PEM format to `pkcs12`, use this command:
+
+```
+openssl pkcs12 -export -in ./redis_user.crt -inkey ./redis_user_private.key -out redis-user-keystore.p12 -name "redis"
+```
+
+Enter a password to protect your `pkcs12` file.
+
+Convert the server (CA) certificate to the JKS format using the [keytool](https://docs.oracle.com/en/java/javase/12/tools/keytool.html) shipped with the JDK.
+
+```
+keytool -importcert -keystore truststore.jks \
+  -storepass REPLACE_WITH_YOUR_PASSWORD \
+  -file redis_ca.pem
+```
+
+Establish a secure connection with your Redis database using this snippet.
+
+```java
+package org.example;
+
+import redis.clients.jedis.*;
+
+import javax.net.ssl.*;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.security.GeneralSecurityException;
+import java.security.KeyStore;
+
+public class Main {
+
+    public static void main(String[] args) throws GeneralSecurityException, IOException {
+        HostAndPort address = new HostAndPort("my-redis-instance.cloud.redislabs.com", 6379);
+
+        SSLSocketFactory sslFactory = createSslSocketFactory(
+                "./truststore.jks",
+                "secret!", // use the password you specified for the keytool command
+                "./redis-user-keystore.p12",
+                "secret!"  // use the password you specified for the openssl command
+        );
+
+        JedisClientConfig config = DefaultJedisClientConfig.builder()
+                .ssl(true).sslSocketFactory(sslFactory)
+                .user("default")    // use your Redis user. More info https://redis.io/docs/management/security/acl/
+                .password("secret!") // use your Redis password
+                .build();
+
+        JedisPooled jedis = new JedisPooled(address, config);
+        jedis.set("foo", "bar");
+        System.out.println(jedis.get("foo")); // prints bar
+    }
+
+    private static SSLSocketFactory createSslSocketFactory(
+            String caCertPath, String caCertPassword, String userCertPath, String userCertPassword)
+            throws IOException, GeneralSecurityException {
+
+        KeyStore keyStore = KeyStore.getInstance("pkcs12");
+        keyStore.load(new FileInputStream(userCertPath), userCertPassword.toCharArray());
+
+        KeyStore trustStore = KeyStore.getInstance("jks");
+        trustStore.load(new FileInputStream(caCertPath), caCertPassword.toCharArray());
+
+        TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance("X509");
+        trustManagerFactory.init(trustStore);
+
+        KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance("PKIX");
+        keyManagerFactory.init(keyStore, userCertPassword.toCharArray());
+
+        SSLContext sslContext = SSLContext.getInstance("TLS");
+        sslContext.init(keyManagerFactory.getKeyManagers(), trustManagerFactory.getTrustManagers(), null);
+
+        return sslContext.getSocketFactory();
+    }
+}
+```
+
+### Production usage
+
+### Configuring Connection pool
+
+As mentioned in the previous section, use `JedisPool` or `JedisPooled` to create a connection pool.
+`JedisPooled`, added in Jedis version 4.0.0, provides capabilities similar to `JedisPool` but with a more straightforward API.
+
+A connection pool holds a specified number of connections, creates more connections when necessary, and terminates them when they are no longer needed.
+
+Here is a simplified connection lifecycle in a pool:
+
+1. A connection is requested from the pool.
+2. A connection is served:
+   - An idle connection is served when non-active connections are available, or
+   - A new connection is created when the number of connections is under `maxTotal`.
+3. The connection becomes active.
+4. The connection is released back to the pool.
+5. The connection is marked as stale.
+6. The connection is kept idle for `minEvictableIdleTime`.
+7. The connection becomes evictable if the number of connections is greater than `minIdle`.
+8. The connection is ready to be closed.
+
+It's important to configure the connection pool correctly.
+Use `ConnectionPoolConfig`, which extends `GenericObjectPoolConfig` from [Apache Commons Pool2](https://commons.apache.org/proper/commons-pool/apidocs/org/apache/commons/pool2/impl/GenericObjectPoolConfig.html).
+
+```java
+ConnectionPoolConfig poolConfig = new ConnectionPoolConfig();
+// maximum active connections in the pool,
+// tune this according to your needs and application type
+// default is 8
+poolConfig.setMaxTotal(8);
+
+// maximum idle connections in the pool, default is 8
+poolConfig.setMaxIdle(8);
+// minimum idle connections in the pool, default 0
+poolConfig.setMinIdle(0);
+
+// Enables waiting for a connection to become available.
+poolConfig.setBlockWhenExhausted(true);
+// The maximum number of seconds to wait for a connection to become available
+poolConfig.setMaxWait(Duration.ofSeconds(1));
+
+// Enables sending a PING command periodically while the connection is idle.
+poolConfig.setTestWhileIdle(true);
+// controls the period between checks for idle connections in the pool
+poolConfig.setTimeBetweenEvictionRuns(Duration.ofSeconds(1));
+
+// JedisPooled does all the hard work of fetching connections from and releasing them to the pool,
+// preventing connection starvation
+JedisPooled jedis = new JedisPooled(poolConfig, "localhost", 6379);
+```
+
+### Timeout
+
+To set a timeout for a connection, use the `JedisPooled` or `JedisPool` constructor with the `timeout` parameter, or use `JedisClientConfig` with the `socketTimeout` and `connectionTimeout` parameters:
+
+```java
+HostAndPort hostAndPort = new HostAndPort("localhost", 6379);
+
+JedisPooled jedisWithTimeout = new JedisPooled(hostAndPort,
+    DefaultJedisClientConfig.builder()
+        .socketTimeoutMillis(5000)     // set timeout to 5 seconds
+        .connectionTimeoutMillis(5000) // set connection timeout to 5 seconds
+        .build(),
+    poolConfig
+);
+```
+
+### Exception handling
+
+The Jedis exception hierarchy is rooted at `JedisException`, which extends `RuntimeException`; all Jedis exceptions are therefore unchecked.
+
+```
+JedisException
+├── JedisDataException
+│   ├── JedisRedirectionException
+│   │   ├── JedisMovedDataException
+│   │   └── JedisAskDataException
+│   ├── AbortedTransactionException
+│   ├── JedisAccessControlException
+│   └── JedisNoScriptException
+├── JedisClusterException
+│   ├── JedisClusterOperationException
+│   ├── JedisConnectionException
+│   └── JedisValidationException
+└── InvalidURIException
+```
+
+#### General Exceptions
+
+In general, Jedis can throw the following exceptions while executing commands:
+
+- `JedisConnectionException` - when the connection to Redis is lost or closed unexpectedly. Configure failover to handle this exception automatically with Resilience4J and the built-in Jedis failover mechanism.
+
+- `JedisAccessControlException` - when the user does not have the permission to execute the command, or the user ID and/or password are incorrect.
+- `JedisDataException` - when there is a problem with the data being sent to or received from the Redis server. Usually, the error message will contain more information about the failed command.
+- `JedisException` - this exception is a catch-all exception that can be thrown for any other unexpected errors.
+
+Conditions when `JedisException` can be thrown:
+- Bad return from a health check with the `PING` command
+- Failure during SHUTDOWN
+- Pub/Sub failure when issuing commands (disconnect)
+- Any unknown server messages
+- Sentinel: can connect to Sentinel, but the master is not monitored, or all Sentinels are down
+- MULTI or DISCARD command failed
+- Shard commands key hash check failed, or no reachable shards
+- Retry deadline exceeded/number of attempts (Retry Command Executor)
+- Pool: pool exhausted, error adding idle objects, returning broken resources to the pool
+
+All Jedis exceptions are runtime exceptions, and in most cases irrecoverable, so in general they should be allowed to bubble up to the API caller along with the error message.
+
+## DNS cache and Redis
+
+When you connect to a Redis database with multiple endpoints, such as [Redis Enterprise Active-Active](https://redis.com/redis-enterprise/technology/active-active-geo-distribution/), it's recommended to disable the JVM's DNS cache to load-balance requests across multiple endpoints.
+
+You can do this in your application's code with the following snippet:
+```java
+java.security.Security.setProperty("networkaddress.cache.ttl","0");
+java.security.Security.setProperty("networkaddress.cache.negative.ttl", "0");
+```
+
+### Learn more
+
+* [Jedis API reference](https://www.javadoc.io/doc/redis.clients/jedis/latest/index.html)
+* [Failover with Jedis](https://github.com/redis/jedis/blob/master/docs/failover.md)
+* [GitHub](https://github.com/redis/jedis)
diff --git a/docs/connect/clients/java/lettuce.md b/docs/connect/clients/java/lettuce.md
new file mode 100644
index 0000000000..47b182c5c6
--- /dev/null
+++ b/docs/connect/clients/java/lettuce.md
@@ -0,0 +1,246 @@
+---
+title: "Lettuce guide"
+linkTitle: "Lettuce"
+description: Connect your Lettuce application to a Redis database
+weight: 2
+---
+
+Install Redis and the Redis client, then connect your Lettuce application to a Redis database.
+
+## Lettuce
+
+Lettuce offers a powerful and efficient way to interact with Redis through its asynchronous and reactive APIs. By leveraging these capabilities, you can build high-performance, scalable Java applications that make optimal use of Redis.
+
+## Install
+
+To include Lettuce as a dependency in your application, edit the appropriate dependency file as shown below.
+
+If you use Maven, add the following dependency to your `pom.xml`:
+
+```xml
+<dependency>
+    <groupId>io.lettuce</groupId>
+    <artifactId>lettuce-core</artifactId>
+    <version>6.3.2.RELEASE</version>
+</dependency>
+```
+
+If you use Gradle, include this line in your `build.gradle` file:
+
+```
+dependencies {
+    compile 'io.lettuce:lettuce-core:6.3.2.RELEASE'
+}
+```
+
+If you wish to use the JAR files directly, download the latest Lettuce and, optionally, Apache Commons Pool2 JAR files from Maven Central or any other Maven repository.
+
+To build from source, see the instructions on the [Lettuce source code GitHub repo](https://github.com/lettuce-io/lettuce-core).
+
+## Connect
+
+Start by creating a connection to your Redis server. There are many ways to achieve this using Lettuce.
Here are a few.
+
+### Asynchronous connection
+
+```java
+package org.example;
+
+import java.util.*;
+import java.util.concurrent.ExecutionException;
+
+import io.lettuce.core.*;
+import io.lettuce.core.api.async.RedisAsyncCommands;
+import io.lettuce.core.api.StatefulRedisConnection;
+
+public class Async {
+    public static void main(String[] args) {
+        RedisClient redisClient = RedisClient.create("redis://localhost:6379");
+
+        try (StatefulRedisConnection<String, String> connection = redisClient.connect()) {
+            RedisAsyncCommands<String, String> asyncCommands = connection.async();
+
+            // Asynchronously store & retrieve a simple string
+            asyncCommands.set("foo", "bar").get();
+            System.out.println(asyncCommands.get("foo").get()); // prints bar
+
+            // Asynchronously store key-value pairs in a hash directly
+            Map<String, String> hash = new HashMap<>();
+            hash.put("name", "John");
+            hash.put("surname", "Smith");
+            hash.put("company", "Redis");
+            hash.put("age", "29");
+            asyncCommands.hset("user-session:123", hash).get();
+
+            System.out.println(asyncCommands.hgetall("user-session:123").get());
+            // Prints: {name=John, surname=Smith, company=Redis, age=29}
+        } catch (ExecutionException | InterruptedException e) {
+            throw new RuntimeException(e);
+        } finally {
+            redisClient.shutdown();
+        }
+    }
+}
+```
+
+Learn more about the asynchronous Lettuce API in [the reference guide](https://lettuce.io/core/release/reference/index.html#asynchronous-api).
+
+### Reactive connection
+
+```java
+package org.example;
+
+import java.util.*;
+
+import io.lettuce.core.*;
+import io.lettuce.core.api.reactive.RedisReactiveCommands;
+import io.lettuce.core.api.StatefulRedisConnection;
+
+public class Main {
+    public static void main(String[] args) {
+        RedisClient redisClient = RedisClient.create("redis://localhost:6379");
+
+        try (StatefulRedisConnection<String, String> connection = redisClient.connect()) {
+            RedisReactiveCommands<String, String> reactiveCommands = connection.reactive();
+
+            // Reactively store & retrieve a simple string
+            reactiveCommands.set("foo", "bar").block();
+            reactiveCommands.get("foo").doOnNext(System.out::println).block(); // prints bar
+
+            // Reactively store key-value pairs in a hash directly
+            Map<String, String> hash = new HashMap<>();
+            hash.put("name", "John");
+            hash.put("surname", "Smith");
+            hash.put("company", "Redis");
+            hash.put("age", "29");
+
+            reactiveCommands.hset("user-session:124", hash).then(
+                reactiveCommands.hgetall("user-session:124")
+                    .collectMap(KeyValue::getKey, KeyValue::getValue).doOnNext(System.out::println))
+                .block();
+            // Prints: {surname=Smith, name=John, company=Redis, age=29}
+
+        } finally {
+            redisClient.shutdown();
+        }
+    }
+}
+```
+
+Learn more about the reactive Lettuce API in [the reference guide](https://lettuce.io/core/release/reference/index.html#reactive-api).
+
+### Redis Cluster connection
+
+```java
+import io.lettuce.core.RedisURI;
+import io.lettuce.core.cluster.RedisClusterClient;
+import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
+import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands;
+
+// ...
+
+RedisURI redisUri = RedisURI.Builder.redis("localhost").withPassword("authentication").build();
+
+RedisClusterClient clusterClient = RedisClusterClient.create(redisUri);
+StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();
+RedisAdvancedClusterAsyncCommands<String, String> commands = connection.async();
+
+// ...
+
+connection.close();
+clusterClient.shutdown();
+```
+
+### TLS connection
+
+When you deploy your application, use TLS and follow the [Redis security guidelines](/docs/management/security/).
+
+```java
+RedisURI redisUri = RedisURI.Builder.redis("localhost")
+    .withSsl(true)
+    .withPassword("secret!") // use your Redis password
+    .build();
+
+RedisClient client = RedisClient.create(redisUri);
+```
+
+
+
+## Connection Management in Lettuce
+
+Lettuce uses `ClientResources` for efficient management of shared resources like event loop groups and thread pools.
+For connection pooling, Lettuce leverages `RedisClient` or `RedisClusterClient`, which can handle multiple concurrent connections efficiently.
+
+A typical approach with Lettuce is to create a single `RedisClient` instance and reuse it to establish connections to your Redis server(s).
+These connections are multiplexed; that is, multiple commands can be run concurrently over a single or a small set of connections, making explicit pooling less critical.
+
+Lettuce provides a pool configuration to be used with Lettuce's asynchronous connection methods.
+
+
+```java
+package org.example;
+
+import io.lettuce.core.RedisClient;
+import io.lettuce.core.RedisURI;
+import io.lettuce.core.TransactionResult;
+import io.lettuce.core.api.StatefulRedisConnection;
+import io.lettuce.core.api.async.RedisAsyncCommands;
+import io.lettuce.core.codec.StringCodec;
+import io.lettuce.core.support.*;
+
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.CompletionStage;
+
+public class Pool {
+    public static void main(String[] args) {
+        RedisClient client = RedisClient.create();
+
+        String host = "localhost";
+        int port = 6379;
+
+        CompletionStage<AsyncPool<StatefulRedisConnection<String, String>>> poolFuture
+            = AsyncConnectionPoolSupport.createBoundedObjectPoolAsync(
+                () -> client.connectAsync(StringCodec.UTF8, RedisURI.create(host, port)),
+                BoundedPoolConfig.create());
+
+        // await poolFuture initialization to avoid NoSuchElementException: Pool exhausted when starting your application
+        AsyncPool<StatefulRedisConnection<String, String>> pool = poolFuture.toCompletableFuture()
+            .join();
+
+        // execute work
+        CompletableFuture<TransactionResult> transactionResult = pool.acquire()
+            .thenCompose(connection -> {
+
+                RedisAsyncCommands<String, String> async = connection.async();
+
+                async.multi();
+                async.set("key", "value");
+                async.set("key2", "value2");
+                System.out.println("Executed commands in pipeline");
+                return async.exec().whenComplete((s, throwable) -> pool.release(connection));
+            });
+        transactionResult.join();
+
+        // terminating
+        pool.closeAsync();
+
+        // after pool completion
+        client.shutdownAsync();
+    }
+}
+```
+
+In this setup, the pool is created with `AsyncConnectionPoolSupport.createBoundedObjectPoolAsync`, which takes a supplier of connection futures and a `BoundedPoolConfig`; connections are then explicitly acquired from, and released back to, the resulting `AsyncPool` of `StatefulRedisConnection` objects.
+
+## DNS cache and Redis
+
+When you connect to a Redis database with multiple endpoints, such as Redis Enterprise Active-Active, it's recommended to disable the JVM's DNS cache to load-balance requests across multiple endpoints.
+ +You can do this in your application's code with the following snippet: + +```java +java.security.Security.setProperty("networkaddress.cache.ttl","0"); +java.security.Security.setProperty("networkaddress.cache.negative.ttl", "0"); +``` + +## Learn more + +- [Lettuce reference documentation](https://lettuce.io/docs/) +- [Redis commands](https://redis.io/commands) +- [Project Reactor](https://projectreactor.io/) \ No newline at end of file diff --git a/docs/connect/clients/nodejs.md b/docs/connect/clients/nodejs.md new file mode 100644 index 0000000000..4c7aa13c0f --- /dev/null +++ b/docs/connect/clients/nodejs.md @@ -0,0 +1,214 @@ +--- +title: "Node.js guide" +linkTitle: "Node.js" +description: Connect your Node.js application to a Redis database +weight: 4 +aliases: + - /docs/clients/nodejs/ + - /docs/redis-clients/nodejs/ +--- + +Install Redis and the Redis client, then connect your Node.js application to a Redis database. + +## node-redis + +[node-redis](https://github.com/redis/node-redis) is a modern, high-performance Redis client for Node.js. +`node-redis` requires a running Redis or [Redis Stack](https://redis.io/docs/getting-started/install-stack/) server. See [Getting started](/docs/getting-started/) for Redis installation instructions. + +### Install + +To install node-redis, run: + +``` +npm install redis +``` + +### Connect + +Connect to localhost on port 6379. + +```js +import { createClient } from 'redis'; + +const client = createClient(); + +client.on('error', err => console.log('Redis Client Error', err)); + +await client.connect(); +``` + +Store and retrieve a simple string. + +```js +await client.set('key', 'value'); +const value = await client.get('key'); +``` + +Store and retrieve a map. + +```js +await client.hSet('user-session:123', { + name: 'John', + surname: 'Smith', + company: 'Redis', + age: 29 +}) + +let userSession = await client.hGetAll('user-session:123'); +console.log(JSON.stringify(userSession, null, 2)); +/* +{ + "surname": "Smith", + "name": "John", + "company": "Redis", + "age": "29" +} + */ +``` + +To connect to a different host or port, use a connection string in the format `redis[s]://[[username][:password]@][host][:port][/db-number]`: + +```js +createClient({ + url: 'redis://alice:foobared@awesome.redis.server:6380' +}); +``` +To check if the client is connected and ready to send commands, use `client.isReady`, which returns a Boolean. `client.isOpen` is also available. This returns `true` when the client's underlying socket is open, and `false` when it isn't (for example, when the client is still connecting or reconnecting after a network error). + +#### Connect to a Redis cluster + +To connect to a Redis cluster, use `createCluster`. + +```js +import { createCluster } from 'redis'; + +const cluster = createCluster({ + rootNodes: [ + { + url: 'redis://127.0.0.1:16379' + }, + { + url: 'redis://127.0.0.1:16380' + }, + // ... + ] +}); + +cluster.on('error', (err) => console.log('Redis Cluster Error', err)); + +await cluster.connect(); + +await cluster.set('foo', 'bar'); +const value = await cluster.get('foo'); +console.log(value); // returns 'bar' + +await cluster.quit(); +``` + +#### Connect to your production Redis with TLS + +When you deploy your application, use TLS and follow the [Redis security](/docs/management/security/) guidelines. + +```js +const client = createClient({ + username: 'default', // use your Redis user. 
More info https://redis.io/docs/management/security/acl/ + password: 'secret', // use your password here + socket: { + host: 'my-redis.cloud.redislabs.com', + port: 6379, + tls: true, + key: readFileSync('./redis_user_private.key'), + cert: readFileSync('./redis_user.crt'), + ca: [readFileSync('./redis_ca.pem')] + } +}); + +client.on('error', (err) => console.log('Redis Client Error', err)); + +await client.connect(); + +await client.set('foo', 'bar'); +const value = await client.get('foo'); +console.log(value) // returns 'bar' + +await client.disconnect(); +``` + +You can also use discrete parameters and UNIX sockets. Details can be found in the [client configuration guide](https://github.com/redis/node-redis/blob/master/docs/client-configuration.md). + +### Production usage + +#### Handling errors +Node-Redis provides [multiple events to handle various scenarios](https://github.com/redis/node-redis?tab=readme-ov-file#events), among which the most critical is the `error` event. + +This event is triggered whenever an error occurs within the client. + +It is crucial to listen for error events. + + +If a client does not register at least one error listener and an error occurs, the system will throw that error, potentially causing the Node.js process to exit unexpectedly. +See [the EventEmitter docs](https://nodejs.org/api/events.html#events_error_events) for more details. + +```typescript +const client = createClient({ + // ... client options +}); +// Always ensure there's a listener for errors in the client to prevent process crashes due to unhandled errors +client.on('error', error => { + console.error(`Redis client error:`, error); +}); +``` + + +#### Handling reconnections + +If network issues or other problems unexpectedly close the socket, the client will reject all commands already sent, since the server might have already executed them. +The rest of the pending commands will remain queued in memory until a new socket is established. +This behaviour is controlled by the `enableOfflineQueue` option, which is enabled by default. + +The client uses `reconnectStrategy` to decide when to attempt to reconnect. +The default strategy is to calculate the delay before each attempt based on the attempt number `Math.min(retries * 50, 500)`. You can customize this strategy by passing a supported value to `reconnectStrategy` option: + + +1. Define a callback `(retries: number, cause: Error) => false | number | Error` **(recommended)** +```typescript +const client = createClient({ + socket: { + reconnectStrategy: function(retries) { + if (retries > 20) { + console.log("Too many attempts to reconnect. Redis connection was terminated"); + return new Error("Too many retries."); + } else { + return retries * 500; + } + } + } +}); +client.on('error', error => console.error('Redis client error:', error)); +``` +In the provided reconnection strategy callback, the client attempts to reconnect up to 20 times with a delay of `retries * 500` milliseconds between attempts. +After approximately two minutes, the client logs an error message and terminates the connection if the maximum retry limit is exceeded. + + +2. Use a numerical value to set a fixed delay in milliseconds. +3. Use `false` to disable reconnection attempts. This option should only be used for testing purposes. 
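+For illustration, the second and third forms look like this (a minimal sketch):
+
+```typescript
+// Option 2: retry with a fixed delay of 5 seconds between attempts
+const clientWithFixedDelay = createClient({
+  socket: { reconnectStrategy: 5000 }
+});
+
+// Option 3: disable reconnection attempts entirely (testing only)
+const clientWithoutReconnect = createClient({
+  socket: { reconnectStrategy: false }
+});
+```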
+ +#### Timeout + +To set a timeout for a connection, use the `connectTimeout` option: +```typescript +const client = createClient({ + // setting a 10-second timeout + connectTimeout: 10000 // in milliseconds +}); +client.on('error', error => console.error('Redis client error:', error)); +``` + +### Learn more + +* [Node-Redis Configuration Options](https://github.com/redis/node-redis/blob/master/docs/client-configuration.md) +* [Redis commands](https://redis.js.org/#node-redis-usage-redis-commands) +* [Programmability](https://redis.js.org/#node-redis-usage-programmability) +* [Clustering](https://redis.js.org/#node-redis-usage-clustering) +* [GitHub](https://github.com/redis/node-redis) + diff --git a/docs/connect/clients/python.md b/docs/connect/clients/python.md new file mode 100644 index 0000000000..7c0e5e87aa --- /dev/null +++ b/docs/connect/clients/python.md @@ -0,0 +1,218 @@ +--- +title: "Python guide" +linkTitle: "Python" +description: Connect your Python application to a Redis database +weight: 5 +aliases: + - /docs/clients/python/ + - /docs/redis-clients/python/ +--- + +Install Redis and the Redis client, then connect your Python application to a Redis database. + +## redis-py + +Get started with the [redis-py](https://github.com/redis/redis-py) client for Redis. + +`redis-py` requires a running Redis or [Redis Stack](/docs/getting-started/install-stack/) server. See [Getting started](/docs/getting-started/) for Redis installation instructions. + +### Install + +To install `redis-py`, enter: + +```bash +pip install redis +``` + +For faster performance, install Redis with [`hiredis`](https://github.com/redis/hiredis) support. This provides a compiled response parser, and for most cases requires zero code changes. By default, if `hiredis` >= 1.0 is available, `redis-py` attempts to use it for response parsing. + +{{% alert title="Note" %}} +The Python `distutils` packaging scheme is no longer part of Python 3.12 and greater. If you're having difficulties getting `redis-py` installed in a Python 3.12 environment, consider updating to a recent release of `redis-py`. +{{% /alert %}} + +```bash +pip install redis[hiredis] +``` + +### Connect + +Connect to localhost on port 6379, set a value in Redis, and retrieve it. All responses are returned as bytes in Python. To receive decoded strings, set `decode_responses=True`. For more connection options, see [these examples](https://redis.readthedocs.io/en/stable/examples.html). + +```python +r = redis.Redis(host='localhost', port=6379, decode_responses=True) +``` + +Store and retrieve a simple string. + +```python +r.set('foo', 'bar') +# True +r.get('foo') +# bar +``` + +Store and retrieve a dict. + +```python +r.hset('user-session:123', mapping={ + 'name': 'John', + "surname": 'Smith', + "company": 'Redis', + "age": 29 +}) +# True + +r.hgetall('user-session:123') +# {'surname': 'Smith', 'name': 'John', 'company': 'Redis', 'age': '29'} +``` + +#### Connect to a Redis cluster + +To connect to a Redis cluster, use `RedisCluster`. + +```python +from redis.cluster import RedisCluster + +rc = RedisCluster(host='localhost', port=16379) + +print(rc.get_nodes()) +# [[host=127.0.0.1,port=16379,name=127.0.0.1:16379,server_type=primary,redis_connection=Redis>>], ... + +rc.set('foo', 'bar') +# True + +rc.get('foo') +# b'bar' +``` +For more information, see [redis-py Clustering](https://redis-py.readthedocs.io/en/stable/clustering.html). 
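+As with the standalone client shown earlier, you can pass `decode_responses=True` to the cluster client so replies come back as strings rather than bytes (a small sketch):
+
+```python
+rc = RedisCluster(host='localhost', port=16379, decode_responses=True)
+
+rc.get('foo')
+# 'bar'
+```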
+ +#### Connect to your production Redis with TLS + +When you deploy your application, use TLS and follow the [Redis security](/docs/management/security/) guidelines. + +```python +import redis + +r = redis.Redis( + host="my-redis.cloud.redislabs.com", port=6379, + username="default", # use your Redis user. More info https://redis.io/docs/management/security/acl/ + password="secret", # use your Redis password + ssl=True, + ssl_certfile="./redis_user.crt", + ssl_keyfile="./redis_user_private.key", + ssl_ca_certs="./redis_ca.pem", +) +r.set('foo', 'bar') +# True + +r.get('foo') +# b'bar' +``` +For more information, see [redis-py TLS examples](https://redis-py.readthedocs.io/en/stable/examples/ssl_connection_examples.html). + +### Example: Indexing and querying JSON documents + +Make sure that you have Redis Stack and `redis-py` installed. Import dependencies: + +```python +import redis +from redis.commands.json.path import Path +import redis.commands.search.aggregation as aggregations +import redis.commands.search.reducers as reducers +from redis.commands.search.field import TextField, NumericField, TagField +from redis.commands.search.indexDefinition import IndexDefinition, IndexType +from redis.commands.search.query import NumericFilter, Query +``` + +Connect to your Redis database. + +```python +r = redis.Redis(host='localhost', port=6379) +``` + +Let's create some test data to add to your database. + +```python +user1 = { + "name": "Paul John", + "email": "paul.john@example.com", + "age": 42, + "city": "London" +} +user2 = { + "name": "Eden Zamir", + "email": "eden.zamir@example.com", + "age": 29, + "city": "Tel Aviv" +} +user3 = { + "name": "Paul Zamir", + "email": "paul.zamir@example.com", + "age": 35, + "city": "Tel Aviv" +} +``` + +Define indexed fields and their data types using `schema`. Use JSON path expressions to map specific JSON elements to the schema fields. + +```python +schema = ( + TextField("$.name", as_name="name"), + TagField("$.city", as_name="city"), + NumericField("$.age", as_name="age") +) +``` + +Create an index. In this example, all JSON documents with the key prefix `user:` will be indexed. For more information, see [Query syntax](/docs/interact/search-and-query/query/). + +```python +rs = r.ft("idx:users") +rs.create_index( + schema, + definition=IndexDefinition( + prefix=["user:"], index_type=IndexType.JSON + ) +) +# b'OK' +``` + +Use `JSON.SET` to set each user value at the specified path. + +```python +r.json().set("user:1", Path.root_path(), user1) +r.json().set("user:2", Path.root_path(), user2) +r.json().set("user:3", Path.root_path(), user3) +``` + +Let's find user `Paul` and filter the results by age. + +```python +res = rs.search( + Query("Paul @age:[30 40]") +) +# Result{1 total, docs: [Document {'id': 'user:3', 'payload': None, 'json': '{"name":"Paul Zamir","email":"paul.zamir@example.com","age":35,"city":"Tel Aviv"}'}]} +``` + +Query using JSON Path expressions. + +```python +rs.search( + Query("Paul").return_field("$.city", as_field="city") +).docs +# [Document {'id': 'user:1', 'payload': None, 'city': 'London'}, Document {'id': 'user:3', 'payload': None, 'city': 'Tel Aviv'}] +``` + +Aggregate your results using `FT.AGGREGATE`. 
+ +```python +req = aggregations.AggregateRequest("*").group_by('@city', reducers.count().alias('count')) +print(rs.aggregate(req).rows) +# [[b'city', b'Tel Aviv', b'count', b'2'], [b'city', b'London', b'count', b'1']] +``` + +### Learn more + +* [Command reference](https://redis-py.readthedocs.io/en/stable/commands.html) +* [Tutorials](https://redis.readthedocs.io/en/stable/examples.html) +* [GitHub](https://github.com/redis/redis-py) + diff --git a/docs/data-types/_index.md b/docs/data-types/_index.md new file mode 100644 index 0000000000..ca95521d16 --- /dev/null +++ b/docs/data-types/_index.md @@ -0,0 +1,112 @@ +--- +title: "Understand Redis data types" +linkTitle: "Understand data types" +description: Overview of data types supported by Redis +weight: 35 +aliases: + - /docs/manual/data-types + - /topics/data-types + - /docs/data-types/tutorial +--- + +Redis is a data structure server. +At its core, Redis provides a collection of native data types that help you solve a wide variety of problems, from [caching](/docs/manual/client-side-caching/) to [queuing](/docs/data-types/lists/) to [event processing](/docs/data-types/streams/). +Below is a short description of each data type, with links to broader overviews and command references. + +If you'd like to try a comprehensive tutorial for each data structure, see their overview pages below. + + +## Core + +### Strings + +[Redis strings](/docs/data-types/strings) are the most basic Redis data type, representing a sequence of bytes. +For more information, see: + +* [Overview of Redis strings](/docs/data-types/strings/) +* [Redis string command reference](/commands/?group=string) + +### Lists + +[Redis lists](/docs/data-types/lists) are lists of strings sorted by insertion order. +For more information, see: + +* [Overview of Redis lists](/docs/data-types/lists/) +* [Redis list command reference](/commands/?group=list) + +### Sets + +[Redis sets](/docs/data-types/sets) are unordered collections of unique strings that act like the sets from your favorite programming language (for example, [Java HashSets](https://docs.oracle.com/javase/7/docs/api/java/util/HashSet.html), [Python sets](https://docs.python.org/3.10/library/stdtypes.html#set-types-set-frozenset), and so on). +With a Redis set, you can add, remove, and test for existence in O(1) time (in other words, regardless of the number of set elements). +For more information, see: + +* [Overview of Redis sets](/docs/data-types/sets/) +* [Redis set command reference](/commands/?group=set) + +### Hashes + +[Redis hashes](/docs/data-types/hashes) are record types modeled as collections of field-value pairs. +As such, Redis hashes resemble [Python dictionaries](https://docs.python.org/3/tutorial/datastructures.html#dictionaries), [Java HashMaps](https://docs.oracle.com/javase/8/docs/api/java/util/HashMap.html), and [Ruby hashes](https://ruby-doc.org/core-3.1.2/Hash.html). +For more information, see: + +* [Overview of Redis hashes](/docs/data-types/hashes/) +* [Redis hashes command reference](/commands/?group=hash) + +### Sorted sets + +[Redis sorted sets](/docs/data-types/sorted-sets) are collections of unique strings that maintain order by each string's associated score. +For more information, see: + +* [Overview of Redis sorted sets](/docs/data-types/sorted-sets) +* [Redis sorted set command reference](/commands/?group=sorted-set) + +### Streams + +A [Redis stream](/docs/data-types/streams) is a data structure that acts like an append-only log. 
+Streams help record events in the order they occur and then syndicate them for processing. +For more information, see: + +* [Overview of Redis Streams](/docs/data-types/streams) +* [Redis Streams command reference](/commands/?group=stream) + +### Geospatial indexes + +[Redis geospatial indexes](/docs/data-types/geospatial) are useful for finding locations within a given geographic radius or bounding box. +For more information, see: + +* [Overview of Redis geospatial indexes](/docs/data-types/geospatial/) +* [Redis geospatial indexes command reference](/commands/?group=geo) + +### Bitmaps + +[Redis bitmaps](/docs/data-types/bitmaps/) let you perform bitwise operations on strings. +For more information, see: + +* [Overview of Redis bitmaps](/docs/data-types/bitmaps/) +* [Redis bitmap command reference](/commands/?group=bitmap) + +### Bitfields + +[Redis bitfields](/docs/data-types/bitfields/) efficiently encode multiple counters in a string value. +Bitfields provide atomic get, set, and increment operations and support different overflow policies. +For more information, see: + +* [Overview of Redis bitfields](/docs/data-types/bitfields/) +* The `BITFIELD` command. + +### HyperLogLog + +The [Redis HyperLogLog](/docs/data-types/hyperloglogs) data structures provide probabilistic estimates of the cardinality (i.e., number of elements) of large sets. For more information, see: + +* [Overview of Redis HyperLogLog](/docs/data-types/hyperloglogs) +* [Redis HyperLogLog command reference](/commands/?group=hyperloglog) + +## Extensions + +To extend the features provided by the included data types, use one of these options: + +1. Write your own custom [server-side functions in Lua](/docs/manual/programmability/). +1. Write your own Redis module using the [modules API](/docs/reference/modules/) or check out the [community-supported modules](/docs/modules/). +1. Use [JSON](/docs/stack/json/), [querying](/docs/stack/search/), [time series](/docs/stack/timeseries/), and other capabilities provided by [Redis Stack](/docs/stack/). + +
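+For a quick taste of the first option, here is a minimal sketch of a server-side Lua function registered and called through `redis-py`. The script, key name, and cap value are hypothetical examples, not part of the Redis API: + +```python +import redis + +r = redis.Redis(decode_responses=True) + +# A tiny server-side function: increment a counter, capping it at a maximum. +CAPPED_INCR = """ +local current = redis.call('INCR', KEYS[1]) +if current > tonumber(ARGV[1]) then +    redis.call('SET', KEYS[1], ARGV[1]) +    current = tonumber(ARGV[1]) +end +return current +""" + +capped_incr = r.register_script(CAPPED_INCR) +print(capped_incr(keys=['page:views'], args=[100]))  # never exceeds 100 +``` +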
diff --git a/docs/data-types/bitfields.md b/docs/data-types/bitfields.md new file mode 100644 index 0000000000..0f693c2277 --- /dev/null +++ b/docs/data-types/bitfields.md @@ -0,0 +1,47 @@ +--- +title: "Redis bitfields" +linkTitle: "Bitfields" +weight: 130 +description: > + Introduction to Redis bitfields +--- + +Redis bitfields let you set, increment, and get integer values of arbitrary bit length. +For example, you can operate on anything from unsigned 1-bit integers to signed 63-bit integers. + +These values are stored using binary-encoded Redis strings. +Bitfields support atomic read, write, and increment operations, making them a good choice for managing counters and similar numerical values. + + +## Basic commands + +* `BITFIELD` atomically sets, increments, and reads one or more values. +* `BITFIELD_RO` is a read-only variant of `BITFIELD`. + + +## Example + +Suppose you want to maintain two metrics for various bicycles: the current price and the number of owners over time. You can represent these counters with a 32-bit wide bitfield per bike. + +* Bike 1 initially costs 1,000 (counter in offset 0) and has never had an owner. +* After being sold, it's now considered used and the price instantly drops to reflect its new condition, and it now has an owner (offset 1). +* After quite some time, the bike becomes a classic. The original owner sells it for a profit, so the price goes up and the number of owners does as well. +* Finally, you can look at the bike's current price and number of owners. + +{{< clients-example bitfield_tutorial bf >}} +> BITFIELD bike:1:stats SET u32 #0 1000 +1) (integer) 0 +> BITFIELD bike:1:stats INCRBY u32 #0 -50 INCRBY u32 #1 1 +1) (integer) 950 +2) (integer) 1 +> BITFIELD bike:1:stats INCRBY u32 #0 500 INCRBY u32 #1 1 +1) (integer) 1450 +2) (integer) 2 +> BITFIELD bike:1:stats GET u32 #0 GET u32 #1 +1) (integer) 1450 +2) (integer) 2 +{{< /clients-example >}} + + +## Performance + +`BITFIELD` is O(n), where _n_ is the number of counters accessed. diff --git a/docs/data-types/bitmaps.md b/docs/data-types/bitmaps.md new file mode 100644 index 0000000000..d71ea742f1 --- /dev/null +++ b/docs/data-types/bitmaps.md @@ -0,0 +1,113 @@ +--- +title: "Redis bitmaps" +linkTitle: "Bitmaps" +weight: 120 +description: > + Introduction to Redis bitmaps +--- + +Bitmaps are not an actual data type, but a set of bit-oriented operations +defined on the String type which is treated like a bit vector. +Since strings are binary-safe blobs with a maximum length of 512 MB, +they can be used to set up to 2^32 different bits. + +You can perform bitwise operations on one or more strings. +Some examples of bitmap use cases include: + +* Efficient set representations for cases where the members of a set correspond to the integers 0-N. +* Object permissions, where each bit represents a particular permission, similar to the way that file systems store permissions. + +## Basic commands + +* `SETBIT` sets a bit at the provided offset to 0 or 1. +* `GETBIT` returns the value of a bit at a given offset. + +See the [complete list of bitmap commands](https://redis.io/commands/?group=bitmap). + + +## Example + +Suppose you have 1000 cyclists racing through the country-side, with sensors on their bikes labeled 0-999. +You want to quickly determine whether a given sensor has pinged a tracking server within the hour to check in on a rider. + +You can represent this scenario using a bitmap whose key references the current hour.
+ +* Rider 123 pings the server on January 1, 2024 within the 00:00 hour. You can then confirm that rider 123 pinged the server. You can also check to see if rider 456 has pinged the server for that same hour. + +{{< clients-example bitmap_tutorial ping >}} +> SETBIT pings:2024-01-01-00:00 123 1 +(integer) 0 +> GETBIT pings:2024-01-01-00:00 123 +(integer) 1 +> GETBIT pings:2024-01-01-00:00 456 +(integer) 0 +{{< /clients-example >}} + + +## Bit operations + +Bit operations are divided into two groups: constant-time single bit +operations, like setting a bit to 1 or 0, or getting its value, and +operations on groups of bits, for example counting the number of set +bits in a given range of bits (e.g., population counting). + +One of the biggest advantages of bitmaps is that they often provide +extreme space savings when storing information. For example, in a system +where different users are represented by incremental user IDs, it is possible +to remember a single bit of information per user (for example, whether +a user wants to receive a newsletter) for 4 billion users using just 512 MB of memory. + +The `SETBIT` command takes as its first argument the bit number, and as its second +argument the value to set the bit to, which is 1 or 0. The command +automatically enlarges the string if the addressed bit is outside the +current string length. + +`GETBIT` just returns the value of the bit at the specified index. +Out of range bits (addressing a bit that is outside the length of the string +stored into the target key) are always considered to be zero. + +There are three commands operating on groups of bits: + +1. `BITOP` performs bit-wise operations between different strings. The provided operations are AND, OR, XOR and NOT. +2. `BITCOUNT` performs population counting, reporting the number of bits set to 1. +3. `BITPOS` finds the first bit having the specified value of 0 or 1. + +Both `BITPOS` and `BITCOUNT` are able to operate with byte ranges of the +string, instead of running for the whole length of the string. For example, we can trivially count the number of bits that have been set in a bitmap: + +{{< clients-example bitmap_tutorial bitcount >}} +> BITCOUNT pings:2024-01-01-00:00 +(integer) 1 +{{< /clients-example >}} + +For example, imagine you want to know the longest streak of daily visits of +your web site users. You start counting days starting from zero, that is the +day you made your web site public, and set a bit with `SETBIT` every time +the user visits the web site. As a bit index you simply take the current Unix +time, subtract the initial offset, and divide by the number of seconds in a day +(normally, 3600\*24). + +This way, for each user, you have a small string containing the visit +information for each day. With `BITCOUNT` it is possible to easily get +the number of days a given user visited the web site, while with +a few `BITPOS` calls, or simply fetching and analyzing the bitmap client-side, +it is possible to easily compute the longest streak. + +Bitmaps are trivial to split into multiple keys, for example for +the sake of sharding the data set and because in general it is better to +avoid working with huge keys. To split a bitmap across different keys +instead of setting all the bits into a key, a trivial strategy is just +to store M bits per key and obtain the key name with `bit-number/M` and +the Nth bit to address inside the key with `bit-number MOD M`. + + + +## Performance + +`SETBIT` and `GETBIT` are O(1). +`BITOP` is O(n), where _n_ is the length of the longest string in the comparison.
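+ +As an illustration of the daily-visit streak pattern described above, here is a minimal client-side sketch using `redis-py`. The key names and launch timestamp are hypothetical, and the scan assumes Redis's bit ordering, where bit 0 is the most significant bit of the first byte: + +```python +import time +import redis + +r = redis.Redis() + +LAUNCH_TS = 1_700_000_000      # hypothetical launch time of the site (initial offset) +SECONDS_PER_DAY = 3600 * 24 + +def record_visit(user_id: int) -> None: +    """Set today's bit in the user's visit bitmap.""" +    day = (int(time.time()) - LAUNCH_TS) // SECONDS_PER_DAY +    r.setbit(f'visits:{user_id}', day, 1) + +def longest_streak(user_id: int) -> int: +    """Fetch the bitmap and scan it client-side for the longest run of set bits.""" +    bitmap = r.get(f'visits:{user_id}') or b'' +    best = current = 0 +    for byte in bitmap: +        for shift in range(7, -1, -1):  # bit 0 is the most significant bit +            if (byte >> shift) & 1: +                current += 1 +                best = max(best, current) +            else: +                current = 0 +    return best +``` +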
+ +## Learn more + +* [Redis Bitmaps Explained](https://www.youtube.com/watch?v=oj8LdJQjhJo) teaches you how to use bitmaps for map exploration in an online game. +* [Redis University's RU101](https://university.redis.com/courses/ru101/) covers Redis bitmaps in detail. diff --git a/docs/data-types/geospatial.md b/docs/data-types/geospatial.md new file mode 100644 index 0000000000..1f87de74c3 --- /dev/null +++ b/docs/data-types/geospatial.md @@ -0,0 +1,48 @@ +--- +title: "Redis geospatial" +linkTitle: "Geospatial" +weight: 80 +description: > + Introduction to the Redis Geospatial data type +--- + +Redis geospatial indexes let you store coordinates and search for them. +This data structure is useful for finding nearby points within a given radius or bounding box. + +## Basic commands + +* `GEOADD` adds a location to a given geospatial index (note that longitude comes before latitude with this command). +* `GEOSEARCH` returns locations within a given radius or bounding box. + +See the [complete list of geospatial index commands](https://redis.io/commands/?group=geo). + + +## Examples + +Suppose you're building a mobile app that lets you find all of the bike rental stations closest to your current location. + +Add several locations to a geospatial index: +{{< clients-example geo_tutorial geoadd >}} +> GEOADD bikes:rentable -122.27652 37.805186 station:1 +(integer) 1 +> GEOADD bikes:rentable -122.2674626 37.8062344 station:2 +(integer) 1 +> GEOADD bikes:rentable -122.2469854 37.8104049 station:3 +(integer) 1 +{{< /clients-example >}} + +Find all locations within a 5 kilometer radius of a given location, and return the distance to each location: +{{< clients-example geo_tutorial geosearch >}} +> GEOSEARCH bikes:rentable FROMLONLAT -122.2612767 37.7936847 BYRADIUS 5 km WITHDIST +1) 1) "station:1" + 2) "1.8523" +2) 1) "station:2" + 2) "1.4979" +3) 1) "station:3" + 2) "2.2441" +{{< /clients-example >}} + +## Learn more + +* [Redis Geospatial Explained](https://www.youtube.com/watch?v=qftiVQraxmI) introduces geospatial indexes by showing you how to build a map of local park attractions. +* [Redis University's RU101](https://university.redis.com/courses/ru101/) covers Redis geospatial indexes in detail. diff --git a/docs/data-types/hashes.md b/docs/data-types/hashes.md new file mode 100644 index 0000000000..68f9825e9d --- /dev/null +++ b/docs/data-types/hashes.md @@ -0,0 +1,106 @@ +--- +title: "Redis hashes" +linkTitle: "Hashes" +weight: 40 +description: > + Introduction to Redis hashes +--- + +Redis hashes are record types structured as collections of field-value pairs. +You can use hashes to represent basic objects and to store groupings of counters, among other things. + +{{< clients-example hash_tutorial set_get_all >}} +> HSET bike:1 model Deimos brand Ergonom type 'Enduro bikes' price 4972 +(integer) 4 +> HGET bike:1 model +"Deimos" +> HGET bike:1 price +"4972" +> HGETALL bike:1 +1) "model" +2) "Deimos" +3) "brand" +4) "Ergonom" +5) "type" +6) "Enduro bikes" +7) "price" +8) "4972" + +{{< /clients-example >}} + +While hashes are handy for representing *objects*, the number of fields you can +put inside a hash has no practical limit (other than available memory), so you can use +hashes in many different ways inside your application. + +The command `HSET` sets multiple fields of the hash, while `HGET` retrieves +a single field.
`HMGET` is similar to `HGET` but returns an array of values: + +{{< clients-example hash_tutorial hmget >}} +> HMGET bike:1 model price no-such-field +1) "Deimos" +2) "4972" +3) (nil) +{{< /clients-example >}} + +There are commands that are able to perform operations on individual fields +as well, like `HINCRBY`: + +{{< clients-example hash_tutorial hincrby >}} +> HINCRBY bike:1 price 100 +(integer) 5072 +> HINCRBY bike:1 price -100 +(integer) 4972 +{{< /clients-example >}} + +You can find the [full list of hash commands in the documentation](https://redis.io/commands#hash). + +It is worth noting that small hashes (i.e., a few elements with small values) are +encoded in a special way in memory that makes them very memory efficient. + +## Basic commands + +* `HSET` sets the value of one or more fields on a hash. +* `HGET` returns the value at a given field. +* `HMGET` returns the values at one or more given fields. +* `HINCRBY` increments the value at a given field by the integer provided. + +See the [complete list of hash commands](https://redis.io/commands/?group=hash). + + +## Examples + +* Store counters for the number of times bike:1 has been ridden, has crashed, or has changed owners: +{{< clients-example hash_tutorial incrby_get_mget >}} +> HINCRBY bike:1:stats rides 1 +(integer) 1 +> HINCRBY bike:1:stats rides 1 +(integer) 2 +> HINCRBY bike:1:stats rides 1 +(integer) 3 +> HINCRBY bike:1:stats crashes 1 +(integer) 1 +> HINCRBY bike:1:stats owners 1 +(integer) 1 +> HGET bike:1:stats rides +"3" +> HMGET bike:1:stats owners crashes +1) "1" +2) "1" +{{< /clients-example >}} + + +## Performance + +Most Redis hash commands are O(1). + +A few commands - such as `HKEYS`, `HVALS`, and `HGETALL` - are O(n), where _n_ is the number of field-value pairs. + +## Limits + +Every hash can store up to 4,294,967,295 (2^32 - 1) field-value pairs. +In practice, your hashes are limited only by the overall memory on the VMs hosting your Redis deployment. + +## Learn more + +* [Redis Hashes Explained](https://www.youtube.com/watch?v=-KdITaRkQ-U) is a short, comprehensive video explainer covering Redis hashes. +* [Redis University's RU101](https://university.redis.com/courses/ru101/) covers Redis hashes in detail. \ No newline at end of file diff --git a/docs/data-types/lists.md b/docs/data-types/lists.md new file mode 100644 index 0000000000..f2275e2332 --- /dev/null +++ b/docs/data-types/lists.md @@ -0,0 +1,415 @@ +--- +title: "Redis lists" +linkTitle: "Lists" +weight: 20 +description: > + Introduction to Redis lists +--- + +Redis lists are linked lists of string values. +Redis lists are frequently used to: + +* Implement stacks and queues. +* Build queue management for background worker systems. + +## Basic commands + +* `LPUSH` adds a new element to the head of a list; `RPUSH` adds to the tail. +* `LPOP` removes and returns an element from the head of a list; `RPOP` does the same but from the tail of a list. +* `LLEN` returns the length of a list. +* `LMOVE` atomically moves elements from one list to another. +* `LTRIM` reduces a list to the specified range of elements. + +### Blocking commands + +Lists support several blocking commands. +For example: + +* `BLPOP` removes and returns an element from the head of a list. + If the list is empty, the command blocks until an element becomes available or until the specified timeout is reached. +* `BLMOVE` atomically moves elements from a source list to a target list. + If the source list is empty, the command will block until a new element becomes available.
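+ +As a sketch of how a client might use these blocking commands in a simple producer/consumer setup (a pattern covered in more detail later on this page), here is a minimal `redis-py` example; the queue name is just an illustration: + +```python +import redis + +r = redis.Redis(decode_responses=True) + +# Producer: push work items onto the head of the queue. +r.lpush('bikes:repairs', 'bike:1', 'bike:2') + +# Consumer: block for up to 5 seconds waiting for the next item from the tail. +while True: +    popped = r.brpop('bikes:repairs', timeout=5) +    if popped is None: +        break                      # timed out; the queue is empty +    queue, bike = popped           # BRPOP returns a (key, element) pair +    print(f'repairing {bike} from {queue}') +```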
+ +See the [complete list of list commands](https://redis.io/commands/?group=list). + +## Examples + +* Treat a list like a queue (first in, first out): +{{< clients-example list_tutorial queue >}} +> LPUSH bikes:repairs bike:1 +(integer) 1 +> LPUSH bikes:repairs bike:2 +(integer) 2 +> RPOP bikes:repairs +"bike:1" +> RPOP bikes:repairs +"bike:2" +{{< /clients-example >}} + +* Treat a list like a stack (last in, first out): +{{< clients-example list_tutorial stack >}} +> LPUSH bikes:repairs bike:1 +(integer) 1 +> LPUSH bikes:repairs bike:2 +(integer) 2 +> LPOP bikes:repairs +"bike:2" +> LPOP bikes:repairs +"bike:1" +{{< /clients-example >}} + +* Check the length of a list: +{{< clients-example list_tutorial llen >}} +> LLEN bikes:repairs +(integer) 0 +{{< /clients-example >}} + +* Atomically pop an element from one list and push to another: +{{< clients-example list_tutorial lmove_lrange >}} +> LPUSH bikes:repairs bike:1 +(integer) 1 +> LPUSH bikes:repairs bike:2 +(integer) 2 +> LMOVE bikes:repairs bikes:finished LEFT LEFT +"bike:2" +> LRANGE bikes:repairs 0 -1 +1) "bike:1" +> LRANGE bikes:finished 0 -1 +1) "bike:2" +{{< /clients-example >}} + +* To limit the length of a list, you can call `LTRIM`: +{{< clients-example list_tutorial ltrim.1 >}} +> RPUSH bikes:repairs bike:1 bike:2 bike:3 bike:4 bike:5 +(integer) 5 +> LTRIM bikes:repairs 0 2 +OK +> LRANGE bikes:repairs 0 -1 +1) "bike:1" +2) "bike:2" +3) "bike:3" +{{< /clients-example >}} + +### What are Lists? +To explain the List data type it's better to start with a little bit of theory, +as the term *List* is often used in an improper way by information technology +folks. For instance "Python Lists" are not what the name may suggest (Linked +Lists), but rather Arrays (the same data type is called Array in +Ruby actually). + +From a very general point of view a List is just a sequence of ordered +elements: 10,20,1,2,3 is a list. But the properties of a List implemented using +an Array are very different from the properties of a List implemented using a +*Linked List*. + +Redis lists are implemented via Linked Lists. This means that even if you have +millions of elements inside a list, the operation of adding a new element at +the head or the tail of the list is performed *in constant time*. The speed of adding a +new element with the `LPUSH` command to the head of a list with ten +elements is the same as adding an element to the head of a list with 10 +million elements. + +What's the downside? Accessing an element *by index* is very fast in lists +implemented with an Array (constant time indexed access) and not so fast in +lists implemented by linked lists (where the operation requires an amount of +work proportional to the index of the accessed element). + +Redis Lists are implemented with linked lists because for a database system it +is crucial to be able to add elements to a very long list in a very fast way. +Another strong advantage, as you'll see in a moment, is that Redis Lists can be +capped at a constant length in constant time. + +When fast access to the middle of a large collection of elements is important, +there is a different data structure that can be used, called sorted sets. +Sorted sets are covered in the [Sorted sets](/docs/data-types/sorted-sets) tutorial page. + +### First steps with Redis Lists + +The `LPUSH` command adds a new element into a list, on the +left (at the head), while the `RPUSH` command adds a new +element into a list, on the right (at the tail).
Finally, the +`LRANGE` command extracts ranges of elements from lists: + +{{< clients-example list_tutorial lpush_rpush >}} +> RPUSH bikes:repairs bike:1 +(integer) 1 +> RPUSH bikes:repairs bike:2 +(integer) 2 +> LPUSH bikes:repairs bike:important_bike +(integer) 3 +> LRANGE bikes:repairs 0 -1 +1) "bike:important_bike" +2) "bike:1" +3) "bike:2" +{{< /clients-example >}} + +Note that `LRANGE` takes two indexes, the first and the last +element of the range to return. Both the indexes can be negative, telling Redis +to start counting from the end: so -1 is the last element, -2 is the +penultimate element of the list, and so forth. + +As you can see `RPUSH` appended the elements on the right of the list, while +the final `LPUSH` appended the element on the left. + +Both commands are *variadic commands*, meaning that you are free to push +multiple elements into a list in a single call: + +{{< clients-example list_tutorial variadic >}} +> RPUSH bikes:repairs bike:1 bike:2 bike:3 +(integer) 3 +> LPUSH bikes:repairs bike:important_bike bike:very_important_bike +(integer) 5 +> LRANGE bikes:repairs 0 -1 +1) "bike:very_important_bike" +2) "bike:important_bike" +3) "bike:1" +4) "bike:2" +5) "bike:3" +{{< /clients-example >}} + +An important operation defined on Redis lists is the ability to *pop elements*. +Popping elements is the operation of both retrieving the element from the list, +and eliminating it from the list, at the same time. You can pop elements +from left and right, similarly to how you can push elements on both sides +of the list. We'll add three elements and pop three elements, so at the end of this +sequence of commands the list is empty and there are no more elements to +pop: + +{{< clients-example list_tutorial lpop_rpop >}} +> RPUSH bikes:repairs bike:1 bike:2 bike:3 +(integer) 3 +> RPOP bikes:repairs +"bike:3" +> LPOP bikes:repairs +"bike:1" +> RPOP bikes:repairs +"bike:2" +> RPOP bikes:repairs +(nil) +{{< /clients-example >}} + +Redis returned a NULL value to signal that there are no elements in the +list. + +### Common use cases for lists + +Lists are useful for a number of tasks; two very representative use cases +are the following: + +* Remember the latest updates posted by users into a social network. +* Communication between processes, using a consumer-producer pattern where the producer pushes items into a list, and a consumer (usually a *worker*) consumes those items and executes actions. Redis has special list commands to make this use case both more reliable and efficient. + +For example, both the popular Ruby libraries [resque](https://github.com/resque/resque) and +[sidekiq](https://github.com/mperham/sidekiq) use Redis lists under the hood in order to +implement background jobs. + +The popular Twitter social network [takes the latest tweets](http://www.infoq.com/presentations/Real-Time-Delivery-Twitter) +posted by users into Redis lists. + +To describe a common use case step by step, imagine your home page shows the latest +photos published in a photo sharing social network and you want to speed up access. + +* Every time a user posts a new photo, we add its ID into a list with `LPUSH`. +* When users visit the home page, we use `LRANGE 0 9` in order to get the latest 10 posted items. + +### Capped lists + +In many use cases we just want to use lists to store the *latest items*, +whatever they are: social network updates, logs, or anything else.
+ +Redis allows us to use lists as a capped collection, only remembering the latest +N items and discarding all the oldest items using the `LTRIM` command. + +The `LTRIM` command is similar to `LRANGE`, but **instead of displaying the +specified range of elements** it sets this range as the new list value. All +the elements outside the given range are removed. + +For example, if you're adding bikes on the end of a list of repairs, but only +want to worry about the 3 that have been on the list the longest: + +{{< clients-example list_tutorial ltrim >}} +> RPUSH bikes:repairs bike:1 bike:2 bike:3 bike:4 bike:5 +(integer) 5 +> LTRIM bikes:repairs 0 2 +OK +> LRANGE bikes:repairs 0 -1 +1) "bike:1" +2) "bike:2" +3) "bike:3" +{{< /clients-example >}} + +The above `LTRIM` command tells Redis to keep just list elements from index +0 to 2; everything else will be discarded. This allows for a very simple but +useful pattern: doing a List push operation + a List trim operation together +to add a new element and discard elements exceeding a limit. `LTRIM` +with negative indexes can then be used to keep only the 3 most recently added elements: + +{{< clients-example list_tutorial ltrim_end_of_list >}} +> RPUSH bikes:repairs bike:1 bike:2 bike:3 bike:4 bike:5 +(integer) 5 +> LTRIM bikes:repairs -3 -1 +OK +> LRANGE bikes:repairs 0 -1 +1) "bike:3" +2) "bike:4" +3) "bike:5" +{{< /clients-example >}} + +The above combination adds new elements and keeps only the 3 +newest elements in the list. With `LRANGE` you can access the top items +without any need to remember very old data. + +Note: while `LRANGE` is technically an O(N) command, accessing small ranges +towards the head or the tail of the list is a constant time operation. + +## Blocking operations on lists + +Lists have a special feature that makes them suitable to implement queues, +and in general as a building block for inter-process communication systems: +blocking operations. + +Imagine you want to push items into a list with one process, and use +a different process in order to actually do some kind of work with those +items. This is the usual producer / consumer setup, and can be implemented +in the following simple way: + +* To push items into the list, producers call `LPUSH`. +* To extract / process items from the list, consumers call `RPOP`. + +However, it is possible that sometimes the list is empty and there is nothing +to process, so `RPOP` just returns NULL. In this case a consumer is forced to wait +some time and retry with `RPOP`. This is called *polling*, and is not +a good idea in this context because it has several drawbacks: + +1. Forces Redis and clients to process useless commands (all the requests when the list is empty will get no actual work done; they'll just return NULL). +2. Adds a delay to the processing of items, since after a worker receives a NULL, it waits some time. To make the delay smaller, we could wait less between calls to `RPOP`, with the effect of amplifying problem number 1, i.e., more useless calls to Redis. + +So Redis implements commands called `BRPOP` and `BLPOP` which are versions +of `RPOP` and `LPOP` able to block if the list is empty: they'll return to +the caller only when a new element is added to the list, or when a user-specified +timeout is reached.
+ +This is an example of a `BRPOP` call we could use in the worker: + +{{< clients-example list_tutorial brpop >}} +> RPUSH bikes:repairs bike:1 bike:2 +(integer) 2 +> BRPOP bikes:repairs 1 +1) "bikes:repairs" +2) "bike:2" +> BRPOP bikes:repairs 1 +1) "bikes:repairs" +2) "bike:1" +> BRPOP bikes:repairs 1 +(nil) +(2.01s) +{{< /clients-example >}} + +It means: "wait for elements in the list `bikes:repairs`, but return if after 1 second +no element is available". + +Note that you can use 0 as the timeout to wait for elements forever. You can +also specify multiple lists, not just one, in order to wait on several +lists at the same time and get notified when the first of them receives an +element. + +A few things to note about `BRPOP`: + +1. Clients are served in an ordered way: the first client that blocked waiting for a list is served first when an element is pushed by some other client, and so forth. +2. The return value is different compared to `RPOP`: it is a two-element array since it also includes the name of the key, because `BRPOP` and `BLPOP` are able to block waiting for elements from multiple lists. +3. If the timeout is reached, NULL is returned. + +There are more things you should know about lists and blocking operations. We +suggest that you read more on the following: + +* It is possible to build safer queues or rotating queues using `LMOVE`. +* There is also a blocking variant of the command, called `BLMOVE`. + +## Automatic creation and removal of keys + +So far in our examples we never had to create empty lists before pushing +elements, or removing empty lists when they no longer have elements inside. +It is Redis' responsibility to delete keys when lists are left empty, or to create +an empty list if the key does not exist and we are trying to add elements +to it, for example, with `LPUSH`. + +This is not specific to lists; it applies to all the Redis data types +composed of multiple elements: Streams, Sets, Sorted Sets, and Hashes. + +Basically we can summarize the behavior with three rules: + +1. When we add an element to an aggregate data type, if the target key does not exist, an empty aggregate data type is created before adding the element. +2. When we remove elements from an aggregate data type, if the value remains empty, the key is automatically destroyed. The Stream data type is the only exception to this rule. +3. Calling a read-only command such as `LLEN` (which returns the length of the list), or a write command removing elements, with an empty key, always produces the same result as if the key held an empty aggregate type of the kind the command expects to find. + +Examples of rule 1: + +{{< clients-example list_tutorial rule_1 >}} +> DEL new_bikes +(integer) 0 +> LPUSH new_bikes bike:1 bike:2 bike:3 +(integer) 3 +{{< /clients-example >}} + +However, we can't perform operations against the wrong type if the key exists: + +{{< clients-example list_tutorial rule_1.1 >}} +> SET new_bikes bike:1 +OK +> TYPE new_bikes +string +> LPUSH new_bikes bike:2 bike:3 +(error) WRONGTYPE Operation against a key holding the wrong kind of value +{{< /clients-example >}} + +Example of rule 2: + +{{< clients-example list_tutorial rule_2 >}} +> RPUSH bikes:repairs bike:1 bike:2 bike:3 +(integer) 3 +> EXISTS bikes:repairs +(integer) 1 +> LPOP bikes:repairs +"bike:3" +> LPOP bikes:repairs +"bike:2" +> LPOP bikes:repairs +"bike:1" +> EXISTS bikes:repairs +(integer) 0 +{{< /clients-example >}} + +The key no longer exists after all the elements are popped.
+ +Example of rule 3: + +{{< clients-example list_tutorial rule_3 >}} +> DEL bikes:repairs +(integer) 0 +> LLEN bikes:repairs +(integer) 0 +> LPOP bikes:repairs +(nil) +{{< /clients-example >}} + + +## Limits + +The max length of a Redis list is 2^32 - 1 (4,294,967,295) elements. + + +## Performance + +List operations that access a list's head or tail are O(1), which means they're highly efficient. +However, commands that manipulate elements within a list are usually O(n). +Examples of these include `LINDEX`, `LINSERT`, and `LSET`. +Exercise caution when running these commands, especially when operating on large lists. + +## Alternatives + +Consider [Redis streams](/docs/data-types/streams) as an alternative to lists when you need to store and process an indeterminate series of events. + +## Learn more + +* [Redis Lists Explained](https://www.youtube.com/watch?v=PB5SeOkkxQc) is a short, comprehensive video explainer on Redis lists. +* [Redis University's RU101](https://university.redis.com/courses/ru101/) covers Redis lists in detail. diff --git a/docs/data-types/probabilistic/hyperloglogs.md b/docs/data-types/probabilistic/hyperloglogs.md new file mode 100644 index 0000000000..89bbd11a38 --- /dev/null +++ b/docs/data-types/probabilistic/hyperloglogs.md @@ -0,0 +1,100 @@ +--- +title: "HyperLogLog" +linkTitle: "HyperLogLog" +weight: 1 +description: > + HyperLogLog is a probabilistic data structure that estimates the cardinality of a set. +aliases: + - /docs/data-types/hyperloglogs/ +--- + +HyperLogLog is a probabilistic data structure that estimates the cardinality of a set. As a probabilistic data structure, HyperLogLog trades perfect accuracy for efficient space utilization. + +The Redis HyperLogLog implementation uses up to 12 KB and provides a standard error of 0.81%. + +Counting unique items usually requires an amount of memory +proportional to the number of items you want to count, because you need +to remember the elements you have already seen in the past in order to avoid +counting them multiple times. However, there is a family of algorithms that trade +memory for precision: they return an estimated measure with a standard error, +which, in the case of the Redis implementation for HyperLogLog, is less than 1%. +The magic of this algorithm is that you no longer need to use an amount of memory +proportional to the number of items counted, and instead can use a +constant amount of memory: 12 KB in the worst case, or a lot less if your +HyperLogLog (we'll just call them HLLs from now on) has seen very few elements. + +HLLs in Redis, while technically a different data structure, are encoded +as a Redis string, so you can call `GET` to serialize an HLL, and `SET` +to deserialize it back to the server. + +Conceptually the HLL API is like using Sets to do the same task. You would +`SADD` every observed element into a set, and would use `SCARD` to check the +number of elements inside the set, which are unique since `SADD` will not +re-add an existing element. + +While you don't really *add items* into an HLL, because the data structure +only contains a state that does not include actual elements, the API is the +same: + +* Every time you see a new element, you add it to the count with `PFADD`. +* When you want to retrieve the current approximation of the unique elements added with `PFADD`, you can use the `PFCOUNT` command. If you need to merge two different HLLs, the `PFMERGE` command is available.
Since HLLs provide approximate counts of unique elements, the result of the merge will give you an approximation of the number of unique elements across both source HLLs. + +{{< clients-example hll_tutorial pfadd >}} +> PFADD bikes Hyperion Deimos Phoebe Quaoar +(integer) 1 +> PFCOUNT bikes +(integer) 4 +> PFADD commuter_bikes Salacia Mimas Quaoar +(integer) 1 +> PFMERGE all_bikes bikes commuter_bikes +OK +> PFCOUNT all_bikes +(integer) 6 +{{< /clients-example >}} + +Some example use cases for this data structure include counting the unique queries +performed by users in a search form every day, the number of unique visitors to a web page, and other similar cases. + +Redis is also able to perform the union of HLLs; please check the +[full documentation](/commands#hyperloglog) for more information. + +## Use cases + +**Anonymous unique visits of a web page (SaaS, analytics tools)** + +This application answers these questions: + +- How many unique visits has this page had on this day? +- How many unique users have played this song? +- How many unique users have viewed this video? + +{{% alert title="Note" color="warning" %}} + +Storing the IP address or any other kind of personal identifier is against the law in some countries, which makes it impossible to get unique visitor statistics on your website. + +{{% /alert %}} + +One HyperLogLog is created per page (video/song) per period, and every IP/identifier is added to it on every visit. + +## Basic commands + +* `PFADD` adds an item to a HyperLogLog. +* `PFCOUNT` returns an estimate of the number of items in the set. +* `PFMERGE` combines two or more HyperLogLogs into one. + +See the [complete list of HyperLogLog commands](https://redis.io/commands/?group=hyperloglog). + +## Performance + +Writing (`PFADD`) to and reading from (`PFCOUNT`) the HyperLogLog is done in constant time and space. +Merging HLLs is O(n), where _n_ is the number of sketches. + +## Limits + +The HyperLogLog can estimate the cardinality of sets with up to 18,446,744,073,709,551,616 (2^64) members. + +## Learn more + +* [Redis new data structure: the HyperLogLog](http://antirez.com/news/75) has a lot of details about the data structure and its implementation in Redis. +* [Redis HyperLogLog Explained](https://www.youtube.com/watch?v=MunL8nnwscQ) shows you how to use Redis HyperLogLog data structures to build a traffic heat map. + diff --git a/docs/data-types/sets.md b/docs/data-types/sets.md new file mode 100644 index 0000000000..f4f51f5e2f --- /dev/null +++ b/docs/data-types/sets.md @@ -0,0 +1,180 @@ +--- +title: "Redis sets" +linkTitle: "Sets" +weight: 30 +description: > + Introduction to Redis sets +--- + +A Redis set is an unordered collection of unique strings (members). +You can use Redis sets to efficiently: + +* Track unique items (e.g., track all unique IP addresses accessing a given blog post). +* Represent relations (e.g., the set of all users with a given role). +* Perform common set operations such as intersections, unions, and differences. + +## Basic commands + +* `SADD` adds a new member to a set. +* `SREM` removes the specified member from the set. +* `SISMEMBER` tests a string for set membership. +* `SINTER` returns the set of members that two or more sets have in common (i.e., the intersection). +* `SCARD` returns the size (a.k.a. cardinality) of a set. + +See the [complete list of set commands](https://redis.io/commands/?group=set). + +## Examples + +* Store the sets of bikes racing in France and the USA.
Note that +if you add a member that already exists, it will be ignored. +{{< clients-example sets_tutorial sadd >}} +> SADD bikes:racing:france bike:1 +(integer) 1 +> SADD bikes:racing:france bike:1 +(integer) 0 +> SADD bikes:racing:france bike:2 bike:3 +(integer) 2 +> SADD bikes:racing:usa bike:1 bike:4 +(integer) 2 +{{< /clients-example >}} + +* Check whether bike:1 or bike:2 are racing in the US. +{{< clients-example sets_tutorial sismember >}} +> SISMEMBER bikes:racing:usa bike:1 +(integer) 1 +> SISMEMBER bikes:racing:usa bike:2 +(integer) 0 +{{< /clients-example >}} + +* Which bikes are competing in both races? +{{< clients-example sets_tutorial sinter >}} +> SINTER bikes:racing:france bikes:racing:usa +1) "bike:1" +{{< /clients-example >}} + +* How many bikes are racing in France? +{{< clients-example sets_tutorial scard >}} +> SCARD bikes:racing:france +(integer) 3 +{{< /clients-example >}} + +## Tutorial + +The `SADD` command adds new elements to a set. It's also possible +to do a number of other operations against sets like testing if a given element +already exists, performing the intersection, union or difference between +multiple sets, and so forth. + +{{< clients-example sets_tutorial sadd_smembers >}} +> SADD bikes:racing:france bike:1 bike:2 bike:3 +(integer) 3 +> SMEMBERS bikes:racing:france +1) "bike:3" +2) "bike:1" +3) "bike:2" +{{< /clients-example >}} + +Here I've added three elements to my set and told Redis to return all the +elements. There is no order guarantee with a set. Redis is free to return the +elements in any order at every call. + +Redis has commands to test for set membership. These commands can be used on single as well as multiple items: + +{{< clients-example sets_tutorial smismember >}} +> SISMEMBER bikes:racing:france bike:1 +(integer) 1 +> SMISMEMBER bikes:racing:france bike:2 bike:3 bike:4 +1) (integer) 1 +2) (integer) 1 +3) (integer) 0 +{{< /clients-example >}} + +We can also find the difference between two sets. For instance, we may want +to know which bikes are racing in France but not in the USA: + +{{< clients-example sets_tutorial sdiff >}} +> SADD bikes:racing:usa bike:1 bike:4 +(integer) 2 +> SDIFF bikes:racing:france bikes:racing:usa +1) "bike:3" +2) "bike:2" +{{< /clients-example >}} + +There are other non-trivial operations that are still easy to implement +using the right Redis commands. For instance, we may want a list of all the +bikes racing in each of several races: France, the USA, and others. We can do this using +the `SINTER` command, which performs the intersection between different +sets. In addition to intersections you can also perform +unions, differences, and more. For example, +if we add a third race we can see some of these commands in action: + +{{< clients-example sets_tutorial multisets >}} +> SADD bikes:racing:france bike:1 bike:2 bike:3 +(integer) 3 +> SADD bikes:racing:usa bike:1 bike:4 +(integer) 2 +> SADD bikes:racing:italy bike:1 bike:2 bike:3 bike:4 +(integer) 4 +> SINTER bikes:racing:france bikes:racing:usa bikes:racing:italy +1) "bike:1" +> SUNION bikes:racing:france bikes:racing:usa bikes:racing:italy +1) "bike:2" +2) "bike:1" +3) "bike:4" +4) "bike:3" +> SDIFF bikes:racing:france bikes:racing:usa bikes:racing:italy +(empty array) +> SDIFF bikes:racing:france bikes:racing:usa +1) "bike:3" +2) "bike:2" +> SDIFF bikes:racing:usa bikes:racing:france +1) "bike:4" +{{< /clients-example >}} + +You'll note that the `SDIFF` command returns an empty array when the +difference between all sets is empty.
You'll also note that the order of sets +passed to `SDIFF` matters, since the difference is not commutative. + +When you want to remove items from a set, use the `SREM` command to +remove one or more specific items, or use the `SPOP` command to +remove a random item. You can also _return_ a random item from a +set without removing it using the `SRANDMEMBER` command: + +{{< clients-example sets_tutorial srem >}} +> SADD bikes:racing:france bike:1 bike:2 bike:3 bike:4 bike:5 +(integer) 5 +> SREM bikes:racing:france bike:1 +(integer) 1 +> SPOP bikes:racing:france +"bike:3" +> SMEMBERS bikes:racing:france +1) "bike:2" +2) "bike:4" +3) "bike:5" +> SRANDMEMBER bikes:racing:france +"bike:2" +{{< /clients-example >}} + +## Limits + +The max size of a Redis set is 2^32 - 1 (4,294,967,295) members. + +## Performance + +Most set operations, including adding, removing, and checking whether an item is a set member, are O(1). +This means that they're highly efficient. +However, for large sets with hundreds of thousands of members or more, you should exercise caution when running the `SMEMBERS` command. +This command is O(n) and returns the entire set in a single response. +As an alternative, consider the `SSCAN` command, which lets you retrieve all members of a set iteratively. + +## Alternatives + +Set membership checks on large datasets (or on streaming data) can use a lot of memory. +If you're concerned about memory usage and don't need perfect precision, consider a [Bloom filter or Cuckoo filter](/docs/stack/bloom) as an alternative to a set. + +Redis sets are frequently used as a kind of index. +If you need to index and query your data, consider the [JSON](/docs/stack/json) data type and the [Search and query](/docs/stack/search) features. + +## Learn more + +* [Redis Sets Explained](https://www.youtube.com/watch?v=PKdCppSNTGQ) and [Redis Sets Elaborated](https://www.youtube.com/watch?v=aRw5ME_5kMY) are two short but thorough video explainers covering Redis sets. +* [Redis University's RU101](https://university.redis.com/courses/ru101/) explores Redis sets in detail. diff --git a/docs/data-types/sorted-sets.md b/docs/data-types/sorted-sets.md new file mode 100644 index 0000000000..9351e2a09c --- /dev/null +++ b/docs/data-types/sorted-sets.md @@ -0,0 +1,246 @@ +--- +title: "Redis sorted sets" +linkTitle: "Sorted sets" +weight: 50 +description: > + Introduction to Redis sorted sets +--- + +A Redis sorted set is a collection of unique strings (members) ordered by an associated score. +When more than one string has the same score, the strings are ordered lexicographically. +Some use cases for sorted sets include: + +* Leaderboards. For example, you can use sorted sets to easily maintain ordered lists of the highest scores in a massive online game. +* Rate limiters. In particular, you can use a sorted set to build a sliding-window rate limiter to prevent excessive API requests. + +You can think of sorted sets as a mix between a Set and +a Hash. Like sets, sorted sets are composed of unique, non-repeating +string elements, so in some sense a sorted set is a set as well. + +However, while elements inside sets are not ordered, every element in +a sorted set is associated with a floating point value, called *the score* +(this is why the type is also similar to a hash, since every element +is mapped to a value). + +Moreover, elements in a sorted set are *taken in order* (so they are not +ordered on request; order is a peculiarity of the data structure used to +represent sorted sets).
They are ordered according to the following rule: + +* If B and A are two elements with different scores, then A > B if A.score is > B.score. +* If B and A have exactly the same score, then A > B if the A string is lexicographically greater than the B string. B and A strings can't be equal since sorted sets only have unique elements. + +Let's start with a simple example: we'll add all our racers and the score they got in the first race: + +{{< clients-example ss_tutorial zadd >}} +> ZADD racer_scores 10 "Norem" +(integer) 1 +> ZADD racer_scores 12 "Castilla" +(integer) 1 +> ZADD racer_scores 8 "Sam-Bodden" 10 "Royce" 6 "Ford" 14 "Prickett" +(integer) 4 +{{< /clients-example >}} + + +As you can see, `ZADD` is similar to `SADD`, but takes one additional argument +(placed before the element to be added) which is the score. +`ZADD` is also variadic, so you are free to specify multiple score-value +pairs, as in the example above. + +With sorted sets it is trivial to return the racers sorted by their +scores because actually *they are already sorted*. + +Implementation note: Sorted sets are implemented via a +dual-ported data structure containing both a skip list and a hash table, so +every time we add an element Redis performs an O(log(N)) operation. That's +good, but when we ask for sorted elements Redis does not have to do any work at +all: it's already sorted. Note that the `ZRANGE` order is low to high, while the `ZREVRANGE` order is high to low: + +{{< clients-example ss_tutorial zrange >}} +> ZRANGE racer_scores 0 -1 +1) "Ford" +2) "Sam-Bodden" +3) "Norem" +4) "Royce" +5) "Castilla" +6) "Prickett" +> ZREVRANGE racer_scores 0 -1 +1) "Prickett" +2) "Castilla" +3) "Royce" +4) "Norem" +5) "Sam-Bodden" +6) "Ford" +{{< /clients-example >}} + +Note: 0 and -1 mean from element index 0 to the last element (-1 works +here just as it does in the case of the `LRANGE` command). + +It is possible to return scores as well, using the `WITHSCORES` argument: + +{{< clients-example ss_tutorial zrange_withscores >}} +> ZRANGE racer_scores 0 -1 withscores + 1) "Ford" + 2) "6" + 3) "Sam-Bodden" + 4) "8" + 5) "Norem" + 6) "10" + 7) "Royce" + 8) "10" + 9) "Castilla" +10) "12" +11) "Prickett" +12) "14" +{{< /clients-example >}} + +### Operating on ranges + +Sorted sets are more powerful than this. They can operate on ranges. +Let's get all the racers with 10 or fewer points. We +use the `ZRANGEBYSCORE` command to do it: + +{{< clients-example ss_tutorial zrangebyscore >}} +> ZRANGEBYSCORE racer_scores -inf 10 +1) "Ford" +2) "Sam-Bodden" +3) "Norem" +4) "Royce" +{{< /clients-example >}} + +We asked Redis to return all the elements with a score between negative +infinity and 10 (both extremes are included). + +To remove an element we'd simply call `ZREM` with the racer's name. +It's also possible to remove ranges of elements. Let's remove racer Castilla along with all +the racers with strictly fewer than 10 points: + +{{< clients-example ss_tutorial zremrangebyscore >}} +> ZREM racer_scores "Castilla" +(integer) 1 +> ZREMRANGEBYSCORE racer_scores -inf 9 +(integer) 2 +> ZRANGE racer_scores 0 -1 +1) "Norem" +2) "Royce" +3) "Prickett" +{{< /clients-example >}} + +`ZREMRANGEBYSCORE` is perhaps not the best command name, +but it can be very useful, and returns the number of removed elements. + +Another extremely useful operation defined for sorted set elements +is the get-rank operation. It is possible to ask what is the +position of an element in the set of ordered elements.
+ +The `ZREVRANK` command is also available in order to get the rank, considering +the elements sorted in a descending way. + +{{< clients-example ss_tutorial zrank >}} +> ZRANK racer_scores "Norem" +(integer) 0 +> ZREVRANK racer_scores "Norem" +(integer) 3 +{{< /clients-example >}} + +### Lexicographical scores + +In Redis 2.8, a new feature was introduced that allows +getting ranges lexicographically, assuming elements in a sorted set are all +inserted with the same score (elements are compared with the C +`memcmp` function, so it is guaranteed that there is no collation, and every +Redis instance will reply with the same output). + +The main commands to operate with lexicographical ranges are `ZRANGEBYLEX`, +`ZREVRANGEBYLEX`, `ZREMRANGEBYLEX` and `ZLEXCOUNT`. + +For example, let's add our list of racers again, but this time +using a score of zero for all the elements. We'll see that because of the sorted set's ordering rules, they are already sorted lexicographically. Using `ZRANGEBYLEX` we can ask for lexicographical ranges: + +{{< clients-example ss_tutorial zadd_lex >}} +> ZADD racer_scores 0 "Norem" 0 "Sam-Bodden" 0 "Royce" 0 "Castilla" 0 "Prickett" 0 "Ford" +(integer) 3 +> ZRANGE racer_scores 0 -1 +1) "Castilla" +2) "Ford" +3) "Norem" +4) "Prickett" +5) "Royce" +6) "Sam-Bodden" +> ZRANGEBYLEX racer_scores [A [L +1) "Castilla" +2) "Ford" +{{< /clients-example >}} + +Ranges can be inclusive or exclusive (depending on the first character), +and the positive and negative infinite strings are specified with +the `+` and `-` strings respectively. See the documentation for more information. + +This feature is important because it allows us to use sorted sets as a generic +index. For example, if you want to index elements by a 128-bit unsigned +integer argument, all you need to do is to add elements into a sorted +set with the same score (for example 0) but with a 16-byte prefix +consisting of **the 128-bit number in big endian**. Since big-endian numbers, +when ordered lexicographically (in raw byte order), are +ordered numerically as well, you can ask for ranges in the 128-bit space, +and get the element's value by discarding the prefix. + +If you want to see the feature in the context of a more serious demo, +check the [Redis autocomplete demo](http://autocomplete.redis.io). + +### Updating the score: leaderboards + +Just a final note about sorted sets before switching to the next topic. +Sorted sets' scores can be updated at any time. Just calling `ZADD` against +an element already included in the sorted set will update its score +(and position) with O(log(N)) time complexity. As such, sorted sets are suitable +when there are tons of updates. + +Because of this characteristic a common use case is leaderboards. +The typical application is a Facebook game where you combine the ability to +take users sorted by their high score, plus the get-rank operation, in order +to show the top-N users, and the user rank in the leaderboard (e.g., "you are +the #4932 best score here"). + +## Examples + +* There are two ways we can use a sorted set to represent a leaderboard. If we know a racer's new score, we can update it directly via the `ZADD` command. However, if we want to add points to an existing score, we can use the `ZINCRBY` command.
+ +{{< clients-example ss_tutorial leaderboard >}} +> ZADD racer_scores 100 "Wood" +(integer) 1 +> ZADD racer_scores 100 "Henshaw" +(integer) 1 +> ZADD racer_scores 150 "Henshaw" +(integer) 0 +> ZINCRBY racer_scores 50 "Wood" +"150" +> ZINCRBY racer_scores 50 "Henshaw" +"200" +{{< /clients-example >}} + +You'll see that `ZADD` returns 0 when the member already exists (the score is updated), while `ZINCRBY` returns the new score. The score for racer Henshaw went from 100, was changed to 150 with no regard for what score was there before, and then was incremented by 50 to 200. + +## Basic commands + +* `ZADD` adds a new member and associated score to a sorted set. If the member already exists, the score is updated. +* `ZRANGE` returns members of a sorted set, sorted within a given range. +* `ZRANK` returns the rank of the provided member, assuming the sorted set is in ascending order. +* `ZREVRANK` returns the rank of the provided member, assuming the sorted set is in descending order. + +See the [complete list of sorted set commands](https://redis.io/commands/?group=sorted-set). + +## Performance + +Most sorted set operations are O(log(n)), where _n_ is the number of members. + +Exercise some caution when running the `ZRANGE` command with large return values (e.g., in the tens of thousands or more). +This command's time complexity is O(log(n) + m), where _m_ is the number of results returned. + +## Alternatives + +Redis sorted sets are sometimes used for indexing other Redis data structures. +If you need to index and query your data, consider the [JSON](/docs/stack/json) data type and the [Search and query](/docs/stack/search) features. + +## Learn more + +* [Redis Sorted Sets Explained](https://www.youtube.com/watch?v=MUKlxdBQZ7g) is an entertaining introduction to sorted sets in Redis. +* [Redis University's RU101](https://university.redis.com/courses/ru101/) explores Redis sorted sets in detail. diff --git a/docs/data-types/streams.md b/docs/data-types/streams.md new file mode 100644 index 0000000000..86152a6b82 --- /dev/null +++ b/docs/data-types/streams.md @@ -0,0 +1,934 @@ +--- +title: "Redis Streams" +linkTitle: "Streams" +weight: 60 +description: > + Introduction to Redis streams +aliases: + - /topics/streams-intro + - /docs/manual/data-types/streams + - /docs/data-types/streams-tutorial/ +--- + +A Redis stream is a data structure that acts like an append-only log but also implements several operations to overcome some of the limits of a typical append-only log. These include random access in O(1) time and complex consumption strategies, such as consumer groups. +You can use streams to record and simultaneously syndicate events in real time. +Examples of Redis stream use cases include: + +* Event sourcing (e.g., tracking user actions, clicks, etc.) +* Sensor monitoring (e.g., readings from devices in the field) +* Notifications (e.g., storing a record of each user's notifications in a separate stream) + +Redis generates a unique ID for each stream entry. +You can use these IDs to retrieve their associated entries later or to read and process all subsequent entries in the stream. Note that because these IDs are related to time, the ones shown here may vary and will be different from the IDs you see in your own Redis instance. + +Redis streams support several trimming strategies (to prevent streams from growing unbounded) and more than one consumption strategy (see `XREAD`, `XREADGROUP`, and `XRANGE`). + +## Basic commands +* `XADD` adds a new entry to a stream.
+* `XREAD` reads one or more entries, starting at a given position and moving forward in time.
+* `XRANGE` returns a range of entries between two supplied entry IDs.
+* `XLEN` returns the length of a stream.
+
+See the [complete list of stream commands](https://redis.io/commands/?group=stream).
+
+
+## Examples
+
+* When our racers pass a checkpoint, we add a stream entry for each racer that includes the racer's name, speed, position, and location ID:
+{{< clients-example stream_tutorial xadd >}}
+> XADD race:france * rider Castilla speed 30.2 position 1 location_id 1
+"1692632086370-0"
+> XADD race:france * rider Norem speed 28.8 position 3 location_id 1
+"1692632094485-0"
+> XADD race:france * rider Prickett speed 29.7 position 2 location_id 1
+"1692632102976-0"
+{{< /clients-example >}}
+
+* Read two stream entries starting at ID `1692632086370-0`:
+{{< clients-example stream_tutorial xrange >}}
+> XRANGE race:france 1692632086370-0 + COUNT 2
+1) 1) "1692632086370-0"
+   2) 1) "rider"
+      2) "Castilla"
+      3) "speed"
+      4) "30.2"
+      5) "position"
+      6) "1"
+      7) "location_id"
+      8) "1"
+2) 1) "1692632094485-0"
+   2) 1) "rider"
+      2) "Norem"
+      3) "speed"
+      4) "28.8"
+      5) "position"
+      6) "3"
+      7) "location_id"
+      8) "1"
+{{< /clients-example >}}
+
+* Read up to 100 new stream entries, starting at the end of the stream, and block for up to 300 ms if no entries are being written:
+{{< clients-example stream_tutorial xread_block >}}
+> XREAD COUNT 100 BLOCK 300 STREAMS race:france $
+(nil)
+{{< /clients-example >}}
+
+## Performance
+
+Adding an entry to a stream is O(1).
+Accessing any single entry is O(n), where _n_ is the length of the ID.
+Since stream IDs are typically short and of a fixed length, this effectively reduces to a constant time lookup.
+For details on why, note that streams are implemented as [radix trees](https://en.wikipedia.org/wiki/Radix_tree).
+
+Simply put, Redis streams provide highly efficient inserts and reads.
+See each command's time complexity for the details.
+
+
+## Streams basics
+
+Streams are an append-only data structure. The fundamental write command, called `XADD`, appends a new entry to the specified stream.
+
+Each stream entry consists of one or more field-value pairs, somewhat like a dictionary or a Redis hash:
+
+{{< clients-example stream_tutorial xadd_2 >}}
+> XADD race:france * rider Castilla speed 29.9 position 1 location_id 2
+"1692632147973-0"
+{{< /clients-example >}}
+
+The above call to the `XADD` command adds an entry `rider: Castilla, speed: 29.9, position: 1, location_id: 2` to the stream at key `race:france`, using an auto-generated entry ID, which is the one returned by the command, specifically `1692632147973-0`. The command takes the key name `race:france` as its first argument; the second argument is the entry ID that identifies every entry inside a stream. In this case, however, we passed `*` because we want the server to generate a new ID for us. Every new ID will be monotonically increasing, so in simpler terms, every new entry added will have a higher ID compared to all the past entries. Auto-generation of IDs by the server is almost always what you want, and the reasons for specifying an ID explicitly are very rare. We'll talk more about this later. The fact that each stream entry has an ID is another similarity with log files, where line numbers, or the byte offset inside the file, can be used to identify a given entry.
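+
+For a sense of how this looks from an application rather than the CLI, here is a minimal sketch using the Python `redis-py` client against a local server; the client choice and connection details are assumptions of this example, not requirements of `XADD`:
+
+```python
+import redis
+
+# Hypothetical local connection; decode_responses returns plain strings.
+r = redis.Redis(decode_responses=True)
+
+# Append an entry, letting the server auto-generate the ID (the `*` above).
+entry_id = r.xadd("race:france", {"rider": "Castilla", "speed": "29.9",
+                                  "position": "1", "location_id": "2"})
+print(entry_id)  # e.g. "1692632147973-0"; IDs will differ on your instance
+```
+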
+Returning to our `XADD` example, after the key name and ID, the next arguments are the field-value pairs composing our stream entry.
+
+It is possible to get the number of items inside a stream by using the `XLEN` command:
+
+{{< clients-example stream_tutorial xlen >}}
+> XLEN race:france
+(integer) 4
+{{< /clients-example >}}
+
+### Entry IDs
+
+The entry ID returned by the `XADD` command, which uniquely identifies each entry inside a given stream, is composed of two parts:
+
+```
+<millisecondsTime>-<sequenceNumber>
+```
+
+The milliseconds time part is the local time of the Redis node generating the stream ID. However, if the current milliseconds time happens to be smaller than the previous entry time, then the previous entry time is used instead, so if a clock jumps backward the monotonically incrementing ID property still holds. The sequence number is used for entries created in the same millisecond. Since the sequence number is 64 bits wide, in practical terms there is no limit to the number of entries that can be generated within the same millisecond.
+
+The format of such IDs may look strange at first, and the gentle reader may wonder why the time is part of the ID. The reason is that Redis streams support range queries by ID. Because the ID is related to the time the entry is generated, this gives the ability to query for time ranges basically for free. We will see this soon while covering the `XRANGE` command.
+
+If for some reason the user needs incremental IDs that are not related to time but are actually associated with another external system ID, as previously mentioned, the `XADD` command can take an explicit ID instead of the `*` wildcard ID that triggers auto-generation, like in the following examples:
+
+{{< clients-example stream_tutorial xadd_id >}}
+> XADD race:usa 0-1 racer Castilla
+0-1
+> XADD race:usa 0-2 racer Norem
+0-2
+{{< /clients-example >}}
+
+Note that in this case, the minimum ID is 0-1 and that the command will not accept an ID equal or smaller than a previous one:
+
+{{< clients-example stream_tutorial xadd_bad_id >}}
+> XADD race:usa 0-1 racer Prickett
+(error) ERR The ID specified in XADD is equal or smaller than the target stream top item
+{{< /clients-example >}}
+
+If you're running Redis 7 or later, you can also provide an explicit ID consisting of the milliseconds part only. In this case, the sequence portion of the ID will be automatically generated. To do this, use the syntax below:
+
+{{< clients-example stream_tutorial xadd_7 >}}
+> XADD race:usa 0-* racer Prickett
+0-3
+{{< /clients-example >}}
+
+## Getting data from Streams
+
+Now we are finally able to append entries to our stream via `XADD`. However, while appending data to a stream is quite obvious, the way streams can be queried in order to extract data is not so obvious. If we continue with the analogy of the log file, one obvious way is to mimic what we normally do with the Unix command `tail -f`, that is, we may start to listen in order to get the new messages that are appended to the stream. Note that unlike the blocking list operations of Redis, where a given element will reach a single client which is blocking in a *pop style* operation like `BLPOP`, with streams we want multiple consumers to see the new messages appended to the stream (the same way many `tail -f` processes can see what is added to a log). Using the traditional terminology we want the streams to be able to *fan out* messages to multiple clients.
+
+However, this is just one potential access mode.
We could also see a stream in quite a different way: not as a messaging system, but as a *time series store*. In this case, maybe it's also useful to get the new messages appended, but another natural query mode is to get messages by ranges of time, or alternatively to iterate the messages using a cursor to incrementally check all the history. This is definitely another useful access mode. + +Finally, if we see a stream from the point of view of consumers, we may want to access the stream in yet another way, that is, as a stream of messages that can be partitioned to multiple consumers that are processing such messages, so that groups of consumers can only see a subset of the messages arriving in a single stream. In this way, it is possible to scale the message processing across different consumers, without single consumers having to process all the messages: each consumer will just get different messages to process. This is basically what Kafka (TM) does with consumer groups. Reading messages via consumer groups is yet another interesting mode of reading from a Redis Stream. + +Redis Streams support all three of the query modes described above via different commands. The next sections will show them all, starting from the simplest and most direct to use: range queries. + +### Querying by range: XRANGE and XREVRANGE + +To query the stream by range we are only required to specify two IDs, *start* and *end*. The range returned will include the elements having start or end as ID, so the range is inclusive. The two special IDs `-` and `+` respectively mean the smallest and the greatest ID possible. + +{{< clients-example stream_tutorial xrange_all >}} +> XRANGE race:france - + +1) 1) "1692632086370-0" + 2) 1) "rider" + 2) "Castilla" + 3) "speed" + 4) "30.2" + 5) "position" + 6) "1" + 7) "location_id" + 8) "1" +2) 1) "1692632094485-0" + 2) 1) "rider" + 2) "Norem" + 3) "speed" + 4) "28.8" + 5) "position" + 6) "3" + 7) "location_id" + 8) "1" +3) 1) "1692632102976-0" + 2) 1) "rider" + 2) "Prickett" + 3) "speed" + 4) "29.7" + 5) "position" + 6) "2" + 7) "location_id" + 8) "1" +4) 1) "1692632147973-0" + 2) 1) "rider" + 2) "Castilla" + 3) "speed" + 4) "29.9" + 5) "position" + 6) "1" + 7) "location_id" + 8) "2" +{{< /clients-example >}} + +Each entry returned is an array of two items: the ID and the list of field-value pairs. We already said that the entry IDs have a relation with the time, because the part at the left of the `-` character is the Unix time in milliseconds of the local node that created the stream entry, at the moment the entry was created (however note that streams are replicated with fully specified `XADD` commands, so the replicas will have identical IDs to the master). This means that I could query a range of time using `XRANGE`. In order to do so, however, I may want to omit the sequence part of the ID: if omitted, in the start of the range it will be assumed to be 0, while in the end part it will be assumed to be the maximum sequence number available. This way, querying using just two milliseconds Unix times, we get all the entries that were generated in that range of time, in an inclusive way. For instance, if I want to query a two milliseconds period I could use: + +{{< clients-example stream_tutorial xrange_time >}} +> XRANGE race:france 1692632086369 1692632086371 +1) 1) "1692632086370-0" + 2) 1) "rider" + 2) "Castilla" + 3) "speed" + 4) "30.2" + 5) "position" + 6) "1" + 7) "location_id" + 8) "1" +{{< /clients-example >}} + +I have only a single entry in this range. 
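+
+As a rough sketch of the same time-range query from a client library, again assuming the Python `redis-py` client (the timestamps below are just the example IDs from this page; yours will differ):
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)
+
+# Plain millisecond times work as range bounds: the omitted sequence part
+# defaults to 0 for the start and to the maximum for the end.
+for entry_id, fields in r.xrange("race:france", 1692632086369, 1692632086371):
+    print(entry_id, fields)
+```
+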
However in real data sets, I could query for ranges of hours, or there could be many items in just two milliseconds, and the result returned could be huge. For this reason, `XRANGE` supports an optional **COUNT** option at the end. By specifying a count, I can just get the first *N* items. If I want more, I can get the last ID returned, increment the sequence part by one, and query again. Let's see this in the following example. Let's assume that the stream `race:france` was populated with 4 items. To start my iteration, getting 2 items per command, I start with the full range, but with a count of 2. + +{{< clients-example stream_tutorial xrange_step_1 >}} +> XRANGE race:france - + COUNT 2 +1) 1) "1692632086370-0" + 2) 1) "rider" + 2) "Castilla" + 3) "speed" + 4) "30.2" + 5) "position" + 6) "1" + 7) "location_id" + 8) "1" +2) 1) "1692632094485-0" + 2) 1) "rider" + 2) "Norem" + 3) "speed" + 4) "28.8" + 5) "position" + 6) "3" + 7) "location_id" + 8) "1" +{{< /clients-example >}} + +To continue the iteration with the next two items, I have to pick the last ID returned, that is `1692632094485-0`, and add the prefix `(` to it. The resulting exclusive range interval, that is `(1692632094485-0` in this case, can now be used as the new *start* argument for the next `XRANGE` call: + +{{< clients-example stream_tutorial xrange_step_2 >}} +> XRANGE race:france (1692632094485-0 + COUNT 2 +1) 1) "1692632102976-0" + 2) 1) "rider" + 2) "Prickett" + 3) "speed" + 4) "29.7" + 5) "position" + 6) "2" + 7) "location_id" + 8) "1" +2) 1) "1692632147973-0" + 2) 1) "rider" + 2) "Castilla" + 3) "speed" + 4) "29.9" + 5) "position" + 6) "1" + 7) "location_id" + 8) "2" +{{< /clients-example >}} + +Now that we've retrieved 4 items out of a stream that only had 4 entries in it, if we try to retrieve more items, we'll get an empty array: + +{{< clients-example stream_tutorial xrange_empty >}} +> XRANGE race:france (1692632147973-0 + COUNT 2 +(empty array) +{{< /clients-example >}} + +Since `XRANGE` complexity is *O(log(N))* to seek, and then *O(M)* to return M elements, with a small count the command has a logarithmic time complexity, which means that each step of the iteration is fast. So `XRANGE` is also the de facto *streams iterator* and does not require an **XSCAN** command. + +The command `XREVRANGE` is the equivalent of `XRANGE` but returning the elements in inverted order, so a practical use for `XREVRANGE` is to check what is the last item in a Stream: + +{{< clients-example stream_tutorial xrevrange >}} +> XREVRANGE race:france + - COUNT 1 +1) 1) "1692632147973-0" + 2) 1) "rider" + 2) "Castilla" + 3) "speed" + 4) "29.9" + 5) "position" + 6) "1" + 7) "location_id" + 8) "2" +{{< /clients-example >}} + +Note that the `XREVRANGE` command takes the *start* and *stop* arguments in reverse order. + +## Listening for new items with XREAD + +When we do not want to access items by a range in a stream, usually what we want instead is to *subscribe* to new items arriving to the stream. This concept may appear related to Redis Pub/Sub, where you subscribe to a channel, or to Redis blocking lists, where you wait for a key to get new elements to fetch, but there are fundamental differences in the way you consume a stream: + +1. A stream can have multiple clients (consumers) waiting for data. Every new item, by default, will be delivered to *every consumer* that is waiting for data in a given stream. This behavior is different than blocking lists, where each consumer will get a different element. 
However, the ability to *fan out* to multiple consumers is similar to Pub/Sub.
+2. While in Pub/Sub messages are *fire and forget* and are never stored, and while with blocking lists a message received by a client is *popped* (effectively removed) from the list, streams work in a fundamentally different way. All the messages are appended to the stream indefinitely (unless the user explicitly asks to delete entries): different consumers will know what is a new message from their point of view by remembering the ID of the last message received.
+3. Streams Consumer Groups provide a level of control that Pub/Sub or blocking lists cannot achieve, with different groups for the same stream, explicit acknowledgment of processed items, the ability to inspect the pending items, claiming of unprocessed messages, and coherent history visibility for each single client, which is only able to see its private past history of messages.
+
+The command that provides the ability to listen for new messages arriving into a stream is called `XREAD`. It's a bit more complex than `XRANGE`, so we'll start by showing simple forms, and later the whole command layout will be provided.
+
+{{< clients-example stream_tutorial xread >}}
+> XREAD COUNT 2 STREAMS race:france 0
+1) 1) "race:france"
+   2) 1) 1) "1692632086370-0"
+         2) 1) "rider"
+            2) "Castilla"
+            3) "speed"
+            4) "30.2"
+            5) "position"
+            6) "1"
+            7) "location_id"
+            8) "1"
+      2) 1) "1692632094485-0"
+         2) 1) "rider"
+            2) "Norem"
+            3) "speed"
+            4) "28.8"
+            5) "position"
+            6) "3"
+            7) "location_id"
+            8) "1"
+{{< /clients-example >}}
+
+The above is the non-blocking form of `XREAD`. Note that the **COUNT** option is not mandatory; in fact, the only mandatory option of the command is the **STREAMS** option, which specifies a list of keys together with the corresponding maximum ID already seen for each stream by the calling consumer, so that the command will provide the client only with messages with an ID greater than the one we specified.
+
+In the above command we wrote `STREAMS race:france 0`, so we want all the messages in the stream `race:france` having an ID greater than `0-0`. As you can see in the example above, the command returns the key name, because it is possible to call this command with more than one key to read from different streams at the same time. I could write, for instance: `STREAMS race:france race:italy 0 0`. Note how after the **STREAMS** option we need to provide the key names, and later the IDs. For this reason, the **STREAMS** option must always be the last option.
+Any other options must come before the **STREAMS** option.
+
+Apart from the fact that `XREAD` can access multiple streams at once, and that we are able to specify the last ID we own to just get newer messages, in this simple form the command is not doing something so different compared to `XRANGE`. However, the interesting part is that we can turn `XREAD` into a *blocking command* easily, by specifying the **BLOCK** argument:
+
+```
+> XREAD BLOCK 0 STREAMS race:france $
+```
+
+Note that in the example above, other than removing **COUNT**, I specified the new **BLOCK** option with a timeout of 0 milliseconds (that means to never timeout). Moreover, instead of passing a normal ID for the stream `race:france`, I passed the special ID `$`. This special ID means that `XREAD` should use as last ID the maximum ID already stored in the stream `race:france`, so that we will receive only *new* messages, starting from the time we started listening.
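+
+To make the blocking loop concrete, here is a sketch of a simple consumer, assuming the Python `redis-py` client; the 5-second timeout is an arbitrary choice for this example:
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)
+
+# Start from `$` (only new entries), then resume from the last ID we've seen.
+last_id = "$"
+while True:
+    # BLOCK 0 would wait forever; here we wait up to 5 seconds per call.
+    reply = r.xread({"race:france": last_id}, count=10, block=5000)
+    if not reply:
+        continue  # timed out with no new entries; poll again
+    for stream_name, entries in reply:
+        for entry_id, fields in entries:
+            print(stream_name, entry_id, fields)
+            last_id = entry_id
+```
+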
This is similar to the `tail -f` Unix command in some way. + +Note that when the **BLOCK** option is used, we do not have to use the special ID `$`. We can use any valid ID. If the command is able to serve our request immediately without blocking, it will do so, otherwise it will block. Normally if we want to consume the stream starting from new entries, we start with the ID `$`, and after that we continue using the ID of the last message received to make the next call, and so forth. + +The blocking form of `XREAD` is also able to listen to multiple Streams, just by specifying multiple key names. If the request can be served synchronously because there is at least one stream with elements greater than the corresponding ID we specified, it returns with the results. Otherwise, the command will block and will return the items of the first stream which gets new data (according to the specified ID). + +Similarly to blocking list operations, blocking stream reads are *fair* from the point of view of clients waiting for data, since the semantics is FIFO style. The first client that blocked for a given stream will be the first to be unblocked when new items are available. + +`XREAD` has no other options than **COUNT** and **BLOCK**, so it's a pretty basic command with a specific purpose to attach consumers to one or multiple streams. More powerful features to consume streams are available using the consumer groups API, however reading via consumer groups is implemented by a different command called `XREADGROUP`, covered in the next section of this guide. + +## Consumer groups + +When the task at hand is to consume the same stream from different clients, then `XREAD` already offers a way to *fan-out* to N clients, potentially also using replicas in order to provide more read scalability. However in certain problems what we want to do is not to provide the same stream of messages to many clients, but to provide a *different subset* of messages from the same stream to many clients. An obvious case where this is useful is that of messages which are slow to process: the ability to have N different workers that will receive different parts of the stream allows us to scale message processing, by routing different messages to different workers that are ready to do more work. + +In practical terms, if we imagine having three consumers C1, C2, C3, and a stream that contains the messages 1, 2, 3, 4, 5, 6, 7 then what we want is to serve the messages according to the following diagram: + +``` +1 -> C1 +2 -> C2 +3 -> C3 +4 -> C1 +5 -> C2 +6 -> C3 +7 -> C1 +``` + +In order to achieve this, Redis uses a concept called *consumer groups*. It is very important to understand that Redis consumer groups have nothing to do, from an implementation standpoint, with Kafka (TM) consumer groups. Yet they are similar in functionality, so I decided to keep Kafka's (TM) terminology, as it originally popularized this idea. + +A consumer group is like a *pseudo consumer* that gets data from a stream, and actually serves multiple consumers, providing certain guarantees: + +1. Each message is served to a different consumer so that it is not possible that the same message will be delivered to multiple consumers. +2. Consumers are identified, within a consumer group, by a name, which is a case-sensitive string that the clients implementing consumers must choose. This means that even after a disconnect, the stream consumer group retains all the state, since the client will claim again to be the same consumer. 
However, this also means that it is up to the client to provide a unique identifier. +3. Each consumer group has the concept of the *first ID never consumed* so that, when a consumer asks for new messages, it can provide just messages that were not previously delivered. +4. Consuming a message, however, requires an explicit acknowledgment using a specific command. Redis interprets the acknowledgment as: this message was correctly processed so it can be evicted from the consumer group. +5. A consumer group tracks all the messages that are currently pending, that is, messages that were delivered to some consumer of the consumer group, but are yet to be acknowledged as processed. Thanks to this feature, when accessing the message history of a stream, each consumer *will only see messages that were delivered to it*. + +In a way, a consumer group can be imagined as some *amount of state* about a stream: + +``` ++----------------------------------------+ +| consumer_group_name: mygroup | +| consumer_group_stream: somekey | +| last_delivered_id: 1292309234234-92 | +| | +| consumers: | +| "consumer-1" with pending messages | +| 1292309234234-4 | +| 1292309234232-8 | +| "consumer-42" with pending messages | +| ... (and so forth) | ++----------------------------------------+ +``` + +If you see this from this point of view, it is very simple to understand what a consumer group can do, how it is able to just provide consumers with their history of pending messages, and how consumers asking for new messages will just be served with message IDs greater than `last_delivered_id`. At the same time, if you look at the consumer group as an auxiliary data structure for Redis streams, it is obvious that a single stream can have multiple consumer groups, that have a different set of consumers. Actually, it is even possible for the same stream to have clients reading without consumer groups via `XREAD`, and clients reading via `XREADGROUP` in different consumer groups. + +Now it's time to zoom in to see the fundamental consumer group commands. They are the following: + +* `XGROUP` is used in order to create, destroy and manage consumer groups. +* `XREADGROUP` is used to read from a stream via a consumer group. +* `XACK` is the command that allows a consumer to mark a pending message as correctly processed. + +## Creating a consumer group + +Assuming I have a key `race:france` of type stream already existing, in order to create a consumer group I just need to do the following: + +{{< clients-example stream_tutorial xgroup_create >}} +> XGROUP CREATE race:france france_riders $ +OK +{{< /clients-example >}} + +As you can see in the command above when creating the consumer group we have to specify an ID, which in the example is just `$`. This is needed because the consumer group, among the other states, must have an idea about what message to serve next at the first consumer connecting, that is, what was the *last message ID* when the group was just created. If we provide `$` as we did, then only new messages arriving in the stream from now on will be provided to the consumers in the group. If we specify `0` instead the consumer group will consume *all* the messages in the stream history to start with. Of course, you can specify any other valid ID. What you know is that the consumer group will start delivering messages that are greater than the ID you specify. Because `$` means the current greatest ID in the stream, specifying `$` will have the effect of consuming only new messages. 
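+
+As a quick sketch of group creation from a client, assuming the Python `redis-py` client; the `try`/`except` is there because creating a group that already exists fails with a `BUSYGROUP` error:
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)
+
+# Create the group starting at `$` (new messages only); pass id="0" instead
+# to let the group consume the entire existing history of the stream.
+try:
+    r.xgroup_create("race:france", "france_riders", id="$")
+except redis.ResponseError as err:
+    print(f"group not created: {err}")
+```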
+
+`XGROUP CREATE` also supports creating the stream automatically, if it doesn't exist, using the optional `MKSTREAM` subcommand as the last argument:
+
+{{< clients-example stream_tutorial xgroup_create_mkstream >}}
+> XGROUP CREATE race:italy italy_riders $ MKSTREAM
+OK
+{{< /clients-example >}}
+
+Now that the consumer group is created we can immediately try to read messages via the consumer group using the `XREADGROUP` command. We'll read from consumers, which we will call Alice and Bob, to see how the system will return different messages to Alice and Bob.
+
+`XREADGROUP` is very similar to `XREAD` and provides the same **BLOCK** option; otherwise it is a synchronous command. However, there is a *mandatory* option that must always be specified, which is **GROUP**, and it has two arguments: the name of the consumer group, and the name of the consumer that is attempting to read. The option **COUNT** is also supported and is identical to the one in `XREAD`.
+
+We'll add riders to the `race:italy` stream and try reading something using the consumer group.
+Note: *here `rider` is the field name, and the racer's name is the associated value. Remember that stream items are small dictionaries.*
+
+{{< clients-example stream_tutorial xgroup_read >}}
+> XADD race:italy * rider Castilla
+"1692632639151-0"
+> XADD race:italy * rider Royce
+"1692632647899-0"
+> XADD race:italy * rider Sam-Bodden
+"1692632662819-0"
+> XADD race:italy * rider Prickett
+"1692632670501-0"
+> XADD race:italy * rider Norem
+"1692632678249-0"
+> XREADGROUP GROUP italy_riders Alice COUNT 1 STREAMS race:italy >
+1) 1) "race:italy"
+   2) 1) 1) "1692632639151-0"
+         2) 1) "rider"
+            2) "Castilla"
+{{< /clients-example >}}
+
+`XREADGROUP` replies are just like `XREAD` replies. Note however the `GROUP <group-name> <consumer-name>` provided above. It states that I want to read from the stream using the consumer group `italy_riders` and that I'm the consumer `Alice`. Every time a consumer performs an operation with a consumer group, it must specify its name, uniquely identifying this consumer inside the group.
+
+There is another very important detail in the command line above: after the mandatory **STREAMS** option, the ID requested for the key `race:italy` is the special ID `>`. This special ID is only valid in the context of consumer groups, and it means: **messages never delivered to other consumers so far**.
+
+This is almost always what you want. However, it is also possible to specify a real ID, such as `0` or any other valid ID; in this case, however, we are asking `XREADGROUP` to just provide us with the **history of pending messages**, and we will never see new messages in the group. So basically `XREADGROUP` has the following behavior based on the ID we specify, as the sketch after this list illustrates:
+
+* If the ID is the special ID `>` then the command will return only new messages never delivered to other consumers so far, and as a side effect, will update the consumer group's *last ID*.
+* If the ID is any other valid numerical ID, then the command will let us access our *history of pending messages*. That is, the set of messages that were delivered to this specified consumer (identified by the provided name), and never acknowledged so far with `XACK`.
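+
+Here is a small sketch of the two modes, assuming the Python `redis-py` client and the stream and group created above:
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)
+
+# `>` asks for messages never delivered to any consumer of this group...
+new_msgs = r.xreadgroup("italy_riders", "Alice", {"race:italy": ">"}, count=1)
+
+# ...while a concrete ID (here 0) replays Alice's own pending history instead.
+history = r.xreadgroup("italy_riders", "Alice", {"race:italy": "0"})
+print(new_msgs)
+print(history)
+```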
+
+We can test this behavior immediately by specifying an ID of 0, without any **COUNT** option: we'll just see the only pending message, that is, the one about Castilla:
+
+{{< clients-example stream_tutorial xgroup_read_id >}}
+> XREADGROUP GROUP italy_riders Alice STREAMS race:italy 0
+1) 1) "race:italy"
+   2) 1) 1) "1692632639151-0"
+         2) 1) "rider"
+            2) "Castilla"
+{{< /clients-example >}}
+
+However, if we acknowledge the message as processed, it will no longer be part of the pending messages history, so the system will no longer report anything:
+
+{{< clients-example stream_tutorial xack >}}
+> XACK race:italy italy_riders 1692632639151-0
+(integer) 1
+> XREADGROUP GROUP italy_riders Alice STREAMS race:italy 0
+1) 1) "race:italy"
+   2) (empty array)
+{{< /clients-example >}}
+
+Don't worry if you don't yet know how `XACK` works; the idea is simply that processed messages are no longer part of the history that we can access.
+
+Now it's Bob's turn to read something:
+
+{{< clients-example stream_tutorial xgroup_read_bob >}}
+> XREADGROUP GROUP italy_riders Bob COUNT 2 STREAMS race:italy >
+1) 1) "race:italy"
+   2) 1) 1) "1692632647899-0"
+         2) 1) "rider"
+            2) "Royce"
+      2) 1) "1692632662819-0"
+         2) 1) "rider"
+            2) "Sam-Bodden"
+{{< /clients-example >}}
+
+Bob asked for a maximum of two messages and is reading via the same group `italy_riders`. So what happens is that Redis reports just *new* messages. As you can see the "Castilla" message is not delivered, since it was already delivered to Alice, so Bob gets Royce and Sam-Bodden and so forth.
+
+This way Alice, Bob, and any other consumer in the group, are able to read different messages from the same stream, to read their history of yet-to-be-processed messages, or to mark messages as processed. This allows creating different topologies and semantics for consuming messages from a stream.
+
+There are a few things to keep in mind:
+
+* Consumers are auto-created the first time they are mentioned, no need for explicit creation.
+* Even with `XREADGROUP` you can read from multiple keys at the same time, however for this to work, you need to create a consumer group with the same name in every stream. This is not a common need, but it is worth mentioning that the feature is technically available.
+* `XREADGROUP` is a *write command* because even if it reads from the stream, the consumer group is modified as a side effect of reading, so it can only be called on master instances.
+
+An example of a consumer implementation, using consumer groups, written in the Ruby language could be the following. The Ruby code aims to be readable by virtually any experienced programmer, even if they do not know Ruby:
+
+```ruby
+require 'redis'
+
+if ARGV.length == 0
+    puts "Please specify a consumer name"
+    exit 1
+end
+
+ConsumerName = ARGV[0]
+GroupName = "mygroup"
+r = Redis.new
+
+def process_message(id,msg)
+    puts "[#{ConsumerName}] #{id} = #{msg.inspect}"
+end
+
+$lastid = '0-0'
+
+puts "Consumer #{ConsumerName} starting..."
+check_backlog = true
+while true
+    # Pick the ID based on the iteration: the first time we want to
+    # read our pending messages, in case we crashed and are recovering.
+    # Once we consumed our history, we can start getting new messages.
+    if check_backlog
+        myid = $lastid
+    else
+        myid = '>'
+    end
+
+    items = r.xreadgroup('GROUP',GroupName,ConsumerName,'BLOCK','2000','COUNT','10','STREAMS',:my_stream_key,myid)
+
+    if items == nil
+        puts "Timeout!"
+        next
+    end
+
+    # If we receive an empty reply, it means we were consuming our history
+    # and that the history is now empty. Let's start to consume new messages.
+    check_backlog = false if items[0][1].length == 0
+
+    items[0][1].each{|i|
+        id,fields = i
+
+        # Process the message
+        process_message(id,fields)
+
+        # Acknowledge the message as processed
+        r.xack(:my_stream_key,GroupName,id)
+
+        $lastid = id
+    }
+end
+```
+
+As you can see the idea here is to start by consuming the history, that is, our list of pending messages. This is useful because the consumer may have crashed before, so in the event of a restart we want to re-read messages that were delivered to us without getting acknowledged. Note that a message might be processed more than once this way: delivery is at-least-once in the case of consumer failures (and there are also the limits of Redis persistence and replication involved; see the specific section about this topic).
+
+Once the history has been consumed, and we get an empty list of messages, we can switch to using the `>` special ID in order to consume new messages.
+
+## Recovering from permanent failures
+
+The example above allows us to write consumers that participate in the same consumer group, each taking a subset of messages to process, and when recovering from failures re-reading the pending messages that were delivered just to them. However in the real world consumers may permanently fail and never recover. What happens to the pending messages of the consumer that never recovers after stopping for any reason?
+
+Redis consumer groups offer a feature that is used in these situations in order to *claim* the pending messages of a given consumer so that such messages will change ownership and will be re-assigned to a different consumer. The feature is very explicit. A consumer has to inspect the list of pending messages, and will have to claim specific messages using a special command, otherwise the server will leave the messages pending forever, assigned to the old consumer. In this way different applications can choose whether to use such a feature, and exactly how to use it.
+
+The first step of this process is just a command that provides observability of pending entries in the consumer group and is called `XPENDING`.
+This is a read-only command which is always safe to call and will not change ownership of any message.
+In its simplest form, the command is called with two arguments, which are the name of the stream and the name of the consumer group.
+
+{{< clients-example stream_tutorial xpending >}}
+> XPENDING race:italy italy_riders
+1) (integer) 2
+2) "1692632647899-0"
+3) "1692632662819-0"
+4) 1) 1) "Bob"
+      2) "2"
+{{< /clients-example >}}
+
+When called in this way, the command outputs the total number of pending messages in the consumer group (two in this case), the lowest and highest message IDs among the pending messages, and finally a list of consumers and the number of pending messages they have.
+We have only Bob with two pending messages because the single message that Alice requested was acknowledged using `XACK`.
+
+We can ask for more information by giving more arguments to `XPENDING`, because the full command signature is the following:
+
+```
+XPENDING <key> <groupname> [[IDLE <min-idle-time>] <start-id> <end-id> <count> [<consumer-name>]]
+```
+
+By providing a start and end ID (that can be just `-` and `+` as in `XRANGE`) and a count to control the amount of information returned by the command, we are able to know more about the pending messages.
The optional final argument, the consumer name, is used if we want to limit the output to just messages pending for a given consumer, but we won't use this feature in the following example.
+
+{{< clients-example stream_tutorial xpending_plus_minus >}}
+> XPENDING race:italy italy_riders - + 10
+1) 1) "1692632647899-0"
+   2) "Bob"
+   3) (integer) 74642
+   4) (integer) 1
+2) 1) "1692632662819-0"
+   2) "Bob"
+   3) (integer) 74642
+   4) (integer) 1
+{{< /clients-example >}}
+
+Now we have the details for each message: the ID, the consumer name, the *idle time* in milliseconds, which is how many milliseconds have passed since the last time the message was delivered to some consumer, and finally the number of times that a given message was delivered.
+We have two messages from Bob, and they have been idle for 60000+ milliseconds, about a minute.
+
+Note that nobody prevents us from checking what the first message content was by just using `XRANGE`.
+
+{{< clients-example stream_tutorial xrange_pending >}}
+> XRANGE race:italy 1692632647899-0 1692632647899-0
+1) 1) "1692632647899-0"
+   2) 1) "rider"
+      2) "Royce"
+{{< /clients-example >}}
+
+We just have to repeat the same ID twice in the arguments. Now that we have some ideas, Alice may decide that after 1 minute of not processing messages, Bob will probably not recover quickly, and it's time to *claim* such messages and resume the processing in place of Bob. To do so, we use the `XCLAIM` command.
+
+This command is very complex and full of options in its full form, since it is used for replication of consumer group changes, but we'll use just the arguments that we need normally. In this case it is as simple as:
+
+```
+XCLAIM <key> <group> <consumer> <min-idle-time> <ID-1> <ID-2> ... <ID-N>
+```
+
+Basically we say, for this specific key and group, I want the specified message IDs to change ownership, and to be assigned to the specified consumer name `<consumer>`. However, we also provide a minimum idle time, so that the operation will only work if the idle time of the mentioned messages is greater than the specified idle time. This is useful because maybe two clients are retrying to claim a message at the same time:
+
+```
+Client 1: XCLAIM race:italy italy_riders Alice 60000 1692632647899-0
+Client 2: XCLAIM race:italy italy_riders Lora 60000 1692632647899-0
+```
+
+However, as a side effect, claiming a message will reset its idle time and will increment its number of deliveries counter, so the second client will fail claiming it. In this way we avoid trivial re-processing of messages (even if in the general case you cannot obtain exactly-once processing).
+
+This is the result of the command execution:
+
+{{< clients-example stream_tutorial xclaim >}}
+> XCLAIM race:italy italy_riders Alice 60000 1692632647899-0
+1) 1) "1692632647899-0"
+   2) 1) "rider"
+      2) "Royce"
+{{< /clients-example >}}
+
+The message was successfully claimed by Alice, who can now process the message and acknowledge it, and move things forward even if the original consumer is not recovering.
+
+It is clear from the example above that as a side effect of successfully claiming a given message, the `XCLAIM` command also returns it. However this is not mandatory. The **JUSTID** option can be used in order to return just the IDs of the messages successfully claimed. This is useful if you want to reduce the bandwidth used between the client and the server (and also to improve the performance of the command), and you are not interested in the message because your consumer is implemented in a way that it will rescan the history of pending messages from time to time.
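+
+A recovery pass combining the two commands might look roughly like this, assuming the Python `redis-py` client (the 60000 ms threshold mirrors the example above; the key names in the returned dictionaries are how `redis-py` labels the XPENDING fields):
+
+```python
+import redis
+
+r = redis.Redis(decode_responses=True)
+
+# Inspect pending entries for the group, then claim the ones idle too long.
+pending = r.xpending_range("race:italy", "italy_riders",
+                           min="-", max="+", count=10)
+stale_ids = [p["message_id"] for p in pending
+             if p["time_since_delivered"] >= 60000]
+
+if stale_ids:
+    # min_idle_time repeats the check server-side, so a concurrent claimer
+    # that already reset the idle time makes this call a no-op.
+    claimed = r.xclaim("race:italy", "italy_riders", "Alice",
+                       min_idle_time=60000, message_ids=stale_ids)
+    print(claimed)
+```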
+
+Claiming may also be implemented by a separate process: one that just checks the list of pending messages, and assigns idle messages to consumers that appear to be active. Active consumers can be obtained using one of the observability features of Redis streams. This is the topic of the next section.
+
+## Automatic claiming
+
+The `XAUTOCLAIM` command, added in Redis 6.2, implements the claiming process that we've described above.
+`XPENDING` and `XCLAIM` provide the basic building blocks for different types of recovery mechanisms.
+This command optimizes the generic process by having Redis manage it and offers a simple solution for most recovery needs.
+
+`XAUTOCLAIM` identifies idle pending messages and transfers ownership of them to a consumer.
+The command's signature looks like this:
+
+```
+XAUTOCLAIM <key> <group> <consumer> <min-idle-time> <start> [COUNT count] [JUSTID]
+```
+
+So, in the example above, I could have used automatic claiming to claim a single message like this:
+
+{{< clients-example stream_tutorial xautoclaim >}}
+> XAUTOCLAIM race:italy italy_riders Alice 60000 0-0 COUNT 1
+1) "0-0"
+2) 1) 1) "1692632662819-0"
+      2) 1) "rider"
+         2) "Sam-Bodden"
+{{< /clients-example >}}
+
+Like `XCLAIM`, the command replies with an array of the claimed messages, but it also returns a stream ID that allows iterating the pending entries.
+The stream ID is a cursor, and I can use it in my next call to continue claiming idle pending messages:
+
+{{< clients-example stream_tutorial xautoclaim_cursor >}}
+> XAUTOCLAIM race:italy italy_riders Lora 60000 (1692632662819-0 COUNT 1
+1) "1692632662819-0"
+2) 1) 1) "1692632647899-0"
+      2) 1) "rider"
+         2) "Royce"
+{{< /clients-example >}}
+
+When `XAUTOCLAIM` returns the "0-0" stream ID as a cursor, that means that it reached the end of the consumer group pending entries list.
+That doesn't mean that there are no new idle pending messages, so the process continues by calling `XAUTOCLAIM` from the beginning of the stream.
+
+## Claiming and the delivery counter
+
+The counter that you observe in the `XPENDING` output is the number of deliveries of each message. The counter is incremented in two ways: when a message is successfully claimed via `XCLAIM` or when an `XREADGROUP` call is used in order to access the history of pending messages.
+
+When there are failures, it is normal that messages will be delivered multiple times, but eventually they usually get processed and acknowledged. However there might be a problem processing some specific message, because it is corrupted or crafted in a way that triggers a bug in the processing code. In such a case what happens is that consumers will continuously fail to process this particular message. Because we have the counter of the delivery attempts, we can use that counter to detect messages that for some reason are not processable. So once the delivery counter reaches a given large number that you chose, it is probably wiser to put such messages in another stream and send a notification to the system administrator. This is basically the way that Redis Streams implements the *dead letter* concept.
+
+## Streams observability
+
+Messaging systems that lack observability are very hard to work with. Not knowing who is consuming messages, what messages are pending, or the set of consumer groups active in a given stream makes everything opaque. For this reason, Redis Streams and consumer groups have different ways to observe what is happening.
We already covered `XPENDING`, which allows us to inspect the list of messages that are under processing at a given moment, together with their idle time and number of deliveries.
+
+However we may want to do more than that, and the `XINFO` command is an observability interface that can be used with sub-commands in order to get information about streams or consumer groups. For instance, **XINFO STREAM <key>** reports information about the stream itself.
+
+{{< clients-example stream_tutorial xinfo >}}
+> XINFO STREAM race:italy
+ 1) "length"
+ 2) (integer) 5
+ 3) "radix-tree-keys"
+ 4) (integer) 1
+ 5) "radix-tree-nodes"
+ 6) (integer) 2
+ 7) "last-generated-id"
+ 8) "1692632678249-0"
+ 9) "groups"
+10) (integer) 1
+11) "first-entry"
+12) 1) "1692632639151-0"
+    2) 1) "rider"
+       2) "Castilla"
+13) "last-entry"
+14) 1) "1692632678249-0"
+    2) 1) "rider"
+       2) "Norem"
+{{< /clients-example >}}
+
+The output shows information about how the stream is encoded internally, and also shows the first and last message in the stream. Another piece of information available is the number of consumer groups associated with this stream. We can dig further asking for more information about the consumer groups.
+
+{{< clients-example stream_tutorial xinfo_groups >}}
+> XINFO GROUPS race:italy
+1) 1) "name"
+   2) "italy_riders"
+   3) "consumers"
+   4) (integer) 3
+   5) "pending"
+   6) (integer) 2
+   7) "last-delivered-id"
+   8) "1692632662819-0"
+{{< /clients-example >}}
+
+As you can see in this and in the previous output, the `XINFO` command outputs a sequence of field-value items. Because it is an observability command this allows the human user to immediately understand what information is reported, and allows the command to report more information in the future by adding more fields without breaking compatibility with older clients. Other commands that must be more bandwidth efficient, like `XPENDING`, just report the information without the field names.
+
+The output of the example above, where the **GROUPS** subcommand is used, should be clear from the field names. We can check in more detail the state of a specific consumer group by checking the consumers that are registered in the group.
+
+{{< clients-example stream_tutorial xinfo_consumers >}}
+> XINFO CONSUMERS race:italy italy_riders
+1) 1) "name"
+   2) "Alice"
+   3) "pending"
+   4) (integer) 1
+   5) "idle"
+   6) (integer) 177546
+2) 1) "name"
+   2) "Bob"
+   3) "pending"
+   4) (integer) 0
+   5) "idle"
+   6) (integer) 424686
+3) 1) "name"
+   2) "Lora"
+   3) "pending"
+   4) (integer) 1
+   5) "idle"
+   6) (integer) 72241
+{{< /clients-example >}}
+
+In case you do not remember the syntax of the command, just ask the command itself for help:
+
+```
+> XINFO HELP
+1) XINFO <subcommand> [<arg> [value] [opt] ...]. Subcommands are:
+2) CONSUMERS <key> <groupname>
+3)     Show consumers of <groupname>.
+4) GROUPS <key>
+5)     Show the stream consumer groups.
+6) STREAM <key> [FULL [COUNT <count>]]
+7)     Show information about the stream.
+8) HELP
+9)     Prints this help.
+```
+
+## Differences with Kafka (TM) partitions
+
+Consumer groups in Redis streams may resemble in some way Kafka (TM) partitioning-based consumer groups; however, note that Redis streams are, in practical terms, very different. The partitions are only *logical* and the messages are just put into a single Redis key, so the way the different clients are served is based on who is ready to process new messages, and not on which partition clients are reading from.
For instance, if the consumer C3 at some point fails permanently, Redis will continue to serve C1 and C2 all the new messages arriving, as if now there are only two *logical* partitions. + +Similarly, if a given consumer is much faster at processing messages than the other consumers, this consumer will receive proportionally more messages in the same unit of time. This is possible since Redis tracks all the unacknowledged messages explicitly, and remembers who received which message and the ID of the first message never delivered to any consumer. + +However, this also means that in Redis if you really want to partition messages in the same stream into multiple Redis instances, you have to use multiple keys and some sharding system such as Redis Cluster or some other application-specific sharding system. A single Redis stream is not automatically partitioned to multiple instances. + +We could say that schematically the following is true: + +* If you use 1 stream -> 1 consumer, you are processing messages in order. +* If you use N streams with N consumers, so that only a given consumer hits a subset of the N streams, you can scale the above model of 1 stream -> 1 consumer. +* If you use 1 stream -> N consumers, you are load balancing to N consumers, however in that case, messages about the same logical item may be consumed out of order, because a given consumer may process message 3 faster than another consumer is processing message 4. + +So basically Kafka partitions are more similar to using N different Redis keys, while Redis consumer groups are a server-side load balancing system of messages from a given stream to N different consumers. + +## Capped Streams + +Many applications do not want to collect data into a stream forever. Sometimes it is useful to have at maximum a given number of items inside a stream, other times once a given size is reached, it is useful to move data from Redis to a storage which is not in memory and not as fast but suited to store the history for, potentially, decades to come. Redis streams have some support for this. One is the **MAXLEN** option of the `XADD` command. This option is very simple to use: + +{{< clients-example stream_tutorial maxlen >}} +> XADD race:italy MAXLEN 2 * rider Jones +"1692633189161-0" +> XADD race:italy MAXLEN 2 * rider Wood +"1692633198206-0" +> XADD race:italy MAXLEN 2 * rider Henshaw +"1692633208557-0" +> XLEN race:italy +(integer) 2 +> XRANGE race:italy - + +1) 1) "1692633198206-0" + 2) 1) "rider" + 2) "Wood" +2) 1) "1692633208557-0" + 2) 1) "rider" + 2) "Henshaw" +{{< /clients-example >}} + +Using **MAXLEN** the old entries are automatically evicted when the specified length is reached, so that the stream is left at a constant size. There is currently no option to tell the stream to just retain items that are not older than a given period, because such command, in order to run consistently, would potentially block for a long time in order to evict items. Imagine for example what happens if there is an insertion spike, then a long pause, and another insertion, all with the same maximum time. The stream would block to evict the data that became too old during the pause. So it is up to the user to do some planning and understand what is the maximum stream length desired. 
Moreover, while the length of the stream is proportional to the memory used, trimming by time is less simple to control and anticipate: it depends on the insertion rate which often changes over time (and when it does not change, then to just trim by size is trivial). + +However trimming with **MAXLEN** can be expensive: streams are represented by macro nodes into a radix tree, in order to be very memory efficient. Altering the single macro node, consisting of a few tens of elements, is not optimal. So it's possible to use the command in the following special form: + +``` +XADD race:italy MAXLEN ~ 1000 * ... entry fields here ... +``` + +The `~` argument between the **MAXLEN** option and the actual count means, I don't really need this to be exactly 1000 items. It can be 1000 or 1010 or 1030, just make sure to save at least 1000 items. With this argument, the trimming is performed only when we can remove a whole node. This makes it much more efficient, and it is usually what you want. You'll note here that the client libraries have various implementations of this. For example, the Python client defaults to approximate and has to be explicitly set to a true length. + +There is also the `XTRIM` command, which performs something very similar to what the **MAXLEN** option does above, except that it can be run by itself: + +{{< clients-example stream_tutorial xtrim >}} +> XTRIM race:italy MAXLEN 10 +(integer) 0 +{{< /clients-example >}} + +Or, as for the `XADD` option: + +{{< clients-example stream_tutorial xtrim2 >}} +> XTRIM mystream MAXLEN ~ 10 +(integer) 0 +{{< /clients-example >}} + +However, `XTRIM` is designed to accept different trimming strategies. Another trimming strategy is **MINID**, that evicts entries with IDs lower than the one specified. + +As `XTRIM` is an explicit command, the user is expected to know about the possible shortcomings of different trimming strategies. + +Another useful eviction strategy that may be added to `XTRIM` in the future, is to remove by a range of IDs to ease use of `XRANGE` and `XTRIM` to move data from Redis to other storage systems if needed. + +## Special IDs in the streams API + +You may have noticed that there are several special IDs that can be used in the Redis API. Here is a short recap, so that they can make more sense in the future. + +The first two special IDs are `-` and `+`, and are used in range queries with the `XRANGE` command. Those two IDs respectively mean the smallest ID possible (that is basically `0-1`) and the greatest ID possible (that is `18446744073709551615-18446744073709551615`). As you can see it is a lot cleaner to write `-` and `+` instead of those numbers. + +Then there are APIs where we want to say, the ID of the item with the greatest ID inside the stream. This is what `$` means. So for instance if I want only new entries with `XREADGROUP` I use this ID to signify I already have all the existing entries, but not the new ones that will be inserted in the future. Similarly when I create or set the ID of a consumer group, I can set the last delivered item to `$` in order to just deliver new entries to the consumers in the group. + +As you can see `$` does not mean `+`, they are two different things, as `+` is the greatest ID possible in every possible stream, while `$` is the greatest ID in a given stream containing given entries. Moreover APIs will usually only understand `+` or `$`, yet it was useful to avoid loading a given symbol with multiple meanings. 
+
+Another special ID is `>`, which has a special meaning only in the context of consumer groups, and only when the `XREADGROUP` command is used. This special ID means that we want only entries that were never delivered to other consumers so far. So basically the `>` ID is the *last delivered ID* of a consumer group.
+
+Finally, the special ID `*`, which can be used only with the `XADD` command, means to auto-select an ID for the new entry.
+
+So we have `-`, `+`, `$`, `>` and `*`, and all have a different meaning, and most of the time, can be used in different contexts.
+
+## Persistence, replication and message safety
+
+A stream, like any other Redis data structure, is asynchronously replicated to replicas and persisted into AOF and RDB files. However, what may not be so obvious is that the full state of consumer groups is also propagated to AOF, RDB, and replicas, so if a message is pending in the master, the replica will have the same information. Similarly, after a restart, the AOF will restore the consumer groups' state.
+
+However note that Redis streams and consumer groups are persisted and replicated using the Redis default replication, so:
+
+* AOF must be used with a strong fsync policy if persistence of messages is important in your application.
+* By default the asynchronous replication will not guarantee that `XADD` commands or consumer groups state changes are replicated: after a failover something can be missing depending on the ability of replicas to receive the data from the master.
+* The `WAIT` command may be used in order to force the propagation of the changes to a set of replicas. However note that while this makes it very unlikely that data is lost, the Redis failover process as operated by Sentinel or Redis Cluster performs only a *best effort* check to failover to the replica which is the most updated, and under certain specific failure conditions may promote a replica that lacks some data.
+
+So when designing an application using Redis streams and consumer groups, make sure to understand the semantic properties your application should have during failures, and configure things accordingly, evaluating whether it is safe enough for your use case.
+
+## Removing single items from a stream
+
+Streams also have a special command for removing items from the middle of a stream, just by ID. Normally for an append-only data structure this may look like an odd feature, but it is actually useful for applications involving, for instance, privacy regulations. The command is called `XDEL` and receives the name of the stream followed by the IDs to delete:
+
+{{< clients-example stream_tutorial xdel >}}
+> XRANGE race:italy - + COUNT 2
+1) 1) "1692633198206-0"
+   2) 1) "rider"
+      2) "Wood"
+2) 1) "1692633208557-0"
+   2) 1) "rider"
+      2) "Henshaw"
+> XDEL race:italy 1692633208557-0
+(integer) 1
+> XRANGE race:italy - + COUNT 2
+1) 1) "1692633198206-0"
+   2) 1) "rider"
+      2) "Wood"
+{{< /clients-example >}}
+
+However in the current implementation, memory is not really reclaimed until a macro node is completely empty, so you should not abuse this feature.
+
+## Zero length streams
+
+A difference between streams and other Redis data structures is that when the other data structures no longer have any elements, as a side effect of calling commands that remove elements, the key itself will be removed. So for instance, a sorted set will be completely removed when a call to `ZREM` removes the last element in the sorted set.
Streams, on the other hand, are allowed to stay at zero elements, both as a result of using a **MAXLEN** option with a count of zero (`XADD` and `XTRIM` commands), or because `XDEL` was called. + +The reason why such an asymmetry exists is because Streams may have associated consumer groups, and we do not want to lose the state that the consumer groups defined just because there are no longer any items in the stream. Currently the stream is not deleted even when it has no associated consumer groups. + +## Total latency of consuming a message + +Non blocking stream commands like `XRANGE` and `XREAD` or `XREADGROUP` without the BLOCK option are served synchronously like any other Redis command, so to discuss latency of such commands is meaningless: it is more interesting to check the time complexity of the commands in the Redis documentation. It should be enough to say that stream commands are at least as fast as sorted set commands when extracting ranges, and that `XADD` is very fast and can easily insert from half a million to one million items per second in an average machine if pipelining is used. + +However latency becomes an interesting parameter if we want to understand the delay of processing a message, in the context of blocking consumers in a consumer group, from the moment the message is produced via `XADD`, to the moment the message is obtained by the consumer because `XREADGROUP` returned with the message. + +## How serving blocked consumers works + +Before providing the results of performed tests, it is interesting to understand what model Redis uses in order to route stream messages (and in general actually how any blocking operation waiting for data is managed). + +* The blocked client is referenced in a hash table that maps keys for which there is at least one blocking consumer, to a list of consumers that are waiting for such key. This way, given a key that received data, we can resolve all the clients that are waiting for such data. +* When a write happens, in this case when the `XADD` command is called, it calls the `signalKeyAsReady()` function. This function will put the key into a list of keys that need to be processed, because such keys may have new data for blocked consumers. Note that such *ready keys* will be processed later, so in the course of the same event loop cycle, it is possible that the key will receive other writes. +* Finally, before returning into the event loop, the *ready keys* are finally processed. For each key the list of clients waiting for data is scanned, and if applicable, such clients will receive the new data that arrived. In the case of streams the data is the messages in the applicable range requested by the consumer. + +As you can see, basically, before returning to the event loop both the client calling `XADD` and the clients blocked to consume messages, will have their reply in the output buffers, so the caller of `XADD` should receive the reply from Redis at about the same time the consumers will receive the new messages. + +This model is *push-based*, since adding data to the consumers buffers will be performed directly by the action of calling `XADD`, so the latency tends to be quite predictable. + +## Latency tests results + +In order to check these latency characteristics a test was performed using multiple instances of Ruby programs pushing messages having as an additional field the computer millisecond time, and Ruby programs reading the messages from the consumer group and processing them. 
+
+Results obtained:
+
+```
+Processed between 0 and 1 ms -> 74.11%
+Processed between 1 and 2 ms -> 25.80%
+Processed between 2 and 3 ms -> 0.06%
+Processed between 3 and 4 ms -> 0.01%
+Processed between 4 and 5 ms -> 0.02%
+```
+
+So 99.9% of requests have a latency <= 2 milliseconds, and the remaining outliers are still very close to the average.
+
+Adding a few million unacknowledged messages to the stream does not change the gist of the benchmark, with most queries still processed with very short latency.
+
+A few remarks:
+
+* Here we processed up to 10k messages per iteration; this means that the `COUNT` parameter of `XREADGROUP` was set to 10000. This adds a lot of latency, but it is needed to allow the slow consumers to keep up with the message flow. So you can expect real-world latency to be a lot smaller.
+* The system used for this benchmark is very slow compared to today's standards.
+
+## Learn more
+
+* The [Redis Streams Tutorial](/docs/data-types/streams-tutorial) explains Redis streams with many examples.
+* [Redis Streams Explained](https://www.youtube.com/watch?v=Z8qcpXyMAiA) is an entertaining introduction to streams in Redis.
+* [Redis University's RU202](https://university.redis.com/courses/ru202/) is a free, online course dedicated to Redis Streams.
diff --git a/docs/data-types/strings.md b/docs/data-types/strings.md
new file mode 100644
index 0000000000..52348dcf59
--- /dev/null
+++ b/docs/data-types/strings.md
@@ -0,0 +1,133 @@
+---
+title: "Redis Strings"
+linkTitle: "Strings"
+weight: 10
+description: >
+    Introduction to Redis strings
+---
+
+Redis strings store sequences of bytes, including text, serialized objects, and binary arrays. As such, strings are the simplest type of value you can associate with a Redis key. They're often used for caching, but they support additional functionality that lets you implement counters and perform bitwise operations, too.
+
+Since Redis keys are strings, when we use the string type as a value too, we are mapping a string to another string. The string data type is useful for a number of use cases, like caching HTML fragments or pages.
+
+{{< clients-example set_tutorial set_get >}}
+ > SET bike:1 Deimos
+ OK
+ > GET bike:1
+ "Deimos"
+{{< /clients-example >}}
+
+As you can see, the `SET` and `GET` commands are how we set and retrieve a string value. Note that if the key already exists, `SET` will replace the value already stored in it, even if the key is associated with a non-string value. So `SET` performs an assignment.
+
+Values can be strings (including binary data) of any kind; for instance, you can store a JPEG image inside a value. A value can't be bigger than 512 MB.
+
+The `SET` command has interesting options that are provided as additional arguments. For example, you can ask `SET` to fail if the key already exists, or the opposite, to succeed only if the key already exists:
+
+{{< clients-example set_tutorial setnx_xx >}}
+ > set bike:1 bike nx
+ (nil)
+ > set bike:1 bike xx
+ OK
+{{< /clients-example >}}
+
+There are a number of other commands for operating on strings. For example, the `GETSET` command sets a key to a new value, returning the old value as the result. You can use this command, for example, if you have a system that increments a Redis key using `INCR` every time your web site receives a new visitor. You may want to collect this information once every hour, without losing a single increment. You can `GETSET` the key, assigning it the new value of "0" and reading the old value back.
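+
+A minimal sketch of that reset pattern (the key name and values are illustrative):
+
+```
+> SET total_visits 21
+OK
+> GETSET total_visits 0
+"21"
+> GET total_visits
+"0"
+```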
+
+The ability to set or retrieve the value of multiple keys in a single command is also useful for reduced latency. For this reason, there are the `MSET` and `MGET` commands:
+
+{{< clients-example set_tutorial mset >}}
+ > mset bike:1 "Deimos" bike:2 "Ares" bike:3 "Vanth"
+ OK
+ > mget bike:1 bike:2 bike:3
+ 1) "Deimos"
+ 2) "Ares"
+ 3) "Vanth"
+{{< /clients-example >}}
+
+When `MGET` is used, Redis returns an array of values.
+
+### Strings as counters
+
+Even though strings are the basic values of Redis, there are interesting operations you can perform with them. One of them is atomic increment:
+
+{{< clients-example set_tutorial incr >}}
+ > set total_crashes 0
+ OK
+ > incr total_crashes
+ (integer) 1
+ > incrby total_crashes 10
+ (integer) 11
+{{< /clients-example >}}
+
+The `INCR` command parses the string value as an integer, increments it by one, and finally sets the obtained value as the new value. There are other similar commands like `INCRBY`, `DECR` and `DECRBY`. Internally it's always the same command, acting in a slightly different way.
+
+What does it mean that `INCR` is atomic? It means that even multiple clients issuing `INCR` against the same key will never enter into a race condition. For instance, it can never happen that client 1 reads "10" and client 2 reads "10" at the same time, both increment the value to 11, and set the new value to 11. The final value will always be 12, because the read-increment-set operation is performed while no other client is executing a command at the same time.
+
+## Limits
+
+By default, a single Redis string can be a maximum of 512 MB.
+
+## Basic commands
+
+### Getting and setting Strings
+
+* `SET` stores a string value.
+* `SETNX` stores a string value only if the key doesn't already exist. Useful for implementing locks.
+* `GET` retrieves a string value.
+* `MGET` retrieves multiple string values in a single operation.
+
+### Managing counters
+
+* `INCRBY` atomically increments (and decrements when passing a negative number) counters stored at a given key.
+* Another command exists for floating point counters: `INCRBYFLOAT`.
+
+### Bitwise operations
+
+To perform bitwise operations on a string, see the [bitmaps data type](/docs/data-types/bitmaps) docs.
+
+See the [complete list of string commands](/commands/?group=string).
+
+## Performance
+
+Most string operations are O(1), which means they're highly efficient. However, be careful with the `SUBSTR`, `GETRANGE`, and `SETRANGE` commands, which can be O(n). These random-access string commands may cause performance issues when dealing with large strings.
+
+## Alternatives
+
+If you're storing structured data as a serialized string, you may also want to consider Redis [hashes](/docs/data-types/hashes) or [JSON](/docs/stack/json).
+
+## Learn more
+
+* [Redis Strings Explained](https://www.youtube.com/watch?v=7CUt4yWeRQE) is a short, comprehensive video explainer on Redis strings.
+* [Redis University's RU101](https://university.redis.com/courses/ru101/) covers Redis strings in detail.
diff --git a/docs/get-started/_index.md b/docs/get-started/_index.md
new file mode 100644
index 0000000000..e94be52b99
--- /dev/null
+++ b/docs/get-started/_index.md
@@ -0,0 +1,20 @@
+---
+title: "Quick starts"
+linkTitle: "Quick starts"
+hideListLinks: true
+weight: 20
+description: >
+    Redis quick start guides
+aliases:
+  - /docs/getting-started/
+---
+
+Redis can be used as a database, cache, streaming engine, message broker, and more. The following quick start guides show you how to use Redis for these specific purposes:
+
+1. [Data structure store](/docs/get-started/data-store)
+2. [Document database](/docs/get-started/document-database)
+3. [Vector database](/docs/get-started/vector-database)
+
+Please select the guide that aligns best with your specific usage scenario.
+
+You can find answers to frequently asked questions in the [FAQ](/docs/get-started/faq/).
diff --git a/docs/get-started/data-store.md b/docs/get-started/data-store.md
new file mode 100644
index 0000000000..7c9455a620
--- /dev/null
+++ b/docs/get-started/data-store.md
@@ -0,0 +1,88 @@
+---
+title: "Redis as an in-memory data structure store quick start guide"
+linkTitle: "Data structure store"
+weight: 1
+description: Understand how to use basic Redis data types
+---
+
+This quick start guide shows you how to:
+
+1. Get started with Redis
+2. Store data under a key in Redis
+3. Retrieve data with a key from Redis
+4. Scan the keyspace for keys that match a specific pattern
+
+The examples in this article refer to a simple bicycle inventory.
+
+## Setup
+
+The easiest way to get started with Redis is to use Redis Cloud:
+
+1. Create a [free account](https://redis.com/try-free?utm_source=redisio&utm_medium=referral&utm_campaign=2023-09-try_free&utm_content=cu-redis_cloud_users).
+
+2. Follow the instructions to create a free database.
+
+You can alternatively follow the [installation guides](/docs/install/install-stack/) to install Redis on your local machine.
+
+## Connect
+
+The first step is to connect to Redis. You can find further details about the connection options in this documentation site's [connection section](/docs/connect). The following example shows how to connect to a Redis server that runs on localhost (`-h 127.0.0.1`) and listens on the default port (`-p 6379`):
+
+{{< clients-example search_quickstart connect >}}
+> redis-cli -h 127.0.0.1 -p 6379
+{{< /clients-example>}}
+
+{{% alert title="Tip" color="warning" %}}
+You can copy and paste the connection details from the Redis Cloud database configuration page. Here is an example connection string of a Cloud database that is hosted in the AWS region `us-east-1` and listens on port 16379: `redis-16379.c283.us-east-1-4.ec2.cloud.redislabs.com:16379`. The connection string has the format `host:port`. You must also copy and paste the username and password of your Cloud database and then either pass the credentials to your client or use the [AUTH command](/commands/auth/) after the connection is established.
+{{% /alert %}}
+
+## Store and retrieve data
+
+Redis stands for Remote Dictionary Server. You can use the same data types as in your local programming environment but on the server side within Redis.
+
+Similar to byte arrays, Redis strings store sequences of bytes, including text, serialized objects, counter values, and binary arrays. The following example shows you how to set and get a string value:
+
+{{< clients-example set_and_get >}}
+SET bike:1 "Process 134"
+GET bike:1
+{{< /clients-example >}}
+
+Hashes are the equivalent of dictionaries (dicts or hash maps). Among other things, you can use hashes to represent plain objects and to store groupings of counters. The following example explains how to set and access field values of an object:
+
+{{< clients-example hash_tutorial set_get_all >}}
+> HSET bike:1 model Deimos brand Ergonom type 'Enduro bikes' price 4972
+(integer) 4
+> HGET bike:1 model
+"Deimos"
+> HGET bike:1 price
+"4972"
+> HGETALL bike:1
+1) "model"
+2) "Deimos"
+3) "brand"
+4) "Ergonom"
+5) "type"
+6) "Enduro bikes"
+7) "price"
+8) "4972"
+{{< /clients-example >}}
+
+You can get a complete overview of available data types in this documentation site's [data types section](/docs/data-types/). Each data type has commands allowing you to manipulate or retrieve data. The [commands reference](/commands/) provides a detailed explanation of each command.
+
+## Scan the keyspace
+
+Each item within Redis has a unique key. All items live within the Redis [keyspace](/docs/manual/keyspace/). You can scan the Redis keyspace via the [SCAN command](/commands/scan/). Here is an example that scans for keys with the prefix `bike:`, examining about 100 keys per call (the `COUNT` option is a hint for how much work to do per iteration):
+
+{{< clients-example scan_example >}}
+SCAN 0 MATCH "bike:*" COUNT 100
+{{< /clients-example >}}
+
+[SCAN](/commands/scan/) returns a cursor position, allowing you to scan iteratively for the next batch of keys until you reach the cursor value 0.
+
+## Next steps
+
+You can address more use cases with Redis by learning about Redis Stack. Here are two additional quick start guides:
+
+* [Redis as a document database](/docs/get-started/document-database/)
+* [Redis as a vector database](/docs/get-started/vector-database/)
\ No newline at end of file
diff --git a/docs/get-started/data/bikes.json b/docs/get-started/data/bikes.json
new file mode 100644
index 0000000000..f8186ec93c
--- /dev/null
+++ b/docs/get-started/data/bikes.json
@@ -0,0 +1,127 @@
+[
+  {
+    "model": "Jigger",
+    "brand": "Velorim",
+    "price": 270,
+    "type": "Kids bikes",
+    "specs": {
+      "material": "aluminium",
+      "weight": "10"
+    },
+    "description": "Small and powerful, the Jigger is the best ride for the smallest of tikes! This is the tiniest kids’ pedal bike on the market available without a coaster brake; the Jigger is the vehicle of choice for the rare tenacious little rider raring to go. We say rare because this smokin’ little bike is not ideal for a nervous first-time rider, but it’s a true giddy up for a true speedster. The Jigger is a 12 inch lightweight kids bicycle and it will meet your little one’s need for speed. It’s a single speed bike that makes learning to pump pedals simple and intuitive. It even has a handle in the bottom of the saddle so you can easily help your child during training! The Jigger is among the most lightweight children’s bikes on the planet. It is designed so that 2-3 year-olds fit comfortably in a molded ride position that allows for efficient riding, balanced handling and agility. The Jigger’s frame design and gears work together so your budding biker can stand up out of the seat, stop rapidly, rip over trails and pump tracks. The Jigger is amazing on dirt or pavement. Your tike will speed down the bike path in no time. The Jigger will ship with a coaster brake. A freewheel kit is provided at no cost."
+  },
+  {
+    "model": "Hillcraft",
+    "brand": "Bicyk",
+    "price": 1200,
+    "type": "Kids Mountain Bikes",
+    "specs": {
+      "material": "carbon",
+      "weight": "11"
+    },
+    "description": "Kids want to ride with as little weight as possible. Especially on an incline! They may be at the age when a 27.5\" wheel bike is just too clumsy coming off a 24\" bike. The Hillcraft 26 is just the solution they need! Imagine 120mm travel. Boost front/rear. You have NOTHING to tweak because it is easy to assemble right out of the box. The Hillcraft 26 is an efficient trail trekking machine. Up or down does not matter - dominate the trails going both down and up with this amazing bike. The name Monarch comes from Monarch trail in Colorado where we love to ride. It’s a highly technical, steep and rocky trail but the rip on the way down is so fulfilling. Don’t ride the trail on a hardtail! It is so much more fun on the full suspension Hillcraft! Hit your local trail with the Hillcraft Monarch 26 to get to where you want to go."
+  },
+  {
+    "model": "Chook air 5",
+    "brand": "Nord",
+    "price": 815,
+    "type": "Kids Mountain Bikes",
+    "specs": {
+      "material": "alloy",
+      "weight": "9.1"
+    },
+    "description": "The Chook Air 5 gives kids aged six years and older a durable and uberlight mountain bike for their first experience on tracks and easy cruising through forests and fields. The lower top tube makes it easy to mount and dismount in any situation, giving your kids greater safety on the trails. The Chook Air 5 is the perfect intro to mountain biking."
+  },
+  {
+    "model": "Eva 291",
+    "brand": "Eva",
+    "price": 3400,
+    "type": "Mountain Bikes",
+    "specs": {
+      "material": "carbon",
+      "weight": "9.1"
+    },
+    "description": "The sister company to Nord, Eva launched in 2005 as the first and only women-dedicated bicycle brand. Designed by women for women, all Eva bikes are optimized for the feminine physique using analytics from a body metrics database. If you like 29ers, try the Eva 291. It’s a brand new bike for 2022. This full-suspension, cross-country ride has been designed for velocity. The 291 has 100mm of front and rear travel, a superlight aluminum frame and fast-rolling 29-inch wheels. Yippee!"
+  },
+  {
+    "model": "Kahuna",
+    "brand": "Noka Bikes",
+    "price": 3200,
+    "type": "Mountain Bikes",
+    "specs": {
+      "material": "alloy",
+      "weight": "9.8"
+    },
+    "description": "Whether you want to try your hand at XC racing or are looking for a lively trail bike that's just as inspiring on the climbs as it is over rougher ground, the Wilder is one heck of a bike built specifically for short women. Both the frames and components have been tweaked to include a women’s saddle, different bars and a unique colourway."
+  },
+  {
+    "model": "XBN 2.1 Alloy",
+    "brand": "Breakout",
+    "price": 810,
+    "type": "Road Bikes",
+    "specs": {
+      "material": "alloy",
+      "weight": "7.2"
+    },
+    "description": "The XBN 2.1 Alloy is our entry-level road bike – but that’s not to say that it’s a basic machine. With an internal weld aluminium frame, a full carbon fork, and the slick-shifting Claris gears from Shimano, this is a bike which doesn’t break the bank and delivers the performance you crave. The 6061 alloy frame is triple-butted which ensures a lighter weight and smoother ride. And it’s comfortable with dropped seat stays and the carbon fork. The carefully crafted 50-34 tooth chainset and 11-32 tooth cassette give an easy-on-the-legs bottom gear for climbing, and the high-quality Vittoria Zaffiro tires balance grip, rolling friction and puncture protection when coasting down the other side."
+  },
+  {
+    "model": "WattBike",
+    "brand": "ScramBikes",
+    "price": 2300,
+    "type": "eBikes",
+    "specs": {
+      "material": "alloy",
+      "weight": "15"
+    },
+    "description": "The WattBike is the best e-bike for people who still feel young at heart. It has a Bafang 500 watt geared hub motor that can reach 20 miles per hour on both steep inclines and city streets. The lithium-ion battery, which gets nearly 40 miles per charge, has a lightweight form factor, making it easier for seniors to use. It comes fully assembled (no convoluted instructions!) and includes a sturdy helmet at no cost. The plush saddle softens over time with use. The included seatpost is easily adjustable and adds to this bike’s fantastic rating for seniors, as do the hydraulic disc brakes from Tektro."
+  },
+  {
+    "model": "Soothe Electric bike",
+    "brand": "Peaknetic",
+    "price": 1950,
+    "type": "eBikes",
+    "specs": {
+      "material": "alloy",
+      "weight": "14.7"
+    },
+    "description": "The Soothe is an everyday electric bike, from the makers of Exercycle bikes, that conveys style while you get around the city. The Soothe lives up to its name by keeping your posture upright and relaxed for the ride ahead, keeping those aches and pains from riding at bay. It includes a low-step frame, our memory foam seat, bump-resistant shocks and a conveniently placed thumb throttle."
+  },
+  {
+    "model": "Secto",
+    "brand": "Peaknetic",
+    "price": 430,
+    "type": "Commuter bikes",
+    "specs": {
+      "material": "aluminium",
+      "weight": "10.0"
+    },
+    "description": "If you struggle with stiff fingers or a kinked neck or back after a few minutes on the road, this lightweight, aluminum bike alleviates those issues and allows you to enjoy the ride. From the ergonomic grips to the lumbar-supporting seat position, the Roll Low-Entry offers incredible comfort. The rear-inclined seat tube facilitates stability by allowing you to put a foot on the ground to balance at a stop, and the low step-over frame makes it accessible for all ability and mobility levels. The saddle is very soft, with a wide back to support your hip joints and a cutout in the center to redistribute that pressure. Rim brakes deliver satisfactory braking control, and the wide tires provide a smooth, stable ride on paved roads and gravel. Rack and fender mounts facilitate setting up the Roll Low-Entry as your preferred commuter, and the BMX-like handlebar offers space for mounting a flashlight, bell, or phone holder."
+  },
+  {
+    "model": "Summit",
+    "brand": "nHill",
+    "price": 1200,
+    "type": "Mountain Bike",
+    "specs": {
+      "material": "alloy",
+      "weight": "11.3"
+    },
+    "description": "This budget mountain bike from nHill performs well both on bike paths and on the trail. The fork with 100mm of travel absorbs rough terrain. Fat Kenda Booster tires give you grip in corners and on wet trails. The Shimano Tourney drivetrain offers enough gears for finding a comfortable pace to ride uphill, and the Tektro hydraulic disc brakes brake smoothly. Whether you want an affordable bike that you can take to work but also take trail riding on the weekends, or you’re just after a stable, comfortable ride for the bike path, the Summit gives good value for money."
+  },
+  {
+    "model": "ThrillCycle",
+    "brand": "BikeShind",
+    "price": 815,
+    "type": "Commuter Bikes",
+    "specs": {
+      "material": "alloy",
+      "weight": "12.7"
+    },
+    "description": "An artsy, retro-inspired bicycle that’s as functional as it is pretty: The ThrillCycle steel frame offers a smooth ride. A 9-speed drivetrain has enough gears for coasting in the city, but we wouldn’t suggest taking it to the mountains. Fenders protect you from mud, and a rear basket lets you transport groceries, flowers and books. The ThrillCycle comes with a limited lifetime warranty, so this little guy will last you long past graduation."
+  }
+]
diff --git a/docs/get-started/faq.md b/docs/get-started/faq.md
new file mode 100644
index 0000000000..0c8531ac97
--- /dev/null
+++ b/docs/get-started/faq.md
@@ -0,0 +1,159 @@
+---
+title: "Redis FAQ"
+linkTitle: "FAQ"
+weight: 100
+description: >
+    Commonly asked questions when getting started with Redis
+aliases:
+  - /docs/getting-started/faq
+---
+## How is Redis different from other key-value stores?
+
+* Redis took a different evolution path from other key-value DBs: values can contain complex data types, with atomic operations defined on those data types. Redis data types are closely related to fundamental data structures and are exposed to the programmer as such, without additional abstraction layers.
+* Redis is an in-memory but persistent-on-disk database, so it represents a different trade-off: very high write and read speed is achieved, with the limitation that data sets can't be larger than memory. Another advantage of in-memory databases is that the memory representation of complex data structures is much simpler to manipulate compared to the same data structures on disk, so Redis can do a lot with little internal complexity. At the same time the two on-disk storage formats (RDB and AOF) don't need to be suitable for random access, so they are compact and always generated in an append-only fashion (even the AOF log rotation is an append-only operation, since the new version is generated from the copy of data in memory). However, this design also involves different challenges compared to traditional on-disk stores. Since the main data representation is in memory, Redis operations must be carefully handled to make sure there is always an updated version of the data set on disk.
+
+## What's the Redis memory footprint?
+
+To give you a few examples (all obtained using 64-bit instances):
+
+* An empty instance uses ~ 3MB of memory.
+* 1 million small key -> string value pairs use ~ 85MB of memory.
+* 1 million keys -> hash values, representing an object with 5 fields, use ~ 160 MB of memory.
+
+Testing your use case is trivial. Use the `redis-benchmark` utility to generate random data sets, then check the space used with the `INFO memory` command.
+
+64-bit systems will use considerably more memory than 32-bit systems to store the same keys, especially if the keys and values are small. This is because pointers take 8 bytes in 64-bit systems. But of course the advantage is that you can have a lot of memory in 64-bit systems, so in order to run large Redis servers a 64-bit system is more or less required. The alternative is sharding.
+
+## Why does Redis keep its entire dataset in memory?
+
+In the past the Redis developers experimented with Virtual Memory and other systems in order to allow larger-than-RAM datasets, but after all we are very happy if we can do one thing well: data served from memory, disk used for storage. So for now there are no plans to create an on-disk backend for Redis. Most of what Redis is, after all, is a direct result of its current design.
+
+If your real problem is not the total RAM needed, but the fact that you need to split your data set into multiple Redis instances, please read the [partitioning page](/topics/partitioning) in this documentation for more info.
+
+Redis Ltd., the company sponsoring Redis development, has developed a "Redis on Flash" solution that uses a mixed RAM/flash approach for larger data sets with a biased access pattern. You may check their offering for more information; however, this feature is not part of the open source Redis code base.
+
+## Can you use Redis with a disk-based database?
+
+Yes, a common design pattern involves keeping very write-heavy small data in Redis (together with the data for which you need the Redis data structures to model your problem efficiently), while storing big *blobs* of data in an SQL or eventually consistent on-disk database. Similarly, Redis is sometimes used to hold in memory another copy of a subset of the same data stored in the on-disk database. This may look similar to caching, but it is actually a more advanced model, since normally the Redis dataset is updated together with the on-disk DB dataset, rather than refreshed on cache misses.
+
+## How can I reduce Redis' overall memory usage?
+
+A good practice is to consider memory consumption when mapping your logical data model to the physical data model within Redis. These considerations include using specific data types, key patterns, and normalization.
+
+Beyond data modeling, there is more info in the [Memory Optimization page](/topics/memory-optimization).
+
+## What happens if Redis runs out of memory?
+
+Redis has built-in protections allowing users to set a max limit on memory usage, using the `maxmemory` option in the configuration file to put a limit on the memory Redis can use. If this limit is reached, Redis will start to reply with an error to write commands (but will continue to accept read-only commands).
+
+You can also configure Redis to evict keys when the max memory limit is reached. See the [eviction policy docs](/docs/manual/eviction/) for more information on this.
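+
+For example, you could cap memory at 100 megabytes and evict the least recently used keys when the limit is reached (a sketch; the values are illustrative, and both settings can also be placed in the configuration file):
+
+```
+> CONFIG SET maxmemory 100mb
+OK
+> CONFIG SET maxmemory-policy allkeys-lru
+OK
+```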
+
+## Background saving fails with a fork() error on Linux?
+
+Short answer: `echo 1 > /proc/sys/vm/overcommit_memory` :)
+
+And now the long one:
+
+The Redis background saving schema relies on the copy-on-write semantics of the `fork` system call in modern operating systems: Redis forks, creating a child process that is an exact copy of the parent. The child process dumps the DB to disk and finally exits. In theory the child should use as much memory as the parent, being a copy, but actually, thanks to the copy-on-write semantics implemented by most modern operating systems, the parent and child processes will _share_ the common memory pages. A page will be duplicated only when it changes in the child or in the parent. Since in theory all the pages may change while the child process is saving, Linux can't tell in advance how much memory the child will take, so if the `overcommit_memory` setting is set to zero the fork will fail unless there is as much free RAM as required to really duplicate all the parent memory pages. If you have a Redis dataset of 3 GB and just 2 GB of free memory, it will fail.
+
+Setting `overcommit_memory` to 1 tells Linux to relax and perform the fork in a more optimistic allocation fashion, and this is indeed what you want for Redis.
+
+You can refer to the [proc(5)][proc5] man page for explanations of the available values.
+
+[proc5]: http://man7.org/linux/man-pages/man5/proc.5.html
+
+## Are Redis on-disk snapshots atomic?
+
+Yes, the Redis background saving process is always forked when the server is outside of the execution of a command, so every command reported to be atomic in RAM is also atomic from the point of view of the disk snapshot.
+
+## How can Redis use multiple CPUs or cores?
+
+It's not very frequent that CPU becomes your bottleneck with Redis, as usually Redis is either memory or network bound. For instance, when using pipelining a Redis instance running on an average Linux system can deliver 1 million requests per second, so if your application mainly uses O(N) or O(log(N)) commands, it is hardly going to use too much CPU.
+
+However, to maximize CPU usage you can start multiple instances of Redis in the same box and treat them as different servers. At some point a single box may not be enough anyway, so if you want to use multiple CPUs you can start thinking of some way to shard earlier.
+
+You can find more information about using multiple Redis instances in the [Partitioning page](/topics/partitioning).
+
+As of version 4.0, Redis has started implementing threaded actions. For now this is limited to deleting objects in the background and blocking commands implemented via Redis modules. For subsequent releases, the plan is to make Redis more and more threaded.
+
+## What is the maximum number of keys a single Redis instance can hold? What is the maximum number of elements in a Hash, List, Set, and Sorted Set?
+
+Redis can handle up to 2^32 keys, and was tested in practice to handle at least 250 million keys per instance.
+
+Every hash, list, set, and sorted set can hold 2^32 elements.
+
+In other words, your limit is likely the available memory in your system.
+
+## Why does my replica have a different number of keys than its master instance?
+
+If you use keys with limited time to live (Redis expires) this is normal behavior. This is what happens:
+
+* The primary generates an RDB file on the first synchronization with the replica.
+* The RDB file will not include keys already expired in the primary but which are still in memory.
+* These keys are still in the memory of the Redis primary, even if logically expired. They'll be considered non-existent, and their memory will be reclaimed later, either incrementally or explicitly on access. While these keys are not logically part of the dataset, they are accounted for in the `INFO` output and in the `DBSIZE` command.
+* When the replica reads the RDB file generated by the primary, this set of keys will not be loaded.
+
+Because of this, it's common for users with many expired keys to see fewer keys in the replicas. However, logically, the primary and replica will have the same content.
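+
+You can observe the difference with the `DBSIZE` command; the following comparison is hypothetical (host names and counts are illustrative):
+
+```
+$ redis-cli -h primary.example.com DBSIZE
+(integer) 1000000
+$ redis-cli -h replica.example.com DBSIZE
+(integer) 958000
+```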
+
+## Where does the name "Redis" come from?
+
+Redis is an acronym that stands for **RE**mote **DI**ctionary **S**erver.
+
+## Why did Salvatore Sanfilippo start the Redis project?
+
+Salvatore originally created Redis to scale [LLOOGG](https://github.com/antirez/lloogg), a real-time log analysis tool. But after getting the basic Redis server working, he decided to share the work with other people and turn Redis into an open source project.
+
+## How is Redis pronounced?
+
+"Redis" (/ˈrɛd-ɪs/) is pronounced like the word "red" plus the word "kiss" without the "k".
diff --git a/docs/get-started/img/free-cloud-db.png b/docs/get-started/img/free-cloud-db.png
new file mode 100644
index 0000000000..3336f3353a
Binary files /dev/null and b/docs/get-started/img/free-cloud-db.png differ
diff --git a/docs/install/_index.md b/docs/install/_index.md
new file mode 100644
index 0000000000..4685cefe7e
--- /dev/null
+++ b/docs/install/_index.md
@@ -0,0 +1,18 @@
+---
+title: "Install Redis or Redis Stack"
+linkTitle: "Install"
+weight: 30
+hideListLinks: true
+description: How to install your preferred Redis flavor on your target platform
+aliases:
+  - /docs/getting-started
+---
+
+You can install [Redis](https://redis.io/docs/about/) or [Redis Stack](/docs/about/about-stack) locally on your machine. Redis and Redis Stack are available on Linux, macOS, and Windows.
+
+Here are the installation instructions:
+
+* [Install Redis](/docs/install/install-redis)
+* [Install Redis Stack](/docs/install/install-stack)
+
+While you can install Redis (Stack) locally, you might also consider using Redis Cloud by creating a [free account](https://redis.com/try-free/?utm_source=redisio&utm_medium=referral&utm_campaign=2023-09-try_free&utm_content=cu-redis_cloud_users).
diff --git a/docs/install/install-redis/_index.md b/docs/install/install-redis/_index.md
new file mode 100644
index 0000000000..e316a5565b
--- /dev/null
+++ b/docs/install/install-redis/_index.md
@@ -0,0 +1,166 @@
+---
+title: "Install Redis"
+linkTitle: "Install Redis"
+weight: 1
+description: >
+    Install Redis on Linux, macOS, and Windows
+aliases:
+- /docs/getting-started/installation
+- /docs/getting-started/tutorial
+---
+
+This is an installation guide. You'll learn how to install, run, and experiment with the Redis server process.
+
+While you can install Redis on any of the platforms listed below, you might also consider using Redis Cloud by creating a [free account](https://redis.com/try-free?utm_source=redisio&utm_medium=referral&utm_campaign=2023-09-try_free&utm_content=cu-redis_cloud_users).
+
+## Install Redis
+
+How you install Redis depends on your operating system and whether you'd like to install it bundled with Redis Stack and Redis UI. See the guide below that best fits your needs:
+
+* [Install Redis from Source](/docs/install/install-redis/install-redis-from-source)
+* [Install Redis on Linux](/docs/install/install-redis/install-redis-on-linux)
+* [Install Redis on macOS](/docs/install/install-redis/install-redis-on-mac-os)
+* [Install Redis on Windows](/docs/install/install-redis/install-redis-on-windows)
+* [Install Redis with Redis Stack and RedisInsight](/docs/install/install-stack/)
+
+Refer to [Redis Administration](/docs/management/admin/) for detailed setup tips.
+
+## Test if you can connect using the CLI
+
+After you have Redis up and running, you can connect using `redis-cli`.
+
+External programs talk to Redis using a TCP socket and a Redis-specific protocol. This protocol is implemented in the Redis client libraries for the different programming languages. However, to make hacking with Redis simpler, Redis provides a command line utility that can be used to send commands to Redis. This program is called **redis-cli**.
+
+The first thing to do to check if Redis is working properly is to send a **PING** command using redis-cli:
+
+```
+$ redis-cli ping
+PONG
+```
+
+Running **redis-cli** followed by a command name and its arguments will send this command to the Redis instance running on localhost at port 6379. You can change the host and port used by `redis-cli` - just try the `--help` option to check the usage information.
+
+Another interesting way to run `redis-cli` is without arguments: the program will start in interactive mode. You can type different commands and see their replies.
+
+```
+$ redis-cli
+redis 127.0.0.1:6379> ping
+PONG
+```
+
+## Securing Redis
+
+By default, Redis binds to **all the interfaces** and has no authentication at all. If you use Redis in a very controlled environment, separated from the external internet and in general from attackers, that's fine. However, if an unhardened Redis is exposed to the internet, it is a big security concern. If you are not 100% sure your environment is secured properly, please check the following steps in order to make Redis more secure:
+
+1. Make sure the port Redis uses to listen for connections (by default 6379 and additionally 16379 if you run Redis in cluster mode, plus 26379 for Sentinel) is firewalled, so that it is not possible to contact Redis from the outside world.
+2. Use a configuration file where the `bind` directive is set in order to guarantee that Redis listens on only the network interfaces you are using. For example, only the loopback interface (127.0.0.1) if you are accessing Redis locally from the same computer.
+3. Use the `requirepass` option to add an additional layer of security so that clients will be required to authenticate using the `AUTH` command.
+4. Use [spiped](http://www.tarsnap.com/spiped.html) or another SSL tunneling software to encrypt traffic between Redis servers and Redis clients if your environment requires encryption.
+
+Note that a Redis instance exposed to the internet without any security [is very simple to exploit](http://antirez.com/news/96), so make sure you understand the above and apply **at least** a firewall layer. After the firewall is in place, try to connect with `redis-cli` from an external host to confirm that the instance is not reachable.
+
+## Use Redis from your application
+
+Of course, using Redis just from the command line interface is not enough, as the goal is to use it from your application. To do so, you need to download and install a Redis client library for your programming language.
+You'll find a [full list of clients for different languages on this page](/clients).
+
+## Redis persistence
+
+You can learn [how Redis persistence works on this page](/docs/management/persistence/). It is important to understand that, if you start Redis with the default configuration, Redis will spontaneously save the dataset only from time to time. For example, this happens after at least five minutes if you have at least 100 changes in your data. If you want your database to persist and be reloaded after a restart, make sure to call the **SAVE** command manually every time you want to force a data set snapshot. Alternatively, you can save the data on disk before quitting by using the **SHUTDOWN** command:
+
+```
+$ redis-cli shutdown
+```
+
+This way, Redis will save the data on disk before quitting. Reading the [persistence page](/docs/management/persistence/) is strongly suggested to better understand how Redis persistence works.
+
+## Install Redis properly
+
+Running Redis from the command line is fine just to hack a bit or for development. However, at some point you'll have some actual application to run on a real server. For this kind of usage you have two different choices:
+
+* Run Redis using screen.
+* Install Redis on your Linux box in a proper way using an init script, so that after a restart everything will start again properly.
+
+A proper install using an init script is strongly recommended.
+
+{{% alert title="Note" color="warning" %}}
+The available packages for supported Linux distributions already include the capability of starting the Redis server from `/etc/init`.
+{{% /alert %}}
+
+{{% alert title="Note" color="warning" %}}
+The remainder of this section assumes you've [installed Redis from its source code](/docs/install/install-redis/install-redis-from-source). If instead you have installed Redis Stack, you will need to download a [basic init script](https://raw.githubusercontent.com/redis/redis/7.2/utils/redis_init_script) and then modify both it and the following instructions to conform to the way Redis Stack was installed on your platform. For example, on Ubuntu 20.04 LTS, Redis Stack is installed in `/opt/redis-stack`, not `/usr/local`, so you'll need to adjust accordingly.
+{{% /alert %}}
+
+The following instructions can be used to perform a proper installation using the init script shipped with the Redis source code, `/path/to/redis-stable/utils/redis_init_script`.
+
+If you have not yet run `make install` after building the Redis source, you will need to do so before continuing. By default, `make install` will copy the `redis-server` and `redis-cli` binaries to `/usr/local/bin`.
+
+* Create a directory in which to store your Redis config files and your data:
+
+    ```
+    sudo mkdir /etc/redis
+    sudo mkdir /var/redis
+    ```
+
+* Copy the init script that you'll find in the Redis distribution under the **utils** directory into `/etc/init.d`. We suggest calling it with the name of the port where you are running this instance of Redis. Make sure the resulting file has `0755` permissions.
+
+    ```
+    sudo cp utils/redis_init_script /etc/init.d/redis_6379
+    ```
+
+* Edit the init script.
+
+    ```
+    sudo vi /etc/init.d/redis_6379
+    ```
+
+Make sure to set the **REDISPORT** variable to the port you are using. Both the pid file path and the configuration file name depend on the port number.
+
+* Copy the template configuration file you'll find in the root directory of the Redis distribution into `/etc/redis/`, using the port number as the name, for instance:
+
+    ```
+    sudo cp redis.conf /etc/redis/6379.conf
+    ```
+
+* Create a directory inside `/var/redis` that will work as both data and working directory for this Redis instance:
+
+    ```
+    sudo mkdir /var/redis/6379
+    ```
+
+* Edit the configuration file, making sure to perform the following changes:
+    * Set **daemonize** to yes (by default it is set to no).
+    * Set the **pidfile** to `/var/run/redis_6379.pid`, modifying the port as necessary.
+    * Change the **port** accordingly. In our example it is not needed as the default port is already `6379`.
+    * Set your preferred **loglevel**.
+    * Set the **logfile** to `/var/log/redis_6379.log`.
+    * Set the **dir** to `/var/redis/6379` (very important step!).
+* Finally, add the new Redis init script to all the default runlevels using the following command:
+
+    ```
+    sudo update-rc.d redis_6379 defaults
+    ```
+
+You are done! Now you can try running your instance with:
+
+```
+sudo /etc/init.d/redis_6379 start
+```
+
+Make sure that everything is working as expected:
+
+1. Try pinging your instance within a `redis-cli` session using the `PING` command.
+2. Do a test save with `redis-cli save` and check that a dump file is correctly saved to `/var/redis/6379/dump.rdb`.
+3. Check that your Redis instance is logging to the `/var/log/redis_6379.log` file.
+4. If it's a new machine where you can try it without problems, make sure that after a reboot everything is still working.
+
+{{% alert title="Note" color="warning" %}}
+The above instructions don't include all of the Redis configuration parameters that you could change. For example, to use AOF persistence instead of RDB persistence, or to set up replication, and so forth.
+{{% /alert %}}
+
+You should also read the example [redis.conf](/docs/management/config-file/) file, which is heavily annotated to help guide you on making changes. Further details can also be found in the [configuration article on this site](/docs/management/config/).
+
diff --git a/docs/install/install-redis/install-redis-from-source.md b/docs/install/install-redis/install-redis-from-source.md
new file mode 100644
index 0000000000..f910965350
--- /dev/null
+++ b/docs/install/install-redis/install-redis-from-source.md
@@ -0,0 +1,62 @@
+---
+title: "Install Redis from Source"
+linkTitle: "Source code"
+weight: 5
+description: >
+    Compile and install Redis from source
+aliases:
+- /docs/getting-started/installation/install-redis-from-source
+---
+
+You can compile and install Redis from source on a variety of platforms and operating systems, including Linux and macOS. Redis has no dependencies other than a C compiler and `libc`.
+
+## Downloading the source files
+
+The Redis source files are available from the [Download](/download) page. You can verify the integrity of these downloads by checking them against the digests in the [redis-hashes git repository](https://github.com/redis/redis-hashes).
+
+To obtain the source files for the latest stable version of Redis from the Redis downloads site, run:
+
+{{< highlight bash >}}
+wget https://download.redis.io/redis-stable.tar.gz
+{{< / highlight >}}
+
+## Compiling Redis
+
+To compile Redis, first extract the tarball, change to the root directory, and then run `make`:
+
+{{< highlight bash >}}
+tar -xzvf redis-stable.tar.gz
+cd redis-stable
+make
+{{< / highlight >}}
+
+To build with TLS support, you'll need to install OpenSSL development libraries (e.g., libssl-dev on Debian/Ubuntu) and then run:
+
+{{< highlight bash >}}
+make BUILD_TLS=yes
+{{< / highlight >}}
+
+If the compile succeeds, you'll find several Redis binaries in the `src` directory, including:
+
+* **redis-server**: the Redis Server itself
+* **redis-cli**: the command line interface utility to talk with Redis
+
+To install these binaries in `/usr/local/bin`, run:
+
+{{< highlight bash >}}
+sudo make install
+{{< / highlight >}}
+
+### Starting and stopping Redis in the foreground
+
+Once installed, you can start Redis by running:
+
+{{< highlight bash >}}
+redis-server
+{{< / highlight >}}
+
+If successful, you'll see the startup logs for Redis, and Redis will be running in the foreground.
+
+To stop Redis, enter `Ctrl-C`.
+
+For a more complete installation, continue with [these instructions](/docs/install/#install-redis-more-properly).
diff --git a/docs/install/install-redis/install-redis-on-linux.md b/docs/install/install-redis/install-redis-on-linux.md
new file mode 100644
index 0000000000..2e47efe945
--- /dev/null
+++ b/docs/install/install-redis/install-redis-on-linux.md
@@ -0,0 +1,47 @@
+---
+title: "Install Redis on Linux"
+linkTitle: "Linux"
+weight: 1
+description: >
+    How to install Redis on Linux
+aliases:
+- /docs/getting-started/installation/install-redis-on-linux
+---
+
+Most major Linux distributions provide packages for Redis.
+
+## Install on Ubuntu/Debian
+
+You can install recent stable versions of Redis from the official `packages.redis.io` APT repository.
+
+{{% alert title="Prerequisites" color="warning" %}}
+If you're running a very minimal distribution (such as a Docker container) you may need to install `lsb-release`, `curl` and `gpg` first:
+
+{{< highlight bash >}}
+sudo apt install lsb-release curl gpg
+{{< / highlight >}}
+{{% /alert %}}
+
+Add the repository to the apt index, update it, and then install:
+
+{{< highlight bash >}}
+curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
+
+echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
+
+sudo apt-get update
+sudo apt-get install redis
+{{< / highlight >}}
+
+## Install from Snapcraft
+
+The [Snapcraft store](https://snapcraft.io/store) provides [Redis packages](https://snapcraft.io/redis) that can be installed on platforms that support snap. Snap is supported and available on most major Linux distributions.
+
+To install via snap, run:
+
+{{< highlight bash >}}
+sudo snap install redis
+{{< / highlight >}}
+
+If your Linux distribution does not currently have snap installed, install it using the instructions described in [Installing snapd](https://snapcraft.io/docs/installing-snapd).
diff --git a/docs/install/install-redis/install-redis-on-mac-os.md b/docs/install/install-redis/install-redis-on-mac-os.md
new file mode 100644
index 0000000000..f2bd70cfab
--- /dev/null
+++ b/docs/install/install-redis/install-redis-on-mac-os.md
@@ -0,0 +1,96 @@
+---
+title: "Install Redis on macOS"
+linkTitle: "MacOS"
+weight: 1
+description: Use Homebrew to install and start Redis on macOS
+aliases:
+- /docs/getting-started/installation/install-redis-on-mac-os
+---
+
+This guide shows you how to install Redis on macOS using Homebrew. Homebrew is the easiest way to install Redis on macOS. If you'd prefer to build Redis from the source files on macOS, see [Installing Redis from Source](/docs/install/install-redis/install-redis-from-source).
+
+## Prerequisites
+
+First, make sure you have Homebrew installed. From the terminal, run:
+
+{{< highlight bash >}}
+brew --version
+{{< / highlight >}}
+
+If this command fails, you'll need to [follow the Homebrew installation instructions](https://brew.sh/).
+
+## Installation
+
+From the terminal, run:
+
+{{< highlight bash >}}
+brew install redis
+{{< / highlight >}}
+
+This will install Redis on your system.
+
+## Starting and stopping Redis in the foreground
+
+To test your Redis installation, you can run the `redis-server` executable from the command line:
+
+{{< highlight bash >}}
+redis-server
+{{< / highlight >}}
+
+If successful, you'll see the startup logs for Redis, and Redis will be running in the foreground.
+
+To stop Redis, enter `Ctrl-C`.
+
+### Starting and stopping Redis using launchd
+
+As an alternative to running Redis in the foreground, you can also use `launchd` to start the process in the background:
+
+{{< highlight bash >}}
+brew services start redis
+{{< / highlight >}}
+
+This launches Redis and restarts it at login. You can check the status of a `launchd` managed Redis by running the following:
+
+{{< highlight bash >}}
+brew services info redis
+{{< / highlight >}}
+
+If the service is running, you'll see output like the following:
+
+{{< highlight bash >}}
+redis (homebrew.mxcl.redis)
+Running: ✔
+Loaded: ✔
+User: miranda
+PID: 67975
+{{< / highlight >}}
+
+To stop the service, run:
+
+{{< highlight bash >}}
+brew services stop redis
+{{< / highlight >}}
+
+## Connect to Redis
+
+Once Redis is running, you can test it by running `redis-cli`:
+
+{{< highlight bash >}}
+redis-cli
+{{< / highlight >}}
+
+This will open the Redis REPL. Try running some commands:
+
+{{< highlight bash >}}
+127.0.0.1:6379> lpush demos redis-macOS-demo
+(integer) 1
+127.0.0.1:6379> rpop demos
+"redis-macOS-demo"
+{{< / highlight >}}
+
+## Next steps
+
+Once you have a running Redis instance, you may want to:
+
+* Try the Redis CLI tutorial
+* Connect using one of the Redis clients
diff --git a/docs/install/install-redis/install-redis-on-windows.md b/docs/install/install-redis/install-redis-on-windows.md
new file mode 100644
index 0000000000..087f3ba119
--- /dev/null
+++ b/docs/install/install-redis/install-redis-on-windows.md
@@ -0,0 +1,46 @@
+---
+title: "Install Redis on Windows"
+linkTitle: "Windows"
+weight: 1
+description: Use Redis on Windows for development
+aliases:
+- /docs/getting-started/installation/install-redis-on-windows/
+---
+
+Redis is not officially supported on Windows. However, you can install Redis on Windows for development by following the instructions below.
+
+To install Redis on Windows, you'll first need to enable [WSL2](https://docs.microsoft.com/en-us/windows/wsl/install) (Windows Subsystem for Linux). WSL2 lets you run Linux binaries natively on Windows. For this method to work, you'll need to be running Windows 10 version 2004 or higher, or Windows 11.
+
+## Install or enable WSL2
+
+Microsoft provides [detailed instructions for installing WSL](https://docs.microsoft.com/en-us/windows/wsl/install). Follow these instructions, and take note of the default Linux distribution it installs. This guide assumes Ubuntu.
+
+## Install Redis
+
+Once you're running Ubuntu on Windows, you can follow the steps detailed at [Install on Ubuntu/Debian](/docs/install/install-redis/install-redis-on-linux#install-on-ubuntu-debian) to install recent stable versions of Redis from the official `packages.redis.io` APT repository.
+Add the repository to the apt index, update it, and then install:
+
+{{< highlight bash >}}
+curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
+
+echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
+
+sudo apt-get update
+sudo apt-get install redis
+{{< / highlight >}}
+
+Lastly, start the Redis server like so:
+
+{{< highlight bash >}}
+sudo service redis-server start
+{{< / highlight >}}
+
+## Connect to Redis
+
+You can test that your Redis server is running by connecting with the Redis CLI:
+
+{{< highlight bash >}}
+redis-cli
+127.0.0.1:6379> ping
+PONG
+{{< / highlight >}}
diff --git a/docs/install/install-redisinsight/_index.md b/docs/install/install-redisinsight/_index.md
new file mode 100644
index 0000000000..a45a4defcf
--- /dev/null
+++ b/docs/install/install-redisinsight/_index.md
@@ -0,0 +1,9 @@
+---
+title: "Install RedisInsight"
+linkTitle: "Install RedisInsight"
+weight: 3
+description: >
+    Install RedisInsight on AWS, Docker, and Kubernetes
+---
+
+This is an installation guide. You'll learn how to install RedisInsight on Amazon Web Services (AWS), Docker, and Kubernetes.
\ No newline at end of file
diff --git a/docs/install/install-redisinsight/env-variables.md b/docs/install/install-redisinsight/env-variables.md
new file mode 100644
index 0000000000..dbf115bcff
--- /dev/null
+++ b/docs/install/install-redisinsight/env-variables.md
@@ -0,0 +1,19 @@
+---
+title: "Environment variables"
+linkTitle: "Environment variables"
+weight: 1
+description: >
+    RedisInsight supported environment variables
+---
+You can configure RedisInsight with the following environment variables.
+
+| Variable | Purpose | Default | Additional info |
+| --- | --- | --- | --- |
+| RI_APP_PORT | The port that RedisInsight listens on | Docker: 5540; desktop: 5530 | See [Express Documentation](https://expressjs.com/en/api.html#app.listen) |
+| RI_APP_HOST | The host that RedisInsight connects to | Docker: 0.0.0.0; desktop: 127.0.0.1 | See [Express Documentation](https://expressjs.com/en/api.html#app.listen) |
+| RI_SERVER_TLS_KEY | Private key for HTTPS | n/a | Private key in [PEM format](https://www.ssl.com/guide/pem-der-crt-and-cer-x-509-encodings-and-conversions/#ftoc-heading-3). Can be a path to a file or a string in PEM format. |
+| RI_SERVER_TLS_CERT | Certificate for supplied private key | n/a | Public certificate in [PEM format](https://www.ssl.com/guide/pem-der-crt-and-cer-x-509-encodings-and-conversions/#ftoc-heading-3). Can be a path to a file or a string in PEM format. |
+| RI_ENCRYPTION_KEY | Key to encrypt data with | n/a | Available only for Docker. RedisInsight stores sensitive information (database passwords, Workbench history, etc.) locally (using [sqlite3](https://github.com/TryGhost/node-sqlite3)). This variable allows you to store that sensitive information encrypted using the specified encryption key. Note: the same encryption key should be provided for subsequent `docker run` commands with the same volume attached in order to decrypt the information. |
+| RI_LOG_LEVEL | Configures the log level of the application. | `info` | Supported logging levels, prioritized from highest to lowest: error, warn, info, http, verbose, debug, silly |
+| RI_FILES_LOGGER | Log to file | `true` | By default, you can find log files in the following folders: Docker: `/data/logs`; desktop: `/.redisinsight-app/logs` |
+| RI_STDOUT_LOGGER | Log to STDOUT | `true` | |
diff --git a/docs/install/install-redisinsight/install-on-aws.md b/docs/install/install-redisinsight/install-on-aws.md
new file mode 100644
index 0000000000..93fe59299e
--- /dev/null
+++ b/docs/install/install-redisinsight/install-on-aws.md
@@ -0,0 +1,92 @@
+---
+title: "Install on AWS EC2"
+linkTitle: "Install on AWS EC2"
+weight: 3
+description: >
+    How to install RedisInsight on AWS EC2
+---
+This tutorial shows you how to install RedisInsight on an AWS EC2 instance and manage ElastiCache Redis instances using RedisInsight. To complete this tutorial you must have access to the AWS Console and permissions to launch EC2 instances.
+
+Step 1: Create a new IAM Role (optional)
+--------------
+
+RedisInsight needs read-only access to S3 and ElastiCache APIs. This is an optional step.
+
+1. Log in to the AWS Console and navigate to the IAM screen.
+1. Create a new IAM Role.
+1. Under *Select type of trusted entity*, choose EC2. The role is used by an EC2 instance.
+1. Assign the following permissions:
+    * AmazonS3ReadOnlyAccess
+    * AmazonElastiCacheReadOnlyAccess
+
+Step 2: Launch EC2 Instance
+--------------
+
+Next, launch an EC2 instance.
+
+1. Navigate to EC2 under AWS Console.
+1. Click Launch Instance.
+1. Choose 64-bit Amazon Linux AMI.
+1. Choose at least a t2.medium instance. The size of the instance depends on the memory used by the ElastiCache instance that you want to analyze.
+1. Under Configure Instance:
+    * Choose the VPC that has your ElastiCache instances.
+    * Choose a subnet that has network access to your ElastiCache instances.
+    * Ensure that your EC2 instance has a public IP address.
+    * Assign the IAM role that you created in Step 1.
+1. Under the storage section, allocate at least 100 GiB storage.
+1. Under security group, ensure that:
+    * Incoming traffic is allowed on port 5540
+    * Incoming traffic is allowed on port 22 only during installation
+1. Review and launch the EC2 instance.
+
+Step 3: Verify permissions and connectivity
+----------
+
+Next, verify that the EC2 instance has the required IAM permissions and can connect to ElastiCache Redis instances.
+
+1. SSH into the newly launched EC2 instance.
+1. Open a command prompt.
+1. Run the command `aws s3 ls`. This should list all S3 buckets.
+    1. If the `aws` command cannot be found, make sure your EC2 instance is based on Amazon Linux.
+1. Next, find the hostname of the ElastiCache instance you want to analyze and run the command `echo info | nc <elasticache-hostname> 6379`.
+1. If you see some details about the ElastiCache Redis instance, you can proceed to the next step.
+1. If you cannot connect to Redis, you should review your VPC, subnet, and security group settings.
+
+Step 4: Install Docker on EC2
+-------
+
+Next, install Docker on the EC2 instance. Run the following commands:
+
+1. `sudo yum update -y`
+1. `sudo yum install -y docker`
+1. `sudo service docker start`
+1. `sudo usermod -a -G docker ec2-user`
+1. Log out and log back in again to pick up the new docker group permissions.
+1. To verify, run `docker ps`. You should see some output without having to run `sudo`.
+
+Step 5: Run RedisInsight in the Docker container
+-------
+
+Finally, install RedisInsight using one of the options described below.
+
+1. If you do not want to persist your RedisInsight data:
+
+```bash
+docker run -d --name redisinsight -p 5540:5540 redis/redisinsight:latest
+```
+
If you want to persist your RedisInsight data, first attach the Docker volume to the `/data` path and then run the following command: + +```bash +docker run -d --name redisinsight -p 5540:5540 redis/redisinsight:latest -v redisinsight:/data +``` + +If the previous command returns a permission error, ensure that the user with `ID = 1000` has the necessary permission to access the volume provided (`redisinsight` in the command above). + +Find the IP Address of your EC2 instances and launch your browser at `http://:5540`. Accept the EULA and start using RedisInsight. + +RedisInsight also provides a health check endpoint at `http://:5540/api/health/` to monitor the health of the running container. + +Summary +------ + +In this guide, we installed RedisInsight on an AWS EC2 instance running Docker. As a next step, you should add an ElastiCache Redis Instance and then run the memory analysis. diff --git a/docs/install/install-redisinsight/install-on-docker.md b/docs/install/install-redisinsight/install-on-docker.md new file mode 100644 index 0000000000..c73a15ea97 --- /dev/null +++ b/docs/install/install-redisinsight/install-on-docker.md @@ -0,0 +1,34 @@ +--- +title: "Install on Docker" +linkTitle: "Install on Docker" +weight: 2 +description: > + How to install RedisInsight on Docker +--- +This tutorial shows how to install RedisInsight on [Docker](https://www.docker.com/) so you can use RedisInsight in development. +See a separate guide for installing [RedisInsight on AWS](/docs/install/install-redisinsight/install-on-aws/). + +## Install Docker + +The first step is to [install Docker for your operating system](https://docs.docker.com/install/). + +## Run RedisInsight Docker image + +You can install RedisInsight using one of the options described below. + +1. If you do not want to persist your RedisInsight data: + +```bash +docker run -d --name redisinsight -p 5540:5540 redis/redisinsight:latest +``` +2. If you want to persist your RedisInsight data, first attach the Docker volume to the `/data` path and then run the following command: + +```bash +docker run -d --name redisinsight -p 5540:5540 redis/redisinsight:latest -v redisinsight:/data +``` + +If the previous command returns a permission error, ensure that the user with `ID = 1000` has the necessary permissions to access the volume provided (`redisinsight` in the command above). + +Next, point your browser to `http://localhost:5540`. + +RedisInsight also provides a health check endpoint at `http://localhost:5540/api/health/` to monitor the health of the running container. diff --git a/docs/install/install-redisinsight/install-on-k8s.md b/docs/install/install-redisinsight/install-on-k8s.md new file mode 100644 index 0000000000..8a42865fec --- /dev/null +++ b/docs/install/install-redisinsight/install-on-k8s.md @@ -0,0 +1,267 @@ +--- +title: "Install on Kubernetes" +linkTitle: "Install on Kubernetes" +weight: 4 +description: > + How to install RedisInsight on Kubernetes +--- +This tutorial shows how to install RedisInsight on [Kubernetes](https://kubernetes.io/) (K8s). +This is an easy way to use RedisInsight with a [Redis Enterprise K8s deployment](https://redis.io/docs/about/redis-enterprise/#:~:text=and%20Multi%2Dcloud-,Redis%20Enterprise%20Software,-Redis%20Enterprise%20Software). + +## Create the RedisInsight deployment and service + +Below is an annotated YAML file that will create a RedisInsight +deployment and a service in a K8s cluster. + +1. Create a new file named `redisinsight.yaml` with the content below. 
```yaml
# RedisInsight service with name 'redisinsight-service'
apiVersion: v1
kind: Service
metadata:
  name: redisinsight-service       # name should not be 'redisinsight'
                                   # since the service creates
                                   # environment variables that
                                   # conflict with redisinsight
                                   # application's environment
                                   # variables `RI_APP_HOST` and
                                   # `RI_APP_PORT`
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 5540
  selector:
    app: redisinsight
---
# RedisInsight deployment with name 'redisinsight'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redisinsight #deployment name
  labels:
    app: redisinsight #deployment label
spec:
  replicas: 1 #a single replica pod
  selector:
    matchLabels:
      app: redisinsight #which pods is the deployment managing, as defined by the pod template
  template: #pod template
    metadata:
      labels:
        app: redisinsight #label for pod/s
    spec:
      containers:
        - name: redisinsight #Container name (DNS_LABEL, unique)
          image: redis/redisinsight:latest #repo/image
          imagePullPolicy: IfNotPresent #pull the image only if it is not already present locally
          volumeMounts:
            - name: redisinsight #Pod volumes to mount into the container's filesystem. Cannot be updated.
              mountPath: /data
          ports:
            - containerPort: 5540 #exposed container port and protocol
              protocol: TCP
      volumes:
        - name: redisinsight
          emptyDir: {} # node-ephemeral volume https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
```

2. Create the RedisInsight deployment and service:

```sh
kubectl apply -f redisinsight.yaml
```

3. Once the deployment and service are successfully applied and complete, access RedisInsight. This can be accomplished by using the `<external-ip>` of the service we created to reach RedisInsight.

```sh
$ kubectl get svc redisinsight-service
NAME                   CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
redisinsight-service   <cluster-ip>   <external-ip>   80:32143/TCP   1m
```

4. If you are using minikube, run `minikube service list` to list the service and access RedisInsight at `http://<minikube-ip>:<node-port>`.
```
$ minikube service list
|-------------|----------------------|--------------|---------------------------------------------|
|  NAMESPACE  |         NAME         | TARGET PORT  |                     URL                     |
|-------------|----------------------|--------------|---------------------------------------------|
| default     | kubernetes           | No node port |                                             |
| default     | redisinsight-service |           80 | http://<minikube-ip>:<node-port>            |
| kube-system | kube-dns             | No node port |                                             |
|-------------|----------------------|--------------|---------------------------------------------|
```

## Create the RedisInsight deployment with persistent storage

Below is an annotated YAML file that will create a RedisInsight
deployment in a K8s cluster. It will assign a persistent volume created from a volume claim template.
Write access to the container is configured in an init container. When using deployments
with persistent writeable volumes, it's best to set the strategy to `Recreate`. Otherwise you may find yourself
with two pods trying to use the same volume.

1. Create a new file named `redisinsight.yaml` with the content below.
```yaml
# RedisInsight service with name 'redisinsight-service'
apiVersion: v1
kind: Service
metadata:
  name: redisinsight-service       # name should not be 'redisinsight'
                                   # since the service creates
                                   # environment variables that
                                   # conflict with redisinsight
                                   # application's environment
                                   # variables `RI_APP_HOST` and
                                   # `RI_APP_PORT`
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 5540
  selector:
    app: redisinsight
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redisinsight-pv-claim
  labels:
    app: redisinsight
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: default
---
# RedisInsight deployment with name 'redisinsight'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redisinsight #deployment name
  labels:
    app: redisinsight #deployment label
spec:
  replicas: 1 #a single replica pod
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: redisinsight #which pods is the deployment managing, as defined by the pod template
  template: #pod template
    metadata:
      labels:
        app: redisinsight #label for pod/s
    spec:
      volumes:
        - name: redisinsight
          persistentVolumeClaim:
            claimName: redisinsight-pv-claim
      initContainers:
        - name: init
          image: busybox
          command:
            - /bin/sh
            - '-c'
            - |
              chown -R 1001 /data
          resources: {}
          volumeMounts:
            - name: redisinsight
              mountPath: /data
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      containers:
        - name: redisinsight #Container name (DNS_LABEL, unique)
          image: redis/redisinsight:latest #repo/image
          imagePullPolicy: IfNotPresent #pull the image only if it is not already present locally
          volumeMounts:
            - name: redisinsight #Pod volumes to mount into the container's filesystem. Cannot be updated.
              mountPath: /data
          ports:
            - containerPort: 5540 #exposed container port and protocol
              protocol: TCP
```

2. Create the RedisInsight deployment and service.

```sh
kubectl apply -f redisinsight.yaml
```

## Create the RedisInsight deployment without a service

Below is an annotated YAML file that will create a RedisInsight
deployment in a K8s cluster.

1. Create a new file named `redisinsight.yaml` with the content below.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redisinsight #deployment name
  labels:
    app: redisinsight #deployment label
spec:
  replicas: 1 #a single replica pod
  selector:
    matchLabels:
      app: redisinsight #which pods is the deployment managing, as defined by the pod template
  template: #pod template
    metadata:
      labels:
        app: redisinsight #label for pod/s
    spec:
      containers:
        - name: redisinsight #Container name (DNS_LABEL, unique)
          image: redis/redisinsight:latest #repo/image
          imagePullPolicy: IfNotPresent #pull the image only if it is not already present locally
          env:
            # If there's a service named 'redisinsight' that exposes the
            # deployment, we manually set `RI_APP_HOST` and
            # `RI_APP_PORT` to override the service environment
            # variables.
            - name: RI_APP_HOST
              value: "0.0.0.0"
            - name: RI_APP_PORT
              value: "5540"
          volumeMounts:
            - name: redisinsight #Pod volumes to mount into the container's filesystem. Cannot be updated.
              mountPath: /data
          ports:
            - containerPort: 5540 #exposed container port and protocol
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthcheck/ # exposed RI endpoint for healthcheck
              port: 5540 # exposed container port
            initialDelaySeconds: 5 # number of seconds to wait after the container starts to perform liveness probe
            periodSeconds: 5 # period in seconds after which liveness probe is performed
            failureThreshold: 1 # number of liveness probe failures after which container restarts
      volumes:
        - name: redisinsight
          emptyDir: {} # node-ephemeral volume https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
```

2. Create the RedisInsight deployment.

```sh
kubectl apply -f redisinsight.yaml
```

{{< alert title="Note" >}}
If the deployment will be exposed by a service whose name is 'redisinsight', set `RI_APP_HOST` and `RI_APP_PORT` environment variables to override the environment variables created by the service.
{{< /alert >}}

3. Once the deployment has been successfully applied and is complete, access RedisInsight. This can be accomplished by exposing the deployment as a K8s Service or by using port forwarding, as in the example below:

```sh
kubectl port-forward deployment/redisinsight 5540
```

Open your browser and point it to `http://localhost:5540`.

diff --git a/docs/interact/_index.md b/docs/interact/_index.md
new file mode 100644
index 0000000000..a69055546b
--- /dev/null
+++ b/docs/interact/_index.md
@@ -0,0 +1,9 @@
---
title: "Interact with data in Redis"
linkTitle: "Interact with data"

weight: 40

description: >
  How to interact with data in Redis, including searching, querying, triggered functions, transactions, and pub/sub.
---
\ No newline at end of file

diff --git a/docs/interact/programmability/_index.md b/docs/interact/programmability/_index.md
new file mode 100644
index 0000000000..14ac7489c3
--- /dev/null
+++ b/docs/interact/programmability/_index.md
@@ -0,0 +1,120 @@
---
title: "Redis programmability"
linkTitle: "Programmability"
weight: 20
description: >
  Extending Redis with Lua and Redis Functions
aliases:
  - /topics/programmability
  - /docs/manual/programmability/
---

Redis provides a programming interface that lets you execute custom scripts on the server itself. In Redis 7 and beyond, you can use [Redis Functions](/docs/manual/programmability/functions-intro) to manage and run your scripts. In Redis 6.2 and below, you use [Lua scripting with the EVAL command](/docs/manual/programmability/eval-intro) to program the server.

## Background

Redis is, by [definition](https://github.com/redis/redis/blob/unstable/MANIFESTO#L7), a _"domain-specific language for abstract data types"_.
The language that Redis speaks consists of its [commands](/commands).
Most of the commands specialize in manipulating core [data types](/topics/data-types-intro) in different ways.
In many cases, these commands provide all the functionality that a developer requires for managing application data in Redis.

The term **programmability** in Redis means having the ability to execute arbitrary user-defined logic by the server.
We refer to such pieces of logic as **scripts**.
In our case, scripts enable processing the data where it lives, a.k.a _data locality_.
Furthermore, the responsible embedding of programmatic workflows in the Redis server can help in reducing network traffic and improving overall performance.
Developers can use this capability for implementing robust, application-specific APIs.
+Such APIs can encapsulate business logic and maintain a data model across multiple keys and different data structures. + +User scripts are executed in Redis by an embedded, sandboxed scripting engine. +Presently, Redis supports a single scripting engine, the [Lua 5.1](https://www.lua.org/) interpreter. + +Please refer to the [Redis Lua API Reference](/topics/lua-api) page for complete documentation. + +## Running scripts + +Redis provides two means for running scripts. + +Firstly, and ever since Redis 2.6.0, the `EVAL` command enables running server-side scripts. +Eval scripts provide a quick and straightforward way to have Redis run your scripts ad-hoc. +However, using them means that the scripted logic is a part of your application (not an extension of the Redis server). +Every applicative instance that runs a script must have the script's source code readily available for loading at any time. +That is because scripts are only cached by the server and are volatile. +As your application grows, this approach can become harder to develop and maintain. + +Secondly, added in v7.0, Redis Functions are essentially scripts that are first-class database elements. +As such, functions decouple scripting from application logic and enable independent development, testing, and deployment of scripts. +To use functions, they need to be loaded first, and then they are available for use by all connected clients. +In this case, loading a function to the database becomes an administrative deployment task (such as loading a Redis module, for example), which separates the script from the application. + +Please refer to the following pages for more information: + +* [Redis Eval Scripts](/topics/eval-intro) +* [Redis Functions](/topics/functions-intro) + +When running a script or a function, Redis guarantees its atomic execution. +The script's execution blocks all server activities during its entire time, similarly to the semantics of [transactions](/topics/transactions). +These semantics mean that all of the script's effects either have yet to happen or had already happened. +The blocking semantics of an executed script apply to all connected clients at all times. + +Note that the potential downside of this blocking approach is that executing slow scripts is not a good idea. +It is not hard to create fast scripts because scripting's overhead is very low. +However, if you intend to use a slow script in your application, be aware that all other clients are blocked and can't execute any command while it is running. + +## Read-only scripts + +A read-only script is a script that only executes commands that don't modify any keys within Redis. +Read-only scripts can be executed either by adding the `no-writes` [flag](/topics/lua-api#script_flags) to the script or by executing the script with one of the read-only script command variants: `EVAL_RO`, `EVALSHA_RO`, or `FCALL_RO`. +They have the following properties: + +* They can always be executed on replicas. +* They can always be killed by the `SCRIPT KILL` command. +* They never fail with OOM error when redis is over the memory limit. +* They are not blocked during write pauses, such as those that occur during coordinated failovers. +* They cannot execute any command that may modify the data set. +* Currently `PUBLISH`, `SPUBLISH` and `PFCOUNT` are also considered write commands in scripts, because they could attempt to propagate commands to replicas and AOF file. 
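
As a quick sketch of the first variant (the key name and value below are illustrative, and the error text is abridged and may vary slightly between versions), `EVAL_RO` runs a read-only script and the server rejects any write it attempts:

```
redis> SET mykey "Hello"
OK
redis> EVAL_RO "return redis.call('GET', KEYS[1])" 1 mykey
"Hello"
redis> EVAL_RO "return redis.call('DEL', KEYS[1])" 1 mykey
(error) ERR Write commands are not allowed from read-only scripts.
```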
+ +In addition to the benefits provided by all read-only scripts, the read-only script commands have the following advantages: + +* They can be used to configure an ACL user to only be able to execute read-only scripts. +* Many clients also better support routing the read-only script commands to replicas for applications that want to use replicas for read scaling. + +#### Read-only script history + +Read-only scripts and read-only script commands were introduced in Redis 7.0 + +* Before Redis 7.0.1 `PUBLISH`, `SPUBLISH` and `PFCOUNT` were not considered write commands in scripts +* Before Redis 7.0.1 the `no-writes` [flag](/topics/lua-api#script_flags) did not imply `allow-oom` +* Before Redis 7.0.1 the `no-writes` flag did not permit the script to run during write pauses. + + +The recommended approach is to use the standard scripting commands with the `no-writes` flag unless you need one of the previously mentioned features. + +## Sandboxed script context + +Redis places the engine that executes user scripts inside a sandbox. +The sandbox attempts to prevent accidental misuse and reduce potential threats from the server's environment. + +Scripts should never try to access the Redis server's underlying host systems, such as the file system, network, or attempt to perform any other system call other than those supported by the API. + +Scripts should operate solely on data stored in Redis and data provided as arguments to their execution. + +## Maximum execution time + +Scripts are subject to a maximum execution time (set by default to five seconds). +This default timeout is enormous since a script usually runs in less than a millisecond. +The limit is in place to handle accidental infinite loops created during development. + +It is possible to modify the maximum time a script can be executed with millisecond precision, +either via `redis.conf` or by using the `CONFIG SET` command. +The configuration parameter affecting max execution time is called `busy-reply-threshold`. + +When a script reaches the timeout threshold, it isn't terminated by Redis automatically. +Doing so would violate the contract between Redis and the scripting engine that ensures that scripts are atomic. +Interrupting the execution of a script has the potential of leaving the dataset with half-written changes. + +Therefore, when a script executes longer than the configured timeout, the following happens: + +* Redis logs that a script is running for too long. +* It starts accepting commands again from other clients but will reply with a BUSY error to all the clients sending normal commands. The only commands allowed in this state are `SCRIPT KILL`, `FUNCTION KILL`, and `SHUTDOWN NOSAVE`. +* It is possible to terminate a script that only executes read-only commands using the `SCRIPT KILL` and `FUNCTION KILL` commands. These commands do not violate the scripting semantic as no data was written to the dataset by the script yet. +* If the script had already performed even a single write operation, the only command allowed is `SHUTDOWN NOSAVE` that stops the server without saving the current data set on disk (basically, the server is aborted). 
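
To make the above concrete, here is a minimal sketch (the threshold value is arbitrary, and the looping script is purely illustrative). One connection lowers the timeout and runs a script that never returns:

```
redis> CONFIG SET busy-reply-threshold 1000
OK
redis> EVAL "while true do end" 0
```

Once the threshold is exceeded, other clients receive BUSY replies, and because this script hasn't performed any writes, a second connection can stop it:

```
redis> SCRIPT KILL
OK
```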
diff --git a/docs/interact/programmability/eval-intro.md b/docs/interact/programmability/eval-intro.md new file mode 100644 index 0000000000..04776e3dbf --- /dev/null +++ b/docs/interact/programmability/eval-intro.md @@ -0,0 +1,441 @@ +--- +title: "Scripting with Lua" +linkTitle: "Lua scripting" +weight: 2 +description: > + Executing Lua in Redis +aliases: + - /topics/eval-intro + - /docs/manual/programmability/eval-intro/ +--- + +Redis lets users upload and execute Lua scripts on the server. +Scripts can employ programmatic control structures and use most of the [commands](/commands) while executing to access the database. +Because scripts execute in the server, reading and writing data from scripts is very efficient. + +Redis guarantees the script's atomic execution. +While executing the script, all server activities are blocked during its entire runtime. +These semantics mean that all of the script's effects either have yet to happen or had already happened. + +Scripting offers several properties that can be valuable in many cases. +These include: + +* Providing locality by executing logic where data lives. Data locality reduces overall latency and saves networking resources. +* Blocking semantics that ensure the script's atomic execution. +* Enabling the composition of simple capabilities that are either missing from Redis or are too niche to be a part of it. + +Lua lets you run part of your application logic inside Redis. +Such scripts can perform conditional updates across multiple keys, possibly combining several different data types atomically. + +Scripts are executed in Redis by an embedded execution engine. +Presently, Redis supports a single scripting engine, the [Lua 5.1](https://www.lua.org/) interpreter. +Please refer to the [Redis Lua API Reference](/topics/lua-api) page for complete documentation. + +Although the server executes them, Eval scripts are regarded as a part of the client-side application, which is why they're not named, versioned, or persisted. +So all scripts may need to be reloaded by the application at any time if missing (after a server restart, fail-over to a replica, etc.). +As of version 7.0, [Redis Functions](/topics/functions-intro) offer an alternative approach to programmability which allow the server itself to be extended with additional programmed logic. + +## Getting started + +We'll start scripting with Redis by using the `EVAL` command. + +Here's our first example: + +``` +> EVAL "return 'Hello, scripting!'" 0 +"Hello, scripting!" +``` + +In this example, `EVAL` takes two arguments. +The first argument is a string that consists of the script's Lua source code. +The script doesn't need to include any definitions of Lua function. +It is just a Lua program that will run in the Redis engine's context. + +The second argument is the number of arguments that follow the script's body, starting from the third argument, representing Redis key names. +In this example, we used the value _0_ because we didn't provide the script with any arguments, whether the names of keys or not. + +## Script parameterization + +It is possible, although highly ill-advised, to have the application dynamically generate script source code per its needs. +For example, the application could send these two entirely different, but at the same time perfectly identical scripts: + +``` +redis> EVAL "return 'Hello'" 0 +"Hello" +redis> EVAL "return 'Scripting!'" 0 +"Scripting!" 
```

Although this mode of operation isn't blocked by Redis, it is an anti-pattern due to script cache considerations (more on the topic below).
Instead of having your application generate subtle variations of the same scripts, you can parametrize them and pass any arguments needed to execute them.

The following example demonstrates how to achieve the same effects as above, but via parameterization:

```
redis> EVAL "return ARGV[1]" 0 Hello
"Hello"
redis> EVAL "return ARGV[1]" 0 Parameterization!
"Parameterization!"
```

At this point, it is essential to understand the distinction Redis makes between input arguments that are names of keys and those that aren't.

While key names in Redis are just strings, unlike any other string values, these represent keys in the database.
The name of a key is a fundamental concept in Redis and is the basis for operating the Redis Cluster.

**Important:**
to ensure the correct execution of scripts, both in standalone and clustered deployments, all names of keys that a script accesses must be explicitly provided as input key arguments.
The script **should only** access keys whose names are given as input arguments.
Scripts **should never** access keys with programmatically-generated names or based on the contents of data structures stored in the database.

Any input to the function that isn't the name of a key is a regular input argument.

In the example above, both _Hello_ and _Parameterization!_ are regular input arguments for the script.
Because the script doesn't touch any keys, we use the numerical argument _0_ to specify there are no key name arguments.
The execution context makes arguments available to the script through [_KEYS_](/topics/lua-api#the-keys-global-variable) and [_ARGV_](/topics/lua-api#the-argv-global-variable) global runtime variables.
The _KEYS_ table is pre-populated with all key name arguments provided to the script before its execution, whereas the _ARGV_ table serves a similar purpose but for regular arguments.

The following attempts to demonstrate the distribution of input arguments between the script's _KEYS_ and _ARGV_ runtime global variables:

```
redis> EVAL "return { KEYS[1], KEYS[2], ARGV[1], ARGV[2], ARGV[3] }" 2 key1 key2 arg1 arg2 arg3
1) "key1"
2) "key2"
3) "arg1"
4) "arg2"
5) "arg3"
```

**Note:**
as can be seen above, Lua's table arrays are returned as [RESP2 array replies](/topics/protocol#resp-arrays), so it is likely that your client's library will convert it to the native array data type in your programming language.
Please refer to the rules that govern [data type conversion](/topics/lua-api#data-type-conversion) for more pertinent information.

## Interacting with Redis from a script

It is possible to call Redis commands from a Lua script either via [`redis.call()`](/topics/lua-api#redis.call) or [`redis.pcall()`](/topics/lua-api#redis.pcall).

The two are nearly identical.
Both execute a Redis command along with its provided arguments, if these represent a well-formed command.
However, the difference between the two functions lies in the manner in which runtime errors (such as syntax errors, for example) are handled.
Errors raised from calling the `redis.call()` function are returned directly to the client that had executed it.
Conversely, errors encountered when calling the `redis.pcall()` function are returned to the script's execution context instead for possible handling.

For example, consider the following:

```
> EVAL "return redis.call('SET', KEYS[1], ARGV[1])" 1 foo bar
OK
```
The above script accepts one key name and one value as its input arguments.
When executed, the script calls the `SET` command to set the input key, _foo_, with the string value "bar".

## Script cache

Until this point, we've used the `EVAL` command to run our script.

Whenever we call `EVAL`, we also include the script's source code with the request.
Repeatedly calling `EVAL` to execute the same set of parameterized scripts wastes network bandwidth and also incurs some overhead in Redis.
Naturally, saving on network and compute resources is key, so, instead, Redis provides a caching mechanism for scripts.

Every script you execute with `EVAL` is stored in a dedicated cache that the server keeps.
The cache's contents are organized by the scripts' SHA1 digest sums, so the SHA1 digest sum of a script uniquely identifies it in the cache.
You can verify this behavior by running `EVAL` and calling `INFO` afterward.
You'll notice that the _used_memory_scripts_eval_ and _number_of_cached_scripts_ metrics grow with every new script that's executed.

As mentioned above, dynamically-generated scripts are an anti-pattern.
Generating scripts during the application's runtime may, and probably will, exhaust the host's memory resources for caching them.
Instead, scripts should be as generic as possible and provide customized execution via their arguments.

A script is loaded to the server's cache by calling the `SCRIPT LOAD` command and providing its source code.
The server doesn't execute the script, but instead just compiles and loads it to the server's cache.
Once loaded, you can execute the cached script with the SHA1 digest returned from the server.

Here's an example of loading and then executing a cached script:

```
redis> SCRIPT LOAD "return 'Immabe a cached script'"
"c664a3bf70bd1d45c4284ffebb65a6f2299bfc9f"
redis> EVALSHA c664a3bf70bd1d45c4284ffebb65a6f2299bfc9f 0
"Immabe a cached script"
```

### Cache volatility

The Redis script cache is **always volatile**.
It isn't considered as a part of the database and is **not persisted**.
The cache may be cleared when the server restarts, during fail-over when a replica assumes the master role, or explicitly by `SCRIPT FLUSH`.
That means that cached scripts are ephemeral, and the cache's contents can be lost at any time.

Applications that use scripts should always call `EVALSHA` to execute them.
The server returns an error if the script's SHA1 digest is not in the cache.
For example:

```
redis> EVALSHA ffffffffffffffffffffffffffffffffffffffff 0
(error) NOSCRIPT No matching script
```

In this case, the application should first load it with `SCRIPT LOAD` and then call `EVALSHA` once more to run the cached script by its SHA1 sum.
Most of [Redis' clients](/clients) already provide utility APIs for doing that automatically.
Please consult your client's documentation regarding the specific details.

### `!EVALSHA` in the context of pipelining

Special care should be given when executing `EVALSHA` in the context of a [pipelined request](/topics/pipelining).
The commands in a pipelined request run in the order they are sent, but other clients' commands may be interleaved for execution between these.
Because of that, the `NOSCRIPT` error can return from a pipelined request but can't be handled.

Therefore, a client library's implementation should revert to using plain `EVAL` of parameterized scripts in the context of a pipeline.

### Script cache semantics

During normal operation, an application's scripts are meant to stay indefinitely in the cache (that is, until the server is restarted or the cache is flushed).
The underlying reasoning is that the script cache contents of a well-written application are unlikely to grow continuously.
Even large applications that use hundreds of cached scripts shouldn't be an issue in terms of cache memory usage.

The only way to flush the script cache is by explicitly calling the `SCRIPT FLUSH` command.
Running the command will _completely flush_ the scripts cache, removing all the scripts executed so far.
Typically, this is only needed when the instance is going to be instantiated for another customer or application in a cloud environment.

Also, as already mentioned, restarting a Redis instance flushes the non-persistent script cache.
However, from the point of view of the Redis client, there are only two ways to make sure that a Redis instance was not restarted between two different commands:

* The connection we have with the server is persistent and was never closed so far.
* The client explicitly checks the `run_id` field in the `INFO` command to ensure the server was not restarted and is still the same process.

Practically speaking, it is much simpler for the client to assume that in the context of a given connection, cached scripts are guaranteed to be there unless the administrator explicitly invoked the `SCRIPT FLUSH` command.
The fact that the user can count on Redis to retain cached scripts is semantically helpful in the context of pipelining.

## The `!SCRIPT` command

The Redis `SCRIPT` command provides several ways for controlling the scripting subsystem.
These are:

* `SCRIPT FLUSH`: this command is the only way to force Redis to flush the scripts cache.
  It is most useful in environments where the same Redis instance is reassigned to different uses.
  It is also helpful for testing client libraries' implementations of the scripting feature.

* `SCRIPT EXISTS`: given one or more SHA1 digests as arguments, this command returns an array of _1_'s and _0_'s.
  _1_ means the specific SHA1 is recognized as a script already present in the scripting cache. _0_'s meaning is that a script with this SHA1 wasn't loaded before (or at least never since the latest call to `SCRIPT FLUSH`).

* `SCRIPT LOAD script`: this command registers the specified script in the Redis script cache.
  It is a useful command in all the contexts where we want to ensure that `EVALSHA` doesn't fail (for instance, in a pipeline or when called from a [`MULTI`/`EXEC` transaction](/topics/transactions)), without the need to execute the script.

* `SCRIPT KILL`: this command is the only way to interrupt a long-running script (a.k.a slow script), short of shutting down the server.
  A script is deemed as slow once its execution's duration exceeds the configured [maximum execution time](/topics/programmability#maximum-execution-time) threshold.
  The `SCRIPT KILL` command can be used only with scripts that did not modify the dataset during their execution (since stopping a read-only script does not violate the scripting engine's guaranteed atomicity).

* `SCRIPT DEBUG`: controls use of the built-in [Redis Lua scripts debugger](/topics/ldb).
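
For example, a minimal sketch tying `SCRIPT LOAD` and `SCRIPT EXISTS` together (the digests shown are illustrative; the all-`f` digest stands in for a script that was never loaded):

```
redis> SCRIPT LOAD "return 1"
"e0e1f9fabfc9d4800c877a703b823ac0578ff831"
redis> SCRIPT EXISTS e0e1f9fabfc9d4800c877a703b823ac0578ff831 ffffffffffffffffffffffffffffffffffffffff
1) (integer) 1
2) (integer) 0
```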
+ +## Script replication + +In standalone deployments, a single Redis instance called _master_ manages the entire database. +A [clustered deployment](/topics/cluster-tutorial) has at least three masters managing the sharded database. +Redis uses [replication](/topics/replication) to maintain one or more replicas, or exact copies, for any given master. + +Because scripts can modify the data, Redis ensures all write operations performed by a script are also sent to replicas to maintain consistency. +There are two conceptual approaches when it comes to script replication: + +1. Verbatim replication: the master sends the script's source code to the replicas. + Replicas then execute the script and apply the write effects. + This mode can save on replication bandwidth in cases where short scripts generate many commands (for example, a _for_ loop). + However, this replication mode means that replicas redo the same work done by the master, which is wasteful. + More importantly, it also requires [all write scripts to be deterministic](#scripts-with-deterministic-writes). +1. Effects replication: only the script's data-modifying commands are replicated. + Replicas then run the commands without executing any scripts. + While potentially lengthier in terms of network traffic, this replication mode is deterministic by definition and therefore doesn't require special consideration. + +Verbatim script replication was the only mode supported until Redis 3.2, in which effects replication was added. +The _lua-replicate-commands_ configuration directive and [`redis.replicate_commands()`](/topics/lua-api#redis.replicate_commands) Lua API can be used to enable it. + +In Redis 5.0, effects replication became the default mode. +As of Redis 7.0, verbatim replication is no longer supported. + +### Replicating commands instead of scripts + +Starting with Redis 3.2, it is possible to select an alternative replication method. +Instead of replicating whole scripts, we can replicate the write commands generated by the script. +We call this **script effects replication**. + +**Note:** +starting with Redis 5.0, script effects replication is the default mode and does not need to be explicitly enabled. + +In this replication mode, while Lua scripts are executed, Redis collects all the commands executed by the Lua scripting engine that actually modify the dataset. +When the script execution finishes, the sequence of commands that the script generated are wrapped into a [`MULTI`/`EXEC` transaction](/topics/transactions) and are sent to the replicas and AOF. + +This is useful in several ways depending on the use case: + +* When the script is slow to compute, but the effects can be summarized by a few write commands, it is a shame to re-compute the script on the replicas or when reloading the AOF. + In this case, it is much better to replicate just the effects of the script. +* When script effects replication is enabled, the restrictions on non-deterministic functions are removed. + You can, for example, use the `TIME` or `SRANDMEMBER` commands inside your scripts freely at any place. +* The Lua PRNG in this mode is seeded randomly on every call. 
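
For instance, under effects replication the following sketch (the key name is illustrative) is valid even though `TIME` is non-deterministic, because only the resulting `SET`, carrying the concrete timestamp, is propagated to replicas and the AOF:

```
redis> EVAL "local now = redis.call('TIME')[1] return redis.call('SET', KEYS[1], now)" 1 last:run
OK
```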

Unless already enabled by the server's configuration or defaults (before Redis 7.0), you need to issue the following Lua command before the script performs a write:

```lua
redis.replicate_commands()
```

The [`redis.replicate_commands()`](/topics/lua-api#redis.replicate_commands) function returns _true_ if script effects replication was enabled;
otherwise, if the function was called after the script already called a write command,
it returns _false_, and normal whole script replication is used.

This function is deprecated as of Redis 7.0, and while you can still call it, it will always succeed.

### Scripts with deterministic writes

**Note:**
Starting with Redis 5.0, script replication is by default effect-based rather than verbatim.
In Redis 7.0, verbatim script replication was removed entirely.
The following section only applies to versions lower than Redis 7.0 when not using effect-based script replication.

An important part of scripting is writing scripts that only change the database in a deterministic way.
Scripts executed in a Redis instance are, by default until version 5.0, propagated to replicas and to the AOF file by sending the script itself -- not the resulting commands.
Since the script will be re-run on the remote host (or when reloading the AOF file), its changes to the database must be reproducible.

The reason for sending the script is that it is often much faster than sending the multiple commands that the script generates.
If the client is sending many scripts to the master, converting the scripts into individual commands for the replica / AOF would result in too much bandwidth for the replication link or the Append Only File (and also too much CPU since dispatching a command received via the network is a lot more work for Redis compared to dispatching a command invoked by Lua scripts).

Normally, replicating scripts instead of the effects of the scripts makes sense; however, not in all cases.
So starting with Redis 3.2, the scripting engine is able to, alternatively, replicate the sequence of write commands resulting from the script execution, instead of replicating the script itself.

In this section, we'll assume that scripts are replicated verbatim by sending the whole script.
Let's call this replication mode **verbatim scripts replication**.

The main drawback with the *whole scripts replication* approach is that scripts are required to have the following property:
the script **always must** execute the same Redis _write_ commands with the same arguments given the same input data set.
Operations performed by the script can't depend on any hidden (non-explicit) information or state that may change as the script execution proceeds or between different executions of the script.
Nor can it depend on any external input from I/O devices.

Acts such as using the system time, calling Redis commands that return random values (e.g., `RANDOMKEY`), or using Lua's random number generator, could result in scripts that will not evaluate consistently.

To enforce the deterministic behavior of scripts, Redis does the following:

* Lua does not export commands to access the system time or other external states.
* Redis will block the script with an error if a script calls a Redis command able to alter the data set **after** a Redis _random_ command like `RANDOMKEY`, `SRANDMEMBER`, `TIME`.
  That means that read-only scripts that don't modify the dataset can call those commands.
  Note that a _random command_ does not necessarily mean a command that uses random numbers: any non-deterministic command is considered as a random command (the best example in this regard is the `TIME` command).
* In Redis version 4.0, commands that may return elements in random order, such as `SMEMBERS` (because Redis Sets are _unordered_), exhibit a different behavior when called from Lua,
  and undergo a silent lexicographical sorting filter before returning data to Lua scripts.
  So `redis.call("SMEMBERS",KEYS[1])` will always return the Set elements in the same order, while the same command invoked by normal clients may return different results even if the key contains exactly the same elements.
  However, starting with Redis 5.0, this ordering is no longer performed because replicating effects circumvents this type of non-determinism.
  In general, even when developing for Redis 4.0, never assume that certain commands in Lua will be ordered, but instead rely on the documentation of the original command you call to see the properties it provides.
* Lua's pseudo-random number generation function `math.random` is modified and always uses the same seed for every execution.
  This means that calling [`math.random`](/topics/lua-api#runtime-libraries) will always generate the same sequence of numbers every time a script is executed (unless `math.randomseed` is used).

All that said, you can still use commands that write and have random behavior with a simple trick.
Imagine that you want to write a Redis script that will populate a list with N random integers.

The initial implementation in Ruby could look like this:

```
require 'rubygems'
require 'redis'

r = Redis.new

RandomPushScript = <<EOF
    local i = tonumber(ARGV[1])
    local res
    while (i > 0) do
      res = redis.call('LPUSH',KEYS[1],math.random())
      i = i-1
    end
    return res
EOF

r.del(:mylist)
puts r.eval(RandomPushScript,[:mylist],[10,rand(2**32)])
```

Every time this code runs, the resulting list will have exactly the
following elements:

```
redis> LRANGE mylist 0 -1
 1) "0.74509509873814"
 2) "0.87390407681181"
 3) "0.36876626981831"
 4) "0.6921941534114"
 5) "0.7857992587545"
 6) "0.57730350670279"
 7) "0.87046522734243"
 8) "0.09637165539729"
 9) "0.74990198051087"
10) "0.17082803611217"
```

To make the script both deterministic and still have it produce different random elements,
we can add an extra argument to the script that's the seed to Lua's pseudo-random number generator.
The new script is as follows:

```
RandomPushScript = <<EOF
    local i = tonumber(ARGV[1])
    local res
    math.randomseed(tonumber(ARGV[2]))
    while (i > 0) do
      res = redis.call('LPUSH',KEYS[1],math.random())
      i = i-1
    end
    return res
EOF

r.del(:mylist)
puts r.eval(RandomPushScript,1,:mylist,10,rand(2**32))
```

What we are doing here is sending the seed of the PRNG as one of the arguments.
The script output will always be the same given the same arguments (our requirement) but we are changing one of the arguments at every invocation,
generating the random seed client-side.
The seed will be propagated as one of the arguments both in the replication link and in the Append Only File,
guaranteeing that the same changes will be generated when the AOF is reloaded or when the replica processes the script.

Note: an important part of this behavior is that the PRNG that Redis implements as `math.random` and `math.randomseed` is guaranteed to have the same output regardless of the architecture of the system running Redis.
32-bit, 64-bit, big-endian and little-endian systems will all produce the same output.

## Debugging Eval scripts

Starting with Redis 3.2, Redis has support for native Lua debugging.
The Redis Lua debugger is a remote debugger consisting of a server, which is Redis itself, and a client, which is by default [`redis-cli`](/topics/rediscli).

The Lua debugger is described in the [Lua scripts debugging](/topics/ldb) section of the Redis documentation.

## Execution under low memory conditions

When memory usage in Redis exceeds the `maxmemory` limit, the first write command encountered in the script that uses additional memory will cause the script to abort (unless [`redis.pcall`](/topics/lua-api#redis.pcall) was used).

However, an exception to the above is when the script's first write command does not use additional memory, as is the case with commands that remove data (for example, `DEL` and `LREM`).
In this case, Redis will allow all commands in the script to run to ensure atomicity.
If subsequent writes in the script consume additional memory, Redis' memory usage can exceed the threshold set by the `maxmemory` configuration directive.

Another scenario in which a script can cause memory usage to cross the `maxmemory` threshold is when the execution begins when Redis is slightly below `maxmemory`, so the script's first write command is allowed.
As the script executes, subsequent write commands consume more memory leading to the server using more RAM than the configured `maxmemory` directive.

In those scenarios, you should consider setting the `maxmemory-policy` configuration directive to any value other than `noeviction`.
In addition, Lua scripts should be as fast as possible so that eviction can kick in between executions.

Note that you can change this behavior by using [flags](#eval-flags).

## Eval flags

Normally, when you run an Eval script, the server does not know how it accesses the database.
By default, Redis assumes that all scripts read and write data.
However, starting with Redis 7.0, there's a way to declare flags when creating a script in order to tell Redis how it should behave.

The way to do that is by using a Shebang statement on the first line of the script like so:

```
#!lua flags=no-writes,allow-stale
local x = redis.call('get','x')
return x
```

Note that as soon as Redis sees the `#!` comment, it'll treat the script as if it declares flags, even if no flags are defined;
such a script still has a different set of defaults compared to a script without a `#!` line.

Another difference is that scripts without `#!` can run commands that access keys belonging to different cluster hash slots, but ones with `#!` inherit the default flags, so they cannot.

Please refer to [Script flags](/topics/lua-api#script_flags) to learn about the various flags and their defaults.

diff --git a/docs/interact/programmability/functions-intro.md b/docs/interact/programmability/functions-intro.md
new file mode 100644
index 0000000000..7d7d8e2543
--- /dev/null
+++ b/docs/interact/programmability/functions-intro.md
@@ -0,0 +1,451 @@
---
title: "Redis functions"
linkTitle: "Functions"
weight: 1
description: >
  Scripting with Redis 7 and beyond
aliases:
  - /topics/functions-intro
  - /docs/manual/programmability/functions-intro/
---

Redis Functions is an API for managing code to be executed on the server. This feature, which became available in Redis 7, supersedes the use of [EVAL](/docs/manual/programmability/eval-intro) in prior versions of Redis.

## Prologue (or, what's wrong with Eval Scripts?)

Prior versions of Redis made scripting available only via the `EVAL` command, which allows a Lua script to be sent for execution by the server.
The core use case for [Eval Scripts](/topics/eval-intro) is executing part of your application logic inside Redis, efficiently and atomically.
Such scripts can perform conditional updates across multiple keys, possibly combining several different data types.

Using `EVAL` requires that the application sends the entire script for execution every time.
Because this results in network and script compilation overheads, Redis provides an optimization in the form of the `EVALSHA` command. By first calling `SCRIPT LOAD` to obtain the script's SHA1, the application can invoke it repeatedly afterward with its digest alone.

By design, Redis only caches the loaded scripts.
That means that the script cache can become lost at any time, such as after calling `SCRIPT FLUSH`, after restarting the server, or when failing over to a replica.
The application is responsible for reloading scripts during runtime if any are missing.
The underlying assumption is that scripts are a part of the application and not maintained by the Redis server.

This approach suits many lightweight scripting use cases, but introduces several difficulties once an application becomes complex and relies more heavily on scripting, namely:

1. All client application instances must maintain a copy of all scripts. That means having some mechanism that applies script updates to all of the application's instances.
1. Calling cached scripts within the context of a [transaction](/topics/transactions) increases the probability of the transaction failing because of a missing script. Being more likely to fail makes using cached scripts as building blocks of workflows less attractive.
1. SHA1 digests are meaningless, making debugging the system extremely hard (e.g., in a `MONITOR` session).
1. When used naively, `EVAL` promotes an anti-pattern in which the client application renders scripts verbatim instead of responsibly using the [`!KEYS` and `ARGV` Lua APIs](/topics/lua-api#runtime-globals).
1. Because they are ephemeral, a script can't call another script. This makes sharing and reusing code between scripts nearly impossible, short of client-side preprocessing (see the first point).

To address these needs while avoiding breaking changes to already-established and well-liked ephemeral scripts, Redis v7.0 introduces Redis Functions.

## What are Redis Functions?

Redis functions are an evolutionary step from ephemeral scripting.

Functions provide the same core functionality as scripts but are first-class software artifacts of the database.
Redis manages functions as an integral part of the database and ensures their availability via data persistence and replication.
Because functions are part of the database and therefore declared before use, applications aren't required to load them during runtime nor risk aborted transactions.
An application that uses functions depends only on their APIs rather than on the embedded script logic in the database.

Whereas ephemeral scripts are considered a part of the application's domain, functions extend the database server itself with user-provided logic.
They can be used to expose a richer API composed of core Redis commands, similar to modules, developed once, loaded at startup, and used repeatedly by various applications / clients.
Every function has a unique user-defined name, making it much easier to call and trace its execution.

The design of Redis Functions also attempts to demarcate between the programming language used for writing functions and their management by the server.
Lua, the only language interpreter that Redis presently supports as an embedded execution engine, is meant to be simple and easy to learn.
However, the choice of Lua as a language still presents many Redis users with a challenge.

The Redis Functions feature makes no assumptions about the implementation's language.
An execution engine that is part of the definition of the function handles running it.
An engine can theoretically execute functions in any language as long as it respects several rules (such as the ability to terminate an executing function).

Presently, as noted above, Redis ships with a single embedded [Lua 5.1](/topics/lua-api) engine.
There are plans to support additional engines in the future.
Redis functions can use all of Lua's capabilities that are available to ephemeral scripts,
with the only exception being the [Redis Lua scripts debugger](/topics/ldb).

Functions also simplify development by enabling code sharing.
Every function belongs to a single library, and any given library can consist of multiple functions.
The library's contents are immutable, and selective updates of its functions aren't allowed.
Instead, libraries are updated as a whole with all of their functions together in one operation.
This allows calling functions from other functions within the same library, or sharing code between functions by using common code in library-internal methods that can also take language-native arguments.

Functions are intended to better support the use case of maintaining a consistent view for data entities through a logical schema, as mentioned above.
As such, functions are stored alongside the data itself.
Functions are also persisted to the AOF file and replicated from master to replicas, so they are as durable as the data itself.
When Redis is used as an ephemeral cache, additional mechanisms (described below) are required to make functions more durable.

Like all other operations in Redis, the execution of a function is atomic.
A function's execution blocks all server activities during its entire time, similarly to the semantics of [transactions](/topics/transactions).
These semantics mean that all of the script's effects either have yet to happen or had already happened.
The blocking semantics of an executed function apply to all connected clients at all times.
Because running a function blocks the Redis server, functions are meant to finish executing quickly, so you should avoid using long-running functions.

## Loading libraries and functions

Let's explore Redis Functions via some tangible examples and Lua snippets.

At this point, if you're unfamiliar with Lua in general and specifically in Redis, you may benefit from reviewing some of the examples in [Introduction to Eval Scripts](/topics/eval-intro) and [Lua API](/topics/lua-api) pages for a better grasp of the language.

Every Redis function belongs to a single library that's loaded to Redis.
Loading a library to the database is done with the `FUNCTION LOAD` command.
The command gets the library payload as input; the payload must start with a Shebang statement that provides metadata about the library (such as the engine to use and the library name).
The Shebang format is:
```
#!<engine name> name=<library name>
```

Let's try loading an empty library:

```
redis> FUNCTION LOAD "#!lua name=mylib\n"
(error) ERR No functions registered
```

The error is expected, as there are no functions in the loaded library. Every library needs to include at least one registered function to load successfully.
A registered function is named and acts as an entry point to the library.
When the target execution engine handles the `FUNCTION LOAD` command, it registers the library's functions.

The Lua engine compiles and evaluates the library source code when loaded, and expects functions to be registered by calling the `redis.register_function()` API.

The following snippet demonstrates a simple library registering a single function named _knockknock_, returning a string reply:

```lua
#!lua name=mylib
redis.register_function(
  'knockknock',
  function() return 'Who\'s there?' end
)
```

In the example above, we provide two arguments about the function to Lua's `redis.register_function()` API: its registered name and a callback.

We can load our library and use `FCALL` to call the registered function:

```
redis> FUNCTION LOAD "#!lua name=mylib\nredis.register_function('knockknock', function() return 'Who\\'s there?' end)"
mylib
redis> FCALL knockknock 0
"Who's there?"
```

Notice that the `FUNCTION LOAD` command returns the name of the loaded library; this name can later be used with `FUNCTION LIST` and `FUNCTION DELETE`.

We've provided `FCALL` with two arguments: the function's registered name and the numeric value `0`. This numeric value indicates the number of key names that follow it (the same way `EVAL` and `EVALSHA` work).

We'll explain immediately how key names and additional arguments are available to the function. As this simple example doesn't involve keys, we simply use 0 for now.

## Input keys and regular arguments

Before we move to the following example, it is vital to understand the distinction Redis makes between arguments that are names of keys and those that aren't.

While key names in Redis are just strings, unlike any other string values, these represent keys in the database.
The name of a key is a fundamental concept in Redis and is the basis for operating the Redis Cluster.

**Important:**
To ensure the correct execution of Redis Functions, both in standalone and clustered deployments, all names of keys that a function accesses must be explicitly provided as input key arguments.

Any input to the function that isn't the name of a key is a regular input argument.

Now, let's pretend that our application stores some of its data in Redis Hashes.
We want an `HSET`-like way to set and update fields in said Hashes and store the last modification time in a new field named `_last_modified_`.
We can implement a function to do all that.

Our function will call `TIME` to get the server's clock reading and update the target Hash with the new fields' values and the modification's timestamp.
The function we'll implement accepts the following input arguments: the Hash's key name and the field-value pairs to update.

The Lua API for Redis Functions makes these inputs accessible as the first and second arguments to the function's callback.
The callback's first argument is a Lua table populated with all key names inputs to the function.
Similarly, the callback's second argument consists of all regular arguments.
+
+The following is a possible implementation for our function and its library registration:
+
+```lua
+#!lua name=mylib
+
+local function my_hset(keys, args)
+  local hash = keys[1]
+  local time = redis.call('TIME')[1]
+  return redis.call('HSET', hash, '_last_modified_', time, unpack(args))
+end
+
+redis.register_function('my_hset', my_hset)
+```
+
+If we create a new file named _mylib.lua_ that consists of the library's definition, we can load it like so (without stripping the source code of helpful whitespace):
+
+```bash
+$ cat mylib.lua | redis-cli -x FUNCTION LOAD REPLACE
+```
+
+We've added the `REPLACE` modifier to the call to `FUNCTION LOAD` to tell Redis that we want to overwrite the existing library definition.
+Otherwise, we would have gotten an error from Redis complaining that the library already exists.
+
+Now that the library's updated code is loaded to Redis, we can proceed and call our function:
+
+```
+redis> FCALL my_hset 1 myhash myfield "some value" another_field "another value"
+(integer) 3
+redis> HGETALL myhash
+1) "_last_modified_"
+2) "1640772721"
+3) "myfield"
+4) "some value"
+5) "another_field"
+6) "another value"
+```
+
+In this case, we invoked `FCALL` with _1_ as the number of key name arguments.
+That means that the function's first input argument is the name of a key (and is therefore included in the callback's `keys` table).
+After that first argument, all following input arguments are considered regular arguments and constitute the `args` table passed to the callback as its second argument.
+
+## Expanding the library
+
+We can add more functions to our library to benefit our application.
+The additional metadata field we've added to the Hash shouldn't be included in responses when accessing the Hash's data.
+On the other hand, we do want to provide the means to obtain the modification timestamp for a given Hash key.
+
+We'll add two new functions to our library to accomplish these objectives:
+
+1. The `my_hgetall` Redis Function will return all fields and their respective values from a given Hash key name, excluding the metadata (i.e., the `_last_modified_` field).
+1. The `my_hlastmodified` Redis Function will return the modification timestamp for a given Hash key name.
+
+The library's source code could look something like the following:
+
+```lua
+#!lua name=mylib
+
+local function my_hset(keys, args)
+  local hash = keys[1]
+  local time = redis.call('TIME')[1]
+  return redis.call('HSET', hash, '_last_modified_', time, unpack(args))
+end
+
+local function my_hgetall(keys, args)
+  redis.setresp(3)
+  local hash = keys[1]
+  local res = redis.call('HGETALL', hash)
+  res['map']['_last_modified_'] = nil
+  return res
+end
+
+local function my_hlastmodified(keys, args)
+  local hash = keys[1]
+  return redis.call('HGET', hash, '_last_modified_')
+end
+
+redis.register_function('my_hset', my_hset)
+redis.register_function('my_hgetall', my_hgetall)
+redis.register_function('my_hlastmodified', my_hlastmodified)
+```
+
+While all of the above should be straightforward, note that the `my_hgetall` function also calls [`redis.setresp(3)`](/topics/lua-api#redis.setresp).
+That means that the function expects [RESP3](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md) replies after calling `redis.call()`, which, unlike the default RESP2 protocol, provides dictionary (associative array) replies.
+Doing so allows the function to delete (or set to `nil` as is the case with Lua tables) specific fields from the reply, and in our case, the `_last_modified_` field. + +Assuming you've saved the library's implementation in the _mylib.lua_ file, you can replace it with: + +```bash +$ cat mylib.lua | redis-cli -x FUNCTION LOAD REPLACE +``` + +Once loaded, you can call the library's functions with `FCALL`: + +``` +redis> FCALL my_hgetall 1 myhash +1) "myfield" +2) "some value" +3) "another_field" +4) "another value" +redis> FCALL my_hlastmodified 1 myhash +"1640772721" +``` + +You can also get the library's details with the `FUNCTION LIST` command: + +``` +redis> FUNCTION LIST +1) 1) "library_name" + 2) "mylib" + 3) "engine" + 4) "LUA" + 5) "functions" + 6) 1) 1) "name" + 2) "my_hset" + 3) "description" + 4) (nil) + 5) "flags" + 6) (empty array) + 2) 1) "name" + 2) "my_hgetall" + 3) "description" + 4) (nil) + 5) "flags" + 6) (empty array) + 3) 1) "name" + 2) "my_hlastmodified" + 3) "description" + 4) (nil) + 5) "flags" + 6) (empty array) +``` + +You can see that it is easy to update our library with new capabilities. + +## Reusing code in the library + +On top of bundling functions together into database-managed software artifacts, libraries also facilitate code sharing. +We can add to our library an error handling helper function called from other functions. +The helper function `check_keys()` verifies that the input _keys_ table has a single key. +Upon success it returns `nil`, otherwise it returns an [error reply](/topics/lua-api#redis.error_reply). + +The updated library's source code would be: + +```lua +#!lua name=mylib + +local function check_keys(keys) + local error = nil + local nkeys = table.getn(keys) + if nkeys == 0 then + error = 'Hash key name not provided' + elseif nkeys > 1 then + error = 'Only one key name is allowed' + end + + if error ~= nil then + redis.log(redis.LOG_WARNING, error); + return redis.error_reply(error) + end + return nil +end + +local function my_hset(keys, args) + local error = check_keys(keys) + if error ~= nil then + return error + end + + local hash = keys[1] + local time = redis.call('TIME')[1] + return redis.call('HSET', hash, '_last_modified_', time, unpack(args)) +end + +local function my_hgetall(keys, args) + local error = check_keys(keys) + if error ~= nil then + return error + end + + redis.setresp(3) + local hash = keys[1] + local res = redis.call('HGETALL', hash) + res['map']['_last_modified_'] = nil + return res +end + +local function my_hlastmodified(keys, args) + local error = check_keys(keys) + if error ~= nil then + return error + end + + local hash = keys[1] + return redis.call('HGET', keys[1], '_last_modified_') +end + +redis.register_function('my_hset', my_hset) +redis.register_function('my_hgetall', my_hgetall) +redis.register_function('my_hlastmodified', my_hlastmodified) +``` + +After you've replaced the library in Redis with the above, you can immediately try out the new error handling mechanism: + +``` +127.0.0.1:6379> FCALL my_hset 0 myhash nope nope +(error) Hash key name not provided +127.0.0.1:6379> FCALL my_hgetall 2 myhash anotherone +(error) Only one key name is allowed +``` + +And your Redis log file should have lines in it that are similar to: + +``` +... +20075:M 1 Jan 2022 16:53:57.688 # Hash key name not provided +20075:M 1 Jan 2022 16:54:01.309 # Only one key name is allowed +``` + +## Functions in cluster + +As noted above, Redis automatically handles propagation of loaded functions to replicas. 
+In a Redis Cluster, it is also necessary to load functions to all cluster nodes. This is not handled automatically by Redis Cluster, and needs to be handled by the cluster administrator (like module loading, configuration setting, etc.).
+
+As one of the goals of functions is to live separately from the client application, this should not be part of the Redis client library responsibilities. Instead, `redis-cli --cluster-only-masters --cluster call host:port FUNCTION LOAD ...` can be used to execute the load command on all master nodes.
+
+Also, note that `redis-cli --cluster add-node` automatically takes care to propagate the loaded functions from one of the existing nodes to the new node.
+
+## Functions and ephemeral Redis instances
+
+In some cases there may be a need to start a fresh Redis server with a set of functions pre-loaded. Common reasons for that could be:
+
+* Starting Redis in a new environment
+* Re-starting an ephemeral (cache-only) Redis instance that uses functions
+
+In such cases, we need to make sure that the pre-loaded functions are available before Redis accepts inbound user connections and commands.
+
+To do that, it is possible to use `redis-cli --functions-rdb` to extract the functions from an existing server. This generates an RDB file that can be loaded by Redis at startup.
+
+## Function flags
+
+Redis needs to have some information about how a function is going to behave when executed, in order to properly enforce resource usage policies and maintain data consistency.
+
+For example, Redis needs to know that a certain function is read-only before permitting it to execute using `FCALL_RO` on a read-only replica.
+
+By default, Redis assumes that all functions may perform arbitrary read or write operations. Function Flags make it possible to declare more specific function behavior at the time of registration. Let's see how this works.
+
+In our previous example, we defined two functions that only read data. We can try executing them using `FCALL_RO` against a read-only replica.
+
+```
+redis> FCALL_RO my_hgetall 1 myhash
+(error) ERR Can not execute a function with write flag using fcall_ro.
+```
+
+Redis returns this error because a function can, in theory, perform both read and write operations on the database.
+As a safeguard and by default, Redis assumes that the function does both, so it blocks its execution.
+The server will reply with this error in the following cases:
+
+1. Executing a function with `FCALL` against a read-only replica.
+2. Using `FCALL_RO` to execute a function.
+3. A disk error was detected (Redis is unable to persist so it rejects writes).
+
+In these cases, you can add the `no-writes` flag to the function's registration to disable this safeguard and allow the function to run.
+To register a function with flags use the [named arguments](/topics/lua-api#redis.register_function_named_args) variant of `redis.register_function`.
+
+The updated registration code snippet from the library looks like this:
+
+```lua
+redis.register_function('my_hset', my_hset)
+redis.register_function{
+  function_name='my_hgetall',
+  callback=my_hgetall,
+  flags={ 'no-writes' }
+}
+redis.register_function{
+  function_name='my_hlastmodified',
+  callback=my_hlastmodified,
+  flags={ 'no-writes' }
+}
+```
+
+Once we've replaced the library, Redis allows running both `my_hgetall` and `my_hlastmodified` with `FCALL_RO` against a read-only replica:
+
+```
+redis> FCALL_RO my_hgetall 1 myhash
+1) "myfield"
+2) "some value"
+3) "another_field"
+4) "another value"
+redis> FCALL_RO my_hlastmodified 1 myhash
+"1640772721"
+```
+
+For the complete documentation of function flags, please refer to [Script flags](/topics/lua-api#script_flags).
diff --git a/docs/interact/programmability/lua-api.md b/docs/interact/programmability/lua-api.md
new file mode 100644
index 0000000000..3fa62f10c9
--- /dev/null
+++ b/docs/interact/programmability/lua-api.md
@@ -0,0 +1,892 @@
+---
+title: "Redis Lua API reference"
+linkTitle: "Lua API"
+weight: 3
+description: >
+  Executing Lua in Redis
+aliases:
+  - /topics/lua-api
+  - /docs/manual/programmability/lua-api/
+---
+
+Redis includes an embedded [Lua 5.1](https://www.lua.org/) interpreter.
+The interpreter runs user-defined [ephemeral scripts](/topics/eval-intro) and [functions](/topics/functions-intro). Scripts run in a sandboxed context and can only access specific Lua packages. This page describes the packages and APIs available inside the execution's context.
+
+## Sandbox context
+
+The sandboxed Lua context attempts to prevent accidental misuse and reduce potential threats from the server's environment.
+
+Scripts should never try to access the Redis server's underlying host systems.
+That includes the file system, network, and any other attempt to perform a system call other than those supported by the API.
+
+Scripts should operate solely on data stored in Redis and data provided as arguments to their execution.
+
+### Global variables and functions
+
+The sandboxed Lua execution context blocks the declaration of global variables and functions.
+The blocking of global variables is in place to ensure that scripts and functions don't attempt to maintain any runtime context other than the data stored in Redis.
+In the (somewhat uncommon) use case that a context needs to be maintained between executions,
+you should store the context in Redis' keyspace.
+
+Redis will return a "Script attempted to create global variable 'my_global_variable'" error when trying to execute the following snippet:
+
+```lua
+my_global_variable = 'some value'
+```
+
+And similarly for the following global function declaration:
+
+```lua
+function my_global_function()
+  -- Do something amazing
+end
+```
+
+You'll also get a similar error when your script attempts to access any global variables that are undefined in the runtime's context:
+
+```lua
+-- The following will surely raise an error
+return an_undefined_global_variable
+```
+
+Instead, all variable and function definitions are required to be declared as local.
+To do so, you'll need to prepend the [_local_](https://www.lua.org/manual/5.1/manual.html#2.4.7) keyword to your declarations.
+For example, the following snippet will be considered perfectly valid by Redis:
+
+```lua
+local my_local_variable = 'some value'
+
+local function my_local_function()
+  -- Do something else, but equally amazing
+end
+```
+
+**Note:**
+the sandbox attempts to prevent the use of globals.
+Using Lua's debugging functionality, or other approaches such as altering the meta table used for implementing the globals' protection, to circumvent the sandbox isn't hard.
+However, it is difficult to circumvent the protection by accident.
+If the user messes with the Lua global state, the consistency of AOF and replication can't be guaranteed.
+In other words, just don't do it.
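+
+Rather than smuggling state through globals, keep it in the keyspace, as recommended above. The following is a minimal sketch (the key's purpose and name are hypothetical) of an ephemeral script that maintains a counter across executions:
+
+```lua
+-- A minimal sketch: persist cross-invocation state in a key
+-- rather than in a (blocked) global variable.
+local last = redis.call('GET', KEYS[1])
+local count = (last and tonumber(last) or 0) + 1
+redis.call('SET', KEYS[1], tostring(count))
+return count
+```
+
+Running it with `EVAL "..." 1 mycounter` increments and returns the counter on every call; the state survives between executions because it lives in the database, not in the Lua context.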
+
+### Imported Lua modules
+
+Using imported Lua modules is not supported inside the sandboxed execution context.
+The sandboxed execution context prevents the loading of modules by disabling Lua's [`require` function](https://www.lua.org/pil/8.1.html).
+
+The only libraries that Redis ships with and that you can use in scripts are listed under the [Runtime libraries](#runtime-libraries) section.
+
+## Runtime globals
+
+While the sandbox prevents users from declaring globals, the execution context is pre-populated with several of these.
+
+### The _redis_ singleton
+
+The _redis_ singleton is an object instance that's accessible from all scripts.
+It provides the API to interact with Redis from scripts.
+Its description follows [below](#redis_object).
+
+### The _KEYS_ global variable
+
+* Since version: 2.6.0
+* Available in scripts: yes
+* Available in functions: no
+
+**Important:**
+to ensure the correct execution of scripts, both in standalone and clustered deployments, all names of keys that a script accesses must be explicitly provided as input key arguments.
+The script **should only** access keys whose names are given as input arguments.
+Scripts **should never** access keys with programmatically-generated names or based on the contents of data structures stored in the database.
+
+The _KEYS_ global variable is available only for [ephemeral scripts](/topics/eval-intro).
+It is pre-populated with all key name input arguments.
+
+### The _ARGV_ global variable
+
+* Since version: 2.6.0
+* Available in scripts: yes
+* Available in functions: no
+
+The _ARGV_ global variable is available only in [ephemeral scripts](/topics/eval-intro).
+It is pre-populated with all regular input arguments.
+
+## _redis_ object
+
+* Since version: 2.6.0
+* Available in scripts: yes
+* Available in functions: yes
+
+The Redis Lua execution context always provides a singleton instance of an object named _redis_.
+The _redis_ instance enables the script to interact with the Redis server that's running it.
+Following is the API provided by the _redis_ object instance.
+
+### `redis.call(command [,arg...])`
+
+* Since version: 2.6.0
+* Available in scripts: yes
+* Available in functions: yes
+
+The `redis.call()` function calls a given Redis command and returns its reply.
+Its inputs are the command and its arguments; once called, it executes the command in Redis and returns the reply.
+
+For example, we can call the `ECHO` command from a script and return its reply like so:
+
+```lua
+return redis.call('ECHO', 'Echo, echo... eco... o...')
+```
+
+If and when `redis.call()` triggers a runtime exception, the raw exception is raised back to the user as an error, automatically.
+Therefore, attempting to execute the following ephemeral script will fail and generate a runtime exception because `ECHO` accepts exactly one argument:
+
+```
+redis> EVAL "return redis.call('ECHO', 'Echo,', 'echo... ', 'eco... ', 'o...')" 0
+(error) ERR Wrong number of args calling Redis command from script script: b0345693f4b77517a711221050e76d24ae60b7f7, on @user_script:1.
+``` + +Note that the call can fail due to various reasons, see [Execution under low memory conditions](/topics/eval-intro#execution-under-low-memory-conditions) and [Script flags](#script_flags) + +To handle Redis runtime errors use `redis.pcall()` instead. + +### `redis.pcall(command [,arg...])` + +* Since version: 2.6.0 +* Available in scripts: yes +* Available in functions: yes + +This function enables handling runtime errors raised by the Redis server. +The `redis.pcall()` function behaves exactly like [`redis.call()`](#redis.call), except that it: + +* Always returns a reply. +* Never throws a runtime exception, and returns in its stead a [`redis.error_reply`](#redis.error_reply) in case that a runtime exception is thrown by the server. + +The following demonstrates how to use `redis.pcall()` to intercept and handle runtime exceptions from within the context of an ephemeral script. + +```lua +local reply = redis.pcall('ECHO', unpack(ARGV)) +if reply['err'] ~= nil then + -- Handle the error sometime, but for now just log it + redis.log(redis.LOG_WARNING, reply['err']) + reply['err'] = 'ERR Something is wrong, but no worries, everything is under control' +end +return reply +``` + +Evaluating this script with more than one argument will return: + +``` +redis> EVAL "..." 0 hello world +(error) ERR Something is wrong, but no worries, everything is under control +``` + +### `redis.error_reply(x)` + +* Since version: 2.6.0 +* Available in scripts: yes +* Available in functions: yes + +This is a helper function that returns an [error reply](/topics/protocol#resp-errors). +The helper accepts a single string argument and returns a Lua table with the _err_ field set to that string. + +The outcome of the following code is that _error1_ and _error2_ are identical for all intents and purposes: + +```lua +local text = 'ERR My very special error' +local reply1 = { err = text } +local reply2 = redis.error_reply(text) +``` + +Therefore, both forms are valid as means for returning an error reply from scripts: + +``` +redis> EVAL "return { err = 'ERR My very special table error' }" 0 +(error) ERR My very special table error +redis> EVAL "return redis.error_reply('ERR My very special reply error')" 0 +(error) ERR My very special reply error +``` + +For returning Redis status replies refer to [`redis.status_reply()`](#redis.status_reply). +Refer to the [Data type conversion](#data-type-conversion) for returning other response types. + +**Note:** +By convention, Redis uses the first word of an error string as a unique error code for specific errors or `ERR` for general-purpose errors. +Scripts are advised to follow this convention, as shown in the example above, but this is not mandatory. + +### `redis.status_reply(x)` + +* Since version: 2.6.0 +* Available in scripts: yes +* Available in functions: yes + +This is a helper function that returns a [simple string reply](/topics/protocol#resp-simple-strings). +"OK" is an example of a standard Redis status reply. +The Lua API represents status replies as tables with a single field, _ok_, set with a simple status string. 
+
+The outcome of the following code is that _status1_ and _status2_ are identical for all intents and purposes:
+
+```lua
+local text = 'Frosty'
+local status1 = { ok = text }
+local status2 = redis.status_reply(text)
+```
+
+Therefore, both forms are valid as means for returning status replies from scripts:
+
+```
+redis> EVAL "return { ok = 'TICK' }" 0
+TICK
+redis> EVAL "return redis.status_reply('TOCK')" 0
+TOCK
+```
+
+For returning Redis error replies refer to [`redis.error_reply()`](#redis.error_reply).
+Refer to the [Data type conversion](#data-type-conversion) for returning other response types.
+
+### `redis.sha1hex(x)`
+
+* Since version: 2.6.0
+* Available in scripts: yes
+* Available in functions: yes
+
+This function returns the SHA1 hexadecimal digest of its single string argument.
+
+You can, for example, obtain the empty string's SHA1 digest:
+
+```
+redis> EVAL "return redis.sha1hex('')" 0
+"da39a3ee5e6b4b0d3255bfef95601890afd80709"
+```
+
+### `redis.log(level, message)`
+
+* Since version: 2.6.0
+* Available in scripts: yes
+* Available in functions: yes
+
+This function writes to the Redis server log.
+
+It expects two input arguments: the log level and a message.
+The message is a string to write to the log file.
+The log level can be one of these:
+
+* `redis.LOG_DEBUG`
+* `redis.LOG_VERBOSE`
+* `redis.LOG_NOTICE`
+* `redis.LOG_WARNING`
+
+These levels map to the server's log levels.
+The log only records messages equal or greater in level than the server's `loglevel` configuration directive.
+
+The following snippet:
+
+```lua
+redis.log(redis.LOG_WARNING, 'Something is terribly wrong')
+```
+
+will produce a line similar to the following in your server's log:
+
+```
+[32343] 22 Mar 15:21:39 # Something is terribly wrong
+```
+
+### `redis.setresp(x)`
+
+* Since version: 6.0.0
+* Available in scripts: yes
+* Available in functions: yes
+
+This function allows the executing script to switch between [Redis Serialization Protocol (RESP)](/topics/protocol) versions for the replies returned by [`redis.call()`](#redis.call) and [`redis.pcall()`](#redis.pcall).
+It expects a single numerical argument as the protocol's version.
+The default protocol version is _2_, but it can be switched to version _3_.
+
+Here's an example of switching to RESP3 replies:
+
+```lua
+redis.setresp(3)
+```
+
+Please refer to the [Data type conversion](#data-type-conversion) for more information about type conversions.
+
+### `redis.set_repl(x)`
+
+* Since version: 3.2.0
+* Available in scripts: yes
+* Available in functions: no
+
+**Note:**
+this feature is only available when script effects replication is employed.
+Calling it when using verbatim script replication will result in an error.
+As of Redis version 2.6.0, scripts were replicated verbatim, meaning that the scripts' source code was sent for execution by replicas and stored in the AOF.
+An alternative replication mode added in version 3.2.0 allows replicating only the scripts' effects.
+As of Redis version 7.0, verbatim script replication is no longer supported, and the only replication mode available is script effects replication.
+
+**Warning:**
+this is an advanced feature. Misuse can cause damage by violating the contract that binds the Redis master, its replicas, and AOF contents to hold the same logical content.
+
+This function allows a script to assert control over how its effects are propagated to replicas and the AOF afterward.
+A script's effects are the Redis write commands that it calls.
+
+By default, all write commands that a script executes are replicated.
+Sometimes, however, better control over this behavior can be helpful.
+This can be the case, for example, when storing intermediate values in the master alone.
+
+Consider a script that intersects two sets and stores the result in a temporary key with `SINTERSTORE`.
+It then picks five random elements (`SRANDMEMBER`) from the intersection and stores (`SADD`) them in another set.
+Finally, before returning, it deletes the temporary key that stores the intersection of the two source sets.
+
+In this case, only the new set with its five randomly-chosen elements needs to be replicated.
+Replicating the `SINTERSTORE` command and the `DEL` of the temporary key is unnecessary and wasteful.
+
+The `redis.set_repl()` function instructs the server how to treat subsequent write commands in terms of replication.
+It accepts a single input argument that can only be one of the following:
+
+* `redis.REPL_ALL`: replicates the effects to the AOF and replicas.
+* `redis.REPL_AOF`: replicates the effects to the AOF alone.
+* `redis.REPL_REPLICA`: replicates the effects to the replicas alone.
+* `redis.REPL_SLAVE`: same as `REPL_REPLICA`, maintained for backward compatibility.
+* `redis.REPL_NONE`: disables effect replication entirely.
+
+By default, the scripting engine is initialized to the `redis.REPL_ALL` setting when a script begins its execution.
+You can call the `redis.set_repl()` function at any time during the script's execution to switch between the different replication modes.
+
+A simple example follows:
+
+```lua
+redis.replicate_commands() -- Enable effects replication in versions lower than Redis v7.0
+redis.call('SET', KEYS[1], ARGV[1])
+redis.set_repl(redis.REPL_NONE)
+redis.call('SET', KEYS[2], ARGV[2])
+redis.set_repl(redis.REPL_ALL)
+redis.call('SET', KEYS[3], ARGV[3])
+```
+
+If you run this script by calling `EVAL "..." 3 A B C 1 2 3`, the result will be that only the keys _A_ and _C_ are created on the replicas and AOF.
+
+### `redis.replicate_commands()`
+
+* Since version: 3.2.0
+* Until version: 7.0.0
+* Available in scripts: yes
+* Available in functions: no
+
+This function switches the script's replication mode from verbatim replication to effects replication.
+You can use it to override the default verbatim script replication mode used by Redis until version 7.0.
+
+**Note:**
+as of Redis v7.0, verbatim script replication is no longer supported.
+The default, and only, script replication mode supported is script effects replication.
+For more information, please refer to [`Replicating commands instead of scripts`](/topics/eval-intro#replicating-commands-instead-of-scripts).
+
+### `redis.breakpoint()`
+
+* Since version: 3.2.0
+* Available in scripts: yes
+* Available in functions: no
+
+This function triggers a breakpoint when using the [Redis Lua debugger](/topics/ldb).
+
+### `redis.debug(x)`
+
+* Since version: 3.2.0
+* Available in scripts: yes
+* Available in functions: no
+
+This function prints its argument in the [Redis Lua debugger](/topics/ldb) console.
+
+### `redis.acl_check_cmd(command [,arg...])`
+
+* Since version: 7.0.0
+* Available in scripts: yes
+* Available in functions: yes
+
+This function is used for checking if the current user running the script has [ACL](/topics/acl) permissions to execute the given command with the given arguments.
+The return value is a boolean `true` in case the current user has permissions to execute the command (via a call to [redis.call](#redis.call) or [redis.pcall](#redis.pcall)) or `false` in case they don't.
+The function will raise an error if the passed command or its arguments are invalid.
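+
+The following is a short, hypothetical sketch of how a script might use this check to fail gracefully instead of raising an ACL error mid-execution:
+
+```lua
+-- Hypothetical sketch: verify write permission before attempting the write.
+if redis.acl_check_cmd('SET', KEYS[1], ARGV[1]) then
+  return redis.call('SET', KEYS[1], ARGV[1])
+else
+  return redis.error_reply('ERR the current user cannot SET this key')
+end
+```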
+
+### `redis.register_function`
+
+* Since version: 7.0.0
+* Available in scripts: no
+* Available in functions: yes
+
+This function is only available from the context of the `FUNCTION LOAD` command.
+When called, it registers a function to the loaded library.
+The function can be called either with positional or named arguments.
+
+#### Positional arguments: `redis.register_function(name, callback)`
+
+The first argument to `redis.register_function` is a Lua string representing the function name.
+The second argument to `redis.register_function` is a Lua function.
+
+Usage example:
+
+```
+redis> FUNCTION LOAD "#!lua name=mylib\n redis.register_function('noop', function() end)"
+```
+
+#### Named arguments: `redis.register_function{function_name=name, callback=callback, flags={flag1, flag2, ..}, description=description}`
+
+The named arguments variant accepts the following arguments:
+
+* _function\_name_: the function's name.
+* _callback_: the function's callback.
+* _flags_: an array of strings, each a function flag (optional).
+* _description_: the function's description (optional).
+
+Both _function\_name_ and _callback_ are mandatory.
+
+Usage example:
+
+```
+redis> FUNCTION LOAD "#!lua name=mylib\n redis.register_function{function_name='noop', callback=function() end, flags={ 'no-writes' }, description='Does nothing'}"
+```
+
+#### Script flags
+
+**Important:**
+Use script flags with care; misusing them may negatively impact performance or data consistency.
+Note that the defaults for Eval scripts are different from the defaults for functions that are mentioned below, see [Eval Flags](/docs/manual/programmability/eval-intro/#eval-flags).
+
+When you register a function or load an Eval script, the server does not know how it accesses the database.
+By default, Redis assumes that all scripts read and write data.
+This results in the following behavior:
+
+1. They can read and write data.
+1. They can run in cluster mode, and are not able to run commands accessing keys of different hash slots.
+1. Execution against a stale replica is denied to avoid inconsistent reads.
+1. Execution under low memory is denied to avoid exceeding the configured threshold.
+
+You can use the following flags and instruct the server to treat the scripts' execution differently:
+
+* `no-writes`: this flag indicates that the script only reads data but never writes.
+
+    By default, Redis will deny the execution of flagged scripts (Functions and Eval scripts with [shebang](/topics/eval-intro#eval-flags)) against read-only replicas, as they may attempt to perform writes.
+    Similarly, the server will not allow calling scripts with `FCALL_RO` / `EVAL_RO`.
+    Lastly, when data persistence is at risk due to a disk error, execution is blocked as well.
+
+    Using this flag allows executing the script:
+    1. With `FCALL_RO` / `EVAL_RO`
+    2. On read-only replicas.
+    3. Even if there's a disk error (Redis is unable to persist so it rejects writes).
+    4. When over the memory limit, since it implies the script doesn't increase memory consumption (see `allow-oom` below)
+
+    However, note that the server will return an error if the script attempts to call a write command.
+ Also note that currently `PUBLISH`, `SPUBLISH` and `PFCOUNT` are also considered write commands in scripts, because they could attempt to propagate commands to replicas and AOF file. + + For more information please refer to [Read-only scripts](/docs/manual/programmability/#read-only_scripts) + +* `allow-oom`: use this flag to allow a script to execute when the server is out of memory (OOM). + + Unless used, Redis will deny the execution of flagged scripts (Functions and Eval scripts with [shebang](/topics/eval-intro#eval-flags)) when in an OOM state. + Furthermore, when you use this flag, the script can call any Redis command, including commands that aren't usually allowed in this state. + Specifying `no-writes` or using `FCALL_RO` / `EVAL_RO` also implies the script can run in OOM state (without specifying `allow-oom`) + +* `allow-stale`: a flag that enables running the flagged scripts (Functions and Eval scripts with [shebang](/topics/eval-intro#eval-flags)) against a stale replica when the `replica-serve-stale-data` config is set to `no` . + + Redis can be set to prevent data consistency problems from using old data by having stale replicas return a runtime error. + For scripts that do not access the data, this flag can be set to allow stale Redis replicas to run the script. + Note however that the script will still be unable to execute any command that accesses stale data. + +* `no-cluster`: the flag causes the script to return an error in Redis cluster mode. + + Redis allows scripts to be executed both in standalone and cluster modes. + Setting this flag prevents executing the script against nodes in the cluster. + +* `allow-cross-slot-keys`: The flag that allows a script to access keys from multiple slots. + + Redis typically prevents any single command from accessing keys that hash to multiple slots. + This flag allows scripts to break this rule and access keys within the script that access multiple slots. + Declared keys to the script are still always required to hash to a single slot. + Accessing keys from multiple slots is discouraged as applications should be designed to only access keys from a single slot at a time, allowing slots to move between Redis servers. + + This flag has no effect when cluster mode is disabled. + +Please refer to [Function Flags](/docs/manual/programmability/functions-intro/#function-flags) and [Eval Flags](/docs/manual/programmability/eval-intro/#eval-flags) for a detailed example. + +### `redis.REDIS_VERSION` + +* Since version: 7.0.0 +* Available in scripts: yes +* Available in functions: yes + +Returns the current Redis server version as a Lua string. +The reply's format is `MM.mm.PP`, where: + +* **MM:** is the major version. +* **mm:** is the minor version. +* **PP:** is the patch level. + +### `redis.REDIS_VERSION_NUM` + +* Since version: 7.0.0 +* Available in scripts: yes +* Available in functions: yes + +Returns the current Redis server version as a number. +The reply is a hexadecimal value structured as `0x00MMmmPP`, where: + +* **MM:** is the major version. +* **mm:** is the minor version. +* **PP:** is the patch level. + +## Data type conversion + +Unless a runtime exception is raised, `redis.call()` and `redis.pcall()` return the reply from the executed command to the Lua script. +Redis' replies from these functions are converted automatically into Lua's native data types. + +Similarly, when a Lua script returns a reply with the `return` keyword, +that reply is automatically converted to Redis' protocol. 
+
+Put differently, there's a one-to-one mapping between Redis' replies and Lua's data types, and a one-to-one mapping between Lua's data types and the [Redis Protocol](/topics/protocol) data types.
+The underlying design is such that if a Redis type is converted into a Lua type and converted back into a Redis type, the result is the same as the initial value.
+
+Type conversion from Redis protocol replies (i.e., the replies from `redis.call()` and `redis.pcall()`) to Lua data types depends on the Redis Serialization Protocol version used by the script.
+The default protocol version during script executions is RESP2.
+The script may switch the replies' protocol versions by calling the `redis.setresp()` function.
+
+Type conversion from a script's returned Lua data type depends on the user's choice of protocol (see the `HELLO` command).
+
+The following sections describe the type conversion rules between Lua and Redis per the protocol's version.
+
+### RESP2 to Lua type conversion
+
+The following type conversion rules apply to the execution's context by default, as well as after calling `redis.setresp(2)`:
+
+* [RESP2 integer reply](/topics/protocol#resp-integers) -> Lua number
+* [RESP2 bulk string reply](/topics/protocol#resp-bulk-strings) -> Lua string
+* [RESP2 array reply](/topics/protocol#resp-arrays) -> Lua table (may have other Redis data types nested)
+* [RESP2 status reply](/topics/protocol#resp-simple-strings) -> Lua table with a single _ok_ field containing the status string
+* [RESP2 error reply](/topics/protocol#resp-errors) -> Lua table with a single _err_ field containing the error string
+* [RESP2 null bulk reply](/topics/protocol#null-elements-in-arrays) and [null multi bulk reply](/topics/protocol#resp-arrays) -> Lua false boolean type
+
+### Lua to RESP2 type conversion
+
+The following type conversion rules apply by default, as well as after the user has called `HELLO 2`:
+
+* Lua number -> [RESP2 integer reply](/topics/protocol#resp-integers) (the number is converted into an integer)
+* Lua string -> [RESP2 bulk string reply](/topics/protocol#resp-bulk-strings)
+* Lua table (indexed, non-associative array) -> [RESP2 array reply](/topics/protocol#resp-arrays) (truncated at the first Lua `nil` value encountered in the table, if any)
+* Lua table with a single _ok_ field -> [RESP2 status reply](/topics/protocol#resp-simple-strings)
+* Lua table with a single _err_ field -> [RESP2 error reply](/topics/protocol#resp-errors)
+* Lua boolean false -> [RESP2 null bulk reply](/topics/protocol#null-elements-in-arrays)
+
+There is an additional Lua-to-Redis conversion rule that has no corresponding Redis-to-Lua conversion rule:
+
+* Lua Boolean `true` -> [RESP2 integer reply](/topics/protocol#resp-integers) with value of 1.
+
+There are three additional rules to note about converting Lua to Redis data types:
+
+* Lua has a single numerical type, Lua numbers.
+  There is no distinction between integers and floats.
+  So we always convert Lua numbers into integer replies, removing the decimal part of the number, if any.
+  **If you want to return a Lua float, it should be returned as a string**,
+  exactly like Redis itself does (see, for instance, the `ZSCORE` command).
+* There's [no simple way to have nils inside Lua arrays](http://www.lua.org/pil/19.1.html) due to Lua's table semantics.
+  Therefore, when Redis converts a Lua array to RESP, the conversion stops when it encounters a Lua `nil` value.
+* When a Lua table is an associative array that contains keys and their respective values, the converted Redis reply will **not** include them.
+
+Lua to RESP2 type conversion examples:
+
+```
+redis> EVAL "return 10" 0
+(integer) 10
+
+redis> EVAL "return { 1, 2, { 3, 'Hello World!' } }" 0
+1) (integer) 1
+2) (integer) 2
+3) 1) (integer) 3
+   2) "Hello World!"
+
+redis> EVAL "return redis.call('get','foo')" 0
+"bar"
+```
+
+The last example demonstrates receiving and returning the exact return value of `redis.call()` (or `redis.pcall()`) in Lua as it would be returned if the command had been called directly.
+
+The following example shows how floats and arrays that contain nils and keys are handled:
+
+```
+redis> EVAL "return { 1, 2, 3.3333, somekey = 'somevalue', 'foo', nil , 'bar' }" 0
+1) (integer) 1
+2) (integer) 2
+3) (integer) 3
+4) "foo"
+```
+
+As you can see, the float value of _3.3333_ gets converted to an integer _3_, the _somekey_ key and its value are omitted, and the string "bar" isn't returned as there is a `nil` value that precedes it.
+
+### RESP3 to Lua type conversion
+
+[RESP3](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md) is a newer version of the [Redis Serialization Protocol](/topics/protocol).
+It is available as an opt-in choice as of Redis v6.0.
+
+An executing script may call the [`redis.setresp`](#redis.setresp) function during its execution and switch the protocol version that's used for returning replies from Redis' commands (that can be invoked via [`redis.call()`](#redis.call) or [`redis.pcall()`](#redis.pcall)).
+
+Once Redis' replies are in RESP3 protocol, all of the [RESP2 to Lua conversion](#resp2-to-lua-type-conversion) rules apply, with the following additions:
+
+* [RESP3 map reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#map-type) -> Lua table with a single _map_ field containing a Lua table representing the fields and values of the map.
+* [RESP3 set reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#set-reply) -> Lua table with a single _set_ field containing a Lua table representing the elements of the set as fields, each with the Lua Boolean value of `true`.
+* [RESP3 null](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#null-reply) -> Lua `nil`.
+* [RESP3 true reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#boolean-reply) -> Lua true boolean value.
+* [RESP3 false reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#boolean-reply) -> Lua false boolean value.
+* [RESP3 double reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#double-type) -> Lua table with a single _double_ field containing a Lua number representing the double value.
+* [RESP3 big number reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#big-number-type) -> Lua table with a single _big_number_ field containing a Lua string representing the big number value.
+* [RESP3 verbatim string reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#verbatim-string-type) -> Lua table with a single _verbatim_string_ field containing a Lua table with two fields, _string_ and _format_, representing the verbatim string and its format, respectively.
+
+**Note:**
+the RESP3 [big number](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#big-number-type) and [verbatim strings](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#verbatim-string-type) replies are only supported as of Redis v7.0 and greater.
+Also, presently, RESP3's [attributes](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#attribute-type), [streamed strings](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#streamed-strings) and [streamed aggregate data types](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#streamed-aggregate-data-types) are not supported by the Redis Lua API.
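+
+For instance, after opting into RESP3, a Hash read arrives as a map-typed reply rather than a flat array. The following sketch (the field name is hypothetical) shows how a script can pick a single field out of such a reply:
+
+```lua
+-- Switch replies from redis.call() to RESP3 and read a map reply.
+redis.setresp(3)
+local res = redis.call('HGETALL', KEYS[1])
+-- The reply is a Lua table with a 'map' field holding field-value pairs.
+return res['map']['somefield']
+```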
+
+### Lua to RESP3 type conversion
+
+Regardless of the protocol version the script sets for replies with the [`redis.setresp()` function](#redis.setresp) when it calls `redis.call()` or `redis.pcall()`, the user may opt-in to using RESP3 (with the `HELLO 3` command) for the connection.
+Although the default protocol for incoming client connections is RESP2, the script should honor the user's preference and return adequately-typed RESP3 replies, so the following rules apply on top of those specified in the [Lua to RESP2 type conversion](#lua-to-resp2-type-conversion) section when that is the case:
+
+* Lua Boolean -> [RESP3 Boolean reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#boolean-reply) (note that this is a change compared to RESP2, in which returning a Boolean Lua `true` returned the number 1 to the Redis client, and returning a `false` used to return a `null`).
+* Lua table with a single _map_ field set to an associative Lua table -> [RESP3 map reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#map-type).
+* Lua table with a single _set_ field set to an associative Lua table -> [RESP3 set reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#set-type). Values can be set to anything and are discarded anyway.
+* Lua table with a single _double_ field set to a Lua number -> [RESP3 double reply](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#double-type).
+* Lua nil -> [RESP3 null](https://github.com/redis/redis-specifications/blob/master/protocol/RESP3.md#null-reply).
+
+However, if the connection is set to use the RESP2 protocol, even if the script replies with RESP3-typed responses, Redis will automatically perform a RESP3 to RESP2 conversion of the reply as is the case for regular commands.
+That means, for example, that returning the RESP3 map type to a RESP2 connection will result in the reply being converted to a flat RESP2 array that consists of alternating field names and their values, rather than a RESP3 map.
+
+## Additional notes about scripting
+
+### Using `SELECT` inside scripts
+
+You can call the `SELECT` command from your Lua scripts, like you can with any normal client connection.
+However, one subtle aspect of the behavior changed between Redis versions 2.8.11 and 2.8.12.
+Prior to Redis version 2.8.12, the database selected by the Lua script was *set as the current database* for the client connection that had called it.
+As of Redis version 2.8.12, the database selected by the Lua script only affects the execution context of the script, and does not modify the database that's selected by the client calling the script.
+This semantic change between patch level releases was required since the old behavior was inherently incompatible with Redis' replication and introduced bugs.
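+
+A small sketch of the modern behavior (the database index and key are chosen arbitrarily): the `SELECT` below only affects commands issued by the script itself, and the calling client's selected database is left untouched once the script returns.
+
+```lua
+-- The database switch is local to the script's execution context
+-- (Redis 2.8.12 and later).
+redis.call('SELECT', 1)
+return redis.call('SET', KEYS[1], ARGV[1])
+```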
+
+## Runtime libraries
+
+The Redis Lua runtime context always comes with several pre-imported libraries.
+
+The following [standard Lua libraries](https://www.lua.org/manual/5.1/manual.html#5) are available to use:
+
+* The [_String Manipulation (string)_ library](https://www.lua.org/manual/5.1/manual.html#5.4)
+* The [_Table Manipulation (table)_ library](https://www.lua.org/manual/5.1/manual.html#5.5)
+* The [_Mathematical Functions (math)_ library](https://www.lua.org/manual/5.1/manual.html#5.6)
+* The [_Operating System Facilities (os)_ library](#os-library)
+
+In addition, the following external libraries are loaded and accessible to scripts:
+
+* The [_struct_ library](#struct-library)
+* The [_cjson_ library](#cjson-library)
+* The [_cmsgpack_ library](#cmsgpack-library)
+* The [_bitop_ library](#bitop-library)
+
+### _os_ library
+
+* Since version: 8.0.0
+* Available in scripts: yes
+* Available in functions: yes
+
+_os_ provides a set of functions for dealing with date, time, and system commands.
+More details can be found in the [Operating System Facilities](https://www.lua.org/manual/5.1/manual.html#5.8).
+Note that for sandbox security, currently only the following os function is exposed:
+
+* `os.clock()`
+
+### _struct_ library
+
+* Since version: 2.6.0
+* Available in scripts: yes
+* Available in functions: yes
+
+_struct_ is a library for packing and unpacking C-like structures in Lua.
+It provides the following functions:
+
+* [`struct.pack()`](#struct.pack)
+* [`struct.unpack()`](#struct.unpack)
+* [`struct.size()`](#struct.size)
+
+All of _struct_'s functions expect their first argument to be a [format string](#struct-formats).
+
+#### _struct_ formats
+
+The following are valid format strings for _struct_'s functions:
+
+* `>`: big endian
+* `<`: little endian
+* `![num]`: alignment
+* `x`: padding
+* `b/B`: signed/unsigned byte
+* `h/H`: signed/unsigned short
+* `l/L`: signed/unsigned long
+* `T`: size_t
+* `i/In`: signed/unsigned integer with size _n_ (defaults to the size of int)
+* `cn`: sequence of _n_ chars (from/to a string); when packing, n == 0 means the whole string; when unpacking, n == 0 means use the previously read number as the string's length.
+* `s`: zero-terminated string
+* `f`: float
+* `d`: double
+* ` ` (space): ignored
+
+#### `struct.pack(x)`
+
+This function returns a struct-encoded string from values.
+It accepts a [_struct_ format string](#struct-formats) as its first argument, followed by the values that are to be encoded.
+
+Usage example:
+
+```
+redis> EVAL "return struct.pack('HH', 1, 2)" 0
+"\x01\x00\x02\x00"
+```
+
+#### `struct.unpack(x)`
+
+This function returns the decoded values from a struct.
+It accepts a [_struct_ format string](#struct-formats) as its first argument, followed by the encoded struct string.
+
+Usage example:
+
+```
+redis> EVAL "return { struct.unpack('HH', ARGV[1]) }" 0 "\x01\x00\x02\x00"
+1) (integer) 1
+2) (integer) 2
+3) (integer) 5
+```
+
+#### `struct.size(x)`
+
+This function returns the size, in bytes, of a struct.
+It accepts a [_struct_ format string](#struct-formats) as its only argument.
+
+Usage example:
+
+```
+redis> EVAL "return struct.size('HH')" 0
+(integer) 4
+```
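+
+As a quick illustration of how the three functions relate, here is a hedged round-trip sketch (the format string is chosen arbitrarily) that packs two big-endian 32-bit unsigned integers and decodes them again:
+
+```lua
+-- Round-trip: pack two values, check the encoded size, unpack them.
+local fmt = '>I4I4'
+local packed = struct.pack(fmt, 1, 2)
+assert(#packed == struct.size(fmt))
+local a, b = struct.unpack(fmt, packed)
+return { a, b }
+```
+
+Note that `struct.unpack()` also returns the next read position as its last result (the `(integer) 5` in the example above), which this sketch simply ignores.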
+
+### _cjson_ library
+
+* Since version: 2.6.0
+* Available in scripts: yes
+* Available in functions: yes
+
+The _cjson_ library provides fast [JSON](https://json.org) encoding and decoding from Lua.
+It provides these functions.
+
+#### `cjson.encode(x)`
+
+This function returns a JSON-encoded string for the Lua data type provided as its argument.
+
+Usage example:
+
+```
+redis> EVAL "return cjson.encode({ ['foo'] = 'bar' })" 0
+"{\"foo\":\"bar\"}"
+```
+
+#### `cjson.decode(x)`
+
+This function returns a Lua data type from the JSON-encoded string provided as its argument.
+
+Usage example:
+
+```
+redis> EVAL "return cjson.decode(ARGV[1])['foo']" 0 '{"foo":"bar"}'
+"bar"
+```
+
+### _cmsgpack_ library
+
+* Since version: 2.6.0
+* Available in scripts: yes
+* Available in functions: yes
+
+The _cmsgpack_ library provides fast [MessagePack](https://msgpack.org/index.html) encoding and decoding from Lua.
+It provides these functions.
+
+#### `cmsgpack.pack(x)`
+
+This function returns the packed string encoding of the Lua data type it is given as an argument.
+
+Usage example:
+
+```
+redis> EVAL "return cmsgpack.pack({'foo', 'bar', 'baz'})" 0
+"\x93\xa3foo\xa3bar\xa3baz"
+```
+
+#### `cmsgpack.unpack(x)`
+
+This function returns the unpacked values from decoding its input string argument.
+
+Usage example:
+
+```
+redis> EVAL "return cmsgpack.unpack(ARGV[1])" 0 "\x93\xa3foo\xa3bar\xa3baz"
+1) "foo"
+2) "bar"
+3) "baz"
+```
+
+### _bit_ library
+
+* Since version: 2.8.18
+* Available in scripts: yes
+* Available in functions: yes
+
+The _bit_ library provides bitwise operations on numbers.
+Its documentation resides at [Lua BitOp documentation](http://bitop.luajit.org/api.html).
+It provides the following functions.
+
+#### `bit.tobit(x)`
+
+Normalizes a number to the numeric range for bit operations and returns it.
+
+Usage example:
+
+```
+redis> EVAL 'return bit.tobit(1)' 0
+(integer) 1
+```
+
+#### `bit.tohex(x [,n])`
+
+Converts its first argument to a hex string. The number of hex digits is given by the absolute value of the optional second argument.
+
+Usage example:
+
+```
+redis> EVAL 'return bit.tohex(422342)' 0
+"000671c6"
+```
+
+#### `bit.bnot(x)`
+
+Returns the bitwise **not** of its argument.
+
+#### `bit.bor(x1 [,x2...])`, `bit.band(x1 [,x2...])` and `bit.bxor(x1 [,x2...])`
+
+Returns either the bitwise **or**, bitwise **and**, or bitwise **xor** of all of its arguments.
+Note that more than two arguments are allowed.
+
+Usage example:
+
+```
+redis> EVAL 'return bit.bor(1,2,4,8,16,32,64,128)' 0
+(integer) 255
+```
+
+#### `bit.lshift(x, n)`, `bit.rshift(x, n)` and `bit.arshift(x, n)`
+
+Returns either the bitwise logical **left-shift**, bitwise logical **right-shift**, or bitwise **arithmetic right-shift** of its first argument by the number of bits given by the second argument.
+
+#### `bit.rol(x, n)` and `bit.ror(x, n)`
+
+Returns either the bitwise **left rotation**, or bitwise **right rotation** of its first argument by the number of bits given by the second argument.
+Bits shifted out on one side are shifted back in on the other side.
+
+#### `bit.bswap(x)`
+
+Swaps the bytes of its argument and returns it.
+This can be used to convert little-endian 32-bit numbers to big-endian 32-bit numbers and vice versa.
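+
+Usage example (a small sketch; `0x12345678` byte-swapped is `0x78563412`):
+
+```
+redis> EVAL 'return bit.tohex(bit.bswap(0x12345678))' 0
+"78563412"
+```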
diff --git a/docs/interact/programmability/lua-debugging.md b/docs/interact/programmability/lua-debugging.md
new file mode 100644
index 0000000000..26b4b05e1d
--- /dev/null
+++ b/docs/interact/programmability/lua-debugging.md
@@ -0,0 +1,263 @@
+---
+title: Debugging Lua scripts in Redis
+linkTitle: Debugging Lua
+description: How to use the built-in Lua debugger
+weight: 4
+aliases:
+  - /topics/ldb
+  - /docs/manual/programmability/lua-debugging/
+---
+
+Starting with version 3.2, Redis includes a complete Lua debugger that makes the task of writing complex Redis scripts much simpler.
+
+The Redis Lua debugger, codenamed LDB, has the following important features:
+
+* It uses a server-client model, so it's a remote debugger.
+The Redis server acts as the debugging server, while the default client is `redis-cli`.
+However, other clients can be developed by following the simple protocol implemented by the server.
+* By default every new debugging session is a forked session.
+This means that while the Redis Lua script is being debugged, the server does not block and is usable for development or in order to execute multiple debugging sessions in parallel.
+This also means that changes are **rolled back** after the script debugging session finishes, so it's possible to restart a new debugging session again, using exactly the same Redis data set as the previous debugging session.
+* An alternative synchronous (non forked) debugging model is available on demand, so that changes to the dataset can be retained.
+In this mode the server blocks for the time the debugging session is active.
+* Support for step by step execution.
+* Support for static and dynamic breakpoints.
+* Support for logging from the debugged script into the debugger console.
+* Inspection of Lua variables.
+* Tracing of Redis commands executed by the script.
+* Pretty printing of Redis and Lua values.
+* Infinite loop and long execution detection, which simulates a breakpoint.
+
+## Quick start
+
+A simple way to get started with the Lua debugger is to watch this video introduction:
+
+> Important Note: please make sure to avoid debugging Lua scripts using your Redis production server.
+Use a development server instead.
+Also note that using the synchronous debugging mode (which is NOT the default) results in the Redis server blocking for all the time the debugging session lasts.
+
+To start a new debugging session using `redis-cli` do the following:
+
+1. Create your script in some file with your preferred editor. Let's assume you are editing your Redis Lua script located at `/tmp/script.lua`.
+2. Start a debugging session with:
+
+    ./redis-cli --ldb --eval /tmp/script.lua
+
+Note that with the `--eval` option of `redis-cli` you can pass key names and arguments to the script, separated by a comma, like in the following example:
+
+```
+./redis-cli --ldb --eval /tmp/script.lua mykey somekey , arg1 arg2
+```
+
+You'll enter a special mode where `redis-cli` no longer accepts its normal commands, but instead prints a help screen and passes the unmodified debugging commands directly to Redis.
+
+The only commands which are not passed to the Redis debugger are:
+
+* `quit` -- this will terminate the debugging session.
+It's like removing all the breakpoints and using the `continue` debugging command.
+Moreover, the command will exit from `redis-cli`.
+* `restart` -- the debugging session will restart from scratch, **reloading the new version of the script from the file**.
+So a normal debugging cycle involves modifying the script after some debugging, and calling `restart` in order to start debugging again with the new script changes.
+* `help` -- this command is passed to the Redis Lua debugger, which will print a list of commands like the following:
+
+```
+lua debugger> help
+Redis Lua debugger help:
+[h]elp               Show this help.
+[s]tep               Run current line and stop again.
+[n]ext               Alias for step.
+[c]ontinue           Run till next breakpoint.
+[l]ist               List source code around current line.
+[l]ist [line]        List source code around [line].
+                     line = 0 means: current position.
+[l]ist [line] [ctx]  In this form [ctx] specifies how many lines
+                     to show before/after [line].
+[w]hole              List all source code. Alias for 'list 1 1000000'.
+[p]rint              Show all the local variables.
+[p]rint <var>        Show the value of the specified variable.
+                     Can also show global vars KEYS and ARGV.
+[b]reak              Show all breakpoints.
+[b]reak <line>       Add a breakpoint to the specified line.
+[b]reak -<line>      Remove breakpoint from the specified line.
+[b]reak 0            Remove all breakpoints.
+[t]race              Show a backtrace.
+[e]val <code>        Execute some Lua code (in a different callframe).
+[r]edis <cmd>        Execute a Redis command.
+[m]axlen [len]       Trim logged Redis replies and Lua var dumps to len.
+                     Specifying zero as <len> means unlimited.
+[a]bort              Stop the execution of the script. In sync
+                     mode dataset changes will be retained.
+
+Debugger functions you can call from Lua scripts:
+redis.debug()        Produce logs in the debugger console.
+redis.breakpoint()   Stop execution as if there was a breakpoint in the
+                     next line of code.
+```
+
+Note that when you start the debugger it will start in **stepping mode**.
+It will stop at the first line of the script that actually does something before executing it.
+
+From this point you usually call `step` in order to execute the line and go to the next line.
+While you step, Redis will show all the commands executed by the server like in the following example:
+
+```
+* Stopped at 1, stop reason = step over
+-> 1   redis.call('ping')
+lua debugger> step
+<redis> ping
+<reply> "+PONG"
+* Stopped at 2, stop reason = step over
+```
+
+The `<redis>` and `<reply>` lines show the command executed by the line just executed, and the reply from the server. Note that this happens only in stepping mode.
+If you use `continue` in order to execute the script till the next breakpoint, commands will not be dumped on the screen to prevent too much output.
+
+## Termination of the debugging session
+
+When the script terminates naturally, the debugging session ends and `redis-cli` returns in its normal non-debugging mode. You can restart the session using the `restart` command as usual.
+
+Another way to stop a debugging session is just interrupting `redis-cli` manually by pressing `Ctrl+C`. Note that also any event breaking the connection between `redis-cli` and the `redis-server` will interrupt the debugging session.
+
+All the forked debugging sessions are terminated when the server is shut down.
+
+## Abbreviating debugging commands
+
+Debugging can be a very repetitive task. For this reason every Redis debugger command starts with a different character, and you can use the single initial character in order to refer to the command.
+
+So for example instead of typing `step` you can just type `s`.
+
+## Breakpoints
+
+Adding and removing breakpoints is trivial as described in the online help.
+Just use `b 1 2 3 4` to add breakpoints in lines 1, 2, 3 and 4.
+The command `b 0` removes all the breakpoints.
+A specific breakpoint can be removed by passing, as the argument, the line number of the breakpoint prefixed by a minus sign.
+So for example `b -3` removes the breakpoint from line 3.
+
+Note that adding breakpoints to lines that Lua never executes, like declarations of local variables or comments, will not work.
+The breakpoint will be added, but since this part of the script will never be executed, the program will never stop.
+
+## Dynamic breakpoints
+
+Using the `breakpoint` command it is possible to add breakpoints on specific
+lines. However, sometimes we want to stop the execution of the program only
+when something special happens. In order to do so, you can use the
+`redis.breakpoint()` function inside your Lua script. When called, it simulates
+a breakpoint in the next line that will be executed.
+
+```
+if counter > 10 then redis.breakpoint() end
+```
+
+This feature is extremely useful when debugging, so that we can avoid
+continuing the script execution manually multiple times until a given condition
+is encountered.
+
+## Synchronous mode
+
+As explained previously, by default LDB uses forked sessions with rollback
+of all the data changes made by the script while it is being debugged.
+Determinism is usually a good thing to have during debugging, so that successive
+debugging sessions can be started without having to reset the database content
+to its original state.
+
+However, for tracking certain bugs, you may want to retain the changes performed
+to the key space by each debugging session. In that case, start the debugger
+using the special `ldb-sync-mode` option of `redis-cli`.
+
+```
+./redis-cli --ldb-sync-mode --eval /tmp/script.lua
+```
+
+> Note: the Redis server will be unreachable during the debugging session in this mode, so use it with care.
+
+In this special mode, the `abort` command can stop the script half-way, retaining the changes it made to the dataset so far.
+Note that this is different from ending the debugging session normally.
+If you just interrupt `redis-cli`, the script will be fully executed and then the session terminated.
+Instead, with `abort` you can interrupt the script execution in the middle and start a new debugging session if needed.
+
+## Logging from scripts
+
+The `redis.debug()` command is a powerful debugging facility that can be
+called inside the Redis Lua script in order to log things into the debug
+console:
+
+```
+lua debugger> list
+-> 1   local a = {1,2,3}
+   2   local b = false
+   3   redis.debug(a,b)
+lua debugger> continue
+<debug> line 3: {1; 2; 3}, false
+```
+
+If the script is executed outside of a debugging session, `redis.debug()` has no effect at all.
+Note that the function accepts multiple arguments, which are separated by a comma and a space in the output.
+
+Tables and nested tables are displayed correctly, in order to make values simple to observe for the programmer debugging the script.
+
+## Inspecting the program state with `print` and `eval`
+
+While the `redis.debug()` function can be used in order to print values
+directly from within the Lua script, often it is useful to observe the local
+variables of a program while stepping or when stopped at a breakpoint.
+
+The `print` command does just that, and performs a lookup in the call frames,
+starting from the current one and going back to the previous ones, up to the
+top-level. This means that even when we are inside a nested function in a Lua
+script, we can still use `print foo` to look at the value of `foo` in the context
+of the calling function.
When called without a variable name, `print` will +print all variables and their respective values. + +The `eval` command executes small pieces of Lua scripts **outside the context of the current call frame** (evaluating inside the context of the current call frame is not possible with the current Lua internals). +However you can use this command in order to test Lua functions. + +``` +lua debugger> e redis.sha1hex('foo') + "0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33" +``` + +## Debugging clients + +LDB uses the client-server model where the Redis server acts as a debugging server that communicates using [RESP](/topics/protocol). While `redis-cli` is the default debug client, any [client](/clients) can be used for debugging as long as it meets one of the following conditions: + +1. The client provides a native interface for setting the debug mode and controlling the debug session. +2. The client provides an interface for sending arbitrary commands over RESP. +3. The client allows sending raw messages to the Redis server. + +For example, the [Redis plugin](https://redis.com/blog/zerobrane-studio-plugin-for-redis-lua-scripts) for [ZeroBrane Studio](http://studio.zerobrane.com/) integrates with LDB using [redis-lua](https://github.com/nrk/redis-lua). The following Lua code is a simplified example of how the plugin achieves that: + +```Lua +local redis = require 'redis' + +-- add LDB's Continue command +redis.commands['ldbcontinue'] = redis.command('C') + +-- script to be debugged +local script = [[ + local x, y = tonumber(ARGV[1]), tonumber(ARGV[2]) + local result = x * y + return result +]] + +local client = redis.connect('127.0.0.1', 6379) +client:script("DEBUG", "YES") +print(unpack(client:eval(script, 0, 6, 9))) +client:ldbcontinue() +``` diff --git a/docs/interact/pubsub.md b/docs/interact/pubsub.md new file mode 100644 index 0000000000..d9ed33e98c --- /dev/null +++ b/docs/interact/pubsub.md @@ -0,0 +1,199 @@ +--- +title: Redis Pub/Sub +linkTitle: "Pub/sub" +weight: 40 +description: How to use pub/sub channels in Redis +aliases: + - /topics/pubsub + - /docs/manual/pub-sub + - /docs/manual/pubsub +--- + +`SUBSCRIBE`, `UNSUBSCRIBE` and `PUBLISH` implement the [Publish/Subscribe messaging paradigm](http://en.wikipedia.org/wiki/Publish/subscribe) where (citing Wikipedia) senders (publishers) are not programmed to send their messages to specific receivers (subscribers). +Rather, published messages are characterized into channels, without knowledge of what (if any) subscribers there may be. +Subscribers express interest in one or more channels and only receive messages that are of interest, without knowledge of what (if any) publishers there are. +This decoupling of publishers and subscribers allows for greater scalability and a more dynamic network topology. + +For instance, to subscribe to channels "channel11" and "ch:00" the client issues a `SUBSCRIBE` providing the names of the channels: + +```bash +SUBSCRIBE channel11 ch:00 +``` + +Messages sent by other clients to these channels will be pushed by Redis to all the subscribed clients. +Subscribers receive the messages in the order that the messages are published. + +A client subscribed to one or more channels shouldn't issue commands, although it can `SUBSCRIBE` and `UNSUBSCRIBE` to and from other channels. +The replies to subscription and unsubscribing operations are sent in the form of messages so that the client can just read a coherent stream of messages where the first element indicates the type of message. 
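+
+For example, subscribing to those two channels from `redis-cli` produces two confirmation messages, which `redis-cli` might render as follows (the exact rendering can vary between versions):
+
+```
+> SUBSCRIBE channel11 ch:00
+1) "subscribe"
+2) "channel11"
+3) (integer) 1
+1) "subscribe"
+2) "ch:00"
+3) (integer) 2
+```
+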
+The commands that are allowed in the context of a subscribed RESP2 client are: + +* `PING` +* `PSUBSCRIBE` +* `PUNSUBSCRIBE` +* `QUIT` +* `RESET` +* `SSUBSCRIBE` +* `SUBSCRIBE` +* `SUNSUBSCRIBE` +* `UNSUBSCRIBE` + +However, if RESP3 is used (see `HELLO`), a client can issue any commands while in the subscribed state. + +Please note that when using `redis-cli`, in subscribed mode commands such as `UNSUBSCRIBE` and `PUNSUBSCRIBE` cannot be used because `redis-cli` will not accept any commands and can only quit the mode with `Ctrl-C`. + +## Delivery semantics + +Redis' Pub/Sub exhibits _at-most-once_ message delivery semantics. +As the name suggests, it means that a message will be delivered once if at all. +Once the message is sent by the Redis server, there's no chance of it being sent again. +If the subscriber is unable to handle the message (for example, due to an error or a network disconnect) the message is forever lost. + +If your application requires stronger delivery guarantees, you may want to learn about [Redis Streams](/docs/data-types/streams-tutorial). +Messages in streams are persisted, and support both _at-most-once_ as well as _at-least-once_ delivery semantics. + +## Format of pushed messages + +A message is an [array-reply](/topics/protocol#array-reply) with three elements. + +The first element is the kind of message: + +* `subscribe`: means that we successfully subscribed to the channel given as the second element in the reply. + The third argument represents the number of channels we are currently subscribed to. + +* `unsubscribe`: means that we successfully unsubscribed from the channel given as second element in the reply. + The third argument represents the number of channels we are currently subscribed to. + When the last argument is zero, we are no longer subscribed to any channel, and the client can issue any kind of Redis command as we are outside the Pub/Sub state. + +* `message`: it is a message received as a result of a `PUBLISH` command issued by another client. + The second element is the name of the originating channel, and the third argument is the actual message payload. + +## Database & Scoping + +Pub/Sub has no relation to the key space. +It was made to not interfere with it on any level, including database numbers. + +Publishing on db 10, will be heard by a subscriber on db 1. + +If you need scoping of some kind, prefix the channels with the name of the environment (test, staging, production...). + +## Wire protocol example + +``` +SUBSCRIBE first second +*3 +$9 +subscribe +$5 +first +:1 +*3 +$9 +subscribe +$6 +second +:2 +``` + +At this point, from another client we issue a `PUBLISH` operation against the channel named `second`: + +``` +> PUBLISH second Hello +``` + +This is what the first client receives: + +``` +*3 +$7 +message +$6 +second +$5 +Hello +``` + +Now the client unsubscribes itself from all the channels using the `UNSUBSCRIBE` command without additional arguments: + +``` +UNSUBSCRIBE +*3 +$11 +unsubscribe +$6 +second +:1 +*3 +$11 +unsubscribe +$5 +first +:0 +``` + +## Pattern-matching subscriptions + +The Redis Pub/Sub implementation supports pattern matching. +Clients may subscribe to glob-style patterns to receive all the messages sent to channel names matching a given pattern. + +For instance: + +``` +PSUBSCRIBE news.* +``` + +Will receive all the messages sent to the channel `news.art.figurative`, `news.music.jazz`, etc. +All the glob-style patterns are valid, so multiple wildcards are supported. 
+
+```
+PUNSUBSCRIBE news.*
+```
+
+Will then unsubscribe the client from that pattern.
+No other subscriptions will be affected by this call.
+
+Messages received as a result of pattern matching are sent in a different format:
+
+* The type of the message is `pmessage`: it is a message received as a result of a `PUBLISH` command issued by another client, matching a pattern-matching subscription.
+  The second element is the original pattern matched, the third element is the name of the originating channel, and the last element is the actual message payload.
+
+Similarly to `SUBSCRIBE` and `UNSUBSCRIBE`, the `PSUBSCRIBE` and `PUNSUBSCRIBE` commands are acknowledged by the system with messages of type `psubscribe` and `punsubscribe`, using the same format as the `subscribe` and `unsubscribe` messages.
+
+## Messages matching both a pattern and a channel subscription
+
+A client may receive a single message multiple times if it's subscribed to multiple patterns matching a published message, or if it is subscribed to both patterns and channels matching the message.
+This is shown by the following example:
+
+```
+SUBSCRIBE foo
+PSUBSCRIBE f*
+```
+
+In the above example, if a message is sent to channel `foo`, the client will receive two messages: one of type `message` and one of type `pmessage`.
+
+## The meaning of the subscription count with pattern matching
+
+In `subscribe`, `unsubscribe`, `psubscribe` and `punsubscribe` message types, the last argument is the count of subscriptions still active.
+This number is the total number of channels and patterns the client is still subscribed to.
+So the client will exit the Pub/Sub state only when this count drops to zero as a result of unsubscribing from all the channels and patterns.
+
+## Sharded Pub/Sub
+
+Redis 7.0 introduced sharded Pub/Sub, in which shard channels are assigned to slots by the same algorithm used to assign keys to slots.
+A shard message must be sent to a node that owns the slot the shard channel is hashed to.
+The cluster makes sure the published shard messages are forwarded to all nodes in the shard, so clients can subscribe to a shard channel by connecting to either the master responsible for the slot, or to any of its replicas.
+`SSUBSCRIBE`, `SUNSUBSCRIBE` and `SPUBLISH` are used to implement sharded Pub/Sub.
+
+Sharded Pub/Sub helps to scale the usage of Pub/Sub in cluster mode.
+It restricts the propagation of messages to be within the shard of a cluster.
+Hence, the amount of data passing through the cluster bus is limited in comparison to global Pub/Sub, where each message propagates to each node in the cluster.
+This allows users to horizontally scale the Pub/Sub usage by adding more shards.
+
+## Programming example
+
+Pieter Noordhuis provided a great example using EventMachine and Redis to create [a multi-user high-performance web chat](https://gist.github.com/pietern/348262).
+
+## Client library implementation hints
+
+Because all the messages received contain the original subscription causing the message delivery (the channel in the case of the `message` type, and the original pattern in the case of the `pmessage` type), client libraries may bind the original subscription to callbacks (that can be anonymous functions, blocks, or function pointers) using a hash table.
+
+When a message is received, an O(1) lookup can be done to deliver the message to the registered callback.
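+
+As an illustrative sketch of this idea — not the API of any particular client library; `on_subscribe` and `dispatch` are hypothetical client-side helpers:
+
+```Lua
+-- Map each channel or pattern to its handler.
+local callbacks = {}
+
+-- Register a handler for a channel (or a pattern).
+local function on_subscribe(source, handler)
+    callbacks[source] = handler        -- O(1) insertion
+end
+
+-- Deliver a pushed message to the registered handler.
+-- push is {"message", channel, payload} for channel subscriptions, or
+-- {"pmessage", pattern, channel, payload} for pattern subscriptions.
+local function dispatch(push)
+    local handler = callbacks[push[2]] -- O(1) lookup by channel or pattern
+    if handler then
+        handler(unpack(push, 2))
+    end
+end
+```
+
+Using the channel (or pattern) itself as the hash table key keeps both registration and delivery constant-time.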
diff --git a/docs/interact/transactions.md b/docs/interact/transactions.md
new file mode 100644
index 0000000000..e9504fd47e
--- /dev/null
+++ b/docs/interact/transactions.md
@@ -0,0 +1,269 @@
+---
+title: Transactions
+linkTitle: Transactions
+weight: 30
+description: How transactions work in Redis
+aliases:
+  - /topics/transactions
+  - /docs/manual/transactions/
+---
+
+Redis Transactions allow the execution of a group of commands
+in a single step. They are centered around the commands
+`MULTI`, `EXEC`, `DISCARD` and `WATCH`.
+Redis Transactions make two important guarantees:
+
+* All the commands in a transaction are serialized and executed
+sequentially. A request sent by another client will never be
+served **in the middle** of the execution of a Redis Transaction.
+This guarantees that the commands are executed as a single
+isolated operation.
+
+* The `EXEC` command
+triggers the execution of all the commands in the transaction, so
+if a client loses the connection to the server in the context of a
+transaction before calling the `EXEC` command, none of the operations
+are performed. If instead the `EXEC` command is called, all the
+operations are performed. When using the
+[append-only file](/topics/persistence#append-only-file) Redis makes sure
+to use a single write(2) syscall to write the transaction on disk.
+However, if the Redis server crashes or is killed by the system administrator
+in some hard way, it is possible that only a partial number of operations
+is registered. Redis will detect this condition at restart, and will exit with an error.
+Using the `redis-check-aof` tool it is possible to fix the
+append-only file by removing the partial transaction, so that the
+server can start again.
+
+Starting with version 2.2, Redis allows for an extra guarantee on top of the
+above two, in the form of optimistic locking in a way very similar to a
+check-and-set (CAS) operation.
+This is documented [later](#cas) on this page.
+
+## Usage
+
+A Redis Transaction is entered using the `MULTI` command. The command
+always replies with `OK`. At this point the user can issue multiple
+commands. Instead of executing these commands, Redis will queue
+them. All the commands are executed once `EXEC` is called.
+
+Calling `DISCARD` instead will flush the transaction queue and will exit
+the transaction.
+
+The following example increments keys `foo` and `bar` atomically.
+
+```
+> MULTI
+OK
+> INCR foo
+QUEUED
+> INCR bar
+QUEUED
+> EXEC
+1) (integer) 1
+2) (integer) 1
+```
+
+As is clear from the session above, `EXEC` returns an
+array of replies, where every element is the reply of a single command
+in the transaction, in the same order the commands were issued.
+
+When a Redis connection is in the context of a `MULTI` request,
+all commands will reply with the string `QUEUED` (sent as a Status Reply
+from the point of view of the Redis protocol). A queued command is
+simply scheduled for execution when `EXEC` is called.
+
+## Errors inside a transaction
+
+During a transaction it is possible to encounter two kinds of command errors:
+
+* A command may fail to be queued, so there may be an error before `EXEC` is called.
+For instance, the command may be syntactically wrong (wrong number of arguments,
+wrong command name, ...), or there may be some critical condition like an out-of-memory
+condition (if the server is configured to have a memory limit using the `maxmemory` directive).
+* A command may fail *after* `EXEC` is called, for instance because we performed
+an operation against a key holding the wrong type of value (like calling a list operation against a string value).
+
+Starting with Redis 2.6.5, the server will detect an error during the accumulation of commands.
+It will then refuse to execute the transaction, returning an error during `EXEC` and discarding the transaction.
+
+> **Note for Redis < 2.6.5:** Prior to Redis 2.6.5 clients needed to detect errors occurring prior to `EXEC` by checking
+the return value of the queued command: if the command replies with QUEUED it was
+queued correctly, otherwise Redis returns an error.
+If there is an error while queueing a command, most clients
+will abort and discard the transaction. Otherwise, if the client elected to proceed with the transaction
+the `EXEC` command would execute all commands queued successfully regardless of previous errors.
+
+Errors happening *after* `EXEC` instead are not handled in a special way:
+all the other commands will be executed even if some command fails during the transaction.
+
+This is clearer at the protocol level. In the following example one
+command will fail when executed even though the syntax is right:
+
+```
+Trying 127.0.0.1...
+Connected to localhost.
+Escape character is '^]'.
+MULTI
++OK
+SET a abc
++QUEUED
+LPOP a
++QUEUED
+EXEC
+*2
++OK
+-WRONGTYPE Operation against a key holding the wrong kind of value
+```
+
+`EXEC` returned a two-element [array reply](/topics/protocol#array-reply) where one element is an `OK` code and
+the other an error reply. It's up to the client library to find a
+sensible way to provide the error to the user.
+
+It's important to note that
+**even when a command fails, all the other commands in the queue are processed** – Redis will _not_ stop the
+processing of commands.
+
+Another example, again using the wire protocol with `telnet`, shows how
+syntax errors are reported ASAP instead:
+
+```
+MULTI
++OK
+INCR a b c
+-ERR wrong number of arguments for 'incr' command
+```
+
+This time, due to the syntax error, the bad `INCR` command is not queued
+at all.
+
+## What about rollbacks?
+
+Redis does not support rollbacks of transactions, since supporting rollbacks
+would have a significant impact on the simplicity and performance of Redis.
+
+## Discarding the command queue
+
+`DISCARD` can be used in order to abort a transaction. In this case, no
+commands are executed and the state of the connection is restored to
+normal.
+
+```
+> SET foo 1
+OK
+> MULTI
+OK
+> INCR foo
+QUEUED
+> DISCARD
+OK
+> GET foo
+"1"
+```
+
+## Optimistic locking using check-and-set
+
+`WATCH` is used to provide a check-and-set (CAS) behavior to Redis
+transactions.
+
+`WATCH`ed keys are monitored in order to detect changes against them. If
+at least one watched key is modified before the `EXEC` command, the
+whole transaction aborts, and `EXEC` returns a [Null reply](/topics/protocol#nil-reply) to notify that
+the transaction failed.
+
+For example, imagine we have the need to atomically increment the value
+of a key by 1 (let's suppose Redis doesn't have `INCR`).
+
+The first try may be the following:
+
+```
+val = GET mykey
+val = val + 1
+SET mykey $val
+```
+
+This will work reliably only if we have a single client performing the
+operation at a given time. If multiple clients try to increment the key
+at about the same time there will be a race condition. For instance,
+clients A and B will read the old value, for instance, 10.
+The value will be incremented to 11 by both clients, and finally `SET` as the value
+of the key. So the final value will be 11 instead of 12.
+
+Thanks to `WATCH` we are able to model the problem very well:
+
+```
+WATCH mykey
+val = GET mykey
+val = val + 1
+MULTI
+SET mykey $val
+EXEC
+```
+
+Using the above code, if there is a race condition and another client
+modifies `mykey` in the time between our call to `WATCH` and
+our call to `EXEC`, the transaction will fail.
+
+We just have to repeat the operation, hoping this time we'll not get a
+new race. This form of locking is called _optimistic locking_.
+In many use cases, multiple clients will be accessing different keys,
+so collisions are unlikely – usually there's no need to repeat the operation.
+
+## WATCH explained
+
+So what is `WATCH` really about? It is a command that will
+make the `EXEC` conditional: we are asking Redis to perform
+the transaction only if none of the `WATCH`ed keys were modified. This includes
+modifications made by the client, like write commands, and by Redis itself,
+like expiration or eviction. If keys were modified between when they were
+`WATCH`ed and when the `EXEC` was received, the entire transaction will be aborted
+instead.
+
+**NOTE**
+* In Redis versions before 6.0.9, an expired key would not cause a transaction
+to be aborted. [More on this](https://github.com/redis/redis/pull/7920)
+* Commands within a transaction won't trigger the `WATCH` condition since they
+are only queued until the `EXEC` is sent.
+
+`WATCH` can be called multiple times. Simply put, all the `WATCH` calls have
+the effect of watching for changes starting from the call, up to
+the moment `EXEC` is called. You can also send any number of keys to a
+single `WATCH` call.
+
+When `EXEC` is called, all keys are `UNWATCH`ed, regardless of whether
+the transaction was aborted or not. Also, when a client connection is
+closed, everything gets `UNWATCH`ed.
+
+It is also possible to use the `UNWATCH` command (without arguments)
+in order to flush all the watched keys. Sometimes this is useful because we
+optimistically lock a few keys, expecting to perform a
+transaction to alter those keys, but after reading the current content
+of the keys we don't want to proceed. When this happens we just call
+`UNWATCH` so that the connection can already be used freely for new
+transactions.
+
+### Using WATCH to implement ZPOP
+
+A good example to illustrate how `WATCH` can be used to create new
+atomic operations otherwise not supported by Redis is to implement ZPOP
+(`ZPOPMIN`, `ZPOPMAX` and their blocking variants were only added
+in version 5.0), that is, a command that pops the element with the lowest
+score from a sorted set in an atomic way. This is the simplest
+implementation:
+
+```
+WATCH zset
+element = ZRANGE zset 0 0
+MULTI
+ZREM zset element
+EXEC
+```
+
+If `EXEC` fails (i.e. returns a [Null reply](/topics/protocol#nil-reply)) we just repeat the operation.
+
+## Redis scripting and transactions
+
+Something else to consider for transaction-like operations in Redis is
+[Redis scripts](/commands/eval), which are transactional. Everything
+you can do with a Redis Transaction, you can also do with a script, and
+usually the script will be both simpler and faster.
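+
+For example, here is a sketch of the ZPOP implementation above as a single script. No `WATCH` or retry loop is needed, since the script runs atomically (the key name `zset` is just an example):
+
+```
+EVAL "local e = redis.call('ZRANGE', KEYS[1], 0, 0) if e[1] then redis.call('ZREM', KEYS[1], e[1]) end return e" 1 zset
+```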
diff --git a/docs/management/_index.md b/docs/management/_index.md
new file mode 100644
index 0000000000..bc67c5ceae
--- /dev/null
+++ b/docs/management/_index.md
@@ -0,0 +1,6 @@
+---
+title: "Manage Redis"
+linkTitle: "Manage Redis"
+description: An administrator's guide to Redis
+weight: 60
+---
diff --git a/docs/management/admin.md b/docs/management/admin.md
new file mode 100644
index 0000000000..116b3aca21
--- /dev/null
+++ b/docs/management/admin.md
@@ -0,0 +1,86 @@
+---
+title: Redis administration
+linkTitle: Administration
+weight: 1
+description: Advice for configuring and managing Redis in production
+aliases: [
+    /topics/admin,
+    /topics/admin.md,
+    /manual/admin,
+    /manual/admin.md
+]
+---
+
+## Redis setup tips
+
+### Linux
+
+* Deploy Redis using the Linux operating system. Redis is also tested on OS X, and from time to time on FreeBSD and OpenBSD systems. However, Linux is where most of the stress testing is performed, and where most production deployments are run.
+
+* Set the Linux kernel overcommit memory setting to 1. Add `vm.overcommit_memory = 1` to `/etc/sysctl.conf`. Then, reboot or run the command `sysctl vm.overcommit_memory=1` to activate the setting. See [FAQ: Background saving fails with a fork() error on Linux?](https://redis.io/docs/get-started/faq/#background-saving-fails-with-a-fork-error-on-linux) for details.
+
+* To ensure the Linux kernel feature Transparent Huge Pages does not impact Redis memory usage and latency, run the command `echo never > /sys/kernel/mm/transparent_hugepage/enabled` to disable it. See [Latency Diagnosis - Latency induced by transparent huge pages](https://redis.io/docs/management/optimization/latency/#latency-induced-by-transparent-huge-pages) for additional context.
+
+### Memory
+
+* Ensure that swap is enabled and that your swap file size is equal to the amount of memory on your system. If Linux does not have swap set up, and your Redis instance accidentally consumes too much memory, Redis can crash when it is out of memory, or the Linux kernel OOM killer can kill the Redis process. When swapping is enabled, you can detect latency spikes and act on them.
+
+* Set an explicit `maxmemory` option limit in your instance to make sure that it will report errors instead of failing when the system memory limit is close to being reached. Note that `maxmemory` should be set by accounting for Redis's overhead beyond the data itself, as well as the fragmentation overhead. So if you think you have 10 GB of free memory, set it to 8 or 9.
+
+* If you are using Redis in a write-heavy application, while saving an RDB file on disk or rewriting the AOF log, Redis can use up to 2 times the memory normally used. The additional memory used is proportional to the number of memory pages modified by writes during the saving process, so it is often proportional to the number of keys (or aggregate types items) touched during this time. Make sure to size your memory accordingly.
+
+* See the `LATENCY DOCTOR` and `MEMORY DOCTOR` commands to assist in troubleshooting.
+
+### Imaging
+
+* When running under daemontools, use `daemonize no`.
+
+### Replication
+
+* Set up a non-trivial replication backlog in proportion to the amount of memory Redis is using. The backlog allows replicas to sync with the primary (master) instance much more easily.
+
+* If you use replication, Redis performs RDB saves even if persistence is disabled (this does not apply to diskless replication). If you want to avoid any disk usage on the master, enable diskless replication.
+ +* If you are using replication, ensure that either your master has persistence enabled, or that it does not automatically restart on crashes. Replicas will try to maintain an exact copy of the master, so if a master restarts with an empty data set, replicas will be wiped as well. + +### Security + +* By default, Redis does not require any authentication and listens to all the network interfaces. This is a big security issue if you leave Redis exposed on the internet or other places where attackers can reach it. See for example [this attack](http://antirez.com/news/96) to see how dangerous it can be. Please check our [security page](/topics/security) and the [quick start](/topics/quickstart) for information about how to secure Redis. + +## Running Redis on EC2 + +* Use HVM based instances, not PV based instances. +* Do not use old instance families. For example, use m3.medium with HVM instead of m1.medium with PV. +* The use of Redis persistence with EC2 EBS volumes needs to be handled with care because sometimes EBS volumes have high latency characteristics. +* You may want to try the new diskless replication if you have issues when replicas are synchronizing with the master. + +## Upgrading or restarting a Redis instance without downtime + +Redis is designed to be a long-running process in your server. You can modify many configuration options without a restart using the `CONFIG SET` command. You can also switch from AOF to RDB snapshots persistence, or the other way around, without restarting Redis. Check the output of the `CONFIG GET *` command for more information. + +From time to time, a restart is required, for example, to upgrade the Redis process to a newer version, or when you need to modify a configuration parameter that is currently not supported by the `CONFIG` command. + +Follow these steps to avoid downtime. + +* Set up your new Redis instance as a replica for your current Redis instance. In order to do so, you need a different server, or a server that has enough RAM to keep two instances of Redis running at the same time. + +* If you use a single server, ensure that the replica is started on a different port than the master instance, otherwise the replica cannot start. + +* Wait for the replication initial synchronization to complete. Check the replica's log file. + +* Using `INFO`, ensure the master and replica have the same number of keys. Use `redis-cli` to check that the replica is working as expected and is replying to your commands. + +* Allow writes to the replica using `CONFIG SET slave-read-only no`. + +* Configure all your clients to use the new instance (the replica). Note that you may want to use the `CLIENT PAUSE` command to ensure that no client can write to the old master during the switch. + +* Once you confirm that the master is no longer receiving any queries (you can check this using the `MONITOR` command), elect the replica to master using the `REPLICAOF NO ONE` command, and then shut down your master. + +If you are using [Redis Sentinel](/topics/sentinel) or [Redis Cluster](/topics/cluster-tutorial), the simplest way to upgrade to newer versions is to upgrade one replica after the other. Then you can perform a manual failover to promote one of the upgraded replicas to master, and finally promote the last replica. + +--- +**NOTE** + +Redis Cluster 4.0 is not compatible with Redis Cluster 3.2 at cluster bus protocol level, so a mass restart is needed in this case. However, Redis 5 cluster bus is backward compatible with Redis 4. 
+ +--- diff --git a/docs/management/config-file.md b/docs/management/config-file.md new file mode 100644 index 0000000000..05074ec832 --- /dev/null +++ b/docs/management/config-file.md @@ -0,0 +1,12 @@ +--- +title: "Redis configuration file example" +linkTitle: "Configuration example" +weight: 3 +description: > + The self-documented `redis.conf` file that's shipped with every version. +aliases: [ + /docs/manual/config-file + ] +--- + +Note: this file is generated from the unstable redis.conf during the website's build. diff --git a/docs/management/config.md b/docs/management/config.md new file mode 100644 index 0000000000..4a834763be --- /dev/null +++ b/docs/management/config.md @@ -0,0 +1,109 @@ +--- +title: "Redis configuration" +linkTitle: "Configuration" +weight: 2 +description: > + Overview of redis.conf, the Redis configuration file +aliases: [ + /docs/manual/config + ] + +--- + +Redis is able to start without a configuration file using a built-in default +configuration, however this setup is only recommended for testing and +development purposes. + +The proper way to configure Redis is by providing a Redis configuration file, +usually called `redis.conf`. + +The `redis.conf` file contains a number of directives that have a very simple +format: + + keyword argument1 argument2 ... argumentN + +This is an example of a configuration directive: + + replicaof 127.0.0.1 6380 + +It is possible to provide strings containing spaces as arguments using +(double or single) quotes, as in the following example: + + requirepass "hello world" + +Single-quoted string can contain characters escaped by backslashes, and +double-quoted strings can additionally include any ASCII symbols encoded using +backslashed hexadecimal notation "\\xff". + +The list of configuration directives, and their meaning and intended usage +is available in the self documented example redis.conf shipped into the +Redis distribution. + +* The self documented [redis.conf for Redis 7.2](https://raw.githubusercontent.com/redis/redis/7.2/redis.conf). +* The self documented [redis.conf for Redis 7.0](https://raw.githubusercontent.com/redis/redis/7.0/redis.conf). +* The self documented [redis.conf for Redis 6.2](https://raw.githubusercontent.com/redis/redis/6.2/redis.conf). +* The self documented [redis.conf for Redis 6.0](https://raw.githubusercontent.com/redis/redis/6.0/redis.conf). +* The self documented [redis.conf for Redis 5.0](https://raw.githubusercontent.com/redis/redis/5.0/redis.conf). +* The self documented [redis.conf for Redis 4.0](https://raw.githubusercontent.com/redis/redis/4.0/redis.conf). +* The self documented [redis.conf for Redis 3.2](https://raw.githubusercontent.com/redis/redis/3.2/redis.conf). +* The self documented [redis.conf for Redis 3.0](https://raw.githubusercontent.com/redis/redis/3.0/redis.conf). +* The self documented [redis.conf for Redis 2.8](https://raw.githubusercontent.com/redis/redis/2.8/redis.conf). +* The self documented [redis.conf for Redis 2.6](https://raw.githubusercontent.com/redis/redis/2.6/redis.conf). +* The self documented [redis.conf for Redis 2.4](https://raw.githubusercontent.com/redis/redis/2.4/redis.conf). + +Passing arguments via the command line +--- + +You can also pass Redis configuration parameters +using the command line directly. This is very useful for testing purposes. +The following is an example that starts a new Redis instance using port 6380 +as a replica of the instance running at 127.0.0.1 port 6379. 
+ + ./redis-server --port 6380 --replicaof 127.0.0.1 6379 + +The format of the arguments passed via the command line is exactly the same +as the one used in the redis.conf file, with the exception that the keyword +is prefixed with `--`. + +Note that internally this generates an in-memory temporary config file +(possibly concatenating the config file passed by the user, if any) where +arguments are translated into the format of redis.conf. + +Changing Redis configuration while the server is running +--- + +It is possible to reconfigure Redis on the fly without stopping and restarting +the service, or querying the current configuration programmatically using the +special commands `CONFIG SET` and `CONFIG GET`. + +Not all of the configuration directives are supported in this way, but most +are supported as expected. +Please refer to the `CONFIG SET` and `CONFIG GET` pages for more information. + +Note that modifying the configuration on the fly **has no effects on the +redis.conf file** so at the next restart of Redis the old configuration will +be used instead. + +Make sure to also modify the `redis.conf` file accordingly to the configuration +you set using `CONFIG SET`. +You can do it manually, or you can use `CONFIG REWRITE`, which will automatically scan your `redis.conf` file and update the fields which don't match the current configuration value. +Fields non existing but set to the default value are not added. +Comments inside your configuration file are retained. + +Configuring Redis as a cache +--- + +If you plan to use Redis as a cache where every key will have an +expire set, you may consider using the following configuration instead +(assuming a max memory limit of 2 megabytes as an example): + + maxmemory 2mb + maxmemory-policy allkeys-lru + +In this configuration there is no need for the application to set a +time to live for keys using the `EXPIRE` command (or equivalent) since +all the keys will be evicted using an approximated LRU algorithm as long +as we hit the 2 megabyte memory limit. + +Basically, in this configuration Redis acts in a similar way to memcached. +We have more extensive documentation about using Redis as an LRU cache [here](/topics/lru-cache). diff --git a/docs/management/debugging.md b/docs/management/debugging.md new file mode 100644 index 0000000000..60dbc70898 --- /dev/null +++ b/docs/management/debugging.md @@ -0,0 +1,201 @@ +--- +title: "Debugging" +linkTitle: "Debugging" +weight: 10 +description: > + A guide to debugging Redis server processes +aliases: [ + /topics/debugging, + /docs/reference/debugging, + /docs/reference/debugging.md +] +--- + +Redis is developed with an emphasis on stability. We do our best with +every release to make sure you'll experience a stable product with no +crashes. However, if you ever need to debug the Redis process itself, read on. + +When Redis crashes, it produces a detailed report of what happened. However, +sometimes looking at the crash report is not enough, nor is it possible for +the Redis core team to reproduce the issue independently. In this scenario, we +need help from the user who can reproduce the issue. + +This guide shows how to use GDB to provide the information the +Redis developers will need to track the bug more easily. + +## What is GDB? + +GDB is the Gnu Debugger: a program that is able to inspect the internal state +of another program. 
Usually tracking and fixing a bug is an exercise in
+gathering more information about the state of the program at the moment the
+bug happens, so GDB is an extremely useful tool.
+
+GDB can be used in two ways:
+
+* It can attach to a running program and inspect its state at runtime.
+* It can inspect the state of a program that already terminated using what is called a *core file*, that is, the image of the memory at the time the program was running.
+
+From the point of view of investigating Redis bugs we need to use both of these
+GDB modes. The user who is able to reproduce the bug attaches GDB to their running Redis
+instance, and when the crash happens, they create the `core` file that in turn
+the developer will use to inspect the Redis internals at the time of the crash.
+
+This way the developer can perform all the inspections on his or her computer
+without the help of the user, and the user is free to restart Redis in their
+production environment.
+
+## Compiling Redis without optimizations
+
+By default Redis is compiled with the `-O2` switch, which means that compiler
+optimizations are enabled. This makes the Redis executable faster, but at the
+same time it makes Redis (like any other program) harder to inspect using GDB.
+
+It is better to attach GDB to Redis compiled without optimizations using the
+`make noopt` command (instead of just using the plain `make` command). However,
+if you have an already running Redis in production there is no need to recompile
+and restart it if this is going to create problems on your side. GDB still works
+against executables compiled with optimizations.
+
+You should not be overly concerned about the loss of performance from compiling Redis
+without optimizations. It is unlikely that this will cause problems in your
+environment as Redis is not very CPU-bound.
+
+## Attaching GDB to a running process
+
+If you have an already running Redis server, you can attach GDB to it, so that
+if Redis crashes it will be possible to both inspect the internals and generate
+a `core dump` file.
+
+After you attach GDB to the Redis process it will continue running as usual without
+any loss of performance, so this is not a dangerous procedure.
+
+In order to attach GDB the first thing you need is the *process ID* of the running
+Redis instance (the *pid* of the process). You can easily obtain it using
+`redis-cli`:
+
+    $ redis-cli info | grep process_id
+    process_id:58414
+
+In the above example the process ID is **58414**.
+
+Log in to your Redis server.
+
+(Optional but recommended) Start **screen** or **tmux** or any other program that will make sure that your GDB session will not be closed if your ssh connection times out. You can learn more about screen in [this article](http://www.linuxjournal.com/article/6340).
+
+Attach GDB to the running Redis server by typing:
+
+    $ gdb <path-to-redis-executable> <pid>
+
+For example:
+
+    $ gdb /usr/local/bin/redis-server 58414
+
+GDB will start and will attach to the running server printing something like the following:
+
+    Reading symbols for shared libraries + done
+    0x00007fff8d4797e6 in epoll_wait ()
+    (gdb)
+
+At this point GDB is attached but **your Redis instance is blocked by GDB**. In
+order to let the Redis instance continue the execution just type **continue** at
+the GDB prompt, and press enter.
+
+    (gdb) continue
+    Continuing.
+
+Done! Now your Redis instance has GDB attached. Now you can wait for the next crash. :)
+
+Now it's time to detach from your screen/tmux session, if you are running GDB inside it, by
+pressing the **Ctrl-a a** key combination.
+
+## After the crash
+
+Redis has a command to simulate a segmentation fault (in other words, a bad crash):
+the `DEBUG SEGFAULT` command (don't use it against a real production instance, of course!).
+So I'll use this command to crash my instance and show what happens on the GDB side:
+
+    (gdb) continue
+    Continuing.
+
+    Program received signal EXC_BAD_ACCESS, Could not access memory.
+    Reason: KERN_INVALID_ADDRESS at address: 0xffffffffffffffff
+    debugCommand (c=0x7ffc32005000) at debug.c:220
+    220     *((char*)-1) = 'x';
+
+As you can see, GDB detected that Redis crashed, and was even able to show me
+the file name and line number causing the crash. This is already much better
+than the Redis crash report backtrace (containing just function names and
+binary offsets).
+
+## Obtaining the stack trace
+
+The first thing to do is to obtain a full stack trace with GDB. This is as
+simple as using the **bt** command:
+
+    (gdb) bt
+    #0  debugCommand (c=0x7ffc32005000) at debug.c:220
+    #1  0x000000010d246d63 in call (c=0x7ffc32005000) at redis.c:1163
+    #2  0x000000010d247290 in processCommand (c=0x7ffc32005000) at redis.c:1305
+    #3  0x000000010d251660 in processInputBuffer (c=0x7ffc32005000) at networking.c:959
+    #4  0x000000010d251872 in readQueryFromClient (el=0x0, fd=5, privdata=0x7fff76f1c0b0, mask=220924512) at networking.c:1021
+    #5  0x000000010d243523 in aeProcessEvents (eventLoop=0x7fff6ce408d0, flags=220829559) at ae.c:352
+    #6  0x000000010d24373b in aeMain (eventLoop=0x10d429ef0) at ae.c:397
+    #7  0x000000010d2494ff in main (argc=1, argv=0x10d2b2900) at redis.c:2046
+
+This shows the backtrace, but we also want to dump the processor registers using the **info registers** command:
+
+    (gdb) info registers
+    rax            0x0      0
+    rbx            0x7ffc32005000   140721147367424
+    rcx            0x10d2b0a60      4515891808
+    rdx            0x7fff76f1c0b0   140735188943024
+    rsi            0x10d299777      4515796855
+    rdi            0x0      0
+    rbp            0x7fff6ce40730   0x7fff6ce40730
+    rsp            0x7fff6ce40650   0x7fff6ce40650
+    r8             0x4f26b3f7       1327936503
+    r9             0x7fff6ce40718   140735020271384
+    r10            0x81     129
+    r11            0x10d430398      4517462936
+    r12            0x4b7c04f8babc0  1327936503000000
+    r13            0x10d3350a0      4516434080
+    r14            0x10d42d9f0      4517452272
+    r15            0x10d430398      4517462936
+    rip            0x10d26cfd4      0x10d26cfd4
+    eflags         0x10246  66118
+    cs             0x2b     43
+    ss             0x0      0
+    ds             0x0      0
+    es             0x0      0
+    fs             0x0      0
+    gs             0x0      0
+
+Please **make sure to include** both of these outputs in your bug report.
+
+## Obtaining the core file
+
+The next step is to generate the core dump, that is, the image of the memory of the running Redis process. This is done using the `gcore` command:
+
+    (gdb) gcore
+    Saved corefile core.58414
+
+Now you have the core dump to send to the Redis developer, but **it is important
+to understand** that this happens to contain all the data that was inside the
+Redis instance at the time of the crash; Redis developers will make sure not to
+share the content with anyone else, and will delete the file as soon as it is no
+longer used for debugging purposes, but you are warned that by sending the core
+file you are sending your data.
+
+## What to send to developers
+
+Finally you can send everything to the Redis core team:
+
+* The Redis executable you are using.
+* The stack trace produced by the **bt** command, and the registers dump.
+* The core file you generated with gdb.
+* Information about the operating system and GCC version, and the Redis version you are using.
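+
+For example, you could collect the version information and bundle everything into a single archive before sending it (the file names below are purely illustrative):
+
+    $ redis-server --version
+    $ uname -a
+    $ gcc --version
+    $ tar czf redis-crash-report.tar.gz core.58414 \
+          /usr/local/bin/redis-server gdb-output.txt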
+
+## Thank you
+
+Your help is extremely important! Many issues can only be tracked this way. So
+thanks!
diff --git a/docs/management/optimization/_index.md b/docs/management/optimization/_index.md
new file mode 100644
index 0000000000..d444e489ad
--- /dev/null
+++ b/docs/management/optimization/_index.md
@@ -0,0 +1,9 @@
+---
+title: "Optimizing Redis"
+linkTitle: "Optimization"
+weight: 8
+description: Benchmarking, profiling, and optimizations for memory and latency
+aliases: [
+    /docs/reference/optimization
+]
+---
diff --git a/topics/Connections_chart.png b/docs/management/optimization/benchmarks/Connections_chart.png
similarity index 100%
rename from topics/Connections_chart.png
rename to docs/management/optimization/benchmarks/Connections_chart.png
diff --git a/docs/management/optimization/benchmarks/Data_size.png b/docs/management/optimization/benchmarks/Data_size.png
new file mode 100644
index 0000000000..1acff3f2b5
Binary files /dev/null and b/docs/management/optimization/benchmarks/Data_size.png differ
diff --git a/topics/NUMA_chart.gif b/docs/management/optimization/benchmarks/NUMA_chart.gif
similarity index 100%
rename from topics/NUMA_chart.gif
rename to docs/management/optimization/benchmarks/NUMA_chart.gif
diff --git a/docs/management/optimization/benchmarks/index.md b/docs/management/optimization/benchmarks/index.md
new file mode 100644
index 0000000000..cca6d8ac52
--- /dev/null
+++ b/docs/management/optimization/benchmarks/index.md
@@ -0,0 +1,293 @@
+---
+title: "Redis benchmark"
+linkTitle: "Benchmarking"
+weight: 1
+description: >
+    Using the redis-benchmark utility on a Redis server
+aliases: [
+    /topics/benchmarks,
+    /docs/reference/optimization/benchmarks,
+    /docs/reference/optimization/benchmarks.md
+]
+---
+
+Redis includes the `redis-benchmark` utility that simulates running commands done
+by N clients while at the same time sending M total queries. The utility provides
+a default set of tests, or you can supply a custom set of tests.
+
+The following options are supported:
+
+    Usage: redis-benchmark [-h <host>] [-p <port>] [-c <clients>] [-n <requests>] [-k <boolean>]
+
+     -h <hostname>      Server hostname (default 127.0.0.1)
+     -p <port>          Server port (default 6379)
+     -s <socket>        Server socket (overrides host and port)
+     -a <password>      Password for Redis Auth
+     -c <clients>       Number of parallel connections (default 50)
+     -n <requests>      Total number of requests (default 100000)
+     -d <size>          Data size of SET/GET value in bytes (default 3)
+     --dbnum <db>       SELECT the specified db number (default 0)
+     -k <boolean>       1=keep alive 0=reconnect (default 1)
+     -r <keyspacelen>   Use random keys for SET/GET/INCR, random values for SADD
+      Using this option the benchmark will expand the string __rand_int__
+      inside an argument with a 12 digits number in the specified range
+      from 0 to keyspacelen-1. The substitution changes every time a command
+      is executed. Default tests use this to hit random keys in the
+      specified range.
+     -P <numreq>        Pipeline <numreq> requests. Default 1 (no pipeline).
+     -q                 Quiet. Just show query/sec values
+     --csv              Output in CSV format
+     -l                 Loop. Run the tests forever
+     -t <tests>         Only run the comma separated list of tests. The test
+                        names are the same as the ones produced as output.
+     -I                 Idle mode. Just open N idle connections and wait.
+
+You need to have a running Redis instance before launching the benchmark.
+You can run the benchmarking utility like so:
+
+    redis-benchmark -q -n 100000
+
+### Running only a subset of the tests
+
+You don't need to run all the default tests every time you execute `redis-benchmark`.
+For example, to select only a subset of tests, use the `-t` option
+as in the following example:
+
+    $ redis-benchmark -t set,lpush -n 100000 -q
+    SET: 74239.05 requests per second
+    LPUSH: 79239.30 requests per second
+
+This example runs the tests for the `SET` and `LPUSH` commands and uses quiet mode (see the `-q` switch).
+
+You can even benchmark a specific command:
+
+    $ redis-benchmark -n 100000 -q script load "redis.call('set','foo','bar')"
+    script load redis.call('set','foo','bar'): 69881.20 requests per second
+
+### Selecting the size of the key space
+
+By default, the benchmark runs against a single key. In Redis the difference
+between such a synthetic benchmark and a real one is not huge since it is an
+in-memory system; however, it is possible to stress cache misses and in general
+to simulate a more real-world workload by using a large key space.
+
+This is obtained by using the `-r` switch. For instance, if I want to run
+one million SET operations, using a random key for every operation out of
+100k possible keys, I'll use the following command line:
+
+    $ redis-cli flushall
+    OK
+
+    $ redis-benchmark -t set -r 100000 -n 1000000
+    ====== SET ======
+      1000000 requests completed in 13.86 seconds
+      50 parallel clients
+      3 bytes payload
+      keep alive: 1
+
+    99.76% `<=` 1 milliseconds
+    99.98% `<=` 2 milliseconds
+    100.00% `<=` 3 milliseconds
+    100.00% `<=` 3 milliseconds
+    72144.87 requests per second
+
+    $ redis-cli dbsize
+    (integer) 99993
+
+### Using pipelining
+
+By default every client (the benchmark simulates 50 clients if not otherwise
+specified with `-c`) sends the next command only when the reply of the previous
+command is received. This means that the server will likely need a read call
+in order to read each command from every client, and the network round-trip
+time is paid as well.
+
+Redis supports [pipelining](/topics/pipelining), so it is possible to send
+multiple commands at once, a feature often exploited by real-world applications.
+Redis pipelining is able to dramatically improve the number of operations per
+second a server is able to deliver.
+
+Consider this example of running the benchmark using a
+pipelining of 16 commands:
+
+    $ redis-benchmark -n 1000000 -t set,get -P 16 -q
+    SET: 403063.28 requests per second
+    GET: 508388.41 requests per second
+
+Using pipelining results in a significant increase in performance.
+
+### Pitfalls and misconceptions
+
+The first point is obvious: the golden rule of a useful benchmark is to
+only compare apples to apples. You can compare different versions of Redis on the same workload, or the same version of Redis with
+different options. If you plan to compare Redis to something else, then it is
+important to evaluate the functional and technical differences, and take them
+into account.
+
++ Redis is a server: all commands involve network or IPC round trips. It is meaningless to compare it to embedded data stores, because the cost of most operations is primarily in network/protocol management.
++ Redis commands return an acknowledgment for all usual commands. Some other data stores do not. Comparing Redis to stores involving one-way queries is only mildly useful.
++ Naively iterating on synchronous Redis commands does not benchmark Redis itself, but rather measures your network (or IPC) latency and the client library's intrinsic latency. To really test Redis, you need multiple connections (like redis-benchmark) and/or to use pipelining to aggregate several commands and/or multiple threads or processes.
++ Redis is an in-memory data store with some optional persistence options. If you plan to compare it to transactional servers (MySQL, PostgreSQL, etc.), then you should consider activating AOF and decide on a suitable fsync policy.
++ Redis is, mostly, a single-threaded server from the point of view of command execution (actually modern versions of Redis use threads for different things). It is not designed to benefit from multiple CPU cores. People are supposed to launch several Redis instances to scale out on several cores if needed. It is not really fair to compare one single Redis instance to a multi-threaded data store.
+
+The `redis-benchmark` program is a quick and useful way to get some figures and
+evaluate the performance of a Redis instance on given hardware. However,
+by default, it does not represent the maximum throughput a Redis instance can
+sustain. Actually, by using pipelining and a fast client (hiredis), it is fairly
+easy to write a program generating more throughput than redis-benchmark. The
+default behavior of redis-benchmark is to achieve throughput by exploiting
+concurrency only (i.e. it creates several connections to the server).
+It does not use pipelining or any parallelism at all (one pending query per
+connection at most, and no multi-threading), if not explicitly enabled via
+the `-P` parameter. So in some way, using `redis-benchmark` while triggering, for
+example, a `BGSAVE` operation in the background at the same time will provide
+the user with numbers closer to the *worst case* than to the best case.
+
+To run a benchmark using pipelining mode (and achieve higher throughput),
+you need to explicitly use the `-P` option. Please note that it is still a
+realistic behavior since a lot of Redis-based applications actively use
+pipelining to improve performance. However, you should use a pipeline size that
+is more or less the average pipeline length you'll be able to use in your
+application in order to get realistic numbers.
+
+The benchmark should apply the same operations, and work in the same way
+with the multiple data stores you want to compare. It is absolutely pointless to
+compare the result of redis-benchmark to the result of another benchmark
+program and extrapolate.
+
+For instance, Redis and memcached in single-threaded mode can be compared on
+GET/SET operations. Both are in-memory data stores, working mostly in the same
+way at the protocol level. Provided their respective benchmark applications
+aggregate queries in the same way (pipelining) and use a similar number of
+connections, the comparison is actually meaningful.
+
+When you're benchmarking a high-performance, in-memory database like Redis,
+it may be difficult to saturate
+the server. Sometimes, the performance bottleneck is on the client side,
+and not the server side. In that case, the client (i.e., the benchmarking program itself)
+must be fixed, or perhaps scaled out, to reach the maximum throughput.
+
+### Factors impacting Redis performance
+
+There are multiple factors that have direct consequences on Redis performance.
+We mention them here, since they can alter the result of any benchmark.
+Please note, however, that a typical Redis instance running on a low-end,
+untuned box usually provides good enough performance for most applications.
+
++ Network bandwidth and latency usually have a direct impact on the performance.
+It is a good practice to use the ping program to quickly check that the latency
+between the client and server hosts is normal before launching the benchmark.
+Regarding the bandwidth, it is generally useful to estimate
+the throughput in Gbit/s and compare it to the theoretical bandwidth
+of the network. For instance, a benchmark setting 4 KB strings
+in Redis at 100000 q/s would actually consume 3.2 Gbit/s of bandwidth
+and probably fit within a 10 Gbit/s link, but not a 1 Gbit/s one. In many real
+world scenarios, Redis throughput is limited by the network well before being
+limited by the CPU. To consolidate several high-throughput Redis instances
+on a single server, it is worth considering putting in a 10 Gbit/s NIC
+or multiple 1 Gbit/s NICs with TCP/IP bonding.
++ CPU is another very important factor. Being single-threaded, Redis favors
+fast CPUs with large caches and not many cores. At this game, Intel CPUs are
+currently the winners. It is not uncommon to get only half the performance on
+an AMD Opteron CPU compared to similar Nehalem EP/Westmere EP/Sandy Bridge
+Intel CPUs with Redis. When client and server run on the same box, the CPU is
+the limiting factor with redis-benchmark.
++ Speed of RAM and memory bandwidth seem less critical for global performance,
+especially for small objects. For large objects (>10 KB), it may become
+noticeable though. Usually, it is not really cost-effective to buy expensive
+fast memory modules to optimize Redis.
++ Redis runs slower on a VM compared to running without virtualization using
+the same hardware. If you have the chance to run Redis on a physical machine
+this is preferred. However, this does not mean that Redis is slow in
+virtualized environments; the delivered performance is still very good,
+and most of the serious performance issues you may incur in virtualized
+environments are due to over-provisioning, non-local disks with high latency,
+or old hypervisor software that has a slow `fork` syscall implementation.
++ When the server and client benchmark programs run on the same box, both
+the TCP/IP loopback and unix domain sockets can be used. Depending on the
+platform, unix domain sockets can achieve around 50% more throughput than
+the TCP/IP loopback (on Linux for instance). The default behavior of
+redis-benchmark is to use the TCP/IP loopback.
++ The performance benefit of unix domain sockets compared to TCP/IP loopback
+tends to decrease when pipelining is heavily used (i.e. long pipelines).
++ When an ethernet network is used to access Redis, aggregating commands using
+pipelining is especially efficient when the size of the data is kept under
+the ethernet packet size (about 1500 bytes). Actually, processing 10-byte,
+100-byte, or 1000-byte queries results in almost the same throughput.
+See the graph below.
+
+  ![Data size impact](Data_size.png)
+
++ On multi-CPU socket servers, Redis performance becomes dependent on the
+NUMA configuration and process location. The most visible effect is that
+redis-benchmark results seem non-deterministic because client and server
+processes are distributed randomly on the cores. To get deterministic results,
+it is required to use process placement tools (on Linux: taskset or numactl).
+The most efficient combination is always to put the client and server on two
+different cores of the same CPU to benefit from the L3 cache.
+Here are some results of a 4 KB SET benchmark for 3 server CPUs (AMD Istanbul,
+Intel Nehalem EX, and Intel Westmere) with different relative placements.
+Please note this benchmark is not meant to compare CPU models between themselves
+(the CPUs' exact models and frequencies are therefore not disclosed).
+
+ ![NUMA chart](NUMA_chart.gif)
+
++ With high-end configurations, the number of client connections is also an
+important factor. Being based on epoll/kqueue, the Redis event loop is quite
+scalable. Redis has already been benchmarked at more than 60000 connections,
+and was still able to sustain 50000 q/s in these conditions. As a rule of thumb,
+an instance with 30000 connections can only process half the throughput
+achievable with 100 connections. Here is an example showing the throughput of
+a Redis instance per number of connections:
+
+ ![connections chart](Connections_chart.png)
+
++ With high-end configurations, it is possible to achieve higher throughput by
+tuning the NIC(s) configuration and associated interrupts. Best throughput
+is achieved by setting an affinity between Rx/Tx NIC queues and CPU cores,
+and activating RPS (Receive Packet Steering) support. More information is
+available in this
+[thread](https://groups.google.com/forum/#!msg/redis-db/gUhc19gnYgc/BruTPCOroiMJ).
+Jumbo frames may also provide a performance boost when large objects are used.
++ Depending on the platform, Redis can be compiled against different memory
+allocators (libc malloc, jemalloc, tcmalloc), which may have different behaviors
+in terms of raw speed, and internal and external fragmentation.
+If you did not compile Redis yourself, you can use the INFO command to check
+the `mem_allocator` field. Please note most benchmarks do not run long enough to
+generate significant external fragmentation (contrary to production Redis
+instances).
+
+### Other things to consider
+
+One important goal of any benchmark is to get reproducible results, so they
+can be compared to the results of other tests.
+
++ A good practice is to try to run tests on isolated hardware as much as possible.
+If that is not possible, then the system must be monitored to check that the
+benchmark is not impacted by some external activity.
++ Some configurations (desktops and laptops for sure, some servers as well)
+have a variable CPU core frequency mechanism. The policy controlling this
+mechanism can be set at the OS level. Some CPU models are more aggressive than
+others at adapting the frequency of the CPU cores to the workload. To get
+reproducible results, it is better to set the highest possible fixed frequency
+for all the CPU cores involved in the benchmark.
++ An important point is to size the system according to the benchmark.
+The system must have enough RAM and must not swap. On Linux, do not forget
+to set the `overcommit_memory` parameter correctly. Please note 32-bit and 64-bit
+Redis instances do not have the same memory footprint.
++ If you plan to use RDB or AOF for your benchmark, please check that there is no
+other I/O activity in the system. Avoid putting RDB or AOF files on NAS or NFS shares,
+or on any other devices impacting your network bandwidth and/or latency
+(for instance, EBS on Amazon EC2).
++ Set the Redis logging level (loglevel parameter) to warning or notice. Avoid putting
+the generated log file on a remote filesystem.
++ Avoid using monitoring tools that can alter the result of the benchmark. For
+instance, using INFO at regular intervals to gather statistics is probably fine,
+but MONITOR will impact the measured performance significantly.
+
+### Other Redis benchmarking tools
+
+There are several third-party tools that can be used for benchmarking Redis. Refer to each tool's
+documentation for more information about its goals and capabilities.
+
+* [memtier_benchmark](https://github.com/redislabs/memtier_benchmark) from [Redis Ltd.](https://twitter.com/RedisInc) is a NoSQL Redis and Memcache traffic generation and benchmarking tool.
+* [rpc-perf](https://github.com/twitter/rpc-perf) from [Twitter](https://twitter.com/twitter) is a tool for benchmarking RPC services that supports Redis and Memcache.
+* [YCSB](https://github.com/brianfrankcooper/YCSB) from [Yahoo @Yahoo](https://twitter.com/Yahoo) is a benchmarking framework with clients to many databases, including Redis.
diff --git a/docs/management/optimization/cpu-profiling.md b/docs/management/optimization/cpu-profiling.md
new file mode 100644
index 0000000000..9f1383c95e
--- /dev/null
+++ b/docs/management/optimization/cpu-profiling.md
@@ -0,0 +1,236 @@
+---
+title: "Redis CPU profiling"
+linkTitle: "CPU profiling"
+weight: 1
+description: >
+  Performance engineering guide for on-CPU profiling and tracing
+aliases: [
+  /topics/performance-on-cpu,
+  /docs/reference/optimization/cpu-profiling
+]
+---
+
+## Filling the performance checklist
+
+Redis is developed with a great emphasis on performance. We do our best with
+every release to make sure you'll experience a very stable and fast product.
+
+Nevertheless, if you're finding room to improve the efficiency of Redis or
+are pursuing a performance regression investigation, you will need a concise,
+methodical way of monitoring and analyzing Redis performance.
+
+To do so you can rely on different methodologies (some more suited than others,
+depending on the class of issues/analyses we intend to perform). A curated list
+of methodologies and their steps is enumerated by Brendan Gregg at the
+[following link](http://www.brendangregg.com/methodology.html).
+
+We recommend the Utilization, Saturation, and Errors (USE) method for answering
+the question of what your bottleneck is. Check the following mapping between
+system resource, metric, and tools for a practical deep dive:
+[USE method](http://www.brendangregg.com/USEmethod/use-rosetta.html).
+
+### Ensuring the CPU is your bottleneck
+
+This guide assumes you've followed one of the above methodologies to perform a
+complete check of system health, and identified the bottleneck being the CPU.
+**If you have identified that most of the time is spent blocked on I/O, locks,
+timers, paging/swapping, etc., this guide is not for you**.
+
+### Build Prerequisites
+
+For a proper on-CPU analysis, Redis (and any dynamically loaded library like
+Redis Modules) requires stack traces to be available to tracers, which you may
+need to fix first.
+
+By default, Redis is compiled with the `-O2` switch (which we intend to keep
+during profiling). This means that compiler optimizations are enabled. Many
+compilers omit the frame pointer as a runtime optimization (saving a register),
+thus breaking frame pointer-based stack walking. This makes the Redis
+executable faster, but at the same time it makes Redis (like any other program)
+harder to trace, potentially wrongly attributing on-CPU time to the last
+available frame pointer of a call stack that is actually a lot deeper (but
+impossible to trace).
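+
+As a quick sanity check, you can disassemble a function from an existing binary
+and look for the classic `push %rbp` prologue. This is only a rough heuristic:
+it assumes an x86-64 build and a non-stripped binary, and `serverCron` is just
+an arbitrary well-known Redis function used here for illustration:
+
+    $ objdump -d src/redis-server | grep -A2 '<serverCron>:'
+
+If the first instructions of the function do not save the frame pointer,
+rebuild Redis with the flags described below.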
+
+It's important that you ensure that:
+- debug information is present: compile option `-g`
+- the frame pointer register is present: `-fno-omit-frame-pointer`
+- we still run with optimizations to get an accurate representation of production run times, meaning we will keep: `-O2`
+
+You can do this as follows within the Redis main repo:
+
+    $ make REDIS_CFLAGS="-g -fno-omit-frame-pointer"
+
+## A set of instruments to identify performance regressions and/or potential **on-CPU performance** improvements
+
+This document focuses specifically on **on-CPU** resource bottleneck analysis,
+meaning we're interested in understanding where threads are spending CPU cycles
+while running on-CPU and, as importantly, whether those cycles are effectively
+being used for computation or stalled waiting (not blocked!) on memory I/O,
+cache misses, etc.
+
+For that we will rely on toolkits (perf, bcc tools) and hardware-specific PMCs
+(Performance Monitoring Counters) to proceed with:
+
+- Hotspot analysis (perf or bcc tools): to profile code execution and determine which functions are consuming the most time and thus are targets for optimization. We'll present two options to collect, report, and visualize hotspots, either with perf or bcc/BPF tracing tools.
+
+- Call counts analysis: to count events, including function calls, enabling us to correlate several calls/components at once, relying on bcc/BPF tracing tools.
+
+- Hardware event sampling: crucial for understanding CPU behavior, including memory I/O, stall cycles, and cache misses.
+
+### Tool prerequisites
+
+The following steps rely on Linux perf_events (aka ["perf"](https://man7.org/linux/man-pages/man1/perf.1.html)), [bcc/BPF tracing tools](https://github.com/iovisor/bcc), and Brendan Gregg's [FlameGraph repo](https://github.com/brendangregg/FlameGraph).
+
+We assume beforehand you have:
+
+- Installed the perf tool on your system. Most Linux distributions will likely package this as a package related to the kernel. More information about the perf tool can be found at the perf [wiki](https://perf.wiki.kernel.org/).
+- Followed the [bcc/BPF install](https://github.com/iovisor/bcc/blob/master/INSTALL.md#installing-bcc) instructions to install the bcc toolkit on your machine.
+- Cloned Brendan Gregg's [FlameGraph repo](https://github.com/brendangregg/FlameGraph) and made the `stackcollapse-perf.pl`, `difffolded.pl`, and `flamegraph.pl` scripts accessible, to generate the collapsed stack traces and flame graphs.
+
+## Hotspot analysis with perf or eBPF (stack traces sampling)
+
+Profiling CPU usage by sampling stack traces at a timed interval is a fast and
+easy way to identify performance-critical code sections (hotspots).
+
+### Sampling stack traces using perf
+
+To profile both user- and kernel-level stacks of redis-server for a specific
+length of time, for example 60 seconds, at a sampling frequency of 999 samples
+per second:
+
+    $ perf record -g --pid $(pgrep redis-server) -F 999 -- sleep 60
+
+#### Displaying the recorded profile information using perf report
+
+By default, `perf record` generates a perf.data file in the current working
+directory.
+
+You can then report with a call-graph output (call chain, stack backtrace),
+with a minimum call graph inclusion threshold of 0.5%, with:
+
+    $ perf report -g "graph,0.5,caller"
+
+See the [perf report](https://man7.org/linux/man-pages/man1/perf-report.1.html)
+documentation for advanced filtering, sorting, and aggregation capabilities.
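+
+For example, to get a flat, non-interactive view of just the hottest symbols
+(a sketch: the sort key and the 1% cutoff are illustrative choices, not
+requirements):
+
+    $ perf report --stdio --sort symbol --percent-limit 1.0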
+
+#### Visualizing the recorded profile information using Flame Graphs
+
+[Flame graphs](http://www.brendangregg.com/flamegraphs.html) allow for a quick
+and accurate visualization of frequent code-paths. They can be generated using
+Brendan Gregg's open source programs on [GitHub](https://github.com/brendangregg/FlameGraph),
+which create interactive SVGs from folded stack files.
+
+Specifically, for perf we need to convert the generated perf.data into the
+captured stacks, and fold each of them into single lines. You can then render
+the on-CPU flame graph with:
+
+    $ perf script > redis.perf.stacks
+    $ stackcollapse-perf.pl redis.perf.stacks > redis.folded.stacks
+    $ flamegraph.pl redis.folded.stacks > redis.svg
+
+By default, `perf script` reads the perf.data file from the current working
+directory. See the [perf script](https://linux.die.net/man/1/perf-script)
+documentation for advanced usage.
+
+See [FlameGraph usage options](https://github.com/brendangregg/FlameGraph#options)
+for more advanced stack trace visualizations (like the differential one).
+
+#### Archiving and sharing recorded profile information
+
+To make analysis of the perf.data contents possible on a machine other
+than the one on which collection happened, you need to export, along with the
+perf.data file, all object files with build-ids found in the recorded data file.
+This can be easily done with the help of the
+[perf-archive.sh](https://github.com/torvalds/linux/blob/master/tools/perf/perf-archive.sh)
+script:
+
+    $ perf-archive.sh perf.data
+
+Now please run:
+
+    $ tar xvf perf.data.tar.bz2 -C ~/.debug
+
+on the machine where you need to run `perf report`.
+
+### Sampling stack traces using bcc/BPF's profile
+
+Similar to perf, as of Linux kernel 4.9, BPF-optimized profiling is fully
+available, with the promise of lower CPU overhead (as stack traces are
+frequency-counted in kernel context) and lower disk I/O resource usage during
+profiling.
+
+Apart from that, relying solely on bcc/BPF's profile tool also removes the
+perf.data file and the intermediate steps if stack trace analysis is our
+main goal. You can use bcc's profile tool to output folded format directly, for
+flame graph generation:
+
+    $ /usr/share/bcc/tools/profile -F 999 -f --pid $(pgrep redis-server) --duration 60 > redis.folded.stacks
+
+In this manner, we've removed any preprocessing and can render the on-CPU flame
+graph with a single command:
+
+    $ flamegraph.pl redis.folded.stacks > redis.svg
+
+## Call counts analysis with bcc/BPF
+
+A function may consume significant CPU cycles either because its code is slow
+or because it's frequently called. To determine the rate at which functions are
+being called, you can rely upon call counts analysis using BCC's `funccount` tool:
+
+    $ /usr/share/bcc/tools/funccount 'redis-server:(call*|*Read*|*Write*)' --pid $(pgrep redis-server) --duration 60
+    Tracing 64 functions for "redis-server:(call*|*Read*|*Write*)"... Hit Ctrl-C to end.
+
+    FUNC                                          COUNT
+    call                                            334
+    handleClientsWithPendingWrites                  388
+    clientInstallWriteHandler                       388
+    postponeClientRead                              514
+    handleClientsWithPendingReadsUsingThreads       735
+    handleClientsWithPendingWritesUsingThreads      735
+    prepareClientToWrite                           1442
+    Detaching...
+
+The above output shows that, while tracing, the Redis `call()` function was
+called 334 times, `handleClientsWithPendingWrites()` 388 times, etc.
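+
+To tell the two cases apart, it also helps to measure how long each invocation
+takes. A possible follow-up, assuming bcc's `funclatency` tool is installed at
+the usual path, is to collect a latency histogram (in microseconds) for the
+`call` function:
+
+    $ /usr/share/bcc/tools/funclatency -u -d 60 --pid $(pgrep redis-server) 'redis-server:call'
+
+A function with a high call count but a narrow, low-latency histogram is cheap
+per invocation, while a modest call count with a wide histogram points at slow
+code instead.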
+
+## Hardware event counting with Performance Monitoring Counters (PMCs)
+
+Many modern processors contain a performance monitoring unit (PMU) exposing
+Performance Monitoring Counters (PMCs). PMCs are crucial for understanding CPU
+behavior, including memory I/O, stall cycles, and cache misses, and provide
+low-level CPU performance statistics that aren't available anywhere else.
+
+The design and functionality of a PMU is CPU-specific, and you should assess
+your CPU's supported counters and features by using `perf list`.
+
+To calculate, for a duration of 60 seconds and specifically for the Redis
+process, the number of instructions per cycle, the number of micro-ops
+executed, the number of cycles during which no micro-ops were dispatched, and
+the number of cycles stalled on memory, including per-memory-type stalls:
+
+    $ perf stat -e "cpu-clock,cpu-cycles,instructions,uops_executed.core,uops_executed.stall_cycles,cache-references,cache-misses,cycle_activity.stalls_total,cycle_activity.stalls_mem_any,cycle_activity.stalls_l3_miss,cycle_activity.stalls_l2_miss,cycle_activity.stalls_l1d_miss" --pid $(pgrep redis-server) -- sleep 60
+
+    Performance counter stats for process id '3038':
+
+        60046.411437      cpu-clock (msec)                 #    1.001 CPUs utilized
+        168991975443      cpu-cycles                       #    2.814 GHz                   (36.40%)
+        388248178431      instructions                     #    2.30  insn per cycle        (45.50%)
+        443134227322      uops_executed.core               # 7379.862 M/sec                 (45.51%)
+         30317116399      uops_executed.stall_cycles       #  504.895 M/sec                 (45.51%)
+           670821512      cache-references                 #   11.172 M/sec                 (45.52%)
+            23727619      cache-misses                     #    3.537 % of all cache refs   (45.43%)
+         30278479141      cycle_activity.stalls_total      #  504.251 M/sec                 (36.33%)
+         19981138777      cycle_activity.stalls_mem_any    #  332.762 M/sec                 (36.33%)
+           725708324      cycle_activity.stalls_l3_miss    #   12.086 M/sec                 (36.33%)
+          8487905659      cycle_activity.stalls_l2_miss    #  141.356 M/sec                 (36.32%)
+         10011909368      cycle_activity.stalls_l1d_miss   #  166.736 M/sec                 (36.31%)
+
+        60.002765665 seconds time elapsed
+
+It's important to know that there are two very different ways in which PMCs can
+be used (counting and sampling), and we've focused solely on PMC counting for
+the sake of this analysis. Brendan Gregg clearly explains the difference at the
+following [link](http://www.brendangregg.com/blog/2017-05-04/the-pmcs-of-ec2.html).
+
diff --git a/docs/management/optimization/latency-monitor.md b/docs/management/optimization/latency-monitor.md
new file mode 100644
index 0000000000..07809edb28
--- /dev/null
+++ b/docs/management/optimization/latency-monitor.md
@@ -0,0 +1,106 @@
+---
+title: "Redis latency monitoring"
+linkTitle: "Latency monitoring"
+weight: 1
+description: Discovering slow server events in Redis
+aliases: [
+  /topics/latency-monitor,
+  /docs/reference/optimization/latency-monitor
+]
+---
+
+Redis is often used for demanding use cases, where it
+serves a large number of queries per second per instance, but also has strict latency requirements for the average response
+time and the worst-case latency.
+
+While Redis is an in-memory system, it deals with the operating system in
+different ways, for example, in the context of persisting to disk.
+Moreover, Redis implements a rich set of commands. Certain commands
+are fast and run in constant or logarithmic time. Other commands are slower
+O(N) commands that can cause latency spikes.
+
+Finally, Redis is single-threaded. This is usually an advantage
+from the point of view of the amount of work it can perform per core, and in
+the latency figures it is able to provide.
However, it poses
+a challenge for latency, since the single
+thread must be able to perform certain tasks incrementally, for
+example key expiration, in a way that does not impact the other clients
+that are served.
+
+For all these reasons, Redis 2.8.13 introduced a new feature called
+**Latency Monitoring**, which helps the user check and troubleshoot possible
+latency problems. Latency monitoring is composed of the following conceptual
+parts:
+
+* Latency hooks that sample different latency-sensitive code paths.
+* Time series recording of latency spikes, split by different events.
+* A reporting engine to fetch raw data from the time series.
+* An analysis engine to provide human-readable reports and hints according to the measurements.
+
+The rest of this document covers the latency monitoring subsystem
+details. For more information about the general topic of Redis
+and latency, see [Redis latency problems troubleshooting](/topics/latency).
+
+## Events and time series
+
+Different monitored code paths have different names and are called *events*.
+For example, `command` is an event that measures latency spikes of possibly slow
+command executions, while `fast-command` is the event name for the monitoring
+of the O(1) and O(log N) commands. Other events are less generic and monitor
+specific operations performed by Redis. For example, the `fork` event
+only monitors the time taken by Redis to execute the `fork(2)` system call.
+
+A latency spike is an event that takes more time to run than the configured latency
+threshold. There is a separate time series associated with every monitored
+event. This is how the time series work:
+
+* Every time a latency spike happens, it is logged in the appropriate time series.
+* Every time series is composed of 160 elements.
+* Each element is a pair made of a Unix timestamp of the time the latency spike was measured and the number of milliseconds the event took to execute (illustrated below).
+* Latency spikes for the same event that occur in the same second are merged by taking the maximum latency, so even if continuous latency spikes are measured for a given event, which could happen with a low threshold, at least 160 seconds of history are available.
+* The all-time maximum latency is also recorded for every event.
+
+The framework monitors and logs latency spikes in the execution time of these events:
+
+* `command`: regular commands.
+* `fast-command`: O(1) and O(log N) commands.
+* `fork`: the `fork(2)` system call.
+* `rdb-unlink-temp-file`: the `unlink(2)` system call.
+* `aof-fsync-always`: the `fsync(2)` system call when invoked by the `appendfsync always` policy.
+* `aof-write`: writing to the AOF - a catchall event for `write(2)` system calls.
+* `aof-write-pending-fsync`: the `write(2)` system call when there is a pending fsync.
+* `aof-write-active-child`: the `write(2)` system call when there are active child processes.
+* `aof-write-alone`: the `write(2)` system call when there is no pending fsync and no active child process.
+* `aof-fstat`: the `fstat(2)` system call.
+* `aof-rename`: the `rename(2)` system call for renaming the temporary file after completing `BGREWRITEAOF`.
+* `aof-rewrite-diff-write`: writing the differences accumulated while performing `BGREWRITEAOF`.
+* `active-defrag-cycle`: the active defragmentation cycle.
+* `expire-cycle`: the expiration cycle.
+* `eviction-cycle`: the eviction cycle.
+* `eviction-del`: deletes during the eviction cycle.
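+
+As an illustration of the time series format described above, this is what a
+`LATENCY HISTORY` reply could look like for the `command` event (the timestamps
+and values shown here are made up):
+
+    127.0.0.1:6379> LATENCY HISTORY command
+    1) 1) (integer) 1405067976
+       2) (integer) 251
+    2) 1) (integer) 1405067989
+       2) (integer) 1001
+
+Each pair is the Unix timestamp at which the spike was measured and the number
+of milliseconds the event took to execute.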
+
+## How to enable latency monitoring
+
+What is high latency for one use case may not be considered high latency for another. Some applications may require that all queries be served in less than 1 millisecond. For other applications, it may be acceptable for a small number of clients to experience a 2-second latency on occasion.
+
+The first step to enable the latency monitor is to set a **latency threshold** in milliseconds. Only events that take longer than the specified threshold will be logged as latency spikes. The user should set the threshold according to their needs. For example, if the application requires a maximum acceptable latency of 100 milliseconds, the threshold should be set to log all the events blocking the server for a time equal to or greater than 100 milliseconds.
+
+Enable the latency monitor at runtime in a production server
+with the following command:
+
+    CONFIG SET latency-monitor-threshold 100
+
+Monitoring is turned off by default (threshold set to 0), even if the actual cost of latency monitoring is near zero. While the memory requirements of latency monitoring are very small, there is no good reason to raise the baseline memory usage of a Redis instance that is working well.
+
+## Report information with the LATENCY command
+
+The user interface to the latency monitoring subsystem is the `LATENCY` command.
+Like many other Redis commands, `LATENCY` accepts subcommands that modify its behavior. These subcommands are:
+
+* `LATENCY LATEST` - returns the latest latency samples for all events.
+* `LATENCY HISTORY` - returns latency time series for a given event.
+* `LATENCY RESET` - resets latency time series data for one or more events.
+* `LATENCY GRAPH` - renders an ASCII-art graph of an event's latency samples.
+* `LATENCY DOCTOR` - replies with a human-readable latency analysis report.
+
+Refer to each subcommand's documentation page for further information.
diff --git a/topics/latency.md b/docs/management/optimization/latency.md
similarity index 51%
rename from topics/latency.md
rename to docs/management/optimization/latency.md
index e252ec2811..1419fa8fc6 100644
--- a/topics/latency.md
+++ b/docs/management/optimization/latency.md
@@ -1,5 +1,13 @@
-Redis latency problems troubleshooting
-===
+---
+title: "Diagnosing latency issues"
+linkTitle: "Latency diagnosis"
+weight: 1
+description: Finding the causes of slow responses
+aliases: [
+  /topics/latency,
+  /docs/reference/optimization/latency
+]
+---
 
 This document will help you understand what the problem could be if you
 are experiencing latency problems with Redis.
@@ -9,11 +17,113 @@ issues a command and the time the reply to the command is received by the
 client. Usually Redis processing time is extremely low, in the sub microsecond
 range, but there are certain conditions leading to higher latency figures.
 
+I've little time, give me the checklist
+---
+
+The following documentation is very important in order to run Redis in
+a low-latency fashion. However, I understand that we are busy people, so
+let's start with a quick checklist. If these steps fail to solve your problem,
+please return here to read the full documentation.
+
+1. Make sure you are not running slow commands that are blocking the server. Use the Redis [Slow Log feature](/commands/slowlog) to check this.
+2. For EC2 users, make sure you use HVM-based modern EC2 instances, like m3.medium. Otherwise fork() is too slow.
+3. Transparent huge pages must be disabled from your kernel.
Use `echo never > /sys/kernel/mm/transparent_hugepage/enabled` to disable them, and restart your Redis process.
+4. If you are using a virtual machine, it is possible that you have an intrinsic latency that has nothing to do with Redis. Check the minimum latency you can expect from your runtime environment using `./redis-cli --intrinsic-latency 100`. Note: you need to run this command on *the server*, not on the client.
+5. Enable and use the [Latency monitor](/topics/latency-monitor) feature of Redis in order to get a human-readable description of the latency events and causes in your Redis instance.
+
+In general, use the following table for durability vs. latency/performance tradeoffs, ordered from stronger safety to better latency.
+
+1. AOF + fsync always: this is very slow, you should use it only if you know what you are doing.
+2. AOF + fsync every second: this is a good compromise.
+3. AOF + fsync every second + no-appendfsync-on-rewrite option set to yes: this is like the above, but it avoids fsyncing during rewrites to lower the disk pressure.
+4. AOF + fsync never. Fsyncing is up to the kernel in this setup, with even less disk pressure and risk of latency spikes.
+5. RDB. Here you have a vast spectrum of tradeoffs depending on the save triggers you configure.
+
+And now for people with 15 minutes to spend, the details...
+
+Measuring latency
+-----------------
+
+If you are experiencing latency problems, you probably know how to measure
+it in the context of your application, or maybe your latency problem is very
+evident even macroscopically. However, redis-cli can be used to measure the
+latency of a Redis server in milliseconds, just try:
+
+    redis-cli --latency -h `host` -p `port`
+
+Using the internal Redis latency monitoring subsystem
+---
+
+Since Redis 2.8.13, Redis provides latency monitoring capabilities that
+are able to sample different execution paths to understand where the
+server is blocking. This makes debugging the problems illustrated in
+this documentation much simpler, so we suggest enabling latency monitoring
+ASAP. Please refer to the [Latency monitor documentation](/topics/latency-monitor).
+
+While the latency monitoring sampling and reporting capabilities will make
+it simpler to understand the source of latency in your Redis system, it is still
+advised that you read this documentation extensively to better understand
+the topic of Redis and latency spikes.
+
+Latency baseline
+----------------
+
+There is a kind of latency that is inherently part of the environment where
+you run Redis: the latency provided by your operating system kernel
+and, if you are using virtualization, by the hypervisor you are using.
+
+While this latency can't be removed, it is important to study it because
+it is the baseline, or in other words, you won't be able to achieve a Redis
+latency that is better than the latency that every process running in your
+environment will experience because of the kernel or hypervisor implementation
+or setup.
+
+We call this kind of latency **intrinsic latency**, and `redis-cli`, starting
+from Redis version 2.8.7, is able to measure it. This is an example run
+under Linux 3.11.0 running on an entry-level server.
+
+Note: the argument `100` is the number of seconds the test will be executed for.
+The longer we run the test, the more likely we'll be able to spot
+latency spikes. 100 seconds is usually appropriate; however, you may want
+to perform a few runs at different times.
Please note that the test is CPU
+intensive, and will likely saturate a single core in your system.
+
+    $ ./redis-cli --intrinsic-latency 100
+    Max latency so far: 1 microseconds.
+    Max latency so far: 16 microseconds.
+    Max latency so far: 50 microseconds.
+    Max latency so far: 53 microseconds.
+    Max latency so far: 83 microseconds.
+    Max latency so far: 115 microseconds.
+
+Note: redis-cli in this special case needs to **run on the server** where you run or plan to run Redis, not on the client. In this special mode, redis-cli does not connect to a Redis server at all: it will just try to measure the largest amount of time during which the kernel does not provide CPU time to the redis-cli process itself.
+
+In the above example, the intrinsic latency of the system is just 0.115
+milliseconds (or 115 microseconds), which is good news; however, keep in mind
+that the intrinsic latency may change over time depending on the load of the
+system.
+
+Virtualized environments will not show such good numbers, especially with high
+load or if there are noisy neighbors. The following is a run on a Linode 4096
+instance running Redis and Apache:
+
+    $ ./redis-cli --intrinsic-latency 100
+    Max latency so far: 573 microseconds.
+    Max latency so far: 695 microseconds.
+    Max latency so far: 919 microseconds.
+    Max latency so far: 1606 microseconds.
+    Max latency so far: 3191 microseconds.
+    Max latency so far: 9243 microseconds.
+    Max latency so far: 9671 microseconds.
+
+Here we have an intrinsic latency of 9.7 milliseconds: this means that we can't expect Redis to do better than that. However, other runs at different times, in different virtualization environments, with higher load or with noisy neighbors can easily show even worse values. We were able to measure up to 40 milliseconds in
+systems otherwise apparently running normally.
+
 Latency induced by network and communication
 --------------------------------------------
 
 Clients connect to Redis using a TCP/IP connection or a Unix domain connection.
-The typical latency of a 1 GBits/s network is about 200 us, while the latency
+The typical latency of a 1 Gbit/s network is about 200 us, while the latency
 with a Unix domain socket can be as low as 30 us. It actually depends on your
 network and system hardware. On top of the communication itself, the system
 adds some more latency (due to thread scheduling, CPU caches, NUMA placement,
@@ -39,17 +149,16 @@ Here are some guidelines:
 
 + Prefer to use aggregated commands (MSET/MGET), or commands with variadic
 parameters (if possible) over pipelining.
 + Prefer to use pipelining (if possible) over sequence of roundtrips.
-+ Future version of Redis will support Lua server-side scripting
-  (experimental branches are already available) to cover cases that are not
-  suitable for raw pipelining (for instance when the result of a command is
-  an input for the following commands).
++ Redis supports Lua server-side scripting to cover cases that are not suitable
+  for raw pipelining (for instance when the result of a command is an input for
+  the following commands).
 
 On Linux, some people can achieve better latencies by playing with process
 placement (taskset), cgroups, real-time priorities (chrt), NUMA
 configuration (numactl), or by using a low-latency kernel.
 Please note vanilla Redis is not really suitable to be bound on a **single**
 CPU core. Redis can fork background tasks that can be extremely CPU consuming
-like bgsave or AOF rewrite. These tasks must **never** run on the same core
+like `BGSAVE` or `BGREWRITEAOF`.
These tasks must **never** run on the same core
+as the main event loop.
 In most situations, these kind of system level optimizations are not needed.
@@ -62,25 +171,25 @@ Redis uses a *mostly* single threaded design. This means that a single process
 serves all the client requests, using a technique called **multiplexing**.
 This means that Redis can serve a single request in every given moment, so
 all the requests are served sequentially. This is very similar to how Node.js
-works as well. However, both products are often not perceived as being slow.
-This is caused in part by the small about of time to complete a single request,
+works as well. However, both products are not often perceived as being slow.
+This is caused in part by the small amount of time to complete a single request,
 but primarily because these products are designed to not block on system calls,
 such as reading data from or writing data to a socket.
 
 I said that Redis is *mostly* single threaded since actually from Redis 2.4
 we use threads in Redis in order to perform some slow I/O operations in the
 background, mainly related to disk I/O, but this does not change the fact
-that Redis servers all the requests using a single thread.
+that Redis serves all the requests using a single thread.
 
 Latency generated by slow commands
 ----------------------------------
 
 A consequence of being single thread is that when a request is slow to serve
 all the other clients will wait for this request to be served. When executing
-normal commands, like **GET** or **SET** or **LPUSH** this is not a problem
-at all since this commands are executed in constant (and very small) time.
-However there are commands operating on many elements, like **SORT**, **LREM**,
-**SUNION** and others. For instance taking the intersection of two big sets
+normal commands, like `GET` or `SET` or `LPUSH` this is not a problem
+at all since these commands are executed in constant (and very small) time.
+However there are commands operating on many elements, like `SORT`, `LREM`,
+`SUNION` and others. For instance taking the intersection of two big sets
 can take a considerable amount of time.
 
 The algorithmic complexity of all commands is documented. A good practice
@@ -88,7 +197,7 @@ is to systematically check it when using commands you are not familiar with.
 
 If you have latency concerns you should either not use slow commands against
 values composed of many elements, or you should run a replica using Redis
-replication where to run all your slow queries.
+replication where you run all your slow queries.
 
 It is possible to monitor slow commands using the Redis
 [Slow Log feature](/commands/slowlog).
@@ -98,96 +207,70 @@ Additionally, you can use your favorite per-process monitoring program
 main Redis process. If it is high while the traffic is not, it is usually
 a sign that slow commands are used.
 
+**IMPORTANT NOTE**: a VERY common source of latency generated by the execution
+of slow commands is the use of the `KEYS` command in production environments.
+`KEYS`, as documented in the Redis documentation, should only be used for
+debugging purposes. Since Redis 2.8, new commands were introduced in order to
+iterate the key space and other large collections incrementally; please check
+the `SCAN`, `SSCAN`, `HSCAN` and `ZSCAN` commands for more information.
 
 Latency generated by fork
 -------------------------
 
-Depending on the chosen persistency mechanism, Redis has to fork background
-processes. The fork operation (running in the main thread) can induce latency
-by itself.
+In order to generate the RDB file in the background, or to rewrite the Append Only File if AOF persistence is enabled, Redis has to fork background processes.
+The fork operation (running in the main thread) can induce latency by itself.
 
 Forking is an expensive operation on most Unix-like systems, since it involves
 copying a good number of objects linked to the process. This is especially
 true for the page table associated to the virtual memory mechanism.
 
-For instance on a Linux/AMD64 system, the memory is divided in 4 KB pages.
+For instance, on a Linux/AMD64 system, the memory is divided into 4 kB pages.
 To convert virtual addresses to physical addresses, each process stores
 a page table (actually represented as a tree) containing at least a pointer
 per page of the address space of the process. So a large 24 GB Redis instance
-requires a page table of 24 GB / 4 KB * 8 = 48 MB.
+requires a page table of 24 GB / 4 kB * 8 = 48 MB.
 
 When a background save is performed, this instance will have to be forked,
 which will involve allocating and copying 48 MB of memory. It takes time
 and CPU, especially on virtual machines where allocation and initialization
 of a large memory chunk can be expensive.
 
-Some CPUs can use different page size though. AMD and Intel CPUs can support
-2 MB page size if needed. These pages are nicknamed *huge pages*. Some
-operating systems can optimize page size in real time, transparently
-aggregating small pages into huge pages on the fly.
-
-On Linux, explicit huge pages management has been introduced in 2.6.16, and
-implicit transparent huge pages are available starting in 2.6.38. If you
-run recent Linux distributions (for example RH 6 or derivatives), transparent
-huge pages can be activated, and you can use a vanilla Redis version with them.
-
-This is the preferred way to experiment/use with huge pages on Linux.
-
-Now, if you run older distributions (RH 5, SLES 10-11, or derivatives), and
-not afraid of a few hacks, Redis requires to be patched in order to support
-huge pages.
-
-The first step would be to read [Mel Gorman's primer on huge pages](http://lwn.net/Articles/374424/)
-
-There are currently two ways to patch Redis to support huge pages.
-
-+ For Redis 2.4, the embedded jemalloc allocator must be patched.
-[patch](https://gist.github.com/1171054) by Pieter Noordhuis.
-Note this patch relies on the anonymous mmap huge page support,
-only available starting 2.6.32, so this method cannot be used for older
-distributions (RH 5, SLES 10, and derivatives).
-
-+ For Redis 2.2, or 2.4 with the libc allocator, Redis makefile
-must be altered to link Redis with
-[the libhugetlbfs library](http://libhugetlbfs.sourceforge.net/).
-It is a straightforward [change](https://gist.github.com/1240452)
-
-Then, the system must be configured to support huge pages.
+Fork time in different systems
+------------------------------
 
-The following command allocates and makes N huge pages available:
+Modern hardware is pretty fast at copying the page table, but Xen is not.
+The problem with Xen is not virtualization-specific, but Xen-specific. For instance, using VMware or VirtualBox does not result in slow fork times.
+The following is a table that compares fork times for different Redis instance
+sizes. Data is obtained by performing a BGSAVE and looking at the `latest_fork_usec` field in the `INFO` command output.
- $ sudo sysctl -w vm.nr_hugepages=
+However, the good news is that **new types of EC2 HVM-based instances are much
+better with fork times**, almost on par with physical servers, so, for example,
+using m3.medium (or better) instances will provide good results.

-The following command mounts the huge page filesystem:
+* **Linux beefy VM on VMware** 6.0 GB RSS forked in 77 milliseconds (12.8 milliseconds per GB).
+* **Linux running on physical machine (Unknown HW)** 6.1 GB RSS forked in 80 milliseconds (13.1 milliseconds per GB).
+* **Linux running on physical machine (Xeon @ 2.27 GHz)** 6.9 GB RSS forked in 62 milliseconds (9 milliseconds per GB).
+* **Linux VM on 6sync (KVM)** 360 MB RSS forked in 8.2 milliseconds (23.3 milliseconds per GB).
+* **Linux VM on EC2, old instance types (Xen)** 6.1 GB RSS forked in 1460 milliseconds (239.3 milliseconds per GB).
+* **Linux VM on EC2, new instance types (Xen)** 1 GB RSS forked in 10 milliseconds (10 milliseconds per GB).
+* **Linux VM on Linode (Xen)** 0.9 GB RSS forked in 382 milliseconds (424 milliseconds per GB).

- $ sudo mount -t hugetlbfs none /mnt/hugetlbfs
+As you can see, certain VMs running on Xen have a performance hit of between one and two orders of magnitude. For EC2 users the suggestion is simple: use modern HVM-based instances.

-In all cases, once Redis is running with huge pages (transparent or
-not), the following benefits are expected:
+Latency induced by transparent huge pages
+-----------------------------------------

-+ The latency due to the fork operations is dramatically reduced.
-  This is mostly useful for very large instances, and especially
-  on a VM.
-+ Redis is faster due to the fact the translation look-aside buffer
-  (TLB) of the CPU is more efficient to cache page table entries
-  (i.e. the hit ratio is better). Do not expect miracle, it is only
-  a few percent gain at most.
-+ Redis memory cannot be swapped out anymore, which is interesting
-  to avoid outstanding latencies due to virtual memory.
+Unfortunately, when a Linux kernel has transparent huge pages enabled, Redis
+incurs a big latency penalty after the `fork` call is used in order to
+persist on disk. Huge pages are the cause of the following issue:
+Make sure to **disable transparent huge pages** using the following command: + echo never > /sys/kernel/mm/transparent_hugepage/enabled Latency induced by swapping (operating system paging) ----------------------------------------------------- @@ -207,7 +290,7 @@ The kernel relocates Redis memory pages on disk mainly because of three reasons: * The system is under memory pressure since the running processes are demanding more physical memory than the amount that is available. The simplest instance of -this problem is simply Redis using more memory than the one available. +this problem is simply Redis using more memory than is available. * The Redis instance data set, or part of the data set, is mostly completely idle (never accessed by clients), so the kernel could swap idle memory pages on disk. This problem is very rare since even a moderately slow instance will touch all @@ -224,7 +307,7 @@ this is the case. The first thing to do is to checking the amount of Redis memory that is swapped on disk. In order to do so you need to obtain the Redis instance pid: - $ redis-cli info | grep redis-cli info | grep process_id + $ redis-cli info | grep process_id process_id:5454 Now enter the /proc file system directory for this process: @@ -274,7 +357,7 @@ to do is to grep for the Swap field across all the file: Swap: 0 kB Swap: 0 kB -If everything is 0 kb, or if there are sporadic 4k entries, everything is +If everything is 0 kB, or if there are sporadic 4k entries, everything is perfectly normal. Actually in our example instance (the one of a real web site running Redis and serving hundreds of users every second) there are a few entries that show more swapped pages. To investigate if this is a serious @@ -344,7 +427,7 @@ memory map: Swap: 0 kB As you can see from the output, there is a map of 720896 kB -(with just 12 kB swapped) and 156 kb more swapped in another map: +(with just 12 kB swapped) and 156 kB more swapped in another map: basically a very small amount of our memory is swapped so this is not going to create any problem at all. @@ -361,14 +444,14 @@ Redis instance you can further verify it using the **vmstat** command: 0 0 3980 697048 147180 1406640 0 0 0 0 18613 15987 6 6 88 0 2 0 3980 696924 147180 1406656 0 0 0 0 18744 16299 6 5 88 0 0 0 3980 697048 147180 1406688 0 0 0 4 18520 15974 6 6 88 0 -^C + ^C The interesting part of the output for our needs are the two columns **si** and **so**, that counts the amount of memory swapped from/to the swap file. If you see non zero counts in those two columns then there is swapping activity in your system. -Finally, the **iostat** command be be used to check the global I/O activity of +Finally, the **iostat** command can be used to check the global I/O activity of the system. $ iostat -xk 1 @@ -408,7 +491,7 @@ in a different thread since Redis 2.4. We'll see how configuration can affect the amount and source of latency when using the AOF file. -The AOF can be configured to perform an fsync on disk in three different +The AOF can be configured to perform a fsync on disk in three different ways using the **appendfsync** configuration option (this setting can be modified at runtime using the **CONFIG SET** command). @@ -419,15 +502,15 @@ cope with the speed at which Redis is receiving data, however this is uncommon if the disk is not seriously slowed down by other processes doing I/O. -* When appendfsync is set to the value of **everysec** Redis performs an +* When appendfsync is set to the value of **everysec** Redis performs a fsync every second. 
It uses a different thread, and if the fsync is still
 in progress Redis uses a buffer to delay the write(2) call up to two seconds
-(since write would block on Linux if an fsync is in progress against the
+(since write would block on Linux if a fsync is in progress against the
 same file). However if the fsync is taking too long Redis will eventually
 perform the write(2) call even if the fsync is still in progress, and this
 can be a source of latency.
 
-* When appendfsync is set to the value of **always** an fsync is performed
+* When appendfsync is set to the value of **always** a fsync is performed
 at every write operation before replying back to the client with an
 OK code (actually Redis will try to cluster many commands executed at the
 same time into a single fsync). In this mode performances are very low in
 general and
@@ -449,7 +532,7 @@ file you can use the strace command under Linux:
 
 The above command will show all the fdatasync(2) system calls performed by
 Redis in the main thread. With the above command you'll not see the
 fdatasync system calls performed by the background thread when the
-the appendfsync config option is set to **everysec**. In order to do so
+appendfsync config option is set to **everysec**. In order to do so
 just add the -f switch to strace.
 
 If you wish you can also see both fdatasync and write system calls with the
@@ -463,3 +546,81 @@ Apparently there is no way to tell strace to just show slow system calls so
 I use the following command:
 
     sudo strace -f -p $(pidof redis-server) -T -e trace=fdatasync,write 2>&1 | grep -v '0.0' | grep -v unfinished
+
+Latency generated by expires
+----------------------------
+
+Redis evicts expired keys in two ways:
+
++ One *lazy* way expires a key when it is requested by a command, but it is found to be already expired.
++ One *active* way expires a few keys every 100 milliseconds.
+
+The active expiring is designed to be adaptive. An expire cycle is started every 100 milliseconds (10 times per second), and will do the following:
+
++ Sample `ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP` keys, evicting all the keys already expired.
++ If more than 25% of the keys were found expired, repeat.
+
+Given that `ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP` is set to 20 by default, and the process is performed ten times per second, usually just 200 keys per second are actively expired. This is enough to clean the DB fast enough even when already expired keys are not accessed for a long time, so that the *lazy* algorithm does not help. At the same time, expiring just 200 keys per second has no effect on the latency of a Redis instance.
+
+However, the algorithm is adaptive and will loop if it finds more than 25% of keys already expired in the set of sampled keys. But given that we run the algorithm ten times per second, this means that the unlucky event is when more than 25% of the keys in our random sample expire *in the same second*.
+
+Basically this means that **if the database has many, many keys expiring in the same second, and these make up at least 25% of the current population of keys with an expire set**, Redis can block in order to get the percentage of keys already expired below 25%.
+
+This approach is needed in order to avoid using too much memory for keys that are already expired, and usually it is absolutely harmless since it's unusual for a big number of keys to expire in the same exact second; still, it is not impossible that the user used `EXPIREAT` extensively with the same Unix time.
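+
+A contrived sketch of that anti-pattern (the key names and the timestamp are
+purely illustrative):
+
+    # One million keys, all set to expire in the very same second.
+    for i in $(seq 1 1000000); do
+        redis-cli SET "session:$i" value > /dev/null
+        redis-cli EXPIREAT "session:$i" 1735689600 > /dev/null
+    done
+
+When that second arrives, these keys dominate every sample taken by the expire
+cycle, and the adaptive loop described above keeps running until the expired
+fraction drops below 25%.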
+
+In short: be aware that many keys expiring at the same moment can be a source of latency.
+
+Redis software watchdog
+---
+
+Redis 2.6 introduces the *Redis Software Watchdog*, a debugging tool
+designed to track those latency problems that, for one reason or another,
+escaped an analysis using normal tools.
+
+The software watchdog is an experimental feature. While it is designed to
+be used in production environments, care should be taken to back up the database
+before proceeding, as it could possibly have unexpected interactions with the
+normal execution of the Redis server.
+
+It is important to use it only as a *last resort* when there is no way to track the issue by other means.
+
+This is how this feature works:
+
+* The user enables the software watchdog using the `CONFIG SET` command.
+* Redis starts monitoring itself constantly.
+* If Redis detects that the server is blocked in some operation that is not returning fast enough, and that may be the source of the latency issue, a low-level report about where the server is blocked is dumped on the log file.
+* The user contacts the developers by writing a message in the Redis Google Group, including the watchdog report in the message.
+
+Note that this feature cannot be enabled using the redis.conf file, because it is designed to be enabled only in already running instances and only for debugging purposes.
+
+To enable the feature just use the following:
+
+    CONFIG SET watchdog-period 500
+
+The period is specified in milliseconds. In the above example I specified to log latency issues only if the server detects a delay of 500 milliseconds or greater. The minimum configurable period is 200 milliseconds.
+
+When you are done with the software watchdog you can turn it off by setting the `watchdog-period` parameter to 0. **Important:** remember to do this, because keeping the instance with the watchdog turned on for longer than needed is generally not a good idea.
+
+The following is an example of what you'll see printed in the log file once the software watchdog detects a delay longer than the configured one:
+
+    [8547 | signal handler] (1333114359)
+    --- WATCHDOG TIMER EXPIRED ---
+    /lib/libc.so.6(nanosleep+0x2d) [0x7f16b5c2d39d]
+    /lib/libpthread.so.0(+0xf8f0) [0x7f16b5f158f0]
+    /lib/libc.so.6(nanosleep+0x2d) [0x7f16b5c2d39d]
+    /lib/libc.so.6(usleep+0x34) [0x7f16b5c62844]
+    ./redis-server(debugCommand+0x3e1) [0x43ab41]
+    ./redis-server(call+0x5d) [0x415a9d]
+    ./redis-server(processCommand+0x375) [0x415fc5]
+    ./redis-server(processInputBuffer+0x4f) [0x4203cf]
+    ./redis-server(readQueryFromClient+0xa0) [0x4204e0]
+    ./redis-server(aeProcessEvents+0x128) [0x411b48]
+    ./redis-server(aeMain+0x2b) [0x411dbb]
+    ./redis-server(main+0x2b6) [0x418556]
+    /lib/libc.so.6(__libc_start_main+0xfd) [0x7f16b5ba1c4d]
+    ./redis-server() [0x411099]
+    ------
+
+Note: in the example the **DEBUG SLEEP** command was used in order to block the server. The stack trace is different if the server blocks in a different context.
+
+If you happen to collect multiple watchdog stack traces you are encouraged to send everything to the Redis Google Group: the more traces we obtain, the simpler it will be to understand what the problem with your instance is.
diff --git a/docs/management/optimization/memory-optimization.md b/docs/management/optimization/memory-optimization.md
new file mode 100644
index 0000000000..09cc105a06
--- /dev/null
+++ b/docs/management/optimization/memory-optimization.md
@@ -0,0 +1,274 @@
+---
+title: Memory optimization
+linkTitle: Memory optimization
+description: Strategies for optimizing memory usage in Redis
+weight: 1
+aliases: [
+  /topics/memory-optimization,
+  /docs/reference/optimization/memory-optimization
+]
+---
+
+## Special encoding of small aggregate data types
+
+Since Redis 2.2, many data types are optimized to use less space up to a certain size.
+Hashes, Lists, Sets composed of just integers, and Sorted Sets, when smaller than a given number of elements, and up to a maximum element size, are encoded in a very memory-efficient way that uses *up to 10 times less memory* (with 5 times less memory used being the average saving).
+
+This is completely transparent from the point of view of the user and API.
+Since this is a CPU / memory tradeoff, it is possible to tune the maximum
+number of elements and maximum element size for special encoded types
+using the following redis.conf directives (defaults are shown):
+
+### Redis <= 6.2
+
+```
+hash-max-ziplist-entries 512
+hash-max-ziplist-value 64
+zset-max-ziplist-entries 128
+zset-max-ziplist-value 64
+set-max-intset-entries 512
+```
+
+### Redis >= 7.0
+
+```
+hash-max-listpack-entries 512
+hash-max-listpack-value 64
+zset-max-listpack-entries 128
+zset-max-listpack-value 64
+set-max-intset-entries 512
+```
+
+### Redis >= 7.2
+
+The following directives are also available:
+
+```
+set-max-listpack-entries 128
+set-max-listpack-value 64
+```
+
+If a specially encoded value overflows the configured max size,
+Redis will automatically convert it into normal encoding.
+This operation is very fast for small values,
+but if you change the setting in order to use specially encoded values
+for much larger aggregate types, the suggestion is to run some
+benchmarks and tests to check the conversion time.
+
+## Using 32-bit instances
+
+When Redis is compiled as a 32-bit target, it uses a lot less memory per key, since pointers are small,
+but such an instance will be limited to 4 GB of maximum memory usage.
+To compile Redis as a 32-bit binary, use *make 32bit*.
+RDB and AOF files are compatible between 32-bit and 64-bit instances
+(and between little and big endian of course), so you can switch from 32-bit to 64-bit, or vice versa, without problems.
+
+## Bit and byte level operations
+
+Redis 2.2 introduced new bit and byte level operations: `GETRANGE`, `SETRANGE`, `GETBIT` and `SETBIT`.
+Using these commands you can treat the Redis string type as a random access array.
+For instance, if you have an application where users are identified by a unique progressive integer number,
+you can use a bitmap to save information about the subscription of users in a mailing list,
+setting the bit for subscribed and clearing it for unsubscribed, or the other way around.
+With 100 million users this data will take just 12 megabytes of RAM in a Redis instance.
+You can do the same using `GETRANGE` and `SETRANGE` to store one byte of information for each user.
+This is just an example, but it is possible to model several problems in very little space with these new primitives.
+
+## Use hashes when possible
+
+Small hashes are encoded in a very small space, so you should try representing your data using hashes whenever possible.
+
+For instance, if you have objects representing users in a web application,
+instead of using different keys for name, surname, email, password, use a single hash with all the required fields.
+
+If you want to know more about this, read the next section.
+
+## Using hashes to abstract a very memory-efficient plain key-value store on top of Redis
+
+I understand the title of this section is a bit scary, but I'm going to explain in detail what this is about.
+
+Basically it is possible to model a plain key-value store using Redis
+where values can just be strings, which is not only more memory efficient
+than Redis plain keys but also much more memory efficient than memcached.
+
+Let's start with some facts: a few keys use a lot more memory than a single key
+containing a hash with a few fields. How is this possible? We use a trick.
+In theory, to guarantee that we perform lookups in constant time
+(also known as O(1) in big O notation), there is a need to use a data structure
+with a constant time complexity in the average case, like a hash table.
+
+But many times hashes contain just a few fields. When hashes are small we can
+instead just encode them in an O(N) data structure, like a linear
+array with length-prefixed key-value pairs. Since we do this only when N
+is small, the amortized time for `HGET` and `HSET` commands is still O(1): the
+hash will be converted into a real hash table as soon as the number of elements
+it contains grows too large (you can configure the limit in redis.conf).
+
+This does not only work well from the point of view of time complexity, but
+also from the point of view of constant factors, since a linear array of key-value pairs happens to play very well with the CPU cache (it has better
+cache locality than a hash table).
+
+However, since hash fields and values are not (always) represented as full-featured Redis objects, hash fields can't have an associated time to live
+(expire) like a real key, and can only contain a string. But we are okay with
+this; this was the intention anyway when the hash data type API was
+designed (we trust simplicity more than features, so nested data structures
+are not allowed, just as expires of single fields are not allowed).
+
+So hashes are memory efficient. This is useful when using hashes
+to represent objects or to model other problems when there are groups of
+related fields. But what if we have a plain key-value use case?
+
+Imagine we want to use Redis as a cache for many small objects, which can be JSON encoded objects, small HTML fragments, simple key -> boolean values
+and so forth. Basically, anything that is a string -> string map with small keys
+and values.
+
+Now let's assume the objects we want to cache are numbered, like:
+
+ * object:102393
+ * object:1234
+ * object:5
+
+This is what we can do. Every time we perform a
+SET operation to set a new value, we actually split the key into two parts,
+one part used as a key, and the other part used as the field name for the hash. For instance, the
+object named "object:1234" is actually split into:
+
+* a Key named object:12
+* a Field named 34
+
+So we use all the characters but the last two for the key, and the final
+two characters for the hash field name. To set our key we use the following
+command:
+
+```
+HSET object:12 34 somevalue
+```
+
+As you can see, every hash will end up containing 100 fields, which is an optimal compromise between CPU and memory saved.
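+
+Reading the value back uses the same key split, mirroring the `HSET` example
+above:
+
+```
+HGET object:12 34
+```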
+
+There is another important thing to note: with this schema, every hash will have more or less 100 fields regardless of the number of objects we cached. This is because our objects will always end with a number and not a random string. In some way, the final number can be considered as a form of implicit pre-sharding.
+
+What about small numbers? Like object:2? We handle this case using just "object:" as a key name, and the whole number as the hash field name. So object:2 and object:10 will both end up inside the key "object:", but one with field name "2" and one with field name "10".
+
+How much memory do we save this way?
+
+I used the following Ruby program to test how this works:
+
+```ruby
+require 'rubygems'
+require 'redis'
+
+USE_OPTIMIZATION = true
+
+# Split a key like "object:1234" into the enclosing hash key and field name.
+def hash_get_key_field(key)
+  s = key.split(':')
+  if s[1].length > 2
+    { key: s[0] + ':' + s[1][0..-3], field: s[1][-2..-1] }
+  else
+    { key: s[0] + ':', field: s[1] }
+  end
+end
+
+def hash_set(r, key, value)
+  kf = hash_get_key_field(key)
+  r.hset(kf[:key], kf[:field], value)
+end
+
+def hash_get(r, key)
+  kf = hash_get_key_field(key)
+  r.hget(kf[:key], kf[:field])
+end
+
+r = Redis.new
+(0..100_000).each do |id|
+  key = "object:#{id}"
+  if USE_OPTIMIZATION
+    hash_set(r, key, 'val')
+  else
+    r.set(key, 'val')
+  end
+end
+```
+
+This is the result against a 64-bit instance of Redis 2.2:
+
+ * USE_OPTIMIZATION set to true: 1.7 MB of used memory
+ * USE_OPTIMIZATION set to false: 11 MB of used memory
+
+This is an order of magnitude of difference; I think this makes Redis more or less the most memory-efficient plain key-value store out there.
+
+*WARNING*: for this to work, make sure that in your redis.conf you have something like this (on recent versions of Redis the equivalent directives are the `hash-max-ziplist` / `hash-max-listpack` ones shown earlier):
+
+```
+hash-max-zipmap-entries 256
+```
+
+Also remember to set the following field according to the maximum size of your keys and values:
+
+```
+hash-max-zipmap-value 1024
+```
+
+Every time a hash exceeds the number of elements or element size specified, it will be converted into a real hash table, and the memory saving will be lost.
+
+You may ask, why don't you do this implicitly in the normal key space so that I don't have to care? There are two reasons: one is that we tend to make tradeoffs explicit, and this is a clear tradeoff between many things: CPU, memory, and max element size. The second is that the top-level key space must support a lot of interesting things like expires, LRU data, and so forth, so it is not practical to do this in a general way.
+
+But the Redis way is that the user must understand how things work so that they can pick the best compromise, and understand exactly how the system will behave.
+
+## Memory allocation
+
+To store user keys, Redis allocates at most as much memory as the `maxmemory` setting enables (however, small extra allocations are possible).
+
+The exact value can be set in the configuration file or set later via `CONFIG SET` (for more info, see [Using memory as an LRU cache](/docs/reference/eviction)).
+There are a few things that should be noted about how Redis manages memory:
+
+* Redis will not always free up (return) memory to the OS when keys are removed. This is not something special about Redis, but it is how most malloc() implementations work.
+For example, if you fill an instance with 5GB worth of data, and then remove the equivalent of 2GB of data, the Resident Set Size (also known as the RSS, which is the number of memory pages consumed by the process) will probably still be around 5GB, even if Redis claims that the user memory is around 3GB. This happens because the underlying allocator can't easily release the memory. For example, often most of the removed keys were allocated on the same pages as the other keys that still exist.
+* The previous point means that you need to provision memory based on your **peak memory usage**. If your workload from time to time requires 10GB, even if most of the time 5GB could do, you need to provision for 10GB.
+* However, allocators are smart and are able to reuse free chunks of memory, so after you free 2GB of your 5GB data set, when you start adding more keys again, you'll see the RSS (Resident Set Size) stay steady and not grow more as you add up to 2GB of additional keys. The allocator is basically trying to reuse the 2GB of memory previously (logically) freed.
+* Because of all this, the fragmentation ratio is not reliable when your peak memory usage is much larger than the currently used memory. Fragmentation is calculated as the physical memory actually used (the RSS value) divided by the amount of memory currently in use (the sum of all the allocations performed by Redis). Because the RSS reflects the peak memory, the ratio `RSS / mem_used` will be very high when the (virtually) used memory is low because a lot of keys/values were freed, but the RSS is high.
+
+If `maxmemory` is not set, Redis will keep allocating memory as it sees fit, and thus it can (gradually) eat up all your free memory. Therefore it is generally advisable to configure some limits. You may also want to set `maxmemory-policy` to `noeviction` (which is *not* the default value in some older versions of Redis).
+
+This makes Redis return an out-of-memory error for write commands if and when it reaches the limit, which in turn may result in errors in the application, but will not render the whole machine dead because of memory starvation.
diff --git a/docs/management/persistence.md b/docs/management/persistence.md
new file mode 100644
index 0000000000..4328c1b849
--- /dev/null
+++ b/docs/management/persistence.md
@@ -0,0 +1,383 @@
+---
+title: Redis persistence
+linkTitle: Persistence
+weight: 7
+description: How Redis writes data to disk
+aliases: [
+    /topics/persistence,
+    /topics/persistence.md,
+    /docs/manual/persistence,
+    /docs/manual/persistence.md
+]
+---
+
+Persistence refers to the writing of data to durable storage, such as a solid-state disk (SSD). Redis provides a range of persistence options. These include:
+
+* **RDB** (Redis Database): RDB persistence performs point-in-time snapshots of your dataset at specified intervals.
+* **AOF** (Append Only File): AOF persistence logs every write operation received by the server. These operations can then be replayed at server startup, reconstructing the original dataset. Commands are logged using the same format as the Redis protocol itself.
+* **No persistence**: You can disable persistence completely. This is sometimes used when caching.
+* **RDB + AOF**: You can also combine both AOF and RDB in the same instance.
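+
+As a rough sketch, these options map onto standard `redis.conf` directives like the following (the values shown are illustrative; `save 60 1000` mirrors the snapshotting example discussed later in this page):
+
+```
+# RDB: snapshot every 60 seconds if at least 1000 keys changed
+save 60 1000
+
+# AOF: log every write operation (fsync policy is configured separately)
+appendonly yes
+
+# No persistence: disable snapshotting and keep the AOF off
+# save ""
+# appendonly no
+```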
+
+If you'd rather not think about the tradeoffs between these different persistence strategies, you may want to consider [Redis Enterprise's persistence options](https://docs.redis.com/latest/rs/databases/configure/database-persistence/), which can be pre-configured using a UI.
+
+To learn more about how to evaluate your Redis persistence strategy, read on.
+
+## RDB advantages
+
+* RDB is a very compact single-file point-in-time representation of your Redis data. RDB files are perfect for backups. For instance, you may want to archive your RDB files every hour for the latest 24 hours, and save an RDB snapshot every day for 30 days. This allows you to easily restore different versions of the data set in case of disasters.
+* RDB is very good for disaster recovery, being a single compact file that can be transferred to far data centers, or onto Amazon S3 (possibly encrypted).
+* RDB maximizes Redis performance, since the only work the Redis parent process needs to do in order to persist is forking a child that will do all the rest. The parent process will never perform disk I/O or the like.
+* RDB allows faster restarts with big datasets compared to AOF.
+* On replicas, RDB supports [partial resynchronizations after restarts and failovers](https://redis.io/topics/replication#partial-resynchronizations-after-restarts-and-failovers).
+
+## RDB disadvantages
+
+* RDB is NOT good if you need to minimize the chance of data loss in case Redis stops working (for example after a power outage). You can configure different *save points* where an RDB is produced (for instance, after at least five minutes and 100 writes against the data set), and you can have multiple save points. However, you'll usually create an RDB snapshot every five minutes or more, so in case Redis stops working without a correct shutdown for any reason, you should be prepared to lose the latest minutes of data.
+* RDB needs to fork() often in order to persist on disk using a child process. fork() can be time consuming if the dataset is big, and may result in Redis stopping serving clients for some milliseconds or even for one second if the dataset is very big and the CPU performance is not great. AOF also needs to fork(), but less frequently, and you can tune how often you want to rewrite your logs without any trade-off on durability.
+
+## AOF advantages
+
+* Using AOF, Redis is much more durable: you can have different fsync policies: no fsync at all, fsync every second, fsync at every query. With the default policy of fsync every second, write performance is still great. fsync is performed using a background thread, and the main thread will try hard to perform writes when no fsync is in progress, so you can only lose one second worth of writes.
+* The AOF log is an append-only log, so there are no seeks, nor corruption problems if there is a power outage. Even if the log ends with a half-written command for some reason (disk full or other reasons), the redis-check-aof tool is able to fix it easily.
+* Redis is able to automatically rewrite the AOF in the background when it gets too big. The rewrite is completely safe: while Redis continues appending to the old file, a completely new one is produced with the minimal set of operations needed to create the current data set, and once this second file is ready Redis switches the two and starts appending to the new one.
+* AOF contains a log of all the operations one after the other in a format that is easy to understand and parse. You can even easily export an AOF file.
+For instance, even if you've accidentally flushed everything using the `FLUSHALL` command, as long as no rewrite of the log was performed in the meantime, you can still save your data set just by stopping the server, removing the latest command, and restarting Redis again.
+
+## AOF disadvantages
+
+* AOF files are usually bigger than the equivalent RDB files for the same dataset.
+* AOF can be slower than RDB depending on the exact fsync policy. In general, with fsync set to *every second*, performance is still very high, and with fsync disabled it should be exactly as fast as RDB even under high load. Still, RDB is able to provide more guarantees about the maximum latency even in the case of a huge write load.
+
+**Redis < 7.0**
+
+* AOF can use a lot of memory if there are writes to the database during a rewrite (these are buffered in memory and written to the new AOF at the end).
+* All write commands that arrive during rewrite are written to disk twice.
+* Redis could freeze writing and fsyncing these write commands to the new AOF file at the end of the rewrite.
+
+## Ok, so what should I use?
+
+The general indication is that you should use both persistence methods if you want a degree of data safety comparable to what PostgreSQL can provide you.
+
+If you care a lot about your data, but still can live with a few minutes of data loss in case of disasters, you can simply use RDB alone.
+
+There are many users using AOF alone, but we discourage it, since having an RDB snapshot from time to time is a great idea for doing database backups, for faster restarts, and as a safeguard in the event of bugs in the AOF engine.
+
+The following sections will illustrate a few more details about the two persistence models.
+
+## Snapshotting
+
+By default Redis saves snapshots of the dataset on disk, in a binary file called `dump.rdb`. You can configure Redis to have it save the dataset every N seconds if there are at least M changes in the dataset, or you can manually call the `SAVE` or `BGSAVE` commands.
+
+For example, this configuration will make Redis automatically dump the dataset to disk every 60 seconds if at least 1000 keys changed:
+
+    save 60 1000
+
+This strategy is known as _snapshotting_.
+
+### How it works
+
+Whenever Redis needs to dump the dataset to disk, this is what happens:
+
+* Redis [forks](http://linux.die.net/man/2/fork). We now have a child and a parent process.
+
+* The child starts to write the dataset to a temporary RDB file.
+
+* When the child is done writing the new RDB file, it replaces the old one.
+
+This method allows Redis to benefit from copy-on-write semantics.
+
+## Append-only file
+
+Snapshotting is not very durable. If your computer running Redis stops, your power line fails, or you accidentally `kill -9` your instance, the latest data written to Redis will be lost. While this may not be a big deal for some applications, there are use cases for full durability, and in these cases Redis snapshotting alone is not a viable option.
+
+The _append-only file_ is an alternative, fully-durable strategy for Redis. It became available in version 1.1.
+
+You can turn on the AOF in your configuration file:
+
+    appendonly yes
+
+From now on, every time Redis receives a command that changes the dataset (e.g. `SET`) it will append it to the AOF. When you restart Redis it will re-play the AOF to rebuild the state.
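+
+For illustration, a `SET key value` command is stored in the AOF using the same RESP encoding clients send on the wire, roughly like this:
+
+```
+*3
+$3
+SET
+$3
+key
+$5
+value
+```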
+
+Since Redis 7.0.0, Redis uses a multi-part AOF mechanism.
+That is, the original single AOF file is split into a base file (at most one) and incremental files (there may be more than one).
+The base file represents an initial (RDB or AOF format) snapshot of the data present when the AOF is [rewritten](#log-rewriting).
+The incremental files contain incremental changes since the last base AOF file was created. All these files are put in a separate directory and are tracked by a manifest file.
+
+### Log rewriting
+
+The AOF gets bigger and bigger as write operations are performed. For example, if you are incrementing a counter 100 times, you'll end up with a single key in your dataset containing the final value, but 100 entries in your AOF. 99 of those entries are not needed to rebuild the current state.
+
+The rewrite is completely safe.
+While Redis continues appending to the old file, a completely new one is produced with the minimal set of operations needed to create the current data set, and once this second file is ready Redis switches the two and starts appending to the new one.
+
+So Redis supports an interesting feature: it is able to rebuild the AOF in the background without interrupting service to clients. Whenever you issue a `BGREWRITEAOF`, Redis will write the shortest sequence of commands needed to rebuild the current dataset in memory. If you're using the AOF with Redis 2.2 you'll need to run `BGREWRITEAOF` from time to time. Since Redis 2.4, log rewriting can be triggered automatically (see the example configuration file for more information).
+
+Since Redis 7.0.0, when an AOF rewrite is scheduled, the Redis parent process opens a new incremental AOF file to continue writing.
+The child process executes the rewrite logic and generates a new base AOF.
+Redis will use a temporary manifest file to track the newly generated base file and incremental file.
+When they are ready, Redis will perform an atomic replacement operation to make this temporary manifest file take effect.
+In order to avoid the problem of creating many incremental files in case of repeated failures and retries of an AOF rewrite, Redis introduces an AOF rewrite limiting mechanism to ensure that failed AOF rewrites are retried at a slower and slower rate.
+
+### How durable is the append only file?
+
+You can configure how many times Redis will [`fsync`](http://linux.die.net/man/2/fsync) data on disk. There are three options:
+
+* `appendfsync always`: `fsync` every time new commands are appended to the AOF. Very very slow, very safe. Note that the commands are appended to the AOF after a batch of commands from multiple clients or a pipeline are executed, so it means a single write and a single fsync (before sending the replies).
+* `appendfsync everysec`: `fsync` every second. Fast enough (since version 2.4 likely to be as fast as snapshotting), and you may lose 1 second of data if there is a disaster.
+* `appendfsync no`: Never `fsync`, just put your data in the hands of the Operating System. The fastest and least safe method. Normally Linux will flush data every 30 seconds with this configuration, but it's up to the kernel's exact tuning.
+
+The suggested (and default) policy is to `fsync` every second. It is both fast and relatively safe. The `always` policy is very slow in practice, but it supports group commit, so if there are multiple parallel writes Redis will try to perform a single `fsync` operation.
+
+### What should I do if my AOF gets truncated?
+
+It is possible that the server crashed while writing the AOF file, or that the volume where the AOF file is stored was full at the time of writing. When this happens the AOF still contains consistent data representing a given point-in-time version of the dataset (which may be up to one second old with the default AOF fsync policy), but the last command in the AOF could be truncated.
+The latest major versions of Redis will be able to load the AOF anyway, just discarding the last non-well-formed command in the file. In this case the server will emit a log like the following:
+
+```
+* Reading RDB preamble from AOF file...
+* Reading the remaining AOF tail...
+# !!! Warning: short read while loading the AOF file !!!
+# !!! Truncating the AOF at offset 439 !!!
+# AOF loaded anyway because aof-load-truncated is enabled
+```
+
+You can change the default configuration to force Redis to stop in such cases if you want, but the default configuration is to continue regardless of the fact that the last command in the file is not well-formed, in order to guarantee availability after a restart.
+
+Older versions of Redis may not recover, and may require the following steps:
+
+* Make a backup copy of your AOF file.
+* Fix the original file using the `redis-check-aof` tool that ships with Redis:
+
+      $ redis-check-aof --fix <filename>
+
+* Optionally use `diff -u` to check the difference between the two files.
+* Restart the server with the fixed file.
+
+### What should I do if my AOF gets corrupted?
+
+If the AOF file is not just truncated, but corrupted with invalid byte sequences in the middle, things are more complex. Redis will complain at startup and will abort:
+
+```
+* Reading the remaining AOF tail...
+# Bad file format reading the append only file: make a backup of your AOF file, then use ./redis-check-aof --fix
+```
+
+The best thing to do is to run the `redis-check-aof` utility, initially without the `--fix` option, then understand the problem, jump to the given offset in the file, and see if it is possible to manually repair the file: the AOF uses the same format as the Redis protocol and is quite simple to fix manually. Otherwise it is possible to let the utility fix the file for us, but in that case all the AOF portion from the invalid part to the end of the file may be discarded, leading to a massive amount of data loss if the corruption happened to be in the initial part of the file.
+
+### How it works
+
+Log rewriting uses the same copy-on-write trick already in use for snapshotting. This is how it works:
+
+**Redis >= 7.0**
+
+* Redis [forks](http://linux.die.net/man/2/fork), so now we have a child and a parent process.
+
+* The child starts writing the new base AOF in a temporary file.
+
+* The parent opens a new increment AOF file to continue writing updates. If the rewriting fails, the old base and increment files (if there are any) plus this newly opened increment file represent the complete updated dataset, so we are safe.
+
+* When the child is done rewriting the base file, the parent gets a signal, and uses the newly opened increment file and the child-generated base file to build a temporary manifest, and persists it.
+
+* Profit! Now Redis does an atomic exchange of the manifest files so that the result of this AOF rewrite takes effect. Redis also cleans up the old base file and any unused increment files.
+
+**Redis < 7.0**
+
+* Redis [forks](http://linux.die.net/man/2/fork), so now we have a child and a parent process.
+
+* The child starts writing the new AOF in a temporary file.
+
+* The parent accumulates all the new changes in an in-memory buffer (but at the same time it writes the new changes in the old append-only file, so if the rewriting fails, we are safe).
+
+* When the child is done rewriting the file, the parent gets a signal, and appends the in-memory buffer at the end of the file generated by the child.
+
+* Now Redis atomically renames the new file into the old one, and starts appending new data into the new file.
+
+### How do I switch to AOF if I'm currently using dump.rdb snapshots?
+
+If you want to enable AOF in a server that is currently using RDB snapshots, you need to convert the data by enabling AOF via the CONFIG command on the live server first.
+
+**IMPORTANT:** not following this procedure (e.g. just changing the config and restarting the server) can result in data loss!
+
+**Redis >= 2.2**
+
+Preparations:
+
+* Make a backup of your latest dump.rdb file.
+* Transfer this backup to a safe place.
+
+Switch to AOF on the live database:
+
+* Enable AOF: `redis-cli config set appendonly yes`
+* Optionally disable RDB: `redis-cli config set save ""`
+* Make sure writes are appended to the append only file correctly.
+* **IMPORTANT:** Update your `redis.conf` (potentially through `CONFIG REWRITE`) and ensure that it matches the configuration above. If you forget this step, when you restart the server, the configuration changes will be lost and the server will start again with the old configuration, resulting in a loss of your data.
+
+Next time you restart the server:
+
+* Before restarting the server, wait for the AOF rewrite to finish persisting the data. You can do that by watching `INFO persistence`, waiting for `aof_rewrite_in_progress` and `aof_rewrite_scheduled` to be `0`, and validating that `aof_last_bgrewrite_status` is `ok`.
+* After restarting the server, check that your database contains the same number of keys it contained previously.
+
+**Redis 2.0**
+
+* Make a backup of your latest dump.rdb file.
+* Transfer this backup to a safe place.
+* Stop all the writes against the database!
+* Issue a `redis-cli BGREWRITEAOF`. This will create the append only file.
+* Stop the server when Redis has finished generating the AOF dump.
+* Edit redis.conf and enable append only file persistence.
+* Restart the server.
+* Make sure that your database contains the same number of keys it contained before the switch.
+* Make sure that writes are appended to the append only file correctly.
+
+## Interactions between AOF and RDB persistence
+
+Redis >= 2.4 makes sure to avoid triggering an AOF rewrite when an RDB snapshotting operation is already in progress, or allowing a `BGSAVE` while the AOF rewrite is in progress. This prevents two Redis background processes from doing heavy disk I/O at the same time.
+
+When snapshotting is in progress and the user explicitly requests a log rewrite operation using `BGREWRITEAOF`, the server will reply with an OK status code telling the user the operation is scheduled, and the rewrite will start once the snapshotting is completed.
+
+If both AOF and RDB persistence are enabled and Redis restarts, the AOF file will be used to reconstruct the original dataset, since it is guaranteed to be the most complete.
+
+## Backing up Redis data
+
+Before starting this section, make sure to read the following sentence: **Make Sure to Backup Your Database**.
+Disks break, instances in the cloud disappear, and so forth: no backups means a huge risk of your data disappearing into /dev/null.
+
+Redis is very data backup friendly, since you can copy RDB files while the database is running: the RDB is never modified once produced, and while it gets produced it uses a temporary name and is renamed into its final destination atomically using rename(2) only when the new snapshot is complete.
+
+This means that copying the RDB file is completely safe while the server is running. This is what we suggest:
+
+* Create a cron job on your server creating hourly snapshots of the RDB file in one directory, and daily snapshots in a different directory.
+* Every time the cron script runs, call the `find` command to delete snapshots that are too old: for instance, you can keep hourly snapshots for the latest 48 hours, and daily snapshots for one or two months. Make sure to name the snapshots with date and time information.
+* At least once every day, transfer an RDB snapshot *outside your data center*, or at least *outside the physical machine* running your Redis instance.
+
+### Backing up AOF persistence
+
+If you run a Redis instance with only AOF persistence enabled, you can still perform backups.
+Since Redis 7.0.0, AOF files are split into multiple files which reside in a single directory determined by the `appenddirname` configuration.
+During normal operation all you need to do is copy/tar the files in this directory to achieve a backup. However, if this is done during a [rewrite](#log-rewriting), you might end up with an invalid backup. To work around this you must disable AOF rewrites during the backup:
+
+1. Turn off automatic rewrites with
+ `CONFIG SET` `auto-aof-rewrite-percentage 0`
+ Make sure you don't manually start a rewrite (using `BGREWRITEAOF`) during this time. +2. Check there's no current rewrite in progress using
+ `INFO` `persistence`
+ and verifying `aof_rewrite_in_progress` is 0. If it's 1, then you'll need to wait for the rewrite to complete. +3. Now you can safely copy the files in the `appenddirname` directory. +4. Re-enable rewrites when done:
+   `CONFIG SET` `auto-aof-rewrite-percentage <prev-value>`
+
+**Note:** If you want to minimize the time AOF rewrites are disabled, you may create hard links to the files in `appenddirname` (in step 3 above) and then re-enable rewrites (step 4) after the hard links are created.
+Now you can copy/tar the hard links and delete them when done. This works because Redis guarantees that it only appends to files in this directory, or completely replaces them if necessary, so the content should be consistent at any given point in time.
+
+**Note:** If you want to handle the case of the server being restarted during the backup and make sure no rewrite will automatically start after the restart, you can change step 1 above to also persist the updated configuration via `CONFIG REWRITE`.
+Just make sure to re-enable automatic rewrites when done (step 4) and persist it with another `CONFIG REWRITE`.
+
+Prior to version 7.0.0, backing up the AOF can be done simply by copying the AOF file (like backing up the RDB snapshot). The file may lack the final part, but Redis will still be able to load it (see the previous sections about [truncated AOF files](#what-should-i-do-if-my-aof-gets-truncated)).
+
+## Disaster recovery
+
+Disaster recovery in the context of Redis is basically the same story as backups, plus the ability to transfer those backups to many different external data centers. This way data is secured even in the case of some catastrophic event affecting the main data center where Redis is running and producing its snapshots.
+
+We'll review the most interesting disaster recovery techniques that don't have too high costs.
+
+* Amazon S3 and other similar services are a good way to implement your disaster recovery system. Simply transfer your daily or hourly RDB snapshot to S3 in an encrypted form. You can encrypt your data using `gpg -c` (in symmetric encryption mode). Make sure to store your password in many different safe places (for instance, give a copy to the most important people of your organization). It is recommended to use multiple storage services for improved data safety.
+* Transfer your snapshots using SCP (part of SSH) to far servers. This is a fairly simple and safe route: get a small VPS in a place that is very far from you, install ssh there, and generate an ssh client key without passphrase, then add it to the `authorized_keys` file of your small VPS. You are ready to transfer backups in an automated fashion. Get at least two VPS with two different providers for best results.
+
+It is important to understand that this system can easily fail if not implemented in the right way. At least, make absolutely sure that after the transfer is completed you are able to verify the file size (which should match the one of the file you copied) and possibly the SHA1 digest, if you are using a VPS.
+
+You also need some kind of independent alert system if the transfer of fresh backups is not working for some reason.
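+
+A minimal sketch of such a transfer script, assuming a local `dump.rdb`, a passphrase file for `gpg`, and a remote host `backup-host` reachable over SSH (all names are illustrative, and the exact `gpg` flags for non-interactive use may vary with your GnuPG version):
+
+```
+#!/bin/sh
+# Encrypt the snapshot symmetrically
+gpg --batch --yes --pinentry-mode loopback \
+    --passphrase-file /etc/redis/backup-passphrase -c dump.rdb
+
+# Transfer the encrypted snapshot to the remote host
+scp dump.rdb.gpg backup-host:backups/dump-$(date +%F).rdb.gpg
+
+# Verify that the remote file size matches the local one; alert on mismatch
+local_size=$(wc -c < dump.rdb.gpg)
+remote_size=$(ssh backup-host "wc -c < backups/dump-$(date +%F).rdb.gpg")
+[ "$local_size" = "$remote_size" ] || echo "ALERT: backup transfer mismatch" >&2
+```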
diff --git a/docs/management/replication.md b/docs/management/replication.md
new file mode 100644
index 0000000000..79e7e341fc
--- /dev/null
+++ b/docs/management/replication.md
@@ -0,0 +1,357 @@
+---
+title: Redis replication
+linkTitle: Replication
+weight: 5
+description: How Redis supports high availability and failover with replication
+aliases: [
+    /topics/replication,
+    /topics/replication.md,
+    /docs/manual/replication,
+    /docs/manual/replication.md
+]
+---
+
+At the base of Redis replication (excluding the high availability features provided as an additional layer by Redis Cluster or Redis Sentinel) there is a *leader-follower* (master-replica) replication that is simple to use and configure. It allows replica Redis instances to be exact copies of master instances. The replica will automatically reconnect to the master every time the link breaks, and will attempt to be an exact copy of it *regardless* of what happens to the master.
+
+This system works using three main mechanisms:
+
+1. When a master and a replica instance are well connected, the master keeps the replica updated by sending it a stream of commands, replicating the effects on the dataset happening on the master side due to client writes, keys expired or evicted, and any other action changing the master dataset.
+2. When the link between the master and the replica breaks, because of network issues or because a timeout is detected by the master or the replica, the replica reconnects and attempts to proceed with a partial resynchronization: this means that it will try to obtain just the part of the stream of commands it missed during the disconnection.
+3. When a partial resynchronization is not possible, the replica will ask for a full resynchronization. This involves a more complex process in which the master needs to create a snapshot of all its data, send it to the replica, and then continue sending the stream of commands as the dataset changes.
+
+By default, Redis uses asynchronous replication, which, being low-latency and high-performance, is the natural replication mode for the vast majority of Redis use cases. However, Redis replicas periodically and asynchronously acknowledge to the master the amount of data they have received. So the master does not wait every time for a command to be processed by the replicas; however, it knows, if needed, which replica has already processed which command. This allows for optional synchronous replication.
+
+Synchronous replication of certain data can be requested by the clients using the `WAIT` command. However, `WAIT` is only able to ensure that there are the specified number of acknowledged copies in the other Redis instances; it does not turn a set of Redis instances into a CP system with strong consistency: acknowledged writes can still be lost during a failover, depending on the exact configuration of the Redis persistence. However, with `WAIT` the probability of losing a write after a failure event is greatly reduced, and limited to certain hard-to-trigger failure modes.
+
+You can check the Redis Sentinel or Redis Cluster documentation for more information about high availability and failover. The rest of this document mainly describes the basic characteristics of Redis replication.
+
+### Important facts about Redis replication
+
+* Redis uses asynchronous replication, with asynchronous replica-to-master acknowledgements of the amount of data processed.
+* A master can have multiple replicas.
+* Replicas are able to accept connections from other replicas.
+Aside from connecting a number of replicas to the same master, replicas can also be connected to other replicas in a cascading structure. Since Redis 4.0, all the sub-replicas will receive exactly the same replication stream from the master.
+* Redis replication is non-blocking on the master side. This means that the master will continue to handle queries while one or more replicas perform the initial synchronization or a partial resynchronization.
+* Replication is also largely non-blocking on the replica side. While the replica is performing the initial synchronization, it can handle queries using the old version of the dataset, assuming you configured Redis to do so in redis.conf. Otherwise, you can configure Redis replicas to return an error to clients if the replication stream is down. However, after the initial sync, the old dataset must be deleted and the new one must be loaded. The replica will block incoming connections during this brief window (which can last many seconds for very large datasets). Since Redis 4.0 you can configure Redis so that the deletion of the old data set happens in a different thread; however, loading the new initial dataset will still happen in the main thread and block the replica.
+* Replication can be used both for scalability, to have multiple replicas for read-only queries (for example, slow O(N) operations can be offloaded to replicas), and for improving data safety and high availability.
+* You can use replication to avoid the cost of having the master write the full dataset to disk: a typical technique involves configuring your master `redis.conf` to avoid persisting to disk at all, then connecting a replica configured to save from time to time, or with AOF enabled. However, this setup must be handled with care, since a restarting master will start with an empty dataset: if the replica tries to sync with it, the replica will be emptied as well.
+
+## Safety of replication when master has persistence turned off
+
+In setups where Redis replication is used, it is strongly advised to have persistence turned on in the master and in the replicas. When this is not possible, for example because of latency concerns due to very slow disks, instances should be configured to **avoid restarting automatically** after a reboot.
+
+To better understand why masters with persistence turned off and configured to auto-restart are dangerous, check the following failure mode, where data is wiped from the master and all its replicas:
+
+1. We have a setup with node A acting as master, with persistence turned off, and nodes B and C replicating from node A.
+2. Node A crashes; however, it has some auto-restart system that restarts the process. Since persistence is turned off, the node restarts with an empty data set.
+3. Nodes B and C will replicate from node A, which is empty, so they'll effectively destroy their copy of the data.
+
+When Redis Sentinel is used for high availability, turning off persistence on the master together with auto-restart of the process is also dangerous. For example, the master can restart fast enough for Sentinel to not detect a failure, so that the failure mode described above happens.
+
+Whenever data safety is important and replication is used with the master configured without persistence, auto-restart of instances should be disabled.
+
+## How Redis replication works
+
+Every Redis master has a replication ID: it is a large pseudo-random string that marks a given history of the dataset.
+Each master also has an offset that increments for every byte of replication stream it produces to be sent to replicas, in order to update the state of the replicas with the new changes modifying the dataset. The replication offset is incremented even if no replica is actually connected, so basically every given pair of:
+
+    Replication ID, offset
+
+identifies an exact version of the dataset of a master.
+
+When replicas connect to masters, they use the `PSYNC` command to send their old master replication ID and the offsets they processed so far. This way the master can send just the incremental part needed. However, if there is not enough *backlog* in the master buffers, or if the replica is referring to a history (replication ID) which is no longer known, a full resynchronization happens: in this case the replica will get a full copy of the dataset, from scratch.
+
+This is how a full synchronization works in more detail:
+
+The master starts a background saving process to produce an RDB file. At the same time it starts to buffer all new write commands received from the clients. When the background saving is complete, the master transfers the database file to the replica, which saves it on disk, and then loads it into memory. The master will then send all buffered commands to the replica. This is done as a stream of commands and is in the same format as the Redis protocol itself.
+
+You can try it yourself via telnet. Connect to the Redis port while the server is doing some work and issue the `SYNC` command. You'll see a bulk transfer, and then every command received by the master will be re-issued in the telnet session. Actually `SYNC` is an old protocol no longer used by newer Redis instances, but is still there for backward compatibility: it does not allow partial resynchronizations, so now `PSYNC` is used instead.
+
+As already said, replicas are able to automatically reconnect when the master-replica link goes down for some reason. If the master receives multiple concurrent replica synchronization requests, it performs a single background save to serve all of them.
+
+## Replication ID explained
+
+In the previous section we said that if two instances have the same replication ID and replication offset, they have exactly the same data. However, it is useful to understand what exactly the replication ID is, and why instances actually have two replication IDs: the main ID and the secondary ID.
+
+A replication ID basically marks a given *history* of the data set. Every time an instance restarts from scratch as a master, or a replica is promoted to master, a new replication ID is generated for this instance. The replicas connected to a master will inherit its replication ID after the handshake. So two instances with the same ID are related by the fact that they hold the same data, but potentially at a different time. It is the offset that works as a logical time to understand, for a given history (replication ID), who holds the most updated data set.
+
+For instance, if two instances A and B have the same replication ID, but one with offset 1000 and one with offset 1023, it means that the first lacks certain commands applied to the data set. It also means that A, by applying just a few commands, may reach exactly the same state as B.
+
+The reason why Redis instances have two replication IDs is because of replicas that are promoted to masters.
+After a failover, the promoted replica still needs to remember its past replication ID, because that replication ID was the one of the former master. In this way, when other replicas sync with the new master, they will try to perform a partial resynchronization using the old master replication ID. This will work as expected, because when the replica is promoted to master it sets its secondary ID to its main ID, remembering the offset at which this ID switch happened. Later it will select a new random replication ID, because a new history begins. When handling the new replicas connecting, the master will match their IDs and offsets both with the current ID and the secondary ID (up to a given offset, for safety). In short, this means that after a failover, replicas connecting to the newly promoted master don't have to perform a full sync.
+
+In case you wonder why a replica promoted to master needs to change its replication ID after a failover: it is possible that the old master is still working as a master because of some network partition; retaining the same replication ID would violate the fact that the same ID and same offset of any two random instances mean they have the same data set.
+
+## Diskless replication
+
+Normally a full resynchronization requires creating an RDB file on disk, then reloading the same RDB from disk to feed the replicas with the data.
+
+With slow disks this can be a very stressful operation for the master. Redis version 2.8.18 is the first version to have support for diskless replication. In this setup the child process directly sends the RDB over the wire to replicas, without using the disk as intermediate storage.
+
+## Configuration
+
+Configuring basic Redis replication is trivial: just add the following line to the replica configuration file:
+
+    replicaof 192.168.1.1 6379
+
+Of course you need to replace 192.168.1.1 6379 with your master IP address (or hostname) and port. Alternatively, you can call the `REPLICAOF` command and the master host will start a sync with the replica.
+
+There are also a few parameters for tuning the replication backlog taken in memory by the master to perform the partial resynchronization. See the example `redis.conf` shipped with the Redis distribution for more information.
+
+Diskless replication can be enabled using the `repl-diskless-sync` configuration parameter. The delay to start the transfer, in order to wait for more replicas to arrive after the first one, is controlled by the `repl-diskless-sync-delay` parameter. Please refer to the example `redis.conf` file in the Redis distribution for more details.
+
+## Read-only replica
+
+Since Redis 2.6, replicas support a read-only mode that is enabled by default. This behavior is controlled by the `replica-read-only` option in the redis.conf file, and can be enabled and disabled at runtime using `CONFIG SET`.
+
+Read-only replicas will reject all write commands, so that it is not possible to write to a replica by mistake. This does not mean that the feature is intended to expose a replica instance to the internet, or more generally to a network where untrusted clients exist, because administrative commands like `DEBUG` or `CONFIG` are still enabled. The [Security](/topics/security) page describes how to secure a Redis instance.
+
+You may wonder why it is possible to revert the read-only setting and have replica instances that can be targeted by write operations.
+The answer is that writable replicas exist only for historical reasons.
+Using writable replicas can result in inconsistency between the master and the replica, so their use is not recommended.
+To understand in which situations this can be a problem, we need to understand how replication works.
+Changes on the master are replicated by propagating regular Redis commands to the replica.
+When a key expires on the master, this is propagated as a DEL command.
+A key that exists on the master but has been deleted or expired on the replica, or that has a different type on the replica than on the master, will react differently than intended to commands like DEL, INCR or RPOP propagated from the master.
+The propagated command may fail on the replica or result in a different outcome.
+To minimize the risks (if you insist on using writable replicas) we suggest you follow these recommendations:
+
+* Don't write to keys in a writable replica that are also used on the master. (This can be hard to guarantee if you don't have control over all the clients that write to the master.)
+
+* Don't configure an instance as a writable replica as an intermediary step when upgrading a set of instances in a running system. In general, if you want to guarantee data consistency, don't configure an instance as a writable replica if it can ever be promoted to a master.
+
+Historically, there were some use cases that were considered legitimate for writable replicas. As of version 7.0, these use cases are now all obsolete and the same can be achieved by other means. For example:
+
+* Computing slow Set or Sorted set operations and storing the result in temporary local keys using commands like `SUNIONSTORE` and `ZINTERSTORE`. Instead, use commands that return the result without storing it, such as `SUNION` and `ZINTER`.
+
+* Using the `SORT` command (which is not considered a read-only command because of the optional STORE option and therefore cannot be used on a read-only replica). Instead, use `SORT_RO`, which is a read-only command.
+
+* Using `EVAL` and `EVALSHA`, which are not considered read-only commands either, because the Lua script may call write commands. Instead, use `EVAL_RO` and `EVALSHA_RO`, where the Lua script can only call read-only commands.
+
+While writes to a replica will be discarded if the replica and the master resync or if the replica is restarted, there is no guarantee that they will sync automatically.
+
+Before version 4.0, writable replicas were incapable of expiring keys with a time to live set. This means that if you use `EXPIRE` or other commands that set a maximum TTL for a key, the key will leak, and while you may no longer see it when accessing it with read commands, you will see it in the count of keys and it will still use memory.
+Redis 4.0 RC3 and greater versions are able to evict keys with a TTL as masters do, with the exception of keys written in DB numbers greater than 63 (but by default Redis instances only have 16 databases).
+Note though that even in versions greater than 4.0, using `EXPIRE` on a key that could ever exist on the master can cause inconsistency between the replica and the master.
+
+Also note that since Redis 4.0 replica writes are only local, and are not propagated to sub-replicas attached to the instance. Sub-replicas will instead always receive a replication stream identical to the one sent by the top-level master to the intermediate replicas. So, for example, in the following setup:
+
+    A ---> B ---> C
+
+Even if `B` is writable, C will not see `B`'s writes and will instead have a dataset identical to that of the master instance `A`.
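+
+If, despite the caveats above, you do need a writable replica, the flag can be flipped at runtime on the replica (a sketch; `yes` is the safe default):
+
+```
+> CONFIG SET replica-read-only no
+OK
+> CONFIG SET replica-read-only yes
+OK
+```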
+
+## Setting a replica to authenticate to a master
+
+If your master has a password via `requirepass`, it's trivial to configure the replica to use that password in all sync operations.
+
+To do it on a running instance, use `redis-cli` and type:
+
+    config set masterauth <password>
+
+To set it permanently, add this to your config file:
+
+    masterauth <password>
+
+## Allow writes only with N attached replicas
+
+Starting with Redis 2.8, you can configure a Redis master to accept write queries only if at least N replicas are currently connected to the master.
+
+However, because Redis uses asynchronous replication, it is not possible to ensure the replica actually received a given write, so there is always a window for data loss.
+
+This is how the feature works:
+
+* Redis replicas ping the master every second, acknowledging the amount of replication stream processed.
+* The Redis master remembers the last time it received a ping from every replica.
+* The user can configure a minimum number of replicas that have a lag not greater than a maximum number of seconds.
+
+If there are at least N replicas, with a lag less than M seconds, then the write will be accepted.
+
+You may think of it as a best-effort data safety mechanism, where consistency is not ensured for a given write, but at least the time window for data loss is restricted to a given number of seconds. In general, bounded data loss is better than unbounded data loss.
+
+If the conditions are not met, the master will instead reply with an error and the write will not be accepted.
+
+There are two configuration parameters for this feature:
+
+* min-replicas-to-write `<number of replicas>`
+* min-replicas-max-lag `<number of seconds>`
+
+For more information, please check the example `redis.conf` file shipped with the Redis source distribution.
+
+## How Redis replication deals with expires on keys
+
+Redis expires allow keys to have a limited time to live (TTL). Such a feature depends on the ability of an instance to count the time; however, Redis replicas correctly replicate keys with expires, even when such keys are altered using Lua scripts.
+
+To implement such a feature, Redis cannot rely on the ability of the master and replica to have synced clocks, since this is a problem that cannot be solved and would result in race conditions and diverging data sets, so Redis uses three main techniques to make the replication of expired keys work:
+
+1. Replicas don't expire keys; instead, they wait for masters to expire the keys. When a master expires a key (or evicts it because of LRU), it synthesizes a `DEL` command which is transmitted to all the replicas.
+2. However, because of master-driven expiry, sometimes replicas may still have in memory keys that are already logically expired, since the master was not able to provide the `DEL` command in time. To deal with that, the replica uses its logical clock to report that a key does not exist **only for read operations** that don't violate the consistency of the data set (as new commands from the master will arrive). In this way replicas avoid reporting logically expired keys that still exist. In practical terms, an HTML fragment cache that uses replicas to scale will avoid returning items that are already older than the desired time to live.
+3. During Lua script execution, no key expiries are performed.
+As a Lua script runs, conceptually the time in the master is frozen, so that a given key will either exist or not for all the time the script runs. This prevents keys expiring in the middle of a script, and is needed to send the same script to the replica in a way that is guaranteed to have the same effects on the data set.
+
+Once a replica is promoted to a master it will start to expire keys independently, and will not require any help from its old master.
+
+## Configuring replication in Docker and NAT
+
+When Docker, or other types of containers using port forwarding or Network Address Translation, are used, Redis replication needs some extra care, especially when using Redis Sentinel or other systems where the master's `INFO` or `ROLE` command output is scanned to discover replicas' addresses.
+
+The problem is that the `ROLE` command, and the replication section of the `INFO` output, when issued on a master instance, will show replicas as having the IP address they use to connect to the master, which, in environments using NAT, may be different from the logical address of the replica instance (the one that clients should use to connect to replicas).
+
+Similarly, the replicas will be listed with the listening port configured in `redis.conf`, which may be different from the forwarded port if the port is remapped.
+
+To fix both issues, it is possible, since Redis 3.2.2, to force a replica to announce an arbitrary pair of IP and port to the master. The two configuration directives to use are:
+
+    replica-announce-ip 5.5.5.5
+    replica-announce-port 1234
+
+They are documented in the example `redis.conf` of recent Redis distributions.
+
+## The INFO and ROLE commands
+
+There are two Redis commands that provide a lot of information on the current replication parameters of master and replica instances. One is `INFO`. If the command is called with the `replication` argument, as `INFO replication`, only information relevant to replication is displayed. Another, more computer-friendly command is `ROLE`, which provides the replication status of masters and replicas together with their replication offsets, the list of connected replicas, and so forth.
+
+## Partial sync after restarts and failovers
+
+Since Redis 4.0, when an instance is promoted to master after a failover, it will still be able to perform a partial resynchronization with the replicas of the old master. To do so, the replica remembers the old replication ID and offset of its former master, so it can provide part of the backlog to the connecting replicas even if they ask for the old replication ID.
+
+However, the new replication ID of the promoted replica will be different, since it constitutes a different history of the data set. For example, the old master may become available again and continue accepting writes for some time, so using the same replication ID in the promoted replica would violate the rule that a replication ID and offset pair identifies only a single data set.
+
+Moreover, replicas, when powered off gently and restarted, are able to store in the `RDB` file the information needed to resync with their master. This is useful in case of upgrades. When this is needed, it is better to use the `SHUTDOWN` command in order to perform a `save & quit` operation on the replica.
+
+It is not possible to partially sync a replica that restarted via the AOF file.
+However, the instance can be switched to RDB persistence before shutting it down; it can then be restarted, and finally AOF can be enabled again.
+
+## `Maxmemory` on replicas
+
+By default, a replica will ignore `maxmemory` (unless it is promoted to master after a failover, or manually). This means that the eviction of keys will be handled by the master, which sends `DEL` commands to the replica as keys are evicted on the master side.
+
+This behavior ensures that masters and replicas stay consistent, which is usually what you want. However, if your replica is writable, or you want the replica to have a different memory setting, and you are sure all the writes performed to the replica are idempotent, then you may change this default (but be sure to understand what you are doing).
+
+Note that since the replica by default does not evict, it may end up using more memory than what is set via `maxmemory` (since there are certain buffers that may be larger on the replica, or data structures may sometimes take more memory, and so forth). Make sure you monitor your replicas, and make sure they have enough memory to never hit a real out-of-memory condition before the master hits the configured `maxmemory` setting.
+
+To change this behavior, you can allow a replica to not ignore `maxmemory`. The configuration directive to use is:
+
+    replica-ignore-maxmemory no
diff --git a/docs/management/scaling.md b/docs/management/scaling.md
new file mode 100644
index 0000000000..11de8eaafe
--- /dev/null
+++ b/docs/management/scaling.md
@@ -0,0 +1,1010 @@
+---
+title: Scale with Redis Cluster
+linkTitle: Scale with Redis Cluster
+weight: 6
+description: Horizontal scaling with Redis Cluster
+aliases: [
+    /topics/cluster-tutorial,
+    /topics/partitioning,
+    /docs/manual/scaling,
+    /docs/manual/scaling.md
+]
+---
+
+Redis scales horizontally with a deployment topology called Redis Cluster.
+This topic will teach you how to set up, test, and operate Redis Cluster in production.
+You will learn about the availability and consistency characteristics of Redis Cluster from the end user's point of view.
+
+If you plan to run a production Redis Cluster deployment or want to understand better how Redis Cluster works internally, consult the [Redis Cluster specification](/topics/cluster-spec). To learn how Redis Enterprise handles scaling, see [Linear Scaling with Redis Enterprise](https://redis.com/redis-enterprise/technology/linear-scaling-redis-enterprise/).
+
+## Redis Cluster 101
+
+Redis Cluster provides a way to run a Redis installation where data is automatically sharded across multiple Redis nodes.
+Redis Cluster also provides some degree of availability during partitions: in practical terms, the ability to continue operations when some nodes fail or are unable to communicate.
+However, the cluster will become unavailable in the event of larger failures (for example, when the majority of masters are unavailable).
+
+So, with Redis Cluster, you get the ability to:
+
+* Automatically split your dataset among multiple nodes.
+* Continue operations when a subset of the nodes are experiencing failures or are unable to communicate with the rest of the cluster.
+
+#### Redis Cluster TCP ports
+
+Every Redis Cluster node requires two open TCP connections: a Redis TCP port used to serve clients, e.g., 6379, and a second port known as the _cluster bus port_.
+By default, the cluster bus port is set by adding 10000 to the data port (e.g., 16379); however, you can override this with the `cluster-port` configuration directive.
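+
+As a sketch, the relevant directives for a node serving clients on port 6379 look like this (the values are illustrative, and the `cluster-port` line is only needed to override the default of data port + 10000):
+
+```
+port 6379
+cluster-enabled yes
+# cluster-port 16379
+```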
+
+The cluster bus is a node-to-node communication channel that uses a binary protocol, which is more suited to exchanging information between nodes because it uses
+little bandwidth and processing time.
+Nodes use the cluster bus for failure detection, configuration updates, failover authorization, and so forth.
+Clients should never try to communicate with the cluster bus port, but rather use the Redis command port.
+However, make sure you open both ports in your firewall, otherwise Redis Cluster nodes won't be able to communicate.
+
+For a Redis Cluster to work properly you need, for each node:
+
+1. The client communication port (usually 6379), used to communicate with clients, must be open to all the clients that need to reach the cluster, plus all the other cluster nodes, which use the client port for key migrations.
+2. The cluster bus port must be reachable from all the other cluster nodes.
+
+If you don't open both TCP ports, your cluster will not work as expected.
+
+#### Redis Cluster and Docker
+
+Currently, Redis Cluster does not support NATted environments and in general
+environments where IP addresses or TCP ports are remapped.
+
+Docker uses a technique called _port mapping_: programs running inside Docker containers may be exposed with a different port compared to the one the program believes to be using.
+This is useful for running multiple containers using the same ports, at the same time, on the same server.
+
+To make Docker compatible with Redis Cluster, you need to use Docker's _host networking mode_.
+Please see the `--net=host` option in the [Docker documentation](https://docs.docker.com/engine/userguide/networking/dockernetworks/) for more information.
+
+#### Redis Cluster data sharding
+
+Redis Cluster does not use consistent hashing, but a different form of sharding
+where every key is conceptually part of what we call a **hash slot**.
+
+There are 16384 hash slots in Redis Cluster, and to compute the hash
+slot for a given key, we simply take the CRC16 of the key modulo
+16384.
+
+Every node in a Redis Cluster is responsible for a subset of the hash slots,
+so, for example, you may have a cluster with 3 nodes, where:
+
+* Node A contains hash slots from 0 to 5500.
+* Node B contains hash slots from 5501 to 11000.
+* Node C contains hash slots from 11001 to 16383.
+
+This makes it easy to add and remove cluster nodes. For example, if
+I want to add a new node D, I need to move some hash slots from nodes A, B, C
+to D. Similarly, if I want to remove node A from the cluster, I can just
+move the hash slots served by A to B and C. Once node A is empty,
+I can remove it from the cluster completely.
+
+Moving hash slots from one node to another does not require stopping
+any operations; therefore, adding and removing nodes, or changing the percentage of hash slots held by a node, requires no downtime.
+
+Redis Cluster supports multiple key operations as long as all of the keys involved in a single command execution (or whole transaction, or Lua script
+execution) belong to the same hash slot. The user can force multiple keys
+to be part of the same hash slot by using a feature called *hash tags*.
+
+Hash tags are documented in the Redis Cluster specification, but the gist is
+that if there is a substring between {} brackets in a key, only what is
+inside the brackets is hashed. For example, the keys `user:{123}:profile` and `user:{123}:account` are guaranteed to be in the same hash slot because they share the same hash tag, as the sketch below shows.
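+
+You can verify this with the `CLUSTER KEYSLOT` command. A sketch of a
+`redis-cli` session against a hypothetical node on port 7000 (the slot
+number is simply whatever CRC16 yields for the tag `123`):
+
+```
+127.0.0.1:7000> CLUSTER KEYSLOT user:{123}:profile
+(integer) 5970
+127.0.0.1:7000> CLUSTER KEYSLOT user:{123}:account
+(integer) 5970
+```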
+As a result, you can operate on these two keys in the same multi-key operation.
+
+#### Redis Cluster master-replica model
+
+To remain available when a subset of master nodes are failing or are
+not able to communicate with the majority of nodes, Redis Cluster uses a
+master-replica model where every hash slot has from 1 (the master itself) to N
+replicas (N-1 additional replica nodes).
+
+In our example cluster with nodes A, B, C, if node B fails the cluster is not
+able to continue, since we no longer have a way to serve hash slots in the
+range 5501-11000.
+
+However, when the cluster is created (or at a later time), we add a replica
+node to every master, so that the final cluster is composed of A, B, C
+that are master nodes, and A1, B1, C1 that are replica nodes.
+This way, the system can continue if node B fails.
+
+Node B1 replicates B, so if B fails, the cluster will promote node B1 as the new
+master and will continue to operate correctly.
+
+However, note that if nodes B and B1 fail at the same time, Redis Cluster will not be able to continue to operate.
+
+#### Redis Cluster consistency guarantees
+
+Redis Cluster does not guarantee **strong consistency**. In practical
+terms this means that under certain conditions it is possible that Redis
+Cluster will lose writes that were acknowledged by the system to the client.
+
+The first reason why Redis Cluster can lose writes is that it uses
+asynchronous replication. This means that during writes the following
+happens:
+
+* Your client writes to the master B.
+* The master B replies OK to your client.
+* The master B propagates the write to its replicas B1, B2 and B3.
+
+As you can see, B does not wait for an acknowledgement from B1, B2, B3 before
+replying to the client, since this would be a prohibitive latency penalty
+for Redis. So if your client writes something and B acknowledges the write
+but crashes before being able to send the write to its replicas, one of the
+replicas (which did not receive the write) can be promoted to master, losing
+the write forever.
+
+This is very similar to what happens with most databases that are
+configured to flush data to disk every second, so it is a scenario you
+are already able to reason about because of past experience with traditional
+database systems not involving distributed systems. Similarly, you can
+improve consistency by forcing the database to flush data to disk before
+replying to the client, but this usually results in prohibitively low
+performance. That would be the equivalent of synchronous replication in
+the case of Redis Cluster.
+
+Basically, there is a trade-off to be made between performance and consistency.
+
+Redis Cluster has support for synchronous writes when absolutely needed,
+implemented via the `WAIT` command. This makes losing writes a lot less
+likely. However, note that Redis Cluster does not implement strong consistency
+even when synchronous replication is used: it is always possible, under more
+complex failure scenarios, that a replica that was not able to receive the write
+will be elected as master.
+
+There is another notable scenario where Redis Cluster will lose writes, which
+happens during a network partition where a client is isolated with a minority
+of instances including at least a master.
+
+Take as an example our six-node cluster composed of A, B, C, A1, B1, C1,
+with 3 masters and 3 replicas. There is also a client that we will call Z1.
+
+After a partition occurs, it is possible that in one side of the
+partition we have A, C, A1, B1, C1, and in the other side we have B and Z1.
+
+Z1 is still able to write to B, which will accept its writes. If the
+partition heals in a very short time, the cluster will continue normally.
+However, if the partition lasts enough time for B1 to be promoted to master
+on the majority side of the partition, the writes that Z1 has sent to B
+in the meantime will be lost.
+
+{{% alert title="Note" color="info" %}}
+There is a **maximum window** to the amount of writes Z1 will be able
+to send to B: if enough time has elapsed for the majority side of the
+partition to elect a replica as master, every master node in the minority
+side will have stopped accepting writes.
+{{% /alert %}}
+
+This amount of time is a very important configuration directive of Redis
+Cluster, and is called the **node timeout**.
+
+After node timeout has elapsed, a master node is considered to be failing,
+and can be replaced by one of its replicas.
+Similarly, if a master node has not been able to sense the majority of the
+other master nodes for the node timeout, it enters an error state
+and stops accepting writes.
+
+## Redis Cluster configuration parameters
+
+We are about to create an example cluster deployment.
+Before we continue, let's introduce the configuration parameters that Redis Cluster introduces
+in the `redis.conf` file.
+
+* **cluster-enabled `<yes/no>`**: If yes, enables Redis Cluster support in a specific Redis instance. Otherwise the instance starts as a standalone instance as usual.
+* **cluster-config-file `<filename>`**: Note that despite the name of this option, this is not a user editable configuration file, but the file where a Redis Cluster node automatically persists the cluster configuration (the state, basically) every time there is a change, in order to be able to re-read it at startup. The file lists things like the other nodes in the cluster, their state, persistent variables, and so forth. Often this file is rewritten and flushed on disk as a result of some message reception.
+* **cluster-node-timeout `<milliseconds>`**: The maximum amount of time a Redis Cluster node can be unavailable without being considered as failing. If a master node is not reachable for more than the specified amount of time, it will be failed over by its replicas. This parameter controls other important things in Redis Cluster. Notably, every node that can't reach the majority of master nodes for the specified amount of time will stop accepting queries.
+* **cluster-slave-validity-factor `<factor>`**: If set to zero, a replica will always consider itself valid, and will therefore always try to failover a master, regardless of the amount of time the link between the master and the replica remained disconnected. If the value is positive, a maximum disconnection time is calculated as the *node timeout* value multiplied by the factor provided with this option, and if the node is a replica, it will not try to start a failover if the master link was disconnected for more than the specified amount of time. For example, if the node timeout is set to 5 seconds and the validity factor is set to 10, a replica disconnected from the master for more than 50 seconds will not try to failover its master. Note that any value different than zero may result in Redis Cluster being unavailable after a master failure if there is no replica that is able to failover it.
In that case the cluster will return to being available only when the original master rejoins the cluster.
+* **cluster-migration-barrier `<count>`**: The minimum number of replicas a master must remain connected with for another replica to be able to migrate to a master that is no longer covered by any replica. See the appropriate section about replica migration in this tutorial for more information.
+* **cluster-require-full-coverage `<yes/no>`**: If this is set to yes, as it is by default, the cluster stops accepting writes if some percentage of the key space is not covered by any node. If the option is set to no, the cluster will still serve queries even if only requests about a subset of keys can be processed.
+* **cluster-allow-reads-when-down `<yes/no>`**: If this is set to no, as it is by default, a node in a Redis Cluster will stop serving all traffic when the cluster is marked as failed, either when a node can't reach a quorum of masters or when full coverage is not met. This prevents reading potentially inconsistent data from a node that is unaware of changes in the cluster. This option can be set to yes to allow reads from a node during the fail state, which is useful for applications that want to prioritize read availability but still want to prevent inconsistent writes. It can also be used when using Redis Cluster with only one or two shards, as it allows the nodes to continue serving writes when a master fails but automatic failover is impossible.
+
+## Create and use a Redis Cluster
+
+To create and use a Redis Cluster, follow these steps:
+
+* [Create a Redis Cluster](#create-a-redis-cluster)
+* [Interact with the cluster](#interact-with-the-cluster)
+* [Write an example app with redis-rb-cluster](#write-an-example-app-with-redis-rb-cluster)
+* [Reshard the cluster](#reshard-the-cluster)
+* [A more interesting example application](#a-more-interesting-example-application)
+* [Test the failover](#test-the-failover)
+* [Manual failover](#manual-failover)
+* [Add a new node](#add-a-new-node)
+* [Remove a node](#remove-a-node)
+* [Replica migration](#replica-migration)
+* [Upgrade nodes in a Redis Cluster](#upgrade-nodes-in-a-redis-cluster)
+* [Migrate to Redis Cluster](#migrate-to-redis-cluster)
+
+But, first, familiarize yourself with the requirements for creating a cluster.
+
+#### Requirements to create a Redis Cluster
+
+To create a cluster, the first thing you need is to have a few empty Redis instances running in _cluster mode_.
+
+At minimum, set the following directives in the `redis.conf` file:
+
+```
+port 7000
+cluster-enabled yes
+cluster-config-file nodes.conf
+cluster-node-timeout 5000
+appendonly yes
+```
+
+To enable cluster mode, set the `cluster-enabled` directive to `yes`.
+Every instance also contains the path of a file where the
+configuration for this node is stored, which by default is `nodes.conf`.
+This file is never touched by humans; it is simply generated at startup
+by the Redis Cluster instances, and updated every time it is needed.
+
+Note that the **minimal cluster** that works as expected must contain
+at least three master nodes. For deployment, we strongly recommend
+a six-node cluster, with three masters and three replicas.
+
+You can test this locally by creating a directory for each instance,
+named after the port number the instance will run on.
+
+For example:
+
+```
+mkdir cluster-test
+cd cluster-test
+mkdir 7000 7001 7002 7003 7004 7005
+```
+
+Create a `redis.conf` file inside each of the directories, from 7000 to 7005.
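+
+For example, a small shell loop could generate all six files from the
+template above (a sketch, assuming a POSIX shell and that you are inside
+the `cluster-test` directory):
+
+```
+for port in 7000 7001 7002 7003 7004 7005; do
+  cat > ${port}/redis.conf <<EOF
+port ${port}
+cluster-enabled yes
+cluster-config-file nodes.conf
+cluster-node-timeout 5000
+appendonly yes
+EOF
+done
+```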
+Each file follows the template shown earlier; just make sure to use the port
+number matching the directory name instead of `7000`.
+
+You can start each instance as follows, each running in a separate terminal tab:
+
+```
+cd 7000
+redis-server ./redis.conf
+```
+
+You'll see from the logs that every node assigns itself a new ID:
+
+    [82462] 26 Nov 11:56:55.329 * No cluster configuration found, I'm 97a3a64667477371c4479320d683e4c8db5858b1
+
+This ID will be used forever by this specific instance in order for the instance
+to have a unique name in the context of the cluster. Every node
+remembers every other node using these IDs, and not by IP or port.
+IP addresses and ports may change, but the unique node identifier will never
+change for the whole life of the node. We call this identifier simply the **Node ID**.
+
+#### Create a Redis Cluster
+
+Now that we have a number of instances running, you need to create your cluster by writing some meaningful configuration to the nodes.
+
+You can configure and execute individual instances manually or use the create-cluster script.
+Let's go over how you do it manually.
+
+To create the cluster, run:
+
+    redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 \
+    127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
+    --cluster-replicas 1
+
+The command used here is **create**, since we want to create a new cluster.
+The option `--cluster-replicas 1` means that we want a replica for every master created.
+
+The other arguments are the list of addresses of the instances we want to use
+to create the new cluster.
+
+`redis-cli` will propose a configuration. Accept the proposed configuration by typing **yes**.
+The cluster will be configured and *joined*, which means that instances will be
+bootstrapped into talking with each other. Finally, if everything has gone well, you'll see a message like this:
+
+    [OK] All 16384 slots covered
+
+This means that there is at least one master instance serving each of the
+16384 available slots.
+
+If you don't want to create a Redis Cluster by configuring and executing
+individual instances manually as explained above, there is a much simpler
+system (but you'll not learn the same amount of operational details).
+
+Find the `utils/create-cluster` directory in the Redis distribution.
+There is a script called `create-cluster` inside (same name as the directory
+that contains it); it's a simple bash script. In order to start
+a 6-node cluster with 3 masters and 3 replicas, just type the following
+commands:
+
+1. `create-cluster start`
+2. `create-cluster create`
+
+Reply `yes` in step 2 when the `redis-cli` utility asks you to accept
+the cluster layout.
+
+You can now interact with the cluster; the first node will start at port 30001
+by default. When you are done, stop the cluster with:
+
+3. `create-cluster stop`
+
+Please read the `README` inside this directory for more information on how
+to run the script.
+
+#### Interact with the cluster
+
+To connect to Redis Cluster, you'll need a cluster-aware Redis client.
+See the [documentation](/docs/clients) for your client of choice to determine its cluster support.
+
+You can also test your Redis Cluster using the `redis-cli` command line utility:
+
+```
+$ redis-cli -c -p 7000
+redis 127.0.0.1:7000> set foo bar
+-> Redirected to slot [12182] located at 127.0.0.1:7002
+OK
+redis 127.0.0.1:7002> set hello world
+-> Redirected to slot [866] located at 127.0.0.1:7000
+OK
+redis 127.0.0.1:7000> get foo
+-> Redirected to slot [12182] located at 127.0.0.1:7002
+"bar"
+redis 127.0.0.1:7002> get hello
+-> Redirected to slot [866] located at 127.0.0.1:7000
+"world"
+```
+
+{{% alert title="Note" color="info" %}}
+If you created the cluster using the script, your nodes may listen
+on different ports, starting from 30001 by default.
+{{% /alert %}}
+
+The `redis-cli` cluster support is very basic, so it always relies on the fact that
+Redis Cluster nodes are able to redirect a client to the right node.
+A serious client is able to do better than that, and cache the map between
+hash slots and node addresses, to directly use the right connection to the
+right node. The map is refreshed only when something changes in the cluster
+configuration, for example after a failover or after the system administrator
+changed the cluster layout by adding or removing nodes.
+
+#### Write an example app with redis-rb-cluster
+
+Before going forward and showing how to operate a Redis Cluster, doing things
+like a failover or a resharding, we need to create an example application,
+or at least be able to understand the semantics of a simple Redis Cluster
+client interaction.
+
+In this way we can run an example and at the same time try to make nodes
+fail, or start a resharding, to see how Redis Cluster behaves under real-world
+conditions. It is not very helpful to see what happens while nobody
+is writing to the cluster.
+
+This section explains some basic usage of
+[redis-rb-cluster](https://github.com/antirez/redis-rb-cluster) showing two
+examples.
+The first is the following, and is the
+[`example.rb`](https://github.com/antirez/redis-rb-cluster/blob/master/example.rb)
+file inside the redis-rb-cluster distribution:
+
+```
+   1  require './cluster'
+   2
+   3  if ARGV.length != 2
+   4      startup_nodes = [
+   5          {:host => "127.0.0.1", :port => 7000},
+   6          {:host => "127.0.0.1", :port => 7001}
+   7      ]
+   8  else
+   9      startup_nodes = [
+  10          {:host => ARGV[0], :port => ARGV[1].to_i}
+  11      ]
+  12  end
+  13
+  14  rc = RedisCluster.new(startup_nodes,32,:timeout => 0.1)
+  15
+  16  last = false
+  17
+  18  while not last
+  19      begin
+  20          last = rc.get("__last__")
+  21          last = 0 if !last
+  22      rescue => e
+  23          puts "error #{e.to_s}"
+  24          sleep 1
+  25      end
+  26  end
+  27
+  28  ((last.to_i+1)..1000000000).each{|x|
+  29      begin
+  30          rc.set("foo#{x}",x)
+  31          puts rc.get("foo#{x}")
+  32          rc.set("__last__",x)
+  33      rescue => e
+  34          puts "error #{e.to_s}"
+  35      end
+  36      sleep 0.1
+  37  }
+```
+
+The application does a very simple thing: it sets keys of the form `foo<number>` to `number`, one after the other. So if you run the program, the result is the
+following stream of commands:
+
+* SET foo0 0
+* SET foo1 1
+* SET foo2 2
+* And so forth...
+
+The program looks more complex than it usually should, as it is designed to
+show errors on the screen instead of exiting with an exception, so every
+operation performed with the cluster is wrapped by `begin` `rescue` blocks.
+
+**Line 14** is the first interesting line in the program.
It creates the
+Redis Cluster object, using as its arguments a list of *startup nodes*, the maximum
+number of connections this object is allowed to take against different nodes,
+and finally the timeout after which a given operation is considered to have failed.
+
+The startup nodes don't need to be all the nodes of the cluster. The important
+thing is that at least one node is reachable. Also note that redis-rb-cluster
+updates this list of startup nodes as soon as it is able to connect with the
+first node. You should expect such behavior from any other serious client.
+
+Now that we have the Redis Cluster object instance stored in the **rc** variable,
+we are ready to use the object as if it were a normal Redis object instance.
+
+This is exactly what happens in **lines 18 to 26**: when we restart the example
+we don't want to start again with `foo0`, so we store the counter inside
+Redis itself. The code above is designed to read this counter, or, if the
+counter does not exist, to assign it the value of zero.
+
+However, note that it is a while loop, as we want to try again and again even
+if the cluster is down and is returning errors. Normal applications don't need
+to be so careful.
+
+**Lines 28 to 37** start the main loop where the keys are set or
+an error is displayed.
+
+Note the `sleep` call at the end of the loop. In your tests you can remove
+the sleep if you want to write to the cluster as fast as possible (relative
+to the fact that this is a busy loop without real parallelism, of course, so
+you'll get around 10k ops/second under the best conditions).
+
+Normally writes are slowed down so that the example application is
+easier for humans to follow.
+
+Starting the application produces the following output:
+
+```
+ruby ./example.rb
+1
+2
+3
+4
+5
+6
+7
+8
+9
+^C (I stopped the program here)
+```
+
+This is not a very interesting program, and we'll use a better one in a moment,
+but we can already see what happens during a resharding while the program
+is running.
+
+#### Reshard the cluster
+
+Now we are ready to try a cluster resharding. To do this, please
+keep the example.rb program running, so that you can see if there is some
+impact on the running program. Also, you may want to comment out the `sleep`
+call to have some more serious write load during resharding.
+
+Resharding basically means moving hash slots from one set of nodes to another
+set of nodes.
+Like cluster creation, it is accomplished using the redis-cli utility.
+
+To start a resharding, just type:
+
+    redis-cli --cluster reshard 127.0.0.1:7000
+
+You only need to specify a single node; redis-cli will find the other nodes
+automatically.
+
+Currently redis-cli is only able to reshard with administrator support:
+you can't just say "move 5% of slots from this node to the other one" (but
+this is pretty trivial to implement). So it starts with questions. The first
+is how much of a resharding you want to do:
+
+    How many slots do you want to move (from 1 to 16384)?
+
+We can try to reshard 1000 hash slots, which should already contain a
+non-trivial amount of keys if the example is still running without the sleep
+call.
+
+Then redis-cli needs to know the target of the resharding, that is,
+the node that will receive the hash slots.
+I'll use the first master node, that is, 127.0.0.1:7000, but I need
+to specify the Node ID of the instance.
This was already printed in a
+list by redis-cli, but I can always find the ID of a node with the following
+command if needed:
+
+```
+$ redis-cli -p 7000 cluster nodes | grep myself
+97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5460
+```
+
+Ok, so my target node is 97a3a64667477371c4479320d683e4c8db5858b1.
+
+Now you'll be asked which nodes you want to take those hash slots from.
+I'll just type `all` in order to take a bit of hash slots from all the
+other master nodes.
+
+After the final confirmation you'll see a message for every slot that
+redis-cli is going to move from one node to another, and a dot will be printed
+for every actual key moved from one side to the other.
+
+While the resharding is in progress you should be able to see your
+example program running unaffected. You can stop and restart it multiple times
+during the resharding if you want.
+
+At the end of the resharding, you can test the health of the cluster with
+the following command:
+
+    redis-cli --cluster check 127.0.0.1:7000
+
+All the slots will be covered as usual, but this time the master at
+127.0.0.1:7000 will have more hash slots, something around 6461.
+
+Resharding can also be performed automatically, without the need to manually
+enter the parameters in an interactive way. This is possible using a command
+line like the following:
+
+    redis-cli --cluster reshard <host>:<port> --cluster-from <node-id> --cluster-to <node-id> --cluster-slots <number of slots> --cluster-yes
+
+This allows you to build some automation if you are likely to reshard often;
+however, currently there is no way for `redis-cli` to automatically
+rebalance the cluster, checking the distribution of keys across the cluster
+nodes and intelligently moving slots as needed. This feature will be added
+in the future.
+
+The `--cluster-yes` option instructs the cluster manager to automatically answer
+"yes" to the command's prompts, allowing it to run in a non-interactive mode.
+Note that this option can also be activated by setting the
+`REDISCLI_CLUSTER_YES` environment variable.
+
+#### A more interesting example application
+
+The example application we wrote earlier is not very good.
+It writes to the cluster in a simple way without even checking if what was
+written is the right thing.
+
+From our point of view the cluster receiving the writes could just always
+write the key `foo` to `42` for every operation, and we would not notice it at
+all.
+
+So in the `redis-rb-cluster` repository there is a more interesting application
+called `consistency-test.rb`. It uses a set of counters, by default 1000, and sends `INCR` commands in order to increment the counters.
+
+However, instead of just writing, the application does two additional things:
+
+* When a counter is updated using `INCR`, the application remembers the write.
+* It also reads a random counter before every write, and checks if the value is what we expect it to be, comparing it with the value it has in memory.
+
+What this means is that this application is a simple **consistency checker**,
+and is able to tell you if the cluster lost some write, or if it accepted
+a write that we did not receive acknowledgment for. In the first case we'll
+see a counter with a value that is smaller than the one we remember, while
+in the second case the value will be greater.
+
+Running the consistency-test application produces a line of output every
+second:
+
+```
+$ ruby consistency-test.rb
+925 R (0 err) | 925 W (0 err) |
+5030 R (0 err) | 5030 W (0 err) |
+9261 R (0 err) | 9261 W (0 err) |
+13517 R (0 err) | 13517 W (0 err) |
+17780 R (0 err) | 17780 W (0 err) |
+22025 R (0 err) | 22025 W (0 err) |
+25818 R (0 err) | 25818 W (0 err) |
+```
+
+The line shows the number of **R**eads and **W**rites performed, and the
+number of errors (queries not accepted because of errors, since the system was
+not available).
+
+If some inconsistency is found, new lines are added to the output.
+This is what happens, for example, if I reset a counter manually while
+the program is running:
+
+```
+$ redis-cli -h 127.0.0.1 -p 7000 set key_217 0
+OK
+
+(in the other tab I see...)
+
+94774 R (0 err) | 94774 W (0 err) |
+98821 R (0 err) | 98821 W (0 err) |
+102886 R (0 err) | 102886 W (0 err) | 114 lost |
+107046 R (0 err) | 107046 W (0 err) | 114 lost |
+```
+
+When I set the counter to 0 the real value was 114, so the program reports
+114 lost writes (`INCR` commands that are not remembered by the cluster).
+
+This program is much more interesting as a test case, so we'll use it
+to test the Redis Cluster failover.
+
+#### Test the failover
+
+To trigger the failover, the simplest thing we can do (which is also
+the semantically simplest failure that can occur in a distributed system)
+is to crash a single process, in our case a single master.
+
+{{% alert title="Note" color="info" %}}
+During this test, you should keep a tab open with the consistency test
+application running.
+{{% /alert %}}
+
+We can identify a master and crash it with the following command:
+
+```
+$ redis-cli -p 7000 cluster nodes | grep master
+3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385482984082 0 connected 5960-10921
+2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 master - 0 1385482983582 0 connected 11423-16383
+97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5959 10922-11422
+```
+
+Ok, so 7000, 7001, and 7002 are masters. Let's crash node 7002 with the
+**DEBUG SEGFAULT** command:
+
+```
+$ redis-cli -p 7002 debug segfault
+Error: Server closed the connection
+```
+
+Now we can look at the output of the consistency test to see what it reported.
+
+```
+18849 R (0 err) | 18849 W (0 err) |
+23151 R (0 err) | 23151 W (0 err) |
+27302 R (0 err) | 27302 W (0 err) |
+
+... many error warnings here ...
+
+29659 R (578 err) | 29660 W (577 err) |
+33749 R (578 err) | 33750 W (577 err) |
+37918 R (578 err) | 37919 W (577 err) |
+42077 R (578 err) | 42078 W (577 err) |
+```
+
+As you can see, during the failover the system was not able to accept 578 reads and 577 writes; however, no inconsistency was created in the database. This may
+sound unexpected, as in the first part of this tutorial we stated that Redis
+Cluster can lose writes during the failover because it uses asynchronous
+replication. What we did not say is that this is not very likely to happen,
+because Redis sends the reply to the client and the commands to replicate
+to the replicas at about the same time, so there is a very small window in
+which to lose data. However, the fact that it is hard to trigger does not mean
+that it is impossible, so this does not change the consistency guarantees
+provided by Redis Cluster.
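+
+If your application cannot tolerate even that small window, it can trade some
+latency for durability with the `WAIT` command mentioned earlier, which blocks
+until the preceding writes have been acknowledged by the requested number of
+replicas (or the timeout, in milliseconds, expires). A sketch of such a
+session (the reply depends on your topology and timing):
+
+```
+127.0.0.1:7000> SET key_217 1000
+OK
+127.0.0.1:7000> WAIT 1 100
+(integer) 1
+```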
+
+We can now check what the cluster setup is after the failover (note that
+in the meantime I restarted the crashed instance so that it rejoins the
+cluster as a replica):
+
+```
+$ redis-cli -p 7000 cluster nodes
+3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385503418521 0 connected
+a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4479320d683e4c8db5858b1 0 1385503419023 0 connected
+97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5959 10922-11422
+3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master - 0 1385503419023 3 connected 11423-16383
+3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385503417005 0 connected 5960-10921
+2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385503418016 3 connected
+```
+
+Now the masters are running on ports 7000, 7001 and 7005. What was previously
+a master, that is, the Redis instance running on port 7002, is now a replica of
+7005.
+
+The output of the `CLUSTER NODES` command may look intimidating, but it is actually pretty simple, and is composed of the following tokens:
+
+* Node ID
+* ip:port
+* flags: master, replica, myself, fail, ...
+* if it is a replica, the Node ID of the master
+* Time of the last pending PING still waiting for a reply.
+* Time of the last PONG received.
+* Configuration epoch for this node (see the Cluster specification).
+* Status of the link to this node.
+* Slots served...
+
+#### Manual failover
+
+Sometimes it is useful to force a failover without actually causing any problem
+on a master. For example, to upgrade the Redis process of one of the
+master nodes, it is a good idea to fail it over to turn it into a replica
+with minimal impact on availability.
+
+Manual failovers are supported by Redis Cluster using the `CLUSTER FAILOVER`
+command, which must be executed on one of the replicas of the master you want
+to failover.
+
+Manual failovers are special and safer compared to failovers resulting from
+actual master failures. They occur in a way that avoids data loss in the
+process, by switching clients from the original master to the new master only
+when the system is sure that the new master has processed the entire replication
+stream from the old one.
+
+This is what you see in the replica log when you perform a manual failover:
+
+    # Manual failover user request accepted.
+    # Received replication offset for paused master manual failover: 347540
+    # All master replication stream processed, manual failover can start.
+    # Start of election delayed for 0 milliseconds (rank #0, offset 347540).
+    # Starting a failover election for epoch 7545.
+    # Failover election won: I'm the new master.
+
+Basically, clients connected to the master we are failing over are stopped.
+At the same time, the master sends its replication offset to the replica, which
+waits to reach the offset on its side. When the replication offset is reached,
+the failover starts, and the old master is informed about the configuration
+switch. When the clients are unblocked on the old master, they are redirected
+to the new master.
+
+{{% alert title="Note" color="info" %}}
+To promote a replica to master, it must first be known as a replica by a majority of the masters in the cluster.
+ Otherwise, it cannot win the failover election.
+ If the replica has just been added to the cluster (see [Add a new node as a replica](#add-a-new-node-as-a-replica)), you may need to wait a while before sending the `CLUSTER FAILOVER` command, to make sure the masters in the cluster are aware of the new replica.
+{{% /alert %}}
+
+#### Add a new node
+
+Adding a new node is basically the process of adding an empty node and then
+moving some data into it, if it is a new master, or telling it to
+set up as a replica of a known node, if it is a replica.
+
+We'll show both, starting with the addition of a new master instance.
+
+In both cases the first step to perform is **adding an empty node**.
+
+This is as simple as starting a new node on port 7006 (we already used
+7000 to 7005 for our existing 6 nodes) with the same configuration
+used for the other nodes, except for the port number. To conform
+with the setup we used for the previous nodes:
+
+* Create a new tab in your terminal application.
+* Enter the `cluster-test` directory.
+* Create a directory named `7006`.
+* Create a redis.conf file inside, similar to the one used for the other nodes but using 7006 as the port number.
+* Finally, start the server with `../redis-server ./redis.conf`
+
+At this point the server should be running.
+
+Now we can use **redis-cli** as usual to add the node to
+the existing cluster.
+
+    redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000
+
+As you can see, I used the **add-node** command, specifying the address of the
+new node as the first argument, and the address of a random existing node in the
+cluster as the second argument.
+
+In practical terms redis-cli here did very little to help us: it just
+sent a `CLUSTER MEET` message to the node, something that is also possible
+to accomplish manually. However, redis-cli also checks the state of the
+cluster before operating, so it is a good idea to always perform cluster
+operations via redis-cli, even when you know how the internals work.
+
+Now we can connect to the new node to see if it really joined the cluster:
+
+```
+redis 127.0.0.1:7006> cluster nodes
+3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385543178575 0 connected 5960-10921
+3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385543179583 0 connected
+f093c80dde814da99c5cf72a7dd01590792b783b :0 myself,master - 0 0 0 connected
+2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543178072 3 connected
+a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4479320d683e4c8db5858b1 0 1385543178575 0 connected
+97a3a64667477371c4479320d683e4c8db5858b1 127.0.0.1:7000 master - 0 1385543179080 0 connected 0-5959 10922-11422
+3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master - 0 1385543177568 3 connected 11423-16383
+```
+
+Note that since this node is already connected to the cluster, it is already
+able to redirect client queries correctly and is, generally speaking, part of
+the cluster. However, it has two peculiarities compared to the other masters:
+
+* It holds no data as it has no assigned hash slots.
+* Because it is a master without assigned slots, it does not participate in the election process when a replica wants to become a master.
+
+Now it is possible to assign hash slots to this node using the resharding
+feature of `redis-cli`, for instance with a non-interactive invocation like
+the sketch below.
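+
+A sketch of a non-interactive reshard that moves 1000 slots from all the
+other masters to the new node (substitute the full Node ID of the new node,
+shortened here for readability):
+
+    redis-cli --cluster reshard 127.0.0.1:7000 \
+        --cluster-from all --cluster-to f093c80dde814da99c5cf72a7dd01590792b783b \
+        --cluster-slots 1000 --cluster-yes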
+There is no need to walk through the whole interactive process again, as we
+already did in a previous section; it is just a resharding with the empty
+node as its target.
+
+##### Add a new node as a replica
+
+Adding a new replica can be performed in two ways. The obvious one is to
+use redis-cli again, but with the --cluster-slave option, like this:
+
+    redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 --cluster-slave
+
+Note that the command line here is exactly like the one we used to add
+a new master, so we are not specifying to which master we want to add
+the replica. In this case, what happens is that redis-cli will add the new
+node as a replica of a random master among the masters with fewer replicas.
+
+However, you can specify exactly which master you want to target with your
+new replica with the following command line:
+
+    redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 --cluster-slave --cluster-master-id 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e
+
+This way we assign the new replica to a specific master.
+
+A more manual way to add a replica to a specific master is to add the new
+node as an empty master, and then turn it into a replica using the
+`CLUSTER REPLICATE` command. This also works if the node was added as a replica
+but you want to move it as a replica of a different master.
+
+For example, in order to add a replica for the node 127.0.0.1:7005, which is
+currently serving hash slots in the range 11423-16383 and has Node ID
+3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e, all I need to do is to connect
+to the new node (already added as an empty master) and send the command:
+
+    redis 127.0.0.1:7006> cluster replicate 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e
+
+That's it. Now we have a new replica for this set of hash slots, and all
+the other nodes in the cluster already know about it (after a few seconds needed
+to update their config). We can verify with the following command:
+
+```
+$ redis-cli -p 7000 cluster nodes | grep slave | grep 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e
+f093c80dde814da99c5cf72a7dd01590792b783b 127.0.0.1:7006 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543617702 3 connected
+2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543617198 3 connected
+```
+
+The node 3c3a0c... now has two replicas, running on ports 7002 (the existing one) and 7006 (the new one).
+
+#### Remove a node
+
+To remove a replica node just use the `del-node` command of redis-cli:
+
+    redis-cli --cluster del-node 127.0.0.1:7000 <node-id>
+
+The first argument is just a random node in the cluster; the second argument
+is the ID of the node you want to remove.
+
+You can remove a master node in the same way as well, **however in order to
+remove a master node it must be empty**. If the master is not empty, you need
+to reshard data away from it to all the other master nodes beforehand.
+
+An alternative way to remove a master node is to perform a manual failover of it
+over one of its replicas and remove the node after it has turned into a replica of
+the new master. Obviously this does not help when you want to reduce the actual
+number of masters in your cluster; in that case, a resharding is needed.
+
+There is a special scenario where you want to remove a failed node.
+You should not use the `del-node` command because it tries to connect to all nodes and you will encounter a "connection refused" error.
+Instead, you can use the `call` command:
+
+    redis-cli --cluster call 127.0.0.1:7000 cluster forget <node-id>
+
+This command will execute the `CLUSTER FORGET` command on every node.
+
+#### Replica migration
+
+In Redis Cluster, you can reconfigure a replica to replicate from a
+different master at any time just by using this command:
+
+    CLUSTER REPLICATE <master-node-id>
+
+However, there is a special scenario where you want replicas to move from one
+master to another one automatically, without the help of the system administrator.
+The automatic reconfiguration of replicas is called *replica migration* and is
+able to improve the reliability of a Redis Cluster.
+
+{{% alert title="Note" color="info" %}}
+You can read the details of replica migration in the [Redis Cluster Specification](/topics/cluster-spec); here we'll only provide some information about the
+general idea and what you should do in order to benefit from it.
+{{% /alert %}}
+
+The reason why you may want to let your cluster replicas move from one master
+to another under certain conditions is that usually a Redis Cluster is only as
+resistant to failures as the number of replicas attached to a given master.
+
+For example, a cluster where every master has a single replica can't continue
+operations if the master and its replica fail at the same time, simply because
+there is no other instance that has a copy of the hash slots the master was
+serving. However, while net-splits are likely to isolate a number of nodes
+at the same time, many other kinds of failures, like hardware or software failures
+local to a single node, are a very notable class of failures that are unlikely
+to happen at the same time. So it is possible that, in your cluster where
+every master has a replica, a replica is killed at 4am and its master is killed
+at 6am. This will still result in a cluster that can no longer operate.
+
+To improve the reliability of the system, we have the option to add additional
+replicas to every master, but this is expensive. Replica migration allows you to
+add more replicas to just a few masters. Say you have 10 masters with 1 replica
+each, for a total of 20 instances. You then add, for example, 3 more instances
+as replicas of some of your masters, so certain masters will have more
+than a single replica.
+
+With replica migration, what happens is that if a master is left without
+replicas, a replica from a master that has multiple replicas will migrate to
+the *orphaned* master. So after your replica goes down at 4am as in the example
+we made above, another replica will take its place, and when the master
+fails as well at 6am, there is still a replica that can be elected so that
+the cluster can continue to operate.
+
+So, in short, what should you know about replica migration?
+
+* The cluster will try to migrate a replica from the master that has the greatest number of replicas in a given moment.
+* To benefit from replica migration you just have to add a few more replicas to a single master in your cluster; it does not matter which master.
+* There is a configuration parameter that controls the replica migration feature, called `cluster-migration-barrier`: you can read more about it in the example `redis.conf` file provided with Redis Cluster.
+
+#### Upgrade nodes in a Redis Cluster
+
+Upgrading replica nodes is easy since you just need to stop the node and restart
+it with an updated version of Redis.
If there are clients scaling reads using
+replica nodes, they should be able to reconnect to a different replica if a given
+one is not available.
+
+Upgrading masters is a bit more complex, and the suggested procedure is:
+
+1. Use `CLUSTER FAILOVER` to trigger a manual failover of the master to one of its replicas.
+   (See [Manual failover](#manual-failover) in this topic.)
+2. Wait for the master to turn into a replica.
+3. Finally, upgrade the node as you would for replicas.
+4. If you want the master to be the node you just upgraded, trigger a new manual failover in order to turn the upgraded node back into a master.
+
+Following this procedure, you should upgrade one node after the other until
+all the nodes are upgraded.
+
+#### Migrate to Redis Cluster
+
+Users willing to migrate to Redis Cluster may have just a single master, or
+may already be using a preexisting sharding setup, where keys
+are split among N nodes, using some in-house algorithm or a sharding algorithm
+implemented by their client library or Redis proxy.
+
+In both cases it is possible to migrate to Redis Cluster easily. However,
+the most important detail is whether multiple-key operations are used
+by the application, and how. There are three different cases:
+
+1. Multiple-key operations, transactions, or Lua scripts involving multiple keys are not used. Keys are accessed independently (even if accessed via transactions or Lua scripts grouping multiple commands, about the same key, together).
+2. Multiple-key operations, transactions, or Lua scripts involving multiple keys are used, but only with keys having the same **hash tag**, which means that the keys used together all have a `{...}` sub-string that happens to be identical. For example, the following multiple-key operation is defined in the context of the same hash tag: `SUNION {user:1000}.foo {user:1000}.bar`.
+3. Multiple-key operations, transactions, or Lua scripts involving multiple keys are used with key names not having an explicit, or the same, hash tag.
+
+The third case is not handled by Redis Cluster: the application must be
+modified in order to not use multi-key operations, or to use them only in
+the context of the same hash tag.
+
+Cases 1 and 2 are covered, so we'll focus on those two cases, which are handled
+in the same way, so no distinction will be made in the documentation.
+
+Assuming you have your preexisting data set split into N masters, where
+N=1 if you have no preexisting sharding, the following steps are needed
+in order to migrate your data set to Redis Cluster:
+
+1. Stop your clients. No automatic live migration to Redis Cluster is currently possible. You may be able to orchestrate a live migration in the context of your application / environment.
+2. Generate an append only file for each of your N masters using the `BGREWRITEAOF` command, and wait for the AOF files to be completely generated.
+3. Save your AOF files, from aof-1 to aof-N, somewhere. At this point you can stop your old instances if you wish (this is useful since in non-virtualized deployments you often need to reuse the same computers).
+4. Create a Redis Cluster composed of N masters and zero replicas. You'll add replicas later. Make sure all your nodes are using the append only file for persistence.
+5. Stop all the cluster nodes and substitute their append only files with your pre-existing append only files: aof-1 for the first node, aof-2 for the second node, up to aof-N.
+6. Restart your Redis Cluster nodes with the new AOF files.
They'll complain that there are keys that should not be there according to their configuration.
+7. Use the `redis-cli --cluster fix` command in order to fix the cluster so that keys are migrated according to the hash slots each node is authoritative for.
+8. Use `redis-cli --cluster check` at the end to make sure your cluster is OK.
+9. Restart your clients, modified to use a Redis Cluster aware client library.
+
+There is an alternative way to import data from external instances to a Redis
+Cluster, which is to use the `redis-cli --cluster import` command.
+
+The command moves all the keys of a running instance (deleting the keys from
+the source instance) to the specified pre-existing Redis Cluster. However,
+note that if you use a Redis 2.8 instance as the source instance, the operation
+may be slow, since 2.8 does not implement migrate connection caching, so you
+may want to restart your source instance with a Redis 3.x version before
+performing the operation.
+
+{{% alert title="Note" color="info" %}}
+Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately, in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API is naturally deprecated.
+{{% /alert %}}
+
+## Learn more
+
+* [Redis Cluster specification](/topics/cluster-spec)
+* [Linear Scaling with Redis Enterprise](https://redis.com/redis-enterprise/technology/linear-scaling-redis-enterprise/)
+* [Docker documentation](https://docs.docker.com/engine/userguide/networking/dockernetworks/)
+
diff --git a/docs/management/security/_index.md b/docs/management/security/_index.md
new file mode 100644
index 0000000000..20537b6879
--- /dev/null
+++ b/docs/management/security/_index.md
@@ -0,0 +1,230 @@
+---
+title: "Redis security"
+linkTitle: "Security"
+weight: 1
+description: Security model and features in Redis
+aliases: [
+    /topics/security,
+    /docs/manual/security,
+    /docs/manual/security.md
+]
+---
+
+This document provides an introduction to the topic of security from the point of
+view of Redis. It covers the access control provided by Redis, code security concerns,
+attacks that can be triggered from the outside by selecting malicious inputs, and
+other similar topics.
+You can learn more about access control, data protection and encryption, secure Redis architectures, and secure deployment techniques by taking the [Redis University security course](https://university.redis.com/courses/ru330/).
+
+For security-related contacts, open an issue on GitHub, or, when you feel it
+is really important to preserve the security of the communication, use the
+GPG key at the end of this document.
+
+## Security model
+
+Redis is designed to be accessed by trusted clients inside trusted environments.
+This means that usually it is not a good idea to expose the Redis instance
+directly to the internet or, in general, to an environment where untrusted
+clients can directly access the Redis TCP port or UNIX socket.
+
+For instance, in the common context of a web application implemented using Redis
+as a database, cache, or messaging system, the clients inside the front-end
+(web side) of the application will query Redis to generate pages or
+to perform operations requested or triggered by the web application user.
+
+In this case, the web application mediates access between Redis and
+untrusted clients (the user browsers accessing the web application).
+
+In general, untrusted access to Redis should
+always be mediated by a layer implementing ACLs, validating user input,
+and deciding what operations to perform against the Redis instance.
+
+## Network security
+
+Access to the Redis port should be denied to everybody but trusted clients
+in the network, so the servers running Redis should be directly accessible
+only by the computers implementing the application using Redis.
+
+In the common case of a single computer directly exposed to the internet, such
+as a virtualized Linux instance (Linode, EC2, ...), the Redis port should be
+firewalled to prevent access from the outside. Clients will still be able to
+access Redis using the loopback interface.
+
+Note that it is possible to bind Redis to a single interface by adding a line
+like the following to the **redis.conf** file:
+
+    bind 127.0.0.1
+
+Failing to protect the Redis port from the outside can have a big security
+impact because of the nature of Redis. For instance, a single `FLUSHALL` command can be used by an external attacker to delete the whole data set.
+
+## Protected mode
+
+Unfortunately, many users fail to protect Redis instances from being accessed
+from external networks. Many instances are simply left exposed on the
+internet with public IPs. Since version 3.2.0, Redis enters a special mode called **protected mode** when it is
+executed with the default configuration (binding all the interfaces) and
+without any password required to access it. In this mode, Redis only replies to queries from the
+loopback interfaces, and replies to clients connecting from other
+addresses with an error that explains the problem and how to configure
+Redis properly.
+
+We expect protected mode to seriously decrease the security issues caused
+by unprotected Redis instances executed without proper administration. However,
+the system administrator can still ignore the error given by Redis and
+disable protected mode or manually bind all the interfaces.
+
+## Authentication
+
+Redis provides two ways to authenticate clients.
+The recommended authentication method, introduced in Redis 6, is via Access Control Lists, allowing named users to be created and assigned fine-grained permissions.
+Read more about Access Control Lists [here](/docs/management/security/acl/).
+
+The legacy authentication method is enabled by editing the **redis.conf** file and providing a database password using the `requirepass` setting.
+This password is then used by all clients.
+
+When the `requirepass` setting is enabled, Redis will refuse any query by
+unauthenticated clients. A client can authenticate itself by sending the
+**AUTH** command followed by the password.
+
+The password is set by the system administrator in clear text inside the
+redis.conf file. It should be long enough to prevent brute force attacks
+for two reasons:
+
+* Redis is very fast at serving queries. Many passwords per second can be tested by an external client.
+* The Redis password is stored in the **redis.conf** file and inside the client configuration. Since the system administrator does not need to remember it, the password can be very long.
+
+The goal of the authentication layer is to optionally provide a layer of
+redundancy. If firewalling or any other system implemented to protect Redis
+from external attackers fails, an external client will still not be able to
+access the Redis instance without knowledge of the authentication password.
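+
+For example, with a directive like the following in `redis.conf` (the
+password shown is just a placeholder; generate a long random one in practice):
+
+    requirepass a-very-long-and-randomly-generated-password
+
+a client must authenticate before issuing other commands:
+
+```
+127.0.0.1:6379> AUTH a-very-long-and-randomly-generated-password
+OK
+```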
+
+Since the `AUTH` command, like every other Redis command, is sent unencrypted, it
+does not protect against an attacker who has enough access to the network to
+perform eavesdropping.
+
+## TLS support
+
+Redis has optional support for TLS on all communication channels, including
+client connections, replication links, and the Redis Cluster bus protocol.
+
+## Disallowing specific commands
+
+It is possible to disallow commands in Redis or to rename them to an unguessable
+name, so that normal clients are limited to a specified set of commands.
+
+For instance, a virtualized server provider may offer a managed Redis instance
+service. In this context, normal users should probably not be able to
+call the Redis **CONFIG** command to alter the configuration of the instance,
+but the systems that provide and remove instances should be able to do so.
+
+In this case, it is possible to either rename or completely shadow commands from
+the command table. This feature is available as a statement that can be used
+inside the redis.conf configuration file. For example:
+
+    rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
+
+In the above example, the **CONFIG** command was renamed to an unguessable name. It is also possible to completely disallow it (or any other command) by renaming it to the empty string, like in the following example:
+
+    rename-command CONFIG ""
+
+## Attacks triggered by malicious inputs from external clients
+
+There is a class of attacks that an attacker can trigger from the outside even
+without external access to the instance. For example, an attacker might insert data into Redis that triggers pathological (worst case)
+algorithm complexity on data structures implemented inside Redis internals.
+
+An attacker could supply, via a web form, a set of strings that
+are known to hash to the same bucket in a hash table, in order to turn the
+O(1) expected time (the average time) into the O(N) worst case. This can consume more
+CPU than expected and ultimately cause a Denial of Service.
+
+To prevent this specific attack, Redis uses a per-execution, pseudo-random
+seed for the hash function.
+
+Redis implements the SORT command using the qsort algorithm. Currently,
+the algorithm is not randomized, so it is possible to trigger a quadratic
+worst-case behavior by carefully selecting the right set of inputs.
+
+## String escaping and NoSQL injection
+
+The Redis protocol has no concept of string escaping, so injection
+is impossible under normal circumstances using a normal client library.
+The protocol uses prefixed-length strings and is completely binary safe.
+
+Since Lua scripts executed by the `EVAL` and `EVALSHA` commands follow the
+same rules, those commands are also safe.
+
+While it would be a strange use case, the application should avoid composing the body of the Lua script from strings obtained from untrusted sources.
+
+## Code security
+
+In a classical Redis setup, clients are allowed full access to the command set,
+but accessing the instance should never result in the ability to control the
+system where Redis is running.
+
+Internally, Redis uses all the well-known practices for writing secure code to
+prevent buffer overflows, format bugs, and other memory corruption issues.
+However, the ability to control the server configuration using the **CONFIG**
+command allows the client to change the working directory of the program and
+the name of the dump file. This allows clients to write RDB Redis files
+to random paths.
This is [a security issue](http://antirez.com/news/96) that may lead to the ability to compromise the system and/or run untrusted code as the same user as Redis is running. + +Redis does not require root privileges to run. It is recommended to +run it as an unprivileged *redis* user that is only used for this purpose. + +## GPG key + +``` +-----BEGIN PGP PUBLIC KEY BLOCK----- + +mQINBF9FWioBEADfBiOE/iKpj2EF/cJ/KzFX+jSBKa8SKrE/9RE0faVF6OYnqstL +S5ox/o+yT45FdfFiRNDflKenjFbOmCbAdIys9Ta0iq6I9hs4sKfkNfNVlKZWtSVG +W4lI6zO2Zyc2wLZonI+Q32dDiXWNcCEsmajFcddukPevj9vKMTJZtF79P2SylEPq +mUuhMy/jOt7q1ibJCj5srtaureBH9662t4IJMFjsEe+hiZ5v071UiQA6Tp7rxLqZ +O6ZRzuamFP3xfy2Lz5NQ7QwnBH1ROabhJPoBOKCATCbfgFcM1Rj+9AOGfoDCOJKH +7yiEezMqr9VbDrEmYSmCO4KheqwC0T06lOLIQC4nnwKopNO/PN21mirCLHvfo01O +H/NUG1LZifOwAURbiFNF8Z3+L0csdhD8JnO+1nphjDHr0Xn9Vff2Vej030pRI/9C +SJ2s5fZUq8jK4n06sKCbqA4pekpbKyhRy3iuITKv7Nxesl4T/uhkc9ccpAvbuD1E +NczN1IH05jiMUMM3lC1A9TSvxSqflqI46TZU3qWLa9yg45kDC8Ryr39TY37LscQk +9x3WwLLkuHeUurnwAk46fSj7+FCKTGTdPVw8v7XbvNOTDf8vJ3o2PxX1uh2P2BHs +9L+E1P96oMkiEy1ug7gu8V+mKu5PAuD3QFzU3XCB93DpDakgtznRRXCkAQARAQAB +tBtSZWRpcyBMYWJzIDxyZWRpc0ByZWRpcy5pbz6JAk4EEwEKADgWIQR5sNCo1OBf +WO913l22qvOUq0evbgUCX0VaKgIbAwULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAK +CRC2qvOUq0evbpZaD/4rN7xesDcAG4ec895Fqzk3w74W1/K9lzRKZDwRsAqI+sAz +ZXvQMtWSxLfF2BITxLnHJXK5P+2Y6XlNgrn1GYwC1MsARyM9e1AzwDJHcXFkHU82 +2aALIMXGtiZs/ejFh9ZSs5cgRlxBSqot/uxXm9AvKEByhmIeHPZse/Rc6e3qa57v +OhCkVZB4ETx5iZrgA+gdmS8N7MXG0cEu5gJLacG57MHi+2WMOCU9Xfj6+Pqhw3qc +E6lBinKcA/LdgUJ1onK0JCnOG1YVHjuFtaisfPXvEmUBGaSGE6lM4J7lass/OWps +Dd+oHCGI+VOGNx6AiBDZG8mZacu0/7goRnOTdljJ93rKkj31I+6+j4xzkAC0IXW8 +LAP9Mmo9TGx0L5CaljykhW6z/RK3qd7dAYE+i7e8J9PuQaGG5pjFzuW4vY45j0V/ +9JUMKDaGbU5choGqsCpAVtAMFfIBj3UQ5LCt5zKyescKCUb9uifOLeeQ1vay3R9o +eRSD52YpRBpor0AyYxcLur/pkHB0sSvXEfRZENQTohpY71rHSaFd3q1Hkk7lZl95 +m24NRlrJnjFmeSPKP22vqUYIwoGNUF/D38UzvqHD8ltTPgkZc+Y+RRbVNqkQYiwW +GH/DigNB8r2sdkt+1EUu+YkYosxtzxpxxpYGKXYXx0uf+EZmRqRt/OSHKnf2GLkC +DQRfRVoqARAApffsrDNo4JWjX3r6wHJJ8IpwnGEJ2IzGkg8f1Ofk2uKrjkII/oIx +sXC3EeauC1Plhs+m9GP/SPY0LXmZ0OzGD/S1yMpmBeBuXJ0gONDo+xCg1pKGshPs +75XzpbggSOtEYR5S8Z46yCu7TGJRXBMGBhDgCfPVFBBNsnG5B0EeHXM4trqqlN6d +PAcwtLnKPz/Z+lloKR6bFXvYGuN5vjRXjcVYZLLCEwdV9iY5/Opqk9sCluasb3t/ +c2gcsLWWFnNz2desvb/Y4ADJzxY+Um848DSR8IcdoArSsqmcCTiYvYC/UU7XPVNk +Jrx/HwgTVYiLGbtMB3u3fUpHW8SabdHc4xG3sx0LeIvl+JwHgx7yVhNYJEyOQfnE +mfS97x6surXgTVLbWVjXKIJhoWnWbLP4NkBc27H4qo8wM/IWH4SSXYNzFLlCDPnw +vQZSel21qxdqAWaSxkKcymfMS4nVDhVj0jhlcTY3aZcHMjqoUB07p5+laJr9CCGv +0Y0j0qT2aUO22A3kbv6H9c1Yjv8EI7eNz07aoH1oYU6ShsiaLfIqPfGYb7LwOFWi +PSl0dCY7WJg2H6UHsV/y2DwRr/3oH0a9hv/cvcMneMi3tpIkRwYFBPXEsIcoD9xr +RI5dp8BBdO/Nt+puoQq9oyialWnQK5+AY7ErW1yxjgie4PQ+XtN+85UAEQEAAYkC +NgQYAQoAIBYhBHmw0KjU4F9Y73XeXbaq85SrR69uBQJfRVoqAhsMAAoJELaq85Sr +R69uoV0QAIvlxAHYTjvH1lt5KbpVGs5gwIAnCMPxmaOXcaZ8V0Z1GEU+/IztwV+N +MYCBv1tYa7OppNs1pn75DhzoNAi+XQOVvU0OZgVJutthZe0fNDFGG9B4i/cxRscI +Ld8TPQQNiZPBZ4ubcxbZyBinE9HsYUM49otHjsyFZ0GqTpyne+zBf1GAQoekxlKo +tWSkkmW0x4qW6eiAmyo5lPS1bBjvaSc67i+6Bv5QkZa0UIkRqAzKN4zVvc2FyILz ++7wVLCzWcXrJt8dOeS6Y/Fjbhb6m7dtapUSETAKu6wJvSd9ndDUjFHD33NQIZ/nL +WaPbn01+e/PHtUDmyZ2W2KbcdlIT9nb2uHrruqdCN04sXkID8E2m2gYMA+TjhC0Q +JBJ9WPmdBeKH91R6wWDq6+HwOpgc/9na+BHZXMG+qyEcvNHB5RJdiu2r1Haf6gHi +Fd6rJ6VzaVwnmKmUSKA2wHUuUJ6oxVJ1nFb7Aaschq8F79TAfee0iaGe9cP+xUHL +zBDKwZ9PtyGfdBp1qNOb94sfEasWPftT26rLgKPFcroCSR2QCK5qHsMNCZL+u71w +NnTtq9YZDRaQ2JAc6VDZCcgu+dLiFxVIi1PFcJQ31rVe16+AQ9zsafiNsxkPdZcY +U9XKndQE028dGZv1E3S5BwpnikrUkWdxcYrVZ4fiNIy5I3My2yCe +=J9BD +-----END PGP PUBLIC KEY BLOCK----- +``` diff --git 
a/docs/management/security/acl.md b/docs/management/security/acl.md
new file mode 100644
index 0000000000..6b1baeed1f
--- /dev/null
+++ b/docs/management/security/acl.md
@@ -0,0 +1,576 @@
---
title: "ACL"
linkTitle: "ACL"
weight: 1
description: Redis Access Control List
aliases: [
    /topics/acl,
    /docs/manual/security/acl,
    /docs/manual/security/acl.md
]
---

The Redis ACL, short for Access Control List, is the feature that allows certain
connections to be limited in terms of the commands that can be executed and the
keys that can be accessed. The way it works is that, after connecting, a client
is required to provide a username and a valid password to authenticate. If authentication succeeds, the connection is associated with a given
user and with the limits that user has. Redis can be configured so that new
connections are already authenticated with a "default" user (this is the
default configuration). Configuring the default user makes it possible, as a side effect,
to provide only a specific subset of functionalities to connections
that are not explicitly authenticated.

In the default configuration, Redis 6 (the first version to have ACLs) works
exactly like older versions of Redis. Every new connection is
capable of calling every possible command and accessing every key, so the
ACL feature is backward compatible with old clients and applications. Also,
the old way to configure a password, using the **requirepass** configuration
directive, still works as expected. However, it now
sets a password for the default user.

The Redis `AUTH` command was extended in Redis 6, so now it is possible to
use it in the two-arguments form:

    AUTH <username> <password>

Here's an example of the old form:

    AUTH <password>

What happens is that the username used to authenticate is "default", so
just specifying the password implies that we want to authenticate against
the default user. This provides backward compatibility.

## When ACLs are useful

Before using ACLs, you may want to ask yourself what goal you want to
accomplish by implementing this layer of protection. Normally there are
two main goals that are well served by ACLs:

1. You want to improve security by restricting access to commands and keys, so that untrusted clients have no access and trusted clients have just the minimum access level to the database in order to perform the work needed. For instance, certain clients may just be able to execute read-only commands.
2. You want to improve operational safety, so that processes or humans accessing Redis are not allowed to damage the data or the configuration due to software errors or manual mistakes. For instance, there is no reason for a worker that fetches delayed jobs from Redis to be able to call the `FLUSHALL` command.

Another typical usage of ACLs is related to managed Redis instances. Redis is
often provided as a managed service, either by internal company teams that handle
the Redis infrastructure for other internal customers, or by cloud providers in a
software-as-a-service setup. In both
setups, we want to be sure that configuration commands are excluded for the
customers.

## Configure ACLs with the ACL command

ACLs are defined using a DSL (domain specific language) that describes what
a given user is allowed to do. Such rules are always applied from the
first to the last, left-to-right, because sometimes the order of the rules is
important to understand what the user is really able to do.
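To make the ordering concrete, here is a small sketch, best tried on a scratch
instance (user names and passwords are arbitrary examples): the same two rules,
applied in opposite orders, yield very different users:

    > ACL SETUSER app1 on >somepassword +@all -@dangerous
    OK
    > ACL SETUSER app2 on >somepassword -@dangerous +@all
    OK

For app1, the dangerous commands are removed after everything was added, so they
stay blocked; for app2, the trailing `+@all` re-adds them, so app2 ends up with
full access.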
By default there is a single user defined, called *default*. We
can use the `ACL LIST` command in order to check the currently active ACLs
and verify what the configuration of a freshly started, defaults-configured
Redis instance is:

    > ACL LIST
    1) "user default on nopass ~* &* +@all"

The command above reports the list of users in the same format that is
used in the Redis configuration files, by translating the current ACLs set
for the users back into their description.

The first two words in each line are "user" followed by the username. The
next words are ACL rules that describe different things. We'll show how the rules work in detail, but for now it is enough to say that the default
user is configured to be active (on), to require no password (nopass), to
access every possible key (`~*`) and Pub/Sub channel (`&*`), and to be able to
call every possible command (`+@all`).

Also, in the special case of the default user, having the *nopass* rule means
that new connections are automatically authenticated with the default user
without any explicit `AUTH` call needed.

## ACL rules

The following is the list of valid ACL rules. Certain rules are just
single words that are used in order to activate or remove a flag, or to
perform a given change to the user ACL. Other rules are char prefixes that
are concatenated with command or category names, key patterns, and
so forth.

Enable and disallow users:

* `on`: Enable the user: it is possible to authenticate as this user.
* `off`: Disallow the user: it's no longer possible to authenticate with this user; however, previously authenticated connections will still work. Note that if the default user is flagged as *off*, new connections will start as not authenticated and will require the user to send `AUTH` or `HELLO` with the AUTH option in order to authenticate in some way, regardless of the default user configuration.

Allow and disallow commands:

* `+<command>`: Add the command to the list of commands the user can call. Can be used with `|` for allowing subcommands (e.g., "+config|get").
* `-<command>`: Remove the command from the list of commands the user can call. Starting with Redis 7.0, it can be used with `|` for blocking subcommands (e.g., "-config|set").
* `+@<category>`: Add all the commands in the specified category to the list of commands the user can call, with valid categories being like @admin, @set, @sortedset, ... and so forth; see the full list by calling the `ACL CAT` command. The special category @all means all the commands, both the ones currently present in the server, and the ones that will be loaded in the future via modules.
* `-@<category>`: Like `+@<category>`, but removes the commands from the list of commands the client can call.
* `+<command>|first-arg`: Allow a specific first argument of an otherwise disabled command. It is only supported on commands with no sub-commands, and is not allowed as a negative form like -SELECT|1, only additive starting with "+". This feature is deprecated and may be removed in the future.
* `allcommands`: Alias for +@all. Note that it implies the ability to execute all the future commands loaded via the modules system.
* `nocommands`: Alias for -@all.

Allow and disallow certain keys and key permissions:

* `~<pattern>`: Add a pattern of keys that can be mentioned as part of commands. For instance `~*` allows all the keys. The pattern is a glob-style pattern like the one of `KEYS`. It is possible to specify multiple patterns.
* `%R~<pattern>`: (Available in Redis 7.0 and later) Add the specified read key pattern. This behaves similarly to the regular key pattern, but only grants permission to read from keys that match the given pattern. See [key permissions](#key-permissions) for more information.
* `%W~<pattern>`: (Available in Redis 7.0 and later) Add the specified write key pattern. This behaves similarly to the regular key pattern, but only grants permission to write to keys that match the given pattern. See [key permissions](#key-permissions) for more information.
* `%RW~<pattern>`: (Available in Redis 7.0 and later) Alias for `~<pattern>`.
* `allkeys`: Alias for `~*`.
* `resetkeys`: Flush the list of allowed key patterns. For instance the ACL `~foo:* ~bar:* resetkeys ~objects:*` will only allow the client to access keys that match the pattern `objects:*`.

Allow and disallow Pub/Sub channels:

* `&<pattern>`: (Available in Redis 6.2 and later) Add a glob-style pattern of Pub/Sub channels that can be accessed by the user. It is possible to specify multiple channel patterns. Note that pattern matching is done only for channels mentioned by `PUBLISH` and `SUBSCRIBE`, whereas `PSUBSCRIBE` requires a literal match between its channel patterns and those allowed for the user.
* `allchannels`: Alias for `&*` that allows the user to access all Pub/Sub channels.
* `resetchannels`: Flush the list of allowed channel patterns and disconnect the user's Pub/Sub clients if these are no longer able to access their respective channels and/or channel patterns.

Configure valid passwords for the user:

* `><password>`: Add this password to the list of valid passwords for the user. For example `>mypass` will add "mypass" to the list of valid passwords. This directive clears the *nopass* flag (see later). Every user can have any number of passwords.
* `<<password>`: Remove this password from the list of valid passwords. Emits an error in case the password you are trying to remove is actually not set.
* `#<hash>`: Add this SHA-256 hash value to the list of valid passwords for the user. This hash value will be compared to the hash of a password entered for an ACL user. This allows users to store hashes in the `acl.conf` file rather than storing cleartext passwords. Only SHA-256 hash values are accepted, as the password hash must be 64 characters and only contain lowercase hexadecimal characters.
* `!<hash>`: Remove this hash value from the list of valid passwords. This is useful when you do not know the password specified by the hash value but would like to remove the password from the user.
* `nopass`: All the set passwords of the user are removed, and the user is flagged as requiring no password: it means that every password will work against this user. If this directive is used for the default user, every new connection will be immediately authenticated with the default user without any explicit AUTH command required. Note that the *resetpass* directive will clear this condition.
* `resetpass`: Flushes the list of allowed passwords and removes the *nopass* status. After *resetpass*, the user has no associated passwords and there is no way to authenticate without adding some password (or setting it as *nopass* later).

*Note: if a user is not flagged with nopass and has no list of valid passwords, that user is effectively impossible to use because there will be no way to log in as that user.*

Configure selectors for the user:

* `(<rule list>)`: (Available in Redis 7.0 and later) Create a new selector to match rules against. Selectors are evaluated after the user permissions, and are evaluated according to the order they are defined.
If a command matches either the user permissions or any selector, it is allowed. See [selectors](#selectors) for more information.
* `clearselectors`: (Available in Redis 7.0 and later) Delete all of the selectors attached to the user.

Reset the user:

* `reset`: Performs the following actions: resetpass, resetkeys, resetchannels, allchannels (if acl-pubsub-default is set), off, clearselectors, -@all. The user returns to the same state it had immediately after its creation.

## Create and edit user ACLs with the ACL SETUSER command

Users can be created and modified in two main ways:

1. Using the ACL command and its `ACL SETUSER` subcommand.
2. Modifying the server configuration, where users can be defined, and restarting the server. With an *external ACL file*, just call `ACL LOAD`.

In this section we'll learn how to define users using the `ACL` command.
With such knowledge, it will be trivial to do the same things via the
configuration files. Defining users in the configuration deserves its own
section and will be discussed later separately.

To start, try the simplest `ACL SETUSER` command call:

    > ACL SETUSER alice
    OK

The `ACL SETUSER` command takes the username and a list of ACL rules to apply
to the user. However, the above example did not specify any rule at all.
This will just create the user if it did not exist, using the defaults for new
users. If the user already exists, the command above will do nothing at all.

Check the status of the newly created user:

    > ACL LIST
    1) "user alice off resetchannels -@all"
    2) "user default on nopass ~* &* +@all"

The new user "alice" is:

* In the off status, so `AUTH` will not work for the user "alice".
* The user also has no passwords set.
* Cannot access any command. Note that the user is created by default without the ability to access any command, so the `-@all` in the output above could be omitted; however, `ACL LIST` attempts to be explicit rather than implicit.
* There are no key patterns that the user can access.
* There are no Pub/Sub channels that the user can access.

New users are created with restrictive permissions by default. Starting with Redis 6.2, ACL provides Pub/Sub channels access management as well. To ensure backward compatibility with version 6.0 when upgrading to Redis 6.2, new users are granted the 'allchannels' permission by default. The default can be set to `resetchannels` via the `acl-pubsub-default` configuration directive.

From Redis 7.0, the `acl-pubsub-default` value is set to `resetchannels` to restrict the channels access by default, to provide better security.
The default can be set to `allchannels` via the `acl-pubsub-default` configuration directive to be compatible with previous versions.

Such a user is completely useless. Let's try to define the user so that
it is active, has a password, and can access, with only the `GET` command,
key names starting with the string "cached:".

    > ACL SETUSER alice on >p1pp0 ~cached:* +get
    OK

Now the user can do something, but will refuse to do other things:

    > AUTH alice p1pp0
    OK
    > GET foo
    (error) NOPERM this user has no permissions to access one of the keys used as arguments
    > GET cached:1234
    (nil)
    > SET cached:1234 zap
    (error) NOPERM this user has no permissions to run the 'set' command

Things are working as expected.
In order to inspect the configuration of the
user alice (remember that user names are case sensitive), it is possible to
use the `ACL GETUSER` command: an alternative to `ACL LIST` whose output is
more suitable for computers to parse, and also more human readable.

    > ACL GETUSER alice
    1) "flags"
    2) 1) "on"
    3) "passwords"
    4) 1) "2d9c75..."
    5) "commands"
    6) "-@all +get"
    7) "keys"
    8) "~cached:*"
    9) "channels"
    10) ""
    11) "selectors"
    12) (empty array)

The `ACL GETUSER` command returns a field-value array that describes the user in more parsable terms. The output includes the set of flags, a list of key patterns, passwords, and so forth. The output is probably more readable if we use RESP3, so that it is returned as a map reply:

    > ACL GETUSER alice
    1# "flags" => 1~ "on"
    2# "passwords" => 1) "2d9c75273d72b32df726fb545c8a4edc719f0a95a6fd993950b10c474ad9c927"
    3# "commands" => "-@all +get"
    4# "keys" => "~cached:*"
    5# "channels" => ""
    6# "selectors" => (empty array)

*Note: from now on, we'll continue using the Redis default protocol, version 2.*

Using another `ACL SETUSER` command (from a different user, because alice cannot run the `ACL` command), we can add multiple patterns to the user:

    > ACL SETUSER alice ~objects:* ~items:* ~public:*
    OK
    > ACL LIST
    1) "user alice on #2d9c75... ~cached:* ~objects:* ~items:* ~public:* resetchannels -@all +get"
    2) "user default on nopass ~* &* +@all"

The user representation in memory is now as we expect it to be.

## Multiple calls to ACL SETUSER

It is very important to understand what happens when `ACL SETUSER` is called
multiple times. What is critical to know is that every `ACL SETUSER` call will
NOT reset the user, but will just apply the ACL rules to the existing user.
The user is reset only if it was not known before. In that case, a brand new
user is created with zeroed ACLs. The user cannot do anything, is
disallowed, has no passwords, and so forth. This is the best default for safety.

However, later calls will just modify the user incrementally. For instance,
the following sequence:

    > ACL SETUSER myuser +set
    OK
    > ACL SETUSER myuser +get
    OK

Will result in myuser being able to call both `GET` and `SET`:

    > ACL LIST
    1) "user default on nopass ~* &* +@all"
    2) "user myuser off resetchannels -@all +get +set"

## Command categories

Setting user ACLs by specifying all the commands one after the other is
really annoying, so instead we do things like this:

    > ACL SETUSER antirez on +@all -@dangerous >42a979... ~*

By saying +@all and -@dangerous, we included all the commands and later removed
all the commands that are tagged as dangerous inside the Redis command table.
Note that command categories **never include module commands**, with
the exception of +@all. If you say +@all, all the commands can be executed by
the user, even future commands loaded via the modules system. However, if you
use the ACL rule +@read or any other, module commands are always
excluded. This is very important because you should just trust the Redis
internal command table. Modules may expose dangerous things, and in
the case of an ACL that is just additive, that is, in the form of `+@all -...`,
you should be absolutely sure that you'll never include what you did not mean
to.

The following is a list of command categories and their meanings:

* **admin** - Administrative commands. Normal applications will never need to use
  these. Includes `REPLICAOF`, `CONFIG`, `DEBUG`, `SAVE`, `MONITOR`, `ACL`, `SHUTDOWN`, etc.
* **bitmap** - Data type: bitmaps related.
* **blocking** - Potentially blocking the connection until released by another
  command.
* **connection** - Commands affecting the connection or other connections.
  This includes `AUTH`, `SELECT`, `COMMAND`, `CLIENT`, `ECHO`, `PING`, etc.
* **dangerous** - Potentially dangerous commands (each should be considered with care for
  various reasons). This includes `FLUSHALL`, `MIGRATE`, `RESTORE`, `SORT`, `KEYS`,
  `CLIENT`, `DEBUG`, `INFO`, `CONFIG`, `SAVE`, `REPLICAOF`, etc.
* **geo** - Data type: geospatial indexes related.
* **hash** - Data type: hashes related.
* **hyperloglog** - Data type: hyperloglog related.
* **fast** - Fast O(1) commands. May loop on the number of arguments, but not the
  number of elements in the key.
* **keyspace** - Writing or reading from keys, databases, or their metadata
  in a type-agnostic way. Includes `DEL`, `RESTORE`, `DUMP`, `RENAME`, `EXISTS`, `DBSIZE`,
  `KEYS`, `EXPIRE`, `TTL`, `FLUSHALL`, etc. Commands that may modify the keyspace,
  key, or metadata will also have the `write` category. Commands that only read
  the keyspace, key, or metadata will have the `read` category.
* **list** - Data type: lists related.
* **pubsub** - Pub/Sub-related commands.
* **read** - Reading from keys (values or metadata). Note that commands that don't
  interact with keys will not have either `read` or `write`.
* **scripting** - Scripting related.
* **set** - Data type: sets related.
* **sortedset** - Data type: sorted sets related.
* **slow** - All commands that are not `fast`.
* **stream** - Data type: streams related.
* **string** - Data type: strings related.
* **transaction** - `WATCH` / `MULTI` / `EXEC` related commands.
* **write** - Writing to keys (values or metadata).

Redis can also show you a list of all categories and the exact commands each category includes using the Redis `ACL CAT` command. It can be used in two forms:

    ACL CAT                 -- Will just list all the categories available
    ACL CAT <category-name> -- Will list all the commands inside the category

Examples:

    > ACL CAT
    1) "keyspace"
    2) "read"
    3) "write"
    4) "set"
    5) "sortedset"
    6) "list"
    7) "hash"
    8) "string"
    9) "bitmap"
    10) "hyperloglog"
    11) "geo"
    12) "stream"
    13) "pubsub"
    14) "admin"
    15) "fast"
    16) "slow"
    17) "blocking"
    18) "dangerous"
    19) "connection"
    20) "transaction"
    21) "scripting"

As you can see, so far there are 21 distinct categories. Now let's check which
commands are part of the *geo* category:

    > ACL CAT geo
    1) "geohash"
    2) "georadius_ro"
    3) "georadiusbymember"
    4) "geopos"
    5) "geoadd"
    6) "georadiusbymember_ro"
    7) "geodist"
    8) "georadius"
    9) "geosearch"
    10) "geosearchstore"

Note that commands may be part of multiple categories. For example, an
ACL rule like `+@geo -@read` will result in certain geo commands being
excluded because they are read-only commands.

## Allow/block subcommands

Starting from Redis 7.0, subcommands can be allowed/blocked just like other
commands (by using the separator `|` between the command and subcommand, for
example: `+config|get` or `-config|set`).

That is true for all commands except DEBUG. In order to allow/block specific
DEBUG subcommands, see the next section.
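As a sketch of how this combines with the other rules (the user name and
password here are arbitrary examples), a user that may inspect the
configuration but never change it could be defined as:

    > ACL SETUSER configreader on >somepassword +config|get
    OK

Authenticated as this user, `CONFIG GET maxmemory` succeeds, while
`CONFIG SET maxmemory 100mb` fails with a NOPERM error.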
## Allow the first-arg of a blocked command

**Note: This feature is deprecated since Redis 7.0 and may be removed in the future.**

Sometimes the ability to exclude or include a command or a subcommand as a whole is not enough.
Many deployments may not be happy to provide the ability to execute a `SELECT` for any DB, but may
still want to be able to run `SELECT 0`.

In such a case, we could alter the ACL of a user in the following way:

    ACL SETUSER myuser -select +select|0

First, remove the `SELECT` command, and then add the allowed
first-arg. Note that **it is not possible to do the reverse** since first-args
can only be added, not excluded. It is safer to specify all the first-args
that are valid for some user, since it is possible that
new first-args may be added in the future.

Another example:

    ACL SETUSER myuser -debug +debug|digest

Note that first-arg matching may add some performance penalty, although the penalty is hard to measure even with synthetic benchmarks. The
additional CPU cost is only paid when such commands are called, and not when
other commands are called.

It is possible to use this mechanism in order to allow subcommands in Redis
versions prior to 7.0 (see the previous section).

## +@all VS -@all

In the previous section, it was observed how it is possible to define command
ACLs based on adding/removing single commands.

## Selectors

Starting with Redis 7.0, Redis supports adding multiple sets of rules that are evaluated independently of each other.
These secondary sets of permissions are called selectors, and they are added by wrapping a set of rules within parentheses.
In order to execute a command, either the root permissions (rules defined outside of parentheses) or any of the selectors (rules defined inside parentheses) must match the given command.
Internally, the root permissions are checked first, followed by selectors in the order they were added.

For example, consider a user with the ACL rules `+GET ~key1 (+SET ~key2)`.
This user is able to execute `GET key1` and `SET key2 hello`, but not `GET key2` or `SET key1 world`.

Unlike the user's root permissions, selectors cannot be modified after they are added.
Instead, selectors can be removed with the `clearselectors` keyword, which removes all of the added selectors.
Note that `clearselectors` does not remove the root permissions.

## Key permissions

Starting with Redis 7.0, key patterns can also be used to define how a command is able to touch a key.
This is achieved through rules that define key permissions.
The key permission rules take the form of `%(<permission>)~<pattern>`.
Permissions are defined as individual characters that map to the following key permissions:

* W (Write): The data stored within the key may be updated or deleted.
* R (Read): User-supplied data from the key is processed, copied or returned. Note that this does not include metadata such as size information (example `STRLEN`), type information (example `TYPE`) or information about whether a value exists within a collection (example `SISMEMBER`).

Permissions can be composed together by specifying multiple characters.
Specifying the permission as 'RW' is considered full access and is analogous to just passing in `~<pattern>`.

For a concrete example, consider a user with ACL rules `+@all ~app1:* (+@read ~app2:*)`.
This user has full access on `app1:*` and read-only access on `app2:*`.
However, some commands support reading data from one key, doing some transformation, and storing it into another key.
One such command is the `COPY` command, which copies the data from the source key into the destination key.
The example set of ACL rules is unable to handle a request copying data from `app2:user` into `app1:user`, since neither the root permission nor the selector fully matches the command.
However, using key permissions you can define a set of ACL rules that can handle this request: `+@all ~app1:* %R~app2:*`.
The first pattern is able to match `app1:user` and the second pattern is able to match `app2:user`.

Which type of permission is required for a command is documented through [key specifications](/topics/key-specs#logical-operation-flags).
The type of permission is based on the key's logical operation flags.
The insert, update, and delete flags map to the write key permission.
The access flag maps to the read key permission.
If the key has no logical operation flags, such as `EXISTS`, the user still needs either key read or key write permissions to execute the command.

Note: Side channels to accessing user data are ignored when it comes to evaluating whether read permissions are required to execute a command.
This means that some write commands that return metadata about the modified key only require write permission on the key to execute.
For example, consider the following two commands:

* `LPUSH key1 data`: modifies "key1" but only returns metadata about it (the size of the list after the push), so the command only requires write permission on "key1" to execute.
* `LPOP key2`: modifies "key2" but also returns data from it (the leftmost item in the list), so the command requires both read and write permission on "key2" to execute.

If an application needs to make sure no data is accessed from a key, including side channels, it's recommended to not provide any access to the key.

## How passwords are stored internally

Redis internally stores passwords hashed with SHA256. If you set a password
and check the output of `ACL LIST` or `ACL GETUSER`, you'll see a long hex
string that looks pseudo-random. Here is an example, because in the previous
examples, for the sake of brevity, the long hex string was trimmed:

    > ACL GETUSER default
    1) "flags"
    2) 1) "on"
    3) "passwords"
    4) 1) "2d9c75273d72b32df726fb545c8a4edc719f0a95a6fd993950b10c474ad9c927"
    5) "commands"
    6) "+@all"
    7) "keys"
    8) "~*"
    9) "channels"
    10) "&*"
    11) "selectors"
    12) (empty array)

Using SHA256 provides the ability to avoid storing the password in clear text
while still allowing for a very fast `AUTH` command, which is a very important
feature of Redis and is coherent with what clients expect from Redis.

However, ACL *passwords* are not really passwords. They are shared secrets
between the server and the client, because the password is
not an authentication token used by a human being. For instance:

* There are no length limits; the password will just be memorized in some client software. There is no human that needs to recall a password in this context.
* The ACL password does not protect any other thing. For example, it will never be the password for some email account.
* Often, when you are able to access the hashed password itself, by having full access to the Redis commands of a given server, or corrupting the system itself, you already have access to what the password is protecting: the Redis instance stability and the data it contains.
For this reason, slowing down the password authentication, in order to use an
algorithm that uses time and space to make password cracking hard,
is a very poor choice. What we suggest instead is to generate strong
passwords, so that nobody will be able to crack them using a
dictionary or a brute force attack, even if they have the hash. To do so, there is a special ACL
command `ACL GENPASS` that generates passwords using the system cryptographic pseudorandom
generator:

    > ACL GENPASS
    "dd721260bfe1b3d9601e7fbab36de6d04e2e67b0ef1c53de59d45950db0dd3cc"

The command outputs a 32-byte (256-bit) pseudorandom string converted to a
64-character hexadecimal string. This is long enough to avoid attacks and short
enough to be easy to manage, cut & paste, store, and so forth. This is what
you should use in order to generate Redis passwords.

## Use an external ACL file

There are two ways to store users inside the Redis configuration:

1. Users can be specified directly inside the `redis.conf` file.
2. It is possible to specify an external ACL file.

The two methods are *mutually incompatible*, so Redis will ask you to use one
or the other. Specifying users inside `redis.conf` is
good for simple use cases. When there are multiple users to define, in a
complex environment, we recommend you use the ACL file instead.

The format used inside `redis.conf` and in the external ACL file is exactly
the same, so it is trivial to switch from one to the other, and is
the following:

    user <username> ... acl rules ...

For instance:

    user worker +@list +@connection ~jobs:* on >ffa9203c493aa99

When you want to use an external ACL file, you are required to specify
the configuration directive called `aclfile`, like this:

    aclfile /etc/redis/users.acl

When you are just specifying a few users directly inside the `redis.conf`
file, you can use `CONFIG REWRITE` in order to store the new user configuration
inside the file by rewriting it.

The external ACL file, however, is more powerful. You can do the following:

* Use `ACL LOAD` if you modified the ACL file manually and you want Redis to reload the new configuration. Note that this command is able to load the file *only if all the users are correctly specified*. Otherwise, an error is reported to the user, and the old configuration will remain valid.
* Use `ACL SAVE` to save the current ACL configuration to the ACL file.

Note that `CONFIG REWRITE` does not also trigger `ACL SAVE`. When you use
an ACL file, the configuration and the ACLs are handled separately.

## ACL rules for Sentinel and Replicas

If you don't want to provide Redis replicas and Redis Sentinel instances with
full access to your Redis instances, the following is the set of commands
that must be allowed in order for everything to work correctly.

For Sentinel, allow the user to access the following commands both in the master and replica instances:

* AUTH, CLIENT, SUBSCRIBE, SCRIPT, PUBLISH, PING, INFO, MULTI, SLAVEOF, CONFIG, EXEC.

Sentinel does not need to access any key in the database but does use Pub/Sub, so the ACL rule would be the following (note: `AUTH` is not needed since it is always allowed):

    ACL SETUSER sentinel-user on >somepassword allchannels +multi +slaveof +ping +exec +subscribe +config|rewrite +role +publish +info +client|setname +client|kill +script|kill

Redis replicas require the following commands to be allowed on the master instance:

* PSYNC, REPLCONF, PING

No keys need to be accessed, so this translates to the following rules:

    ACL SETUSER replica-user on >somepassword +psync +replconf +ping

Note that you don't need to configure the replicas to allow the master to be able to execute any set of commands. The master is always authenticated as the root user from the point of view of replicas.
diff --git a/docs/management/security/encryption.md b/docs/management/security/encryption.md
new file mode 100644
index 0000000000..33aa813a44
--- /dev/null
+++ b/docs/management/security/encryption.md
@@ -0,0 +1,132 @@
---
title: "TLS"
linkTitle: "TLS"
weight: 1
description: Redis TLS support
aliases: [
    /topics/encryption,
    /docs/manual/security/encryption,
    /docs/manual/security/encryption.md
]
---

SSL/TLS is supported by Redis starting with version 6 as an optional feature
that needs to be enabled at compile time.

## Getting Started

### Building

To build with TLS support, you'll need the OpenSSL development libraries (e.g.,
`libssl-dev` on Debian/Ubuntu).

Build Redis with the following command:

```sh
make BUILD_TLS=yes
```

### Tests

To run the Redis test suite with TLS, you'll need TLS support for TCL (e.g., the
`tcl-tls` package on Debian/Ubuntu).

1. Run `./utils/gen-test-certs.sh` to generate a root CA and a server
   certificate.

2. Run `./runtest --tls` or `./runtest-cluster --tls` to run Redis and Redis
   Cluster tests in TLS mode.

### Running manually

To manually run a Redis server with TLS mode (assuming `gen-test-certs.sh` was
invoked so sample certificates/keys are available):

    ./src/redis-server --tls-port 6379 --port 0 \
        --tls-cert-file ./tests/tls/redis.crt \
        --tls-key-file ./tests/tls/redis.key \
        --tls-ca-cert-file ./tests/tls/ca.crt

To connect to this Redis server with `redis-cli`:

    ./src/redis-cli --tls \
        --cert ./tests/tls/redis.crt \
        --key ./tests/tls/redis.key \
        --cacert ./tests/tls/ca.crt

### Certificate configuration

In order to support TLS, Redis must be configured with an X.509 certificate and a
private key. In addition, it is necessary to specify a CA certificate bundle
file or path to be used as a trusted root when validating certificates. To
support DH-based ciphers, a DH params file can also be configured. For example:

```
tls-cert-file /path/to/redis.crt
tls-key-file /path/to/redis.key
tls-ca-cert-file /path/to/ca.crt
tls-dh-params-file /path/to/redis.dh
```

### TLS listening port

The `tls-port` configuration directive enables accepting SSL/TLS connections on
the specified port. This is **in addition** to listening on `port` for TCP
connections, so it is possible to access Redis on different ports using TLS and
non-TLS connections simultaneously.

You may specify `port 0` to disable the non-TLS port completely.
To enable only
TLS on the default Redis port, use:

```
port 0
tls-port 6379
```

### Client certificate authentication

By default, Redis uses mutual TLS and requires clients to authenticate with a
valid certificate (authenticated against trusted root CAs specified by
`ca-cert-file` or `ca-cert-dir`).

You may use `tls-auth-clients no` to disable client authentication.

### Replication

A Redis master server handles connecting clients and replica servers in the same
way, so the above `tls-port` and `tls-auth-clients` directives apply to
replication links as well.

On the replica server side, it is necessary to specify `tls-replication yes` to
use TLS for outgoing connections to the master.

### Cluster

When Redis Cluster is used, use `tls-cluster yes` in order to enable TLS for the
cluster bus and cross-node connections.

### Sentinel

Sentinel inherits its networking configuration from the common Redis
configuration, so all of the above applies to Sentinel as well.

When connecting to master servers, Sentinel will use the `tls-replication`
directive to determine if a TLS or non-TLS connection is required.

In addition, the very same `tls-replication` directive will determine whether Sentinel's
own port, which accepts connections from other Sentinels, will support TLS as well. That is,
Sentinel will be configured with `tls-port` if and only if `tls-replication` is enabled.

### Additional configuration

Additional TLS configuration is available to control the choice of TLS protocol
versions, ciphers and cipher suites, etc. Please consult the self-documented
`redis.conf` for more information.

### Performance considerations

TLS adds a layer to the communication stack, with overheads due to writing/reading to/from an SSL connection, encryption/decryption and integrity checks. Consequently, using TLS results in a decrease of the achievable throughput per Redis instance (for more information refer to this [discussion](https://github.com/redis/redis/issues/7595)).

### Limitations

I/O threading is currently not supported with TLS.
diff --git a/docs/management/sentinel.md b/docs/management/sentinel.md
new file mode 100644
index 0000000000..35718e3692
--- /dev/null
+++ b/docs/management/sentinel.md
@@ -0,0 +1,1240 @@
---
title: "High availability with Redis Sentinel"
linkTitle: "High availability with Sentinel"
weight: 4
description: High availability for non-clustered Redis
aliases: [
    /topics/sentinel,
    /docs/manual/sentinel,
    /docs/manual/sentinel.md
]
---

Redis Sentinel provides high availability for Redis when not using [Redis Cluster](/docs/manual/scaling).

Redis Sentinel also performs other collateral tasks such as monitoring and
notification, and acts as a configuration provider for clients.

This is the full list of Sentinel capabilities at a macroscopic level (i.e. the *big picture*):

* **Monitoring**. Sentinel constantly checks if your master and replica instances are working as expected.
* **Notification**. Sentinel can notify the system administrator, or other computer programs, via an API, that something is wrong with one of the monitored Redis instances.
* **Automatic failover**. If a master is not working as expected, Sentinel can start a failover process where a replica is promoted to master, the other additional replicas are reconfigured to use the new master, and the applications using the Redis server are informed about the new address to use when connecting.
* **Configuration provider**.
Sentinel acts as a source of authority for client service discovery: clients connect to Sentinels in order to ask for the address of the current Redis master responsible for a given service. If a failover occurs, Sentinels will report the new address.

## Sentinel as a distributed system

Redis Sentinel is a distributed system:

Sentinel itself is designed to run in a configuration where there are multiple Sentinel processes cooperating together. The advantages of having multiple Sentinel processes cooperating are the following:

1. Failure detection is performed when multiple Sentinels agree on the fact that a given master is no longer available. This lowers the probability of false positives.
2. Sentinel works even if not all the Sentinel processes are working, making the system robust against failures. There is no fun in having a failover system which is itself a single point of failure, after all.

The sum of Sentinels, Redis instances (masters and replicas) and clients
connecting to Sentinel and Redis also forms a larger distributed system with
specific properties. In this document, concepts will be introduced gradually,
starting from the basic information needed in order to understand the basic
properties of Sentinel, up to the more complex (and optional) information needed
in order to understand how exactly Sentinel works.

## Sentinel quick start

### Obtaining Sentinel

The current version of Sentinel is called **Sentinel 2**. It is a rewrite of
the initial Sentinel implementation using stronger and simpler-to-predict
algorithms (that are explained in this documentation).

A stable release of Redis Sentinel has shipped with Redis since version 2.8.

New developments are performed in the *unstable* branch, and new features
are sometimes backported into the latest stable branch as soon as they are
considered to be stable.

Redis Sentinel version 1, shipped with Redis 2.6, is deprecated and should not be used.

### Running Sentinel

If you are using the `redis-sentinel` executable (or if you have a symbolic
link with that name to the `redis-server` executable) you can run Sentinel
with the following command line:

    redis-sentinel /path/to/sentinel.conf

Otherwise you can directly use the `redis-server` executable, starting it in
Sentinel mode:

    redis-server /path/to/sentinel.conf --sentinel

Both ways work the same.

However, **it is mandatory** to use a configuration file when running Sentinel, as this file will be used by the system in order to save the current state that will be reloaded in case of restarts. Sentinel will simply refuse to start if no configuration file is given or if the configuration file path is not writable.

Sentinels by default run **listening for connections to TCP port 26379**, so
for Sentinels to work, port 26379 of your servers **must be open** to receive
connections from the IP addresses of the other Sentinel instances.
Otherwise Sentinels can't talk and can't agree about what to do, so failover
will never be performed.

### Fundamental things to know about Sentinel before deploying

1. You need at least three Sentinel instances for a robust deployment.
2. The three Sentinel instances should be placed into computers or virtual machines that are believed to fail in an independent way: for example, different physical servers, or virtual machines executed in different availability zones.
3. The Sentinel + Redis distributed system does not guarantee that acknowledged writes are retained during failures, since Redis uses asynchronous replication. However, there are ways to deploy Sentinel that limit the window for losing writes to certain moments, while there are other, less secure, ways to deploy it.
4. You need Sentinel support in your clients. Most popular client libraries have Sentinel support, but not all.
5. No HA setup is safe if you don't test it from time to time in development environments, or even better, if you can, in production environments, to verify that it works. You may have a misconfiguration that will become apparent only when it's too late (at 3am when your master stops working).
6. **Sentinel, Docker, or other forms of Network Address Translation or Port Mapping should be mixed with care**: Docker performs port remapping, breaking Sentinel auto-discovery of other Sentinel processes and the list of replicas for a master. Check the [section about _Sentinel and Docker_](#sentinel-docker-nat-and-possible-issues) later in this document for more information.

### Configuring Sentinel

The Redis source distribution contains a file called `sentinel.conf`
that is a self-documented example configuration file you can use to
configure Sentinel; however, a typical minimal configuration file looks like the
following:

    sentinel monitor mymaster 127.0.0.1 6379 2
    sentinel down-after-milliseconds mymaster 60000
    sentinel failover-timeout mymaster 180000
    sentinel parallel-syncs mymaster 1

    sentinel monitor resque 192.168.1.3 6380 4
    sentinel down-after-milliseconds resque 10000
    sentinel failover-timeout resque 180000
    sentinel parallel-syncs resque 5

You only need to specify the masters to monitor, giving each separate
master (that may have any number of replicas) a different name. There is no
need to specify replicas, which are auto-discovered. Sentinel will update the
configuration automatically with additional information about replicas (in
order to retain the information in case of restart). The configuration is
also rewritten every time a replica is promoted to master during a failover
and every time a new Sentinel is discovered.

The example configuration above basically monitors two sets of Redis
instances, each composed of a master and an undefined number of replicas.
One set of instances is called `mymaster`, and the other `resque`.

The meaning of the arguments of `sentinel monitor` statements is the following:

    sentinel monitor <master-name> <ip> <port> <quorum>

For the sake of clarity, let's check line by line what the configuration
options mean:

The first line is used to tell Redis to monitor a master called *mymaster*,
that is at address 127.0.0.1 and port 6379, with a quorum of 2. Everything
is pretty obvious except the **quorum** argument:

* The **quorum** is the number of Sentinels that need to agree about the fact the master is not reachable, in order to really mark the master as failing, and eventually start a failover procedure if possible.
* However, **the quorum is only used to detect the failure**. In order to actually perform a failover, one of the Sentinels needs to be elected leader for the failover and be authorized to proceed. This only happens with the vote of the **majority of the Sentinel processes**.
So for example if you have 5 Sentinel processes, and the quorum for a given
master is set to the value of 2, this is what happens:

* If two Sentinels agree at the same time about the master being unreachable, one of the two will try to start a failover.
* If there are at least a total of three Sentinels reachable, the failover will be authorized and will actually start.

In practical terms this means that during failures **Sentinel never starts a failover if the majority of Sentinel processes are unable to talk** (aka no failover in the minority partition).

### Other Sentinel options

The other options are almost always in the form:

    sentinel <option_name> <master_name> <option_value>

And are used for the following purposes:

* `down-after-milliseconds` is the time in milliseconds an instance should not
be reachable (either it does not reply to our PINGs or it is replying with an
error) for a Sentinel to start thinking it is down.
* `parallel-syncs` sets the number of replicas that can be reconfigured to use
the new master after a failover at the same time. The lower the number, the
more time it will take for the failover process to complete; however, if the
replicas are configured to serve old data, you may not want all the replicas to
re-synchronize with the master at the same time. While the replication
process is mostly non-blocking for a replica, there is a moment when it stops in order
to load the bulk data from the master. You may want to make sure only one replica
at a time is not reachable by setting this option to the value of 1.

Additional options are described in the rest of this document and
documented in the example `sentinel.conf` file shipped with the Redis
distribution.

Configuration parameters can be modified at runtime:

* Master-specific configuration parameters are modified using `SENTINEL SET`.
* Global configuration parameters are modified using `SENTINEL CONFIG SET`.

See the [_Reconfiguring Sentinel at runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.

### Example Sentinel deployments

Now that you know the basic information about Sentinel, you may wonder where
you should place your Sentinel processes, how many Sentinel processes you need,
and so forth. This section shows a few example deployments.

We use ASCII art in order to show you configuration examples in a *graphical*
format; this is what the different symbols mean:

    +--------------------+
    | This is a computer |
    | or VM that fails   |
    | independently. We  |
    | call it a "box"    |
    +--------------------+

We write inside the boxes what they are running:

    +-------------------+
    | Redis master M1   |
    | Redis Sentinel S1 |
    +-------------------+

Different boxes are connected by lines, to show that they are able to talk:

    +-------------+               +-------------+
    | Sentinel S1 |---------------| Sentinel S2 |
    +-------------+               +-------------+

Network partitions are shown as interrupted lines using slashes:

    +-------------+               +-------------+
    | Sentinel S1 |------ // -----| Sentinel S2 |
    +-------------+               +-------------+

Also note that:

* Masters are called M1, M2, M3, ..., Mn.
* Replicas are called R1, R2, R3, ..., Rn (R stands for *replica*).
* Sentinels are called S1, S2, S3, ..., Sn.
* Clients are called C1, C2, C3, ..., Cn.
* When an instance changes role because of Sentinel actions, we put it inside square brackets, so [M1] means an instance that is now a master because of Sentinel intervention.
Note that we will never show **setups where just two Sentinels are used**, since
Sentinels always need **to talk with the majority** in order to start a
failover.

#### Example 1: just two Sentinels, DON'T DO THIS

    +----+         +----+
    | M1 |---------| R1 |
    | S1 |         | S2 |
    +----+         +----+

    Configuration: quorum = 1

* In this setup, if the master M1 fails, R1 will be promoted since the two Sentinels can reach agreement about the failure (obviously with quorum set to 1) and can also authorize a failover because the majority is two. So it could superficially appear to work; however, check the next points to see why this setup is broken.
* If the box where M1 is running stops working, S1 also stops working. The Sentinel running in the other box, S2, will not be able to authorize a failover, so the system will become unavailable.

Note that a majority is needed in order to order different failovers, and later propagate the latest configuration to all the Sentinels. Also note that the ability to failover in a single side of the above setup, without any agreement, would be very dangerous:

    +----+           +------+
    | M1 |----//-----| [M1] |
    | S1 |           | S2   |
    +----+           +------+

In the above configuration we created two masters (assuming S2 could failover
without authorization) in a perfectly symmetrical way. Clients may write
indefinitely to both sides, and there is no way to understand, when the
partition heals, which configuration is the right one, in order to prevent
a *permanent split brain condition*.

So please **deploy at least three Sentinels in three different boxes** always.

#### Example 2: basic setup with three boxes

This is a very simple setup, that has the advantage of being simple to tune
for additional safety. It is based on three boxes, each box running both
a Redis process and a Sentinel process.

           +----+
           | M1 |
           | S1 |
           +----+
              |
    +----+    |    +----+
    | R2 |----+----| R3 |
    | S2 |         | S3 |
    +----+         +----+

    Configuration: quorum = 2

If the master M1 fails, S2 and S3 will agree about the failure and will
be able to authorize a failover, making clients able to continue.

In every Sentinel setup, as Redis uses asynchronous replication, there is
always the risk of losing some writes because a given acknowledged write
may not be able to reach the replica which is promoted to master. However in
the above setup there is a higher risk due to clients being partitioned away
with an old master, like in the following picture:

             +----+
             | M1 |
             | S1 | <- C1 (writes will be lost)
             +----+
                |
                /
                /
    +------+    |    +----+
    | [M2] |----+----| R3 |
    | S2   |         | S3 |
    +------+         +----+

In this case a network partition isolated the old master M1, so the
replica R2 is promoted to master. However clients, like C1, that are
in the same partition as the old master, may continue to write data
to the old master. This data will be lost forever since, when the partition
heals, the master will be reconfigured as a replica of the new master,
discarding its data set.

This problem can be mitigated using the following Redis replication
feature, which allows a master to stop accepting writes if it detects that
it is no longer able to transfer its writes to the specified number of replicas:

    min-replicas-to-write 1
    min-replicas-max-lag 10

With the above configuration (please see the self-commented `redis.conf` example in the Redis distribution for more information) a Redis instance, when acting as a master, will stop accepting writes if it can't write to at least 1 replica. Since replication is asynchronous, *not being able to write* actually means that the replica is either disconnected, or has not been sending asynchronous acknowledgments for more than the specified `max-lag` number of seconds.

Using this configuration, the old Redis master M1 in the above example will become unavailable after 10 seconds. When the partition heals, the Sentinel configuration will converge to the new one, the client C1 will be able to fetch a valid configuration and will continue with the new master.

However there is no free lunch. With this refinement, if the two replicas are
down, the master will stop accepting writes. It's a trade-off.

#### Example 3: Sentinel in the client boxes

Sometimes we have only two Redis boxes available, one for the master and
one for the replica. The configuration in Example 2 is not viable in
that case, so we can resort to the following, where Sentinels are placed
where clients are:

    +----+         +----+
    | M1 |----+----| R1 |
    |    |    |    |    |
    +----+    |    +----+
              |
    +---------+---------+
    |         |         |
    |         |         |
    +----+  +----+  +----+
    | C1 |  | C2 |  | C3 |
    | S1 |  | S2 |  | S3 |
    +----+  +----+  +----+

    Configuration: quorum = 2

In this setup, the point of view of the Sentinels is the same as that of the clients: if
a master is reachable by the majority of the clients, it is fine.
C1, C2, C3 here are generic clients; it does not mean that C1 identifies
a single client connected to Redis. It is more likely something like
an application server, a Rails app, or something like that.

If the box where M1 and S1 are running fails, the failover will happen
without issues, however it is easy to see that different network partitions
will result in different behaviors. For example, Sentinel will not be able
to fail over if the network between the clients and the Redis servers is
disconnected, since the Redis master and replica will both be unavailable.

Note that if C3 gets partitioned with M1 (hardly possible with
the network described above, but more likely possible with different
layouts, or because of failures at the software layer), we have a similar
issue as described in Example 2, with the difference that here we have
no way to break the symmetry, since there is just a replica and a master, so
the master can't stop accepting queries when it is disconnected from its replica;
otherwise the master would never be available during replica failures.

So this is a valid setup, but the setup in Example 2 has advantages
such as the HA system of Redis running in the same boxes as Redis itself,
which may be simpler to manage, and the ability to put a bound on the amount
of time a master in the minority partition can receive writes.

#### Example 4: Sentinel client side with less than three clients

The setup described in Example 3 cannot be used if there are fewer than
three boxes on the client side (for example three web servers).
+In this case we need to resort to a mixed setup like the following:
+
+                +----+         +----+
+                | M1 |----+----| R1 |
+                | S1 |    |    | S2 |
+                +----+    |    +----+
+                          |
+                   +------+-----+
+                   |            |
+                   |            |
+                +----+        +----+
+                | C1 |        | C2 |
+                | S3 |        | S4 |
+                +----+        +----+
+
+                Configuration: quorum = 3
+
+This is similar to the setup in Example 3, but here we run four Sentinels
+in the four boxes we have available. If the master M1 becomes unavailable,
+the other three Sentinels will perform the failover.
+
+In theory this setup also works if you remove the box where C2 and S4 are
+running and set the quorum to 2. However it is unlikely that we want HA on
+the Redis side without also having high availability in our application layer.
+
+### Sentinel, Docker, NAT, and possible issues
+
+Docker uses a technique called port mapping: programs running inside Docker
+containers may be exposed with a different port compared to the one the
+program believes to be using. This is useful in order to run multiple
+containers using the same ports, at the same time, on the same server.
+
+Docker is not the only software system where this happens; there are other
+Network Address Translation setups where ports, and sometimes also IP
+addresses, may be remapped.
+
+Remapping ports and addresses creates issues with Sentinel in two ways:
+
+1. Sentinel auto-discovery of other Sentinels no longer works, since it is based on *hello* messages where each Sentinel announces the IP address and port at which it is listening for connections. However Sentinels have no way to understand that an address or port is remapped, so they announce information that is not correct for other Sentinels to connect.
+2. Replicas are listed in the `INFO` output of a Redis master in a similar way: the address is detected by the master by checking the remote peer of the TCP connection, while the port is advertised by the replica itself during the handshake; however the port may be wrong for the same reason exposed in point 1.
+
+Since Sentinels auto-detect replicas using the master's `INFO` output, the
+detected replicas will not be reachable, and Sentinel will never be able to
+fail over the master, since there are no good replicas from the point of view
+of the system. So there is currently no way to monitor with Sentinel a set of
+master and replica instances deployed with Docker, **unless you instruct
+Docker to map ports 1:1**.
+
+For the first problem, in case you want to run a set of Sentinel
+instances using Docker with forwarded ports (or any other NAT setup where
+ports are remapped), you can use the following two Sentinel configuration
+directives in order to force Sentinel to announce a specific IP address and
+port:
+
+    sentinel announce-ip <ip>
+    sentinel announce-port <port>
+
+Note that Docker has the ability to run in *host networking mode* (check the `--net=host` option for more information). This should create no issues since ports are not remapped in this setup.
+
+### IP Addresses and DNS names
+
+Older versions of Sentinel did not support host names and required IP addresses to be specified everywhere.
+Starting with version 6.2, Sentinel has *optional* support for host names.
+
+**This capability is disabled by default. If you're going to enable DNS/hostnames support, please note:**
+
+1. The name resolution configuration on your Redis and Sentinel nodes must be reliable and be able to resolve addresses quickly. Unexpected delays in address resolution may have a negative impact on Sentinel.
+2. You should use hostnames everywhere and avoid mixing hostnames and IP addresses. To do that, use `replica-announce-ip <hostname>` and `sentinel announce-ip <hostname>` for all Redis and Sentinel instances, respectively.
+
+Enabling the `resolve-hostnames` global configuration allows Sentinel to accept host names:
+
+* As part of a `sentinel monitor` command
+* As a replica address, if the replica uses a host name value for `replica-announce-ip`
+
+Sentinel will accept host names as valid inputs and resolve them, but will still refer to IP addresses when announcing an instance, updating configuration files, etc.
+
+Enabling the `announce-hostnames` global configuration makes Sentinel use host names instead. This affects replies to clients, values written in configuration files, the `REPLICAOF` command issued to replicas, etc.
+
+This behavior may not be compatible with all Sentinel clients, which may explicitly expect an IP address.
+
+Using host names may be useful when clients use TLS to connect to instances and require a name rather than an IP address in order to perform certificate ASN matching.
+
+## A quick tutorial
+
+In the next sections of this document, all the details about the [_Sentinel API_](#sentinel-api),
+configuration and semantics will be covered incrementally. However, for people
+who want to play with the system ASAP, this section is a tutorial that shows
+how to configure and interact with 3 Sentinel instances.
+
+Here we assume that the instances are executed at ports 5000, 5001 and 5002.
+We also assume that you have a running Redis master at port 6379 with a
+replica running at port 6380. We will use the IPv4 loopback address 127.0.0.1
+everywhere during the tutorial, assuming you are running the simulation
+on your personal computer.
+
+The three Sentinel configuration files should look like the following:
+
+    port 5000
+    sentinel monitor mymaster 127.0.0.1 6379 2
+    sentinel down-after-milliseconds mymaster 5000
+    sentinel failover-timeout mymaster 60000
+    sentinel parallel-syncs mymaster 1
+
+The other two configuration files will be identical, but using 5001 and 5002
+as port numbers.
+
+A few things to note about the above configuration:
+
+* The master set is called `mymaster`. It identifies the master and its replicas. Since each *master set* has a different name, Sentinel can monitor different sets of masters and replicas at the same time.
+* The quorum was set to the value of 2 (the last argument of the `sentinel monitor` configuration directive).
+* The `down-after-milliseconds` value is 5000 milliseconds, that is, 5 seconds, so the master will be detected as failing as soon as we don't receive any reply to our pings within this amount of time.
+
+Once you start the three Sentinels, you'll see a few messages they log, like:
+
+    +monitor master mymaster 127.0.0.1 6379 quorum 2
+
+This is a Sentinel event, and you can receive this kind of event via Pub/Sub
+if you `SUBSCRIBE` to the event name, as specified later in the [_Pub/Sub Messages_ section](#pubsub-messages).
+
+Sentinel generates and logs different events during failure detection and
+failover.
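+
+If you want to follow along on your own machine, you can start the three
+instances with the `redis-sentinel` executable, one per configuration file.
+The file names below are hypothetical; any names work, but each file must be
+writable, since Sentinel rewrites it to persist its state:
+
+    # Run each command in a separate terminal (or append & to background it).
+    redis-sentinel ./sentinel-5000.conf
+    redis-sentinel ./sentinel-5001.conf
+    redis-sentinel ./sentinel-5002.conf
+
+Running `redis-server ./sentinel-5000.conf --sentinel` is equivalent, in case
+the `redis-sentinel` executable is not available on your system.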
+
+Asking Sentinel about the state of a master
+---
+
+The most obvious thing to do with Sentinel to get started is to check that the
+master it is monitoring is doing well:
+
+    $ redis-cli -p 5000
+    127.0.0.1:5000> sentinel master mymaster
+     1) "name"
+     2) "mymaster"
+     3) "ip"
+     4) "127.0.0.1"
+     5) "port"
+     6) "6379"
+     7) "runid"
+     8) "953ae6a589449c13ddefaee3538d356d287f509b"
+     9) "flags"
+    10) "master"
+    11) "link-pending-commands"
+    12) "0"
+    13) "link-refcount"
+    14) "1"
+    15) "last-ping-sent"
+    16) "0"
+    17) "last-ok-ping-reply"
+    18) "735"
+    19) "last-ping-reply"
+    20) "735"
+    21) "down-after-milliseconds"
+    22) "5000"
+    23) "info-refresh"
+    24) "126"
+    25) "role-reported"
+    26) "master"
+    27) "role-reported-time"
+    28) "532439"
+    29) "config-epoch"
+    30) "1"
+    31) "num-slaves"
+    32) "1"
+    33) "num-other-sentinels"
+    34) "2"
+    35) "quorum"
+    36) "2"
+    37) "failover-timeout"
+    38) "60000"
+    39) "parallel-syncs"
+    40) "1"
+
+As you can see, it prints a lot of information about the master. A few items
+are of particular interest for us:
+
+1. `num-other-sentinels` is 2, so we know this Sentinel has already detected two more Sentinels for this master. If you check the logs you'll see the `+sentinel` events that were generated.
+2. `flags` is just `master`. If the master were down we would expect to see the `s_down` or `o_down` flags here as well.
+3. `num-slaves` is correctly set to 1, so Sentinel also detected that there is a replica attached to our master.
+
+In order to explore more about this instance, you may want to try the following
+two commands:
+
+    SENTINEL replicas mymaster
+    SENTINEL sentinels mymaster
+
+The first will provide similar information about the replicas connected to the
+master, and the second about the other Sentinels.
+
+Obtaining the address of the current master
+---
+
+As we already specified, Sentinel also acts as a configuration provider for
+clients that want to connect to a master and its replicas. Because of
+possible failovers and reconfigurations, clients have no way to know who the
+currently active master is for a given set of instances, so Sentinel exports
+an API to answer this question:
+
+    127.0.0.1:5000> SENTINEL get-master-addr-by-name mymaster
+    1) "127.0.0.1"
+    2) "6379"
+
+### Testing the failover
+
+At this point our toy Sentinel deployment is ready to be tested. We can
+just kill our master and check if the configuration changes. To do so
+we can just run:
+
+    redis-cli -p 6379 DEBUG sleep 30
+
+This command will make our master unreachable, sleeping for 30 seconds. It
+basically simulates a master hanging for some reason.
+
+If you check the Sentinel logs, you should be able to see a lot of action:
+
+1. Each Sentinel detects the master is down with a `+sdown` event.
+2. This event is later escalated to `+odown`, which means that multiple Sentinels agree about the fact that the master is not reachable.
+3. The Sentinels vote for the Sentinel that will start the first failover attempt.
+4. The failover happens.
+
+If you ask again what the current master address for `mymaster` is, you
+should eventually get a different reply this time:
+
+    127.0.0.1:5000> SENTINEL get-master-addr-by-name mymaster
+    1) "127.0.0.1"
+    2) "6380"
+
+So far so good. At this point you may jump ahead and create your Sentinel
+deployment, or read on to understand all the Sentinel commands and internals.
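+
+If you want to repeat the failover test while watching what happens, you can
+subscribe to the event stream of one of the Sentinels from another terminal,
+using the Pub/Sub support described later in this document:
+
+    # Every event is published on a channel named after the event itself,
+    # so PSUBSCRIBE * receives them all (+sdown, +odown, +switch-master, ...).
+    redis-cli -p 5000 PSUBSCRIBE *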
+
+## Sentinel API
+
+Sentinel provides an API in order to inspect its state, check the health
+of monitored masters and replicas, subscribe in order to receive specific
+notifications, and change the Sentinel configuration at run time.
+
+By default Sentinel runs using TCP port 26379 (note that 6379 is the normal
+Redis port). Sentinels accept commands using the Redis protocol, so you can
+use `redis-cli` or any other unmodified Redis client in order to talk with
+Sentinel.
+
+It is possible to directly query a Sentinel to check the state of
+the monitored Redis instances from its point of view, to see what other
+Sentinels it knows about, and so forth. Alternatively, using Pub/Sub, it is
+possible to receive *push style* notifications from Sentinels every time some
+event happens, like a failover, or an instance entering an error condition,
+and so forth.
+
+### Sentinel commands
+
+The `SENTINEL` command is the main API for Sentinel. The following is the list of its subcommands (the minimal version is noted where applicable):
+
+* **SENTINEL CONFIG GET `<name>`** (`>= 6.2`) Get the current value of a global Sentinel configuration parameter. The specified name may be a wildcard, similar to the Redis `CONFIG GET` command.
+* **SENTINEL CONFIG SET `<name>` `<value>`** (`>= 6.2`) Set the value of a global Sentinel configuration parameter.
+* **SENTINEL CKQUORUM `<master name>`** Check if the current Sentinel configuration is able to reach the quorum needed to fail over a master, and the majority needed to authorize the failover. This command should be used in monitoring systems to check if a Sentinel deployment is ok.
+* **SENTINEL FLUSHCONFIG** Force Sentinel to rewrite its configuration on disk, including the current Sentinel state. Normally Sentinel rewrites the configuration every time something changes in its state (in the context of the subset of the state which is persisted on disk across restarts). However sometimes the configuration file may be lost because of operation errors, disk failures, package upgrade scripts, or configuration managers. In those cases a way to force Sentinel to rewrite the configuration file is handy. This command works even if the previous configuration file is completely missing.
+* **SENTINEL FAILOVER `<master name>`** Force a failover as if the master was not reachable, and without asking for agreement from the other Sentinels (however a new version of the configuration will be published, so that the other Sentinels will update their configurations).
+* **SENTINEL GET-MASTER-ADDR-BY-NAME `<master name>`** Return the IP address and port number of the master with that name. If a failover is in progress or has terminated successfully for this master, it returns the address and port of the promoted replica.
+* **SENTINEL INFO-CACHE** (`>= 3.2`) Return cached `INFO` output from masters and replicas.
+* **SENTINEL IS-MASTER-DOWN-BY-ADDR `<ip>` `<port>` `<current-epoch>` `<runid>`** Check if the master specified by ip:port is down from the current Sentinel's point of view. This command is mostly for internal use.
+* **SENTINEL MASTER `<master name>`** Show the state and info of the specified master.
+* **SENTINEL MASTERS** Show a list of monitored masters and their state.
+* **SENTINEL MONITOR** Start Sentinel's monitoring. Refer to the [_Reconfiguring Sentinel at Runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.
+* **SENTINEL MYID** (`>= 6.2`) Return the ID of the Sentinel instance.
+* **SENTINEL PENDING-SCRIPTS** This command returns information about pending scripts.
+* **SENTINEL REMOVE** Stop Sentinel's monitoring. Refer to the [_Reconfiguring Sentinel at Runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.
+* **SENTINEL REPLICAS `<master name>`** (`>= 5.0`) Show a list of replicas for this master, and their state.
+* **SENTINEL SENTINELS `<master name>`** Show a list of Sentinel instances for this master, and their state.
+* **SENTINEL SET** Set Sentinel's monitoring configuration. Refer to the [_Reconfiguring Sentinel at Runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.
+* **SENTINEL SIMULATE-FAILURE (crash-after-election|crash-after-promotion|help)** (`>= 3.2`) This command simulates different Sentinel crash scenarios.
+* **SENTINEL RESET `<pattern>`** This command will reset all the masters with a matching name. The pattern argument is a glob-style pattern. The reset process clears any previous state in a master (including a failover in progress), and removes every replica and Sentinel already discovered and associated with the master.
+
+For connection management and administration purposes, Sentinel supports the following subset of Redis' commands:
+
+* **ACL** (`>= 6.2`) This command manages the Sentinel Access Control List. For more information refer to the [ACL](/topics/acl) documentation page and the [_Sentinel Access Control List authentication_ section](#sentinel-access-control-list-authentication).
+* **AUTH** (`>= 5.0.1`) Authenticate a client connection. For more information refer to the `AUTH` command and the [_Configuring Sentinel instances with authentication_ section](#configuring-sentinel-instances-with-authentication).
+* **CLIENT** This command manages client connections. For more information refer to its subcommands' pages.
+* **COMMAND** (`>= 6.2`) This command returns information about commands. For more information refer to the `COMMAND` command and its various subcommands.
+* **HELLO** (`>= 6.0`) Switch the connection's protocol. For more information refer to the `HELLO` command.
+* **INFO** Return information and statistics about the Sentinel server. For more information see the `INFO` command.
+* **PING** This command simply returns PONG.
+* **ROLE** This command returns the string "sentinel" and a list of monitored masters. For more information refer to the `ROLE` command.
+* **SHUTDOWN** Shut down the Sentinel instance.
+
+Lastly, Sentinel also supports the `SUBSCRIBE`, `UNSUBSCRIBE`, `PSUBSCRIBE` and `PUNSUBSCRIBE` commands. Refer to the [_Pub/Sub Messages_ section](#pubsub-messages) for more details.
+
+### Reconfiguring Sentinel at Runtime
+
+Starting with Redis version 2.8.4, Sentinel provides an API in order to add, remove, or change the configuration of a given master. Note that if you have multiple Sentinels you should apply the changes to all of your instances for Redis Sentinel to work properly, as changing the configuration of a single Sentinel does not automatically propagate the changes to the other Sentinels in the network.
+
+The following is a list of `SENTINEL` subcommands used in order to update the configuration of a Sentinel instance.
+
+* **SENTINEL MONITOR `<name>` `<ip>` `<port>` `<quorum>`** This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. It is identical to the `sentinel monitor` configuration directive in the `sentinel.conf` configuration file, with the difference that you can't use a hostname as `ip`; you need to provide an IPv4 or IPv6 address.
+* **SENTINEL REMOVE `<name>`** is used in order to remove the specified master: the master will no longer be monitored, and will be totally removed from the internal state of the Sentinel, so it will no longer be listed by `SENTINEL masters` and so forth.
+* **SENTINEL SET `<name>` [`