diff --git a/docs/plugins/codecs/multiline.asciidoc b/docs/plugins/codecs/multiline.asciidoc
index f551b2ffe..eda7a3580 100644
--- a/docs/plugins/codecs/multiline.asciidoc
+++ b/docs/plugins/codecs/multiline.asciidoc
@@ -102,6 +102,8 @@ Available configuration options:
|=======================================================================
|Setting |Input type|Required|Default value
| <> |<>, one of `["ASCII-8BIT", "Big5", "Big5-HKSCS", "Big5-UAO", "CP949", "Emacs-Mule", "EUC-JP", "EUC-KR", "EUC-TW", "GB18030", "GBK", "ISO-8859-1", "ISO-8859-2", "ISO-8859-3", "ISO-8859-4", "ISO-8859-5", "ISO-8859-6", "ISO-8859-7", "ISO-8859-8", "ISO-8859-9", "ISO-8859-10", "ISO-8859-11", "ISO-8859-13", "ISO-8859-14", "ISO-8859-15", "ISO-8859-16", "KOI8-R", "KOI8-U", "Shift_JIS", "US-ASCII", "UTF-8", "UTF-16BE", "UTF-16LE", "UTF-32BE", "UTF-32LE", "Windows-1251", "GB2312", "IBM437", "IBM737", "IBM775", "CP850", "IBM852", "CP852", "IBM855", "CP855", "IBM857", "IBM860", "IBM861", "IBM862", "IBM863", "IBM864", "IBM865", "IBM866", "IBM869", "Windows-1258", "GB1988", "macCentEuro", "macCroatian", "macCyrillic", "macGreek", "macIceland", "macRoman", "macRomania", "macThai", "macTurkish", "macUkraine", "CP950", "CP951", "stateless-ISO-2022-JP", "eucJP-ms", "CP51932", "GB12345", "ISO-2022-JP", "ISO-2022-JP-2", "CP50220", "CP50221", "Windows-1252", "Windows-1250", "Windows-1256", "Windows-1253", "Windows-1255", "Windows-1254", "TIS-620", "Windows-874", "Windows-1257", "Windows-31J", "MacJapanese", "UTF-7", "UTF8-MAC", "UTF-16", "UTF-32", "UTF8-DoCoMo", "SJIS-DoCoMo", "UTF8-KDDI", "SJIS-KDDI", "ISO-2022-JP-KDDI", "stateless-ISO-2022-JP-KDDI", "UTF8-SoftBank", "SJIS-SoftBank", "BINARY", "CP437", "CP737", "CP775", "IBM850", "CP857", "CP860", "CP861", "CP862", "CP863", "CP864", "CP865", "CP866", "CP869", "CP1258", "Big5-HKSCS:2008", "eucJP", "euc-jp-ms", "eucKR", "eucTW", "EUC-CN", "eucCN", "CP936", "ISO2022-JP", "ISO2022-JP2", "ISO8859-1", "CP1252", "ISO8859-2", "CP1250", "ISO8859-3", "ISO8859-4", "ISO8859-5", "ISO8859-6", "CP1256", "ISO8859-7", "CP1253", "ISO8859-8", "CP1255", "ISO8859-9", "CP1254", "ISO8859-10", "ISO8859-11", "CP874", "ISO8859-13", "CP1257", "ISO8859-14", "ISO8859-15", "ISO8859-16", "CP878", "CP932", "csWindows31J", "SJIS", "PCK", "MacJapan", "ASCII", "ANSI_X3.4-1968", "646", "CP65000", "CP65001", "UTF-8-MAC", "UTF-8-HFS", "UCS-2BE", "UCS-4BE", "UCS-4LE", "CP1251", "external", "locale"]`|No|`"UTF-8"`
+| <> |<>|No|`"10 MiB"`
+| <> |<>|No|`500`
| <> |<>|No|`"multiline"`
| <> |<>|No|`false`
| <> |<>|Yes|
@@ -129,6 +131,28 @@ or in another character set other than `UTF-8`.

This only affects "plain" format logs since JSON is `UTF-8` already.

+[[plugins-codecs-multiline-max_bytes]]
+===== `max_bytes`
+
+ * Value type is <>
+ * Default value is `"10 MiB"`
+
+The accumulation of events can make Logstash exit with an out-of-memory error
+if event boundaries are not correctly defined. This setting makes sure to flush
+multiline events after reaching a number of bytes; it is used in combination
+with `max_lines`.
+
+[[plugins-codecs-multiline-max_lines]]
+===== `max_lines`
+
+ * Value type is <>
+ * Default value is `500`
+
+The accumulation of events can make Logstash exit with an out-of-memory error
+if event boundaries are not correctly defined. This setting makes sure to flush
+multiline events after reaching a number of lines; it is used in combination
+with `max_bytes`.
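+
+For example, a `file` input using this codec could cap buffering with both
+limits. This is a minimal sketch: the `path`, `pattern`, and limit values are
+illustrative, not defaults.
+
+[source,ruby]
+    input {
+      file {
+        path => "/var/log/someapp/app.log"
+        codec => multiline {
+          # join lines that begin with whitespace onto the previous line
+          pattern => "^\s"
+          what => "previous"
+          # flush the buffered event once either limit is reached
+          max_lines => 1000
+          max_bytes => "20 MiB"
+        }
+      }
+    }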
+ [[plugins-codecs-multiline-multiline_tag]] ===== `multiline_tag` diff --git a/docs/plugins/codecs/s3_plain.asciidoc b/docs/plugins/codecs/s3_plain.asciidoc index 23fc409ab..89f496a80 100644 --- a/docs/plugins/codecs/s3_plain.asciidoc +++ b/docs/plugins/codecs/s3_plain.asciidoc @@ -1,6 +1,10 @@ [[plugins-codecs-s3_plain]] === s3_plain + +NOTE: This is a community-maintained plugin! It does not ship with Logstash by default, but it is easy to install by running `bin/plugin install logstash-codec-s3_plain`. + + The "s3_plain" codec is used for backward compatibility with previous version of the S3 Output @@ -8,20 +12,16 @@ The "s3_plain" codec is used for backward compatibility with previous version of ==== Synopsis -This plugin supports the following configuration options: +This plugin has no configuration options. -Required configuration options: +Complete configuration example: [source,json] -------------------------- s3_plain { - } +} -------------------------- -==== Details - -  - diff --git a/docs/plugins/filters/date.asciidoc b/docs/plugins/filters/date.asciidoc index 298fec22b..3c79d2a69 100644 --- a/docs/plugins/filters/date.asciidoc +++ b/docs/plugins/filters/date.asciidoc @@ -283,7 +283,7 @@ This is useful in case the time zone cannot be extracted from the value, and is not the platform default. If this is not specified the platform default will be used. Canonical ID is good as it takes care of daylight saving time for you -For example, `America/Los_Angeles` or `Europe/France` are valid IDs. +For example, `America/Los_Angeles` or `Europe/Paris` are valid IDs. [[plugins-filters-date-type]] ===== `type` (DEPRECATED) diff --git a/docs/plugins/filters/json.asciidoc b/docs/plugins/filters/json.asciidoc index 7124dcc2e..0f67c54d3 100644 --- a/docs/plugins/filters/json.asciidoc +++ b/docs/plugins/filters/json.asciidoc @@ -194,7 +194,7 @@ The configuration for the JSON filter: [source,ruby] source => source_field -For example, if you have JSON data in the @message field: +For example, if you have JSON data in the `message` field: [source,ruby] filter { json { @@ -202,7 +202,7 @@ For example, if you have JSON data in the @message field: } } -The above would parse the json from the @message field +The above would parse the json from the `message` field [[plugins-filters-json-tags]] ===== `tags` (DEPRECATED) diff --git a/docs/plugins/filters/mutate.asciidoc b/docs/plugins/filters/mutate.asciidoc index 5cb0ba897..733263ecd 100644 --- a/docs/plugins/filters/mutate.asciidoc +++ b/docs/plugins/filters/mutate.asciidoc @@ -124,7 +124,15 @@ Convert a field's value to a different type, like turning a string to an integer. If the field value is an array, all members will be converted. If the field is a hash, no action will be taken. -Valid conversion targets are: integer, float, string. +If the conversion type is `boolean`, the acceptable values are: + +* **True:** `true`, `t`, `yes`, `y`, and `1` +* **False:** `false`, `f`, `no`, `n`, and `0` + +If a value other than these is provided, it will pass straight through +and log a warning message. + +Valid conversion targets are: integer, float, string, and boolean. 
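+
+For instance, a minimal sketch of a boolean conversion (the `enabled` field
+name is illustrative):
+
+[source,ruby]
+    filter {
+      mutate {
+        # "true", "t", "yes", "y", and "1" become true; their counterparts become false
+        convert => [ "enabled", "boolean" ]
+      }
+    }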
Example:

[source,ruby]
diff --git a/docs/plugins/filters/split.asciidoc b/docs/plugins/filters/split.asciidoc
index 5ecd4d6e1..58db19928 100644
--- a/docs/plugins/filters/split.asciidoc
+++ b/docs/plugins/filters/split.asciidoc
@@ -40,6 +40,7 @@ Available configuration options:
| <> |<>|No|`false`
| <> |<>|No|`[]`
| <> |<>|No|`[]`
+| <> |<>|No|
| <> |<>|No|`"\n"`
|=======================================================================
@@ -202,6 +203,15 @@ would remove a sad, unwanted tag as well.
Only handle events with all of these tags.
Optional.

+[[plugins-filters-split-target]]
+===== `target`
+
+ * Value type is <>
+ * There is no default value for this setting.
+
+The field within the new event into which the split value is written.
+If not set, the target defaults to the name of the field being split.
+
[[plugins-filters-split-terminator]]
===== `terminator`
diff --git a/docs/plugins/filters/translate.asciidoc b/docs/plugins/filters/translate.asciidoc
index a6dcce784..77b6c36d6 100644
--- a/docs/plugins/filters/translate.asciidoc
+++ b/docs/plugins/filters/translate.asciidoc
@@ -230,6 +230,7 @@ For example, if we have configured `fallback => "no match"`, using this dictiona
Then, if logstash received an event with the field `foo` set to `bar`, the destination field would be set to `bar`. However, if logstash received an event with `foo` set to `nope`, then the destination field would still be populated, but with the value of `no match`.
+This configuration can be dynamic and include parts of the event using the `%{field}` syntax.

[[plugins-filters-translate-field]]
===== `field`
diff --git a/docs/plugins/inputs/couchdb_changes.asciidoc b/docs/plugins/inputs/couchdb_changes.asciidoc
index 7d27e4e45..0888be8c6 100644
--- a/docs/plugins/inputs/couchdb_changes.asciidoc
+++ b/docs/plugins/inputs/couchdb_changes.asciidoc
@@ -3,13 +3,13 @@

-This CouchDB input allows you to automatically stream events from the
+This CouchDB input allows you to automatically stream events from the
CouchDB http://guide.couchdb.org/draft/notifications.html[_changes] URI.
Moreover, any "future" changes will automatically be streamed as well making it easy to synchronize
your CouchDB data with any target destination

### Upsert and delete
-You can use event metadata to allow for document deletion.
+You can use event metadata to allow for document deletion.
All non-delete operations are treated as upserts

### Starting at a Specific Sequence
@@ -149,7 +149,7 @@ The format of input data (plain, json, json_event)
Logstash connects to CouchDB's _changes with feed=continuous
The heartbeat is how often (in milliseconds) Logstash will ping
-CouchDB to ensure the connection is maintained. Changing this
+CouchDB to ensure the connection is maintained. Changing this
setting is not recommended unless you know what you are doing.

[[plugins-inputs-couchdb_changes-host]]
@@ -166,7 +166,7 @@ IP or hostname of your CouchDB instance
 * Value type is <>
 * Default value is `true`

-Future feature! Until implemented, changing this from the default
+Future feature! Until implemented, changing this from the default
will not do anything.

Ignore attachments associated with CouchDB documents.
@@ -181,7 +181,7 @@ If unspecified, Logstash will attempt to read the last sequence number
from the `sequence_path` file. If that is empty or non-existent, it will
begin with 0 (the beginning).
-If you specify this value, it is anticipated that you will +If you specify this value, it is anticipated that you will only be doing so for an initial read under special circumstances and that you will unset this value afterwards. @@ -215,7 +215,7 @@ will cause unexpected results. * Value type is <> * Default value is `nil` -Password, if authentication is needed to connect to +Password, if authentication is needed to connect to CouchDB [[plugins-inputs-couchdb_changes-port]] @@ -283,11 +283,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. @@ -297,7 +297,7 @@ when sent to another Logstash server. * Value type is <> * Default value is `nil` -Username, if authentication is needed to connect to +Username, if authentication is needed to connect to CouchDB diff --git a/docs/plugins/inputs/elasticsearch.asciidoc b/docs/plugins/inputs/elasticsearch.asciidoc index e031b4faa..e3fb888d3 100644 --- a/docs/plugins/inputs/elasticsearch.asciidoc +++ b/docs/plugins/inputs/elasticsearch.asciidoc @@ -297,11 +297,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/eventlog.asciidoc b/docs/plugins/inputs/eventlog.asciidoc index 0bf46f954..edc2b2e36 100644 --- a/docs/plugins/inputs/eventlog.asciidoc +++ b/docs/plugins/inputs/eventlog.asciidoc @@ -142,11 +142,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/exec.asciidoc b/docs/plugins/inputs/exec.asciidoc index f31218dd2..a201a1b14 100644 --- a/docs/plugins/inputs/exec.asciidoc +++ b/docs/plugins/inputs/exec.asciidoc @@ -153,11 +153,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. 
If you try to set a type on an event that already has one (for example when you
send an event from a shipper to an indexer) then
-a new input will not override the existing type. A type set at
+a new input will not override the existing type. A type set at
the shipper stays with that event for its life even
when sent to another Logstash server.
diff --git a/docs/plugins/inputs/file.asciidoc b/docs/plugins/inputs/file.asciidoc
index 3574276c9..03b5823a7 100644
--- a/docs/plugins/inputs/file.asciidoc
+++ b/docs/plugins/inputs/file.asciidoc
@@ -37,6 +37,7 @@ Available configuration options:
|Setting |Input type|Required|Default value
| <> |<>|No|`{}`
| <> |<>|No|`"plain"`
+| <> |<>|No|`"\n"`
| <> |<>|No|`15`
| <> |<>|No|
| <> |<>|Yes|
@@ -94,13 +95,21 @@ The codec used for input data. Input codecs are a convenient method for decoding

+[[plugins-inputs-file-delimiter]]
+===== `delimiter`
+
+ * Value type is <>
+ * Default value is `"\n"`
+
+Set the newline delimiter. Defaults to `"\n"`.
+
[[plugins-inputs-file-discover_interval]]
===== `discover_interval`

 * Value type is <>
 * Default value is `15`

-How often we expand globs to discover new files to watch.
+How often (in seconds) we expand globs to discover new files to watch.

[[plugins-inputs-file-exclude]]
===== `exclude`
@@ -197,9 +206,9 @@ has no effect.
 * Value type is <>
 * Default value is `1`

-How often we stat files to see if they have been modified. Increasing
-this interval will decrease the number of system calls we make, but
-increase the time to detect new log lines.
+How often (in seconds) we stat files to see if they have been modified.
+Increasing this interval will decrease the number of system calls we make,
+but increase the time to detect new log lines.

[[plugins-inputs-file-tags]]
===== `tags`
@@ -222,11 +231,11 @@ Add a `type` field to all events handled by this input.

Types are used mainly for filter activation.

The type is stored as part of the event itself, so you can
-also use the type to search for it in the web interface.
+also use the type to search for it in Kibana.

If you try to set a type on an event that already has one (for example when you
send an event from a shipper to an indexer) then
-a new input will not override the existing type. A type set at
+a new input will not override the existing type. A type set at
the shipper stays with that event for its life even
when sent to another Logstash server.
diff --git a/docs/plugins/inputs/ganglia.asciidoc b/docs/plugins/inputs/ganglia.asciidoc
index eb2b072ce..cce6646b0 100644
--- a/docs/plugins/inputs/ganglia.asciidoc
+++ b/docs/plugins/inputs/ganglia.asciidoc
@@ -144,11 +144,11 @@ Add a `type` field to all events handled by this input.

Types are used mainly for filter activation.

The type is stored as part of the event itself, so you can
-also use the type to search for it in the web interface.
+also use the type to search for it in Kibana.

If you try to set a type on an event that already has one (for example when you
send an event from a shipper to an indexer) then
-a new input will not override the existing type. A type set at
+a new input will not override the existing type. A type set at
the shipper stays with that event for its life even
when sent to another Logstash server.
diff --git a/docs/plugins/inputs/gelf.asciidoc b/docs/plugins/inputs/gelf.asciidoc
index 128f91658..33cdd8015 100644
--- a/docs/plugins/inputs/gelf.asciidoc
+++ b/docs/plugins/inputs/gelf.asciidoc
@@ -178,11 +178,11 @@ Add a `type` field to all events handled by this input.
Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/generator.asciidoc b/docs/plugins/inputs/generator.asciidoc index 2933b69f2..3a46122d9 100644 --- a/docs/plugins/inputs/generator.asciidoc +++ b/docs/plugins/inputs/generator.asciidoc @@ -175,8 +175,7 @@ This can help with processing later. * Value type is <> * Default value is `1` -Set this to the number of threads you want this input to spawn. -This is the same as declaring the input multiple times + [[plugins-inputs-generator-type]] ===== `type` @@ -189,11 +188,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/github.asciidoc b/docs/plugins/inputs/github.asciidoc index 9560f65bc..d373cf11f 100644 --- a/docs/plugins/inputs/github.asciidoc +++ b/docs/plugins/inputs/github.asciidoc @@ -165,11 +165,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/graphite.asciidoc b/docs/plugins/inputs/graphite.asciidoc index 3650e8b18..4611f76f6 100644 --- a/docs/plugins/inputs/graphite.asciidoc +++ b/docs/plugins/inputs/graphite.asciidoc @@ -216,11 +216,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. 
diff --git a/docs/plugins/inputs/heartbeat.asciidoc b/docs/plugins/inputs/heartbeat.asciidoc index a570a5f2d..86ab7a9f9 100644 --- a/docs/plugins/inputs/heartbeat.asciidoc +++ b/docs/plugins/inputs/heartbeat.asciidoc @@ -2,12 +2,10 @@ === heartbeat -NOTE: This is a community-maintained plugin! It does not ship with Logstash by default, but it is easy to install by running `bin/plugin install logstash-input-heartbeat`. - Generate heartbeat messages. -The general intention of this is to test the performance and +The general intention of this is to test the performance and availability of Logstash. @@ -35,6 +33,7 @@ Available configuration options: |Setting |Input type|Required|Default value | <> |<>|No|`{}` | <> |<>|No|`"plain"` +| <> |<>|No|`-1` | <> |<>|No|`60` | <> |<>|No|`"ok"` | <> |<>|No| @@ -79,6 +78,15 @@ This only affects `plain` format logs since json is `UTF-8` already. The codec used for input data. Input codecs are a convenient method for decoding your data before it enters the input, without needing a separate filter in your Logstash pipeline. +[[plugins-inputs-heartbeat-count]] +===== `count` + + * Value type is <> + * Default value is `-1` + +How many times to iterate. +This is typically used only for testing purposes. + [[plugins-inputs-heartbeat-debug]] ===== `debug` (DEPRECATED) @@ -119,7 +127,7 @@ If you set this to `epoch` then this plugin will use the current timestamp in unix timestamp (which is by definition, UTC). It will output this value into a field called `clock` -If you set this to `sequence` then this plugin will send a sequence of +If you set this to `sequence` then this plugin will send a sequence of numbers beginning at 0 and incrementing each interval. It will output this value into a field called `clock` @@ -157,8 +165,7 @@ This can help with processing later. * Value type is <> * Default value is `1` -Set this to the number of threads you want this input to spawn. -This is the same as declaring the input multiple times + [[plugins-inputs-heartbeat-type]] ===== `type` @@ -171,11 +178,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/heroku.asciidoc b/docs/plugins/inputs/heroku.asciidoc index 57cbae278..e14481b39 100644 --- a/docs/plugins/inputs/heroku.asciidoc +++ b/docs/plugins/inputs/heroku.asciidoc @@ -150,11 +150,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. 
diff --git a/docs/plugins/inputs/imap.asciidoc b/docs/plugins/inputs/imap.asciidoc
index f363ab0dd..31af8f967 100644
--- a/docs/plugins/inputs/imap.asciidoc
+++ b/docs/plugins/inputs/imap.asciidoc
@@ -225,11 +225,11 @@ Add a `type` field to all events handled by this input.

Types are used mainly for filter activation.

The type is stored as part of the event itself, so you can
-also use the type to search for it in the web interface.
+also use the type to search for it in Kibana.

If you try to set a type on an event that already has one (for example when you
send an event from a shipper to an indexer) then
-a new input will not override the existing type. A type set at
+a new input will not override the existing type. A type set at
the shipper stays with that event for its life even
when sent to another Logstash server.
diff --git a/docs/plugins/inputs/irc.asciidoc b/docs/plugins/inputs/irc.asciidoc
index a1bfefac2..0bd842972 100644
--- a/docs/plugins/inputs/irc.asciidoc
+++ b/docs/plugins/inputs/irc.asciidoc
@@ -31,14 +31,17 @@ Available configuration options:
|=======================================================================
|Setting |Input type|Required|Default value
| <> |<>|No|`{}`
+| <> |<>|No|`false`
| <> |<>|Yes|
| <> |<>|No|`"plain"`
+| <> |<>|No|`false`
| <> |<>|Yes|
| <> |<>|No|`"logstash"`
| <> |<>|No|
| <> |<>|No|`6667`
| <> |<>|No|`"logstash"`
| <> |<>|No|`false`
+| <> |<>|No|`5`
| <> |<>|No|
| <> |<>|No|
| <> |<>|No|`"logstash"`
@@ -58,6 +61,14 @@ Available configuration options:

Add a field to an event

+[[plugins-inputs-irc-catch_all]]
+===== `catch_all`
+
+ * Value type is <>
+ * Default value is `false`
+
+Catch all IRC channel/user events, not just channel messages
+
[[plugins-inputs-irc-channels]]
===== `channels`
@@ -67,11 +78,11 @@ Add a field to an event

Channels to join and read messages from.

-These should be full channel names including the `#` symbol, such as
-`#logstash`.
+These should be full channel names including the '#' symbol, such as
+"#logstash".

For passworded channels, add a space and the channel password, such as
-`#logstash password`.
+"#logstash password".

[[plugins-inputs-irc-charset]]
@@ -115,6 +126,14 @@ The codec used for input data. Input codecs are a convenient method for decoding

The format of input data (plain, json, json_event)

+[[plugins-inputs-irc-get_stats]]
+===== `get_stats`
+
+ * Value type is <>
+ * Default value is `false`
+
+Gather and send user counts for channels. This requires `catch_all` and will force it on.
+
[[plugins-inputs-irc-host]]
===== `host`
@@ -179,6 +198,14 @@ IRC Real name

Set this to true to enable SSL.

+[[plugins-inputs-irc-stats_interval]]
+===== `stats_interval`
+
+ * Value type is <>
+ * Default value is `5`
+
+How often (in minutes) to gather the user count stats
+
[[plugins-inputs-irc-tags]]
===== `tags`
@@ -200,11 +227,11 @@ Add a `type` field to all events handled by this input.

Types are used mainly for filter activation.

The type is stored as part of the event itself, so you can
-also use the type to search for it in the web interface.
+also use the type to search for it in Kibana.

If you try to set a type on an event that already has one (for example when you
send an event from a shipper to an indexer) then
-a new input will not override the existing type. A type set at
+a new input will not override the existing type. A type set at
the shipper stays with that event for its life even
when sent to another Logstash server.
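+
+Putting the new options together, a hypothetical `irc` input that also gathers
+channel statistics might look like this (the server, channel, nick, and
+interval values are illustrative):
+
+[source,ruby]
+    input {
+      irc {
+        host => "irc.freenode.org"
+        channels => [ "#logstash" ]
+        nick => "logstash-bot"
+        # capture joins, parts, and other user events, not just messages
+        catch_all => true
+        # requires catch_all; emits periodic user-count events
+        get_stats => true
+        # minutes between user-count polls
+        stats_interval => 10
+      }
+    }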
diff --git a/docs/plugins/inputs/jmx.asciidoc b/docs/plugins/inputs/jmx.asciidoc
index 9da4726bc..ceeef6ce9 100644
--- a/docs/plugins/inputs/jmx.asciidoc
+++ b/docs/plugins/inputs/jmx.asciidoc
@@ -5,7 +5,92 @@
NOTE: This is a community-maintained plugin! It does not ship with Logstash by default, but it is easy to install by running `bin/plugin install logstash-input-jmx`.

-Permits to retrieve metrics from jmx.
+This input plugin retrieves metrics from remote Java applications using JMX.
+Every `polling_frequency` seconds, it scans a folder of JSON configuration
+files, each describing a JVM to monitor and the metrics to retrieve.
+A pool of threads then retrieves the metrics and creates events.
+
+## The configuration:
+
+In the Logstash configuration, you must set the polling frequency,
+the number of threads used to poll metrics, and the absolute path of a
+directory containing JSON files that describe, per JVM, the metrics to retrieve.
+Logstash input configuration example:
+[source,ruby]
+    jmx {
+      # Required
+      path => "/apps/logstash_conf/jmxconf"
+      # Optional, default 60s
+      polling_frequency => 15
+      type => "jmx"
+      # Optional, default 4
+      nb_thread => 4
+    }
+
+Json JMX configuration example:
+[source,js]
+    {
+      //Required, JMX listening host/ip
+      "host" : "192.168.1.2",
+      //Required, JMX listening port
+      "port" : 1335,
+      //Optional, the username to connect to JMX
+      "username" : "user",
+      //Optional, the password to connect to JMX
+      "password": "pass",
+      //Optional, use this alias as a prefix in the metric name. If not set use _
+      "alias" : "test.homeserver.elasticsearch",
+      //Required, list of JMX metrics to retrieve
+      "queries" : [
+      {
+        //Required, the object name of Mbean to request
+        "object_name" : "java.lang:type=Memory",
+        //Optional, use this alias in the metrics value instead of the object_name
+        "object_alias" : "Memory"
+      }, {
+        "object_name" : "java.lang:type=Runtime",
+        //Optional, set of attributes to retrieve. If not set retrieve
+        //all metrics available on the configured object_name.
+        "attributes" : [ "Uptime", "StartTime" ],
+        "object_alias" : "Runtime"
+      }, {
+        //object_name can be configured with * to retrieve all matching Mbeans
+        "object_name" : "java.lang:type=GarbageCollector,name=*",
+        "attributes" : [ "CollectionCount", "CollectionTime" ],
+        //object_alias can be based on specific value from the object_name thanks to ${}.
+        //In this case ${type} will be replaced by GarbageCollector...
+        "object_alias" : "${type}.${name}"
+      }, {
+        "object_name" : "java.nio:type=BufferPool,name=*",
+        "object_alias" : "${type}.${name}"
+      } ]
+    }
+
+Here are examples of generated events. When the returned metric value is a
+number or boolean, it is stored in the `metric_value_number` event field;
+otherwise it is stored in the `metric_value_string` event field.
+[source,ruby]
+    {
+      "@version" => "1",
+      "@timestamp" => "2014-02-18T20:57:27.688Z",
+      "host" => "192.168.1.2",
+      "path" => "/apps/logstash_conf/jmxconf",
+      "type" => "jmx",
+      "metric_path" => "test.homeserver.elasticsearch.GarbageCollector.ParNew.CollectionCount",
+      "metric_value_number" => 2212
+    }
+
+[source,ruby]
+    {
+      "@version" => "1",
+      "@timestamp" => "2014-02-18T20:58:06.376Z",
+      "host" => "localhost",
+      "path" => "/apps/logstash_conf/jmxconf",
+      "type" => "jmx",
+      "metric_path" => "test.homeserver.elasticsearch.BufferPool.mapped.ObjectName",
+      "metric_value_string" => "java.nio:type=BufferPool,name=mapped"
+    }
+

@@ -124,7 +209,6 @@ Indicate number of thread launched to retrieve metrics
 * Value type is <>
 * There is no default value for this setting.

-TODO add documentation
Path where json conf files are stored

[[plugins-inputs-jmx-polling_frequency]]
===== `polling_frequency`

 * Value type is <>
 * Default value is `60`

-Indicate interval between to jmx metrics retrieval
+Interval (in seconds) between two JMX metrics retrievals

[[plugins-inputs-jmx-tags]]
===== `tags`
@@ -157,11 +241,11 @@ Add a `type` field to all events handled by this input.

Types are used mainly for filter activation.

The type is stored as part of the event itself, so you can
-also use the type to search for it in the web interface.
+also use the type to search for it in Kibana.

If you try to set a type on an event that already has one (for example when you
send an event from a shipper to an indexer) then
-a new input will not override the existing type. A type set at
+a new input will not override the existing type. A type set at
the shipper stays with that event for its life even
when sent to another Logstash server.
diff --git a/docs/plugins/inputs/kafka.asciidoc b/docs/plugins/inputs/kafka.asciidoc
index 4058e6f5a..49ccf0507 100644
--- a/docs/plugins/inputs/kafka.asciidoc
+++ b/docs/plugins/inputs/kafka.asciidoc
@@ -7,8 +7,8 @@ This input will read events from a Kafka topic. It uses the high level consumer
by Kafka to read messages from the broker. It also maintains the state of what has
been consumed using Zookeeper. The default input codec is json
-The only required configuration is the topic name. By default it will connect to a Zookeeper
-running on localhost. All the broker information is read from Zookeeper state
+You must configure `topic_id`, `white_list` or `black_list`. By default it will connect to a
+Zookeeper running on localhost. All the broker information is read from Zookeeper state

Ideally you should have as many threads as the number of partitions for a perfect
balance -- more threads than partitions means that some threads will be idle
@@ -30,7 +30,6 @@ Required configuration options:
[source,json]
--------------------------
kafka {
-    topic_id => ...
} -------------------------- @@ -42,22 +41,27 @@ Available configuration options: |======================================================================= |Setting |Input type|Required|Default value | <> |<>|No|`{}` +| <> |<>, one of `["largest", "smallest"]`|No|`"largest"` +| <> |<>|No|`nil` | <> |<>|No|`"json"` | <> |<>|No|`nil` | <> |<>|No|`true` | <> |<>|No|`0` | <> |<>|No|`1` | <> |<>|No|`-1` +| <> |<>|No|`"kafka.serializer.DefaultDecoder"` | <> |<>|No|`false` | <> |<>|No|`1048576` | <> |<>|No|`"logstash"` +| <> |<>|No|`"kafka.serializer.DefaultDecoder"` | <> |<>|No|`20` | <> |<>|No|`2000` | <> |<>|No|`4` | <> |<>|No|`false` | <> |<>|No| -| <> |<>|Yes| +| <> |<>|No|`nil` | <> |<>|No| +| <> |<>|No|`nil` | <> |<>|No|`"localhost:2181"` |======================================================================= @@ -75,6 +79,24 @@ Available configuration options: Add a field to an event +[[plugins-inputs-kafka-auto_offset_reset]] +===== `auto_offset_reset` + + * Value can be any of: `largest`, `smallest` + * Default value is `"largest"` + +`smallest` or `largest` - (optional, default `largest`) If the consumer does not already +have an established offset or offset is invalid, start with the earliest message present in the +log (`smallest`) or after the last message in the log (`largest`). + +[[plugins-inputs-kafka-black_list]] +===== `black_list` + + * Value type is <> + * Default value is `nil` + +Blacklist of topics to exclude from consumption. + [[plugins-inputs-kafka-charset]] ===== `charset` (DEPRECATED) @@ -151,6 +173,14 @@ the specified interval +[[plugins-inputs-kafka-decoder_class]] +===== `decoder_class` + + * Value type is <> + * Default value is `"kafka.serializer.DefaultDecoder"` + +The serializer class for messages. The default decoder takes a byte[] and returns the same byte[] + [[plugins-inputs-kafka-decorate_events]] ===== `decorate_events` @@ -190,6 +220,14 @@ A string that uniquely identifies the group of consumer processes to which this belongs. By setting the same group id multiple processes indicate that they are all part of the same consumer group. +[[plugins-inputs-kafka-key_decoder_class]] +===== `key_decoder_class` + + * Value type is <> + * Default value is `"kafka.serializer.DefaultDecoder"` + +The serializer class for keys (defaults to the same default as for messages) + [[plugins-inputs-kafka-message_format]] ===== `message_format` (DEPRECATED) @@ -238,14 +276,9 @@ maximum number of attempts before giving up. * Value type is <> * Default value is `false` -Specify whether to jump to beginning of the queue when there is no initial offset in -ZooKeeper, or if an offset is out of range. If this is `false`, messages are consumed -from the latest offset - -If `reset_beginning` is true, the consumer will check ZooKeeper to see if any other group members -are present and active. If not, the consumer deletes any offset information in the ZooKeeper -and starts at the smallest offset. If other group members are present `reset_beginning` will not -work and the consumer threads will rejoin the consumer group. +Reset the consumer group to start at the earliest message present in the log by clearing any +offsets for the group stored in Zookeeper. This is destructive! Must be used in conjunction +with auto_offset_reset => 'smallest' [[plugins-inputs-kafka-tags]] ===== `tags` @@ -260,9 +293,8 @@ This can help with processing later. [[plugins-inputs-kafka-topic_id]] ===== `topic_id` - * This is a required setting. * Value type is <> - * There is no default value for this setting. 
+ * Default value is `nil`

The topic to consume messages from

@@ -277,14 +309,22 @@ Add a `type` field to all events handled by this input.

Types are used mainly for filter activation.

The type is stored as part of the event itself, so you can
-also use the type to search for it in the web interface.
+also use the type to search for it in Kibana.

If you try to set a type on an event that already has one (for example when you
send an event from a shipper to an indexer) then
-a new input will not override the existing type. A type set at
+a new input will not override the existing type. A type set at
the shipper stays with that event for its life even
when sent to another Logstash server.

+[[plugins-inputs-kafka-white_list]]
+===== `white_list`
+
+ * Value type is <>
+ * Default value is `nil`
+
+Whitelist of topics to include for consumption.
+
[[plugins-inputs-kafka-zk_connect]]
===== `zk_connect`
diff --git a/docs/plugins/inputs/log4j.asciidoc b/docs/plugins/inputs/log4j.asciidoc
index 5820e301d..3b443c81a 100644
--- a/docs/plugins/inputs/log4j.asciidoc
+++ b/docs/plugins/inputs/log4j.asciidoc
@@ -171,11 +171,11 @@ Add a `type` field to all events handled by this input.

Types are used mainly for filter activation.

The type is stored as part of the event itself, so you can
-also use the type to search for it in the web interface.
+also use the type to search for it in Kibana.

If you try to set a type on an event that already has one (for example when you
send an event from a shipper to an indexer) then
-a new input will not override the existing type. A type set at
+a new input will not override the existing type. A type set at
the shipper stays with that event for its life even
when sent to another Logstash server.
diff --git a/docs/plugins/inputs/lumberjack.asciidoc b/docs/plugins/inputs/lumberjack.asciidoc
index 029794ad8..a870c4cfa 100644
--- a/docs/plugins/inputs/lumberjack.asciidoc
+++ b/docs/plugins/inputs/lumberjack.asciidoc
@@ -5,7 +5,7 @@
Receive events using the lumberjack protocol.

-This is mainly to receive events shipped with lumberjack[http://github.com/jordansissel/lumberjack],
+This is mainly to receive events shipped with lumberjack[http://github.com/jordansissel/lumberjack],
now represented primarily via the
https://github.com/elasticsearch/logstash-forwarder[Logstash-forwarder].
@@ -109,6 +109,17 @@ The format of input data (plain, json, json_event)

The IP address to listen on.

+[[plugins-inputs-lumberjack-max_clients]]
+===== `max_clients` (DEPRECATED)
+
+ * DEPRECATED WARNING: This configuration item is deprecated and may not be available in future versions.
+ * Value type is <>
+ * Default value is `1000`
+
+The maximum number of clients that the lumberjack input will accept. This allows you
+to push back pressure onto the clients and keep Logstash from running out of memory
+under too many connections. This setting is a temporary solution and will be deprecated soon.
+
[[plugins-inputs-lumberjack-message_format]]
===== `message_format` (DEPRECATED)
@@ -180,11 +191,11 @@ Add a `type` field to all events handled by this input.

Types are used mainly for filter activation.

The type is stored as part of the event itself, so you can
-also use the type to search for it in the web interface.
+also use the type to search for it in Kibana.

If you try to set a type on an event that already has one (for example when you
send an event from a shipper to an indexer) then
-a new input will not override the existing type.
A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/meetup.asciidoc b/docs/plugins/inputs/meetup.asciidoc index ee5e6d065..2a7c00f36 100644 --- a/docs/plugins/inputs/meetup.asciidoc +++ b/docs/plugins/inputs/meetup.asciidoc @@ -176,11 +176,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/pipe.asciidoc b/docs/plugins/inputs/pipe.asciidoc index ebbb6f342..5b799269f 100644 --- a/docs/plugins/inputs/pipe.asciidoc +++ b/docs/plugins/inputs/pipe.asciidoc @@ -145,11 +145,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/puppet_facter.asciidoc b/docs/plugins/inputs/puppet_facter.asciidoc index f5667e697..764efc842 100644 --- a/docs/plugins/inputs/puppet_facter.asciidoc +++ b/docs/plugins/inputs/puppet_facter.asciidoc @@ -189,11 +189,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/rabbitmq.asciidoc b/docs/plugins/inputs/rabbitmq.asciidoc index c38daff9f..050bb6d24 100644 --- a/docs/plugins/inputs/rabbitmq.asciidoc +++ b/docs/plugins/inputs/rabbitmq.asciidoc @@ -284,8 +284,7 @@ This can help with processing later. * Value type is <> * Default value is `1` -Set this to the number of threads you want this input to spawn. -This is the same as declaring the input multiple times + [[plugins-inputs-rabbitmq-type]] ===== `type` @@ -298,11 +297,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. 
If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/rackspace.asciidoc b/docs/plugins/inputs/rackspace.asciidoc index a3c281e6f..59be3b901 100644 --- a/docs/plugins/inputs/rackspace.asciidoc +++ b/docs/plugins/inputs/rackspace.asciidoc @@ -178,11 +178,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/redis.asciidoc b/docs/plugins/inputs/redis.asciidoc index c3a6b18ce..553403d02 100644 --- a/docs/plugins/inputs/redis.asciidoc +++ b/docs/plugins/inputs/redis.asciidoc @@ -219,8 +219,7 @@ This can help with processing later. * Value type is <> * Default value is `1` -Set this to the number of threads you want this input to spawn. -This is the same as declaring the input multiple times + [[plugins-inputs-redis-timeout]] ===== `timeout` @@ -241,11 +240,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/relp.asciidoc b/docs/plugins/inputs/relp.asciidoc index a879e79bf..fb0493f9e 100644 --- a/docs/plugins/inputs/relp.asciidoc +++ b/docs/plugins/inputs/relp.asciidoc @@ -7,7 +7,7 @@ NOTE: This is a community-maintained plugin! It does not ship with Logstash by d Read RELP events over a TCP socket. -For more information about RELP, see +For more information about RELP, see This protocol implements application-level acknowledgements to help protect @@ -155,11 +155,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. 
diff --git a/docs/plugins/inputs/rss.asciidoc b/docs/plugins/inputs/rss.asciidoc index 11544838a..47436d26c 100644 --- a/docs/plugins/inputs/rss.asciidoc +++ b/docs/plugins/inputs/rss.asciidoc @@ -146,11 +146,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/s3.asciidoc b/docs/plugins/inputs/s3.asciidoc index acf60e5d7..d54067837 100644 --- a/docs/plugins/inputs/s3.asciidoc +++ b/docs/plugins/inputs/s3.asciidoc @@ -38,17 +38,18 @@ Available configuration options: | <> |<>|No|`nil` | <> |<>|No|`nil` | <> |<>|Yes| -| <> |<>|No|`"line"` +| <> |<>|No|`"plain"` | <> |<>|No|`false` | <> |<>|No|`nil` | <> |<>|No|`60` | <> |<>|No|`nil` | <> |<>|No| -| <> |<>, one of `["us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"]`|No|`"us-east-1"` +| <> |<>, one of `["us-east-1", "us-west-1", "us-west-2", "eu-central-1", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1", "cn-north-1"]`|No|`"us-east-1"` | <> |<>|No| | <> |<>|No| | <> |<>|No|`nil` | <> |<>|No| +| <> |<>|No|`"/var/folders/_9/x4bq65rs6vd0rrjthct3zxjw0000gn/T/logstash"` | <> |<>|No| | <> |<>|No|`true` |======================================================================= @@ -65,12 +66,7 @@ Available configuration options: * Value type is <> * There is no default value for this setting. -This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order... -1. Static configuration, using `access_key_id` and `secret_access_key` params in logstash plugin config -2. External credentials file specified by `aws_credentials_file` -3. Environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` -4. Environment variables `AMAZON_ACCESS_KEY_ID` and `AMAZON_SECRET_ACCESS_KEY` -5. IAM Instance Profile (available when running inside EC2) + [[plugins-inputs-s3-add_field]] ===== `add_field` @@ -86,13 +82,6 @@ Add a field to an event * Value type is <> * There is no default value for this setting. -Path to YAML file containing a hash of AWS credentials. -This file will only be loaded if `access_key_id` and -`secret_access_key` aren't set. The contents of the -file should look like this: - - :access_key_id: "12345" - :secret_access_key: "54321" [[plugins-inputs-s3-backup_add_prefix]] @@ -149,7 +138,7 @@ This only affects `plain` format logs since json is `UTF-8` already. ===== `codec` * Value type is <> - * Default value is `"line"` + * Default value is `"plain"` The codec used for input data. Input codecs are a convenient method for decoding your data before it enters the input, without needing a separate filter in your Logstash pipeline. @@ -238,15 +227,15 @@ If specified, the prefix of filenames in the bucket must match (not a regexp) * Value type is <> * There is no default value for this setting. 
-URI to proxy server if required
+

[[plugins-inputs-s3-region]]
===== `region`

- * Value can be any of: `us-east-1`, `us-west-1`, `us-west-2`, `eu-west-1`, `ap-southeast-1`, `ap-southeast-2`, `ap-northeast-1`, `sa-east-1`, `us-gov-west-1`
+ * Value can be any of: `us-east-1`, `us-west-1`, `us-west-2`, `eu-central-1`, `eu-west-1`, `ap-southeast-1`, `ap-southeast-2`, `ap-northeast-1`, `sa-east-1`, `us-gov-west-1`, `cn-north-1`
 * Default value is `"us-east-1"`

-The AWS Region
+

[[plugins-inputs-s3-region_endpoint]]
===== `region_endpoint` (DEPRECATED)
@@ -263,7 +252,7 @@ The AWS region for your bucket.
 * Value type is <>
 * There is no default value for this setting.

-The AWS Secret Access Key
+

[[plugins-inputs-s3-session_token]]
===== `session_token`

 * Value type is <>
 * There is no default value for this setting.

-The AWS Session token for temprory credential
+

[[plugins-inputs-s3-sincedb_path]]
===== `sincedb_path`
@@ -294,6 +283,15 @@ Add any number of arbitrary tags to your event.

This can help with processing later.

+[[plugins-inputs-s3-temporary_directory]]
+===== `temporary_directory`
+
+ * Value type is <>
+ * Default value is `"/var/folders/_9/x4bq65rs6vd0rrjthct3zxjw0000gn/T/logstash"`
+
+Set the directory where Logstash will store the tmp files before processing them.
+Defaults to the current OS temporary directory, for example `/tmp/logstash` on Linux.
+
[[plugins-inputs-s3-type]]
===== `type`
@@ -305,11 +303,11 @@ Add a `type` field to all events handled by this input.

Types are used mainly for filter activation.

The type is stored as part of the event itself, so you can
-also use the type to search for it in the web interface.
+also use the type to search for it in Kibana.

If you try to set a type on an event that already has one (for example when you
send an event from a shipper to an indexer) then
-a new input will not override the existing type. A type set at
+a new input will not override the existing type. A type set at
the shipper stays with that event for its life even
when sent to another Logstash server.
@@ -319,7 +317,6 @@ when sent to another Logstash server.
 * Value type is <>
 * Default value is `true`

-Should we require (true) or disable (false) using SSL for communicating with the AWS API
-The AWS SDK for Ruby defaults to SSL so we preserve that
+

diff --git a/docs/plugins/inputs/snmptrap.asciidoc b/docs/plugins/inputs/snmptrap.asciidoc
index 55d95d433..f518be661 100644
--- a/docs/plugins/inputs/snmptrap.asciidoc
+++ b/docs/plugins/inputs/snmptrap.asciidoc
@@ -37,7 +37,7 @@ Available configuration options:
|Setting |Input type|Required|Default value
| <> |<>|No|`{}`
| <> |<>|No|`"plain"`
-| <> |<>|No|`"public"`
+| <> |<>|No|`"public"`
| <> |<>|No|`"0.0.0.0"`
| <> |<>|No|`1062`
| <> |<>|No|
@@ -85,7 +85,7 @@ The codec used for input data. Input codecs are a convenient method for decoding

[[plugins-inputs-snmptrap-community]]
===== `community`

- * Value type is <>
+ * Value type is <>
 * Default value is `"public"`

SNMP Community String to listen for.
@@ -161,11 +161,11 @@ Add a `type` field to all events handled by this input.

Types are used mainly for filter activation.

The type is stored as part of the event itself, so you can
-also use the type to search for it in the web interface.
+also use the type to search for it in Kibana.

If you try to set a type on an event that already has one (for example when you
send an event from a shipper to an indexer) then
-a new input will not override the existing type.
A type set at
+a new input will not override the existing type. A type set at
the shipper stays with that event for its life even
when sent to another Logstash server.
diff --git a/docs/plugins/inputs/sqlite.asciidoc b/docs/plugins/inputs/sqlite.asciidoc
index 87f54573a..0eb6899a3 100644
--- a/docs/plugins/inputs/sqlite.asciidoc
+++ b/docs/plugins/inputs/sqlite.asciidoc
@@ -208,11 +208,11 @@ Add a `type` field to all events handled by this input.

Types are used mainly for filter activation.

The type is stored as part of the event itself, so you can
-also use the type to search for it in the web interface.
+also use the type to search for it in Kibana.

If you try to set a type on an event that already has one (for example when you
send an event from a shipper to an indexer) then
-a new input will not override the existing type. A type set at
+a new input will not override the existing type. A type set at
the shipper stays with that event for its life even
when sent to another Logstash server.
diff --git a/docs/plugins/inputs/sqs.asciidoc b/docs/plugins/inputs/sqs.asciidoc
index 3b51479d0..0bbf0325a 100644
--- a/docs/plugins/inputs/sqs.asciidoc
+++ b/docs/plugins/inputs/sqs.asciidoc
@@ -87,7 +87,7 @@ Available configuration options:
| <> |<>|No|
| <> |<>|No|
| <> |<>|Yes|
-| <> |<>, one of `["us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"]`|No|`"us-east-1"`
+| <> |<>, one of `["us-east-1", "us-west-1", "us-west-2", "eu-central-1", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1", "cn-north-1"]`|No|`"us-east-1"`
| <> |<>|No|
| <> |<>|No|
| <> |<>|No|
@@ -109,12 +109,7 @@ Available configuration options:
 * Value type is <>
 * There is no default value for this setting.

-This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order...
-1. Static configuration, using `access_key_id` and `secret_access_key` params in logstash plugin config
-2. External credentials file specified by `aws_credentials_file`
-3. Environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
-4. Environment variables `AMAZON_ACCESS_KEY_ID` and `AMAZON_SECRET_ACCESS_KEY`
-5. IAM Instance Profile (available when running inside EC2)
+

[[plugins-inputs-sqs-add_field]]
===== `add_field`
@@ -130,13 +125,6 @@ Add a field to an event
 * Value type is <>
 * There is no default value for this setting.

-Path to YAML file containing a hash of AWS credentials.
-This file will only be loaded if `access_key_id` and
-`secret_access_key` aren't set. The contents of the
-file should look like this:
-
-    :access_key_id: "12345"
-    :secret_access_key: "54321"

[[plugins-inputs-sqs-charset]]
@@ -217,7 +205,7 @@ will cause unexpected results.
 * Value type is <>
 * There is no default value for this setting.

-URI to proxy server if required
+

[[plugins-inputs-sqs-queue]]
===== `queue`
@@ -231,10 +219,10 @@ Name of the SQS Queue name to pull messages from. Note that this is just the name of the queue.
Note that this is just the nam [[plugins-inputs-sqs-region]] ===== `region` - * Value can be any of: `us-east-1`, `us-west-1`, `us-west-2`, `eu-west-1`, `ap-southeast-1`, `ap-southeast-2`, `ap-northeast-1`, `sa-east-1`, `us-gov-west-1` + * Value can be any of: `us-east-1`, `us-west-1`, `us-west-2`, `eu-central-1`, `eu-west-1`, `ap-southeast-1`, `ap-southeast-2`, `ap-northeast-1`, `sa-east-1`, `us-gov-west-1`, `cn-north-1` * Default value is `"us-east-1"` -The AWS Region + [[plugins-inputs-sqs-secret_access_key]] ===== `secret_access_key` @@ -242,7 +230,7 @@ The AWS Region * Value type is <> * There is no default value for this setting. -The AWS Secret Access Key + [[plugins-inputs-sqs-sent_timestamp_field]] ===== `sent_timestamp_field` @@ -258,7 +246,7 @@ Name of the event field in which to store the SQS message Sent Timestamp * Value type is <> * There is no default value for this setting. -The AWS Session token for temprory credential + [[plugins-inputs-sqs-tags]] ===== `tags` @@ -276,8 +264,7 @@ This can help with processing later. * Value type is <> * Default value is `1` -Set this to the number of threads you want this input to spawn. -This is the same as declaring the input multiple times + [[plugins-inputs-sqs-type]] ===== `type` @@ -290,11 +277,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. @@ -304,7 +291,6 @@ when sent to another Logstash server. * Value type is <> * Default value is `true` -Should we require (true) or disable (false) using SSL for communicating with the AWS API -The AWS SDK for Ruby defaults to SSL so we preserve that + diff --git a/docs/plugins/inputs/stdin.asciidoc b/docs/plugins/inputs/stdin.asciidoc index e5159964f..93e0aef82 100644 --- a/docs/plugins/inputs/stdin.asciidoc +++ b/docs/plugins/inputs/stdin.asciidoc @@ -127,11 +127,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/stomp.asciidoc b/docs/plugins/inputs/stomp.asciidoc index 8a9d96a8d..63fed8041 100644 --- a/docs/plugins/inputs/stomp.asciidoc +++ b/docs/plugins/inputs/stomp.asciidoc @@ -170,11 +170,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. 
If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/syslog.asciidoc b/docs/plugins/inputs/syslog.asciidoc index 1c9633121..79d5dbfe5 100644 --- a/docs/plugins/inputs/syslog.asciidoc +++ b/docs/plugins/inputs/syslog.asciidoc @@ -207,11 +207,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/tcp.asciidoc b/docs/plugins/inputs/tcp.asciidoc index 986e83045..9741f2f96 100644 --- a/docs/plugins/inputs/tcp.asciidoc +++ b/docs/plugins/inputs/tcp.asciidoc @@ -228,11 +228,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/twitter.asciidoc b/docs/plugins/inputs/twitter.asciidoc index 26d2c5a7d..1ae20605c 100644 --- a/docs/plugins/inputs/twitter.asciidoc +++ b/docs/plugins/inputs/twitter.asciidoc @@ -211,11 +211,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/udp.asciidoc b/docs/plugins/inputs/udp.asciidoc index b9b9850c6..e251d21c7 100644 --- a/docs/plugins/inputs/udp.asciidoc +++ b/docs/plugins/inputs/udp.asciidoc @@ -168,11 +168,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. 
A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/unix.asciidoc b/docs/plugins/inputs/unix.asciidoc index 772428d4f..4a9fee183 100644 --- a/docs/plugins/inputs/unix.asciidoc +++ b/docs/plugins/inputs/unix.asciidoc @@ -172,11 +172,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/varnishlog.asciidoc b/docs/plugins/inputs/varnishlog.asciidoc index 7a937acb8..343650669 100644 --- a/docs/plugins/inputs/varnishlog.asciidoc +++ b/docs/plugins/inputs/varnishlog.asciidoc @@ -122,8 +122,7 @@ This can help with processing later. * Value type is <> * Default value is `1` -Set this to the number of threads you want this input to spawn. -This is the same as declaring the input multiple times + [[plugins-inputs-varnishlog-type]] ===== `type` @@ -136,11 +135,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/websocket.asciidoc b/docs/plugins/inputs/websocket.asciidoc index 8d6a32ad1..3ab145f51 100644 --- a/docs/plugins/inputs/websocket.asciidoc +++ b/docs/plugins/inputs/websocket.asciidoc @@ -143,11 +143,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/wmi.asciidoc b/docs/plugins/inputs/wmi.asciidoc index 6682e0ee4..15553bdd9 100644 --- a/docs/plugins/inputs/wmi.asciidoc +++ b/docs/plugins/inputs/wmi.asciidoc @@ -161,11 +161,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. 
If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/xmpp.asciidoc b/docs/plugins/inputs/xmpp.asciidoc index 3d04596f6..b77b3ee18 100644 --- a/docs/plugins/inputs/xmpp.asciidoc +++ b/docs/plugins/inputs/xmpp.asciidoc @@ -162,11 +162,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/zenoss.asciidoc b/docs/plugins/inputs/zenoss.asciidoc index eadf54b90..c7971360d 100644 --- a/docs/plugins/inputs/zenoss.asciidoc +++ b/docs/plugins/inputs/zenoss.asciidoc @@ -268,11 +268,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. diff --git a/docs/plugins/inputs/zeromq.asciidoc b/docs/plugins/inputs/zeromq.asciidoc index 6317e4ef4..b3d5b99b5 100644 --- a/docs/plugins/inputs/zeromq.asciidoc +++ b/docs/plugins/inputs/zeromq.asciidoc @@ -223,11 +223,11 @@ Add a `type` field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can -also use the type to search for it in the web interface. +also use the type to search for it in Kibana. If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then -a new input will not override the existing type. A type set at +a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server. 
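The `type` semantics repeated across the inputs above are easiest to see in a pair of minimal configurations. The following is a sketch, not taken from the plugin docs themselves; the path, hostname, port, and type names are all illustrative:

[source,ruby]
    # shipper.conf -- the type is set once, at the edge
    input {
      file {
        path => "/var/log/apache2/access.log"
        type => "apache"
      }
    }
    output {
      tcp {
        host => "indexer.example.com"   # illustrative hostname
        port => 5000
        codec => json_lines             # preserves fields, including type
      }
    }

    # indexer.conf -- type => "fallback" is ignored for events that
    # already carry type => "apache" from the shipper
    input {
      tcp {
        port => 5000
        codec => json_lines
        type => "fallback"
      }
    }
    filter {
      if [type] == "apache" {
        grok { match => [ "message", "%{COMBINEDAPACHELOG}" ] }
      }
    }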
diff --git a/docs/plugins/outputs/cloudwatch.asciidoc b/docs/plugins/outputs/cloudwatch.asciidoc index 205a72198..2d25b97da 100644 --- a/docs/plugins/outputs/cloudwatch.asciidoc +++ b/docs/plugins/outputs/cloudwatch.asciidoc @@ -96,7 +96,7 @@ Available configuration options: | <> |<>|No|`"Logstash"` | <> |<>|No| | <> |<>|No|`10000` -| <> |<>, one of `["us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"]`|No|`"us-east-1"` +| <> |<>, one of `["us-east-1", "us-west-1", "us-west-2", "eu-central-1", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1", "cn-north-1"]`|No|`"us-east-1"` | <> |<>|No| | <> |<>|No| | <> |<>|No|`"1m"` @@ -118,12 +118,7 @@ Available configuration options: * Value type is <> * There is no default value for this setting. -This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order... -1. Static configuration, using `access_key_id` and `secret_access_key` params in logstash plugin config -2. External credentials file specified by `aws_credentials_file` -3. Environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` -4. Environment variables `AMAZON_ACCESS_KEY_ID` and `AMAZON_SECRET_ACCESS_KEY` -5. IAM Instance Profile (available when running inside EC2) + [[plugins-outputs-cloudwatch-aws_credentials_file]] ===== `aws_credentials_file` @@ -131,13 +126,6 @@ This plugin uses the AWS SDK and supports several ways to get credentials, which * Value type is <> * There is no default value for this setting. -Path to YAML file containing a hash of AWS credentials. -This file will only be loaded if `access_key_id` and -`secret_access_key` aren't set. The contents of the -file should look like this: - - :access_key_id: "12345" - :secret_access_key: "54321" [[plugins-outputs-cloudwatch-codec]] @@ -245,7 +233,7 @@ The default namespace to use for events which do not have a `CW_namespace` field * Value type is <> * There is no default value for this setting. -URI to proxy server if required + [[plugins-outputs-cloudwatch-queue_size]] ===== `queue_size` @@ -259,10 +247,10 @@ Set this to the number of events-per-timeframe you will be sending to CloudWatch [[plugins-outputs-cloudwatch-region]] ===== `region` - * Value can be any of: `us-east-1`, `us-west-1`, `us-west-2`, `eu-west-1`, `ap-southeast-1`, `ap-southeast-2`, `ap-northeast-1`, `sa-east-1`, `us-gov-west-1` + * Value can be any of: `us-east-1`, `us-west-1`, `us-west-2`, `eu-central-1`, `eu-west-1`, `ap-southeast-1`, `ap-southeast-2`, `ap-northeast-1`, `sa-east-1`, `us-gov-west-1`, `cn-north-1` * Default value is `"us-east-1"` -The AWS Region + [[plugins-outputs-cloudwatch-secret_access_key]] ===== `secret_access_key` @@ -270,7 +258,7 @@ The AWS Region * Value type is <> * There is no default value for this setting. -The AWS Secret Access Key + [[plugins-outputs-cloudwatch-session_token]] ===== `session_token` @@ -278,7 +266,7 @@ The AWS Secret Access Key * Value type is <> * There is no default value for this setting. 
-The AWS Session token for temprory credential + [[plugins-outputs-cloudwatch-tags]] ===== `tags` (DEPRECATED) @@ -334,8 +322,7 @@ If you set this option you should probably set the "value" option along with it * Value type is <> * Default value is `true` -Should we require (true) or disable (false) using SSL for communicating with the AWS API -The AWS SDK for Ruby defaults to SSL so we preserve that + [[plugins-outputs-cloudwatch-value]] ===== `value` diff --git a/docs/plugins/outputs/graphite.asciidoc b/docs/plugins/outputs/graphite.asciidoc index 41a12dc89..240acb599 100644 --- a/docs/plugins/outputs/graphite.asciidoc +++ b/docs/plugins/outputs/graphite.asciidoc @@ -42,6 +42,7 @@ Available configuration options: | <> |<>|No|`2003` | <> |<>|No|`2` | <> |<>|No|`false` +| <> |<>|No|`"@timestamp"` | <> |<>|No|`1` |======================================================================= @@ -122,7 +123,7 @@ The metric(s) to use. This supports dynamic strings like %{host} for metric names and also for values. This is a hash field with key being the metric name, value being the metric value. Example: [source,ruby] - [ "%{host}/uptime", "%{uptime_1m}" ] + metrics => { "%{host}/uptime" => "%{uptime_1m}" } The value will be coerced to a floating point value. Values which cannot be coerced will be set to zero (0). You may use either `metrics` or `fields_are_metrics`, @@ -175,6 +176,16 @@ Should metrics be resent on failure? Only handle events with all of these tags. Optional. +[[plugins-outputs-graphite-timestamp_field]] +===== `timestamp_field` + + * Value type is <> + * Default value is `"@timestamp"` + +Use this field for the timestamp instead of `@timestamp`, which is the +default. This is useful when backfilling, or for getting more accurate data into +Graphite, since you probably have a cache layer in front of Logstash. + [[plugins-outputs-graphite-type]] ===== `type` (DEPRECATED) diff --git a/docs/plugins/outputs/kafka.asciidoc b/docs/plugins/outputs/kafka.asciidoc index 7eace6933..2bf2ce81c 100644 --- a/docs/plugins/outputs/kafka.asciidoc +++ b/docs/plugins/outputs/kafka.asciidoc @@ -52,8 +52,9 @@ Available configuration options: | <> |<>|No|`"json"` | <> |<>|No|`""` | <> |<>, one of `["none", "gzip", "snappy"]`|No|`"none"` -| <> |<>|No|`nil` +| <> |<>|No|`"kafka.serializer.StringEncoder"` | <> |<>|No|`3` +| <> |<>|No|`nil` | <> |<>|No|`"kafka.producer.DefaultPartitioner"` | <> |<>, one of `["sync", "async"]`|No|`"sync"` | <> |<>|No|`10000` @@ -148,7 +149,7 @@ Optional. ===== `key_serializer_class` * Value type is <> - * Default value is `nil` + * Default value is `"kafka.serializer.StringEncoder"` The serializer class for keys (defaults to the same as for messages if nothing is given) @@ -163,6 +164,23 @@ property specifies the number of retries when such failures occur. Note that set non-zero value here can lead to duplicates in the case of network errors that cause a message to be sent but the acknowledgement to be lost. +[[plugins-outputs-kafka-partition_key_format]] +===== `partition_key_format` + + * Value type is <> + * Default value is `nil` + +Provides a way to specify a partition key as a string. To specify a partition key for +Kafka, configure a format that will produce the key as a string. Setting this option also defaults +`key_serializer_class` to `kafka.serializer.StringEncoder` to match.
For example, to partition +by host: +[source,ruby] + output { + kafka { + partition_key_format => "%{host}" + } + } + [[plugins-outputs-kafka-partitioner_class]] ===== `partitioner_class` diff --git a/docs/plugins/outputs/s3.asciidoc b/docs/plugins/outputs/s3.asciidoc index 519da955b..7d6e8702d 100644 --- a/docs/plugins/outputs/s3.asciidoc +++ b/docs/plugins/outputs/s3.asciidoc @@ -77,12 +77,12 @@ Available configuration options: | <> |<>|No|`"line"` | <> |<>|No|`""` | <> |<>|No| -| <> |<>, one of `["us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"]`|No|`"us-east-1"` +| <> |<>, one of `["us-east-1", "us-west-1", "us-west-2", "eu-central-1", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1", "cn-north-1"]`|No|`"us-east-1"` | <> |<>|No|`false` | <> |<>|No| | <> |<>|No| | <> |<>|No|`0` -| <> |<>|No|`"/var/folders/_n/n4w7mmts5f58nflsvnnk_l2w0000gn/T/logstash"` +| <> |<>|No|`"/var/folders/_9/x4bq65rs6vd0rrjthct3zxjw0000gn/T/logstash"` | <> |<>|No|`0` | <> |<>|No|`1` | <> |<>|No|`true` @@ -101,12 +101,7 @@ Available configuration options: * Value type is <> * There is no default value for this setting. -This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order... -1. Static configuration, using `access_key_id` and `secret_access_key` params in logstash plugin config -2. External credentials file specified by `aws_credentials_file` -3. Environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` -4. Environment variables `AMAZON_ACCESS_KEY_ID` and `AMAZON_SECRET_ACCESS_KEY` -5. IAM Instance Profile (available when running inside EC2) + [[plugins-outputs-s3-aws_credentials_file]] ===== `aws_credentials_file` @@ -114,13 +109,6 @@ This plugin uses the AWS SDK and supports several ways to get credentials, which * Value type is <> * There is no default value for this setting. -Path to YAML file containing a hash of AWS credentials. -This file will only be loaded if `access_key_id` and -`secret_access_key` aren't set. The contents of the -file should look like this: - - :access_key_id: "12345" - :secret_access_key: "54321" [[plugins-outputs-s3-bucket]] @@ -180,15 +168,15 @@ Specify a prefix to the uploaded filename, this can simulate directories on S3 * Value type is <> * There is no default value for this setting. -URI to proxy server if required + [[plugins-outputs-s3-region]] ===== `region` - * Value can be any of: `us-east-1`, `us-west-1`, `us-west-2`, `eu-west-1`, `ap-southeast-1`, `ap-southeast-2`, `ap-northeast-1`, `sa-east-1`, `us-gov-west-1` + * Value can be any of: `us-east-1`, `us-west-1`, `us-west-2`, `eu-central-1`, `eu-west-1`, `ap-southeast-1`, `ap-southeast-2`, `ap-northeast-1`, `sa-east-1`, `us-gov-west-1`, `cn-north-1` * Default value is `"us-east-1"` -The AWS Region + [[plugins-outputs-s3-restore]] ===== `restore` @@ -204,7 +192,7 @@ The AWS Region * Value type is <> * There is no default value for this setting. -The AWS Secret Access Key + [[plugins-outputs-s3-session_token]] ===== `session_token` @@ -212,7 +200,7 @@ The AWS Secret Access Key * Value type is <> * There is no default value for this setting. -The AWS Session token for temprory credential + [[plugins-outputs-s3-size_file]] ===== `size_file` @@ -237,7 +225,7 @@ Optional. 
===== `temporary_directory` * Value type is <> - * Default value is `"/var/folders/_n/n4w7mmts5f58nflsvnnk_l2w0000gn/T/logstash"` + * Default value is `"/var/folders/_9/x4bq65rs6vd0rrjthct3zxjw0000gn/T/logstash"` Set the directory where logstash will store the tmp files before sending it to S3 default to the current OS temporary directory in linux /tmp/logstash @@ -279,8 +267,7 @@ Specify how many workers to use to upload the files to S3 * Value type is <> * Default value is `true` -Should we require (true) or disable (false) using SSL for communicating with the AWS API -The AWS SDK for Ruby defaults to SSL so we preserve that + [[plugins-outputs-s3-workers]] ===== `workers` diff --git a/docs/plugins/outputs/sns.asciidoc b/docs/plugins/outputs/sns.asciidoc index 81ddba13b..a4495ded3 100644 --- a/docs/plugins/outputs/sns.asciidoc +++ b/docs/plugins/outputs/sns.asciidoc @@ -54,7 +54,7 @@ Available configuration options: | <> |<>, one of `["json", "plain"]`|No|`"plain"` | <> |<>|No| | <> |<>|No| -| <> |<>, one of `["us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"]`|No|`"us-east-1"` +| <> |<>, one of `["us-east-1", "us-west-1", "us-west-2", "eu-central-1", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1", "cn-north-1"]`|No|`"us-east-1"` | <> |<>|No| | <> |<>|No| | <> |<>|No|`true` @@ -73,12 +73,7 @@ Available configuration options: * Value type is <> * There is no default value for this setting. -This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order... -1. Static configuration, using `access_key_id` and `secret_access_key` params in logstash plugin config -2. External credentials file specified by `aws_credentials_file` -3. Environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` -4. Environment variables `AMAZON_ACCESS_KEY_ID` and `AMAZON_SECRET_ACCESS_KEY` -5. IAM Instance Profile (available when running inside EC2) + [[plugins-outputs-sns-arn]] ===== `arn` @@ -94,13 +89,6 @@ SNS topic ARN. * Value type is <> * There is no default value for this setting. -Path to YAML file containing a hash of AWS credentials. -This file will only be loaded if `access_key_id` and -`secret_access_key` aren't set. The contents of the -file should look like this: - - :access_key_id: "12345" - :secret_access_key: "54321" [[plugins-outputs-sns-codec]] @@ -135,7 +123,7 @@ Message format. Defaults to plain text. * Value type is <> * There is no default value for this setting. -URI to proxy server if required + [[plugins-outputs-sns-publish_boot_message_arn]] ===== `publish_boot_message_arn` @@ -153,10 +141,10 @@ Example: arn:aws:sns:us-east-1:770975001275:logstash-testing [[plugins-outputs-sns-region]] ===== `region` - * Value can be any of: `us-east-1`, `us-west-1`, `us-west-2`, `eu-west-1`, `ap-southeast-1`, `ap-southeast-2`, `ap-northeast-1`, `sa-east-1`, `us-gov-west-1` + * Value can be any of: `us-east-1`, `us-west-1`, `us-west-2`, `eu-central-1`, `eu-west-1`, `ap-southeast-1`, `ap-southeast-2`, `ap-northeast-1`, `sa-east-1`, `us-gov-west-1`, `cn-north-1` * Default value is `"us-east-1"` -The AWS Region + [[plugins-outputs-sns-secret_access_key]] ===== `secret_access_key` @@ -164,7 +152,7 @@ The AWS Region * Value type is <> * There is no default value for this setting. 
-The AWS Secret Access Key + [[plugins-outputs-sns-session_token]] ===== `session_token` @@ -172,7 +160,7 @@ The AWS Secret Access Key * Value type is <> * There is no default value for this setting. -The AWS Session token for temprory credential + [[plugins-outputs-sns-tags]] ===== `tags` (DEPRECATED) @@ -202,8 +190,7 @@ Optional. * Value type is <> * Default value is `true` -Should we require (true) or disable (false) using SSL for communicating with the AWS API -The AWS SDK for Ruby defaults to SSL so we preserve that + [[plugins-outputs-sns-workers]] ===== `workers` diff --git a/docs/plugins/outputs/sqs.asciidoc b/docs/plugins/outputs/sqs.asciidoc index cc05fcbd5..10c47b015 100644 --- a/docs/plugins/outputs/sqs.asciidoc +++ b/docs/plugins/outputs/sqs.asciidoc @@ -88,7 +88,7 @@ Available configuration options: | <> |<>|No|`"plain"` | <> |<>|No| | <> |<>|Yes| -| <> |<>, one of `["us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"]`|No|`"us-east-1"` +| <> |<>, one of `["us-east-1", "us-west-1", "us-west-2", "eu-central-1", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1", "cn-north-1"]`|No|`"us-east-1"` | <> |<>|No| | <> |<>|No| | <> |<>|No|`true` @@ -107,12 +107,7 @@ Available configuration options: * Value type is <> * There is no default value for this setting. -This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order... -1. Static configuration, using `access_key_id` and `secret_access_key` params in logstash plugin config -2. External credentials file specified by `aws_credentials_file` -3. Environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` -4. Environment variables `AMAZON_ACCESS_KEY_ID` and `AMAZON_SECRET_ACCESS_KEY` -5. IAM Instance Profile (available when running inside EC2) + [[plugins-outputs-sqs-aws_credentials_file]] ===== `aws_credentials_file` @@ -120,13 +115,6 @@ This plugin uses the AWS SDK and supports several ways to get credentials, which * Value type is <> * There is no default value for this setting. -Path to YAML file containing a hash of AWS credentials. -This file will only be loaded if `access_key_id` and -`secret_access_key` aren't set. The contents of the -file should look like this: - - :access_key_id: "12345" - :secret_access_key: "54321" [[plugins-outputs-sqs-batch]] @@ -178,7 +166,7 @@ Optional. * Value type is <> * There is no default value for this setting. -URI to proxy server if required + [[plugins-outputs-sqs-queue]] ===== `queue` @@ -192,10 +180,10 @@ Name of SQS queue to push messages into. Note that this is just the name of the [[plugins-outputs-sqs-region]] ===== `region` - * Value can be any of: `us-east-1`, `us-west-1`, `us-west-2`, `eu-west-1`, `ap-southeast-1`, `ap-southeast-2`, `ap-northeast-1`, `sa-east-1`, `us-gov-west-1` + * Value can be any of: `us-east-1`, `us-west-1`, `us-west-2`, `eu-central-1`, `eu-west-1`, `ap-southeast-1`, `ap-southeast-2`, `ap-northeast-1`, `sa-east-1`, `us-gov-west-1`, `cn-north-1` * Default value is `"us-east-1"` -The AWS Region + [[plugins-outputs-sqs-secret_access_key]] ===== `secret_access_key` @@ -203,7 +191,7 @@ The AWS Region * Value type is <> * There is no default value for this setting. -The AWS Secret Access Key + [[plugins-outputs-sqs-session_token]] ===== `session_token` @@ -211,7 +199,7 @@ The AWS Secret Access Key * Value type is <> * There is no default value for this setting. 
-The AWS Session token for temprory credential + [[plugins-outputs-sqs-tags]] ===== `tags` (DEPRECATED) @@ -241,8 +229,7 @@ Optional. * Value type is <> * Default value is `true` -Should we require (true) or disable (false) using SSL for communicating with the AWS API -The AWS SDK for Ruby defaults to SSL so we preserve that + [[plugins-outputs-sqs-workers]] ===== `workers` diff --git a/docs/plugins/outputs/zabbix.asciidoc b/docs/plugins/outputs/zabbix.asciidoc index 2de6a4f95..2783b669f 100644 --- a/docs/plugins/outputs/zabbix.asciidoc +++ b/docs/plugins/outputs/zabbix.asciidoc @@ -5,65 +5,41 @@ NOTE: This is a community-maintained plugin! It does not ship with Logstash by default, but it is easy to install by running `bin/plugin install logstash-output-zabbix`. -The zabbix output is used for sending item data to zabbix via the -zabbix_sender executable. - -For this output to work, your event must have the following fields: - -* "zabbix_host" (the host configured in Zabbix) -* "zabbix_item" (the item key on the host in Zabbix) -* "send_field" (the field name that is sending to Zabbix) - -In Zabbix, create your host with the same name (no spaces in the name of -the host supported) and create your item with the specified key as a -Zabbix Trapper item. Also you need to set field that will be send to zabbix -as item.value, otherwise `@message` will be sent. - -The easiest way to use this output is with the grep filter. -Presumably, you only want certain events matching a given pattern -to send events to zabbix, so use grep or grok to match and also to add the required -fields. -[source,ruby] - filter { - grep { - type => "linux-syslog" - match => [ "@message", "(error|ERROR|CRITICAL)" ] - add_tag => [ "zabbix-sender" ] - add_field => [ - "zabbix_host", "%{source_host}", - "zabbix_item", "item.key" - "send_field", "field_name" - ] - } - grok { - match => [ "message", "%{SYSLOGBASE} %{DATA:data}" ] - add_tag => [ "zabbix-sender" ] - add_field => [ - "zabbix_host", "%{source_host}", - "zabbix_item", "item.key", - "send_field", "data" - ] - } - } - -[source,ruby] - output { - zabbix { - # only process events with this tag - tags => "zabbix-sender" - - # specify the hostname or ip of your zabbix server - # (defaults to localhost) - host => "localhost" - - # specify the port to connect to (default 10051) - port => "10051" - - # specify the path to zabbix_sender - # (defaults to "/usr/local/bin/zabbix_sender") - zabbix_sender => "/usr/local/bin/zabbix_sender" - } - } +The Zabbix output is used to send item data (key/value pairs) to a Zabbix +server. The event `@timestamp` will automatically be associated with the +Zabbix item data. + +The Zabbix Sender protocol is described at +https://www.zabbix.org/wiki/Docs/protocols/zabbix_sender/2.0 +Zabbix uses a kind of nested key/value store. + +[source,txt] + host + ├── item1 + │ └── value1 + ├── item2 + │ └── value2 + ├── ... + │ └── ... + ├── item_n + │ └── value_n + +Each "host" is an identifier, and each item is associated with that host. +Items are typed on the Zabbix side. You can send numbers as strings and +Zabbix will Do The Right Thing. + +In the Zabbix UI, ensure that your hostname matches the value referenced by +`zabbix_host`. Create the item with the key as it appears in the field +referenced by `zabbix_key`. In the item configuration window, ensure that the +type dropdown is set to Zabbix Trapper. Also be sure to set the type of +information that Zabbix should expect for this item. + +This plugin does not currently send in batches. 
While it is possible to do +so, this is not supported. Be careful not to flood your Zabbix server with +too many events per second. + +NOTE: This plugin will log a warning if a necessary field is missing. It will +not attempt to resend if Zabbix is down, but will log an error message.   @@ -77,6 +53,8 @@ Required configuration options: [source,json] -------------------------- zabbix { + zabbix_host => ... + zabbix_key => ... } -------------------------- @@ -88,10 +66,13 @@ Available configuration options: |======================================================================= |Setting |Input type|Required|Default value | <> |<>|No|`"plain"` -| <> |<>|No|`"localhost"` -| <> |<>|No|`10051` +| <> |<>|No|`1` | <> |<>|No|`1` -| <> |a valid filesystem path|No|`"/usr/local/bin/zabbix_sender"` +| <> |<>|Yes| +| <> |<>|Yes| +| <> |<>|No|`"localhost"` +| <> |<>|No|`10051` +| <> |<>|No|`"message"` |======================================================================= @@ -118,22 +99,6 @@ The codec used for output data. Output codecs are a convenient method for encodi Only handle events without any of these tags. Optional. -[[plugins-outputs-zabbix-host]] -===== `host` - - * Value type is <> - * Default value is `"localhost"` - - - -[[plugins-outputs-zabbix-port]] -===== `port` - - * Value type is <> - * Default value is `10051` - - - [[plugins-outputs-zabbix-tags]] ===== `tags` (DEPRECATED) @@ -144,6 +109,16 @@ Optional. Only handle events with all of these tags. Optional. +[[plugins-outputs-zabbix-timeout]] +===== `timeout` + + * Value type is <> + * Default value is `1` + +The number of seconds to wait before giving up on a connection to the Zabbix +server. This number should be very small, otherwise delays in delivery of +other outputs could result. + [[plugins-outputs-zabbix-type]] ===== `type` (DEPRECATED) @@ -165,12 +140,48 @@ Optional. The number of workers to use for this output. Note that this setting may not be useful for all outputs. -[[plugins-outputs-zabbix-zabbix_sender]] -===== `zabbix_sender` +[[plugins-outputs-zabbix-zabbix_host]] +===== `zabbix_host` - * Value type is <> - * Default value is `"/usr/local/bin/zabbix_sender"` + * This is a required setting. + * Value type is <> + * There is no default value for this setting. +The field name which holds the Zabbix host name. This can be a sub-field of +the @metadata field. + +[[plugins-outputs-zabbix-zabbix_key]] +===== `zabbix_key` + + * This is a required setting. + * Value type is <> + * There is no default value for this setting. + +The field name which holds the Zabbix key. This can be a sub-field of +the @metadata field. + +[[plugins-outputs-zabbix-zabbix_server_host]] +===== `zabbix_server_host` + + * Value type is <> + * Default value is `"localhost"` + +The IP or resolvable hostname where the Zabbix server is running + +[[plugins-outputs-zabbix-zabbix_server_port]] +===== `zabbix_server_port` + + * Value type is <> + * Default value is `10051` + +The port on which the Zabbix server is running + +[[plugins-outputs-zabbix-zabbix_value]] +===== `zabbix_value` + + * Value type is <> + * Default value is `"message"` +The field name which holds the value you want to send.
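Putting the settings above together, a minimal end-to-end sketch: a mutate filter populates the `@metadata` sub-fields that `zabbix_host` and `zabbix_key` reference, and the output forwards matching events. The match pattern, item key, and server hostname are illustrative, not part of the plugin docs:

[source,ruby]
    filter {
      if [message] =~ /(ERROR|CRITICAL)/ {
        mutate {
          add_field => {
            "[@metadata][zabbix_host]" => "%{host}"    # host as configured in Zabbix
            "[@metadata][zabbix_key]"  => "app.errors" # illustrative trapper item key
          }
        }
      }
    }
    output {
      if [@metadata][zabbix_key] {
        zabbix {
          zabbix_server_host => "zabbix.example.com"   # illustrative
          zabbix_host        => "[@metadata][zabbix_host]"
          zabbix_key         => "[@metadata][zabbix_key]"
          zabbix_value       => "message"              # the default
        }
      }
    }

Reading the host and key from `@metadata` keeps these routing fields out of the event itself, so they are not indexed or stored by any other outputs.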