diff --git a/administration/configuring-fluent-bit/classic-mode/upstream-servers.md b/administration/configuring-fluent-bit/classic-mode/upstream-servers.md
index ae5dc49cd..1317fe2b2 100644
--- a/administration/configuring-fluent-bit/classic-mode/upstream-servers.md
+++ b/administration/configuring-fluent-bit/classic-mode/upstream-servers.md
@@ -5,6 +5,7 @@ Fluent Bit [output plugins](../../pipeline/outputs/) aim to connect to external
 
-An `Upstream` defines a set of nodes that will be targeted by an output plugin, by the nature of the implementation an output plugin must support the `Upstream` feature. The following plugin has `Upstream` support:
+An `Upstream` defines a set of nodes that will be targeted by an output plugin. By the nature of the implementation, an output plugin must support the `Upstream` feature. The following plugins have `Upstream` support:
 
 - [Forward](../../../pipeline/outputs/forward.md)
+- [Elasticsearch](../../../pipeline/outputs/elasticsearch.md)
 
 The current balancing mode implemented is `round-robin`.
 
diff --git a/pipeline/outputs/elasticsearch.md b/pipeline/outputs/elasticsearch.md
index 95a104ac4..e46ed1ed4 100644
--- a/pipeline/outputs/elasticsearch.md
+++ b/pipeline/outputs/elasticsearch.md
@@ -10,46 +10,51 @@ operational Elasticsearch service running in your environment.
 
 ## Configuration Parameters
 
-| Key | Description | Default |
-| :--- | :--- | :--- |
-| `Host` | IP address or hostname of the target Elasticsearch instance | `127.0.0.1` |
-| `Port` | TCP port of the target Elasticsearch instance | `9200` |
-| `Path` | Elasticsearch accepts new data on HTTP query path `/_bulk`. You can also serve Elasticsearch behind a reverse proxy on a sub-path. Define the path by adding a path prefix in the indexing HTTP POST URI. | Empty string |
-| `compress` | Set payload compression mechanism. Option available is `gzip`. | _none_ |
-| `Buffer_Size` | Specify the buffer size used to read the response from the Elasticsearch HTTP service. Use for debugging purposes where required to read full responses. Response size grows depending of the number of records inserted. To use an unlimited amount of memory, set this value to `False`. Otherwise set the value according to the [Unit Size](../../administration/configuring-fluent-bit/unit-sizes.md). | `512KB` |
-| `Pipeline` | Define which pipeline the database should use. For performance reasons, it's strongly suggested to do parsing and filtering on Fluent Bit side, and avoid pipelines. | _none_ |
-| `AWS_Auth` | Enable AWS Sigv4 Authentication for Amazon OpenSearch Service. | `Off` |
-| `AWS_Region` | Specify the AWS region for Amazon OpenSearch Service. | _none_ |
-| `AWS_STS_Endpoint` | Specify the custom STS endpoint to be used with STS API for Amazon OpenSearch Service | _none_ |
-| `AWS_Role_ARN` | AWS IAM Role to assume to put records to your Amazon cluster | _none_ |
-| `AWS_External_ID` | External ID for the AWS IAM Role specified with `aws_role_arn` | _none_ |
-| `AWS_Service_Name` | Service name to use in AWS Sigv4 signature. For integration with Amazon OpenSearch Serverless, set to `aoss`. See [Amazon OpenSearch Serverless](opensearch.md) for more information. | `es` |
-| `AWS_Profile` | AWS profile name | `default` |
-| `Cloud_ID` | If using Elastic's Elasticsearch Service you can specify the `cloud_id` of the cluster running. The string has the format `<deployment_name>:<base64_info>`. Once decoded, the `base64_info` string has the format `<deployment_region>$<elasticsearch_hostname>$<kibana_hostname>`. | _none_ |
-| `Cloud_Auth` | Specify the credentials to use to connect to Elastic's Elasticsearch Service running on Elastic Cloud | _none_ |
-| `HTTP_User` | Optional username credential for Elastic X-Pack access | _none_ |
-| `HTTP_Passwd` | Password for user defined in `HTTP_User` | _none_ |
-| `Index` | Index name | `fluent-bit` |
-| `Type` | Type name | `_doc` |
-| `Logstash_Format` | Enable Logstash format compatibility. This option takes a Boolean value: `True/False`, `On/Off` | `Off` |
-| `Logstash_Prefix` | When `Logstash_Format` is enabled, the Index name is composed using a prefix and the date, e.g: If `Logstash_Prefix` is equal to `mydata` your index will become `mydata-YYYY.MM.DD`. The last string appended belongs to the date when the data is being generated. | `logstash` |
-| `Logstash_Prefix_Key` | When included: the value of the key in the record will be evaluated as key reference and overrides `Logstash_Prefix` for index generation. If the key/value isn't found in the record then the `Logstash_Prefix` option will act as a fallback. The parameter is expected to be a [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md). | _none_ |
-| `Logstash_Prefix_Separator` | Set a separator between `Logstash_Prefix` and date.| `-` |
-| `Logstash_DateFormat` | Time format based on [strftime](http://man7.org/linux/man-pages/man3/strftime.3.html) to generate the second part of the Index name. | `%Y.%m.%d` |
-| `Time_Key` | When `Logstash_Format` is enabled, each record will get a new timestamp field. The `Time_Key` property defines the name of that field. | `@timestamp` |
-| `Time_Key_Format` | When `Logstash_Format` is enabled, this property defines the format of the timestamp. | `%Y-%m-%dT%H:%M:%S` |
-| `Time_Key_Nanos` | When `Logstash_Format` is enabled, enabling this property sends nanosecond precision timestamps. | `Off` |
-| `Include_Tag_Key` | When enabled, it append the Tag name to the record. | `Off` |
-| `Tag_Key` | When `Include_Tag_Key` is enabled, this property defines the key name for the tag. | `_flb-key` |
-| `Generate_ID` | When enabled, generate `_id` for outgoing records. This prevents duplicate records when retrying ES. | `Off` |
-| `Id_Key` | If set, `_id` will be the value of the key from incoming record and `Generate_ID` option is ignored. | _none_ |
-| `Write_Operation` | `Write_operation` can be any of: `create`, `index`, `update`, `upsert`. | `create` |
-| `Replace_Dots` | When enabled, replace field name dots with underscore. Required by Elasticsearch 2.0-2.3. | `Off` |
-| `Trace_Output` | Print all ElasticSearch API request payloads to `stdout` for diagnostics. | `Off` |
-| `Trace_Error` | If ElasticSearch returns an error, print the ElasticSearch API request and response for diagnostics. | `Off` |
-| `Current_Time_Index` | Use current time for index generation instead of message record. | `Off` |
-| `Suppress_Type_Name` | When enabled, mapping types is removed and `Type` option is ignored. Elasticsearch 8.0.0 or higher [no longer supports mapping types](https://www.elastic.co/guide/en/elasticsearch/reference/current/removal-of-types.html), and is set to `On`. | `Off` |
-| `Workers` | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `2` |
+The **Overridable** column indicates whether a key can be overridden in the `NODE`
+section of an
+[Upstream](../../administration/configuring-fluent-bit/classic-mode/upstream-servers.md)
+configuration.
+
+| Key | Description | Default | Overridable |
+| :--- | :--- | :--- | :--- |
+| `Host` | IP address or hostname of the target Elasticsearch instance. | `127.0.0.1` | Yes. The default value doesn't apply to the `NODE` section of an Upstream configuration, which **requires** `Host` to be specified. |
+| `Port` | TCP port of the target Elasticsearch instance. | `9200` | Yes. The default value doesn't apply to the `NODE` section of an Upstream configuration, which **requires** `Port` to be specified. |
+| `Path` | Elasticsearch accepts new data on HTTP query path `/_bulk`. You can also serve Elasticsearch behind a reverse proxy on a sub-path. Define the path by adding a path prefix in the indexing HTTP POST URI. | Empty string | Yes |
+| `compress` | Set payload compression mechanism. Option available is `gzip`. | _none_ | Yes |
+| `Buffer_Size` | Specify the buffer size used to read the response from the Elasticsearch HTTP service. Use for debugging purposes where required to read full responses. Response size grows depending on the number of records inserted. To use an unlimited amount of memory, set this value to `False`. Otherwise set the value according to the [Unit Size](../../administration/configuring-fluent-bit/unit-sizes.md). | `512KB` | Yes |
+| `Pipeline` | Define which pipeline the database should use. For performance reasons, it's strongly suggested to do parsing and filtering on the Fluent Bit side, and avoid pipelines. | _none_ | Yes |
+| `AWS_Auth` | Enable AWS Sigv4 Authentication for Amazon OpenSearch Service. | `Off` | Yes |
+| `AWS_Region` | Specify the AWS region for Amazon OpenSearch Service. | _none_ | Yes |
+| `AWS_STS_Endpoint` | Specify the custom STS endpoint to be used with STS API for Amazon OpenSearch Service. | _none_ | Yes |
+| `AWS_Role_ARN` | AWS IAM Role to assume to put records to your Amazon cluster. | _none_ | Yes |
+| `AWS_External_ID` | External ID for the AWS IAM Role specified with `aws_role_arn`. | _none_ | Yes |
+| `AWS_Service_Name` | Service name to use in AWS Sigv4 signature. For integration with Amazon OpenSearch Serverless, set to `aoss`. See [Amazon OpenSearch Serverless](opensearch.md) for more information. | `es` | Yes |
+| `AWS_Profile` | AWS profile name. | `default` | Yes |
+| `Cloud_ID` | If using Elastic's Elasticsearch Service, you can specify the `cloud_id` of the cluster running. The string has the format `<deployment_name>:<base64_info>`. After decoding, the `base64_info` string has the format `<deployment_region>$<elasticsearch_hostname>$<kibana_hostname>`. | _none_ | No |
+| `Cloud_Auth` | Specify the credentials to use to connect to Elastic's Elasticsearch Service running on Elastic Cloud. | _none_ | Yes |
+| `HTTP_User` | Optional username credential for Elastic X-Pack access. | _none_ | Yes |
+| `HTTP_Passwd` | Password for user defined in `HTTP_User`. | _none_ | Yes |
+| `Index` | Index name. | `fluent-bit` | Yes |
+| `Type` | Type name. | `_doc` | Yes |
+| `Logstash_Format` | Enable Logstash format compatibility. This option takes a Boolean value: `True/False`, `On/Off`. | `Off` | Yes |
+| `Logstash_Prefix` | When `Logstash_Format` is enabled, the Index name is composed using a prefix and the date. For example, if `Logstash_Prefix` is equal to `mydata`, your index becomes `mydata-YYYY.MM.DD`. The last string appended belongs to the date when the data is being generated. | `logstash` | Yes |
+| `Logstash_Prefix_Key` | When included, the value of this key in the record is evaluated as a key reference and overrides `Logstash_Prefix` for index generation. If the key/value isn't found in the record, the `Logstash_Prefix` option acts as a fallback. The parameter is expected to be a [record accessor](../../administration/configuring-fluent-bit/classic-mode/record-accessor.md). | _none_ | Yes |
+| `Logstash_Prefix_Separator` | Set a separator between `Logstash_Prefix` and date. | `-` | Yes |
+| `Logstash_DateFormat` | Time format based on [strftime](http://man7.org/linux/man-pages/man3/strftime.3.html) to generate the second part of the Index name. | `%Y.%m.%d` | Yes |
+| `Time_Key` | When `Logstash_Format` is enabled, each record gets a new timestamp field. The `Time_Key` property defines the name of that field. | `@timestamp` | Yes |
+| `Time_Key_Format` | When `Logstash_Format` is enabled, this property defines the format of the timestamp. | `%Y-%m-%dT%H:%M:%S` | Yes |
+| `Time_Key_Nanos` | When `Logstash_Format` is enabled, enabling this property sends nanosecond precision timestamps. | `Off` | Yes |
+| `Include_Tag_Key` | When enabled, appends the Tag name to the record. | `Off` | Yes |
+| `Tag_Key` | When `Include_Tag_Key` is enabled, this property defines the key name for the tag. | `_flb-key` | Yes |
+| `Generate_ID` | When enabled, generate `_id` for outgoing records. This prevents duplicate records when retrying Elasticsearch. | `Off` | Yes |
+| `Id_Key` | If set, `_id` is the value of this key from the incoming record, and the `Generate_ID` option is ignored. | _none_ | Yes |
+| `Write_Operation` | The write operation to use: any of `create`, `index`, `update`, or `upsert`. | `create` | Yes |
+| `Replace_Dots` | When enabled, replace dots in field names with underscores. Required by Elasticsearch 2.0-2.3. | `Off` | Yes |
+| `Trace_Output` | Print all Elasticsearch API request payloads to `stdout` for diagnostics. | `Off` | Yes |
+| `Trace_Error` | If Elasticsearch returns an error, print the Elasticsearch API request and response for diagnostics. | `Off` | Yes |
+| `Current_Time_Index` | Use current time for index generation instead of the message record. | `Off` | Yes |
+| `Suppress_Type_Name` | When enabled, the mapping type is removed and the `Type` option is ignored. Elasticsearch 8.0.0 or later [doesn't support mapping types](https://www.elastic.co/guide/en/elasticsearch/reference/current/removal-of-types.html), which requires this value to be `On`. | `Off` | Yes |
+| `Workers` | The number of [workers](../../administration/multithreading.md#outputs) to perform flush operations for this output. | `2` | No |
+| `Upstream` | If the plugin connects to an _Upstream_ instead of a single host, this property defines the absolute path of the Upstream configuration file. For details, see the [Upstream Servers](../../administration/configuring-fluent-bit/classic-mode/upstream-servers.md) documentation. | _none_ | No |
 
 If you have used a common relational database, the parameters `index` and `type` can
 be compared to the `database` and `table` concepts.
 
@@ -59,6 +64,16 @@ be compared to the `database` and `table` concepts.
 
 The Elasticsearch output plugin supports TLS/SSL. For more details about the properties
 available and general configuration, see
 [TLS/SSL](../../administration/transport-security.md).
 
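+### Overriding configuration parameters per node
+
+Any key marked as overridable in the previous table can also be set in a `NODE`
+section of the Upstream configuration to customize individual nodes. The following
+classic mode configuration is a minimal sketch (the node names, ports, and index
+names are illustrative): each node declares the required `Host` and `Port`, and
+overrides `Index` so that every node writes to its own index:
+
+```text
+[UPSTREAM]
+    name es-balancing
+
+[NODE]
+    name node-1
+    host localhost
+    port 9201
+    index index-node-1
+
+[NODE]
+    name node-2
+    host localhost
+    port 9202
+    index index-node-2
+```
+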
+### AWS Sigv4 Authentication and Upstream Servers
+
+The `http_proxy`, `no_proxy`, and `TLS` parameters used for AWS Sigv4 Authentication
+(the connection the plugin makes to AWS to generate the authentication signature) are
+never taken from the `NODE` section of the
+[Upstream](../../administration/configuring-fluent-bit/classic-mode/upstream-servers.md)
+configuration. However, the `TLS` parameters for the plugin's connection to
+Elasticsearch **can** be overridden in the `NODE` section of the Upstream
+configuration, even if AWS authentication is used.
+
 ### `write_operation`
 
 The `write_operation` can be any of:
@@ -110,7 +125,7 @@ fluent-bit -i cpu -t cpu -o es -p Host=192.168.2.3 -p Port=9200 \
 
 ### Configuration File
 
-In your main configuration file append the following `Input` and `Output` sections.
+In your main configuration file, append the following `Input` and `Output` sections:
 
 {% tabs %}
 {% tab title="fluent-bit.yaml" %}
@@ -151,6 +166,96 @@ pipeline:
 {% endtab %}
 {% endtabs %}
 
+### Configuration File with Upstream
+
+#### Classic mode Configuration File with Upstream
+
+In your main classic mode configuration file, append the following `Input` and `Output` sections:
+
+```text
+[INPUT]
+    Name dummy
+    Dummy { "message" : "this is dummy data" }
+
+[OUTPUT]
+    Name es
+    Match *
+    Upstream ./upstream.conf
+    Index my_index
+    Type my_type
+```
+
+Your [Upstream Servers](../../administration/configuring-fluent-bit/classic-mode/upstream-servers.md)
+configuration file can be similar to the following:
+
+```text
+[UPSTREAM]
+    name es-balancing
+
+[NODE]
+    name node-1
+    host localhost
+    port 9201
+
+[NODE]
+    name node-2
+    host localhost
+    port 9202
+
+[NODE]
+    name node-3
+    host localhost
+    port 9203
+```
+
+#### YAML Configuration File with Upstream
+
+In your main YAML configuration file (`fluent-bit.yaml`), add the following `Input` and `Output` sections:
+
+```yaml
+pipeline:
+  inputs:
+    - name: dummy
+      dummy: "{ \"message\" : \"this is dummy data\" }"
+  outputs:
+    - name: es
+      match: '*'
+      index: fluent-bit
+      type: my_type
+      upstream: ./upstream.yaml
+```
+
+Your Upstream Servers configuration file can use
+[classic mode](../../administration/configuring-fluent-bit/classic-mode/upstream-servers.md)
+(see the preceding Classic mode Configuration File with Upstream section) or
+[YAML format](../../administration/configuring-fluent-bit/yaml/upstream-servers-section.md).
+If the Upstream Servers configuration uses YAML format, it can be placed in the same
+file as the main configuration (for example, in `fluent-bit.yaml`):
+
+```yaml
+pipeline:
+  inputs:
+    - name: dummy
+      dummy: "{ \"message\" : \"this is dummy data\" }"
+  outputs:
+    - name: es
+      match: '*'
+      index: fluent-bit
+      type: my_type
+      upstream: ./fluent-bit.yaml
+upstream_servers:
+  - name: es-balancing
+    nodes:
+      - name: node-1
+        host: localhost
+        port: 9201
+      - name: node-2
+        host: localhost
+        port: 9202
+      - name: node-3
+        host: localhost
+        port: 9203
+```
+
 ## About Elasticsearch field names
 
 Some input plugins can generate messages where the field names contains dots.
For diff --git a/pipeline/outputs/loki.md b/pipeline/outputs/loki.md index 18df1d649..1b9b6af18 100644 --- a/pipeline/outputs/loki.md +++ b/pipeline/outputs/loki.md @@ -225,7 +225,7 @@ The following configuration examples generate the same Stream Labels: Add the JSON path to the plugin output configuration: -% tabs %} +{% tabs %} {% tab title="fluent-bit.yaml" %} ```yaml @@ -255,7 +255,7 @@ pipeline: The previous configurations accomplish the same as this one: -% tabs %} +{% tabs %} {% tab title="fluent-bit.yaml" %} ```yaml @@ -293,7 +293,7 @@ job="fluentbit", stream="stdout" If you're running in a Kubernetes environment, consider enabling the `auto_kubernetes_labels` option, which auto-populates the streams with the Pod labels for you. Consider the following configuration: -% tabs %} +{% tabs %} {% tab title="fluent-bit.yaml" %} ```yaml @@ -341,7 +341,7 @@ Consider this JSON example: If the value is a string, `line_format` is `json`, and `drop_single_key` is `true`, it will be sent as a quoted string. -% tabs %} +{% tabs %} {% tab title="fluent-bit.yaml" %} ```yaml @@ -413,7 +413,7 @@ The following configuration: determined by the Kubernetes metadata filter (not shown). - Uses a structured metadata field to hold the Kubernetes pod name. -% tabs %} +{% tabs %} {% tab title="fluent-bit.yaml" %} ```yaml @@ -454,7 +454,7 @@ In addition to the `structured_metadata` configuration parameter, a `structured_ The following configuration is similar to the previous example, except now all entries in the log record map value `$kubernetes` will be used as structured metadata entries: -% tabs %} +{% tabs %} {% tab title="fluent-bit.yaml" %} ```yaml @@ -486,7 +486,7 @@ pipeline: Assuming the value `$kubernetes` is a map containing two entries `namespace_name` and `pod_name`, the previous configuration is equivalent to: -% tabs %} +{% tabs %} {% tab title="fluent-bit.yaml" %} ```yaml @@ -531,7 +531,7 @@ Fluent Bit supports sending logs and metrics to [Grafana Cloud](https://grafana. Below is an example configuration, be sure to set the credentials (shown here with XXX) and ensure the host URL matches the correct one for your deployment: -% tabs %} +{% tabs %} {% tab title="fluent-bit.yaml" %} ```yaml @@ -574,7 +574,7 @@ pipeline: The following configuration example emits a dummy example record and ingests it on Loki . Copy and paste the corresponding content below into a file `out_loki.yaml` or `out_loki.conf`: -% tabs %} +{% tabs %} {% tab title="out-loki.yaml" %} ```yaml