It reads events from multiple Kafka topics using the sarama library.
It guarantees at-least-once delivery due to the offset commit mechanism.
**Example**

Standard example:

```yaml
pipelines:
  example_pipeline:
    input:
      type: kafka
      brokers: [kafka:9092, kafka:9091]
      topics: [topic1, topic2]
      offset: newest
    # output plugin is not important in this case, let's emulate s3 output.
    output:
      type: s3
      file_config:
        retention_interval: 10s
      endpoint: "s3.fake_host.org:80"
      access_key: "access_key1"
      secret_key: "secret_key2"
      bucket: "bucket-logs"
      bucket_field_event: "bucket_name"
```
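To make the at-least-once guarantee concrete, here is a minimal standalone sketch of commit-based consumption with the sarama consumer-group API; it reuses the brokers, topics, and default consumer group from the example above, but it is an illustration, not file.d's actual implementation:

```go
package main

import (
	"context"
	"log"

	"github.com/Shopify/sarama" // newer releases live at github.com/IBM/sarama
)

// handler implements sarama.ConsumerGroupHandler.
type handler struct{}

func (handler) Setup(sarama.ConsumerGroupSession) error   { return nil }
func (handler) Cleanup(sarama.ConsumerGroupSession) error { return nil }

func (handler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
	for msg := range claim.Messages() {
		log.Printf("processing %s/%d/%d", msg.Topic, msg.Partition, msg.Offset)
		// Mark the offset only AFTER the event has been handled: a crash
		// before this point re-reads the message on restart, so events can
		// be duplicated but never lost -- hence at-least-once delivery.
		sess.MarkMessage(msg, "")
	}
	return nil
}

func main() {
	cfg := sarama.NewConfig()
	cfg.Version = sarama.V0_10_2_0 // consumer groups require at least this version
	cfg.Consumer.Offsets.Initial = sarama.OffsetNewest

	group, err := sarama.NewConsumerGroup([]string{"kafka:9092", "kafka:9091"}, "file-d", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer group.Close()

	// Consume blocks until a rebalance or error, so it is called in a loop.
	for {
		if err := group.Consume(context.Background(), []string{"topic1", "topic2"}, handler{}); err != nil {
			log.Fatal(err)
		}
	}
}
```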
**`brokers`** *`[]string`* *`required`*

The list of Kafka brokers to read from.
**`topics`** *`[]string`* *`required`*

The list of Kafka topics to read from.
**`consumer_group`** *`string`* *`default=file-d`*

The name of the consumer group to use.
**`channel_buffer_size`** *`int`* *`default=256`*

The number of unprocessed messages in the buffer that are loaded in the background from Kafka.
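Conceptually this is a buffered channel sitting between a background fetch loop and the processing loop: fetching only blocks once the buffer holds `channel_buffer_size` unprocessed messages. A minimal sketch of the idea (hypothetical names, not file.d's code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const channelBufferSize = 256 // mirrors the plugin's default
	buf := make(chan string, channelBufferSize)

	// Background fetcher: keeps the buffer topped up and only blocks
	// once channelBufferSize messages are waiting unprocessed.
	go func() {
		defer close(buf)
		for i := 0; i < 1000; i++ {
			buf <- fmt.Sprintf("message-%d", i)
		}
	}()

	// The processing loop drains the buffer at its own pace.
	for msg := range buf {
		_ = msg
		time.Sleep(time.Millisecond) // simulate per-event work
	}
}
```

A larger buffer smooths out bursts of incoming messages at the cost of memory.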
**`offset`** *`string`* *`default=newest`* *`options=oldest|newest`*

The newest and oldest values are used when a consumer starts but there is no committed offset for the assigned partition.
* `newest` - set the offset to the newest message
* `oldest` - set the offset to the oldest message
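In sarama terms, this setting presumably maps onto the consumer's initial offset, which likewise only applies when the group has no committed offset for a partition. A sketch of that mapping (an assumption about the wiring, not the plugin's actual code):

```go
package main

import (
	"fmt"

	"github.com/Shopify/sarama" // newer releases live at github.com/IBM/sarama
)

// initialOffset translates the plugin-style offset string into sarama's
// initial-offset constant.
func initialOffset(offset string) int64 {
	if offset == "oldest" {
		return sarama.OffsetOldest // start from the earliest retained message
	}
	return sarama.OffsetNewest // default: only messages produced from now on
}

func main() {
	cfg := sarama.NewConfig()
	cfg.Consumer.Offsets.Initial = initialOffset("newest")
	fmt.Println(cfg.Consumer.Offsets.Initial) // -1, i.e. sarama.OffsetNewest
}
```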
*Generated using insane-doc*