- Broker Configs
- Confluent Platform and Apache Kafka Compatibility
- https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-zookeeper
kustomize build . | kubectl apply -f -
Test ZooKeeper functionality:
# show each ZooKeeper pod's fully qualified hostname
for i in 0 1 2; do kubectl exec zk-$i -n kafka -- hostname -f; done
# The servers in a ZooKeeper ensemble use natural numbers as unique identifiers
# Store each server's identifier in a file called myid in the server's data directory
for i in 0 1 2; do echo "myid zk-$i"; kubectl exec zk-$i -n kafka -- cat /var/lib/zookeeper/data/myid; done
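In this StatefulSet setup (following the common Kubernetes ZooKeeper tutorial convention, which is an assumption about how the ensemble was provisioned), each server's myid is the pod ordinal plus one, since ids must be unique positive integers. A sketch of the expected mapping:

```shell
# each pod's myid is its StatefulSet ordinal + 1 (convention, not a ZooKeeper requirement)
for i in 0 1 2; do
  echo "zk-$i -> myid $((i + 1))"
done
```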
# show config
kubectl exec zk-0 -n kafka -c zookeeper -- cat /opt/zookeeper/conf/zoo.cfg
# create the znode /hello with the data "world" via the zk-0 Pod in the ensemble
kubectl exec zk-0 -n kafka -c zookeeper -- zkCli.sh create /hello world
# get the data from the zk-1 Pod
kubectl exec zk-1 -n kafka -c zookeeper -- zkCli.sh get /hello
Test broker and topic operations:
# create a topic
kubectl -n kafka exec -ti testclient -- ./bin/kafka-topics.sh --bootstrap-server kafka-0.kafka-hs.kafka.svc.cluster.local:9092 --topic messages --create --partitions 1 --replication-factor 3 --config retention.ms=86400001 --config retention.bytes=274877906943
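The retention values above are not arbitrary: retention.ms is one day in milliseconds plus one, and retention.bytes is one byte under 256 GiB. A quick shell-arithmetic check:

```shell
# retention.ms: 1 day in milliseconds, plus 1
echo $((24 * 60 * 60 * 1000 + 1))       # 86400001
# retention.bytes: 256 GiB, minus 1 byte
echo $((256 * 1024 * 1024 * 1024 - 1))  # 274877906943
```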
# describe dynamic configs of a topic
kubectl -n kafka exec -ti testclient -- ./bin/kafka-configs.sh --bootstrap-server kafka-0.kafka-hs.kafka.svc.cluster.local:9092 --entity-type topics --entity-name messages --describe
# alter topic configs
kubectl -n kafka exec -ti testclient -- ./bin/kafka-configs.sh --bootstrap-server kafka-0.kafka-hs.kafka.svc.cluster.local:9092 --alter --entity-type topics --entity-name messages --add-config retention.bytes=274877906944
# list topics, should have "messages"
kubectl -n kafka exec -ti testclient -- ./bin/kafka-topics.sh --list --bootstrap-server kafka-0.kafka-hs.kafka.svc.cluster.local:9092
# list all topics using zookeeper shell
kubectl -n kafka exec -ti testclient -- ./bin/zookeeper-shell.sh zk-cs.kafka.svc.cluster.local:2181 ls /brokers/topics
# describe a topic
kubectl -n kafka exec -ti testclient -- ./bin/kafka-topics.sh --topic messages --describe --bootstrap-server kafka-0.kafka-hs.kafka.svc.cluster.local:9092
# delete a topic (marked for deletion)
kubectl -n kafka exec -ti testclient -- ./bin/kafka-topics.sh --delete --topic messages --bootstrap-server kafka-0.kafka-hs.kafka.svc.cluster.local:9092
# list topics that are marked deleted using zookeeper shell
kubectl -n kafka exec -ti testclient -- ./bin/zookeeper-shell.sh zk-cs.kafka.svc.cluster.local:2181 ls /admin/delete_topics
# delete a topic using zookeeper shell
kubectl -n kafka exec -ti testclient -- ./bin/zookeeper-shell.sh zk-cs.kafka.svc.cluster.local:2181 deleteall /brokers/topics/messages
# list broker ids using zookeeper shell
kubectl -n kafka exec -ti testclient -- ./bin/zookeeper-shell.sh zk-cs.kafka.svc.cluster.local:2181 ls /brokers/ids
# describe a broker using zookeeper shell
kubectl -n kafka exec -ti testclient -- ./bin/zookeeper-shell.sh zk-cs.kafka.svc.cluster.local:2181 get /brokers/ids/1001
Test consumer and producer functionality:
# start consumer
kubectl -n kafka exec -ti testclient -- ./bin/kafka-console-consumer.sh --bootstrap-server kafka-0.kafka-hs.kafka.svc.cluster.local:9092 --topic messages --from-beginning
# start producer
kubectl -n kafka exec -ti testclient -- ./bin/kafka-console-producer.sh --broker-list kafka-0.kafka-hs.kafka.svc.cluster.local:9092,kafka-1.kafka-hs.kafka.svc.cluster.local:9092,kafka-2.kafka-hs.kafka.svc.cluster.local:9092 --topic messages
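The long --broker-list value above can be generated instead of typed by hand. A minimal bash sketch, assuming the 3-replica StatefulSet and the kafka-hs headless service used throughout:

```shell
# build the comma-separated broker list for a 3-broker StatefulSet (bash)
brokers=""
for i in 0 1 2; do
  brokers+="kafka-$i.kafka-hs.kafka.svc.cluster.local:9092,"
done
brokers="${brokers%,}"  # drop the trailing comma
echo "$brokers"
```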
Send messages in producer:
>hello
>world
You should receive messages in consumer:
hello
world
- After deleting a topic with the command above, also remove the topic's directory on each broker (under the locations defined by the log.dirs and log.dir properties) with rm -rf.
- Check the Confluent Platform and Apache Kafka compatibility here. For example, we use Confluent Platform 7.6.0, which maps to Kafka 3.6.0.
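The per-broker cleanup noted above can be scripted. A hedged sketch that only prints the commands to run; the data directory /var/lib/kafka/data is an assumption, so substitute the actual log.dirs value from your broker configuration:

```shell
# print the cleanup command for each broker's copy of a deleted topic's
# partition directories (assumes log.dirs=/var/lib/kafka/data and 3 brokers)
topic=messages
for i in 0 1 2; do
  echo "kubectl -n kafka exec kafka-$i -- sh -c 'rm -rf /var/lib/kafka/data/${topic}-*'"
done
```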
- One can also test the Kafka cluster using this helper.
- Grafana dashboard for Confluent Kafka and ZooKeeper: https://github.com/confluentinc/cp-helm-charts/blob/master/grafana-dashboard/confluent-open-source-grafana-dashboard.json
- Kafka Exporter
- Github
- Dockerhub
- Grafana dashboard: https://grafana.com/grafana/dashboards/7589-kafka-exporter-overview/
- Import dashboard
- Go to Kafka Exporter Overview / Settings
- Go to the variable instance and set its query to label_values(kafka_brokers, instance)
  - this sets the value of the variable instance to the values of the label instance in the metric kafka_brokers
- Update the dashboard settings and save the variable
- To view Kafka logs on Loki with the noisy lines filtered out:
{app="kafka"} != "SocketServer" != "InvalidReceiveException" != "org.apache.kafka.common.network" != "Thread.java" != "kafka_exporter.go"