Application pod starting before Kafka pods causes leader election for the topic #5627
shreyasarani23 started this conversation in General
Replies: 1 comment 4 replies
-
I'm not sure I follow what you expect from Strimzi. Your application, IMHO, should be able to handle errors like this: either by recovering from them, or, for example, by exiting and letting Kubernetes restart the pod with back-off until the Kafka cluster is ready.
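To make the second option concrete, here is a minimal sketch of the fail-fast approach, assuming a Java client; the class name and the bootstrap address `my-cluster-kafka-bootstrap:9092` are hypothetical (a Strimzi bootstrap service is usually named `<cluster>-kafka-bootstrap`). The application probes the cluster once at startup and exits non-zero if it gets no answer, so the kubelet restarts the pod with increasing back-off until Kafka is ready.

```java
import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class KafkaStartupCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical bootstrap address; replace with your cluster's
        // bootstrap service, e.g. <cluster>-kafka-bootstrap:9092.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                  "my-cluster-kafka-bootstrap:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Ask the cluster to describe itself; fail if no answer in 30s.
            int brokers = admin.describeCluster()
                               .nodes()
                               .get(30, TimeUnit.SECONDS)
                               .size();
            System.out.println("Kafka reachable, " + brokers + " broker(s) visible");
        } catch (Exception e) {
            // Exit non-zero: Kubernetes restarts the pod with back-off
            // (CrashLoopBackOff) until the cluster is actually ready.
            System.err.println("Kafka not reachable yet: " + e.getMessage());
            System.exit(1);
        }
    }
}
```

The same effect can also be achieved with an init container that runs a check like this before the main application container starts.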
-
Hi @scholzj, we have an application pod that writes to Kafka, and we are using 3 Kafka broker pods. Since we start and stop the AKS cluster daily, after starting the AKS cluster the application pod reaches the Ready state first, while the Kafka pods are not yet ready; the Kafka pods usually move from Running to Ready one by one. Because not all 3 Kafka pods are ready, the application pod tries to connect to all 3 brokers, and when the broker that is not yet ready is the partition leader for the topic, a leader election is triggered. Below are the logs of the application pod.
We are also getting the error "timeout while acquiring locks".
Please help me resolve this issue.
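For the recover-in-place option mentioned above, here is a hedged sketch of producer settings that ride out a rolling broker start, assuming a plain Java producer; the topic name `my-topic` and the bootstrap address are placeholders, not taken from the thread. With a long `delivery.timeout.ms` and effectively unlimited retries, sends issued while a broker is still starting (or while a leader election is in progress) are retried instead of failing immediately.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TolerantProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder bootstrap address; use your cluster's bootstrap service.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Keep retrying sends for up to 5 minutes before surfacing an error,
        // so a rolling broker start or a leader election does not fail the app.
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, "300000");
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.toString(Integer.MAX_VALUE));
        props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, "1000");
        // Require acks from all in-sync replicas so a leader change during
        // startup does not silently lose records.
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            // Once delivery.timeout.ms expires the send fails;
                            // exiting lets Kubernetes restart the pod with back-off.
                            System.err.println("Send failed: " + exception.getMessage());
                            System.exit(1);
                        }
                    });
            producer.flush();
        }
    }
}
```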