-
rd_kafka_consumer_poll returns "Broker: Not coordinator"; no messages received after this error #3714
-
Hi, we are seeing that the rd_kafka_consumer_poll() call returns RD_KAFKA_RESP_ERR_NOT_COORDINATOR, and subsequent calls to poll don't return any messages (we know there are messages on the topic). It appears that we have to re-publish on the topic for poll to start receiving messages again. How do we work around this issue?

auto.offset.reset is set to "earliest". Note: there is only one consumer in the consumer group, consuming from a topic with one partition.

Why is this happening? Is there a way to work around it? Any help will be appreciated.

Vasudev
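For context, a minimal sketch of the kind of poll loop where we see this (simplified for illustration; `rk` is an already-created consumer and error handling is reduced to a print):

```c
#include <inttypes.h>
#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Simplified poll loop: errors such as RD_KAFKA_RESP_ERR_NOT_COORDINATOR
 * are delivered as messages with a non-zero err field. */
static void poll_loop(rd_kafka_t *rk) {
    for (;;) {
        rd_kafka_message_t *msg = rd_kafka_consumer_poll(rk, 1000 /* ms */);
        if (!msg)
            continue; /* timeout: no message and no error this interval */

        if (msg->err)
            fprintf(stderr, "consume error: %s\n",
                    rd_kafka_err2str(msg->err));
        else
            printf("message: %zu bytes at offset %" PRId64 "\n",
                   msg->len, msg->offset);

        rd_kafka_message_destroy(msg);
    }
}
```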
-
I think you are hitting these two issues that are fixed in v1.8, please upgrade to v1.8.2 (from https://github.com/edenhill/librdkafka/blob/master/CHANGELOG.md#consumer-fixes-1):

- `auto.offset.reset` could previously be triggered by temporary errors, such as disconnects and timeouts (after the two retries are exhausted). This is now fixed so that the auto offset reset policy is only triggered for permanent errors.
- The error that triggers `auto.offset.reset` is now logged to help the application owner identify the reason of the reset.
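To rule out a stale build after upgrading, the version that the application is actually linked against can be confirmed at runtime; a minimal sketch using the standard version accessors:

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

int main(void) {
    /* Print the runtime librdkafka version, e.g. "1.8.2" */
    printf("librdkafka %s (0x%08x)\n",
           rd_kafka_version_str(), rd_kafka_version());
    return 0;
}
```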
-
Thanks for your response, Edenhill. We will try it out and get back to you. Thanks again.
-
We moved to v1.8.2 and still see the "Not coordinator" error, after which consumer poll just times out and does not receive any messages. We use rd_kafka_assign() with offset = RD_KAFKA_OFFSET_STORED and auto.offset.reset = earliest. Do you think it would help to use rd_kafka_subscribe() instead? A sketch of the two approaches follows.
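For reference, this is roughly what the two approaches look like (the topic name `mytopic` is a placeholder and error checks are omitted):

```c
#include <librdkafka/rdkafka.h>

/* Manual assignment, as we do today: pin partition 0 and resume from the
 * stored (committed) offset. "mytopic" is a placeholder name. */
static void assign_stored(rd_kafka_t *rk) {
    rd_kafka_topic_partition_list_t *parts =
        rd_kafka_topic_partition_list_new(1);
    rd_kafka_topic_partition_list_add(parts, "mytopic", 0)
        ->offset = RD_KAFKA_OFFSET_STORED;
    rd_kafka_assign(rk, parts);
    rd_kafka_topic_partition_list_destroy(parts);
}

/* Group-managed subscription: the group coordinator assigns partitions,
 * and consumption resumes from the committed offsets automatically. */
static void subscribe_group(rd_kafka_t *rk) {
    rd_kafka_topic_partition_list_t *topics =
        rd_kafka_topic_partition_list_new(1);
    rd_kafka_topic_partition_list_add(topics, "mytopic",
                                      RD_KAFKA_PARTITION_UA);
    rd_kafka_subscribe(rk, topics);
    rd_kafka_topic_partition_list_destroy(topics);
}
```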
-
You have a problematic broker (broker 0) that reports itself as the coordinator in FindCoordinatorRequest responses but still claims it is not the coordinator in other requests. This is a broker-side issue.
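One way to observe the coordinator handshake from the client side is librdkafka's `debug` configuration property; a sketch, where the values `cgrp,broker` enable consumer-group and broker-level logging (including coordinator lookups):

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Sketch: enable client-side debug logging before creating the consumer so
 * coordinator lookups and NOT_COORDINATOR responses appear in the log. */
static rd_kafka_conf_t *make_debug_conf(void) {
    char errstr[512];
    rd_kafka_conf_t *conf = rd_kafka_conf_new();

    if (rd_kafka_conf_set(conf, "debug", "cgrp,broker",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        fprintf(stderr, "conf error: %s\n", errstr);
        rd_kafka_conf_destroy(conf);
        return NULL;
    }
    return conf;
}
```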