Existing event alert keeps getting deleted and fired again #29
The exporter doesn't report or fire alerts; Prometheus does, through the alerting rule. I would check the alerting rule to see whether its condition is so specific that it only holds for a few minutes.
Hey, you are right, but the problem is still caused by the exporter; Prometheus just scrapes it. My issue is that when Prometheus scrapes the exporter, it sometimes sees that an existing event has disappeared, which deletes the alert even though the event still exists in the cluster. A few minutes or seconds later it gets recreated. My alert rule:
Try doing something like this:
Or even better: Kubernetes produces quite a lot of events. Without any filters or aggregation, you'll be flooded with alerts.
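The rule examples from this comment were lost from the page; below is a minimal sketch of the kind of aggregated rule being suggested. The metric name kube_event_count and the label names are assumptions, not confirmed names from this exporter, so substitute whatever series and labels your exporter actually exposes.

```yaml
groups:
  - name: kubernetes-events
    rules:
      - alert: KubernetesEventSpike
        # Aggregate by namespace and reason instead of alerting on every raw
        # event series, to avoid one alert per individual event.
        # "kube_event_count" and the label names are placeholders.
        expr: sum by (namespace, reason) (kube_event_count) > 0
        # Require the condition to hold for a while before firing,
        # so one-off scrapes don't produce an alert.
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Events with reason {{ $labels.reason }} in namespace {{ $labels.namespace }}"
```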
I think the answer to my question lies in understanding the event.max-length flag. Please help me understand it; I want the exporter to keep reporting every event until Kubernetes itself deletes it.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
TL;DR: The alert keeps getting deleted and recreated while the Kubernetes event still exists.
The exporter reports an existing event as a new event every few minutes: even while a Kubernetes event still exists, the exporter reports it as resolved (disappeared) and then recreates it (because the event still exists), which causes Prometheus to keep deleting and recreating alerts for the same event.
What you expected to happen:
When an event occurs, the exporter should report it and keep reporting it until the event is deleted (and Prometheus will keep firing).
How to reproduce it (as minimally and precisely as possible):
Every event that keeps repeating itself behaves as described above.
Anything else we need to know?:
It could be a misconfiguration of the alerting rules in Prometheus, but I wanted to check whether this is a known bug.
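For what it's worth, a general Prometheus pattern for smoothing over short gaps in a scraped series is to wrap the expression in a range function such as max_over_time, so the alert does not resolve and re-fire every time the series briefly disappears. This is a generic sketch with placeholder metric and label names, not a confirmed fix for this exporter.

```yaml
- alert: KubernetesEventPersistent
  # max_over_time stays non-zero for up to 10 minutes after the last non-zero
  # sample, so short gaps in the exporter's output don't resolve the alert.
  # "kube_event_count" and the "reason" label value are placeholders.
  expr: max_over_time(kube_event_count{reason="FailedScheduling"}[10m]) > 0
  for: 2m
  labels:
    severity: warning
```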