Memory leak #22
Comments
Looks like after the
inuse_space good:
alloc_space bad:
I'm still monitoring the situation, but it looks like adding
I've been trying to look at what changed in the code since my previously deployed version of this adapter (which still works fine). The only thing I can see, since your wrapper around the BulkWriter is largely the same, just moved around to different files, is this:
Thanks for the detailed report. Not sure when I will get to this, but please keep me posted with any further developments.
Do you guys have any update on this? We've been using this adapter and it regularly gets evicted from Kubernetes because it consumes too much RAM (10GB+ in under 24h)...
I've actually switched to using Metricbeat and the Prometheus module. You can use Metricbeat to scrape the same targets as Prometheus, rather than go through the
Thanks @sevagh for your workaround; sadly, that won't work for us, as the Prometheus module doesn't seem to be compatible with AWS/Kubernetes service discovery...
@Kanshiroron, I haven't had a chance to dig into this one yet, sorry.
Hey @pwillie, I am actually testing a workaround. I'll keep you posted with the results.
Hey @pwillie, after tweaking the adapter configuration, I no longer see the issue. It has been running for 10 days now without any problems.
I suspect the number of shards and replicas was causing the issue (we used to have 5 index shards and 3 replicas).
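(As an aside for anyone wanting to try the same mitigation: below is a hypothetical sketch of lowering the replica count on an existing index through the Elasticsearch REST API, using only Go's standard library. The ES address, index name and values are assumptions, not the configuration used in this thread; note that the shard count itself can only be set when an index is created, e.g. via an index template.)

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Hypothetical Elasticsearch address and index name.
	const esURL = "http://localhost:9200"
	const index = "prometheus"

	// number_of_replicas can be changed on a live index; number_of_shards
	// can only be set at index creation time (e.g. via an index template).
	body := []byte(`{"index": {"number_of_replicas": 1}}`)

	req, err := http.NewRequest(http.MethodPut, fmt.Sprintf("%s/%s/_settings", esURL, index), bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```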
Thanks for the update and great to hear of your experience.
You're welcome.
Hello.
I just upgraded to the latest version of this adapter (with Prometheus 2.6 pollers) and the following ES settings:
The adapter has a memory leak. I added the pprof debug endpoint. Here's the pprof/heap -top output:
Another one:
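(For context, this is roughly how a pprof debug endpoint is exposed in a Go service via net/http/pprof; the port here is an assumption for the sketch, not necessarily what the adapter uses.)

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// Serve the pprof endpoints on a side port (port assumed for this sketch).
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the rest of the service would run here ...
	select {}
}
```

The profiles discussed below can then be pulled with go tool pprof, e.g. go tool pprof -inuse_space http://localhost:6060/debug/pprof/heap or go tool pprof -alloc_space http://localhost:6060/debug/pprof/heap, and inspected with top or list Write at the interactive prompt.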
It's mostly from this method: https://github.com/pwillie/prometheus-es-adapter/blob/master/pkg/elasticsearch/write.go#L69
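(For readers without the repository open: the file linked above is the adapter's Elasticsearch write path. The sketch below is a generic olivere/elastic BulkProcessor-based writer showing where allocations in such a Write method typically come from; the index name, document shape, type name and processor settings are assumptions, and this is not the adapter's actual code.)

```go
package elasticsearch

import (
	"context"
	"time"

	elastic "github.com/olivere/elastic"
	"github.com/prometheus/prometheus/prompb"
)

// WriteService is a generic sketch of a bulk writer, not the adapter's real type.
type WriteService struct {
	processor *elastic.BulkProcessor
	index     string
}

// NewWriteService wires up a BulkProcessor with assumed settings.
func NewWriteService(ctx context.Context, client *elastic.Client) (*WriteService, error) {
	p, err := client.BulkProcessor().
		Name("prometheus-writer").
		Workers(2).
		BulkActions(1000).               // flush after 1000 queued requests
		FlushInterval(30 * time.Second). // or every 30s, whichever comes first
		Do(ctx)
	if err != nil {
		return nil, err
	}
	return &WriteService{processor: p, index: "prometheus"}, nil
}

// Write converts remote-write time series into bulk index requests.
// Every sample becomes a fresh map and a BulkableRequest that the
// processor buffers until the next flush.
func (w *WriteService) Write(req []*prompb.TimeSeries) {
	for _, ts := range req {
		labels := make(map[string]string, len(ts.Labels))
		for _, l := range ts.Labels {
			labels[l.Name] = l.Value
		}
		for _, s := range ts.Samples {
			doc := map[string]interface{}{
				"label":     labels,
				"value":     s.Value,
				"timestamp": s.Timestamp,
			}
			w.processor.Add(elastic.NewBulkIndexRequest().Index(w.index).Type("doc").Doc(doc))
		}
	}
}
```

In this pattern, everything queued between flushes lives in the BulkProcessor's buffers, so large flush intervals or bulk sizes translate directly into allocation volume on the Write path.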
Moreover, inuse_space doesn't seem to be the source of the leak - I think it's a GC problem (but I'm not sure how much further I can debug this). When I use pprof -alloc_space, I think the bulk of the leak is evident there:
Running with pprof list Write:
Here's the Prometheus protobuf marshaling code: