Set compactor memory limits #11970
Conversation
Just to add some further detail here, the compactor is being evicted (first …
```diff
@@ -62,6 +62,7 @@
   container.mixin.readinessProbe.httpGet.withPort($._config.http_listen_port) +
   container.mixin.readinessProbe.withTimeoutSeconds(1) +
   k.util.resourcesRequests('4', '2Gi') +
+  k.util.resourcesLimits(null, '32Gi') +
```
I'd suggest setting this to 2x the requests, and adding a note in the release notes for folks to set this limit to a figure appropriate for their environment. 32Gi may not be appropriate in all cases. A per-environment override is sketched below.
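As a sketch of what such an override could look like, assuming an environment that imports this mixin via ksonnet-util (the `compactor_container` field name and the import path are assumptions for illustration, not necessarily what this repo exposes):

```jsonnet
// Hypothetical per-environment override; 'compactor_container' is an
// assumed field name for the container defined in this mixin.
local k = import 'ksonnet-util/kausal.libsonnet';

{
  // With 2Gi requests, a 2x limit would be 4Gi; pick a figure that
  // matches your environment's actual compactor working set.
  compactor_container+::
    k.util.resourcesLimits(null, '4Gi'),
}
```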
LGTM
If compactor memory grows unbounded, the underlying node may issue a SIGTERM to stop it. In this case, the pod doesn't come back up and cluster performance is impacted. This PR introduces a memory limit for the compactor so it'll OOM and be restarted by k8s.
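For reference, a rough sketch of what the requests plus the new limit expand to, assuming the ksonnet-util helpers map straight onto the Kubernetes container `resources` block and that a null first argument to `resourcesLimits` skips the CPU limit:

```jsonnet
// Approximate expansion of resourcesRequests('4', '2Gi') +
// resourcesLimits(null, '32Gi') under the assumed ksonnet-util behavior.
{
  resources: {
    requests: { cpu: '4', memory: '2Gi' },
    // No cpu entry here: the null first argument to resourcesLimits
    // leaves the CPU limit unset, so only memory is bounded.
    limits: { memory: '32Gi' },
  },
}
```

Bounding only memory means the kubelet OOM-kills and restarts the pod when it exceeds 32Gi, rather than the node evicting it under memory pressure, while the compactor remains free to burst on CPU.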