# Set some default resource requests on the workspace pod #698
Zombie processes do seem to accumulate in the workspace pod, given a per-minute resync.

Likely related to pulumi/pulumi#17361.
EronWright added a commit that referenced this issue on Oct 7, 2024:
### Proposed changes

Implements good defaults for the workspace resource, using a ["burstable"](https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/#burstable) approach. Since a workspace pod's utilization is bursty, with low resource usage during idle times and high resource usage during deployment ops, the pod requests a small amount of resources (64Mi, 100m) so that it is able to idle. A deployment op is able to use much more memory, up to all available memory on the host.

Users may customize the resources (e.g. to apply different requests and/or limits). For large or complex Pulumi apps, it might make sense to reserve more memory and/or use #694.

The agent takes some pains to stay within the requested amount, using a programmatic form of the [GOMEMLIMIT](https://weaviate.io/blog/gomemlimit-a-game-changer-for-high-memory-applications) environment variable. The agent detects the requested amount via the Downward API. We don't use `GOMEMLIMIT` itself to avoid propagating it to sub-processes, and because its format is a Kubernetes 'quantity'.

It was also observed that zombie processes weren't being cleaned up, and this was leading to resource exhaustion. Fixed by using [tini](https://github.com/krallin/tini/) as the entrypoint process (PID 1).

### Related issues (optional)

Closes #698
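For illustration, a minimal sketch of the programmatic `GOMEMLIMIT` approach described above, assuming the memory request reaches the agent through a Downward API environment variable; the variable name `AGENT_MEMORY_REQUEST` and the 90% headroom factor are illustrative assumptions, not the PR's actual values:

```go
package main

import (
	"fmt"
	"os"
	"runtime/debug"

	"k8s.io/apimachinery/pkg/api/resource"
)

// applyMemoryLimit reads the container's memory request (a Kubernetes
// quantity such as "64Mi", injected via a Downward API resourceFieldRef)
// and sets the Go runtime's soft memory limit slightly below it, so the
// GC works harder before the kubelet or the kernel OOM-killer steps in.
// Calling debug.SetMemoryLimit in code avoids exporting GOMEMLIMIT to
// sub-processes such as the Pulumi CLI.
func applyMemoryLimit() error {
	raw := os.Getenv("AGENT_MEMORY_REQUEST") // hypothetical variable name
	if raw == "" {
		return nil // no request configured; keep the runtime default
	}
	q, err := resource.ParseQuantity(raw)
	if err != nil {
		return fmt.Errorf("parsing memory request %q: %w", raw, err)
	}
	debug.SetMemoryLimit(q.Value() * 90 / 100) // leave ~10% headroom
	return nil
}

func main() {
	if err := applyMemoryLimit(); err != nil {
		panic(err)
	}
	// ... run the agent ...
}
```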
github-project-automation bot moved this from In Progress to Done in Pulumi Kubernetes Operator v2 on Oct 7, 2024.
The Manager has limits on it already -- it currently has Guaranteed QoS.

Related to #694 and probably a pre-req -- set a small resource request to give the workspace pod Burstable QoS.
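A minimal sketch of such requests-only defaults, assuming the operator builds the workspace pod spec in Go (the function name is hypothetical; the 64Mi / 100m values are the defaults the linked PR describes):

```go
package workspace

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// defaultWorkspaceResources returns requests-only defaults. Setting
// requests without limits places the pod in the Burstable QoS class:
// enough is reserved for the workspace to idle (64Mi / 100m), while a
// deployment op can burst into whatever capacity the node has free.
func defaultWorkspaceResources() corev1.ResourceRequirements {
	return corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceMemory: resource.MustParse("64Mi"),
			corev1.ResourceCPU:    resource.MustParse("100m"),
		},
		// No Limits: omitting them keeps the pod out of the Guaranteed
		// class and avoids throttling or OOM at an artificial ceiling.
	}
}
```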
Additional considerations: keeping the agent's memory usage within the request (`SetMemoryLimit` in code?).