kube play: set service container as main PID when possible #17469


Merged
merged 1 commit into containers:main from fix-17345 on Feb 10, 2023

Conversation

vrothberg
Member

Commit 4fa307f fixed a number of issues in the sdnotify proxies. Whenever a container runs with a custom sdnotify policy, the proxies need to keep running, which in turn requires Podman to keep running and wait for the service container to stop. Improve on that behavior and set the service container as the main PID (instead of Podman) when no container needs sdnotify.

Fixes: #17345
Signed-off-by: Valentin Rothberg [email protected]

Does this PR introduce a user-facing change?

None

@rhatdan @ygalblum @alexlarsson PTAL
This optimization will also benefit Quadlet. If we plan on cutting a Podman v4.4.2, this may be a nice candidate.
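
For readers less familiar with the mechanism: "setting the service container as the main PID" relies on systemd's sd_notify protocol, where a Type=notify service sends MAINPID=<pid> so that systemd tracks that process instead of the one it originally forked. Below is a minimal sketch of that handshake in Go using github.com/coreos/go-systemd/v22/daemon; the helper name notifyMainPID and the use of the current process's PID are illustrative assumptions, not the actual Podman code (which, per the review discussion below, sends the conmon PID of the service container).

package main

import (
	"fmt"
	"log"
	"os"

	"github.com/coreos/go-systemd/v22/daemon"
)

// notifyMainPID tells a Type=notify systemd service which PID is the main
// process and that the service is ready. SdNotify is a no-op when
// NOTIFY_SOCKET is unset, so this is safe to call outside of systemd.
func notifyMainPID(pid int) error {
	state := fmt.Sprintf("MAINPID=%d\n%s", pid, daemon.SdNotifyReady)
	if _, err := daemon.SdNotify(false, state); err != nil {
		return fmt.Errorf("notifying systemd: %w", err)
	}
	return nil
}

func main() {
	// Demonstration only: hand systemd our own PID. In the PR, the PID
	// that is sent belongs to the service container's conmon process.
	if err := notifyMainPID(os.Getpid()); err != nil {
		log.Fatal(err)
	}
}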

@openshift-ci
Contributor

openshift-ci bot commented Feb 10, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: vrothberg

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Feb 10, 2023
@vrothberg
Member Author

What I find amazing: the issue was opened because the RAM usage increased when running kube play on a Raspberry Pi. Podman allows for running K8s workloads on a single Pi ... really cool

@vrothberg
Member Author

@TomSweeneyRedHat, can we still backport for RHEL or do we need an exception dance?

switch len(notifyProxies) {
case 0: // Optimization for containers/podman/issues/17345
	// No container needs sdnotify, so we can mark the
	// service container as the main PID and return early.
Member

I find this a bit confusing: I understand it as using the container's main PID, but it is actually the conmon PID.

Member Author

Thanks! Updated the comment.

case 0: // Optimization for containers/podman/issues/17345
	// No container needs sdnotify, so we can mark the
	// service container as the main PID and return early.
	data, err := serviceContainer.Inspect(false)
Member

Inspect is rather expensive since it locks and syncs the container state; I think it would be better to add an accessor for the ConmonPid.

Member Author

@vrothberg commented Feb 10, 2023

Nice idea! With the afternoon meetings ahead, I am running out of time but I can add an accessor in another PR.

We did the inspect before 4fa307f so it's at least not a regression.

Member

sounds good
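
For illustration, the accessor suggested above could look roughly like the following. This is a hypothetical sketch, not code from libpod or from this PR; the method name ConmonPID, the state field, and the locking pattern are assumptions modeled on how other libpod accessors are typically written.

// ConmonPID is a hypothetical accessor returning the container's cached
// conmon PID. Unlike a full Inspect, it does not assemble the complete
// inspect data; it only syncs the container state under the container lock.
func (c *Container) ConmonPID() (int, error) {
	if !c.batched {
		c.lock.Lock()
		defer c.lock.Unlock()
		if err := c.syncContainer(); err != nil {
			return 0, err
		}
	}
	return c.state.ConmonPID, nil
}

With such an accessor, the case 0 branch above could read the conmon PID directly instead of calling serviceContainer.Inspect(false).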

Commit 4fa307f fixed a number of issues in the sdnotify proxies.
Whenever a container runs with a custom sdnotify policy, the proxies
need to keep running which in turn required Podman to run and wait for
the service container to stop.  Improve on that behavior and set the
service container as the main PID (instead of Podman) when no container
needs sdnotify.

Fixes: containers#17345
Signed-off-by: Valentin Rothberg <[email protected]>
@Luap99
Member

Luap99 commented Feb 10, 2023

LGTM

@rhatdan
Member

rhatdan commented Feb 10, 2023

/lgtm
Nice work.

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Feb 10, 2023
@openshift-merge-robot openshift-merge-robot merged commit f099c1f into containers:main Feb 10, 2023
@vrothberg vrothberg deleted the fix-17345 branch February 10, 2023 15:31
umohnani8 added a commit to umohnani8/libpod that referenced this pull request Feb 21, 2023
If a service container was created for a pod, kube play
was no longer waiting on it to exit before returning.
Looks like this was introduced by
containers#17469.

The kube play --wait work will add tests that will help cover this.
Just want to fix this before anything is really affected.

[NO NEW TESTS NEEDED]

Signed-off-by: Urvashi Mohnani <[email protected]>
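
For context, the regression described in this follow-up commit is that kube play could return while the service container was still running. A minimal sketch of the kind of wait that restores the old behavior is shown below; the function name is illustrative and the exact libpod API may differ.

// waitForServiceContainer blocks until the pod's service container exits,
// so that kube play does not return while the workload is still running.
// Illustrative sketch only, assuming libpod's Container.Wait(ctx).
func waitForServiceContainer(ctx context.Context, svc *libpod.Container) (int32, error) {
	exitCode, err := svc.Wait(ctx)
	if err != nil {
		return -1, fmt.Errorf("waiting for service container %s: %w", svc.ID(), err)
	}
	return exitCode, nil
}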
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 10, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 10, 2023
Labels
approved, lgtm, locked - please file new issue/PR, release-note-none

Successfully merging this pull request may close these issues.

[Bug]: podman kube play stays in memory when running systemd service
4 participants