
Job controller optimization: reduce work duration time & minimize cache locking #132305


Merged
merged 1 commit into kubernetes:master on Jun 18, 2025

Conversation

xigang
Member

@xigang xigang commented Jun 14, 2025

What type of PR is this?

/kind cleanup

What this PR does / why we need it:

This improves performance and correctness at scale by minimizing the time a read lock is held on the informer cache, which reduces blockage of CacheController delta processing and lowers workqueue processing time per object/key.
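
For illustration, the lookup follows roughly this indexer-based pattern (a minimal sketch, not the exact code in this PR; the function name is illustrative):

// Minimal sketch: instead of listing and filtering every pod in the namespace
// while holding the cache's read lock, ask the indexer only for pods keyed by
// this Job's UID plus unowned (orphan) pods that may be adoptable.
func podsForJobByIndex(podIndexer cache.Indexer, j *batch.Job) ([]*v1.Pod, error) {
	pods := []*v1.Pod{}
	for _, key := range []string{string(j.UID), controller.OrphanPodIndexKey} {
		objs, err := podIndexer.ByIndex(controller.PodControllerUIDIndex, key)
		if err != nil {
			return nil, err
		}
		for _, obj := range objs {
			pod, ok := obj.(*v1.Pod)
			if !ok {
				continue
			}
			pods = append(pods, pod)
		}
	}
	return pods, nil
}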

Which issue(s) this PR is related to:

Fixes #130767

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Job controller uses controller UID index for pod lookups.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jun 14, 2025
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-priority Indicates a PR lacks a `priority/foo` label and requires one. label Jun 14, 2025
@k8s-ci-robot k8s-ci-robot requested review from kow3ns and tenzen-y June 14, 2025 10:56
@k8s-ci-robot k8s-ci-robot added the sig/apps Categorizes an issue or PR as relevant to SIG Apps. label Jun 14, 2025
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 14, 2025
@github-project-automation github-project-automation bot moved this to Needs Triage in SIG Apps Jun 14, 2025
@xigang xigang changed the title Optimize job controller performance: reduce work duration time & minimize cache locking Job controller optimization: reduce work duration time & minimize cache locking Jun 14, 2025
@xigang
Member Author

xigang commented Jun 14, 2025

/ok-to-test

@k8s-ci-robot k8s-ci-robot added the ok-to-test Indicates a non-member PR verified by an org member that is safe to test. label Jun 14, 2025
@xigang
Member Author

xigang commented Jun 15, 2025

/cc @mimowo @tenzen-y @kannon92

PTAL.

@k8s-ci-robot k8s-ci-robot requested review from kannon92 and mimowo June 15, 2025 14:11
@xigang
Member Author

xigang commented Jun 16, 2025

/assign @tenzen-y @kannon92

@mimowo
Contributor

mimowo commented Jun 17, 2025

@xigang do you have some measurements or experiments which would show the performance improvement?

@mimowo
Contributor

mimowo commented Jun 17, 2025

Please also add a release note and fix the linter.

// getJobPodsByIndexer returns the set of pods that this Job should manage.
func (jm *Controller) getJobPodsByIndexer(ctx context.Context, j *batch.Job) ([]*v1.Pod, error) {
	podsForJob := []*v1.Pod{}
	for _, key := range []string{string(j.UID), controller.OrphanPodIndexKey} {
Contributor

How do the orphan pods get populated in the index? Can you provide some reference?

Member Author

@xigang xigang Jun 17, 2025

Orphan pods are automatically indexed through the AddPodControllerUIDIndexer function in pkg/controller/controller_utils.go:1092-1105.

When a pod has no ControllerRef, it gets indexed under OrphanPodIndexKey instead of a specific controller UID.

This allows controllers to discover and potentially adopt pods that match their selectors but temporarily lack owner references.

Reference: a similar pattern is used by the StatefulSet, ReplicaSet, and DaemonSet controllers.
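
In rough terms, the index function behaves like this sketch (illustrative only; the real implementation is the AddPodControllerUIDIndexer function referenced above):

// Sketch of the current index-function behavior described above.
func podControllerUIDIndexFunc(obj interface{}) ([]string, error) {
	pod, ok := obj.(*v1.Pod)
	if !ok {
		return nil, nil
	}
	// Pods owned by a controller are indexed under that controller's UID.
	if ref := metav1.GetControllerOf(pod); ref != nil {
		return []string{string(ref.UID)}, nil
	}
	// Pods without a ControllerRef (orphans) all land under OrphanPodIndexKey,
	// so controllers can still discover candidates for adoption.
	return []string{controller.OrphanPodIndexKey}, nil
}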

Contributor

@mimowo mimowo Jun 18, 2025

Interesting, thank you for explaining.

Does it mean that we now list orphaned Pods from all namespaces?

If so, then it seems this may actually hurt performance in some environments with many namespaces and orphaned Pods (for example, when Pods are managed directly by external systems), since before we would only list Pods from a single namespace.

@hakuna-matatah @xigang Is such a situation covered by the benchmarks, or is it not a concern for us?

Maybe as a mitigation we could filter the Pods by namespace early, since we iterate over them anyway; wdyt @soltysh @wojtek-t @atiratree? If we change the semantics of the function to also return Pods from other namespaces, then we need to assume that the downstream code can cope with it. I'm not sure how well tested such scenarios are, so extra filtering by namespace seems useful anyway to maintain the semantics.

Or maybe, even better, there could be different keys for orphans in the indexer depending on the namespace?

As a follow-up, I argue we should commonize the functions (probably in pkg/controller/controller_utils.go), as this seems like non-trivial code duplication.

Member Author

@xigang xigang Jun 18, 2025

To distinguish orphan Pods based on namespace, the index can be built like this:

// AddPodControllerUIDIndexer adds an indexer for Pod's controllerRef.UID to the given PodInformer.
// This indexer is used to efficiently look up pods by their ControllerRef.UID
func AddPodControllerUIDIndexer(podInformer cache.SharedIndexInformer) error {
	if _, exists := podInformer.GetIndexer().GetIndexers()[PodControllerUIDIndex]; exists {
		// indexer already exists, do nothing
		return nil
	}
	return podInformer.AddIndexers(cache.Indexers{
		PodControllerUIDIndex: func(obj interface{}) ([]string, error) {
			pod, ok := obj.(*v1.Pod)
			if !ok {
				return nil, nil
			}
			// Get the ControllerRef of the Pod to check if it's managed by a controller
			if ref := metav1.GetControllerOf(pod); ref != nil {
				return []string{string(ref.UID)}, nil
			}
			// If the Pod has no controller (i.e., it's orphaned), index it with namespace-specific OrphanPodIndexKey
			// This helps identify orphan pods for reconciliation and adoption by controllers within specific namespaces
			return []string{OrphanPodIndexKey + "/" + pod.Namespace}, nil
		},
	})
}

However, this approach requires changes to the code of other controllers, such as ReplicaSet and DaemonSet.

Before:

uidKeys := []string{string(rs.UID), controller.OrphanPodIndexKey}

After:

uidKeys := []string{string(rs.UID), controller.OrphanPodIndexKey + "/" + rs.Namespace}

Contributor

@mimowo mimowo Jun 18, 2025

Yes, to have peace of mind that we are not worsening the scenario, I would suggest going this way and commonizing the function in controller_utils, but let's see what others say.

I'm ok to merge the PR as is if we follow up, but let's keep it open for now for visibility.

I'm also ok with extending this PR to also adjust the other controllers and expose the utility function in controller_utils.

Member Author

Let’s wait and see what others suggest first.

Member

I'm not super worried about too many orphaned pods, but I also like the change to the indexing suggested above. Given it's purely in-memory indexing, we can change that at any point in time.

So let's follow up on changing the value to also include the namespace: I would suggest merging the PR as is, and then switching all controllers to the new index in one shot.

Member Author

Okay, I can take care of the follow-up work on the new index.

Contributor

sgtm. I would also like the follow-up to ensure that there is no risk from the downstream code now having to deal with pods coming from other namespaces; the code may or may not be ready for it.

Member

+1, sounds like a nice improvement for all the controllers.

A small suggestion: we could make a function for constructing the orphan key

controller.OrphanPodIndexKey + "/" + rs.Namespace
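
A minimal sketch of such a helper (the name is hypothetical; presumably it would live next to OrphanPodIndexKey in pkg/controller/controller_utils.go):

// OrphanPodIndexKeyForNamespace (hypothetical name) builds the per-namespace
// orphan index key discussed above, so callers don't concatenate it inline.
func OrphanPodIndexKeyForNamespace(namespace string) string {
	return OrphanPodIndexKey + "/" + namespace
}

Call sites such as the ReplicaSet example above would then use controller.OrphanPodIndexKeyForNamespace(rs.Namespace).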

@xigang
Member Author

xigang commented Jun 17, 2025

do you have some measurements or experiments which would show the performance improvement?

This improves performance and correctness at scale by minimizing the time a read lock is held on the informer cache, which reduces blockage of CacheController delta processing and lowers workqueue processing time per object/key.

We did some benchmarking comparing indexing with simply listing the pods from the Store/Cache.

For 10k pods in the store, lookup via the index took ~30 µs vs ~320 µs for a full list.

For 500k pods in the store, lookup via the index took ~35 µs vs ~18 ms for a full list.

We can tell that listing from the store using the index is roughly 500x faster at the higher scale.

Following are the results:

=== RUN   TestGetPodsForDaemonSetUsesIndexer
    daemon_controller_test.go:922: ByIndex() fetched 100 pods in 30.252µs
    daemon_controller_test.go:930: List() fetched 10000 pods in 320.047µs
=== RUN   TestGetPodsForDaemonSetUsesIndexer
    daemon_controller_test.go:922: ByIndex() fetched 100 pods in 37.842µs
    daemon_controller_test.go:930: List() fetched 500000 pods in 18.080762ms

Optimizing this list call will provide the following benefits:

  • Reduced read-lock duration on the informer cache, allowing writes to proceed faster and reducing the chance of cache staleness at scale.
  • Lower workqueue processing time, enabling faster convergence to the desired state.
  • Decreased queue wait time, ensuring items are dispatched and processed more efficiently.

You can refer to this PR: #130961
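
For reference, a comparison along these lines can be reproduced with a small test sketch (illustrative only, not the actual daemon_controller_test.go test; names mirror pkg/controller for readability):

import (
	"fmt"
	"testing"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/cache"
	"k8s.io/kubernetes/pkg/controller"
)

func TestByIndexVsList(t *testing.T) {
	// Build an indexer that keys pods by their controller UID, with orphans under a shared key.
	indexer := cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{
		controller.PodControllerUIDIndex: func(obj interface{}) ([]string, error) {
			pod := obj.(*v1.Pod)
			if ref := metav1.GetControllerOf(pod); ref != nil {
				return []string{string(ref.UID)}, nil
			}
			return []string{controller.OrphanPodIndexKey}, nil
		},
	})

	ownerUID := types.UID("owner-uid")
	isController := true
	for i := 0; i < 10000; i++ {
		pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("pod-%d", i), Namespace: "default"}}
		if i%100 == 0 { // only a fraction of the pods belong to this owner
			pod.OwnerReferences = []metav1.OwnerReference{{UID: ownerUID, Controller: &isController}}
		}
		if err := indexer.Add(pod); err != nil {
			t.Fatal(err)
		}
	}

	start := time.Now()
	byIndex, err := indexer.ByIndex(controller.PodControllerUIDIndex, string(ownerUID))
	if err != nil {
		t.Fatal(err)
	}
	t.Logf("ByIndex() fetched %d pods in %v", len(byIndex), time.Since(start))

	start = time.Now()
	all := indexer.List()
	t.Logf("List() fetched %d pods in %v", len(all), time.Since(start))
}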

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. and removed release-note-none Denotes a PR that doesn't merit a release note. labels Jun 17, 2025
@xigang
Member Author

xigang commented Jun 17, 2025

Please also add a release note and fix the linter.

@mimowo Done.

@xigang
Member Author

xigang commented Jun 17, 2025

@mimowo For approval.

@@ -799,6 +809,27 @@ func (jm *Controller) getPodsForJob(ctx context.Context, j *batch.Job) ([]*v1.Po
return pods, err
}

// getJobPodsByIndexer returns the set of pods that this Job should manage.
func (jm *Controller) getJobPodsByIndexer(ctx context.Context, j *batch.Job) ([]*v1.Pod, error) {
Contributor

nit: ctx is unused here.

Member Author

Nice catch!👍🏻

if err != nil {
return nil, err
}

Contributor

nit: revert unnecessary formatting changes

Member Author

Done.

@mimowo
Contributor

mimowo commented Jun 18, 2025

/hold
To clarify #132305 (comment)

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jun 18, 2025
@mimowo
Contributor

mimowo commented Jun 18, 2025

/unhold
/lgtm
/approve
as suggested by @wojtek-t

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jun 18, 2025
@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 18, 2025
@k8s-ci-robot
Contributor

LGTM label has been added.

Git tree hash: 607be3aa28f793e5a026f7f158e94b11b49b9c6d

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mimowo, xigang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 18, 2025
@k8s-ci-robot k8s-ci-robot merged commit 8f1f17a into kubernetes:master Jun 18, 2025
13 checks passed
@k8s-ci-robot k8s-ci-robot added this to the v1.34 milestone Jun 18, 2025
@github-project-automation github-project-automation bot moved this from Needs Triage to Done in SIG Apps Jun 18, 2025