Job controller optimization: reduce work duration time & minimize cache locking #132305
Conversation
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/ok-to-test
@xigang do you have some measurements or experiments which would show the performance improvement?
Please also post a release note and fix the linter.
```go
// getJobPodsByIndexer returns the set of pods that this Job should manage.
func (jm *Controller) getJobPodsByIndexer(ctx context.Context, j *batch.Job) ([]*v1.Pod, error) {
	podsForJob := []*v1.Pod{}
	for _, key := range []string{string(j.UID), controller.OrphanPodIndexKey} {
```
How do the orphan pods get populated in the index? Can you provide a reference?
Orphan pods are automatically indexed through the AddPodControllerUIDIndexer function in pkg/controller/controller_utils.go:1092-1105.
When a pod has no ControllerRef, it gets indexed under OrphanPodIndexKey instead of a specific controller UID.
This allows controllers to discover and potentially adopt pods that match their selectors but temporarily lack owner references.
Reference: the same pattern is used by the StatefulSet, ReplicaSet, and DaemonSet controllers.
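For reference, a minimal sketch of how such an indexer-based lookup works; the `podIndexer` field name is an assumption for illustration, not the PR's verbatim code:

```go
// Sketch: fetch pods for a Job via the shared informer's index instead of
// listing the whole store. Two index buckets are consulted: pods whose
// ControllerRef.UID matches the Job, and orphan pods (no ControllerRef).
func (jm *Controller) jobPodsViaIndex(j *batch.Job) ([]*v1.Pod, error) {
	podsForJob := []*v1.Pod{}
	for _, key := range []string{string(j.UID), controller.OrphanPodIndexKey} {
		objs, err := jm.podIndexer.ByIndex(controller.PodControllerUIDIndex, key)
		if err != nil {
			return nil, err
		}
		for _, obj := range objs {
			if pod, ok := obj.(*v1.Pod); ok {
				podsForJob = append(podsForJob, pod)
			}
		}
	}
	return podsForJob, nil
}
```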
Interesting, thank you for explaining.
Does this mean that we now list orphaned Pods from all namespaces?
If so, it seems this may actually hurt performance in environments with many namespaces and orphaned Pods (for example, when Pods are managed directly by external systems), since before we would only list Pods from a single namespace.
@hakuna-matatah @xigang Is such a situation covered by the benchmarks, or is it not a concern for us?
Maybe as a mitigation we could filter the Pods by namespace early, since we iterate over them anyway, wdyt @soltysh @wojtek-t @atiratree? If we change the semantics of the function to also return Pods from other namespaces, then we need to assume the downstream code can cope with it. I'm not sure how well tested such scenarios are, so extra filtering by namespace seems useful anyway to maintain the old semantics (see the sketch below).
Or maybe, even better, there could be different keys for orphans in the indexer depending on the namespace?
As a follow-up, I'd argue we should commonize the functions (probably in pkg/controller/controller_utils.go), as this seems like non-trivial code duplication.
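A minimal sketch of that early namespace filter, assuming the inner loop of the indexer-based lookup shown earlier:

```go
for _, obj := range objs {
	pod, ok := obj.(*v1.Pod)
	if !ok {
		continue
	}
	// Orphan pods are indexed under a single global key, so filter out
	// pods from other namespaces early to preserve the previous
	// single-namespace semantics of getPodsForJob.
	if pod.Namespace != j.Namespace {
		continue
	}
	podsForJob = append(podsForJob, pod)
}
```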
To distinguish orphan Pods based on namespace, the index can be built like this:
```go
// AddPodControllerUIDIndexer adds an indexer for Pod's controllerRef.UID to the given PodInformer.
// This indexer is used to efficiently look up pods by their ControllerRef.UID.
func AddPodControllerUIDIndexer(podInformer cache.SharedIndexInformer) error {
	if _, exists := podInformer.GetIndexer().GetIndexers()[PodControllerUIDIndex]; exists {
		// indexer already exists, do nothing
		return nil
	}
	return podInformer.AddIndexers(cache.Indexers{
		PodControllerUIDIndex: func(obj interface{}) ([]string, error) {
			pod, ok := obj.(*v1.Pod)
			if !ok {
				return nil, nil
			}
			// Get the ControllerRef of the Pod to check if it's managed by a controller.
			if ref := metav1.GetControllerOf(pod); ref != nil {
				return []string{string(ref.UID)}, nil
			}
			// If the Pod has no controller (i.e., it's orphaned), index it under a
			// namespace-specific OrphanPodIndexKey. This lets controllers discover and
			// potentially adopt orphan pods within a specific namespace.
			return []string{OrphanPodIndexKey + "/" + pod.Namespace}, nil
		},
	})
}
```
However, this approach requires changes to the code of other controllers, such as ReplicaSet and DaemonSet.
Before:
```go
uidKeys := []string{string(rs.UID), controller.OrphanPodIndexKey}
```
After:
```go
uidKeys := []string{string(rs.UID), controller.OrphanPodIndexKey + "/" + rs.Namespace}
```
Yes, to have peace of mind that we are not worsening the scenario, I would suggest going this way and commonizing the function in controller_utils, but let's see what others say.
I'm OK with merging the PR as is if we follow up, but let's keep this open for visibility.
I'm also OK with turning this PR into one that also adjusts the previous controllers and exposes the utility function in controller_utils.
Let’s wait and see what others suggest first.
I'm not super worried about too many orphaned pods, but I also like the change to the indexing suggested above. Given it's purely in-memory indexing we can change that at any point in time.
So let's follow up on changing the value to also include the namespace. I would suggest merging the PR as is, and then switching all controllers to the new index in one shot.
Okay, I can take care of the follow-up work on the new index.
sgtm. I would also like the follow-up to ensure that there is no risk in the downstream code now having to deal with pods coming from other namespaces; the code may or may not be ready for it.
+1, sounds like a nice improvement for all the controllers.
A small suggestion: we could make a function for constructing the orphan key `controller.OrphanPodIndexKey + "/" + rs.Namespace`.
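For example, something along these lines could live next to the indexer in pkg/controller/controller_utils.go (the helper name here is hypothetical):

```go
// OrphanPodIndexKeyForNamespace returns the orphan-pod index key scoped to a
// namespace, so the indexer and all controllers agree on the key format.
// (hypothetical helper, sketching the suggestion above)
func OrphanPodIndexKeyForNamespace(namespace string) string {
	return OrphanPodIndexKey + "/" + namespace
}
```

Both the indexer function and each controller's lookup code would then call the helper instead of concatenating strings inline.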
This improves performance and correctness at scale by minimizing the time a read lock is held on the cache, which reduces blockage of CacheController delta processing. It also helps lower workqueue processing time per object/key. We did some benchmarking comparing the indexed lookup with simply listing the pods from the Store/Cache:
- For 10k pods in the store, using the Index took ~30 µs vs ~320 µs for a full list.
- For 500k pods in the store, using the Index took ~35 µs vs ~18.08 ms for a full list.
At the higher scale, listing from the store using the Index is roughly 500x faster.
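A self-contained sketch that reproduces this kind of comparison with client-go's cache.Indexer; pod counts and the key layout are illustrative, and absolute timings will differ by machine:

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/cache"
)

const podControllerUIDIndex = "podControllerUID"

func main() {
	indexer := cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{
		podControllerUIDIndex: func(obj interface{}) ([]string, error) {
			pod := obj.(*v1.Pod)
			if ref := metav1.GetControllerOf(pod); ref != nil {
				return []string{string(ref.UID)}, nil
			}
			return []string{"orphan"}, nil
		},
	})

	// Populate 500k pods spread across 10k controller UIDs.
	isController := true
	for i := 0; i < 500_000; i++ {
		uid := types.UID(fmt.Sprintf("job-%d", i%10_000))
		indexer.Add(&v1.Pod{ObjectMeta: metav1.ObjectMeta{
			Name:      fmt.Sprintf("pod-%d", i),
			Namespace: "default",
			OwnerReferences: []metav1.OwnerReference{{
				UID:        uid,
				Controller: &isController,
			}},
		}})
	}

	// Indexed lookup: touches only the bucket for one controller UID.
	start := time.Now()
	fromIndex, _ := indexer.ByIndex(podControllerUIDIndex, "job-42")
	fmt.Printf("ByIndex: %d pods in %v\n", len(fromIndex), time.Since(start))

	// Full list + filter: scans every pod in the store.
	start = time.Now()
	matched := 0
	for _, obj := range indexer.List() {
		pod := obj.(*v1.Pod)
		if ref := metav1.GetControllerOf(pod); ref != nil && ref.UID == "job-42" {
			matched++
		}
	}
	fmt.Printf("List+filter: %d pods in %v\n", matched, time.Since(start))
}
```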
Optimizing this list call will provide the benefits described above. You can refer to this PR: #130961
@mimowo Done.
@mimowo For approval.
pkg/controller/job/job_controller.go (Outdated)
```diff
@@ -799,6 +809,27 @@ func (jm *Controller) getPodsForJob(ctx context.Context, j *batch.Job) ([]*v1.Pod, error) {
 	return pods, err
 }

 // getJobPodsByIndexer returns the set of pods that this Job should manage.
 func (jm *Controller) getJobPodsByIndexer(ctx context.Context, j *batch.Job) ([]*v1.Pod, error) {
```
nit: ctx is unused here.
Nice catch!👍🏻
pkg/controller/job/job_controller.go (Outdated)
```go
	if err != nil {
		return nil, err
	}
```
nit: revert unnecessary formatting changes
Done.
/hold
Job controller optimization: reduce work duration time & minimize cache locking
Signed-off-by: xigang <[email protected]>
/unhold
LGTM label has been added. Git tree hash: 607be3aa28f793e5a026f7f158e94b11b49b9c6d
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: mimowo, xigang
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
This improves performance and correctness at scale by minimizing the time a read lock is held on the cache, which reduces blockage of CacheController delta processing. It also helps lower workqueue processing time per object/key.
Which issue(s) this PR is related to:
Fixes #130767
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: