CRD upgrade race condition #10032
Comments
/assign
I think this is not correct. We should get stuck earlier in the func, here:

```go
func (m *mapper) findAPIGroupByName(groupName string) (*metav1.APIGroup, error) {
	// Looking in the cache first.
	{
		m.mu.RLock()
		group, ok := m.apiGroups[groupName]
		m.mu.RUnlock()
		if ok {
			return group, nil
		}
```

We'll take a closer look to see if we can fix the lazy restmapper in CR.
Sorry, you're right. It hides/ignores the error. But I still think the lazy restmapper should be improved :)
I'm happy to look into the lazy rest mapper and make changes. What would you suggest? A possible improvement I can see is to remove the stale cached entry when we hit this error. A more robust option would be to implement some kind of cache invalidation: either triggered by errors or periodically (or both), refresh the cached groups and versions.
@g-gaston, @fabriziopandini and I met in a Zoom call and discussed options. This is a rough summary of the issues in the lazy restmapper in CR. The high-level problem is that once we have a certain apiGroup in m.apiGroups we never refresh it again. This can lead to problems if either 1. an apiVersion is removed or 2. an apiVersion is added.

1. apiVersion is removed

The specific problem we're hitting here is that fetchGroupVersionResources will fail in CR v0.16.3. In CR >= v0.17.0 we ignore the error, but we are still calling ServerResourcesForGroupVersion for the missing apiVersion, which is not ideal. We would suggest removing the specific apiGroup from m.apiGroups as soon as we get the not found error (see the sketch below). Getting a not found on a specific version of an apiGroup signals pretty clearly that the apiGroup we cached is outdated (i.e. it contains a version that doesn't exist anymore). On the next call of findAPIGroupByName, m.apiGroups will be refreshed and everything recovers by itself. @g-gaston will open a PR in CR with a proposed fix. At this point this seems like a non-breaking change to me, and we'll ask if we can backport it into v0.16 & v0.17 (so we can get this issue fixed in CAPI).

2. apiVersion is added

In this case we won't get any errors like in case 1, but the RestMappings returned by the mapper would simply be missing the apiVersion. A potential fix could be to add a TTL to m.apiGroups, e.g. so we call ServerGroups roughly every 5 minutes. This shouldn't be a performance issue I think.
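A minimal sketch of the eviction idea from case 1, assuming a restmapper-like cache keyed by group name; the struct and method names here only mirror the snippet quoted earlier and are simplified, not the actual controller-runtime code:

```go
package restmapperfix

import (
	"fmt"
	"sync"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/discovery"
)

// mapper is a simplified stand-in for the lazy restmapper's cache; the field
// names mirror the snippet quoted above.
type mapper struct {
	mu        sync.RWMutex
	client    discovery.DiscoveryInterface
	apiGroups map[string]*metav1.APIGroup
}

// resourcesForGroup fetches resources for every cached version of a group.
// The proposed fix for case 1: when discovery returns NotFound for one of the
// cached versions, evict the whole cached APIGroup so the next call to
// findAPIGroupByName re-discovers it via ServerGroups and recovers on its own.
func (m *mapper) resourcesForGroup(group *metav1.APIGroup) error {
	for _, v := range group.Versions {
		if _, err := m.client.ServerResourcesForGroupVersion(v.GroupVersion); err != nil {
			if apierrors.IsNotFound(err) {
				// The cached group is stale: it lists a version that is no
				// longer served. Drop it instead of failing forever.
				m.mu.Lock()
				delete(m.apiGroups, group.Name)
				m.mu.Unlock()
			}
			return fmt.Errorf("failed to get resources for %s: %w", v.GroupVersion, err)
		}
	}
	return nil
}
```

The same cache entry could also carry a timestamp to implement the TTL-based refresh suggested for case 2.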
Let's keep this issue open for tracking, but the work will happen in CR.
Yeah, makes sense to keep it open here. At the very minimum we have to bump CR, and we want to verify that the issue is fixed in CAPI. Worst case, we might want to consider implementing a workaround if we can't get a fix into CR v0.16.x.
/priority important-soon
I think yes, @g-gaston?
Oh yes! This is done done.
@g-gaston: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What steps did you take and what happened?
Upgrade capi components with `clusterctl` from `v1.5.2` to `v1.6.0`.

`clusterctl upgrade apply` completes successfully. However, on the next change I make to the cluster, reconciliation for Machines gets stuck due to an error when reading the provider machine object. This error repeats in a loop and is not resolved until the container is restarted. Once the container is restarted, the reconciliation continues without any other issue.
This error also appears (sometimes) in the MachineSet and MachineDeployment controllers.
Note: I haven't been able to replicate this using the official capi artifacts, only using EKS-A's fork. I believe this is just because of the nature of the race condition and not because of a difference in code (I haven't been able to find even one patch in the code path of this issue).
What did you expect to happen?
All controllers should be able to continue reconciliation after upgrading with `clusterctl`.

Cluster API version
`v1.5.2` to `v1.6.0`

Kubernetes version
1.27
Anything else you would like to add?
TLDR: capi `v1.6.0` marks the `v1alpha4` apis as not served. The problem comes from `controller-runtime` caching the results of the call to list `APIGroup`s. If this call returns `v1alpha4` as one of the available versions for `infrastructure.cluster.x-k8s.io`, the client will try to get the `APIResource` definition for `v1alpha4`. If the call to get this APIResource is made after the api is marked as not served, the client will receive a not found error. Since `v1alpha4` has already been cached as available, the client will keep making this call and receive the same error forever.

This is the stack for the previously mentioned error:
The first time a resource from a particular group is requested, the restmapper for the client detects that the group hasn't been seen before and fetches the definitions for all the available versions of that group. The rest mapper caches both the group and the versions for that group.
If there is an error trying to get one of the available versions, the rest mapper aborts and returns an error. The restmapper doesn't distinguish between different errors, so a 404 will result in this call failing until either the group is re-created or the cache is invalidated (the only way to do that is to restart the program).
The issue here is that if the first call to get the available versions returns a version that has been (or will be soon) deleted, the client becomes incapable of making requests for any resource of that group, independently of the version.
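To make that failure mode concrete, here is a simplified, illustrative sketch of the lookup flow just described (not the actual controller-runtime implementation, although `ServerGroups` and `ServerResourcesForGroupVersion` are the real client-go discovery calls):

```go
package restmappersketch

import (
	"fmt"
	"sync"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/discovery"
)

type lazyCache struct {
	mu        sync.RWMutex
	apiGroups map[string]*metav1.APIGroup
}

// resourcesFor illustrates why a single stale version poisons the whole group:
// the group (and its version list) is cached forever, and an error while
// fetching any one version aborts the lookup for every version.
func (c *lazyCache) resourcesFor(dc discovery.DiscoveryInterface, groupName string) error {
	// Cached group entries are never refreshed once stored.
	c.mu.RLock()
	group, ok := c.apiGroups[groupName]
	c.mu.RUnlock()

	if !ok {
		// First time we see this group: list all groups and cache this one,
		// including whatever versions the API server reports right now.
		groups, err := dc.ServerGroups()
		if err != nil {
			return err
		}
		for i := range groups.Groups {
			if groups.Groups[i].Name == groupName {
				group = &groups.Groups[i]
				break
			}
		}
		if group == nil {
			return fmt.Errorf("failed to find API group %q", groupName)
		}
		c.mu.Lock()
		c.apiGroups[groupName] = group
		c.mu.Unlock()
	}

	// Fetch resources for every cached version. If one of them was marked
	// not served after the group was cached (the race in this issue), this
	// returns a 404 on every call until the process restarts.
	for _, v := range group.Versions {
		if _, err := dc.ServerResourcesForGroupVersion(v.GroupVersion); err != nil {
			return fmt.Errorf("failed to get resources for %s: %w", v.GroupVersion, err)
		}
	}
	return nil
}
```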
My hypothesis here is that the first call to get the APIGroup is made either just before the CRDs are updated, or immediately after but before the kube-apiserver cache is refreshed. This call then returns `v1alpha4`. And immediately after, the restmapper's call to get the APIResource for `infrastructure.cluster.x-k8s.io/v1alpha4` returns a not found error, since this version has already been marked as not served in the CRD.

IMO the long term solution is to not fail if a cached group version is not found. Actually, this has already been implemented in controller-runtime. It has been released as part of `v0.17.0`, but since it's marked as a breaking change, it doesn't seem like it will be backported to `v0.16`. We already bumped controller-runtime to `v0.17.0`, but it won't be backported to our `v1.6` branch, so we can't leverage this fix.

I propose to update `external.Get` to be able to identify this issue and restart the controller when it happens (a rough sketch of this idea follows below). This restart should happen at most once (immediately after a CRD upgrade) and I suspect it won't be frequent, since it seems that until now I'm the only one who has faced this race condition.

In addition, we could change the upgrade logic in `clusterctl` to:

This wouldn't be enough to guarantee the issue doesn't happen, so I don't think this can be an alternative to restarting the controller when this issue happens. Given this change is more involved and this is only required as a short-term solution for the v1.6 releases, I would vote to only implement the first change for now.
Label(s) to be applied
/kind bug
/area clusterctl