Description
We noticed that Endpoints sometimes contains stale data when a deployment scales out and in frequently. The problem does not occur if we use "service-name.namespace.svc.cluster_name:8080" instead of "kubernetes:///service-name.namespace:8080".
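For context, a minimal sketch of the two dial targets being compared. kuberesolver.RegisterInCluster() registers the "kubernetes" scheme; the rest of the wiring (insecure credentials, error handling) is an assumption for illustration only, not our actual client code.

package main

import (
    "log"

    "github.com/sercand/kuberesolver/v5"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
)

func main() {
    // kuberesolver path: register the "kubernetes" scheme and dial the service by
    // name and namespace. The resolver watches the service's Endpoints object, so
    // stale Endpoints data becomes stale backend addresses.
    kuberesolver.RegisterInCluster()
    kubeConn, err := grpc.Dial(
        "kubernetes:///service-name.namespace:8080",
        grpc.WithTransportCredentials(insecure.NewCredentials()),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer kubeConn.Close()

    // DNS path: gRPC's default resolver asks cluster DNS for the service name.
    // This path does not go through the Endpoints object and does not show the
    // staleness described above.
    dnsConn, err := grpc.Dial(
        "service-name.namespace.svc.cluster_name:8080",
        grpc.WithTransportCredentials(insecure.NewCredentials()),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer dnsConn.Close()
}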
After digging into this, we noticed that there is a difference between the Endpoints and EndpointSlices of the same service: the EndpointSlices contain the correct IPs, but the Endpoints object has some stale ones. I found a couple of stale endpoints that belonged to pods killed a day ago.
Endpoints:
devbox% kubectl get endpoints service-name -n namespace
NAME ENDPOINTS AGE
service-name 10.192.1.23:8080,10.192.1.26:8080,10.192.1.27:8080 + 997 more... 27h
Endpointslices:
devbox% kubectl get endpointslices -n namespace | grep service-name
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
service-name-f9cm4 IPv4 8080 10.192.118.170,10.192.1.18,10.192.17.44 + 3 more... 5h43m
Since kuberesolver uses the Endpoints API to get the endpoints, this explains why we get stale endpoints when kuberesolver resolves the URL "kubernetes:///service-name.namespace:8080" (see kuberesolver/kubernetes.go, lines 155 to 198 at b382846):
// kuberesolver builds a watch request against the core v1 Endpoints API for the
// target service, then streams watch events from the response body.
u, err := url.Parse(fmt.Sprintf("%s/api/v1/watch/namespaces/%s/endpoints/%s",
    client.Host(), namespace, targetName))
if err != nil {
    return nil, err
}
req, err := client.GetRequest(u.String())
if err != nil {
    return nil, err
}
req = req.WithContext(ctx)
resp, err := client.Do(req)
if err != nil {
    return nil, err
}
if resp.StatusCode != http.StatusOK {
    defer resp.Body.Close()
    return nil, fmt.Errorf("invalid response code %d for service %s in namespace %s", resp.StatusCode, targetName, namespace)
}
return newStreamWatcher(resp.Body), nil
}
Version
We are using github.com/sercand/kuberesolver/v5 v5.1.1
Suggestion
I think this issue could be fixed by using the EndpointSlices API instead of the Endpoints API to get the endpoints. This should be a fairly easy fix; I could do it if there is no objection. A rough sketch of the idea follows.
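To outline the idea: EndpointSlices are not named after the service, so the watch would have to select them by the standard kubernetes.io/service-name label. The sketch below mirrors the excerpt above and reuses its client, ctx, targetName, and newStreamWatcher; it is only an assumption about how the change could look, not kuberesolver's actual code, and the stream watcher would additionally have to decode discovery.k8s.io/v1 EndpointSlice objects instead of v1 Endpoints.

// Sketch only: watch the EndpointSlices that belong to the target service,
// selected via the standard "kubernetes.io/service-name" label, because one
// service can own several slices.
selector := url.QueryEscape("kubernetes.io/service-name=" + targetName)
u, err := url.Parse(fmt.Sprintf(
    "%s/apis/discovery.k8s.io/v1/namespaces/%s/endpointslices?watch=true&labelSelector=%s",
    client.Host(), namespace, selector))
if err != nil {
    return nil, err
}
req, err := client.GetRequest(u.String())
if err != nil {
    return nil, err
}
req = req.WithContext(ctx)
resp, err := client.Do(req)
if err != nil {
    return nil, err
}
if resp.StatusCode != http.StatusOK {
    defer resp.Body.Close()
    return nil, fmt.Errorf("invalid response code %d for endpointslices of service %s in namespace %s",
        resp.StatusCode, targetName, namespace)
}
return newStreamWatcher(resp.Body), nil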