Deprecate status.nodeInfo.kubeProxyVersion field #4004
Comments
/sig node
/cc @thockin
/milestone v1.28
What's the status on this? Alpha in 1.29?
Yes, we'll be in Alpha at v1.29
/milestone v1.29
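(Side note for anyone who wants to try this while it is alpha: a minimal sketch of enabling the gate on a test node. It assumes the `DisableNodeKubeProxyVersion` feature gate named later in this thread is consumed by the kubelet, since the kubelet is the component that populates `.status.nodeInfo`; `<other-flags>` stands in for your usual kubelet flags.)

```
# Sketch only, for a test cluster: run the kubelet with the alpha gate enabled.
# Assumption: DisableNodeKubeProxyVersion is a kubelet-side gate, since the
# kubelet is what reports .status.nodeInfo in the Node status.
kubelet --feature-gates=DisableNodeKubeProxyVersion=true <other-flags>
```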
Hello @HirazawaUi 👋, 1.29 Enhancements team here! Just checking in as we approach enhancements freeze on 01:00 UTC, Friday 6th October 2023. This enhancement is targeting stage `alpha` for v1.29. Here's where this enhancement currently stands:
The status of this enhancement is marked as
Hi @HirazawaUi 👋, v1.29 Communications Release Team here. I would like to check whether you have any plans to publish a blog post about new features, removals, or deprecations for this release. If so, you need to open a placeholder PR in the website repository.
OK, I've opened a placeholder PR in the website repository, thanks for the tip!
Hi @HirazawaUi 👋, v1.29 Docs Shadow here |
* Allow instantiating v1.31 Kubernetes clients
* Update `README.md` and `docs/usage/supported_k8s_versions.md` for K8s 1.31
* Maintain added feature gates

  ```
  ./hack/compare-k8s-feature-gates.sh 1.30 1.31
  Feature gates added in 1.31 compared to 1.30:
  AllowDNSOnlyNodeCSR
  AllowInsecureKubeletCertificateSigningRequests
  AnonymousAuthConfigurableEndpoints
  AuthorizeNodeWithSelectors
  AuthorizeWithSelectors
  ConcurrentWatchObjectDecode
  CoordinatedLeaderElection
  DRAControlPlaneController
  DisableAllocatorDualWrite
  ImageVolume
  ReloadKubeletServerCertificateFile
  ResilientWatchCacheInitialization
  ResourceHealthStatus
  SupplementalGroupsPolicy
  WatchCacheInitializationPostStartHook
  ```

* Maintain removed feature gates

  ```
  ./hack/compare-k8s-feature-gates.sh 1.30 1.31
  Feature gates removed in 1.31 compared to 1.30:
  APIPriorityAndFairness
  CSIMigrationRBD
  CSINodeExpandSecret
  ConsistentHTTPGetHandlers
  CustomResourceValidationExpressions
  DefaultHostNetworkHostPortsInPodTemplates
  InTreePluginAWSUnregister
  InTreePluginAzureDiskUnregister
  InTreePluginAzureFileUnregister
  InTreePluginGCEUnregister
  InTreePluginOpenStackUnregister
  InTreePluginRBDUnregister
  InTreePluginvSphereUnregister
  JobReadyPods
  ReadWriteOncePod
  ServiceNodePortStaticSubrange
  SkipReadOnlyValidationGCE
  ```

* Maintain locked to default feature gates

  ```
  ./hack/compare-k8s-feature-gates.sh 1.30 1.31
  Feature gates locked to default in 1.31 compared to 1.30:
  AppArmor                                 Default: true
  AppArmorFields                           Default: true
  DevicePluginCDIDevices                   Default: true
  DisableCloudProviders                    Default: true
  DisableKubeletCloudCredentialProviders   Default: true
  ElasticIndexedJob                        Default: true
  JobPodFailurePolicy                      Default: true
  KubeProxyDrainingTerminatingNodes        Default: true
  LogarithmicScaleDown                     Default: true
  PDBUnhealthyPodEvictionPolicy            Default: true
  PersistentVolumeLastPhaseTransitionTime  Default: true
  PodDisruptionConditions                  Default: true
  StatefulSetStartOrdinal                  Default: true
  ```

* Maintain admission plugins

  ```
  ./hack/compare-k8s-admission-plugins.sh 1.30 1.31
  Admission plugins added in 1.31 compared to 1.30:

  Admission plugins removed in 1.31 compared to 1.30:
  PersistentVolumeLabel
  ```

* Maintain API groups

  ```
  ./hack/compare-k8s-api-groups.sh 1.30 1.31
  Kubernetes API group versions added in 1.31 compared to 1.30:
  coordination.k8s.io/v1alpha1
  resource.k8s.io/v1alpha3

  Kubernetes API GVRs added in 1.31 compared to 1.30:
  coordination.k8s.io/v1alpha1/leasecandidates
  networking.k8s.io/v1beta1/ipaddresses
  networking.k8s.io/v1beta1/servicecidrs
  resource.k8s.io/v1alpha3/deviceclasses
  resource.k8s.io/v1alpha3/podschedulingcontexts
  resource.k8s.io/v1alpha3/resourceclaims
  resource.k8s.io/v1alpha3/resourceclaimtemplates
  resource.k8s.io/v1alpha3/resourceslices
  storage.k8s.io/v1beta1/volumeattributesclasses

  Kubernetes API group versions removed in 1.31 compared to 1.30:
  resource.k8s.io/v1alpha2

  Kubernetes API GVRs removed in 1.31 compared to 1.30:
  resource.k8s.io/v1alpha2/podschedulingcontexts
  resource.k8s.io/v1alpha2/resourceclaimparameters
  resource.k8s.io/v1alpha2/resourceclaims
  resource.k8s.io/v1alpha2/resourceclaimtemplates
  resource.k8s.io/v1alpha2/resourceclasses
  resource.k8s.io/v1alpha2/resourceclassparameters
  resource.k8s.io/v1alpha2/resourceslices
  ```

* Maintain kube-controller-manager controllers

  ```
  ./hack/compute-k8s-controllers.sh 1.30 1.31
  kube-controller-manager controllers added in 1.31 compared to 1.30:
  Added Controllers for API Group [networking/v1beta1]: service-cidr-controller
  Added Controllers for API Group [resource/v1alpha3]: resourceclaim-controller

  kube-controller-manager controllers removed in 1.31 compared to 1.30:
  Removed Controllers for API Group [networking/v1alpha1]: service-cidr-controller
  Removed Controllers for API Group [resource/v1alpha2]: resourceclaim-controller
  ```

* [no-op] Maintain copies of the DaemonSet controller's scheduling logic
* Add K8s 1.31 to the local CloudProfile
* tests: Don't check for the Node's `.status.nodeInfo.kubeProxyVersion` field

  The `.status.nodeInfo.kubeProxyVersion` field has been a lie since its initial introduction: it is set by the kubelet, which cannot know the kube-proxy version, or whether kube-proxy is running at all. The `DisableNodeKubeProxyVersion` feature gate is enabled by default since K8s 1.31, and the field is set to the empty string in the Node status. For the reasons above, there is no added value in checking this field in the upgrade tests.

  Ref kubernetes/enhancements#4004

* Add version constraints for K8s 1.31
* maintenance controller: Set `.spec.kubernetes.kubeAPIServer.oidcConfig.clientAuthentication=nil` when doing a forceful update to K8s 1.31+
* maintenance controller: Move `kubernetes.kubelet.systemReserved` to `kubernetes.kubelet.kubeReserved` when doing a forceful update to K8s 1.31+
* Nit: Use the `KubeProxyEnabled` helper func instead of duplicating the same logic
* Update the e2e tests section in `docs/development/new-kubernetes-version.md`
* Update the provider extensions instructions in `docs/development/new-kubernetes-version.md`
* Remove unnecessary logic for the `KubeletCgroupDriverFromCRI` feature gate

  Enabling the feature gate alone is not enough for the new auto-detection flow to be used; the feature also depends on a new CRI API that will be present only in containerd 2.0+. Additionally, even if a cgroup driver is specified and the new flow auto-detects another one from the CRI, the kubelet will ignore the specified cgroup driver and use the auto-detected one.

* Default kubelet's and containerd's cgroup driver to `systemd` for K8s 1.31+
* Update the local Garden Kubernetes version to 1.31
* Nit: Do not log that admission is being deployed on `make gardener-extensions-down`
* Order API groups alphabetically
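(Not part of the PR above, just an illustrative check: a minimal sketch of inspecting what each node reports for the deprecated field. On clusters where `DisableNodeKubeProxyVersion` is enabled, the value comes back as an empty string.)

```
# Print each node's name alongside its reported .status.nodeInfo.kubeProxyVersion.
# With DisableNodeKubeProxyVersion enabled, the second column is empty.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kubeProxyVersion}{"\n"}{end}'
```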
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/remove-lifecycle stale
Per liggitt's earlier comment, this can now become on-by-default in 1.33.
@HirazawaUi could you update this feature description with the latest state? Reading these comments, I think the state is:
Updated.
Hello @HirazawaUi 👋, v1.33 Enhancements team here. Just checking in as we approach enhancements freeze on 02:00 UTC Friday 14th February 2025 / 19:00 PDT Thursday 13th February 2025. This enhancement is targeting stage
Here's where this enhancement currently stands:
For this KEP, we would just need to update the following:
The status of this enhancement is marked as
If you anticipate missing enhancements freeze, you can file an exception request in advance. Thank you!
Hi @HirazawaUi, looks like the PR has merged, so you are all set for enhancements freeze.
Hello @danwinship and @HirazawaUi 👋, v1.33 Docs Shadow here.
Hi @HirazawaUi 👋 -- this is Ryota (@rytswd) from the 1.33 Communications Team! For the 1.33 release, we are currently in the process of collecting and curating a list of potential feature blogs, and we'd love for you to consider writing one for your enhancement! As you may be aware, feature blogs are a great way to communicate to users about features which fall into (but are not limited to) the following categories:
To opt in to write a feature blog, could you please let us know and open a "Feature Blog placeholder PR" (which can be only a skeleton at first) against the website repository by Wednesday, 5th March 2025? For more information about writing a blog, please see the blog contribution guidelines 📚
Tip: some timelines to keep in mind:
Note: in your placeholder PR, use
@rytswd This is a minor change, so it doesn't require a blog post :)
Noted, thanks for the update, @HirazawaUi! 🎸
Hey again @HirazawaUi 👋, v1.33 Enhancements team here. Just checking in as we approach code freeze at 02:00 UTC Friday 21st March 2025 / 19:00 PDT Thursday 20th March 2025. Here's where this enhancement currently stands:
Per the issue description, these are all of the implementation (code-related) PRs for 1.33, some of which are not merged yet:
Please let me know (and keep the issue description updated) if there are any other PRs in k/k that we should track for this KEP, so that we can maintain accurate status. If you anticipate missing code freeze, you can file an exception request in advance.
The status of this enhancement is marked as
Hi @HirazawaUi 👋, v1.33 Enhancements team here. Just a quick friendly reminder as we approach the code freeze later this week, at 02:00 UTC Friday 21st March 2025 / 19:00 PDT Thursday 20th March 2025.
The current status of this enhancement is marked as
If you anticipate missing code freeze, you can file an exception request in advance. Thank you!
I’m working on getting the PR merged.
Hey @HirazawaUi 👋, 1.33 Enhancements team here. With all the implementation (code-related) PRs merged as per the issue description:
This enhancement is now marked as
Additionally, please let me know if there are any other PRs in k/k not listed in the description that we should track for this KEP, so that we can maintain accurate status.
Enhancement Description

- One-line enhancement description: Deprecate the `status.nodeInfo.kubeProxyVersion` field of v1.Node
- KEP (k/enhancements) update PR(s):
- Code (k/k) update PR(s):
- Docs (k/website) update PR(s):
- KEP (k/enhancements) update PR(s):
- Code (k/k) update PR(s):
- Docs (k/website) update(s):

Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.