**Describe the solution you'd like**

Create a CRD `KubernetesReleases`.
This would let us create a listing of the available Kubernetes versions that can be used by clusterctl, e.g.:
```
$ kubectl get KubernetesReleases
NAME                   TEMPLATE                    VMID     DATE
v1.30.5-ubuntu-2204    ubuntu-2204-kube-v1.30.5    VM-106   2024-01-24
v1.30.5-ubuntu-2404    ubuntu-2404-kube-v1.30.5    VM-107   2024-01-24
v1.30.5-rockylinux-9   rockylinux-9-kube-v1.30.5   VM-108   2024-01-24
```
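For illustration, one entry behind that listing might look something like the sketch below; the API group, version, and every field name are assumptions for this proposal, not an existing API.

```yaml
# Hypothetical KubernetesRelease object; group/version and all field
# names are illustrative assumptions, not an existing API.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubernetesRelease
metadata:
  name: v1.30.5-ubuntu-2204
spec:
  version: v1.30.5                     # Kubernetes version baked into the image
  template: ubuntu-2204-kube-v1.30.5   # image-builder template name
  vmID: 106                            # Proxmox vm id of the template (VM-106 above)
  buildDate: "2024-01-24"
```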
Then, in our cluster.yaml, we could specify a Kubernetes version by name, such as `v1.30.5-ubuntu-2204`, instead of by vm id. This would be valuable when a template has to be replaced, e.g. when image-builder rebuilds an image because of a security issue: rather than updating every cluster.yaml with the new vm id, we would update the single KubernetesRelease resource and all clusters would pick up the change.
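As a minimal sketch of what that reference could look like in a machine spec, assuming a made-up `kubernetesRelease` field (today a concrete template is pinned instead):

```yaml
# Hypothetical: instead of hard-coding the Proxmox template's vm id,
# the machine spec names a KubernetesRelease and the controller
# resolves it to whatever vm id the release currently points at.
spec:
  kubernetesRelease: v1.30.5-ubuntu-2204
```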
**Anything else you would like to add:**
The Proxmox provider could support both: specifying a vm id directly, or referencing a KubernetesRelease.
New folks could just use a vm id, but I'd definitely create the KubernetesRelease resources, reference them by name in the cluster yaml, and create the release as the last step of an image-builder pipeline (sketched below).
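A rough sketch of that final pipeline step, assuming the CRD from above existed (`NEW_VMID` is an invented variable holding the id the build assigned to the new template):

```sh
# Hypothetical last step of an image-builder pipeline: publish (or
# update in place) the KubernetesRelease for the template just built.
kubectl apply -f - <<EOF
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubernetesRelease
metadata:
  name: v1.30.5-ubuntu-2204
spec:
  version: v1.30.5
  template: ubuntu-2204-kube-v1.30.5
  vmID: ${NEW_VMID}
EOF
```

Because `kubectl apply` updates an existing release in place, re-running this step after a rebuild is exactly the "update one resource, all clusters follow" flow described above.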
Is it possible that image-builder could add a label of some kind that would make it easy for the Proxmox controller to recognize an image-builder image and register it as a KubernetesRelease automatically?
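Purely as speculation about what such a convention could look like: image-builder could stamp a marker into the template's notes/description, and the controller could scan templates for it and create the matching KubernetesRelease.

```yaml
# Hypothetical marker embedded in the Proxmox template's description;
# none of these keys exist today.
capi.proxmox.io/managed: "true"
capi.proxmox.io/kubernetes-version: v1.30.5
capi.proxmox.io/os: ubuntu-2204
```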