v0.15.0
Component versions
- Kubernetes: v1.15.6
- Etcd: v3.3.17
- Calico: v3.9.1
Upgrade notes
Cloud Controller Manager
This release introduces a new external cloud-controller-manager that has been separated out of the controller-manager and performs the integration actions between the Kubernetes cluster and AWS cloud features.
WARNING: This change is breaking if you make use of PersistentVolumes inside of your cluster; you can read more about the limitations here.
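For background, an external cloud controller manager normally runs as its own workload in kube-system while the core Kubernetes components are started with --cloud-provider=external. The fragment below is only a minimal sketch of what such a deployment typically looks like; the image, labels, and service account name are placeholders, not the exact manifest that kube-aws renders:

```yaml
# Minimal sketch of an external cloud-controller-manager deployment.
# Illustrative only: the manifest kube-aws actually renders may differ.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: aws-cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: aws-cloud-controller-manager
  template:
    metadata:
      labels:
        k8s-app: aws-cloud-controller-manager
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""          # run on controller nodes
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: cloud-controller-manager  # placeholder name
      containers:
        - name: aws-cloud-controller-manager
          image: example.org/aws-cloud-controller-manager:v1.15  # placeholder image
          args:
            - --cloud-provider=aws
            - --use-service-account-credentials=true
```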
Etcd Upgrade
If upgrading from an existing kube-aws cluster you must deploy v0.14.2 or higher before upgrading to this release (otherwise you will see a CloudFormation error when you try to apply). This release contains a major etcd upgrade to the v3.3.x branch. The upgrade is performed by spinning up a new etcd cluster and copying across the contents from the existing servers. Should the cluster upgrade fail at any point, we will roll back and revert to using the original servers again.
WARNING: It is possible to lose cluster state changes if they are made after the copy has been performed but before all of the kube-apiservers have been replaced, or if the cluster roll fails and rolls back to the original servers. We therefore strongly suggest that you perform the upgrade in a maintenance window with customer deployments disabled if possible.
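If you pin the etcd version in your cluster.yaml, the fragment below is a minimal sketch of what that looks like after the upgrade; the key names are assumptions based on common kube-aws configuration and may differ from your rendered file:

```yaml
# Illustrative cluster.yaml fragment; verify the exact keys against the kube-aws documentation.
etcd:
  count: 3           # number of etcd nodes (unchanged by the upgrade)
  version: v3.3.17   # the etcd release shipped with kube-aws v0.15.0
```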
Plugins
The following features have been updated, migrated into plugins, and removed from the core kube-aws configuration and code:
- Kubernetes Dashboard
- Kiam
- Kube2IAM
If you use these features, please note that you now need to configure them via the plugins section of your cluster.yaml.
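As a rough sketch only (plugin names and keys here are illustrative assumptions; consult the plugins documentation for the exact schema), enabling these features via cluster.yaml now looks something like this:

```yaml
# Illustrative only: plugin keys and names may differ in your kube-aws version.
kubeAwsPlugins:
  kubernetesDashboard:
    enabled: true
  kiam:
    enabled: true
  kube2iam:
    enabled: false   # kiam and kube2iam serve the same purpose; enable at most one
```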
Roll NodePools by AvailabilityZone
In this release we make the nodePoolRollingStrategy AvailabilityZone the default choice. You will need to update your cluster.yaml files if you want to continue to use the Parallel or Sequential strategies. Rolling by AvailabilityZone is safer than Parallel because all nodePools within the same AZ will roll in parallel, but nodePools across AZs will be rolled one AZ at a time.
Note: The default MaxBatchSize remains at "1", but we invite you to try setting your MaxBatchSize to the same as your maxSize from time to time to test what happens in the event of losing an AZ!
Features
- #1726: Move Kiam to a plugin (Thanks to @davidmccormick)
- #1727: Allow CoreDNS resources to be configured (Thanks to @dominicgunn)
- #1730: Move kube2iam to a plugin (Thanks to @davidmccormick)
- #1746: Allow resource configuration for APIServer (Thanks to @dominicgunn)
- #1756: master: Remove the control-plane stacks dependence on cross stack references (Thanks to @davidmccormick)
- #1773: Allow major Etcd upgrades with safe roll-back (Thanks to @davidmccormick)
- #1754: Move kubernetes dashboard to a plugin (Thanks to @davidmccormick)
- #1782: Use nodePoolRollingStrategy of 'AvailabilityZone' by default (Thanks to @davidmccormick)
Improvements
- #1720: CoreDNS prometheus metric annotations exposed at deployment level (Thanks to @HarryStericker)
- #1735: kube2iam resources improvement (Thanks to @jorge07)
- #1742: Update prompt and banner earlier in boot process (Thanks to @davidmccormick)
- #1748: Networking Version Updates (Thanks to @dominicgunn)
- #1739: Take region from the cluster config (Thanks to @davidmccormick)
- #1774: Add missing calico networkset crd and rbac permission (Thanks to @davidmccormick)
- #1731: Referencing the drainTimeout value in the NodeDrainer daemonset (Thanks to @HarryStericker)
- #1769: CoreDNS prometheus metric annotations exposed at pod level (Thanks to @kfr2)
- #1757: Allow server certs to be also used for client authentication (Thanks to @davidmccormick)
- #1791: Add flag to cmds to use AWS profile (Thanks to @javipolo)
- #1799: Add autoscaling:DescribeAutoScalingGroups policy for node drainer (Thanks to @d-kuro)