Describe the bug
Change in behavior of upgrade error: Unsatisfiable PDB
To Reproduce
Steps to reproduce the behavior:
Create a PDB with minAvailable == replicas. Ideally, place the workload on a workerpool node.
Try upgrading the cluster to a new Kubernetes patch version, or just upgrade the builderpool to the new patch version:
az aks nodepool upgrade --resource-group rgname --cluster-name cluster-name --name builderpool --no-wait --kubernetes-version 1.30.9
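For reference, a minimal PDB matching the condition described in step 1 (all names and labels are placeholders; the selector must match a workload whose replica count equals minAvailable):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: postgres-db-primary   # placeholder name
  namespace: ns               # placeholder namespace
spec:
  minAvailable: 1             # equal to the number of matching replicas
  selector:
    matchLabels:
      role: primary           # placeholder label; match your single-replica workload
```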
Expected behavior
The nodepool upgrade is performed without any issues, since the workload associated with the PDB is deployed on a different nodepool (workerpool).
When the workerpool upgrade begins, a new set of temporary nodes is created and the old nodes are drained; if the PDB blocks the drain, the drain is retried until a timeout and the upgrade process fails.
Environment (please complete the following information):
CLI Version 2.68.0
Kubernetes version 1.30.7
Additional context
Cluster patch upgrade fails with UnsatisfiablePDB
(UnsatisfiablePDB) Upgrade is blocked due to invalid Pod Disruption Budgets (PDBs). Please review the PDB spec to allow disruptions during upgrades. To bypass this error, set forceUpgrade in upgradeSettings.overrideSettings. Bypassing this error without updating the PDB may result in drain failures during upgrade process. Invalid PDBs details: 2 errors occurred:
* PDB ns/postgres-db-primary has minAvailable(1) >= expectedPods(1) can't proceed with put operation
* PDB ns/postgres-db-primary has minAvailable(1) >= expectedPods(1) can't proceed with put operation
We're using the CNPG operator, which places a PDB on the primary to avoid disruption. During an upgrade, the labels on the Postgres pods change to reflect the new primary, so the upgrade can complete. Unfortunately, the upgrade now does not even begin because of UnsatisfiablePDB. Per the AKS documentation, UnsatisfiablePDB occurs when maxUnavailable = 0. In our case minAvailable is 1, allowed disruptions are 1, and maxUnavailable is N/A.
We've had the same PDB for a year and never had issues with upgrades until last week. Any help is appreciated. It looks like the UnsatisfiablePDB check has changed without the documentation being updated.
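The error details above suggest the pre-upgrade validation flags any PDB where minAvailable is greater than or equal to the expected pod count, since evicting even one pod would violate the budget. A minimal sketch of that inferred check (an approximation based on the error text, not the actual AKS source):

```python
def pdb_blocks_full_drain(min_available: int, expected_pods: int) -> bool:
    """Approximation of the condition the AKS error message implies:
    a PDB is flagged as unsatisfiable when minAvailable >= expectedPods,
    because draining a node would have to evict a pod the budget protects."""
    return min_available >= expected_pods

# The PDB from this report: minAvailable=1 with a single matching primary pod.
print(pdb_blocks_full_drain(1, 1))  # True: flagged, matches the reported error
print(pdb_blocks_full_drain(1, 2))  # False: one pod can still be evicted
```

Under this reading, the check fires even when allowed disruptions is currently 1, which would explain why the upgrade is blocked before any drain is attempted.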