The test will fail because there are 5 replicas, and the podAntiAffinity enforces that there is at most 1 pod on each node. We need to add a precondition to check whether such a situation has happened.
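A minimal sketch of what such a precondition might look like, assuming the test harness is written in Python and uses the official `kubernetes` client (the helper name and structure here are hypothetical, not the project's actual code):

```python
from kubernetes import client, config

def anti_affinity_satisfiable(desired_replicas: int) -> bool:
    """Return False when a one-pod-per-node podAntiAffinity rule cannot be
    satisfied, i.e. the desired replica count exceeds the number of
    schedulable nodes. Taints (e.g. on the control-plane node) are ignored
    here for simplicity."""
    config.load_kube_config()
    v1 = client.CoreV1Api()
    nodes = v1.list_node().items
    schedulable = [n for n in nodes if not (n.spec.unschedulable or False)]
    return desired_replicas <= len(schedulable)

# Example precondition: flag the 5-replica case when the kind cluster
# only has 3 worker nodes, so the podAntiAffinity rule cannot be satisfied.
if not anti_affinity_satisfiable(desired_replicas=5):
    print("Precondition not met: more replicas than schedulable nodes.")
```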
According to the kind documentation, we cannot add a node to a running cluster. Also, the RabbitMQ operator states that it refuses to scale down. Therefore, I think it is impossible to test antiAffinity with a replicas increment when the initial replica count is 3. So before we test scaling up, we should remove the podAntiAffinity section from the config.
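For example, stripping that section from the CR before the scale-up test could look roughly like this (a sketch, assuming the config is a plain YAML manifest on disk; the file name is hypothetical):

```python
import yaml

# Load the RabbitmqCluster CR, drop the podAntiAffinity rule, and write it
# back, so the scale-up test is not blocked by the one-pod-per-node constraint.
with open("rabbitmq-cluster.yaml") as f:  # hypothetical file name
    cr = yaml.safe_load(f)

cr.get("spec", {}).get("affinity", {}).pop("podAntiAffinity", None)

with open("rabbitmq-cluster.yaml", "w") as f:
    yaml.safe_dump(cr, f, sort_keys=False)
```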
Why were these not surfaced earlier? Is it because the cluster happened to have the right resource configuration?
This was considered a misoperation before. The antiAffinity seems to be valid, but it leads to an unhealthy state because the topology constraint cannot be satisfied at the moment.
@Spedoske To fix this, we can consider configuring the cluster to have more nodes if we are going to run this test.
But there is tension between the number of nodes in each cluster and how many clusters we can run in parallel, because each node is one instance of Kubernetes and thus takes up a lot of resources. We can think about a smarter approach that only provisions a larger cluster when a test actually needs it.
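One way to do that, sketched below under the assumption that the harness generates kind configs programmatically (the helper and the defaults are hypothetical), is to request extra worker nodes only for tests that exercise podAntiAffinity:

```python
import yaml

def kind_config(num_workers: int) -> str:
    """Render a kind cluster config with the requested number of worker nodes."""
    cfg = {
        "kind": "Cluster",
        "apiVersion": "kind.x-k8s.io/v1alpha4",
        "nodes": [{"role": "control-plane"}] + [{"role": "worker"}] * num_workers,
    }
    return yaml.safe_dump(cfg, sort_keys=False)

# Default to a small cluster; only pay for 5 workers when the test needs
# one node per replica for the podAntiAffinity case.
needs_anti_affinity = True  # would come from the test definition
print(kind_config(num_workers=5 if needs_anti_affinity else 3))
```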