[BUG] Operator cannot reliably bootstrap a cluster #811
Comments
Going to be honest here: not having enough resources to host the cluster is probably where you are running into issues. OpenSearch gets really unstable when there is not enough memory. I've personally experienced this as well when running Docker containers with OpenSearch. These logs are concerning, but it's hard to say they are unrelated to OOM-type issues. Node 0 Log: Node 1 Log:
Now I can confirm that this issue happens on an actual production Kubernetes cluster with plenty of resources too. The operator erroneously decides to do a rolling restart and fails to deliver, leaving the cluster in a yellow state. It seems like a concurrency issue, as it doesn't always happen.
We are facing this issue too. We have given ample resources to all node groups, but the controller attempts a rolling restart and then gets stuck at a yellow cluster state. We are able to use the cluster, but any updates to the manifests are not enforced by the operator due to the yellow cluster state.
[Triage]
Okay, this feels very much like a stability issue I was having as well. @prudhvigodithi I have a feeling this is the same issue I had at re:Invent, where roughly 2/10 clusters wouldn't bootstrap correctly. Might be worth checking with Kyle Davis, who has the code from that, and testing repeatedly. I can test on a local machine to see if I experience a similar issue.
@prudhvigodithi Cluster deployment is not an issue for us. We are able to bootstrap a cluster. The problem arises when we try to update something in the cluster manifest and apply it. The operator tries to do a rolling restart of the pods in order to enforce the changes, but is unable to trigger a restart for some reason. The operator then proceeds to mark the cluster as being in a yellow state. Any further changes to the manifests are ignored by the operator since the cluster is in a yellow state. Let me know how I can help you understand this issue in more detail.
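For reference, this is roughly how we confirm what the operator is stuck on; the service name, namespace, and credentials below are placeholders for whatever the cluster spec defines:

```bash
# Placeholders: adjust service name, namespace, and credentials to your cluster spec.
kubectl -n default port-forward svc/my-first-cluster 9200:9200 &

# A "yellow" status with unassigned shards is what appears to block further
# reconciliation of manifest changes.
curl -sk -u admin:admin "https://localhost:9200/_cluster/health?pretty"

# List any shards that are not STARTED.
curl -sk -u admin:admin "https://localhost:9200/_cat/shards?v" | grep -v STARTED
```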
@prudhvigodithi Tried it, still reproducible, took me only 4 tries. Updated versions:
Adding @swoehrl-mw to this conversation to see if this happened while testing the operator. With my EKS setup I haven't seen this issue. Thanks
During local testing (running in k3d) I sometimes had the behaviour that the operator thought it needed to do a rolling restart, but it always managed to complete the cycle and produce a green cluster, and I also was not able to find out a reason for the restart. In earlier versions we had a problem where the operator did not always correctly reactivate shard allocation, but AFAIK that was fixed.
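For anyone who wants to rule that out manually, checking and clearing the allocation setting looks roughly like this (endpoint and credentials are placeholders):

```bash
# Placeholders: adjust the endpoint and credentials for your cluster.
# Check whether shard allocation was left restricted after a rolling restart.
curl -sk -u admin:admin "https://localhost:9200/_cluster/settings?flat_settings=true&pretty"

# If cluster.routing.allocation.enable is still "primaries" or "none",
# clearing it (setting it to null) restores the default of allocating all shards.
curl -sk -u admin:admin -X PUT "https://localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.routing.allocation.enable": null}, "transient": {"cluster.routing.allocation.enable": null}}'
```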
I have also experienced unstable cluster bootstrap. I have fully recreated a cluster multiple times, and periodically I saw the cluster get stuck bootstrapping the second node.
I believe this is caused by the fact that the bootstrap pod is not using a persistent disk: if it is restarted, it gets a new cluster UUID that does not match the UUID on node-0.
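One way to check that hypothesis is to compare the cluster_uuid each node reports on its root endpoint; pod names, namespace, and credentials below are illustrative:

```bash
# Pod names, namespace, and credentials are illustrative; substitute your own.
# After a successful bootstrap every node should report the same cluster_uuid.
for pod in my-first-cluster-masters-0 my-first-cluster-masters-1 my-first-cluster-masters-2; do
  echo -n "$pod: "
  kubectl -n default exec "$pod" -- curl -sk -u admin:admin https://localhost:9200/ | grep cluster_uuid
done
```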
What is the bug?
The operator sometimes fails to correctly bootstrap/initialize a new cluster; instead it settles in a yellow state, with shards stuck in unassigned and initializing states.
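The attached allocation_explain.json was captured with the cluster allocation explain API, roughly as follows (endpoint and credentials are placeholders):

```bash
# Placeholders: endpoint and credentials depend on the cluster spec.
# With no request body, the API explains the first unassigned shard it finds.
curl -sk -u admin:admin "https://localhost:9200/_cluster/allocation/explain?pretty"
```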
How can one reproduce the bug?
Note that this doesn't always happen, so you might have to try multiple times; however it happens for me more often than not:
Apply the minimal example below. It's basically the first example from the docs, with the now mandatory TLS added, and Dashboards removed. Wait until the bootstrapping finishes.
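For reference, a manifest of the shape described (reconstructed from the operator's getting-started example; names, version, and sizes here are illustrative, not the exact file I applied) looks roughly like this:

```bash
# Illustrative sketch only: names, version, and sizes are placeholders based on the
# operator's getting-started example, with TLS added and Dashboards removed.
kubectl apply -f - <<EOF
apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
  name: my-first-cluster
  namespace: default
spec:
  general:
    serviceName: my-first-cluster
    version: 2.14.0
  security:
    tls:
      transport:
        generate: true
      http:
        generate: true
  nodePools:
    - component: masters
      replicas: 3
      diskSize: "5Gi"
      roles:
        - "cluster_manager"
        - "data"
      resources:
        requests:
          memory: "2Gi"
          cpu: "500m"
        limits:
          memory: "2Gi"
          cpu: "500m"
EOF
```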
When the setup process finishes, the bootstrap pod is removed. Around this time the operator also sometimes decides to log the event "Starting to rolling restart" and recreate the first node (pod). If this happens, the cluster sometimes ends up in a yellow state that the operator does not resolve. If at this point I manually delete the cluster_manager pod (usually the second node), it will be recreated and the issue seems to resolve itself.
What is the expected behavior?
A cluster with a green state after setup. Preferably without unnecessary restarts.
What is your host/environment?
minikube v1.33.0 on Opensuse-Tumbleweed 20240511
w/docker driver
I'm currently evaluating the operator locally. This might be part of the problem, as it forces me to run 3 nodes on a single machine. (However, it does have sufficient resources to accommodate the nodes. The issue was also reproduced on a MacBook, albeit also with minikube.)
Do you have any additional context?
See the attached files. Some logs are probably missing since a pod was recreated.
kubectl_describe.txt
operator.log
node-2.log
node-1.log
node-0.log
allocation_explain.json
cat_shards.txt
cat_nodes.txt