<install> statefulset, FailedScheduling #1657
Comments
It seems there is no apiVersion: v1
Hi @wiluen, thanks for reporting this. This can happen if the cluster doesn't have a storage provisioner (the component responsible for creating a PV from a PVC). How do you typically create persistent volumes? Can you share the output of:
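The exact command requested above was not captured in this thread; a typical way to check for a storage provisioner and a default StorageClass (an assumption about what was being asked, not the maintainer's literal command) is:

```shell
# List StorageClasses; the one marked "(default)" is used by PVCs that don't
# name a class. An empty list means there is no dynamic provisioner.
kubectl get storageclass

# Show the pending claims created by the Prometheus/Alertmanager StatefulSets
# (namespace assumed to be "default"; adjust if Robusta was installed elsewhere).
kubectl get pvc -n default

# Check whether any PersistentVolumes exist at all.
kubectl get pv
```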
Thanks for your reply, but another question: how can I change the storage size?
Hi @wiluen, you can change the storage size in the
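The file name is cut off above; assuming it refers to the Helm values passed to the Robusta chart (which bundles kube-prometheus-stack as a sub-chart), a storage-size override at upgrade time might look roughly like this. The key path shown is the standard kube-prometheus-stack one and is an assumption here; verify it against the chart's default values before relying on it.

```shell
# Hypothetical override of the Prometheus volume size via the bundled
# kube-prometheus-stack sub-chart (the Alertmanager analog lives under
# kube-prometheus-stack.alertmanager.alertmanagerSpec.storage).
# generated_values.yaml is the values file produced during Robusta setup.
helm upgrade robusta robusta/robusta -f generated_values.yaml \
  --set kube-prometheus-stack.prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage=20Gi
```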
Thanks very much, @arikalon1!
@arikalon1
Hi @wiluen, where do we have a reference to that? Regarding the configuration options, you can see most of them in our default values.yaml file. It also has a dependency on the
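To inspect the chart's default values locally (a generic Helm workflow, added here for reference; the repo URL is the one from Robusta's public install docs):

```shell
# Add the Robusta chart repository and dump the chart's default values,
# including the bundled sub-chart settings, into a local file for review.
helm repo add robusta https://robusta-charts.storage.googleapis.com
helm repo update
helm show values robusta/robusta > robusta-default-values.yaml
```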
Hi @arikalon1, [screenshot] actually I don't know what it was, but it appears in my k8s cluster. I thought this was part of Robusta, and there are also some
Hey @wiluen. Looks like your Robusta installation is up and healthy!
Hi @arikalon1, there are so many problems. Thank you for your patient answers. I finished the install; it is easy to install if there are no network problems, and I use
(2) I see the logs of the Pod
(3) Besides, I see the AI can do a summary for logs, but in the HolmesGPT UI it can't connect to the GPT
Hi @wiluen, do you have network policies in your cluster? Can you share the robusta-runner and Alertmanager logs?
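One way to gather the requested logs (resource names assumed from the pod list shown earlier in this thread; namespace assumed to be default):

```shell
# Robusta runner logs (deployment name assumed to be robusta-runner).
kubectl logs -n default deployment/robusta-runner --tail=200

# Alertmanager logs from the StatefulSet pod mentioned above.
kubectl logs -n default alertmanager-robusta-kube-prometheus-st-alertmanager-0 \
  -c alertmanager --tail=200
```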
Hi @arikalon1
Hey @wiluen, can you share …? Do you have network policies defined in the cluster?
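A quick way to answer the NetworkPolicy question (plain kubectl, nothing Robusta-specific):

```shell
# List NetworkPolicies in every namespace; an empty result means no
# policy objects are defined (the CNI may still apply its own defaults).
kubectl get networkpolicy --all-namespaces
```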
Hi @arikalon1, the results are:
I don't think there are any additional network policies, because my cluster is just a simple testing cluster.
The IP seems right, but Prometheus is not able to connect to Alertmanager. I suspect there are some network restrictions in the cluster. Can you share:
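To check in-cluster connectivity from scratch, one option is a throwaway pod probing the Alertmanager service; the service name, namespace, and port below are assumptions based on the pod names shown earlier, so verify them with `kubectl get svc` first:

```shell
# Temporary busybox pod that calls Alertmanager's health endpoint and exits.
kubectl run net-test --rm -it --restart=Never --image=busybox:1.36 -- \
  wget -qO- http://robusta-kube-prometheus-st-alertmanager.default.svc.cluster.local:9093/-/healthy
```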
Hi @arikalon1
You can see it in the pods list you shared.
It is not easy to install Robusta for me. When I install Robusta using Helm, these pods can not start:

alertmanager-robusta-kube-prometheus-st-alertmanager-0   0/2   Pending   0   18s
prometheus-robusta-kube-prometheus-st-prometheus-0       0/2   Pending   0   8s

The event is:

Warning  FailedScheduling  49s  default-scheduler  0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.

and

kubectl get pv

shows nothing. What's wrong with it?
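For reference, on a bare test cluster with no storage provisioner (the cause diagnosed in the comments above), one common fix is to install a dynamic provisioner such as Rancher's local-path-provisioner and mark it as the default StorageClass. A sketch, assuming the nodes can pull the manifest and images from the internet; check the project's README for the current release manifest:

```shell
# Install the local-path dynamic provisioner.
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

# Make it the default StorageClass so the Prometheus/Alertmanager PVCs can bind.
kubectl patch storageclass local-path \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

# Once the PVCs show Bound, the pending StatefulSet pods should be scheduled.
kubectl get pvc -A
kubectl get pv
```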