Pod not ready with errors
After a successful deployment of the OVA, the install does not complete and is stuck loading one of the pods.
When checking the log of the pod, there are three errors that keep repeating:
{"severity":"ERROR","timestamp":"2024-09-19T19:27:46.533080534Z","logger":"activator","caller":"websocket/connection.go:191","message":"Failed to send ping message to ws://autoscaler.knative-serving.svc.cluster.local:8080","commit":"41769de","knative.dev/controller":"activator","knative.dev/pod":"activator-58db57894b-vc2t2","error":"connection has not yet been established","stacktrace":"knative.dev/pkg/websocket.NewDurableConnection.func3\n\tknative.dev/[email protected]/websocket/connection.go:191"}
{"severity":"WARNING","timestamp":"2024-09-19T19:27:48.607928458Z","logger":"activator","caller":"handler/healthz_handler.go:36","message":"Healthcheck failed: connection has not yet been established","commit":"41769de","knative.dev/controller":"activator","knative.dev/pod":"activator-58db57894b-vc2t2"}
{"severity":"ERROR","timestamp":"2024-09-19T19:27:49.689291711Z","logger":"activator","caller":"websocket/connection.go:144","message":"Websocket connection could not be established","commit":"41769de","knative.dev/controller":"activator","knative.dev/pod":"activator-58db57894b-vc2t2","error":"dial tcp: lookup autoscaler.knative-serving.svc.cluster.local: i/o timeout","stacktrace":"knative.dev/pkg/websocket.NewDurableConnection.func1\n\tknative.dev/[email protected]/websocket/connection.go:144\nknative.dev/pkg/websocket.NewDurableConnection.func2.(*ManagedConnection).connect.func1\n\tknative.dev/[email protected]/websocket/connection.go:225\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection\n\tk8s.io/[email protected]/pkg/util/wait/wait.go:145\nk8s.io/apimachinery/pkg/util/wait.ExponentialBackoff\n\tk8s.io/[email protected]/pkg/util/wait/backoff.go:461\nknative.dev/pkg/websocket.(*ManagedConnection).connect\n\tknative.dev/[email protected]/websocket/connection.go:222\nknative.dev/pkg/websocket.NewDurableConnection.func2\n\tknative.dev/[email protected]/websocket/connection.go:162"}
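The third error is the most telling one: the activator cannot resolve the autoscaler's in-cluster DNS name (`dial tcp: lookup ... i/o timeout`), which usually points at cluster DNS (CoreDNS) or the CNI rather than Knative itself. A few diagnostic commands that may help narrow it down (a sketch assuming a standard kubeadm cluster; the `busybox` image and label selectors are assumptions, not from the original report):

```shell
# Check that CoreDNS is running -- if it is Pending or CrashLoopBackOff,
# in-cluster DNS lookups like this one will time out.
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide

# Check that the Antrea CNI pods are healthy; without a working CNI,
# pod-to-pod traffic (including DNS queries to CoreDNS) fails.
kubectl get pods -n kube-system -l app=antrea

# Try the same lookup from a throwaway pod to see whether DNS is broken
# cluster-wide or only inside the activator pod.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup autoscaler.knative-serving.svc.cluster.local
```

If the `nslookup` also times out, the Knative errors are a symptom, and the fix lies with CoreDNS or the CNI, not with the activator deployment.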
Here are the contents of the bootstrap-debugging log:
++ HOME=/root
++ kubeadm init --ignore-preflight-errors SystemVerification --skip-token-print --config /root/config/kubernetes/kubeconfig.yaml
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
++ mkdir -p /root/.kube
++ cp -i /etc/kubernetes/admin.conf /root/.kube/config
+++ id -u
+++ id -g
++ chown 0:0 /root/.kube/config
++ kubectl taint nodes --all node-role.kubernetes.io/control-plane-
++ echo -e '\e[92mDeloying Antrea ...'
++ kubectl apply -f /root/download/antrea.yml
++ echo -e '\e[92mStarting k8s ...'
++ systemctl enable kubelet.service
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
+++ systemctl is-active kubelet.service
++ [[ active == \i\n\a\c\t\i\v\e ]]
++ echo -e '\e[92mDeploying Local Storage Provisioner ...'
++ mkdir -p /data/local-path-provisioner
++ chmod 777 /data/local-path-provisioner
++ kubectl apply -f /root/download/local-path-storage.yaml
++ kubectl patch sc local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
++ echo -e '\e[92mCreating VMware namespaces ...'
++ kubectl create namespace vmware-system
++ kubectl create namespace vmware-functions
. /root/setup/setup-05-knative.sh
++ set -euo pipefail
++ echo -e '\e[92mDeploying RabbitMQ Cluster Operator ...'
++ kubectl apply -f /root/download/cluster-operator.yml
++ echo -e '\e[92mDeploying Cert-Manager ...'
++ kubectl apply -f /root/download/cert-manager.yaml
++ kubectl wait deployment --all --timeout=10m --for=condition=Available -n cert-manager
++ echo -e '\e[92mDeploying RabbitMQ Messaging Operator ...'
++ kubectl apply -f /root/download/messaging-topology-operator-with-certmanager.yaml
++ kubectl wait deployment --all --timeout=10m --for=condition=Available -n rabbitmq-system
++ echo -e '\e[92mDeploying Knative Serving ...'
++ kubectl apply -f /root/download/serving-crds.yaml
++ kubectl apply -f /root/download/serving-core.yaml
++ kubectl wait deployment --all --timeout=10m --for=condition=Available -n knative-serving
timed out waiting for the condition on deployments/activator
timed out waiting for the condition on deployments/autoscaler
timed out waiting for the condition on deployments/controller
timed out waiting for the condition on deployments/webhook
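The four timeouts above can be investigated per deployment. A hedged sketch of the usual next steps (standard `kubectl` commands run on the appliance; the `app=activator` label is the selector Knative Serving normally uses, assumed here):

```shell
# Show whether the Knative pods are scheduled at all, and their restart counts.
kubectl get pods -n knative-serving -o wide

# Inspect events for the activator pod -- image pull errors, failed probes,
# and scheduling problems all surface here.
kubectl describe pod -n knative-serving -l app=activator

# Readiness-probe failures caused by the DNS timeout show up as
# "Healthcheck failed" lines in the container log.
kubectl logs -n knative-serving deployment/activator --tail=20
```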
**Troubleshooting**
I have tried redeploying the OVA, which results in the same issue, and I have deleted the pod and let it be recreated, with the same errors again.
**Versions**
OVA 0.8.0
vCenter 7.0.3
No /root/ran_customization file has been created.
I am using a local read-only vCenter account and have not opted for the optional vCenter plugins at this point.
No proxy is needed.
Using local self-signed certs.
Any suggestions or assistance would be great. I am new to the project but would love to use it.
Thanks,