I am using GCP with two clusters. While trying to install multi-domain NSM in my setup, the script runs successfully for the hello app, but in between I get some errors.
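For reference, the two cluster kubeconfig contexts are exported before running the demo script. A minimal sketch of the assumed setup (the paths are taken from the logs below):

# Assumed environment; KCONF1/KCONF2 are referenced throughout the output below.
export KCONF1=/home/arun_prasathp/GCP/anthos-workshop/workdir/core-cluster.context
export KCONF2=/home/arun_prasathp/GCP/anthos-workshop/workdir/remote-cluster.context
cd ~/gopath/src/github.com/cisco-app-networking/nsm-nse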
arun_prasathp@cloudshell:~/gopath/src/github.com/cisco-app-networking/nsm-nse (techenggisgdeliver-1000286105)$ ./scripts/vl3/demo_vl3.sh --kconf_clus1=${KCONF1} --kconf_clus2=${KCONF2} --hello --nowait
/home/arun_prasathp/gopath/src/github.com/nsm-istio === ./scripts/vl3 === ./scripts/vl3/../../deployments/helm === ./scripts/vl3/../k8s === foo.com
setting cluster 1=/home/arun_prasathp/GCP/anthos-workshop/workdir/core-cluster.context
setting cluster 2=/home/arun_prasathp/GCP/anthos-workshop/workdir/remote-cluster.context
--------------------- NSM Installation + Inter-domain Setup ------------------------
**** Install NSM in cluster 1
==[ KCONF=/home/arun_prasathp/GCP/anthos-workshop/workdir/core-cluster.context scripts/vl3/nsm_install_interdomain.sh]==
KCONF=/home/arun_prasathp/GCP/anthos-workshop/workdir/core-cluster.context scripts/vl3/nsm_install_interdomain.sh
------------- Create nsm-system namespace ----------
Error from server (AlreadyExists): namespaces "nsm-system" already exists
------------ Installing Crossconnect Monitor -------------
deployment.apps/crossconnect-monitor unchanged
------------ Installing Jaeger -------------
service/jaeger unchanged
deployment.apps/jaeger unchanged
------------ Installing Skydive -------------
configmap/skydive-analyzer-config-file unchanged
configmap/skydive-agent-config-file unchanged
service/skydive-analyzer unchanged
daemonset.apps/skydive-agent configured
deployment.apps/skydive-analyzer configured
------------ Installing NSM -------------
namespace/spire unchanged
serviceaccount/spire-agent unchanged
serviceaccount/spire-server unchanged
serviceaccount/nse-acc unchanged
serviceaccount/nsc-acc unchanged
serviceaccount/nsmgr-acc unchanged
serviceaccount/forward-plane-acc unchanged
serviceaccount/proxy-nsmgr-acc unchanged
serviceaccount/crossconnect-monitor-acc unchanged
secret/nsm-admission-webhook-certs configured
secret/spire-secret unchanged
configmap/nsm-config unchanged
configmap/spire-agent unchanged
configmap/spire-entries unchanged
configmap/spire-bundle unchanged
configmap/spire-server unchanged
customresourcedefinition.apiextensions.k8s.io/networkserviceendpoints.networkservicemesh.io unchanged
customresourcedefinition.apiextensions.k8s.io/networkservicemanagers.networkservicemesh.io unchanged
customresourcedefinition.apiextensions.k8s.io/networkservices.networkservicemesh.io unchanged
clusterrole.rbac.authorization.k8s.io/nsm-role unchanged
clusterrole.rbac.authorization.k8s.io/aggregate-network-services-view unchanged
clusterrole.rbac.authorization.k8s.io/spire-agent-role unchanged
clusterrole.rbac.authorization.k8s.io/spire-server-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/nsm-role-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/spire-agent-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/spire-server-binding unchanged
service/nsm-admission-webhook-svc unchanged
service/spire-server unchanged
daemonset.apps/spire-agent configured
daemonset.apps/nsm-vpp-forwarder unchanged
daemonset.apps/nsmgr configured
deployment.apps/nsm-admission-webhook unchanged
deployment.apps/prefix-service unchanged
statefulset.apps/spire-server configured
mutatingwebhookconfiguration.admissionregistration.k8s.io/nsm-admission-webhook-cfg configured
------------ Installing NSM-addons -------------
service/nsmgr unchanged
------------ Installing proxy NSM -------------
service/pnsmgr-svc unchanged
daemonset.apps/proxy-nsmgr unchanged
----DONE---- KCONF=/home/arun_prasathp/GCP/anthos-workshop/workdir/core-cluster.context scripts/vl3/nsm_install_interdomain.sh
**** Install NSM in cluster 2
==[ KCONF=/home/arun_prasathp/GCP/anthos-workshop/workdir/remote-cluster.context scripts/vl3/nsm_install_interdomain.sh]==
KCONF=/home/arun_prasathp/GCP/anthos-workshop/workdir/remote-cluster.context scripts/vl3/nsm_install_interdomain.sh
------------- Create nsm-system namespace ----------
namespace/nsm-system created
------------ Installing Crossconnect Monitor -------------
deployment.apps/crossconnect-monitor created
------------ Installing Jaeger -------------
service/jaeger created
deployment.apps/jaeger created
------------ Installing Skydive -------------
configmap/skydive-analyzer-config-file created
configmap/skydive-agent-config-file created
service/skydive-analyzer created
daemonset.apps/skydive-agent created
deployment.apps/skydive-analyzer created
------------ Installing NSM -------------
namespace/spire created
serviceaccount/spire-agent created
serviceaccount/spire-server created
serviceaccount/nse-acc created
serviceaccount/nsc-acc created
serviceaccount/nsmgr-acc created
serviceaccount/forward-plane-acc created
serviceaccount/proxy-nsmgr-acc created
serviceaccount/crossconnect-monitor-acc created
secret/nsm-admission-webhook-certs created
secret/spire-secret created
configmap/nsm-config created
configmap/spire-agent created
configmap/spire-entries created
configmap/spire-bundle created
configmap/spire-server created
customresourcedefinition.apiextensions.k8s.io/networkserviceendpoints.networkservicemesh.io created
customresourcedefinition.apiextensions.k8s.io/networkservicemanagers.networkservicemesh.io created
customresourcedefinition.apiextensions.k8s.io/networkservices.networkservicemesh.io created
clusterrole.rbac.authorization.k8s.io/nsm-role created
clusterrole.rbac.authorization.k8s.io/aggregate-network-services-view created
clusterrole.rbac.authorization.k8s.io/spire-agent-role created
clusterrole.rbac.authorization.k8s.io/spire-server-role created
clusterrolebinding.rbac.authorization.k8s.io/nsm-role-binding created
clusterrolebinding.rbac.authorization.k8s.io/spire-agent-binding created
clusterrolebinding.rbac.authorization.k8s.io/spire-server-binding created
service/nsm-admission-webhook-svc created
service/spire-server created
daemonset.apps/spire-agent created
daemonset.apps/nsm-vpp-forwarder created
daemonset.apps/nsmgr created
deployment.apps/nsm-admission-webhook created
deployment.apps/prefix-service created
statefulset.apps/spire-server created
mutatingwebhookconfiguration.admissionregistration.k8s.io/nsm-admission-webhook-cfg created
------------ Installing NSM-addons -------------
service/nsmgr created
------------ Installing proxy NSM -------------
service/pnsmgr-svc created
daemonset.apps/proxy-nsmgr created
----DONE---- KCONF=/home/arun_prasathp/GCP/anthos-workshop/workdir/remote-cluster.context scripts/vl3/nsm_install_interdomain.sh
**** Wait for NSM pods to be ready in cluster 1
pod/nsm-admission-webhook-76445bc8d6-vth5m condition met
pod/nsmgr-4mrg9 condition met
pod/nsmgr-kmnw2 condition met
pod/proxy-nsmgr-dt6p9 condition met
pod/proxy-nsmgr-mpcnk condition met
pod/nsm-vpp-forwarder-4xsqf condition met
pod/nsm-vpp-forwarder-gw9zj condition met
**** Show NSM pods in cluster 1
==[kubectl get pods --kubeconfig /home/arun_prasathp/GCP/anthos-workshop/workdir/core-cluster.context -n nsm-system -o wide]==
kubectl get pods --kubeconfig /home/arun_prasathp/GCP/anthos-workshop/workdir/core-cluster.context -n nsm-system -o wide
NAME                                     READY   STATUS    RESTARTS   AGE     IP            NODE                                        NOMINATED NODE   READINESS GATES
crossconnect-monitor-694fc789fd-wrrgr    1/1     Running   2          9m56s   10.196.1.15   gke-core-cluster-core-node1-fcd0650a-0dv8   <none>           <none>
jaeger-7ccd686bcb-2mwk8                  1/1     Running   0          10m     10.196.1.13   gke-core-cluster-core-node1-fcd0650a-0dv8   <none>           <none>
nsm-admission-webhook-76445bc8d6-vth5m   1/1     Running   0          9m47s   10.196.1.16   gke-core-cluster-core-node1-fcd0650a-0dv8   <none>           <none>
nsm-vpp-forwarder-4xsqf                  1/1     Running   0          9m49s   10.0.0.194    gke-core-cluster-core-node1-fcd0650a-0dv8   <none>           <none>
nsm-vpp-forwarder-gw9zj                  1/1     Running   0          9m49s   10.0.0.193    gke-core-cluster-core-node1-fcd0650a-8m0v   <none>           <none>
nsmgr-4mrg9                              3/3     Running   0          9m48s   10.196.0.13   gke-core-cluster-core-node1-fcd0650a-8m0v   <none>           <none>
nsmgr-kmnw2                              3/3     Running   0          9m48s   10.196.1.17   gke-core-cluster-core-node1-fcd0650a-0dv8   <none>           <none>
proxy-nsmgr-dt6p9                        2/2     Running   0          9m40s   10.196.1.19   gke-core-cluster-core-node1-fcd0650a-0dv8   <none>           <none>
proxy-nsmgr-mpcnk                        2/2     Running   0          9m40s   10.196.0.14   gke-core-cluster-core-node1-fcd0650a-8m0v   <none>           <none>
skydive-agent-w8d8s                      1/1     Running   0          10m     10.0.0.194    gke-core-cluster-core-node1-fcd0650a-0dv8   <none>           <none>
skydive-agent-xq6dn                      1/1     Running   0          10m     10.0.0.193    gke-core-cluster-core-node1-fcd0650a-8m0v   <none>           <none>
skydive-analyzer-69498857fc-ggl29        1/1     Running   0          9m59s   10.196.1.14   gke-core-cluster-core-node1-fcd0650a-0dv8   <none>           <none>
----DONE---- kubectl get pods --kubeconfig /home/arun_prasathp/GCP/anthos-workshop/workdir/core-cluster.context -n nsm-system -o wide
**** Wait for NSM pods to be ready in cluster 2
pod/nsm-admission-webhook-76445bc8d6-gc452 condition met
pod/nsmgr-5n9j5 condition met
pod/nsmgr-825mb condition met
pod/proxy-nsmgr-tf6sv condition met
pod/proxy-nsmgr-xr2f9 condition met
pod/nsm-vpp-forwarder-4nwwc condition met
pod/nsm-vpp-forwarder-5c6d6 condition met
**** Show NSM pods in cluster 2
==[kubectl get pods --kubeconfig /home/arun_prasathp/GCP/anthos-workshop/workdir/remote-cluster.context -n nsm-system -o wide]==
kubectl get pods --kubeconfig /home/arun_prasathp/GCP/anthos-workshop/workdir/remote-cluster.context -n nsm-system -o wide
NAME                                     READY   STATUS    RESTARTS   AGE   IP           NODE                                            NOMINATED NODE   READINESS GATES
crossconnect-monitor-694fc789fd-shprl    1/1     Running   2          34s   10.48.1.13   gke-remote-cluster-remote-node1-7f1c20e3-k9v1   <none>           <none>
jaeger-7ccd686bcb-vh8cb                  1/1     Running   0          52s   10.48.1.12   gke-remote-cluster-remote-node1-7f1c20e3-k9v1   <none>           <none>
nsm-admission-webhook-76445bc8d6-gc452   1/1     Running   0          25s   10.48.0.16   gke-remote-cluster-remote-node1-7f1c20e3-bjjj   <none>           <none>
nsm-vpp-forwarder-4nwwc                  1/1     Running   0          27s   10.1.0.61    gke-remote-cluster-remote-node1-7f1c20e3-k9v1   <none>           <none>
nsm-vpp-forwarder-5c6d6                  1/1     Running   0          27s   10.1.0.62    gke-remote-cluster-remote-node1-7f1c20e3-bjjj   <none>           <none>
nsmgr-5n9j5                              3/3     Running   0          26s   10.48.1.14   gke-remote-cluster-remote-node1-7f1c20e3-k9v1   <none>           <none>
nsmgr-825mb                              3/3     Running   0          26s   10.48.0.17   gke-remote-cluster-remote-node1-7f1c20e3-bjjj   <none>           <none>
proxy-nsmgr-tf6sv                        2/2     Running   0          18s   10.48.0.18   gke-remote-cluster-remote-node1-7f1c20e3-bjjj   <none>           <none>
proxy-nsmgr-xr2f9                        2/2     Running   0          18s   10.48.1.16   gke-remote-cluster-remote-node1-7f1c20e3-k9v1   <none>           <none>
skydive-agent-9vmp8                      1/1     Running   0          48s   10.1.0.62    gke-remote-cluster-remote-node1-7f1c20e3-bjjj   <none>           <none>
skydive-agent-jgjp6                      1/1     Running   0          48s   10.1.0.61    gke-remote-cluster-remote-node1-7f1c20e3-k9v1   <none>           <none>
skydive-analyzer-69498857fc-4hhtf        1/1     Running   0          42s   10.48.0.15   gke-remote-cluster-remote-node1-7f1c20e3-bjjj   <none>           <none>
----DONE---- kubectl get pods --kubeconfig /home/arun_prasathp/GCP/anthos-workshop/workdir/remote-cluster.context -n nsm-system -o wide
--------------------- Virtual L3 Setup ------------------------
**** Install vL3 in cluster 1
==[ REMOTE_IP=104.196.32.230 KCONF=/home/arun_prasathp/GCP/anthos-workshop/workdir/core-cluster.context PULLPOLICY=Always NSEREPLICAS=1 scripts/vl3/vl3_interdomain.sh --ipamOctet=22 ]==
REMOTE_IP=104.196.32.230 KCONF=/home/arun_prasathp/GCP/anthos-workshop/workdir/core-cluster.context PULLPOLICY=Always NSEREPLICAS=1 scripts/vl3/vl3_interdomain.sh --ipamOctet=22
ipamOctet is deprecated
Namespace wcm-system already exists
configmap/nsm-vl3-vl3-service created
---------------Install NSE-------------
serviceaccount/vl3-service-service-account created
configmap/ucnf-vl3-vl3-service created
service/nse-pod-service-vl3-service-vpp created
service/nse-pod-service-vl3-service created
deployment.apps/vl3-nse-vl3-service created
pod/vl3-nse-vl3-service-7bdf4c47c6-qk6pb condition met
----DONE---- REMOTE_IP=104.196.32.230 KCONF=/home/arun_prasathp/GCP/anthos-workshop/workdir/core-cluster.context PULLPOLICY=Always NSEREPLICAS=1 scripts/vl3/vl3_interdomain.sh --ipamOctet=22
==[kubectl get pods --kubeconfig /home/arun_prasathp/GCP/anthos-workshop/workdir/core-cluster.context -o wide]==
kubectl get pods --kubeconfig /home/arun_prasathp/GCP/anthos-workshop/workdir/core-cluster.context -o wide
No resources found in default namespace.
----DONE---- kubectl get pods --kubeconfig /home/arun_prasathp/GCP/anthos-workshop/workdir/core-cluster.context -o wide
**** Install vL3 in cluster 2
==[ REMOTE_IP=34.74.48.192 KCONF=/home/arun_prasathp/GCP/anthos-workshop/workdir/remote-cluster.context PULLPOLICY=Always NSEREPLICAS=1 scripts/vl3/vl3_interdomain.sh --ipamOctet=33 ]==
REMOTE_IP=34.74.48.192 KCONF=/home/arun_prasathp/GCP/anthos-workshop/workdir/remote-cluster.context PULLPOLICY=Always NSEREPLICAS=1 scripts/vl3/vl3_interdomain.sh --ipamOctet=33
ipamOctet is deprecated
Namespace wcm-system already exists
configmap/nsm-vl3-vl3-service created
---------------Install NSE-------------
serviceaccount/vl3-service-service-account created
configmap/ucnf-vl3-vl3-service created
service/nse-pod-service-vl3-service-vpp created
service/nse-pod-service-vl3-service created
deployment.apps/vl3-nse-vl3-service created
pod/vl3-nse-vl3-service-85bc6b59f4-xv9h2 condition met
----DONE---- REMOTE_IP=34.74.48.192 KCONF=/home/arun_prasathp/GCP/anthos-workshop/workdir/remote-cluster.context PULLPOLICY=Always NSEREPLICAS=1 scripts/vl3/vl3_interdomain.sh --ipamOctet=33
**** Virtual L3 service definition (CRD) ***
helm template deployments/helm/vl3_hello --set replicaCount=1
# Source: nsm-addons/templates/vl3-hello-svc.tpl
apiVersion: v1
kind: Service
metadata:
  name: helloworld-vl3-service
  labels:
    app: helloworld-vl3-service
    nsm/role: client
spec:
  ports:
  - port: 5000
    name: http
  selector:
    app: helloworld-vl3-service
# Source: nsm-addons/templates/vl3-hello.tpl
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-vl3-service
  labels:
    version: v1
  annotations:
    ns.networkservicemesh.io: vl3-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld-vl3-service
      version: v1
  template:
    metadata:
      labels:
        app: helloworld-vl3-service
        version: v1
    spec:
      containers:
      - name: helloworld
        image: docker.io/istio/examples-helloworld-v1
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: IfNotPresent #Always
        ports:
        - containerPort: 5000
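As far as I understand, the ns.networkservicemesh.io: vl3-service annotation above is what the nsm-admission-webhook keys on to inject an NSM init container into the hello pods. To double-check the injection on cluster 1, something like the following should work (my own diagnostic, not part of the demo script):

# Inspect the webhook config, then list the init containers actually injected into the hello pod:
kubectl get mutatingwebhookconfiguration nsm-admission-webhook-cfg --kubeconfig $KCONF1 -o yaml
kubectl get pod helloworld-vl3-service-58844787bf-sp8bg --kubeconfig $KCONF1 -o jsonpath='{.spec.initContainers[*].name}'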
**** Cluster 1 vL3 NSEs
kubectl get pods --kubeconfig /home/arun_prasathp/GCP/anthos-workshop/workdir/core-cluster.context -l networkservicemesh.io/app=vl3-nse-ucnf -o wide
No resources found in default namespace.
**** Cluster 2 vL3 NSEs
==[kubectl get pods --kubeconfig /home/arun_prasathp/GCP/anthos-workshop/workdir/remote-cluster.context -l networkservicemesh.io/app=vl3-nse-ucnf -o wide]==
kubectl get pods --kubeconfig /home/arun_prasathp/GCP/anthos-workshop/workdir/remote-cluster.context -l networkservicemesh.io/app=vl3-nse-ucnf -o wide
No resources found in default namespace.
----DONE---- kubectl get pods --kubeconfig /home/arun_prasathp/GCP/anthos-workshop/workdir/remote-cluster.context -l networkservicemesh.io/app=vl3-nse-ucnf -o wide
**** Install helloworld in cluster 1 ****
helm template deployments/helm/vl3_hello --set replicaCount=1 | kubectl apply --kubeconfig /home/arun_prasathp/GCP/anthos-workshop/workdir/core-cluster.context -f -
service/helloworld-vl3-service created
deployment.apps/helloworld-vl3-service created
error: no matching resources found
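My guess is that this "error: no matching resources found" comes from a kubectl wait step in the demo script whose label selector (networkservicemesh.io/app=vl3-nse-ucnf, queried in the default namespace above) matches nothing, since vl3_interdomain.sh actually created the vL3 NSE pods in the wcm-system namespace. Diagnostics I can run, assuming the namespace and label from the log:

# Where are the vL3 NSE pods really running, and what labels do they carry?
kubectl get pods -n wcm-system --kubeconfig $KCONF1 -o wide
kubectl get pods -n wcm-system --kubeconfig $KCONF1 --show-labels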
**** Install helloworld in cluster 2 ****
helm template deployments/helm/vl3_hello --set replicaCount=1 | kubectl apply --kubeconfig /home/arun_prasathp/GCP/anthos-workshop/workdir/remote-cluster.context -f -
service/helloworld-vl3-service created
deployment.apps/helloworld-vl3-service created
error: no matching resources found
Even though the helloworld pod is up and running in one cluster, it is still in an error state in the other one.
arun_prasathp@cloudshell:/gopath/src/github.com/cisco-app-networking/nsm-nse (techenggisgdeliver-1000286105)$ kubectl get all --kubeconfig=$KCONF1
NAME                                          READY   STATUS    RESTARTS   AGE
pod/helloworld-vl3-service-58844787bf-sp8bg   1/1     Running   0          50m

NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/helloworld-vl3-service   ClusterIP   10.199.251.135   <none>        5000/TCP   50m
service/kubernetes               ClusterIP   10.199.240.1     <none>        443/TCP    5h5m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/helloworld-vl3-service   1/1     1            1           50m
deployment.apps/prefix-service           0/1     0            0           61m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/helloworld-vl3-service-58844787bf   1         1         1       50m
replicaset.apps/prefix-service-6d77589cb7           1         0         0       61m
arun_prasathp@cloudshell:/gopath/src/github.com/cisco-app-networking/nsm-nse (techenggisgdeliver-1000286105)$ kubectl get all --kubeconfig=$KCONF2
NAME                                          READY   STATUS                  RESTARTS   AGE
pod/helloworld-vl3-service-6b446fbcdb-226hz   0/3     Init:CrashLoopBackOff   9          50m

NAME                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/helloworld-vl3-service   ClusterIP   10.51.251.31   <none>        5000/TCP   50m
service/kubernetes               ClusterIP   10.51.240.1    <none>        443/TCP    5h14m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/helloworld-vl3-service   0/1     1            0           50m
deployment.apps/prefix-service           0/1     0            0           51m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/helloworld-vl3-service-6b446fbcdb   1         1         0       50m
replicaset.apps/prefix-service-6d77589cb7           1         0         0       51m
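For the Init:CrashLoopBackOff pod in cluster 2, these are the commands I can use to collect more detail. The init-container name is whatever the NSM admission webhook injected; nsm-init below is an assumption:

POD=helloworld-vl3-service-6b446fbcdb-226hz
# Show events and init-container states for the failing pod:
kubectl describe pod $POD --kubeconfig $KCONF2
# List the injected init containers, then dump the failing one's logs:
kubectl get pod $POD --kubeconfig $KCONF2 -o jsonpath='{.spec.initContainers[*].name}'
kubectl logs $POD -c nsm-init --kubeconfig $KCONF2 --previous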
Please help me resolve this issue, and let me know if any logs are needed.
Thanks in advance!