1. Create a new service account with the name pvviewer. Grant this service account access to list all PersistentVolumes in the cluster by creating an appropriate ClusterRole called pvviewer-role and a ClusterRoleBinding called pvviewer-role-binding.
Next, create a pod called pvviewer with the image redis and serviceAccount pvviewer in the default namespace.
kubectl create sa pvviewer
kubectl create clusterrole pvviewer-role --verb=list --resource=persistentvolumes
kubectl create clusterrolebinding pvviewer-role-binding --clusterrole=pvviewer-role --serviceaccount=default:pvviewer
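Verify the grant by impersonating the service account:
kubectl auth can-i list persistentvolumes --as=system:serviceaccount:default:pvviewer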
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
kubectl run pvviewer --image=redis --dry-run=client -o yaml > pod.yaml
Edit pod.yaml and add serviceAccountName: pvviewer under spec, then:
kubectl create -f pod.yaml
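Confirm the pod picked up the service account:
kubectl get pod pvviewer -o jsonpath='{.spec.serviceAccountName}'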
2. List the InternalIP of all nodes of the cluster and save the result to the file /root/node_ips.
Answer should be in the format: InternalIP of master InternalIP of node1 InternalIP of node2 InternalIP of node3 (in a single line)
//kubectl get nodes -o=custom-columns=InternalIP:.status.addresses[0].address > /root/node_ips
// kubectl get nodes -o=jsonpath='{range .items[*]}{"InternalIP of "}{.status.addresses[1].address}{" "}{.status.addresses[0].address}{" "}' > /root/node_ips
kubectl get nodes -o=jsonpath='{.items[*].status.addresses[0].address}' > /root/node_ips
Alternative solution:
kubectl get nodes -o=jsonpath='{.items[*].status.addresses[?(@.type == "InternalIP")].address}' > /root/node_ips
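If the literal "InternalIP of ..." prefix is required per node, a jsonpath range can produce it (a sketch; adjust the separators to the exact format expected):
kubectl get nodes -o jsonpath='{range .items[*]}{"InternalIP of "}{.status.addresses[?(@.type=="InternalIP")].address}{" "}{end}' > /root/node_ips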
3. Create a multi-container pod called multi-pod with two containers:
Container 1: name: alpha, image: nginx
Container 2: name: beta, image: busybox, command: sleep 4800
Environment variables: container 1: name: alpha; container 2: name: beta
kubectl run multi-pod --image=nginx --dry-run=client -o yaml > pod2.yaml
kubectl run multi-pod --image=busybox --env=name=beta --dry-run=client --command -o yaml -- sleep 4800
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: multi-pod
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: alpha
    env:
    - name: name
      value: alpha
  - image: busybox
    name: beta
    command: ["sleep", "4800"]
    env:
    - name: name
      value: beta
EOF
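Check that both containers are up and the environment variables are set:
kubectl get pod multi-pod
kubectl exec multi-pod -c beta -- printenv name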
4. Create a pod called non-root-pod with image redis:alpine, runAsUser: 1000 and fsGroup: 2000.
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: non-root-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - image: redis:alpine
    name: non-root-pod
EOF
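Verify the pod runs as UID 1000:
kubectl exec non-root-pod -- id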
5. We have deployed a new pod called np-test-1 and a service called np-test-service. Incoming connections to this service are not working. Troubleshoot and fix it.
Create a NetworkPolicy named ingress-to-nptest that allows incoming connections to the service over port 80.
Important: Don't delete any currently deployed objects.
https://kubernetes.io/docs/concepts/services-networking/network-policies
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-to-nptest
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: np-test-1
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80
EOF
- Notice that this network policy applies to all pods with the label run=np-test-1 and, because no "from" selector is set, accepts traffic from everywhere on TCP port 80.
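To test the policy, hit the service from a throwaway pod (test-np is just a scratch name):
kubectl run test-np --rm -it --image=busybox -- wget -qO- np-test-service:80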
6. Taint the worker node node01 to be Unschedulable. Once done, create a pod called dev-redis, image redis:alpine, to ensure workloads are not scheduled to this worker node. Finally, create a new pod called prod-redis, image redis:alpine, with a toleration so it can be scheduled on node01.
key: env_type, value: production, effect: NoSchedule
https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
kubectl taint nodes node01 env_type=production:NoSchedule
kubectl run dev-redis --image=redis:alpine
Check that the pod is not scheduled on node01:
kubectl get pod dev-redis -o wide
Alternatively, create dev-redis from a manifest. Do not set nodeName: node01 here; nodeName bypasses the scheduler entirely, so the NoSchedule taint would be ignored and the pod would land on the tainted node:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dev-redis
spec:
  containers:
  - name: redis
    image: redis:alpine
EOF
Pod with a toleration, schedulable on node01:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: prod-redis
spec:
  containers:
  - name: redis
    image: redis:alpine
  tolerations:
  - key: "env_type"
    operator: "Equal"
    value: "production"
    effect: "NoSchedule"
EOF
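Confirm the pod was scheduled on node01:
kubectl get pod prod-redis -o wide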
7. Create a pod called hr-pod in the hr namespace, belonging to the production environment and the frontend tier.
image: redis:alpine
kubectl run hr-pod -n hr --image=redis:alpine --labels=environment=production,tier=frontend
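If the hr namespace does not exist yet, create it first with kubectl create ns hr. Then verify the pod and its labels:
kubectl get pod hr-pod -n hr --show-labels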
8. A kubeconfig file called super.kubeconfig has been created in /root. There is something wrong with the configuration. Troubleshoot and fix it.
The API server port in the kubeconfig is wrong. Correct the port in the server: URL, then retest:
kubectl get pods --kubeconfig=/root/super.kubeconfig
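To spot the problem, compare the server: entry against the real API server endpoint (on a kubeadm cluster this is typically https://<host>:6443):
grep server /root/super.kubeconfig
kubectl cluster-info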
9. We have created a new deployment called nginx-deploy. Scale the deployment to 3 replicas. Have the replicas increased? Troubleshoot the issue and fix it.
kubectl scale deployment nginx-deploy --replicas=3
kubectl get pods -n kube-system
Check the kube-controller-manager pod: its command contains a typo, so the controller manager is down and the deployment is never scaled.
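On a kubeadm cluster the controller manager runs as a static pod, so the fix is to correct the typo in its manifest (kubeadm's default path below) and let the kubelet recreate it:
vi /etc/kubernetes/manifests/kube-controller-manager.yaml
kubectl get pods -n kube-system --watch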