
Kubernetes Deployment

aaboyle878 edited this page Jan 10, 2024 · 2 revisions

Applying the manifest file

To deploy the manifest file to the cluster, a similar approach was taken to that of Terraform: a dry run was performed first to ensure the changes being made were as expected. Running `kubectl diff -f application-manifest.yaml` compares the current Kubernetes configuration in the cluster against the newly proposed manifest file, much like `terraform plan`. Once the service and deployment had been reviewed and looked good, the manifest was applied with `kubectl apply -f application-manifest.yaml`, and from here the service and deployment were created.
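The manifest itself is not shown on this page, but as a sketch it would combine a Deployment and a Service in one file. The names, image, and replica count below are assumptions (port 5000 is taken from the port-forward command used later); the real `application-manifest.yaml` may differ:

```yaml
# Hypothetical sketch of application-manifest.yaml — names, image,
# and replica count are placeholders, not the actual values used.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app
  labels:
    app: flask-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
        - name: flask-app
          image: example/flask-app:latest  # placeholder image
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: flask-app-service
spec:
  type: ClusterIP
  selector:
    app: flask-app
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
```

Because both objects live in one file, a single `kubectl diff`/`kubectl apply` run covers the service and the deployment together.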

Checking the deployment

After applying the manifest file, a number of commands were used to check that the deployment had been successful and, ultimately, to validate that the app was working by connecting to the container via port forwarding from the local machine. Some of the commands used to check the deployment were:

  • `kubectl get deployments` - returns the deployment name, the number of pods ready/available out of those specified in the YAML, the number of pods running the latest manifest, the number of pods available to serve incoming requests, and finally how long the pods have been running
  • `kubectl get pods` - shows the pod name, how many of the pod's containers are in a ready state, the status of the pod (i.e. whether it is running or there have been any issues), the number of restarts, and the pod's age
  • `kubectl get pods --show-labels` - shows the same as the above with the addition of a labels column
  • `kubectl get services` - returns the service name, type, internal ClusterIP, external/public-facing IP, port and protocol mapping, and age
  • `kubectl port-forward <pod-name> 5000:5000` - allows us to connect to the named pod via localhost (127.0.0.1) on port 5000, i.e. 127.0.0.1:5000
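Taken together, the checks above can be run as one short sequence against a live cluster. The deployment name is an assumption, and `<pod-name>` must be substituted with a real name from the `kubectl get pods` output:

```shell
# Wait for the rollout to complete (deployment name is hypothetical)
kubectl rollout status deployment/flask-app

# Inspect the objects created by the manifest
kubectl get deployments
kubectl get pods --show-labels
kubectl get services

# Forward local port 5000 to the pod, then hit the app locally
# (substitute the real pod name from `kubectl get pods`)
kubectl port-forward <pod-name> 5000:5000 &
curl http://127.0.0.1:5000
```

A successful `curl` response here confirms the container is serving traffic end to end, which is the validation step described above.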

Distribution

To distribute the app internally, I would ideally create a NodePort service if the number of users was limited, as this would allow a direct connection to be established to the node on which the container is hosted and remove the need for port forwarding. Should the application need to be exposed to external users, I would then change the NodePort service to a LoadBalancer, as this supports node-to-node peering as well as external traffic, and would therefore allow the external IP of the load balancer to be exposed to users, giving them access to the app. In terms of application security for external users via the load balancer, I would consider implementing Azure Active Directory to control access, as this would require anyone accessing the app to be set up on the system with the correct permission groups, thereby blocking access to anyone else.
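As a sketch, promoting the internal service to a load balancer is largely a change to the service's `type` field. The service name, selector label, and ports below are assumptions, not values from the actual manifest:

```yaml
# Hypothetical service definition — changing type from NodePort to
# LoadBalancer exposes the app to external users.
apiVersion: v1
kind: Service
metadata:
  name: flask-app-service   # placeholder name
spec:
  type: LoadBalancer        # was NodePort for internal-only access
  selector:
    app: flask-app          # placeholder label
  ports:
    - protocol: TCP
      port: 80              # port exposed on the load balancer
      targetPort: 5000      # container port
```

On a cloud-managed cluster such as AKS, applying a service of this type provisions a cloud load balancer and populates the service's EXTERNAL-IP column in `kubectl get services`, which is the address users would browse to.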
