
Kubezoo #188

Closed
wants to merge 13 commits into from
65 changes: 48 additions & 17 deletions Keda101/keda-lab.md
@@ -120,31 +120,62 @@ serviceAccount:

The part that needs to be modified is the `annotations` section. So if you want to scale an EKS cluster based on SQS messages, then you first need an IAM role that has access to SQS, and you need to add this role arn as an annotation.

```
annotations:
  eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<role-name>
```

If you added the arn, then setting up authentication is a simple matter. While Keda provides resources specifically geared towards authentication, you won't need to use any of them. Among the Keda authentication types, there is one called `operator`. This type allows the Keda service account to directly assume the IAM role whose arn you provided. As long as that role has the necessary permissions, Keda can function. The triggers will look like the following:

```yaml
triggers:
  - type: aws-sqs-queue
    authenticationRef:
      name: keda-trigger-auth-aws-credentials-activity-distributor
    metadata:
      queueURL: <your_queue_url>
      queueLength: "1"
      awsRegion: "us-east-1"
      identityOwner: operator # This is where the identityOwner needs to be set
```

Next, you need to change the `ScaledObject` resource. The mysql-hpa.yaml has the trigger pointed at the MySQL database, but it does not have an option called `identityOwner`. This is because we are not using authentication there, and therefore do not need one. To add authentication, this key should be added with the value set to `operator`.

If you set the `identityOwner` to something else, such as `pod`, you could set up Keda to authenticate by assuming a role that has the necessary permissions instead of acquiring the IAM role itself. You could also skip this approach entirely and provide access keys instead. In that case, you need a few additional resources. For starters, you need to place your access keys in a secret, so start by defining a resource of kind `Secret`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: keda-secret
  namespace: keda
data:
  # Values under `data` must be base64-encoded.
  AWS_ACCESS_KEY_ID: <base64-encoded AWS access key>
  AWS_SECRET_ACCESS_KEY: <base64-encoded AWS secret key>
```
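
If you would rather not base64-encode the values by hand, a `kubectl create secret` command produces an equivalent secret (same name and namespace as above); the key values are placeholders:

```
kubectl create secret generic keda-secret \
  --namespace keda \
  --from-literal=AWS_ACCESS_KEY_ID=<AWS ACCESS KEY> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<AWS SECRET KEY>
```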

You should then reference this secret from a Keda-specific custom resource called `TriggerAuthentication`:

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-authentication
  namespace: keda
spec:
  secretTargetRef:
  - parameter: awsAccessKeyID
    name: keda-secret
    key: AWS_ACCESS_KEY_ID
  - parameter: awsSecretAccessKey
    name: keda-secret
    key: AWS_SECRET_ACCESS_KEY
```

This `TriggerAuthentication` resource should then be referenced within the actual `ScaledJob` resource under the `triggers` section:

```yaml
authenticationRef:
  name: keda-trigger-authentication
```
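
For context, here is a minimal sketch of how that reference sits inside a `ScaledJob`. The job name, consumer image, and queue URL are placeholders, and the trigger mirrors the SQS example from earlier, this time with `identityOwner: pod` since the credentials come from the `TriggerAuthentication` resource:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: sqs-consumer          # placeholder name
  namespace: keda
spec:
  jobTargetRef:
    template:
      spec:
        containers:
        - name: consumer
          image: <your-consumer-image>   # the job that reads the SQS message
        restartPolicy: Never
  triggers:
  - type: aws-sqs-queue
    authenticationRef:
      name: keda-trigger-authentication
    metadata:
      queueURL: <your_queue_url>
      queueLength: "1"
      awsRegion: "us-east-1"
      identityOwner: pod      # credentials come from the TriggerAuthentication above
```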

This will allow your `ScaledJob` resource to read the authentication keys that you added to your secret via the `TriggerAuthentication` resource. Of course, if you don't want to keep your access keys around even as a secret, you can use the `operator` authentication type described above, or set the `identityOwner` to `pod` and have the Keda service account assume a role that has access to the necessary resources, which is much more secure than handing out access keys. Keda also supports [several different authentication types](https://keda.sh/docs/2.11/concepts/authentication/) out of the box, so make sure to check the documentation for the option that fits your setup.
With the above configuration, a new Keda job will start every time a message is sent to the SQS queue. The job should have the configuration necessary to read the content of the message, and the message in SQS gets consumed by the job that starts. Once the job succeeds, it terminates. If there is a failure, the job exits and a new job is created, which then attempts to consume the message.

@@ -203,4 +234,4 @@ With the above configuration, a new Keda job will start every time a message is

## Conclusion

This wraps up the lesson on KEDA. What we tried out was a simple demonstration of a MySQL scaler followed by a demonstration of using various authentication methods to connect and consume messages from AWS SQS. This is a good representation of what you can expect from other data sources. If you were considering using this with a different Kubernetes engine running on a different cloud provider, the concept would still work. Make sure you read through the authentication page, which contains different methods of authentication for different cloud providers. If you want to try out other scalers, make sure you check out the [official samples page](https://github.com/kedacore/samples).
55 changes: 55 additions & 0 deletions Kubezoo/kubezoo-lab.md
@@ -0,0 +1,55 @@
# Kubezoo Lab

Now that we have covered what Kubezoo is, let's take a look at how we can set it up in a standard cluster. You could go ahead and use [Minikube](https://minikube.sigs.k8s.io/docs/start/), or you could create a cluster using [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation). You can also use any Kubernetes cluster you have at the ready. Let's start by cloning the [KubeZoo repo](https://github.com/kubewharf/kubezoo.git):

```
git clone https://github.com/kubewharf/kubezoo.git
```

Now, go to the root of the repo you just cloned, and run the `make` command:

```
make local-up
```

This will get Kubezoo up and running on port 6443 as long as the port is free. Check to see if the API resources are up and running:

```
kubectl api-resources --context zoo
```

Now, let's create a sample tenant. For this, we will be using the `config/setup/sample_tenant.yaml` file provided in the repo. If you take a look at the tenant yaml file, you will notice that it is a custom resource of type "tenant", and contains just a few lines specifying the type of resources this tenant requires. The name of the tenant is "111111". Since this is a regular Kubernetes resource, let's go ahead and deploy this tenant as we would any normal yaml:

```
kubectl apply -f config/setup/sample_tenant.yaml --context zoo
```

Check that the tenant has been set up:

```
kubectl get tenant 111111 --context zoo
```

Since this tenant is basically a "cluster" in itself, it has its own kubeconfig created for it. You can extract it using:

```
kubectl get tenant 111111 --context zoo -o jsonpath='{.metadata.annotations.kubezoo\.io\/tenant\.kubeconfig\.base64}' | base64 --decode > 111111.kubeconfig
```
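
To confirm the extracted kubeconfig works, you can point any kubectl command at it, for example:

```
kubectl api-resources --kubeconfig 111111.kubeconfig
```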

You should now be able to deploy all sorts of resources to the tenant by specifying the kubeconfig. For example, if you were to deploy a file called "application.yaml" into the tenant, you would use:

```
kubectl apply -f application.yaml --kubeconfig 111111.kubeconfig
```
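
If you don't already have something to deploy, a minimal `application.yaml` along these lines works for this test (the Deployment name and image are just illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - containerPort: 80
```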

You can check the pod as the tenant by specifying the kubeconfig as before:

```
kubectl get po --kubeconfig 111111.kubeconfig
```

The pod will be created in the namespace assigned to the tenant. If you have multiple tenants, you cannot see the pods of other tenants as long as you only hold the kubeconfig of the tenant you are working with, which allows for better isolation. Using your regular kubeconfig as a cluster admin, if you list all pods with `kubectl get po -A`, you will see the pods of all tenants, separated by namespace.

# Conclusion

This brings us to the end of the section on Kubezoo. Hopefully, by now, you understand what a multi-tenant system is, what the benefits of such a system are, and what challenges you could face when using one. You also know how Kubezoo can help alleviate these challenges, specifically when you have a small development team and a large number of small clients. We also walked through setting up Kubezoo in a kind cluster, deploying resources to a Kubezoo tenant, and interacting with multiple tenants as a cluster admin. This covers the basics of Kubezoo. If you want to learn more on the topic, the official Kubezoo [GitHub page](https://github.com/kubewharf/kubezoo) is the best place to start.
9 changes: 9 additions & 0 deletions Kubezoo/what-is-kubezoo.md
@@ -0,0 +1,9 @@
# Kubezoo

If you have a large number of small clients that all rely on various services you provide, it makes little sense to run a separate Kubernetes cluster for each of them. Every individual cluster incurs control-plane costs, and each one needs its own supporting resources running, which adds to overall resource usage. Multi-tenancy is the solution to this problem: we run a single cluster and split it into multiple namespaces, each assigned to a client, or "tenant".

However, this solution comes with its own host of problems. The biggest issue is resource consumption. Imagine you have 4 nodes, each with a memory capacity of 16GB. If you have 3 tenants running on 3 namespaces within a cluster, one of those 3 tenants may consume 75% of the memory with their workloads while the other 2 are left with only 25%. Whatever the distribution may be, there will be a difference in the amount of resources used, and if each tenant is paying the same amount, this will lead to a disparity. It is therefore necessary to individually assign resources to each tenant depending on the amount they have requested so that tenants don't bottleneck each other.
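
In plain namespace-based multi-tenancy, this is usually done with a `ResourceQuota` per tenant namespace; a minimal sketch, with illustrative names and numbers, looks like this:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a        # the namespace assigned to this tenant
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 16Gi
    limits.memory: 16Gi
```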

Now take a different situation. Instead of having 3 large clients, you have hundreds of small users. Each user needs to run workloads in their own private "cluster", and this needs to be quick and efficient. Without the proper tools, this situation is pretty much impossible to manage. For an average-sized team, handling these kinds of rapid changes becomes infeasible from a manpower perspective.

This is where Kubezoo comes in. The solution it provides is Kubernetes API as a Service (KAaaS). Kubezoo lets you easily share a single cluster among hundreds of tenants, sharing both the control plane and the data plane. This keeps resource efficiency as high as simply having a namespace per tenant, but unlike plain namespace isolation, it also offers better API compatibility and resource isolation. So while there are several different multi-tenancy options to choose from, Kubezoo is one of the best when it comes to handling a large number of small tenants.
11 changes: 11 additions & 0 deletions README.md
@@ -23,6 +23,13 @@ A Curated List of Kubernetes Labs and Tutorials
# Featured Articles

- [Kubernetes CrashLoopBackOff Error: What It Is and How to Fix It?](https://collabnix.com/kubernetes-crashloopbackoff-error-what-it-is-and-how-to-fix-it/)
- [Top 5 Kubernetes Backup and Storage Solutions: Velero and More](https://collabnix.com/top-5-kubernetes-backup-tools-you-should-be-aware-of/)
- [Top 5 Storage Provider Tools for Kubernetes](https://collabnix.com/top-5-storage-provider-tools-for-kubernetes/)
- [Top 5 Alert and Monitoring Tools for Kubernetes](https://collabnix.com/top-5-alert-and-monitoring-tools-for-kubernetes/)
- [Top 5 Machine Learning Tools For Kubernetes](https://collabnix.com/top-5-machine-learning-tools-for-kubernetes/)
- [Top 5 Cluster Management Tools for Kubernetes in 2023](https://collabnix.com/top-5-cluster-management-tools-for-kubernetes-in-2023/)
- [10 Tips for Right Sizing Your Kubernetes Cluster](https://collabnix.com/10-tips-for-right-sizing-your-kubernetes-cluster/)
- [Step-by-Step Guide to Deploying and Managing Redis on Kubernetes](https://collabnix.com/deploying-and-managing-redis-on-kubernetes/)
- [Update Your Kubernetes App Configuration Dynamically using ConfigMap](https://collabnix.com/update-your-kubernetes-app-configuration-dynamically-using-configmap/)
- [Streamline Your Deployment Workflow: Utilizing Docker Desktop for Local Development and OpenShift for Production Deployment](https://collabnix.com/streamline-your-deployment-workflow-utilizing-docker-desktop-for-local-development-and-openshift-for-production-deployment/)
- [The Impact of Kube-proxy Downtime on Kubernetes Clusters](https://collabnix.com/the-impact-of-kube-proxy-downtime-on-kubernetes-clusters/)
@@ -320,6 +327,10 @@ A Curated List of Kubernetes Labs and Tutorials
- [What is Disaster Recovery](./DisasterRecovery101/what-is-dr.md)
- [DR Lab](./DisasterRecovery101/dr-lab.md)

## Kubezoo
- [What is Kubezoo](./Kubezoo/what-is-kubezoo.md)
- [Kubezoo lab](./Kubezoo/kubezoo-lab.md)

## For Node Developers
- [Kubernetes for Node Developers](./nodejs.md)

111 changes: 111 additions & 0 deletions bluegreen/README.md
@@ -0,0 +1,111 @@
# What is Blue-Green Deployment?
Blue-green deployment is an application deployment method in which we run two versions side by side: one named BLUE and another named GREEN. While we update one version, the other keeps serving requests, and once the update is done we can switch traffic over (and switch back to BLUE if required). It is a great fit for real production scenarios: no downtime is required, and we can easily roll back to the older version whenever needed.

# How Does Blue-Green Deployment Work?
This can be easily achieved using labels and selectors. We will mostly use the `kubectl patch` command as below. Note: we can also do this manually by editing the service.
```
kubectl patch service SERVICENAME -p '{"spec":{"selector":{"KEY": "VALUE"}}}'
```
In this example we will create two pods with the httpd image and append to the default “It works!” page: “Hello from Blue-Pod” for the first pod and “Hello from Green-Pod” for the second pod. We will also create a service that maps to blue first, and once the update is done we will patch it to point to green.

# Creating a Pod with Labels
```
git clone https://github.com/collabnix/kubelabs
cd kubelabs/bluegreen
```
```
$ kubectl apply -f blue.yml
pod/bluepod created
```

```
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
bluepod 1/1 Running 0 25m app=blue

$ kubectl apply -f green.yml
pod/greenpod created
```

```
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
bluepod 1/1 Running 0 25m app=blue
greenpod 1/1 Running 0 28m app=green
```

```
# svc.yml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: testing
  name: myapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: blue
status:
  loadBalancer: {}
```
In the above service yaml file, we map the service to our blue pod via the selector.
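
Apply the service so that it exists in the cluster before we test it:

```
kubectl apply -f svc.yml
```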

Just for understanding purposes, we are changing the default landing page of our httpd application:
```
kubectl exec -it bluepod bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@bluepod:/usr/local/apache2# echo "Hello from Blue-Pod" >> htdocs/index.html
exit
```
```
kubectl exec -it greenpod bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@greenpod:/usr/local/apache2# echo "Hello from Green-Pod" >> htdocs/index.html
exit
```
We can verify that both pods are serving the updated output:
```
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
bluepod 1/1 Running 0 5m38s 192.168.1.7 node01 <none> <none>
greenpod 1/1 Running 0 4m48s 192.168.1.8 node01 <none> <none>

controlplane $ curl 192.168.1.7
<html><body><h1>It works!</h1></body></html>
Hello from Blue-Pod

controlplane $ curl 192.168.1.8
<html><body><h1>It works!</h1></body></html>
Hello from Green-Pod
controlplane $
```
Now let's see how it works with the service IP:
```
controlplane $ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   2d5h
myapp        ClusterIP   10.100.35.84   <none>        80/TCP    2s
```
```
curl 10.100.35.84
<html><body><h1>It works!</h1></body></html>
Hello from Blue-Pod
```
Let's switch to our green deployment by changing the service mapping with the command below. If we then curl the service IP, it should take us to the green pod.
```
kubectl patch service myapp -p '{"spec":{"selector":{"app": "green"}}}'

controlplane $ curl 10.100.35.84
<html><body><h1>It works!</h1></body></html>
Hello from Green-Pod
```
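
Rolling back is just another selector change. If anything goes wrong with green, patch the service back to the blue label and the same curl will return the blue pod's page again:

```
kubectl patch service myapp -p '{"spec":{"selector":{"app": "blue"}}}'
```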

This concludes our walkthrough of how blue-green deployment works.

# Contributors
[Ashutosh S.Bhakare](https://www.linkedin.com/in/abhakare/).


10 changes: 10 additions & 0 deletions bluegreen/blue.yml
@@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
  name: bluepod
  labels:
    app: blue
spec:
  containers:
  - name: webpage
    image: docker.io/httpd
10 changes: 10 additions & 0 deletions bluegreen/green.yml
@@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
  name: greenpod
  labels:
    app: green
spec:
  containers:
  - name: webpage
    image: docker.io/httpd
16 changes: 16 additions & 0 deletions bluegreen/svc.yml
@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: testing
  name: myapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: blue
status:
  loadBalancer: {}