
Multi Container Pods

Yasser Sinjab edited this page Apr 8, 2019 · 16 revisions

In this module I will go through three multi-container design patterns:

Ambassador design pattern

One example of this design pattern is a request splitter. Let's say your frontend application depends on a backend service B (currently at version B1). A new release of B (version B2) is not ready to be fully deployed to production, but it is ready for experimentation. Let's begin by creating the two backends, B1 and B2, and exposing them as services:

# B2
kubectl run backend-experiment  --image=hashicorp/http-echo -- -text="hello backend-experiment"
# B1
kubectl run backend --image=hashicorp/http-echo -- -text="hello backend"

kubectl expose deploy backend-experiment --port=80 --target-port=5678
kubectl expose deploy backend --port=80 --target-port=5678
kubectl create configmap nginx-config --from-file=nginx.conf

# nginx.conf 
events {
  worker_connections  4096;
}
http {
    upstream version_1a {
        server backend;
    }
    upstream version_1b {
        server backend-experiment;
    }
    split_clients $arg_token $appversion {
        50%     version_1a;
        *       version_1b;
    }

    server {
        listen 8000;
        location / {
            proxy_set_header Host $host;
            proxy_pass http://$appversion;
        }
    }
}
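The split_clients block works by hashing the key (here $arg_token) and mapping the result into percentage buckets, so the same token always lands on the same upstream. A rough Python sketch of that idea — using zlib.crc32 as a stand-in hash, not nginx's actual MurmurHash2, with names chosen for illustration:

```python
import zlib

def split_client(token, splits=((0.5, "version_1a"), (1.0, "version_1b"))):
    """Map a token to an upstream by hashing it into [0, 1) percentage buckets."""
    bucket = (zlib.crc32(token.encode()) & 0xFFFFFFFF) / 2**32
    for upper_bound, upstream in splits:
        if bucket < upper_bound:
            return upstream
    return splits[-1][1]

# The same token always maps to the same upstream, and over many
# distinct tokens the traffic splits roughly 50/50.
counts = {"version_1a": 0, "version_1b": 0}
for i in range(10_000):
    counts[split_client(str(i))] += 1
print(counts)
```

This is why the test loop below uses a random token per request: distinct tokens spread across both buckets, while a fixed token would always hit the same backend.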
kubectl run frontend --image=tecnativa/tcp-proxy  --env="LISTEN=':80'" --env="TALK='localhost:8000'" -o yaml --dry-run > frontend.yaml

Edit frontend.yaml and make the following two changes:

# Add this under spec.containers:
...
      - image: nginx
        name: request-splitter
        volumeMounts:
        - name: config-volume
          mountPath: /etc/nginx
        resources: {}
...

# And reference the ConfigMap as a volume under spec:
...
      volumes:
      - name: config-volume
        configMap:
          name: nginx-config
...

To test the solution, expose your frontend service on port 8888 and run:

while :; do curl http://localhost:8888/?token=${RANDOM};  done;

Sidecar design pattern

The role of the sidecar is to augment and improve the application container, often without the application container’s knowledge. I will add a Varnish cache container in front of nginx in the same pod.

kubectl run nginx --image=nginx --restart=Never --dry-run -o yaml > nginx.yaml

Edit the YAML by adding Varnish as a sidecar container:

...
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  - env:
    - name: BACKENDS
      value: 127.0.0.1
    - name: BACKENDS_PORT
      value: "80"
    - name: DNS_ENABLED
      value: "true"
    image: eeacms/varnish
    name: cache
    resources: {}
...

Then apply and expose the pod:

kubectl apply -f nginx.yaml
kubectl expose po nginx --port=6081 --target-port=6081 # default port of varnish
kubectl run busybox --image=busybox --restart=Never --rm -i -- wget -qO- http://nginx:6081
# The first request for nginx's default page is served by the application container; subsequent requests are served from the cache
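Conceptually, the cache sidecar sits between clients and the application container and short-circuits repeated requests. A minimal in-process sketch of that behaviour — illustrative Python, not Varnish's actual logic:

```python
class CachingSidecar:
    """Wraps an 'origin' callable, serving repeat requests from memory."""
    def __init__(self, origin):
        self.origin = origin      # the application container's handler
        self.cache = {}           # path -> response body
        self.hits = self.misses = 0

    def get(self, path):
        if path in self.cache:
            self.hits += 1
            return self.cache[path]
        self.misses += 1
        body = self.origin(path)  # only the first request reaches the app
        self.cache[path] = body
        return body

def nginx_origin(path):
    # Stand-in for the nginx container serving its default page.
    return f"<html>default nginx page for {path}</html>"

proxy = CachingSidecar(nginx_origin)
proxy.get("/")   # miss: forwarded to the application container
proxy.get("/")   # hit: served from the cache
print(proxy.misses, proxy.hits)
```

The application container never learns a cache exists — exactly the "without the application container's knowledge" property of the sidecar pattern.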

Adapter design pattern

In the adapter pattern, another container in the pod is used to modify the interface of the application container. For example, you may want to expose a '/metrics' endpoint that returns data in a format the Prometheus monitoring system understands. The following example adds this to a PostgreSQL pod.

kubectl run postgresql --image=postgres --env='POSTGRES_PASSWORD=password' --restart=Never -o yaml --dry-run > postgresql.yaml

Then add an exporter container that will expose the '/metrics' endpoint:

...
spec:
  containers:
  - env:
    - name: POSTGRES_PASSWORD
      value: password
    image: postgres
    name: postgresql
    resources: {}
  - env:
    - name: DATA_SOURCE_NAME
      value: "postgresql://postgres:password@localhost:5432/postgres?sslmode=disable"
    image: wrouesnel/postgres_exporter
    name: prometheus-postgresql-exporter
    resources: {}
...

Then expose and call the endpoint:

kubectl expose po postgresql --port=9187 --name=db
kubectl run busybox --image=busybox --restart=Never --rm -i -- wget -qO- http://db:9187/metrics
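The exporter's job is purely translation: it reads PostgreSQL's internal statistics and rewrites them in the Prometheus text exposition format (# HELP / # TYPE lines followed by metric samples). A hedged sketch of that transformation — the metric names and values here are made up for illustration:

```python
def to_prometheus(metrics):
    """Render {name: (help_text, value)} as Prometheus text exposition format."""
    lines = []
    for name, (help_text, value) in sorted(metrics.items()):
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} gauge")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

# Hypothetical stats as an exporter might read them from PostgreSQL.
stats = {
    "pg_up": ("Whether the PostgreSQL server is reachable.", 1),
    "pg_stat_database_numbackends": ("Number of connected backends.", 3),
}
print(to_prometheus(stats))
```

This is the essence of the adapter pattern: the application container keeps its native interface (the PostgreSQL wire protocol), and the adapter presents a second, standardized one on port 9187.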