Steps:
- Clone an existing configuration as a base.
- Customize it.
- Create two different overlays (staging and production) from the customized base.
- Run kustomize and kubectl to deploy staging and production.
First define a place to work:

```
DEMO_HOME=$(mktemp -d)
```

Alternatively, use

```
DEMO_HOME=~/ldap
```
To use overlays to create variants, we must first establish a common base.
To keep this document shorter, the base resources are off in a supplemental data directory rather than declared here as HERE documents. Download them:
```
BASE=$DEMO_HOME/base
mkdir -p $BASE

CONTENT="https://raw.githubusercontent.com\
/kubernetes-sigs/kustomize\
/master/examples/ldap"

curl -s -o "$BASE/#1" "$CONTENT/base\
/{deployment.yaml,kustomization.yaml,service.yaml,env.startup.txt}"
```
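The `#1` in the output path is curl's URL-globbing substitution: each alternative inside `{...}` is fetched in turn, and `#1` in the `-o` argument is replaced by the matched alternative, so every file lands under its own name. A minimal local sketch of the same mechanism, using throwaway `file://` URLs (hypothetical filenames, not part of the demo):

```sh
# Two throwaway source files, fetched via curl's {a,b} globbing;
# "#1" in -o is replaced by whichever alternative matched.
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo alpha > "$SRC/a.txt"
echo beta  > "$SRC/b.txt"
curl -s -o "$DST/#1" "file://$SRC/{a.txt,b.txt}"
ls "$DST"   # a.txt  b.txt
```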
Look at the directory:

```
tree $DEMO_HOME
```

Expect something like:

```
/tmp/tmp.IyYQQlHaJP
└── base
    ├── deployment.yaml
    ├── env.startup.txt
    ├── kustomization.yaml
    └── service.yaml
```
One could immediately apply these resources to a cluster:

```
kubectl apply -f $DEMO_HOME/base
```

to instantiate the ldap service. kubectl would only recognize the resource files, ignoring the kustomization file.
The base directory has a kustomization file:

```
more $BASE/kustomization.yaml
```
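A kustomization file for a base typically just lists the resource files it covers, plus any generators. A sketch of the shape to expect; the exact names and fields come from the downloaded file, not from this example:

```yaml
# Sketch only - see the downloaded $BASE/kustomization.yaml for the
# real contents of this demo's base.
resources:
- deployment.yaml
- service.yaml
configMapGenerator:
- name: ldap-configmap      # hypothetical generator name
  files:
  - env.startup.txt
```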
Optionally, run kustomize on the base to emit customized resources to stdout:

```
kustomize build $BASE
```
A first customization step could be to set a name prefix on all resources:

```
cd $BASE
kustomize edit set nameprefix "my-"
```

See the effect:

```
kustomize build $BASE | grep -C 3 "my-"
```
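`kustomize edit set nameprefix` does not touch the resource files themselves; it records the prefix in the kustomization file, which afterwards carries a line like:

```yaml
namePrefix: my-
```

The prefix is then applied to every resource name at build time.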
Create a staging and production overlay:

- Staging adds a configMap.
- Production has a higher replica count and a persistent disk.
- The variants will thus differ from each other and from the base.
```
OVERLAYS=$DEMO_HOME/overlays
mkdir -p $OVERLAYS/staging
mkdir -p $OVERLAYS/production
```
Download the staging customization and patch:

```
curl -s -o "$OVERLAYS/staging/#1" "$CONTENT/overlays/staging\
/{config.env,deployment.yaml,kustomization.yaml}"
```
The staging customization adds a configMap:

```
(...truncated)
configMapGenerator:
- name: env-config
  files:
  - config.env
```
as well as a patch setting the replica count to 2:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ldap
spec:
  replicas: 2
```
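For orientation, an overlay's kustomization file generally points at its base, sets its own prefix, and lists its patches and generators. A generic sketch with illustrative field values, not the downloaded file's contents:

```yaml
# Generic overlay shape (sketch); the authoritative file is
# $OVERLAYS/staging/kustomization.yaml.
namePrefix: staging-        # composes with the base's "my-" prefix
resources:
- ../../base                # the overlay builds on the common base
patchesStrategicMerge:
- deployment.yaml           # the replica-count patch
configMapGenerator:
- name: env-config
  files:
  - config.env
```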
Download the production customization and patch:

```
curl -s -o "$OVERLAYS/production/#1" "$CONTENT/overlays/production\
/{deployment.yaml,kustomization.yaml}"
```
The production customization sets the replica count to 6 and replaces the ephemeral volume with a persistent disk:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ldap
spec:
  replicas: 6
  template:
    spec:
      volumes:
      - name: ldap-data
        emptyDir: null
        gcePersistentDisk:
          pdName: ldap-persistent-storage
```
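The `emptyDir: null` entry matters: in a strategic merge patch, setting a field to null deletes it from the base. The patch therefore presumes the base declares the same volume as ephemeral storage, roughly:

```yaml
# Presumed base volume declaration that the patch above overrides
# (sketch; the authoritative version is in $BASE/deployment.yaml):
volumes:
- name: ldap-data
  emptyDir: {}
```

Without the null, the merged result would carry both `emptyDir` and `gcePersistentDisk` on one volume, which is invalid.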
DEMO_HOME now contains:

- a base directory - a slightly customized clone of the original configuration, and
- an overlays directory, containing the kustomizations and patches required to create distinct staging and production variants in a cluster.
Review the directory structure and differences:

```
tree $DEMO_HOME
```

Expecting something like:

```
/tmp/tmp.IyYQQlHaJP1
├── base
│   ├── deployment.yaml
│   ├── env.startup.txt
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    ├── production
    │   ├── deployment.yaml
    │   └── kustomization.yaml
    └── staging
        ├── config.env
        ├── deployment.yaml
        └── kustomization.yaml
```
Compare the output directly to see how staging and production differ:

```
diff \
  <(kustomize build $OVERLAYS/staging) \
  <(kustomize build $OVERLAYS/production) |\
  more
```

The difference output should look something like:

```
(...truncated)
< name: staging-my-ldap-configmap-kftftt474h
---
> name: production-my-ldap-configmap-k27f7hkg4f
85c75
< name: staging-my-ldap-service
---
> name: production-my-ldap-service
97c87
< name: staging-my-ldap
---
> name: production-my-ldap
99c89
< replicas: 2
---
> replicas: 6
(...truncated)
```
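The `<(...)` construct is bash process substitution: each command's output is exposed as a file-like path, so `diff` can compare two command outputs without temporary files. A self-contained sketch, with `printf` standing in for `kustomize build`:

```sh
# Process substitution (bash, not plain POSIX sh). diff exits
# non-zero when its inputs differ, so tolerate that under `set -e`.
diff <(printf 'replicas: 2\n') <(printf 'replicas: 6\n') || true
# 1c1
# < replicas: 2
# ---
# > replicas: 6
```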
The individual resource sets are:

```
kustomize build $OVERLAYS/staging
```

```
kustomize build $OVERLAYS/production
```
To deploy, pipe the above commands to kubectl apply:

```
kustomize build $OVERLAYS/staging |\
    kubectl apply -f -
```

```
kustomize build $OVERLAYS/production |\
    kubectl apply -f -
```