# k8s

status: proof-of-concept (working but no automated tests yet)

a bash-script to deploy kubernetes in containers via ssh

this exists mainly for learning-by-doing; soon there will be officially supported tooling that provides the same mechanism: kubernetes/kubernetes#13901

## usage

all you need is the script k8s.sh; everything else is just documentation and testing. you can change the cluster-settings by editing the script directly or by exporting environment-variables:

```sh
# <user>@<ip>/<flannel-iface>
export K8S_CONTROLLER="[email protected]/eth0"
export K8S_WORKERS="[email protected]/eth1 [email protected]/eth2"
./k8s.sh init-ssl        # creates "./ssl" directory with certs in it
./k8s.sh kube-up         # deploys kubernetes (using the certs in "./ssl")
./k8s.sh install-kubectl # install kubectl into /usr/local/bin
./k8s.sh setup-kubectl   # setup kubectl (user, cluster and context)
kubectl cluster-info
kubectl get nodes
kubectl get rc,pods,svc,ing,secrets --all-namespaces
```
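each K8S_CONTROLLER/K8S_WORKERS entry packs the ssh-target and the flannel-interface into one string. a minimal sketch of how such a `<user>@<ip>/<flannel-iface>` entry can be split in plain bash (the value below is a hypothetical example; k8s.sh may parse it differently):

```sh
# hypothetical example value, only to show the format
entry="core@10.0.0.10/eth0"

ssh_target="${entry%/*}"   # core@10.0.0.10 (strip the /<iface> suffix)
iface="${entry##*/}"       # eth0
user="${ssh_target%@*}"    # core
ip="${ssh_target#*@}"      # 10.0.0.10

echo "ssh ${user}@${ip}, flannel-iface ${iface}"
```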

or just try everything with vagrant (see the Vagrantfile, it will set up all the things for you):

```sh
vagrant up
vagrant ssh controller
kubectl cluster-info
kubectl get nodes
kubectl get rc,pods,svc,ing,secrets --all-namespaces
```

## features

## notes

### overview

- local-node (from where you set up k8s)
  - files (the inspection sketch after this list unpacks one of these tarballs)
    - ssl/kube-admin.tar
      - kube-ca.pem
      - kube-admin-key.pem
      - kube-admin-cert.pem
    - ssl/kube-controller-<controller-ip>.tar
      - kube-ca.pem
      - kube-controller-key.pem
      - kube-controller-cert.pem
    - ssl/kube-worker-<worker-ip>.tar (every worker gets its own key/cert)
      - kube-ca.pem
      - kube-worker-key.pem
      - kube-worker-cert.pem
- controller-node
  - files
    - /etc/kubernetes/kube-config.yaml
    - /etc/kubernetes/ssl/kube-ca.pem
    - /etc/kubernetes/ssl/kube-controller-cert.pem
    - /etc/kubernetes/ssl/kube-controller-key.pem
    - /etc/kubernetes/manifests-custom/controller.yml
      - kube-controller-manager
      - kube-apiserver
      - kube-scheduler
    - /etc/systemd/system/docker-bootstrap.service (a daemon sketch follows after this list)
  - processes
    - docker-bootstrap
      - etcd
      - flannel
    - docker
      - hyperkube:kubelet
        - controller-pod (/etc/kubernetes/manifests-custom/controller.yml)
          - hyperkube:controller-manager
          - hyperkube:apiserver (a curl check follows after this list)
            - listening on https://0.0.0.0:443
            - and http://127.0.0.1:8080
          - hyperkube:scheduler
      - hyperkube:proxy
- worker-node(s)
  - files
    - /etc/kubernetes/kube-config.yaml
    - /etc/kubernetes/ssl/kube-ca.pem
    - /etc/kubernetes/ssl/kube-worker-cert.pem
    - /etc/kubernetes/ssl/kube-worker-key.pem
    - /etc/systemd/system/docker-bootstrap.service
  - processes
    - docker-bootstrap
      - flannel
        - uses etcd running on the controller-node (secure ssl-connection)
    - docker
      - hyperkube:kubelet
      - hyperkube:proxy
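to see what init-ssl actually produced, the ssl-tarballs listed above can be unpacked and inspected with openssl. this is just a manual sanity-check, not something k8s.sh does for you:

```sh
# unpack the admin bundle (file names as listed in the overview above)
mkdir -p /tmp/kube-admin
tar -xf ssl/kube-admin.tar -C /tmp/kube-admin

# show subject, issuer and validity, then verify the cert against the bundled ca
openssl x509 -in /tmp/kube-admin/kube-admin-cert.pem -noout -subject -issuer -dates
openssl verify -CAfile /tmp/kube-admin/kube-ca.pem /tmp/kube-admin/kube-admin-cert.pem
```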
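the two apiserver endpoints from the controller-pod above can be checked with curl. the https call reuses the admin cert unpacked in the previous sketch; replace `<controller-ip>` with your controller's address, and run the plain-http call on the controller itself since port 8080 is only bound to localhost:

```sh
# secure endpoint, authenticated with the admin client-cert
curl --cacert /tmp/kube-admin/kube-ca.pem \
     --cert /tmp/kube-admin/kube-admin-cert.pem \
     --key /tmp/kube-admin/kube-admin-key.pem \
     https://<controller-ip>:443/version

# insecure localhost endpoint (only reachable on the controller-node)
curl http://127.0.0.1:8080/healthz
```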
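docker-bootstrap.service starts a second docker daemon so etcd and flannel can run independently of the main docker daemon (which then runs the kubernetes components on top of the flannel network). the flags below are an assumption about roughly what such a unit's ExecStart looks like, not copied from k8s.sh:

```sh
# illustrative only: a separate daemon with its own socket and graph directory,
# and without any bridge/iptables setup of its own
# ("docker daemon" is the pre-1.12 form; newer versions use "dockerd")
docker daemon \
  -H unix:///var/run/docker-bootstrap.sock \
  --bridge=none \
  --iptables=false \
  --graph=/var/lib/docker-bootstrap
```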

things i don't fully understand yet:

- where does cadvisor run? kubelet has a cli-option --cadvisor-port=0
- is there a reason for kube-proxy to run in host-docker? would it be better to run it in the kubelet?

features that would be cool to add (maybe):

- add tests for all the things (overlay-network, ingress-controllers, persistent-disks, ..) in some structured way
  - via vagrant (for all the distros)
  - on a real cluster (clean up after every test)
- make separate ssl-certs for etcd (which runs inside docker-bootstrap, for flannel)? currently etcd just uses the same certs as the kube-apiserver
- deploy heapster per default? kubedash? influxdb?
- deploy docker-registry in k8s per default (but i think i prefer it to run outside)
- provide options to run etcd outside k8s (on dedicated hardware). though i think for small clusters it is fine to have just one etcd on the controller-node
- implement all the things into kubectl (this would be very nice :D)
- support high-availability clusters (separate etcd-cluster, multiple apiservers that fight for master via raft)
- make all images really small, to speed up everything

this project is free software released under the MIT license